PROBABILISTIC STRUCTURAL MECHANICS HANDBOOK
THEORY AND INDUSTRIAL APPLICATIONS
EDITED BY C. (RAJ) SUNDARARAJAN, PH.D.
CONSULTANT
HOUSTON, TEXAS
All rights reserved. No part of this book covered by the copyright hereon may be reproduced or used in any form
or by any means-graphic, electronic, or mechanical, including photocopying, recording, taping, or information
storage and retrieval systems-without the written permission of the publisher.
Sundararajan, C.
Probabilistic structural mechanics handbook/C. Sundararajan.
p. cm.
Includes bibliographical references and index.
ISBN 978-1-4613-5713-1 ISBN 978-1-4615-1771-9 (eBook)
DOI 10.1007/978-1-4615-1771-9
1. Structural stability-Statistical methods. 2. Structural
dynamics-Statistical methods. 3. Probabilities. I. Title.
TA656.S86 1994
624.17-dc20 94-18578
CIP
British Library Cataloguing in Publication Data available
CONTENTS
Preface viii
Contributors ix
1. Introduction 1
Index 737
PREFACE
The need for a comprehensive book on probabilistic structural mechanics that brings together the
many analytical and computational methods developed over the years and their applications in a wide
spectrum of industries-from residential buildings to nuclear power plants, from bridges to pressure
vessels, from steel structures to ceramic structures-became evident from the many discussions the
editor had with practicing engineers, researchers, and professors. Because no single individual has the
expertise to write a book with such a diverse scope, a group of 39 authors from universities, research
laboratories, and industries in six countries on three continents was invited to write 30 chapters
covering the various aspects of probabilistic structural mechanics.
The editor and the authors believe that this handbook will serve as a reference text to practicing
engineers, teachers, students and researchers. It may also be used as a textbook for graduate-level courses
in probabilistic structural mechanics.
The editor wishes to thank the chapter authors for their contributions. This handbook would not have
been a reality without their collaboration.
CONTRIBUTORS
Dr. Truong V. Vo
Pacific Northwest Laboratory*
Richland, WA 99352
1
INTRODUCTION
C. (RAJ) SUNDARARAJAN
Probabilistic structural mechanics (PSM) is an evolving and expanding field within structural engi-
neering. The past four decades have seen significant advances in this field and still considerable research
and development activities are in progress. This handbook presents a comprehensive set of chapters
dealing with the wide spectrum of topics in the theory and applications of probabilistic structural
mechanics.
The first 20 chapters deal with basic concepts and methodologies of probabilistic structural mechan-
ics. Each of these chapters contains a tutorial-type discussion of the subject and highlights of advanced
developments. A comprehensive list of references is included in each chapter. Interested readers may
obtain more detailed information from these references. The final 10 chapters deal with the applications
of probabilistic structural mechanics in various industries and for various types of structures. A list of
references is provided in each of these applications chapters also.
The stress-strength interference method is one of the earliest methods of structural reliability
analysis. Although more advanced and less restrictive methods of reliability analysis have been devel-
oped in recent years, the stress-strength interference method is still widely used in many industries
because of its simplicity and ease of use. Chapter 2 discusses this method and provides a table of useful
formulas for the quick and easy computation of structural reliability.
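For the common case of independent, normally or lognormally distributed stress and strength, formulas of the kind tabulated in Chapter 2 reduce the reliability computation to a few lines. The sketch below is a minimal Python illustration with hypothetical numerical values; it is not reproduced from the handbook's tables.

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def reliability_normal(mu_c, sd_c, mu_l, sd_l):
    """R = P(strength C > stress L) for independent normal C and L:
    R = Phi(beta), beta = (mu_C - mu_L) / sqrt(sd_C^2 + sd_L^2)."""
    beta = (mu_c - mu_l) / math.sqrt(sd_c ** 2 + sd_l ** 2)
    return phi(beta)

def reliability_lognormal(med_c, cov_c, med_l, cov_l):
    """R for independent lognormal C and L, given their medians and
    coefficients of variation."""
    s_c = math.sqrt(math.log(1.0 + cov_c ** 2))
    s_l = math.sqrt(math.log(1.0 + cov_l ** 2))
    beta = math.log(med_c / med_l) / math.sqrt(s_c ** 2 + s_l ** 2)
    return phi(beta)

# Hypothetical data: strength ~ N(400, 25) MPa, stress ~ N(300, 30) MPa
R = reliability_normal(400.0, 25.0, 300.0, 30.0)
failure_probability = 1.0 - R
```

The same two-line pattern applies to the other distribution pairs tabulated in Chapter 2; only the expression for the reliability index changes.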
The first-order and second-order reliability methods (FORM and SORM) provide attractive math-
ematical tools for the reliability analysis of a wide class of problems. Although these methods are
computationally more involved than the stress-strength interference method, they are less restrictive,
require less simplifying assumptions, and are valid for a broader class of problems. FORM and SORM
are the subjects of Chapter 3. The first-order second moment (FOSM) method and the advanced mean
value (AMV) method are also discussed therein.
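As a minimal sketch of the FOSM idea (the performance function and numbers below are hypothetical, not taken from Chapter 3): for a linear performance function of independent normal variables, the reliability index follows directly from the first two moments of the basic variables.

```python
import math

def fosm_beta(a0, coeffs, means, sds):
    """First-order second-moment reliability index for the linear
    performance function g = a0 + sum(a_i * X_i), with independent
    normal basic variables X_i; failure corresponds to g < 0."""
    mu_g = a0 + sum(a * m for a, m in zip(coeffs, means))
    sd_g = math.sqrt(sum((a * s) ** 2 for a, s in zip(coeffs, sds)))
    return mu_g / sd_g

# Hypothetical example: g = C - L with C ~ N(400, 25) and L ~ N(300, 30)
beta = fosm_beta(0.0, [1.0, -1.0], [400.0, 300.0], [25.0, 30.0])
pf = 0.5 * (1.0 - math.erf(beta / math.sqrt(2.0)))  # Phi(-beta)
```

For nonlinear performance functions, FORM linearizes g at the most probable failure point rather than at the means, which is what distinguishes it from this mean-value sketch.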
Monte Carlo simulation (MCS) has long been used for the solution of probabilistic and statistical
problems in many fields of engineering, science, and mathematics. This method has also been used for
probabilistic structural analysis for many years. Although MCS is versatile and can solve virtually any
probabilistic structural mechanics problem that has an underlying deterministic solution, the cost of the
analysis is prohibitively high for complex problems, especially if very low probabilities are involved.
A number of variance reduction techniques (VRTs) have been developed during the past two decades
to reduce the required computational effort. Advances in computer hardware have also brought down
the cost of computing. Thus the advances in computer hardware and developments in variance reduction
techniques have made it possible to perform probabilistic analyses of many complex structural engi-
neering problems at reasonable cost. Simulation-based reliability methods are the subject of Chapter
4. Direct Monte Carlo simulation and variance reduction techniques such as the importance sampling
method, stratified sampling method, adaptive sampling method, Latin hypercube sampling method,
antithetic variates method, conditional expectation method, generalized conditional expectation method,
and response surface method are discussed.
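As a rough illustration of the contrast between direct simulation and one variance reduction technique (importance sampling) when the failure probability is small, the following Python fragment uses hypothetical, independent normal stress and strength distributions; it sketches the general idea only, not the specific algorithms of Chapter 4.

```python
import math
import random

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def direct_mc(n, seed=1):
    """Direct Monte Carlo estimate of P(stress > strength)."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        c = rng.gauss(400.0, 25.0)   # strength sample
        l = rng.gauss(300.0, 30.0)   # stress sample
        if l > c:
            fails += 1
    return fails / n

def importance_sampling(n, shift=80.0, seed=1):
    """Importance-sampling estimate: stress is drawn from a density
    shifted toward the failure region, and each failure is weighted by
    the likelihood ratio of the true to the sampling density."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        c = rng.gauss(400.0, 25.0)
        l = rng.gauss(300.0 + shift, 30.0)  # biased stress sample
        if l > c:
            acc += normal_pdf(l, 300.0, 30.0) / normal_pdf(l, 300.0 + shift, 30.0)
    return acc / n
```

With these numbers the true failure probability is about 0.005, so direct simulation needs many samples before failures are even observed, while the shifted sampling density produces failures routinely and corrects for the bias through the weights.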
Probabilistic analysis techniques such as FORM, SORM, and simulation can be combined with
classic finite element analysis to solve a variety of probabilistic structural mechanics problems. Chapter
5 discusses the probabilistic finite element method. Applications of the method to linear and nonlinear
response analysis and reliability assessment are discussed. A brief discussion of the probabilistic bound-
ary element method is also presented, with an application to reliability assessment.
Methods of structural reliability analysis discussed in Chapters 2 to 5 are applicable to any type of
failure mode-yielding, plastic collapse, excessive deformation, buckling, fracture, fatigue, creep, etc.
Some surveys indicate that approximately 80% of all structural failures may be attributed to fracture
and fatigue. The random scatter in fracture and fatigue properties of structural materials is usually even
wider than that in other material properties. So probabilistic methods are especially apt for fracture and
fatigue analyses. Probabilistic fracture mechanics and probabilistic fatigue analysis are the topics
of Chapters 6 and 7, respectively. Material properties and methods of analysis are discussed with
applications.
The preceding chapters deal primarily with the reliability of individual structural components. How-
ever, there are many structures that consist of a number of structural components (structural members).
A typical example is an offshore oil platform consisting of many dozens of tubular members. Damage
or failure of a single member due to accident or deterioration may not necessarily mean the failure of
the structure. If the structure has redundancies, it may still be able to carry the loads, but at a reduced
level of reliability (residual reliability). Chapter 8 discusses the probabilistic analysis of structural
systems, that is, structures composed of many members. In addition to describing methods of reliability
analysis, the chapter also discusses wide-ranging topics such as the development of design code rules
to include redundancy effects, requalification of existing structures to carry higher than design loads,
reliability optimization, and the assessment of residual reliabilities of structures after an accident or
after years of aging in a corrosive or other hostile environment.
Chapter 9 considers structural reliability within the context of the reliability and risk assessment
of engineering systems. An engineering system, for example, an industrial plant, consists not only of
structures but also of mechanical, electrical, and electronic components and equipment. The structural
reliabilities should be considered within the "global framework" of the engineering system. Failure of
a single structure in a system may not necessarily produce system malfunction or failure because of the
redundancies built into most systems. Two or more structural failures, or a structural failure and one or
more nonstructural component failures, may be necessary. Therefore structural failures and their prob-
abilities are best considered within the totality of the system and not as isolated incidents. Reductions
in structural failure probabilities and the benefits of such reductions in terms of increased system reli-
ability or reduced system risk should be considered within the context of the system as a whole. In
fact, structures in a system or plant can be ranked according to their importance to system reliability
and the higher-ranked structures can be designed to higher reliability levels or inspected and maintained
at more frequent intervals to achieve higher reliabilities. Methods of system reliability analysis such as
failure modes and effects analysis, fault tree analysis and event tree analysis, and methods of ranking
structures are discussed in Chapter 9. A number of applications in which structural reliabilities play an
important role in the overall engineering system reliability are also presented.
Chapters 2 to 8 consider only (or at least principally) the random variations in physical quantities
such as material properties, structural parameters and loads, and their effects on structural performance
and reliability. Human errors in design, construction, fabrication, and maintenance can also affect
structural performance and reliability. In fact, human error can be a more significant factor than
random variations in physical properties. Human errors and their effects on structural reliability are the
subject of Chapter 10.
Structural performance and reliability can be improved by a preservice inspection and then periodic
in-service inspections during the life of the structure. Nondestructive examination techniques such
as magnetic particle, radiography, acoustic emission, eddy current, and ultrasonic testing are used. These
examination techniques are not 100% correct every time. They may miss a flaw present in the struc-
ture or give a false alarm. In order to use the results of nondestructive examination effectively and
correctly in the reliability assessment of structures, a knowledge of the reliability of nondestructive
examination techniques is essential. Reliability of nondestructive examination techniques is discussed
in Chapter 11.
Probabilistic structural mechanics is expanding and evolving by adapting new theories and techniques
emerging in other fields of engineering and sciences. Chapters 12 to 14 discuss three such areas of
development, namely, expert opinion surveys, fuzzy set theory, and neural networks.
Expert opinion has been used in military intelligence, economics, medicine, and weather forecasting
with differing levels of success. Expert opinion is used in probabilistic structural mechanics when
structural failure probability prediction through statistical analysis of historical failure data or through
structural reliability analysis techniques such as those discussed in Chapters 2 to 8 is impossible, im-
practical, or prohibitively expensive. The considerable amount of research done in other fields of ap-
plications has been adapted for use in probabilistic structural mechanics. Much of the use of expert
opinion in probabilistic structural mechanics is for applications in the nuclear power industry, but the
gas industry has also used expert opinion to estimate failure probabilities of interior gas piping in
residential and commercial buildings. Expert opinion surveys are the subject of Chapter 12. Methods
of conducting expert opinion surveys and the analysis and aggregation of expert opinions are discussed.
Fuzzy set theory is a new branch of mathematics (circa 1965). Although the classic deterministic
and probability theories of mathematics are suited for the analysis of quantitative (numerical) infor-
mation, fuzzy set theory is best suited for the analysis of qualitative information. For example, it is
difficult, if not impossible, to provide a probability distribution for the quality of workmanship in a
construction project. But an experienced construction engineer may be able to characterize it qualita-
tively as "excellent," "good," "acceptable," or "poor." Such subjective, qualitative information can-
not be incorporated in a structural reliability analysis using probability theory. But fuzzy set theory
can be used for this purpose. Chapter 13 discusses the fundamental concepts of fuzzy set theory and
its applications in probabilistic structural mechanics. The first impression many structural engineers
have of fuzzy set theory is that it is of no practical use in probabilistic structural mechanics. But if one
approaches it with an open mind, he or she may find it to be a useful tool to complement probability
theory. We purposely included a chapter on fuzzy set theory and its applications in this handbook in
order to create an interest in the subject among probabilistic structural mechanics researchers. This area
of probabilistic structural mechanics is still in early stages of development and much work is yet to be
done.
The most recent advance in computer software technology is neural networks. Use of this new
technology in probabilistic structural mechanics is the subject of Chapter 14. This area of probabilistic
structural mechanics is still in its infancy and much work is yet to be done. The pioneering application
and results presented in the chapter show the potential of neural networks in probabilistic structural
mechanics.
Chapters 15 to 18 discuss applications of probabilistic structural mechanics in design code devel-
opment, structural optimization, in-service inspection planning, and life expectancy prediction. These
generic applications cross industry lines and structural types. They are applicable to any type of structure
in any industry-whether buildings or bridges, nuclear plants or naval vessels, equipment supports or
aircraft structures.
Most of the current design codes are based on deterministic principles. However, the random vari-
abilities in structural strength and loads are recognized and are implicitly considered by specifying a
safety factor between nominal strengths and nominal loads. Safety factors are specified on the basis of
the collective judgment of the code developers. Although these safety factors have served society well
by providing for the design of safe structures, failure probabilities of structures designed according to
these codes are not known (without performing a structural reliability analysis). There is no one-to-one
relationship between the safety factor and structural reliability; the latter depends not only on the safety
factor but also on the load-response relationships, failure criteria, and random variabilities in material
properties and loads. Therefore structures designed to the same code specifications do not necessarily
have the same level of reliability; some structures are overdesigned (higher reliability) and some are
underdesigned (lower reliability). Probability-based design codes attempt to derive code specifications
that would result in an approximately uniform level of reliability for all structures designed according to
the code. Chapter 15 discusses the basic philosophy and development of probability-based design codes.
Code-based designs are acceptable for the vast majority of structures. But there are special situations
in which minimum weight designs or other types of optimal designs are important. For example, min-
imum weight design is of interest in aircraft structures, not only because of the initial material savings
but also because of fuel savings throughout the life of the aircraft. Special structures such as space
stations, which are not governed by any design code, may also be designed to achieve maximum
reliability within budget constraints. Reliability-based structural optimization is the subject of Chapter
16. Optimization techniques are described with illustrative examples. Although reliability optimization
techniques are well developed, they have not yet made inroads into industrial applications.
Periodic in-service inspections are an important part of maintaining the reliability of operating struc-
tures above specified levels even as time progresses and the structures age. Because the very purpose
of in-service inspections is to maintain an adequate level of reliability over the service life of the
structure, setting the inspection interval on the basis of reliability analysis is a logical step. Unlike the
conventional practice of specifying inspection intervals on the basis of past experience and engineering
judgment, reliability-based (or risk-based) inspection strategies are more rational and the structure is
neither overinspected nor underinspected; the inspection interval is just sufficient to maintain the re-
quired level of reliability. Chapter 17 discusses the use of probabilistic structural mechanics in inspection
and maintenance planning.
The infrastructure and industrial facilities built during the 1950s and 1960s in the United States and
many other countries are aging and deteriorating. Estimation of the remaining life and methods for
extending the life are becoming increasingly important. Probabilistic methods are well suited for life
expectancy prediction and life extension planning. Even if a structure has reached the end of its design
life (on the basis of the original design calculations), it does not necessarily mean that the structure is
unsafe or unfit for service. Probabilistic methods can be used to compute the reliability at the end of
design life, and if this reliability is at an acceptable level the structure could continue in service. Even
if the structural reliability is approaching unacceptable levels, life extension strategies such as
improved or more frequent inspection and maintenance, or strengthening of selected structural members,
could be instituted and their beneficial effects on the reliability of the structure could be quantified by
probabilistic structural analysis. Probability-based life prediction is the subject of Chapter 18.
Chapters 19 and 20 deal with the reliability of structures during natural disasters such as earth-
quakes, tornadoes, and hurricanes. Although earthquake loads and wind loads are included in most
design codes, severe earthquakes and extreme wind conditions well above design levels can occur,
although very infrequently, and cause widespread damage to structures. Consequences are not only
property damage but also injuries and loss of life. Insurance companies have to consider the probabilities
and consequences of natural disasters. Government agencies and industries have to consider the damage
to critical facilities and the potential consequences to public health and the environment. Thus the
estimation of the probability of occurrence of natural disasters, probability of structural damage during
such events, and the overall risk due to such damage are of interest. These topics are discussed in
Chapters 19 and 20 with reference to earthquakes and extreme-wind events, respectively.
Probabilistic methods of structural analysis are now used in a broad spectrum of industries. Some
industries have probabilistic concepts integrated into the design codes whereas in many other industries
probabilistic methods are used to resolve special problems. Chapters 21 to 26 discuss applications of
probabilistic structural mechanics in a number of industries.
The nuclear power industry is one example in which probabilistic structural mechanics is used to
resolve special problems and licensing issues. Also, structural failure probabilities have been combined
with mechanical, electrical, and electronic component failure probabilities to predict the public risks
due to commercial nuclear plant operations. Probabilistic structural analysis is also used to investigate
the adequacy of the codes, regulations, and procedures used in the design of nuclear power plant
structures. In-service inspection planning is yet another application. These and other applications in the
nuclear power industry are discussed in Chapter 21.
Chapter 22 discusses applications to pressure vessels and piping. The impetus for the use of prob-
abilistic structural mechanics in pressure vessels and piping came from safety concerns in the nuclear
power industry. The probabilistic methods developed by the nuclear power industry have also been
adapted for applications to nonnuclear pressure vessels and piping. Both nuclear and nonnuclear appli-
cations are discussed in Chapter 22. Applications discussed include the resolution of safety issues in
nuclear power plants, remaining life prediction, evaluation of life extension strategies, minimum weight
design, and in-service inspection planning.
The use of new, advanced composite materials, the ever-increasing performance demands, and the
need for high reliability and safety during missions have all been the impetus for the application of
probabilistic structural mechanics in the military aircraft industry. The commercial aircraft industry is
also following suit. Applications of probabilistic structural mechanics in the aircraft industry are the
subject of Chapter 23. In addition to the usual reliability computation from load and material property
statistics, reliability evaluation of complex, built-up structures on the basis of certification test results
and the failure probability analysis of a fleet of aircraft using flight hour and field inspection data are
also discussed. With cuts in military budgets and the economic crunch in the aviation industry, aircraft are
being used beyond their initial design life. Life prediction is also discussed in the chapter.
As with military aircraft, military naval vessels also have to meet increasing performance and
reliability demands. A number of research projects on the use of probabilistic methods are ongoing.
The commercial ship industry is also taking notice. Chapter 24 discusses the probabilistic analysis of
ship structures. Applications in design, in-service inspection, and life prediction are discussed.
The offshore oil production industry has been in the forefront of developing and using probabilistic
methods of structural analysis and design. As oil platforms move into deeper and deeper waters, new
structural concepts and construction technologies are being used. Together with concerns about oil spills,
worker safety, and the economic impact of platform damage, these factors prompted interest and
research in the use of probabilistic structural mechanics to design and operate safer, more reliable, and
more economical offshore platforms. Chapter 25 discusses this subject. Fatigue reliability assessment,
incorporation of in-service inspection findings to update reliability estimates, reliability optimization,
requalification of older platforms, and probability-based design codes are some of the topics discussed
in the chapter. It is estimated that the use of probabilistic methods has saved the oil industry hundreds
of millions of dollars in the design and operation of offshore platforms.
Use of probabilistic methods in the analysis, design, and maintenance of bridges has been the subject
of research for many years. Many thousands of bridges in the United States and other countries are
aging. Their remaining lives have to be estimated and in the majority of cases the older bridges have
to be renovated or new bridges built. Probabilistic structural mechanics could play a vital role in the
life expectancy prediction, renovation, and new construction. A comprehensive discussion of the appli-
cations of probabilistic methods to design, reliability assessment, inspection planning, and life prediction
of bridges is provided in Chapter 26.
Chapters 27 to 30 discuss probabilistic structural mechanics applications in steel, concrete, timber,
and ceramic structures, respectively.
Use of probabilistic methods in steel structure design is now well established. Probability-based codes
(load and resistance factor design [LRFD] codes) are in use in the United States, Canada, and many
European countries. Chapter 27 discusses and comments on LRFD code rules for steel structures.
Material properties data and some results from simulation-based reliability assessment are also presented.
Probabilistic approaches to the design of concrete structures are also well developed, and LRFD
codes are in use in the United States and elsewhere. Concrete structures are the subject of Chapter 28.
In addition to a discussion of LRFD code rules, a Bayesian approach for estimating the compressive
strength of in situ concrete in existing structures is presented. Safety assessment of aging infrastructure
and industrial facilities requires an estimate of the strength of existing concrete structures, and this
Bayesian approach is an effective and economical method for such estimates.
Studies on the use of probabilistic methods for timber structure design have been ongoing for about
two decades, and probability-based design codes have been developed. Timber structures are the subject
of Chapter 29. Material properties, probability-based design codes, and reliability assessment of struc-
tural members, connections, wood joist floors, wood stud walls, trusses, bridges, and transmission poles
are discussed.
Ceramic structures are being used increasingly in many applications in which high temperature and
corrosion resistance are important. Applications of probabilistic methods to ceramic structures are not
as mature as in steel or concrete structures. However, probabilistic methods are well suited for ceramic
structures because of the wide scatter in material properties relevant to the dominant failure mode-
brittle fracture. The application of probabilistic structural mechanics to ceramic structures is the subject
of Chapter 30. Material properties, probabilistic analysis of brittle fracture, and development of lifetime
diagrams are discussed.
The editor believes that this handbook will serve not only as a useful reference book but also as a
catalyst for interactions between researchers and applications engineers, and among applications engi-
neers in different industries. Such interactions should be conducive to creating an environment for basic
and applied research that would meet the current and projected needs of the applications engineers. We
have purposely included both theoretical and industrial applications chapters in this handbook, so that
practicing engineers would be exposed not only to applications in their respective industries but also to
recent advances in probabilistic methodologies and computational tools and be tempted to use them in
their projects. Also, this book could promote cross-industry fertilization whereby engineers from one
industry learn about applications in other industries and adapt them for their own applications. As an
example, there are methods and software used in offshore structures design and maintenance that would
lend themselves easily to applications in aircraft structures or pressure vessels and piping.
This book also exposes researchers, professors, and graduate students to probabilistic structural me-
chanics applications in a wide spectrum of industries. This exposure would help them identify future
research and training needs, as applications of probabilistic structural mechanics are broadening in scope
and increasing in numbers.
The initial impetus for probabilistic structural mechanics applications has been safety concerns. With
increasing public demand for safer products and safer industrial operations, use of probabilistic methods
in this direction should increase. Although safety assessment of new designs and aging structures is a
primary application of probabilistic structural mechanics, economic benefits of using probabilistic meth-
ods in design, licensing, inspection planning, life prediction, and life extension are also being recognized.
In fact, as noted in the preceding discussions and as will be described in several of the chapters in this
handbook, many recent applications of probabilistic structural mechanics are economy-driven.
Thus, with the double impetus from safety and economic perspectives, the use of probabilistic meth-
ods in structural engineering should broaden and increase in the coming years. This handbook should
serve as a comprehensive reference book for researchers, professors, students, and practicing engineers
interested in the development and application of probabilistic structural mechanics.
2
STRESS-STRENGTH
INTERFERENCE METHOD
1. INTRODUCTION
The stress-strength interference method is one of the oldest methods of structural reliability analysis.
Although more powerful methods of reliability analysis such as the first-order/second-order reliability
methods and simulation techniques (which are applicable to a broader class of problems and with less
restrictive assumptions) are now available, the stress-strength interference method continues to be a
popular method of reliability analysis among practicing engineers in many industries. The attractiveness
of the method lies in its simplicity, ease, and economy. A major drawback is the assumption that the
strength and stress are statistically independent, which may not be valid for some problems. If this
assumption can be justified, then reliability can be computed relatively quickly, using stress-strength
interference methods; analytical solutions are available for a wide range of situations.
The name "stress-strength interference method" seems to imply that structural reliability is com-
puted from the stress and strength distributions. But the name is in a sense a misnomer because the
method is applicable to a broader class of problems. The term "stress" should be considered in a
broader sense as any applied load or load-induced response quantity that has the potential to cause
failure. Examples are stress, force, moment, torque, pressure, temperature, shock, vibration, stress in-
tensity, strain, and deformation. The term "strength" should be considered in a broader sense as the
capacity of the component or system to withstand the applied load ("stress"). Examples are yield stress,
ultimate stress, yield moment, collapse moment, buckling load, and permissible deformation, depending
on the type of applied load (stress, force, moment, deformation, etc.) and the failure criterion (yield
failure, collapse, fatigue, excessive deformation, etc.). Some authors use the term "load-capacity in-
terference method", instead of "stress-strength interference method", to indicate the broader scope of
the method.
Within the context of the stress-strength interference method, failure is said to occur if the stress
(load) exceeds the strength (capacity). Failure probability or unreliability is the probability that the stress
is greater than the strength. The stress-strength interference method may be used in conjunction with
a variety of failure modes such as yielding, buckling, fracture, and fatigue.
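Under the independence assumption, this failure probability can be written as the interference integral P_f = ∫ F_C(x) f_L(x) dx: the probability that the strength lies below a given level, weighted by how often the stress attains that level. The sketch below (plain Python, hypothetical normal distributions) evaluates the integral numerically; the normal-normal case also has a closed-form answer, which serves as a check.

```python
import math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def normal_cdf(x, mu, sd):
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

def interference_pf(mu_c, sd_c, mu_l, sd_l, lo=0.0, hi=1000.0, steps=20000):
    """P(failure) = P(L > C) = integral of F_C(x) * f_L(x) dx,
    evaluated with the trapezoidal rule over [lo, hi]."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * normal_cdf(x, mu_c, sd_c) * normal_pdf(x, mu_l, sd_l)
    return total * h

# Hypothetical data: strength ~ N(400, 25), stress ~ N(300, 30)
pf = interference_pf(400.0, 25.0, 300.0, 30.0)
# Closed-form check for the normal-normal case
beta = (400.0 - 300.0) / math.sqrt(25.0 ** 2 + 30.0 ** 2)
pf_exact = 0.5 * (1.0 - math.erf(beta / math.sqrt(2.0)))
```

For stress or strength distributions without tabulated closed forms, the same numerical quadrature applies unchanged; only the two distribution functions are swapped out.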
2. NOTATIONS
When dealing with random variables, the random variables are denoted by capital letters (X, Y, etc.)
and the specific values they take are denoted by the corresponding lower case letters (x, y, etc.).
A  A random variable
a  A specific value of random variable A
ā  Mean value of random variable A
ã  Median value of random variable A
B_t(m, n)  Incomplete beta function
C, c  Capacity (or strength)
F_A(·)  Cumulative distribution function of random variable A
f_A(·)  Probability density function of random variable A
f_{A1,A2,...,An}(·, ..., ·)  Joint probability density function of random variables A_1, A_2, ..., A_n
g  Performance function
K  K-factor
L, ℓ  Load (or stress)
N_C  Number of data points used to determine strength probability distribution
N_L  Number of data points used to determine stress probability distribution
n_e  Effective sample size
P_f  Failure probability
R  Reliability
U, u  Difference variable
V, v  Ratio variable
β  Reliability index; also, slope parameter in Weibull distribution
γ  Lower one-sided confidence level
Γ(·)  Gamma function
Φ(·)  Cumulative distribution function of the standard normal variable
Ω  Failure domain
σ_A  Standard deviation of random variable A
3. DERIVATION OF THE STRESS-STRENGTH INTERFERENCE EQUATION

The stress-strength interference equation may be derived from a general failure criterion (Sundararajan, 1986). This derivation also brings forth the assumptions inherent in the stress-strength interference method.

Let X_i, with i = 1, 2, ..., n, be the basic variables that describe the structure and loads. These variables could be structural dimensions, material properties, and loads. Let the failure criterion be given by

g(X_1, X_2, ..., X_n) < 0   (2-1)

where g is the performance function; conversely, the structure is safe if

g(X_1, X_2, ..., X_n) ≥ 0   (2-2)

The performance function may also be expressed in terms of a strength parameter and a stress parameter as

g = g(C, L)   (2-3)

where C is the strength (capacity) parameter and L is the stress (load) parameter.
Probability of failure is given by

P_f = ∫ ... ∫_Ω f_{X1,X2,...,Xn}(x_1, x_2, ..., x_n) dx_1 dx_2 ... dx_n   (2-5)

where f_{X1,X2,...,Xn}(x_1, x_2, ..., x_n) is the joint probability density function of the basic variables X_i, and Ω is the failure domain where the inequality of Eq. (2-1) is satisfied. The integral on the right-hand side represents the multidimensional volume of the joint probability density function within the failure domain.
The stress-strength equation may be derived from Eq. (2-5) under certain conditions. Consider the failure function given by Eq. (2-3). If there is only one stress and strength parameter each, and the stress and strength parameters are statistically independent, then the failure function may be written as

g = C − L   (2-6)

where C and L are the strength (capacity) and stress (load) parameters, respectively. For example, L is the maximum stress in a structural member and C is the ultimate stress of the structural material; or L is the load on a column and C is the buckling load of the column.
Substitution of Eq. (2-6) into Eq. (2-5) yields

P_f = ∫∫_Ω f_{L,C}(ℓ, c) dℓ dc   (2-7)

where f_{L,C}(ℓ, c) is the joint probability density function of L and C. Because L and C are statistically independent, we have

f_{L,C}(ℓ, c) = f_L(ℓ) f_C(c)   (2-8)

where f_L(ℓ) and f_C(c) are the probability density functions of L at L = ℓ and C at C = c, respectively. The failure domain Ω is given by

L > C   (2-9)
Substituting Eqs. (2-8) and (2-9) into Eq. (2-5), and providing the appropriate limits to the integral, we obtain the failure probability as

P_f = ∫_{−∞}^{+∞} [∫_c^{+∞} f_L(ℓ) dℓ] f_C(c) dc
    = ∫_{−∞}^{+∞} [1 − F_L(c)] f_C(c) dc   (2-10)

or

P_f = ∫_{−∞}^{+∞} F_C(ℓ) f_L(ℓ) dℓ   (2-11)

where F_L(·) and F_C(·) are the cumulative distribution functions of L and C, respectively. F_L(c) is the cumulative distribution function of L at L = c, and F_C(ℓ) is the cumulative distribution function of C at C = ℓ.
The foregoing two equations (Eqs. [2-10] and [2-11]) form the basis for the stress-strength inter-
ference method. It is important to remember that these equations are applicable only if the stress and
strength are statistically independent.
Analytical expressions for the integrals in Eqs. (2-10) and (2-11) are available for a variety of
probability density functions (discussed in Section 6). Tabulated results are available for a few cases
(discussed in Section 7). When neither analytical expressions nor tabulated results are available, recourse
has to be taken to numerical integration procedures (discussed in Section 8).
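When both distributions are normal, a numerical evaluation of Eq. (2-10) can be checked against the closed-form result Φ(−β) derived in Section 5. A sketch of such a check by trapezoidal integration; the parameter values are those of the numerical example in Section 9, while the 10-standard-deviation integration range and step count are assumptions of this sketch:

```python
import math

def norm_pdf(x, mu, sigma):
    """Normal probability density function."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def norm_cdf(x, mu, sigma):
    """Normal cumulative distribution function (via the error function)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def failure_probability(mu_l, sig_l, mu_c, sig_c, n=20001):
    # Trapezoidal evaluation of Eq. (2-10):
    #   Pf = integral of [1 - F_L(c)] f_C(c) dc over the strength range
    lo, hi = mu_c - 10.0 * sig_c, mu_c + 10.0 * sig_c
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        c = lo + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * (1.0 - norm_cdf(c, mu_l, sig_l)) * norm_pdf(c, mu_c, sig_c)
    return total * h

# Stress and strength data of the numerical example in Section 9 (psi)
MU_L, SIG_L = 100_000.0, 10_000.0
MU_C, SIG_C = 150_000.0, 8_000.0

pf_num = failure_probability(MU_L, SIG_L, MU_C, SIG_C)
beta = (MU_C - MU_L) / math.sqrt(SIG_L**2 + SIG_C**2)   # Eq. (2-20)
pf_exact = norm_cdf(-beta, 0.0, 1.0)                    # Eq. (2-19)
print(beta, pf_num, pf_exact)
```

With these data the two routes agree closely (β is about 3.90, failure probability on the order of 5 × 10⁻⁵), consistent with the Section 9 example.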
4. GRAPHICAL REPRESENTATION
It is a common practice to represent stress-strength interference by Fig. 2-1. The figure shows the
probability density functions of stress and strength and their interference (overlap). It should be pointed
out that the overlapped area (shown shaded in Fig. 2-1) is not equal to the failure probability. However,
this area is qualitatively proportional to the failure probability (the larger the area, the higher the failure
probability; the smaller the area, the lower the failure probability) as long as the mean value of stress
is less than the mean value of strength.
The failure probability is equal to the black area in Fig. 2-1. This area will lie within the shaded
area (overlapped area) as long as the mean value of stress is less than the mean value of strength. The
curve under which this "failure area" falls is nothing but the integrand of the outer integral in Eq. (2-11). That is,

E(ℓ) = F_C(ℓ) f_L(ℓ)   (2-12)

where E(ℓ) is the curve under which the failure area falls. Kececioglu and Li (1984), who introduced this function in the structural reliability literature, call it the failure function.
Figure 2-1. Probability density functions of stress and strength, showing their interference (overlap).
5. ALTERNATE FORMULATIONS
In the foregoing discussions, the failure probability was computed in terms of the probability density
functions of stress and strength. It may also be computed in terms of the probability density function
of the difference between the strength and stress, or the probability density function of the ratio of the
strength by stress. Such formulations may make the reliability computation easier in some problems.
U=C-L (2-13)
where fu(u) is the probability density function of U, and is known as the difference distribution. Fu(O)
is the cumulative distribution function of U at U = O. The probability density function of U is shown
in Fig. 2-2. The shaded area (area to the left of the vertical axis) is equal to the failure probability.
Even if the probability density functions of C and L are standard distributions (Gaussian, lognormal,
Weibull, etc.), the probability density function of U may not necessarily be a standard distribution. For
example, if both C and L are, say, lognormally distributed, U is not lognormally distributed. The
probability density function of U may have to be determined numerically and then the integral in Eq.
(2-14) must be computed by numeril;al integration (discussed in Section 8.2).
There are exceptions. One notable exception is when C and L are normally distributed. If C and L are normally distributed, then U is also normally distributed, with mean value ū and standard deviation σ_U given by

ū = c̄ − ℓ̄   (2-15)

σ_U = (σ_C² + σ_L²)^(1/2)   (2-16)

where c̄ and ℓ̄ are the mean values of C and L, respectively, and σ_C and σ_L are the standard deviations of C and L, respectively.

The failure probability is given by (per Eq. [2-14])

P_f = ∫_{−∞}^{0} [1/(σ_U √(2π))] exp[−(1/2)((u − ū)/σ_U)²] du   (2-17)

Define the standardized variable

Y = (U − ū)/σ_U   (2-18)

Because U is normally distributed with mean ū and standard deviation σ_U, Y is the standard normal variable, which is normally distributed with mean value equal to 0 and standard deviation equal to 1. The above transformation transforms Eq. (2-17) to

P_f = ∫_{−∞}^{−β} [1/√(2π)] exp(−y²/2) dy = Φ(−β)   (2-19)

where

β = ū/σ_U = (c̄ − ℓ̄)/(σ_C² + σ_L²)^(1/2)   (2-20)

and Φ(−β) is the cumulative distribution function of the standard normal variable evaluated at −β. Values of this function Φ(x) for various values of x are available in tabular form in many probability and statistics textbooks (e.g., Benjamin and Cornell, 1970).
Figure 2-2. Probability density function of the difference variable U; the shaded area to the left of u = 0.0 is equal to the failure probability.

Figure 2-3. Probability density function of the ratio variable V.
Thus we have a simple expression for the failure probability when both stress and strength are
normally distributed.
5.2. Ratio Variable

Define the ratio variable

V = C/L   (2-21)

The failure probability is then

P_f = F_V(1) = ∫_0^1 f_V(v) dv   (2-22)

where f_V(v) is the probability density function of V. The probability density function of V is shown in Fig. 2-3. The shaded area (area between v = 0 to 1) is equal to the failure probability.
The discussion in Section 5.1 (that the form of the probability density function of U may not necessarily follow a standard distribution) holds for V also. One exception is the case in which both C and L are lognormally distributed. In such a case, the ratio V is also lognormally distributed and an analytical expression for the failure probability may be derived from Eq. (2-22). This analytical expression is given later in Section 6. For most other cases, the probability density function of V has to be determined numerically and the integral in Eq. (2-22) must be computed by numerical integration (discussed in Section 8.3).
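A sketch of this lognormal special case: if λ_C, ζ_C and λ_L, ζ_L are the means and standard deviations of ln C and ln L, then ln V = ln C − ln L is normal, and Eq. (2-22) reduces to P_f = Φ(−(λ_C − λ_L)/√(ζ_C² + ζ_L²)). The parameter values below are hypothetical; the analytical value is compared against Monte Carlo sampling of the ratio variable:

```python
import math
import random

random.seed(7)

# Hypothetical lognormal parameters: lam = mean of ln(.), zeta = std dev of ln(.)
LAM_L, ZETA_L = 4.5, 0.20   # stress
LAM_C, ZETA_C = 5.0, 0.10   # strength

# Closed form: ln V = ln C - ln L is normal; failure iff V < 1, i.e. ln V < 0
beta_ln = (LAM_C - LAM_L) / math.sqrt(ZETA_C**2 + ZETA_L**2)
pf_exact = 0.5 * (1.0 + math.erf(-beta_ln / math.sqrt(2.0)))

# Monte Carlo check on the ratio variable V = C / L
N = 200_000
hits = 0
for _ in range(N):
    c = math.exp(random.gauss(LAM_C, ZETA_C))
    l = math.exp(random.gauss(LAM_L, ZETA_L))
    if c / l < 1.0:   # V < 1 means stress exceeds strength
        hits += 1
pf_mc = hits / N
print(pf_exact, pf_mc)
```

For these parameters the closed-form failure probability is roughly 0.013, and the sampled estimate should agree to within sampling error.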
6. ANALYTICAL SOLUTIONS
Analytical solutions are available for certain combinations of stress and strength probability density
functions. When such a solution is available, it makes the reliability computation easy and economical.
First we define a number of probability density functions that are commonly used to represent stress and strength distributions.
Normal (Gaussian) distribution

f_A(a) = [1/(σ_A √(2π))] exp[−(1/2)((a − ā)/σ_A)²];  (−∞ < a < +∞)   (2-23)

where A is a random variable (say, stress or strength), ā is its mean value, and σ_A is its standard deviation.

Lognormal distribution

f_A(a) = [1/(a ζ_A √(2π))] exp[−(1/(2ζ_A²))(ln a − λ_A)²];  (a > 0)   (2-24)

where λ_A and ζ_A are the mean value and standard deviation of ln A, respectively.

Exponential distribution

f_A(a) = λ_A exp(−λ_A a);  (a ≥ 0)   (2-25)

where λ_A = (1/ā).

Rayleigh distribution

f_A(a) = (a/α_A²) exp[−a²/(2α_A²)];  (a ≥ 0)   (2-26)

where α_A is the scale parameter.

Gamma distribution

f_A(a) = [λ_A (λ_A a)^(n−1)/Γ(n)] exp(−λ_A a);  (a ≥ 0)   (2-27)

where λ_A is the scale parameter, n is the shape parameter, and Γ(·) is the Gamma function. Note that the Gamma distribution reduces to the exponential distribution when n = 1.

Weibull distribution

f_A(a) = [β(a − a₀)^(β−1)/(θ − a₀)^β] exp[−((a − a₀)/(θ − a₀))^β];  (a ≥ a₀ ≥ 0)   (2-28)

where a₀ is the truncation parameter, θ − a₀ is the scale parameter, and β is the slope parameter. Note that the Weibull distribution reduces to an exponential distribution when a₀ = 0 and β = 1.
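This reduction can be verified numerically. A minimal sketch (the value of θ is arbitrary): with a₀ = 0 and slope 1, the three-parameter Weibull density should coincide with the exponential density of Eq. (2-25) with λ_A = 1/θ.

```python
import math

def weibull_pdf(a, a0, theta, slope):
    # Three-parameter Weibull density, Eq. (2-28)
    s = (a - a0) / (theta - a0)
    return slope * s ** (slope - 1) / (theta - a0) * math.exp(-s ** slope)

def exp_pdf(a, lam):
    # Exponential density, Eq. (2-25)
    return lam * math.exp(-lam * a)

theta = 2.5   # arbitrary scale parameter for the check
for a in (0.1, 0.5, 1.0, 3.0, 7.0):
    w = weibull_pdf(a, a0=0.0, theta=theta, slope=1.0)
    e = exp_pdf(a, lam=1.0 / theta)
    assert abs(w - e) < 1e-12   # the two densities coincide
print("Weibull(a0 = 0, slope = 1) matches Exponential(lambda = 1/theta)")
```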
Analytical expressions for the failure probability for a number of combinations of stress and strength probability density functions are tabulated in Table 2-1. These expressions are compiled from Lipson et al. (1967, 1969), Kapur and Lamberson (1977), Haugen (1980), and Kececioglu and Li (1984). Derivations of these expressions may be found in one or more of these references.
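The tabulated expressions can be spot-checked by evaluating Eq. (2-11) numerically. A sketch for the exponential-stress, gamma-strength combination (row 5 of Table 2-1), with an integer gamma shape parameter so the gamma CDF has a closed Erlang form; the parameter values are hypothetical:

```python
import math

LAM_L = 0.8   # exponential stress rate (hypothetical)
LAM_C = 2.0   # gamma strength scale parameter (hypothetical)
J = 3         # integer gamma shape parameter

def gamma_cdf(c, lam, j):
    # Erlang CDF (gamma with integer shape j):
    #   F_C(c) = 1 - exp(-lam c) * sum_{k<j} (lam c)^k / k!
    s = sum((lam * c) ** k / math.factorial(k) for k in range(j))
    return 1.0 - math.exp(-lam * c) * s

def exp_pdf(l, lam):
    return lam * math.exp(-lam * l)

# Numerical evaluation of Eq. (2-11): Pf = integral of F_C(l) f_L(l) dl
n, hi = 100_001, 40.0
h = hi / (n - 1)
pf_num = 0.0
for i in range(n):
    l = i * h
    w = 0.5 if i in (0, n - 1) else 1.0
    pf_num += w * gamma_cdf(l, LAM_C, J) * exp_pdf(l, LAM_L) * h

# Closed form from Table 2-1, row 5: Pf = (lam_C / (lam_C + lam_L))**j
pf_table = (LAM_C / (LAM_C + LAM_L)) ** J
print(pf_num, pf_table)
```

For these rates the closed form gives (2.0/2.8)³, about 0.364, and the numerical integral should reproduce it.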
7. TABULATED RESULTS
Lipson et al. (1967, 1969) have considered the combinations of (1) normally distributed stress and
Weibull-distributed strength, and (2) Weibull-distributed stress and Weibull-distributed strength.

Table 2-1 (excerpt; rows 4 to 7). Failure probability expressions:

4. Stress: Exponential, Eq. (2-25) with A = L, ā = ℓ̄, and λ_A = λ_L. Strength: Normal, Eq. (2-23) with A = C, ā = c̄, and σ_A = σ_C.
P_f = Φ(−c̄/σ_C) + exp(−λ_L c̄ + λ_L² σ_C²/2) [1 − Φ(−(c̄ − λ_L σ_C²)/σ_C)]

5. Stress: Exponential, Eq. (2-25) with A = L, ā = ℓ̄, and λ_A = λ_L. Strength: Gamma, Eq. (2-27) with A = C, ā = c̄, n = j, and λ_A = λ_C.
P_f = (λ_C/(λ_C + λ_L))^j

6. Stress: Gamma, Eq. (2-27) with A = L, ā = ℓ̄, n = i, and λ_A = λ_L. Strength: Exponential, Eq. (2-25) with A = C, ā = c̄, and λ_A = λ_C.
P_f = 1 − (λ_L/(λ_L + λ_C))^i

7. Stress: Gamma, Eq. (2-27) with A = L, ā = ℓ̄, n = i, and λ_A = λ_L. Strength: Gamma, Eq. (2-27) with A = C, ā = c̄, n = j, and λ_A = λ_C.
P_f = 1 − [Γ(i + j)/(Γ(i) Γ(j))] B_t(i, j), where t = λ_L/(λ_L + λ_C)

For the two combinations considered by Lipson et al., no analytical solution was possible, but they reduced the stress-strength interference equation to the following forms.
P_f = 1 − Φ((c₀ − ℓ̄)/σ_L) − [(θ − c₀)/(σ_L √(2π))] ∫_0^∞ exp{−z^β − [(c₀ − ℓ̄)/(√2 σ_L) + ((θ − c₀)/(√2 σ_L)) z]²} dz   (2-29)

where z is a dummy variable within the integral and Φ(x) is the cumulative distribution function of the standard normal variable, in which x = (c₀ − ℓ̄)/σ_L.

(2-30)
Lipson et al. (1967, 1969) have computed the failure probabilities given by Eqs. (2-29) and
(2-30) by numerical integration and tabulated the results for various parameter values. (If the tabu-
lated results are not available, the integrals in Eqs. [2-29] and [2-30] may be computed by numerical
integration.)
8. NUMERICAL SOLUTIONS
If analytical expressions or tabulated results are not available for a particular combination of stress and
strength probability density functions of interest, then a numerical solution procedure has to be used to
compute the failure probability. There are also cases in which the stress and strength data (derived from
theoretical models, experimental measurements or field measurements) may not fit well to any standard
probability density function. In such cases the probability density functions of stress and strength are
presented in the form of histograms. Numerical solution procedures have to be used in these cases also.
There are four approaches to a numerical solution.
The first approach is based on simulation. Generate N random values c_i of the strength and ℓ_i of the stress according to their respective probability density functions, and compute the corresponding values of the difference variable,

U_i = c_i − ℓ_i,   i = 1, 2, ..., N

The N values of U_i are then fitted to a suitable probability density function (Gaussian, lognormal, Weibull, etc.). This probability density function is then used in Eq. (2-14) and the integral in that equation is computed analytically or numerically.
Monte Carlo and other simulation Itechniques are described in Sundararajan (1985) as well as in
Chapter 4 of this book. Generation of random variables according to their probability density functions
is also discussed in these references. Methods of fitting standard probability density functions to a set
of random numbers are discussed in many probability and statistics textbooks (e.g., Benjamin and
Cornell, 1970).
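A sketch of the simulation-and-fitting approach just described, assuming normal stress and strength and a normal fit to the simulated U_i by moment matching (the parameter values are hypothetical; with normal inputs the fitted form is exact, so the result can be compared with Eqs. [2-19] and [2-20]):

```python
import math
import random
import statistics

random.seed(42)

MU_L, SIG_L = 100.0, 10.0   # hypothetical stress parameters
MU_C, SIG_C = 130.0, 8.0    # hypothetical strength parameters

# Step 1: simulate N values of the difference variable U = C - L
N = 50_000
u = [random.gauss(MU_C, SIG_C) - random.gauss(MU_L, SIG_L) for _ in range(N)]

# Step 2: fit a probability density function to the U_i (here a normal
# distribution, by matching the sample mean and standard deviation)
mu_u = statistics.fmean(u)
sig_u = statistics.stdev(u)

# Step 3: Eq. (2-14): Pf = F_U(0), the fitted CDF evaluated at u = 0
pf = 0.5 * (1.0 + math.erf((0.0 - mu_u) / (sig_u * math.sqrt(2.0))))

# Closed-form comparison (Eqs. 2-15, 2-16, 2-19, 2-20)
beta = (MU_C - MU_L) / math.sqrt(SIG_L**2 + SIG_C**2)
pf_exact = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))
print(pf, pf_exact)
```

The fitted and closed-form failure probabilities should agree to within the sampling error of the fitted moments (here, within a few percent).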
In the second approach, the probability density function of the difference variable U (or the ratio variable V) is represented by a maximum-entropy distribution of the form

f_V(v) = exp(a₀ + a₁v + a₂v² + a₃v³ + a₄v⁴)   (2-31)

where the coefficients a₀, a₁, a₂, a₃, and a₄ are determined by maximizing the Shannon logarithmic entropy function (Siddall and Daib, 1974). Failure probability is then computed by using Eq. (2-31) in Eq. (2-22).
In another approach, define

G(ℓ) = 1 − F_C(ℓ)   (2-32)

H(ℓ) = 1 − F_L(ℓ)   (2-33)

so that

dH = −f_L(ℓ) dℓ   (2-34)

0.0 ≤ H ≤ 1.0   (2-35)

Substitution of Eqs. (2-32) to (2-35) into the stress-strength interference equation (Eq. [2-11]) yields

P_f = −∫_{H=1}^{H=0} (1 − G) dH = 1 − ∫_0^1 G dH   (2-36)

Noting that the reliability is

R = 1 − P_f   (2-37)

we have

R = ∫_0^1 G dH   (2-38)
The above integral is nothing but the area of a graph of G vs. H from H = 0 to 1 (see Fig. 2-4).
Thus the area under the curve from H = 0 to 1 is equal to the reliability and the area above the curve
(shown shaded in Fig. 2-4) is the failure probability.
The integral in Eq. (2-38) may be computed as follows. First compute G and H for different values of ℓ ranging from −∞ (a very large negative number) to +∞ (a very large positive number). Let these values be G_i and H_i, where i = 1, 2, ..., N. These are the coordinates of the curve shown in Fig. 2-4 at discrete points. Knowing the coordinates of the curve at discrete points, the area under the curve may be computed by numerical integration procedures. This area is the integral in Eq. (2-38).
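A sketch of this procedure, assuming normal stress and strength so that the computed area can be compared with the closed-form reliability 1 − Φ(−β) of Section 5 (the parameter values and grid are hypothetical):

```python
import math

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

MU_L, SIG_L = 100.0, 10.0   # hypothetical stress parameters
MU_C, SIG_C = 150.0, 8.0    # hypothetical strength parameters

# Coordinates (H_i, G_i) on a grid of l values:
#   H = 1 - F_L(l),  G = 1 - F_C(l)
n = 4001
lo, hi = MU_L - 8.0 * SIG_L, MU_C + 8.0 * SIG_C
pts = []
for i in range(n):
    l = lo + (hi - lo) * i / (n - 1)
    pts.append((1.0 - norm_cdf(l, MU_L, SIG_L),    # H_i
                1.0 - norm_cdf(l, MU_C, SIG_C)))   # G_i

# Area under the G-vs-H curve from H = 0 to 1 (trapezoidal rule) = reliability
R = 0.0
for (h1, g1), (h2, g2) in zip(pts, pts[1:]):
    R += 0.5 * (g1 + g2) * (h1 - h2)   # H decreases as l increases

beta = (MU_C - MU_L) / math.sqrt(SIG_L**2 + SIG_C**2)
R_exact = norm_cdf(beta, 0.0, 1.0)   # closed-form reliability, 1 - Phi(-beta)
print(R, R_exact)
```

With a grid this fine, the trapezoidal area reproduces the closed-form reliability to many digits.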
9. CONFIDENCE LEVELS
The stress and strength probability density functions are either obtained directly from experimental or
field measurements, or are derived from other data. (For example, the probability density function of
the stress in a beam may be derived from the data on forces acting on the beam.) Irrespective of how
the stress and strength probability density functions are obtained, they are based on a finite amount of
data. This introduces an uncertainty in the probability density functions as well as in the failure probabilities (or reliabilities) computed from the probability density functions. The larger the amount of
data, the lower the uncertainty and the higher the confidence we can place on the computed reliability.
The uncertainty associated with a computed reliability is expressed in terms of lower one-sided confidence limits. The confidence one can place on a computed reliability is stated as: The reliability is R with γ% confidence level, or the reliability at γ% confidence level is R. What does it mean? It means that there is a γ% chance (γ% probability) that the exact value of the reliability is not less than the value R. (The exact value of the reliability is unknown and can be computed only if we have infinite data points.)
The reliabilities (or failure probabilities) computed from the equations presented so far do not take
into account the effect of the number of data points. The failure probabilities and reliabilities provided
by those equations are the average values. Usually the confidence level associated with the average
reliability (or average failure probability) is approximately 50%.
If one is interested in computing the reliability at a specified confidence level, the method developed
by Kececioglu and Lamarre (1978) may be used. They have presented a set of graphs to aid in the
computation. (These graphs are reproduced here in Figs. 2-5 to 2-12.) The method and graphs are
applicable only when both the stress and strength are normally distributed (Gaussian). Confidence levels
for other types of probability distributions are not available in the literature.
Let N_L be the number of data points used to determine the mean and standard deviation of stress (which is assumed to be normally distributed). Similarly, let N_C be the number of data points used to determine the mean and standard deviation of strength (which is also assumed to be normally distributed). The mean and standard deviation of stress computed from the N_L data points are ℓ̄ and σ_L, respectively. The mean and standard deviation of strength computed from the N_C data points are c̄ and σ_C, respectively.
Figure 2-5. Curves relating K, n_e, and lower one-sided confidence limit on the reliability, R_L1, for a confidence level of 50%. (Source: Kececioglu and Lamarre [1978]. Reprinted with permission.)

Figure 2-6. Curves relating K, n_e, and lower one-sided confidence limit on the reliability, R_L1, for a confidence level of 60%. (Source: Kececioglu and Lamarre [1978]. Reprinted with permission.)
Two new parameters are defined. The first parameter, called the K-factor, is defined by

K = |c̄ − ℓ̄|/(σ_L² + σ_C²)^(1/2)   (2-39)

The second parameter, the effective sample size n_e, is computed from N_L, N_C, σ_L, and σ_C:

(2-40)
Once the parameters K and ne are determined, the reliability at any level of confidence may be
computed using Figs. 2-5 to 2-12. For example, if we are interested in the reliability at the 90%
confidence level, we use Fig. 2-9. The ne value is entered from the horizontal axis and the K value is
entered from the vertical axis, and the reliability is read from the graphs. For example, if n_e = 70 and
K = 5, then R = 0.99995 (the graph label .9₄5 means "four 9's followed by 5"; i.e., 0.99995).
The following example illustrates the computation of reliabilities at prescribed confidence levels.
Mean and standard deviation of the stress were computed from 20 data points as 100,000 and 10,000
psi, respectively. Mean and standard deviation of the strength were computed from 30 data points as
150,000 and 8000 psi, respectively. A goodness-of-fit test indicated that the probability density functions
of stress and strength may be considered Gaussian.
We are interested in computing the average reliability and the reliabilities at 50, 70, 90, and 99.9%
Figure 2-7. Curves relating K, n_e, and lower one-sided confidence limit on the reliability, R_L1, for a confidence level of 70%. (Source: Kececioglu and Lamarre [1978]. Reprinted with permission.)

Figure 2-8. Curves relating K, n_e, and lower one-sided confidence limit on the reliability, R_L1, for a confidence level of 80%. (Source: Kececioglu and Lamarre [1978]. Reprinted with permission.)
lower one-sided confidence levels. First we compute β and the average reliability, using Eqs. (2-20) and (2-19), respectively. Then we compute K and n_e, using Eqs. (2-39) and (2-40), respectively. Next we determine the reliabilities at 50, 70, 90, and 99.9% confidence levels from Figs. 2-5, 2-7, 2-9, and 2-12, respectively. The results are β = 3.90, K = 3.90, and n_e = 41.29. The average reliability is 0.9999519 and the reliabilities at 50, 70, 90, and 99.9% confidence levels are 0.99995, 0.9998, 0.9996, and 0.995, respectively. We see that the reliability at the 50% confidence level is very close to the average reliability in this numerical example.
It should be noted that the average reliability computed by methods described in Sections 2 to 8 is
used in most reliability projects. Only in critical reliability projects does an analyst compute confidence
levels.
10. APPLICATIONS
The stress-strength interference method has been used widely in structural reliability assessment and
many dozens of papers have been published over the years. Representative references covering a wide
spectrum of applications are discussed in this section.
Williams (1981) uses the stress-strength interference method in piping systems reliability analysis.
Tresca's maximum shear stress theory is used as the failure criterion. Interference between the proba-
bility density functions of the stress intensity at the piping cross-section (twice the maximum shear
stress at the cross-section) and the yield stress of the piping material is used to compute the failure
probability.
Figure 2-9. Curves relating K, n_e, and lower one-sided confidence limit on the reliability, R_L1, for a confidence level of 90%. (Source: Kececioglu and Lamarre [1978]. Reprinted with permission.)

Figure 2-10. Curves relating K, n_e, and lower one-sided confidence limit on the reliability, R_L1, for a confidence level of 95%. (Source: Kececioglu and Lamarre [1978]. Reprinted with permission.)
Witt and Zemanick (1976), Witt et al. (1978), and Witt (1980) use the method in conjunction with
fracture mechanics to compute leak probabilities in nuclear power plant piping. Results from these
studies are in good agreement with those from more complex and expensive analyses (Nuclear Regu-
latory Commission, 1981). Balkey et al. (1982) also use the method to compute fracture failure prob-
abilities of liquid metal fast breeder reactor (LMFBR) piping elbows.
Becher and Pedersen (1974) employ the stress-strength interference method to determine the fracture
failure probabilities of pressure vessels. Bloom (1984) also illustrates the use of the method in fracture
reliability analyses. He uses the stress-strength interference method to develop relationships between
fracture failure probability and factor of safety for two types of flaw-size distributions.
Fatigue reliability is the subject of Kececioglu et al. (1969). They use the stress-strength interference
method to perform fatigue reliability analysis of shafts and spindles. This paper also illustrates the fitting
of test data to stress and strength probability distributions. Kececioglu and Lamarre (1979) consider
failure probabilities of structures subjected to combined alternating bending and steady torque with
nonconstant stress ratio. Other studies on fatigue reliability include those of Lalli and Kececioglu
(1970), Kececioglu et al. (1970, 1975), and Kececioglu (1972, 1977b). Hooke (1974) also uses the
stress-strength interference method for the fatigue reliability analysis of aircraft structures. Wear reli-
ability of aircraft splines and inspection optimization are the subjects of Kececioglu and Koharcheck
(1977). Use of test data in stress-strength interference analysis is also discussed in this context.
Stress-strength interference is used by Smith (1978) in the probabilistic shrink-fit analysis of com-
posite cylinders. Smith (1979) also uses the stress-strength interference method to develop probability-
based design methods for ellipsoidal and toroidal pressure vessels.
Christian et al. (1986), Saff et al. (1987), and Smith et al. (1990) use the stress-strength interference
Figure 2-11. Curves relating K, n_e, and lower one-sided confidence limit on the reliability, R_L1, for a confidence level of 99%. (Source: Kececioglu and Lamarre [1978]. Reprinted with permission.)

Figure 2-12. Curves relating K, n_e, and lower one-sided confidence limit on the reliability, R_L1, for a confidence level of 99.9%. (Source: Kececioglu and Lamarre [1978]. Reprinted with permission.)
method for the reliability analysis of aircraft structures and the development of probability-based aircraft
maintenance strategies. More applications in the aircraft industry are discussed in Chapter 23 of this
book. Herrmann et al. (1970) use the stress-strength interference method for computing the failure
probability of solid rocket motors. Chou and Fischer (1978) use the method for a probabilistic lique-
faction analysis of nuclear power plant sites.
Bratt et al. (1969) present a time-dependent stress-strength interference approach. Time dependence
arises in structural reliability problems because of structural degradation due to aging, cyclic damage,
or cumulative damage. Shaw et al. (1973) and Schatz et al. (1974) perform nuclear rocket fuel element
and tube reliability analyses, using a time-dependent stress-strength interference approach.
Readers might have noticed that many of these applications papers were published in the 1970s.
Although the method is used in many industries even today, few of these applications are published
now because the stress-strength interference method has become a well-established and mature method
of reliability analysis.
With the increasing popularity of first-order reliability methods (FORMs), use of the stress-strength interference method will surely decline. But the recent development of the generalized conditional expectation method of simulation opens a way for combining the stress-strength interference method with simulation (see Chapter 4). The many analytical expressions for failure probability listed in Table 2-1 could be used in conjunction with simulation (Sundararajan and Gopal, 1992). A judicious merging of the analytical expressions with simulation could drastically reduce the computational effort and cost of simulation-based reliability assessment.
REFERENCES
BALKEY, K. R., I. T. WALLACE, and J. K. VAURIO (1982). Probabilistic assessment of critically flawed LMFBR
PHTS piping elbows. In: Reliability and Safety of Pressure Components. C. Sundararajan, Ed. New York:
American Society of Mechanical Engineers, pp. 35-51.
BECHER, P. E., and A. PEDERSEN (1974). Application of linear elastic fracture mechanics to pressure vessel relia-
bility analysis. Nuclear Engineering and Design 27:413-425.
BENJAMIN, J. R., and C. A. CORNELL (1970). Probability, Statistics, and Decision for Civil Engineers. New York:
McGraw-Hill.
BLOOM, J. M. (1984). Probabilistic fracture mechanics-a state-of-the-art review. In: Advances in Probabilistic
Fracture Mechanics. C. Sundararajan, Ed. New York: American Society of Mechanical Engineers, pp. 1-19.
BRATT, M. J., G. REETHOF, and G. W. WIEBER (1969). A model for time varying and interfering stress/strength probability density distributions with consideration for failure incidence and property degradation. In: Proceedings of the Third Annual Aerospace Reliability and Maintainability Conference, pp. 566-575. New York: Institute of Electrical and Electronics Engineers.
CHOU, I.-H., and J. A. FISCHER (1978). Liquefaction and probability. In: Probabilistic Analysis and Design of Nuclear Power Plant Structures. C. Sundararajan, Ed. New York: American Society of Mechanical Engineers, pp. 39-50.
CHRISTIAN, T. F., H. G. SMITH, and C. R. SAFF (1986). Structural risk assessment using damage tolerance analysis and flight usage data. Paper presented at the Winter Annual Meeting, American Society of Mechanical Engineers, New York.
HAUGEN, E. B. (1980). Probabilistic Mechanical Design. New York: John Wiley & Sons.
Stress-Strength Interference Method 2S
HERRMANN, C. R., G. E. INGRAM, and E. L. WALKER (1970). NASA Contractor Report NASA-CR-1503. Wash-
ington, D.C.: National Aeronautics and Space Administration.
HOOKE, F. H. (1974). Probabilistic Design and Structural Fatigue. Structures and Materials Report No. 352. Melbourne, Australia: Aeronautical Research Laboratories.
KAPUR, K. C., and L. R. LAMBERSON (1977). Reliability in Engineering Design. New York: John Wiley & Sons.
KECECIOGLU, D. (1972). Reliability analysis of mechanical components and systems. Nuclear Engineering and
Design 19:259-290.
KECECIOGLU, D. (1977a). Probabilistic design methods for reliability and their data and research requirements. In:
Failure Prevention and Reliability. S. B. Bennett, A. L. Ross, and P. Z. Zemanick, Eds. New York: American
Society of Mechanical Engineers, pp. 285-309.
KECECIOGLU, D. (1977b). Fatigue reliability of structural members under combined axial mean and alternating
stresses for AISI 1018, 1038, 4130 and 4340 steels. In: Transactions of the Fourth International Conference
on Structural Mechanics in Reactor Technology. Paper No. L-4/6.
KECECIOGLU, D., L. CHESTER, and C. F. NOLF (1975). Combined bending-torsion fatigue reliability. In: Proceedings
of the 1975 Annual Reliability and Maintainability Symposium, pp. 511-518. New York: Institute of Elec-
trical and Electronics Engineers.
KECECIOGLU, D., and L. DINGJUN (1984). Exact solutions for the prediction of the reliability of mechanical components and structural members (unpublished manuscript).
KECECIOGLU, D., and A. KOHARCHECK (1977). Wear reliability of aircraft splines. In: Proceedings of the Annual
Reliability and Maintainability Conference, pp. 155-163. New York: Institute of Electrical and Electronics
Engineers.
KECECIOGLU, D., and G. LAMARRE (1978). Designing mechanical components to a specified reliability and con-
fidence level. Nuclear Engineering and Design 50:149-162.
KECECIOGLU, D., and G. LAMARRE (1979). Reliability of mechanical components subjected to combined alternating
and mean stresses with nonconstant stress ratio. In: Proceedings of the Fifth International Structural Me-
chanics in Reactor Technology Conference, Paper No. M-8/8. Amsterdam, Netherlands: North Holland Pub-
lishing Company.
KECECIOGLU, K., and D. LI (1984). Aspects of unreliability and reliability determination by the stress-strength
interference approach. In: Probabilistic Structural Analysis. C. Sundararajan, Ed. New York: American So-
ciety of Mechanical Engineers, pp. 75-100.
KECECIOGLU, D., R. E. SMITII, and E. A. FELSTED (1969). Distribution of cycles-to-failure in simple fatigue and
the associated reliabilities. In: Proceedings of the Annual Reliability and Maintainability Conference, pp.
357-374. New York: Institute of Electrical and Electronics Engineers.
KECECIOGLU, D., R. E. SMITII, and E. A. FELSTED (1970). Distribution of strength in simple fatigue and the
associated reliabilities. In: Proceedings of the Annual Reliability and Maintainability Conference, pp. 659-
672. New York: Institute of Electrical and Electronics Engineers.
LALLI, V. R., and D. KECECIOGLU (1970). An approach to reliability determination of a rotating component sub-
jected to complex fatigue. In: Proceedings of the Annual Reliability and Maintainability Conference, pp.
534-548. New York: Institute of Electrical and Electronics Engineers.
UPSON, C., N. 1. SHETII, and R. L. DISNEY (1967). Reliability Prediction-Mechanical Stress/Strength Interference.
Technical Report No. RADC-TR-66-710. New York: Rome Air Development Center (U.S. Air Force).
UPSON, C., N. 1. SHETII, R. L. DISNEY, and M. ALTUM (1969). Reliability Prediction-Mechanical Stress/Strength
Interference. Technical Report No. RADC-TR68-403. New York: Rome Air Development Center (U.S. Air
Force).
NUCLEAR REGUlATORY CoMMISSION (1981). Probability of Pipe Fracture in the Primary Coolant Loop of a PWR
Plant. NUREG/CR-2189, Vols. 1-9. Washington, D.C.: Nuclear Regulatory Commission.
SAFF, C. R., H. G. SMITII, and T. F. CHRISTIAN (1987). Application of damage tolerance analysis to structural risk
assessment. Paper presented at the 28th Structures, Structural Dynamics and Materials Conference, American
Institute of Aeronautics and Astronautics, New York.
26 Stress-Strength Interference Method
SCHATZ, R., M. SHOOMAN, and L. SHAW (1974). Application of time dependent stress-strength models of non-
electrical and electrical systems. In: Proceedings of the Annual Reliability and Maintainability Symposium,
pp. 540-547. New York: Institute of Electrical and Electronics Engineers.
SHAW, L., M. SHOOMAN, and R. SCHATZ (1973). Time-dependent stress-strength models for non-electrical and
electrical systems. In: Proceedings of the Annual Reliability and Maintainability Symposium, pp. 186-197.
New York: Institute of Electrical and Electronics Engineers.
SIDDALL, 1. N., and Y. DAIB (1974). The use in probabilistic design of probability curves generated by maximizing
the Shannon entropy function constra.ined by moments. Journal of Engineering for Industry.
SMITH, C. O. (1978). Shrink fit stresses in probabilistic form. In: Probabilistic Analysis and Design of Nuclear
Power Plant Structures. C. Sundararajan, Ed. New York: American Society of Mechanical Engineers, pp.
51-63.
SMITH, C. O. (1979). Design of Ellipsoidal and Toroidal Pressure Vessels to Probabilistic Criteria. Paper No. 79-
DET-llO. New York: American Society of Mechanical Engineers.
SMITH, H. G., C. R. SAFF, and T. F. CHRISTIAN (1990). Structural risk assessment and aircraft fleet maintenance.
Paper presented at the 31st Structures, Structural Dynamics, and Materials Conference, American Institute
of Aeronautics and Astronautics, New York.
STANCAMPIANO, P. A. (1977). Monte Carlo approaches to stress-strength interference. In: Failure Prevention and
Reliability. S. B. Bennett, A. L. Ross, .and P. Z. Zemanick, Eds. New York: American Society of Mechanical
Engineers, pp. 197-212.
SUNDARARAJAN, C. (1985). Probabilistic structural analysis by Monte Carlo simulation. In: Decade of Progress in
Pressure Vessel and Piping Technology. C. Sundararajan, Ed. New York: American Society of Mechanical
Engineers, pp. 743-759.
SUNDARARAJAN, C. (1986). Probabilistic ass.!ssment of pressure vessel and piping reliability. Journal of Pressure
Vessel Technology 108:1-13.
SUNDARARAJAN, C., and K. R. GOPAL (1992). Fracture reliability of piping. Humble, Texas: EDA Consultants.
WILLIAMS, B. E. (1981). The probability of failure for piping systems. In: Failure Prevention and Reliability.
F. T. C. Llo, Ed. New York: American Society of Mechanical Engineers, pp. 147-150.
WIlT, F. J. (1980). Practical applications of probabilistic structural reliability analyses to primary pressure systems
of nuclear power plants. In: Proceedings of the Fourth International Conference on Pressure Vessel Tech-
nology, Vol. 1. London: Institute of Mechanical Engineers, pp. 63-70.
WIlT, F. J., and P. P. ZEMANICK (1976). Structural Reliability Assessment of Nuclear Piping under Worst Case
Loading. ASME Paper No. 76-PVP-21. New York: American Society of Mechanical Engineers.
WIlT, F. J., W. H. BAMFORD, and T. E. ESSELMAN (1978). Integrity of the Primary Piping Systems of Westinghouse
Nuclear Power Plants during Postulated Seismic Events. Report No. WCAP-9283. Pittsburgh: Westinghouse
Electric Corporation.
3
FIRST-ORDER AND SECOND-ORDER RELIABILITY METHODS
1. INTRODUCTION
The need to incorporate uncertainties in an engineering design has long been recognized. The absolute
safety of a structure cannot be guaranteed, because of the unpredictability of future loading conditions;
the inability to obtain and express the in-place material properties accurately; the use of simplified
assumptions in predicting the behavior of the structure due to the loading under consideration; the
limitations in the numerical methods used; and human factors (e.g., errors and omissions). However,
the probability of structural failure can be limited to a reasonable level. The estimation of structural
failure probability is an important task for an engineer.
The area of structural reliability has grown at a tremendous rate in the last decade. Many methods
have been proposed, considering the type of problem, the parameters involved, and the uncertainty
associated with these parameters. Uncertainties are modeled in terms of the mean (the central tendency),
the variance (the dispersion about the mean), and the probability density and distribution functions.
Various reliability estimation techniques use part or all of this information in different ways. These
variations give a particular method its own specific advantages and limitations.
Structural reliability can be classified into two types: element reliability and system reliability. The
term element reliability (component reliability) refers to the probability of survival of an individual
element of a structure corresponding to a performance criterion. The term system reliability refers to
the probability of survival of the structural system as a whole. This chapter emphasizes the available
methods for the estimation of element-level reliability, particularly the first-order reliability method
(FORM) and the second-order reliability method (SORM).
Structural design, in general, consists of proportioning the elements of a structure such that it satisfies
various criteria of safety, serviceability, and durability under the action of loads. In other words, the
structure should be designed such that it has a higher strength or resistance than the effects caused by
the loads. However, as stated earlier, there are numerous sources of uncertainty in the load and resis-
tance-related parameters. A simple case considering two variables (one relating to the load, S, on the
structure and the other to the resistance, R, of the structure) is shown in Fig. 3-1. Both S and R are
random in nature; their randomness is characterized by the corresponding probability density functions
fS(s) and fR(r), respectively. Figure 3-1 also identifies the deterministic (nominal) values of these
parameters, SN and RN, used in a conventional safety factor-based approach.
In a deterministic approach, the design safety is assured by requiring that RN be greater than SN with
a specified margin of safety. The allowable stress design methods use a safety factor to compute the
allowable stresses in members from the ultimate stress, and successful design ensures that the stresses
caused by the nominal values of the loads do not exceed the allowable stresses. In other words, referring
to Fig. 3-1, RN is divided by a safety factor to compute the allowable resistance Ra, and safe design
requires that the condition SN < Ra be satisfied. In the ultimate strength design method, the loads are
multiplied by certain load factors to determine the ultimate loads, and the members are required to resist
various design combinations of the ultimate loads. That is, in Fig. 3-1, SN is multiplied by a load factor
to obtain the ultimate load Su, and safe design requires the satisfaction of the condition Su < RN.
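The two conventional checks just described amount to a pair of one-line comparisons. In this sketch the nominal values and the safety and load factors are illustrative assumptions, and the allowable resistance is taken as the nominal resistance divided by the safety factor:

```python
# Nominal values and factors are illustrative assumptions, with the
# allowable resistance taken as R_N divided by the safety factor.
R_N, S_N = 50.0, 30.0   # nominal resistance and load effect
SF = 1.5                # safety factor (allowable stress design)
LF = 1.6                # load factor (ultimate strength design)

R_a = R_N / SF          # allowable resistance
S_u = S_N * LF          # factored (ultimate) load

print("ASD check, S_N < R_a:", S_N < R_a)   # True
print("USD check, S_u < R_N:", S_u < R_N)   # True
```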
[Figure 3-1. Probability density functions fS(s) and fR(r) of the load S and the resistance R; their region of overlap is shaded.]

The intent of these conventional approaches can be understood by considering the area of overlap
between the two curves (the shaded region), which provides a qualitative measure of the probability of
failure. This area of overlap depends on three factors.
1. The relative positions of the two curves: As the distance between the two curves increases, the probability
of failure decreases. The positions of the curves may be represented by the means (μ_R and μ_S) of the two
variables.
2. The dispersion of the two curves: If the two curves are narrow, then the area of overlap and the probability
of failure are small. The dispersion may be characterized by the standard deviations (σ_R and σ_S) of the two
variables.
3. The shapes of the two curves: The shapes are represented by the probability density functions fR(r) and
fs(s).
To achieve a safe design, the design variables must be chosen so that the area of overlap between the
two curves is as small as possible, within the constraints of economy. Both the conventional design
approaches discussed earlier achieve this objective by shifting the positions of the curves through the
use of safety factors. A more rational approach is to compute the actual probability of failure by taking
all three factors into account, and to choose the design variables so that an acceptable probability of
failure is achieved. In the implementation of this approach, however, the information about the proba-
bility density functions is usually difficult to obtain, and engineers are faced with the task of formulating
an acceptable design methodology, using only the information on the means and standard deviations.
The basic concept of the classical theory of structural reliability and risk-based design can now be
presented more formally. The first step toward evaluating the reliability of a structure is to decide on
the relevant load and resistance parameters, called the basic variables X_i, and the functional relationship
among them. Mathematically, this relationship can be described as

Z = g(X_1, X_2, ..., X_n)    (3-1)
The failure surface or performance function of the limit state of interest can then be defined as Z = O.
This is the boundary between the safe and unsafe regions in the design parameter space, and it also
represents a state beyond which a structure can no longer fulfill the function for which it was designed.
The limit state function (performance function) plays an important role in the development of structural
reliability analysis methods. A limit state function can be an explicit or implicit function of basic random
variables, and it can be in a simple or complicated form. Reliability analysis methods were developed
corresponding to limit state functions of different types and complexity, as is discussed in the following
sections (Ang and Tang, 1975, 1984; Madsen et al., 1986; Melchers, 1987; Thoft-Christensen and Baker,
1982).
Using Eq. (3-1), failure occurs when Z < 0. Therefore, the probability of failure, P_f, is given by the
integral

P_f = ∫ ... ∫_{g(·)<0} f_X(x_1, x_2, ..., x_n) dx_1 dx_2 ... dx_n    (3-2)

in which f_X(x_1, x_2, ..., x_n) is the joint probability density function of X_1, X_2, ..., X_n and the integration
is performed over the region in which g(·) < 0. If the random variables are statistically independent,
then the joint density function may be replaced by the product of the individual density functions in
the integral.
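When the densities are known, Eq. (3-2) can also be estimated by direct Monte Carlo sampling of the basic variables. A minimal sketch for an assumed limit state g = R − S with independent normal variables (all statistics are illustrative assumptions):

```python
import random
from statistics import NormalDist

# Assumed illustrative statistics for resistance R and load effect S.
mu_R, sd_R = 100.0, 10.0
mu_S, sd_S = 60.0, 15.0

random.seed(0)                       # reproducible sampling
N = 200_000
failures = sum(
    1 for _ in range(N)
    if random.gauss(mu_R, sd_R) - random.gauss(mu_S, sd_S) < 0.0
)
pf_mc = failures / N                 # Monte Carlo estimate of Eq. (3-2)

# For this linear normal case the exact answer is available for comparison.
beta = (mu_R - mu_S) / (sd_R**2 + sd_S**2) ** 0.5
pf_exact = NormalDist().cdf(-beta)
print(pf_mc, pf_exact)               # both near 0.013
```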
The computation of P_f by Eq. (3-2) is called the full distributional approach and can be considered
to be the fundamental equation of structural reliability analysis. In general, the joint probability density
function of random variables is practically impossible to obtain. Even if this function is available, the
evaluation of the multiple integral is extremely complicated. Therefore, one possible approach is to use
analytical approximations of this integral that are simpler to compute. For clarity of presentation, all
these methods can be grouped into two types, namely, first- and second-order reliability methods (FORM
and SORM).
The limit state functions of interest can be linear or nonlinear functions of the basic variables. FORM
can be used to evaluate Eq. (3-2) when the limit state function is a linear function of uncorrelated
normal variables or when the nonlinear limit state function is represented by the first-order (linear)
approximation, that is, by a tangent at the design point, as defined in Section 4.2. The SORM estimates
the probability of failure by approximating the nonlinear limit state function, including a linear limit
state function with correlated nonnormal variables, by a second-order representation. They are discussed
below.
4. SECOND-MOMENT METHODS
The development of the FORM can be traced historically to second-moment methods, which used the
information on first and second moments of the random variables. These are first-order second-moment
(FOSM) and advanced first-order second-moment (AFOSM) methods.
Consider the limit state function

Z = R − S    (3-3)
Assuming that R and S are statistically independent normally distributed random variables, the variable
Z is also normally distributed. Its mean and variance can be determined readily as
μ_Z = μ_R − μ_S,  σ_Z² = σ_R² + σ_S²    (3-4)

The probability of failure is then

P_f = P(Z < 0) = Φ(−μ_Z/σ_Z)    (3-5)

where Φ is the cumulative distribution function for a standard normal variable. This formula can be
rewritten as

P_f = 1 − Φ[(μ_R − μ_S)/√(σ_R² + σ_S²)]    (3-6)
(Ang, 1973). Thus the probability of failure depends on the ratio of the mean value of Z to its standard
deviation. Cornell (1969) named this ratio the safety index (reliability index) and denoted it as β:

β = μ_Z/σ_Z    (3-7)
In terms of β, the probability of failure is

P_f = Φ(−β)    (3-8)
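Equations (3-4), (3-7), and (3-8) translate directly into code for the linear normal case; the input statistics below are illustrative assumptions:

```python
from statistics import NormalDist

def fosm_normal(mu_R, sd_R, mu_S, sd_S):
    """Safety index and failure probability for Z = R - S with
    statistically independent normal R and S (Eqs. 3-4, 3-7, 3-8)."""
    mu_Z = mu_R - mu_S                     # Eq. (3-4): mean of Z
    sd_Z = (sd_R**2 + sd_S**2) ** 0.5      # Eq. (3-4): std. dev. of Z
    beta = mu_Z / sd_Z                     # Eq. (3-7): safety index
    return beta, NormalDist().cdf(-beta)   # Eq. (3-8): failure probability

beta, pf = fosm_normal(mu_R=120.0, sd_R=18.0, mu_S=80.0, sd_S=12.0)
print(round(beta, 3), round(pf, 4))        # beta about 1.849
```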
An alternative formulation proposed by Rosenblueth and Esteva (1972) may also be used, assuming
that the variables R and S are statistically independent lognormal random variables. For physical reasons,
these variables are restricted to positive values; hence it is more reasonable to assume that they are
lognormally distributed. The limit state function in this case can be defined as
Z = In(R/S) (3-9)
Z is again a normal random variable, and the probability of failure may be computed using Eq. (3-5).
It can be shown that
P_f = 1 − Φ{ln[(μ_R/μ_S)√((1 + δ_S²)/(1 + δ_R²))] / √(ln[(1 + δ_R²)(1 + δ_S²)])}    (3-10)

(Ang, 1973), where δ_R and δ_S are the coefficients of variation of R and S, respectively.
The above formulations may be generalized to many random variables, denoted by a vector X.
Let the performance function be written as

Z = g(X) = g(X_1, X_2, ..., X_n)    (3-11)

A Taylor series expansion of the performance function about the mean values gives

Z = g(X̄) + Σ_{i=1}^n (∂g/∂X_i)(X_i − X̄_i) + (1/2) Σ_{i=1}^n Σ_{j=1}^n (∂²g/∂X_i∂X_j)(X_i − X̄_i)(X_j − X̄_j) + ···    (3-12)
where the derivatives are evaluated at the mean values X̄ of the random variables, and X̄_i is the mean
value of X_i. Truncating the series at the linear terms, the first-order approximate mean and variance of
Z are obtained as

Z̄ ≈ g(X̄_1, X̄_2, ..., X̄_n)    (3-13)
and
σ_Z² ≈ Σ_{i=1}^n Σ_{j=1}^n (∂g/∂X_i)(∂g/∂X_j) Cov(X_i, X_j)    (3-14)

(Ang and Tang, 1975), where Cov(X_i, X_j) is the covariance of X_i and X_j.
If the variables are statistically independent, then the variance is simply
σ_Z² ≈ Σ_{i=1}^n (∂g/∂X_i)² Var(X_i)    (3-15)
The safety index is again defined as in Eq. (3-7). Because the limit state is linearized at the mean values
of the random variables, the method is also known as the mean value first-order second-moment
(MVFOSM) method.
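The MVFOSM estimate can be sketched for a general limit state by linearizing at the means with finite differences; the nonlinear limit state and statistics below are illustrative assumptions, not an example from the chapter:

```python
from statistics import NormalDist

def mvfosm(g, means, sds, h=1e-5):
    """Mean value first-order second-moment (MVFOSM) estimate for
    statistically independent variables: linearize g at the means
    (Eqs. 3-13 and 3-15) and form beta = mean/std (Eq. 3-7)."""
    mu_Z = g(means)                           # Eq. (3-13): first-order mean
    var_Z = 0.0
    for i, (m, s) in enumerate(zip(means, sds)):
        xp, xm = list(means), list(means)
        xp[i], xm[i] = m + h, m - h
        dg = (g(xp) - g(xm)) / (2.0 * h)      # central-difference dg/dX_i
        var_Z += (dg * s) ** 2                # Eq. (3-15)
    beta = mu_Z / var_Z ** 0.5
    return beta, NormalDist().cdf(-beta)

# Illustrative (assumed) nonlinear limit state: g = X1*X2 - X3.
beta, pf = mvfosm(lambda x: x[0] * x[1] - x[2],
                  means=[40.0, 50.0, 1000.0], sds=[5.0, 2.5, 200.0])
print(round(beta, 3))   # about 2.981
```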
The second-order mean (considering the square term in the Taylor series) can be used to improve
the accuracy of the estimation of the mean and is shown (Ang and Tang, 1975; Benjamin and Cornell,
1970) to be
Z̄ ≈ g(X̄_1, X̄_2, ..., X̄_n) + (1/2) Σ_{i=1}^n Σ_{j=1}^n (∂²g/∂X_i∂X_j) Cov(X_i, X_j)    (3-16)
Again, the partial derivatives are evaluated at the mean values of all parameters. The estimation of the
second-order variance is much more involved, and cannot be estimated using only the information on
mean and variance. Higher order moments are necessary, that is, the third and fourth central moments
of the variable. For most practical cases, these higher moments are unavailable. The use of the second-
order mean and the first-order variance are adequate for most engineering applications (Haldar, 1981).
Using the safety index β, the exact probability of failure can be found only in a few cases. For
example, if all the X_i's are statistically independent normal variables and if Z is a linear combination
of the X_i values, then Z is normal and the probability of failure is given by Eq. (3-5). Similarly, if all
the X_i's are statistically independent lognormal variables and if g(X) is a multiplicative function of the
X_i's, then Z = ln g(X) is normal and the probability of failure is again given by Eq. (3-5). However, in
most cases, it is not likely that all the variables are statistically independent normals or lognormals.
Nor is it likely that the performance function is a simple additive or multiplicative function of these
variables. Hence, in such cases the safety index cannot be directly related to the probability of failure.
Nevertheless, it does provide a rough idea of the level of risk or reliability in the design. Lind (1973)
showed that Cornell's approach could be used to derive a set of safety factors on loads and resistances,
thereby establishing the consideration of design uncertainty on a logically sounder basis. Thus the FOSM
method was used to derive new reliability-based design formats (Ravindra et al., 1974) such as AISC
(1986), CSA (1974), and CEB (1976), to cite just a few examples.
However, the first-order second-moment approach has other, more serious problems. The method
does not use the distribution information about the variables, even if they are available. More impor-
tantly, Cornell's safety index fails to be constant under different but "mechanically equivalent" for-
mulations of the same performance function. For example, the safety margins defined in Eqs. (3-3) and
(3-9) are mechanically equivalent. Yet the probabilities of failure given by Eqs. (3-6) and (3-10) are
different for the two formulations. This problem of the lack of invariance was observed by Ditlevsen
(1973) and Lind (1973). It was overcome by Hasofer and Lind (1974), as discussed below.
Hasofer and Lind (1974) defined a set of reduced variables

X′_i = (X_i − X̄_i)/σ_{X_i},  i = 1, 2, ..., n    (3-17)
where X′_i is a random variable with zero mean and unit standard deviation. Equation (3-17) is used to
transform the original limit state, g(X) = 0, to the reduced limit state, g(X′) = 0. It is important to note
that the X coordinate system is referred to as the original coordinate system. The X' coordinate system
is referred to as the transformed or reduced coordinate system. The Y coordinate system is referred to
as the reduced, uncorrelated, standard normal coordinate system. These notations are used throughout
this chapter to denote different coordinate systems. A safety index β_HL is defined as the minimum
distance from the origin of the axes in the reduced coordinate system to the limit state surface (failure
surface); it can be expressed as

β_HL = (x′*ᵀ x′*)^{1/2}    (3-18)
The minimum distance point on the limit state surface is called the design point or checking point. It
is denoted by x* in the original space or x′* in the reduced variable space.
This method can be explained easily with the help of Fig. 3-2. Consider the linear limit state equation
in two variables,
Z = R − S = 0    (3-19)
which is similar to Eq. (3-3). Note that R and S need not be normal variables. A set of reduced variables
is introduced as

R′ = (R − μ_R)/σ_R    (3-20)

and

S′ = (S − μ_S)/σ_S    (3-21)

(Ang and Tang, 1984). Substituting these in the limit state equation, we obtain

Z = σ_R R′ − σ_S S′ + μ_R − μ_S = 0    (3-22)
In the space of the reduced variables, this new form of the limit state equation may be represented as
shown in Fig. 3-2. The safe and failure regions are also shown. From Fig. 3-2 it is apparent that if the
failure line (limit state line) is closer to the origin, the failure region is larger, and if it is farther away
from the origin, the failure region is smaller. Thus the position of the limit state surface relative to the
origin is a measure of the reliability of the system. The distance of the limit state line (Eq. [3-22]) from
[Figure 3-2. Linear limit state line and the safe and failure regions in the space of the reduced variables R′ and S′.]
the origin is
β_HL = (μ_R − μ_S)/√(σ_R² + σ_S²)    (3-23)
Note that this is the same as the safety index defined by Cornell (1969) in Eq. (3-7) for normal variables
R and S.
In general, for many random variables represented by the vector X = (X_1, X_2, ..., X_n), the limit
state g(X) = 0 is a nonlinear function as shown for two variables in Fig. 3-3. Here, g(X) > 0 denotes
the safe state and g(X) < 0 denotes the failure state. Again, the Hasofer-Lind reliability index β_HL is
defined as the minimum distance from the origin to the limit state and can be expressed by Eq. (3-18),
where x'* is the point of minimum distance from the origin on the limit state. Note that in this definition
the reliability index is invariant, because regardless of the form in which the limit state equation is
written, its geometric shape and the distance from the origin remain constant. For the limit state surface
where the failure region is away from the origin, as shown in Fig. 3-3, it is easy to see that x'* is the
most probable failure point. As shown in Section 5, the Hasofer-Lind reliability index can be used to
calculate a first-order approximation to the failure probability as P_f = Φ(−β_HL). This is the integral of
the standard normal density function along the ray joining the origin and x'*. It is obvious that the
nearer x'* is to the origin, the larger is the failure probability. Thus the minimum distance point on the
limit state surface is also the most probable failure point. The point of minimum distance from the
origin on the limit state surface, x' *, represents the worst combination of the stochastic variables and
is appropriately named the design point.
For nonlinear limit states, the computation of the minimum distance becomes an optimization
problem:
Minimize D = (x′ᵀx′)^{1/2}    (3-24)

Subject to the constraint g(X) = 0
Using the method of Lagrange multipliers (Shinozuka, 1983), the minimum distance is obtained as
β_HL = −[Σ_{i=1}^n x′_i*(∂g/∂X′_i)*] / [Σ_{i=1}^n (∂g/∂X′_i)*²]^{1/2}    (3-25)
[Figure 3-3. Nonlinear limit state g(X) = 0 in the reduced variable space (X′_1, X′_2), with safe region g(X) > 0 and failure region g(X) < 0.]
where the derivatives (∂g/∂X′_i)* are evaluated at the design point (x′_1*, x′_2*, ..., x′_n*); the asterisk
after the derivative indicates that it is evaluated at that point. The design point is given by

x′_i* = −α_i β_HL,  i = 1, 2, ..., n    (3-26)

where

α_i = (∂g/∂X′_i)* / [Σ_{j=1}^n (∂g/∂X′_j)*²]^{1/2}    (3-27)

are the direction cosines along the coordinate axes X′_i. In the space of the original variables, the design
point is

x_i* = X̄_i − α_i σ_{X_i} β_HL    (3-28)
This algorithm is shown geometrically in Fig. 3-4. The algorithm constructs a linear approximation to
the limit state at every search point and finds the distance from the origin to the linearized limit state.
This is a first-order approach, similar to the FOSM method, with the important difference that the limit
state is finally linearized at the most probable failure point rather than at the mean values of the random
variables.
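The design-point search described above can be sketched, for statistically independent normal variables, with the widely used HL-RF recurrence derived from Eqs. (3-25) through (3-28); the limit state and statistics below are illustrative assumptions:

```python
def hasofer_lind(g, grad_g, means, sds, tol=1e-8, max_iter=100):
    """Hasofer-Lind safety index for statistically independent normal
    variables: work in the reduced space of Eq. (3-17) and update the
    design point with the HL-RF recurrence built from Eqs. (3-25)-(3-28)."""
    n = len(means)
    x_red = [0.0] * n          # start the search at the mean point
    beta = 0.0
    for _ in range(max_iter):
        x = [m + s * v for m, s, v in zip(means, sds, x_red)]
        gval = g(x)
        # chain rule: dg/dX'_i = (dg/dX_i) * sigma_i
        grad = [d * s for d, s in zip(grad_g(x), sds)]
        norm2 = sum(d * d for d in grad)
        # HL-RF update: x' <- [(grad . x' - g)/|grad|^2] * grad
        scale = (sum(d * v for d, v in zip(grad, x_red)) - gval) / norm2
        x_new = [scale * d for d in grad]
        beta_new = sum(v * v for v in x_new) ** 0.5
        converged = abs(beta_new - beta) < tol
        x_red, beta = x_new, beta_new
        if converged:
            break
    design_point = [m + s * v for m, s, v in zip(means, sds, x_red)]
    return beta, design_point

# Linear check: g = R - S recovers Eq. (3-23) exactly (values assumed).
beta, x_star = hasofer_lind(lambda x: x[0] - x[1],
                            lambda x: [1.0, -1.0],
                            means=[150.0, 100.0], sds=[15.0, 20.0])
print(round(beta, 6))   # (150-100)/sqrt(15**2+20**2) = 2.0
```

For a nonlinear g, only the `g` and `grad_g` callables change; the recurrence itself is unchanged.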
Haldar and Ayyub (1984) proposed an improvement to this procedure. In their method, steps 2 and
3 in the above algorithm are repeated at each checking point x_i* until the value of α_i stabilizes; then
steps 4 through 6 are performed to obtain the new checking point.
Ditlevsen (1979a) showed that for a nonlinear limit state surface, β_HL lacks comparability: the or-
dering of β_HL values may not be consistent with the ordering of actual reliabilities. An example of this
is shown in Fig. 3-5 with two limit state surfaces: one flat and the other curved. The shaded region to
the right of each limit state represents the corresponding failure region. Clearly, the structure with the
flat limit state surface has higher reliability than the one with the curved limit state surface; however,
the β_HL values are identical for both surfaces and suggest equal reliability. To overcome this inconsis-
tency, Ditlevsen (1979a) introduced the generalized reliability index, β_g, defined as
β_g = Φ⁻¹[∫_{g(x′)>0} φ(x′) dx′]    (3-29)
where Φ and φ are the cumulative distribution function and the probability density function of a standard
normal variable, respectively. Because the reliability index in this definition includes the entire safe
region, it provides a consistent ordering of second-moment reliability. The integral in the above equation
appears similar to that in Eq. (3-2), and is difficult to compute directly. Hence, Ditlevsen (1979b)
proposed the approximation of the nonlinear limit state by a polyhedral surface consisting of tangent
hyperplanes at selected points on the surface (e.g., the locally minimum distance points). Veneziano
(1979) proposed an alternative reliability index to overcome the ordering problem of the Hasofer-Lind
reliability index, in terms of the upper Tchebycheff bound of the failure probability. Although this index
can incorporate information other than the first and second moments, its application in practice appears
to be difficult.
The Hasofer-Lind definition of the reliability index as the minimum distance of the limit state surface
from the origin may be extended to estimate the probability of failure. The information on the distri-
butions of the random variables may also be included in this computation. The probability of failure
has been estimated using two types of approximations to the limit state at the design point: (1) first
order (leading to the name FORM), and (2) second order (leading to the name SORM).
The Hasofer-Lind reliability index can be exactly related to the failure probability by using Eq. (3-8)
if all the variables are normally distributed and the limit state surface is linear. For the nonlinear limit
state surface, the FORM uses a linear approximation to the limit state at the design point, and estimates
the probability of failure as P_f = Φ(−β_HL), as illustrated in Fig. 3-5. If all the variables are not normally
distributed, as is common in structural problems, then it is difficult to relate β_HL to the exact probability
of failure. Rackwitz and Fiessler (1976) suggested that this problem could be solved by transforming
the nonnormal variables into equivalent normal variables.
Φ[(x_i* − x̄_i^N)/σ_i^N] = F_i(x_i*)    (3-30)

φ[(x_i* − x̄_i^N)/σ_i^N]/σ_i^N = f_i(x_i*)    (3-31)

in which F_i and f_i are the nonnormal cumulative distribution and density functions of X_i; and Φ and φ
are the cumulative distribution and density function of the standard normal variate, respectively. Having
determined x̄_i^N and σ_i^N and proceeding similarly to the case in which all random variables are normal,
the system of equations (Eqs. [3-27] and [3-28]) can be solved to obtain the value of β_HL. Then Eq.
(3-8) can be used to calculate the failure probability. This approach became well known as the Rack-
witz-Fiessler algorithm and has been used extensively in the literature.
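The equivalent normal transformation can be sketched for an assumed lognormal variable: matching the nonnormal CDF and density at the checking point gives the equivalent normal parameters. All numbers below are illustrative assumptions:

```python
import math
from statistics import NormalDist

def equivalent_normal(x_star, F, f):
    """Equivalent normal mean and standard deviation at the checking
    point x_star by matching the nonnormal CDF F and density f there
    (the Rackwitz-Fiessler conditions)."""
    nd = NormalDist()
    z = nd.inv_cdf(F(x_star))        # Phi^{-1}[F(x*)]
    sd_N = nd.pdf(z) / f(x_star)     # from the density-matching condition
    mu_N = x_star - z * sd_N         # from the CDF-matching condition
    return mu_N, sd_N

# Illustrative lognormal X with assumed parameters of ln X:
lam, zeta = 4.5, 0.1
F = lambda x: NormalDist().cdf((math.log(x) - lam) / zeta)
f = lambda x: NormalDist().pdf((math.log(x) - lam) / zeta) / (zeta * x)

mu_N, sd_N = equivalent_normal(80.0, F, f)
print(round(mu_N, 3), round(sd_N, 3))  # lognormal closed form: sd_N = zeta*x* = 8.0
```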
This approximation of nonnormal distributions can become more and more inaccurate if the original
distribution becomes increasingly skewed. For highly skewed distributions, for example, Fréchet (Type
II), the conditions represented in Eqs. (3-30) and (3-31) need to be modified. In this case, the mean
value and the probability of exceedence of the equivalent normal variable are made equal to the median
value, and the probability of exceedence of the original random variable, respectively, at the checking
point (Rackwitz and Fiessler, 1978). x̄_i^N and σ_i^N can then be estimated as

σ_i^N = x_i*/Φ⁻¹[F_i(x_i*)]    (3-34)

and

x̄_i^N = 0    (3-35)
Chen and Lind (1983) proposed a three-parameter equivalent normal transformation, requiring that the
cumulative distribution function, the probability density function, and the slope of the probability density
function be equal for both the original and the transformed, normal distributions. Because the additional
parameter was introduced to control the transformation between the original and equivalent normal
distributions, it was anticipated that the Chen-Lind method would produce more accurate estimates of
probability of failure than the Rackwitz-Fiessler method. However, some research has shown (Wu,
1984) that the two methods generally have the same estimates of the probability of failure; only in
some cases does the Chen-Lind method perform better than the Rackwitz-Fiessler method. The Rack-
witz-Fiessler method and the Chen-Lind method are also called advanced first-order second-moment
(AFOSM) methods.
The limit state could be nonlinear either because of the nonlinear relationship between the random
variables and the limit state function, or because of some variables being nonnormal. Even a linear
limit state in the original space becomes nonlinear when transformed to the standard normal space
(which is where the search for the minimum distance point is conducted) if any of the variables is
nonnormal. Also, the transformation from correlated to uncorrelated variables might induce nonlinearity;
this transformation is discussed in detail in Section 7. If the joint probability density function (pdf) of
the random variables decays rapidly as one moves away from the minimum distance point, then the
above first-order estimate of failure probability is quite accurate. If the decay of the joint pdf is slow,
and the limit state is highly nonlinear, then one must use a higher order approximation for the failure
probability computation. Ditlevsen (1979a) suggested the use of a polyhedral envelope for the nonlinear
limit state (Fig. 3-6), consisting of tangent hyperplanes at selected points on the limit state (e.g., locally
minimum distance points). Then the failure probability is obtained through the union of failure regions
defined by the individual hyperplanes. This provides a better estimate of failure probability than a single
linear approximation at the global minimum distance point. Such a strategy, which may be referred to
as a multiple-point FORM, results in bounds on the failure probability, because of the difficulty in
computing the joint probability of multiple failure regions. The following second-order bounds have
been derived (Ditlevsen 1979b):
P_1 + Σ_{i=2}^m max(P_i − Σ_{j=1}^{i−1} P_ij, 0) ≤ P_f ≤ Σ_{i=1}^m P_i − Σ_{i=2}^m max_{j<i} P_ij    (3-36)
where P_1 is the probability of the most probable failure region, P_i is the probability of the ith failure
region, and P_ij is the joint probability of the ith and jth failure regions. (These second-order bounds are
also used in series system reliability analysis, in which system failure is defined as the union of indi-
vidual failure events.)
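The second-order bounds of Eq. (3-36) can be sketched directly; the individual and pairwise joint failure probabilities below are illustrative assumptions rather than values computed from a particular limit state:

```python
def ditlevsen_bounds(p, p_joint):
    """Second-order (Ditlevsen) bounds of Eq. (3-36) on the probability
    of a union of failure events. p[i] = P(E_i), ordered so that p[0]
    is the largest; p_joint[i][j] = P(E_i and E_j) for j < i."""
    lower = p[0]
    upper = p[0]
    for i in range(1, len(p)):
        lower += max(p[i] - sum(p_joint[i][j] for j in range(i)), 0.0)
        upper += p[i] - max(p_joint[i][j] for j in range(i))
    return lower, upper

# Illustrative (assumed) individual and pairwise joint probabilities.
p = [0.010, 0.006, 0.004]
p_joint = {1: {0: 0.002}, 2: {0: 0.001, 1: 0.0005}}
lo, hi = ditlevsen_bounds(p, p_joint)
print(round(lo, 6), round(hi, 6))   # 0.0165 0.017
```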
An alternative to the polyhedral surface is the construction of a second-order approximation at the
minimum distance point. Such computation has been referred to as a second-order reliability method
(SORM), which takes into account the curvature of the limit state around the minimum distance point.
Fiessler et al. (1979) explored the use of various quadratic approximations. A closed-form solution for
the probability of failure of a region bounded by a quadratic limit state was given by Breitung (1984),
using asymptotic approximations as
P_f ≈ Φ(−β) Π_{i=1}^{n−1} (1 + βκ_i)^{−1/2}    (3-37)
where κ_i denotes the ith main curvature of the limit state at the minimum distance point. Breitung
showed that this second-order probability estimate asymptotically approaches the first-order estimate as
β approaches infinity, if βκ_i remains constant. Refer to Hohenbichler et al. (1987) for a theoretical
explanation of FORM and SORM, using the concept of asymptotic approximations.
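Breitung's estimate requires only the reliability index β and the main curvatures κ_i. A minimal sketch (the function name is hypothetical; Φ(−β) is computed from the complementary error function) is:

```python
import math

def breitung_pf(beta, kappas):
    """Second-order (Breitung) failure probability estimate:
    p_f ~ Phi(-beta) * prod_i (1 + beta*kappa_i)**(-1/2),
    where kappas are the main curvatures at the minimum distance point."""
    phi_minus_beta = 0.5 * math.erfc(beta / math.sqrt(2.0))  # Phi(-beta)
    correction = 1.0
    for k in kappas:
        correction *= (1.0 + beta * k) ** -0.5
    return phi_minus_beta * correction
```

With all curvatures zero the correction factor is unity and the estimate reduces to the first-order result Φ(−β); positive curvatures (safe set curving away from the origin) reduce the estimate.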
Tvedt (1983, 1990) proposed three formulas to include curvatures in the probability estimate. To
understand these formulas, it is convenient to consider a rotated standard normal space Y' in which the
y'_n axis coincides with the perpendicular from the origin to the tangent hyperplane at the minimum
distance point y* (Der Kiureghian et al., 1987). This is achieved by an orthogonal transformation

$$\mathbf{Y}' = \mathbf{R}\,\mathbf{Y} \tag{3-38}$$

where the nth row of R is selected to be $\mathbf{y}^{*T}/(\mathbf{y}^{*T}\mathbf{y}^{*})^{1/2}$. A standard Gram-Schmidt algorithm may be
used to determine R. Consider a second-order surface in this rotated standard space as
$$y_n' = \beta + \frac{1}{2}\,\mathbf{y}'^{T}\mathbf{A}\,\mathbf{y}' \tag{3-39}$$

with the elements of A given by

$$a_{ij} = \frac{(\mathbf{R}\mathbf{D}\mathbf{R}^{T})_{ij}}{|\nabla G(\mathbf{y}^{*})|} \qquad (i, j = 1, 2, \ldots, n - 1) \tag{3-40}$$
42 First-Order and Second-Order Reliability Methods
where D is the n × n second-derivative matrix of the limit state surface in the standard normal space
evaluated at the design point, R is the rotation matrix, and ∇G(y*) is the gradient vector in the standard
space. (The main curvatures κ_i, used in Breitung's formula above, are the eigenvalues of the matrix A.)
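A hedged sketch of this construction: the nth row of R is the unit vector along y*, the remaining rows are filled in by Gram-Schmidt, and the main curvatures follow as the eigenvalues of the leading (n − 1) × (n − 1) block of A = RDRᵀ/|∇G(y*)|. The function name and the orthonormalization details are assumptions of this sketch:

```python
import numpy as np

def curvature_matrix(y_star, D, grad_norm):
    """Build the rotation matrix R (last row = unit design-point vector),
    form A = (R D R^T)/|grad G(y*)|, and return R together with the main
    curvatures, i.e., the eigenvalues of the leading (n-1)x(n-1) block."""
    n = len(y_star)
    rows = [y_star / np.linalg.norm(y_star)]   # unit vector along y*
    for e in np.eye(n):                         # Gram-Schmidt against collected rows
        v = e.copy()
        for r in rows:
            v -= (v @ r) * r
        if np.linalg.norm(v) > 1e-10:
            rows.append(v / np.linalg.norm(v))
        if len(rows) == n:
            break
    R = np.array(rows[1:] + rows[:1])           # unit y* becomes the n-th row
    A = (R @ D @ R.T) / grad_norm
    kappas = np.linalg.eigvalsh(A[: n - 1, : n - 1])
    return R, kappas
```

The returned curvatures can be fed directly to Breitung's formula, Eq. (3-37).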
With these definitions, Tvedt's three-term (TT) formula is

$$p_f \approx A_1 + A_2 + A_3 \tag{3-41}$$

$$A_1 = \Phi(-\beta)\,\bigl|\mathbf{I} + \beta\mathbf{A}\bigr|^{-1/2}$$
$$A_2 = \bigl[\beta\Phi(-\beta) - \varphi(\beta)\bigr]\Bigl(\bigl|\mathbf{I} + \beta\mathbf{A}\bigr|^{-1/2} - \bigl|\mathbf{I} + (1+\beta)\mathbf{A}\bigr|^{-1/2}\Bigr)$$
$$A_3 = (1+\beta)\bigl[\beta\Phi(-\beta) - \varphi(\beta)\bigr]\Bigl(\bigl|\mathbf{I} + \beta\mathbf{A}\bigr|^{-1/2} - \mathrm{Re}\bigl[\bigl|\mathbf{I} + (\beta + i)\mathbf{A}\bigr|^{-1/2}\bigr]\Bigr)$$

where I is the identity matrix, φ is the standard normal density function, and i = (−1)^{1/2}. The first term
is equivalent to the result of Breitung's formula, but is expressed in determinant form.
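Assuming the standard three-term expression p_f ≈ A₁ + A₂ + A₃ with A₁ = Φ(−β)|I + βA|^{−1/2} (a hedged reconstruction; A is the matrix of Eq. 3-40 restricted to the first n − 1 rotated coordinates), the determinant-form evaluation can be coded directly:

```python
import numpy as np
from math import erfc, exp, pi, sqrt

def tvedt_tt(beta, A):
    """Illustrative evaluation of Tvedt's three-term formula in determinant
    form; valid as written when the determinants det(I + s*A) are positive,
    i.e., 1 + s*kappa_i > 0 for the eigenvalues kappa_i of A."""
    n1 = A.shape[0]
    I = np.eye(n1)
    Phi = 0.5 * erfc(beta / sqrt(2.0))           # Phi(-beta)
    phi = exp(-0.5 * beta**2) / sqrt(2.0 * pi)   # standard normal density
    d = lambda s: np.linalg.det(I + s * A) ** -0.5
    A1 = Phi * d(beta)
    A2 = (beta * Phi - phi) * (d(beta) - d(1.0 + beta))
    t = np.linalg.det(I + (beta + 1j) * A) ** -0.5   # complex argument for A3
    A3 = (1.0 + beta) * (beta * Phi - phi) * (d(beta) - t.real)
    return float(A1 + A2 + A3)
```

With zero curvature matrix, the second and third terms vanish and the result reduces to the first-order estimate Φ(−β), as it should.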
Tvedt's single-integral formula expresses the probability content of the quadratic safe set as a one-
dimensional integral, which can be evaluated numerically through the k-point Gauss-Laguerre quadrature
approximation of Eq. (3-43), with weights w_j and with the square root with positive real part taken. For
a quadratic safe set, this expression is exact when the main curvatures κ_j are positive. In other cases, it
provides a better approximation to the probability than the previous two formulas.
Der Kiureghian et al. (1987) obtained a quadratic approximation by fitting the limit state at (2n - 2)
discrete points in the design point neighborhood in the rotated space mentioned above. The principal
directions of the approximating paraboloid are selected to coincide with the coordinate axes of the
rotated space. The approximating paraboloid then becomes

$$y_n' = \beta + \frac{1}{2} \sum_{i=1}^{n-1} a_i\, y_i'^{\,2} \tag{3-44}$$

where the coefficient in each direction is obtained by averaging over the two fitted semiparabolas:

$$\frac{1}{\sqrt{1 + \beta a_i}} = \frac{1}{2}\left( \frac{1}{\sqrt{1 + \beta a_{-i}}} + \frac{1}{\sqrt{1 + \beta a_{+i}}} \right) \tag{3-45}$$

where $a_{\pm i} = 2(b_{\pm i} - \beta)/(k\beta)^2$ are the curvatures of the two semiparabolas, $b_{\pm i}$ being the $y_n'$ ordinate
of the limit state at the fitting points on either side of the design point.
Because the principal directions of this point-fitted paraboloid are assumed to coincide with the
coordinate axes in the rotated space, the effect of second-order cross-derivative terms is ignored, thus
requiring less computation. For a problem with n variables, at most four deterministic runs per fitting
point are needed, with a total of 8(n - 1) to define the paraboloid completely. On the other hand, the
fomiulas of Breitung and Tvedt imply curvature-fitted paraboloids, and require the complete second-
order derivative matrix. This requires a total of 2(n - 1)2 computations, using a central difference
scheme, and n(n + 1)/2 computations, using a forward difference scheme. This difference in the amount
of computation becomes significant for problems with a large number of random variables.
It should be emphasized here that in all the methods mentioned above, the original variables (some
of which may be correlated and nonnormal) are transformed to an equivalent uncorrelated standard
normal space to search for the minimum distance point on the limit state surface. It is not necessary to
make such a transformation; Breitung (1989) developed a procedure that maximizes the loglikelihood
function of the probability distribution in the original space. Second-order approximation to the limit
state surface is then constructed at these maximum likelihood points.
7. CORRELATED VARIABLES
The first- and second-order reliability methods described in the previous sections implicitly assume that
the basic variables X₁, X₂, ..., X_n are uncorrelated. In general, some variables are correlated. Consider
Figure 3-7. Fitting of paraboloid in rotated standard space. (Source: Der Kiureghian, A., H. Z. Lin, and S. F. Hwang
[1987]. Second-order reliability approximations. Journal of Engineering Mechanics, ASCE 113(8):1208-1225.
Reprinted with permission from the American Society of Civil Engineers.)
the X_i's in Eq. (3-1) to be correlated variables with means μ_{X_i}, standard deviations σ_{X_i}, and the covariance
matrix represented as

$$[C] = \begin{bmatrix} \mathrm{Var}(X_1) & \mathrm{Cov}(X_1, X_2) & \cdots & \mathrm{Cov}(X_1, X_n) \\ \mathrm{Cov}(X_2, X_1) & \mathrm{Var}(X_2) & \cdots & \mathrm{Cov}(X_2, X_n) \\ \vdots & \vdots & \ddots & \vdots \\ \mathrm{Cov}(X_n, X_1) & \mathrm{Cov}(X_n, X_2) & \cdots & \mathrm{Var}(X_n) \end{bmatrix} \tag{3-46}$$
then it can be shown that the covariance matrix [C'] of the reduced variables X'_i is

$$[C'] = \begin{bmatrix} 1 & \rho_{X_1,X_2} & \cdots & \rho_{X_1,X_n} \\ \rho_{X_2,X_1} & 1 & \cdots & \rho_{X_2,X_n} \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{X_n,X_1} & \rho_{X_n,X_2} & \cdots & 1 \end{bmatrix} \tag{3-48}$$
in which μ^N_{X_i} and σ^N_{X_i} are the equivalent normal mean and standard deviation, respectively, of the X_i
variable evaluated at the checking point on the failure surface using Eqs. (3-30) and (3-31), and T is
a transformation matrix to convert the correlated reduced X'_i variables to uncorrelated reduced Y var-
iables. The T matrix can be shown to be
$$[T] = \begin{bmatrix} \theta_1^{(1)} & \theta_1^{(2)} & \cdots & \theta_1^{(n)} \\ \theta_2^{(1)} & \theta_2^{(2)} & \cdots & \theta_2^{(n)} \\ \vdots & \vdots & \ddots & \vdots \\ \theta_n^{(1)} & \theta_n^{(2)} & \cdots & \theta_n^{(n)} \end{bmatrix} \tag{3-50}$$

[T] is basically an orthogonal transformation matrix consisting of the eigenvectors of the correlation
matrix [C'] (Eq. [3-48]). {θ^(i)} is the eigenvector of the ith mode.
Using Eq. (3-49), Eq. (3-1) can be rewritten in terms of reduced uncorrelated normal Y variables.
For this case, the estimation of the probability of structural failure is straightforward, as outlined earlier in this chapter.
For practical large problems, the correlated variables may also be transformed into uncorrelated
variables through an orthogonal transformation of the form
$$\mathbf{Y} = \mathbf{L}^{-1}\mathbf{X}' \tag{3-51}$$
where L is the lower triangular matrix obtained by Cholesky decomposition of the correlation matrix
[C']. If the original variables are nonnormal, their correlation coefficients change on transformation to
equivalent normal variables. Der Kiureghian and Liu (1985) developed semiempirical formulas for fast
and reasonably accurate computation of [C'].
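A minimal sketch of the Cholesky route of Eq. (3-51); the function name and data layout are assumptions of this sketch:

```python
import numpy as np

def decorrelate(x_reduced, C_prime):
    """Transform correlated reduced variables X' into uncorrelated standard
    normal variables via Y = L^-1 X', where L is the lower triangular
    Cholesky factor of the correlation matrix C' (C' = L L^T)."""
    L = np.linalg.cholesky(C_prime)
    return np.linalg.solve(L, x_reduced)   # columns of x_reduced are samples
```

After the transformation the rows of Y have (approximately, for sampled data) unit variance and zero correlation.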
The procedure discussed here can be applied when the marginal distributions of all the variables as
well as the covariance matrix are known. When the joint distributions of all the correlated nonnormal
variables are available, an equivalent set of independent normal variables can be obtained using the
Rosenblatt transformation (Rosenblatt, 1952). From a practical point of view, this situation would be
extremely rare unless all the variables are either normal or lognormal. Furthermore, it can also be shown
that it is not possible to define the joint probability density function uniquely, using the information on
marginal distributions and the covariance matrix (Bickel and Doksum, 1977).
Another level of complexity arises in reliability analysis because the performance function g(X) is
generally not available as an explicit, closed-form function of the input variables. For most realistic
structures, the response has to be computed through a numerical procedure such as finite element
analysis. Several computational approaches have been pursued during the past decade for the reliability
analysis of structures with implicit performance functions. These can be broadly divided into three
categories, on the basis of their essential philosophy, as (1) Monte Carlo simulation (including efficient
sampling methods and variance reduction techniques), (2) response surface approach, and (3) sensitivity-
based probabilistic finite element analysis.
Monte Carlo simulation relies on random generation of input variables for each deterministic analysis
and estimation of response distribution statistics or reliability based on numerous repetitions. Such an
approach is obviously expensive for complicated structures. The efficiency of the simulation can be
improved by variance reduction techniques. Monte Carlo simulation and variance reduction techniques
are discussed in Chapter 4.
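A bare-bones version of this procedure, with hypothetical names for the performance function and the sampler, illustrates both the estimate and its statistical error:

```python
import numpy as np

def mc_failure_probability(g, sample, N=100_000, seed=0):
    """Direct Monte Carlo estimate of p_f = P[g(X) < 0]: draw N input
    vectors, evaluate the performance function, and count failures.
    `sample(rng, N)` must return an (N, n) array of input realizations."""
    rng = np.random.default_rng(seed)
    x = sample(rng, N)
    fails = np.count_nonzero(g(x) < 0.0)
    pf = fails / N
    # Coefficient of variation of the estimator, ~ sqrt((1 - pf)/(N * pf))
    cov = np.sqrt((1.0 - pf) / (N * pf)) if fails else np.inf
    return pf, cov
```

The estimator's coefficient of variation shows why direct simulation is expensive for small failure probabilities: halving the error requires four times as many cycles.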
The response surface approach constructs a first- or second-order polynomial approximation for g(X)
through (1) a few selected simulations in the neighborhood of the most likely failure point, and (2)
regression analysis of these results (Wu, 1984; Schueller et al., 1987). The closed-form (polynomial)
expression thus obtained is then used to search for the design point and the failure probability is
computed using first-order (FORM) or second-order (SORM) reliability methods, as described earlier.
Although the idea of polynomial approximation of the limit state is conceptually simple and applicable
to a wide variety of problems, it, too, like the Monte Carlo simulation, could end up requiring many
deterministic analyses. For n random variables, and excluding the interaction terms in the quadratic
equation, the number of deterministic analyses required in Wu's method is 2n - 1, whereas that reported
by Schueller et al. requires 4n + 3 runs. If the interaction terms are included, Wu's method requires
n(n + 1)/2 deterministic analyses. As the number of random variables is increased, the number of
deterministic analyses increases greatly in these methods, thus making them expensive.
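The following sketch illustrates the response surface idea with a quadratic polynomial without interaction terms, fitted by least-squares regression; it is not the specific experimental design of Wu or of Schueller et al., and the point-selection scheme and function name are assumptions:

```python
import numpy as np

def fit_response_surface(g, center, h, n_extra=10, seed=0):
    """Approximate an implicit g(X) by a quadratic without interaction
    terms, g ~ a0 + sum_i b_i x_i + sum_i c_i x_i^2, fitted on axial
    points center +/- h plus a few random points near `center`."""
    n = len(center)
    rng = np.random.default_rng(seed)
    pts = [center]
    for i in range(n):                       # axial points at +/- h
        for s in (+1.0, -1.0):
            p = center.copy()
            p[i] += s * h
            pts.append(p)
    pts += list(center + h * rng.standard_normal((n_extra, n)))
    X = np.array(pts)
    # Design matrix: [1, x_1 .. x_n, x_1^2 .. x_n^2]
    Phi = np.hstack([np.ones((len(X), 1)), X, X**2])
    coef, *_ = np.linalg.lstsq(Phi, np.array([g(p) for p in X]), rcond=None)
    return lambda x: coef[0] + x @ coef[1 : n + 1] + (x**2) @ coef[n + 1 :]
```

The returned closed-form polynomial can then be handed to FORM/SORM in place of the expensive deterministic model.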
The third approach is based on sensitivity analysis. It is important to know the sensitivity of structural
response to the basic random variables from a design point of view. If the uncertainty in a certain basic
variable is found to have a large effect on structural failure, then more testing could be done to reduce
the uncertainty in that variable. Thus the confidence in design will be increased. Also, it is possible to
use different design safety factors for different random variables, based on their uncertainty and on their
influence on structural behavior. Sensitivity analysis can also be used to ignore the uncertainty in those
variables that do not show a significant influence on structural reliability; this saves a great amount of
computational effort. Thus sensitivity information is useful in probabilistic analysis and design.
$$\mathbf{r} = \mathbf{f}_1 - \mathbf{K}_1 \mathbf{u} \tag{3-52}$$

where f₁ is the perturbed force vector, K₁ is the perturbed stiffness matrix, and u is the displacement
vector of the original unperturbed structure. The perturbed displacement is then obtained by solving

$$\mathbf{K}\,\Delta\mathbf{u} = \mathbf{r} \tag{3-53}$$

and

$$\mathbf{u}_1 = \mathbf{u} + \Delta\mathbf{u} \tag{3-54}$$
Note that the original K matrix is used to obtain the perturbed solution. A new residual vector is defined,
and the above analysis is repeated in a predictor-corrector sequence until convergence to the perturbed
solution. This approach is similar to the Neumann expansion approach used by Yamazaki et al. (1988).
This iterative perturbation technique has been extended to mixed-iterative finite element techniques and
has been observed to result in efficient sensitivity computation in the presence of material and geometric
nonlinearities (Dias, 1990).
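The predictor-corrector iteration described above can be sketched as follows; reusing the original matrix K (in practice, its factorization) for every corrector step is the essential point, while the function name and convergence test are assumptions of this sketch:

```python
import numpy as np

def iterative_perturbation(K, f, K1, f1, tol=1e-10, max_iter=200):
    """Solve the perturbed system K1 u1 = f1 by repeated solves with the
    original matrix K: update u1 <- K^-1 (f1 - (K1 - K) u1) until the
    iterates (and hence the residual f1 - K1 u1) converge."""
    u = np.linalg.solve(K, f)       # unperturbed solution (predictor)
    u1 = u.copy()
    dK = K1 - K
    for _ in range(max_iter):
        u1_new = np.linalg.solve(K, f1 - dK @ u1)   # corrector step
        if np.linalg.norm(u1_new - u1) < tol * max(np.linalg.norm(u1_new), 1.0):
            return u1_new
        u1 = u1_new
    return u1
```

The fixed point satisfies K u₁ + (K₁ − K) u₁ = f₁, i.e., K₁ u₁ = f₁; the iteration converges when the perturbation is small relative to K, which is the regime of interest for sensitivity computation.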
in the reliability analysis. The NESSUS program uses the former approach to construct a closed-form
relationship between the input and output variables. This is based on the perturbation of each random
variable about its mean value and computation of the corresponding variation in response. The closed-
form expression is then combined with the analytical reliability methods (FOSM method, or FORM or
SORM) discussed earlier. The perturbation sensitivities can be refined in subsequent iterations to con-
struct updated closed-form approximations to the performance function.
It is, however, not necessary to construct a closed-form expression for the performance function to
determine the probability of failure. This is because the Rackwitz-Fiessler algorithm needs only the
value and gradient of the performance function at each iteration to search for the most probable point.
The value of g(X) is simply obtained from deterministic analysis of the structure. The gradient vector
Vg(X) is evaluated through sensitivity analysis. Using the transformation relationships from original
variables X to uncorrelated standard normal variables Y, it is easy to apply the chain rule to compute
the gradient VG(Y) in the uncorrelated, standard normal space, which is where the search for the
minimum distance is being performed. Because the search involves only first-order derivatives, the
transformation from X to Y is approximated as
$$\mathbf{Y} = \mathbf{B}_0 + \mathbf{B}\mathbf{X} \tag{3-55}$$
$$\mathbf{y}_{k+1} = \frac{1}{|\nabla G(\mathbf{y}_k)|^{2}} \bigl[ \nabla G(\mathbf{y}_k)^{T} \mathbf{y}_k - G(\mathbf{y}_k) \bigr]\, \nabla G(\mathbf{y}_k) \tag{3-57}$$

where ∇G(y_k) is the gradient of the output function and y_k is the kth iteration point. This algorithm
proceeds iteratively until convergence is achieved. Two convergence criteria are used:

$$|G(\mathbf{y}_k)| \le \epsilon \qquad \text{and} \qquad |\beta_k - \beta_{k-1}| \le \delta$$

where β_k is the distance at the kth iteration, and ε and δ are both small numbers (specified by the
analyst). The above algorithm has been found to have fast convergence to the minimum distance even
for complex output functions of many random variables, but it is not guaranteed to converge. Liu and
Der Kiureghian (1986) investigated the use of other optimization algorithms to solve this problem, and
also discussed several improvements, such as the use of a merit function to monitor convergence and
modify step sizes. However, stability and convergence may still not be guaranteed by these
improvements.
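Because only the value and gradient of G are needed at each step, the search can be sketched compactly; the update below is the standard Rackwitz-Fiessler (HL-RF) recursion, with hypothetical function names and simple convergence tolerances:

```python
import numpy as np

def hlrf(G, gradG, y0, eps=1e-6, delta=1e-6, max_iter=100):
    """Search for the minimum distance point in standard normal space using
    the HL-RF update; stops when |G(y_k)| and the change in the distance
    beta_k are both small, mirroring the two criteria described above."""
    y = np.asarray(y0, dtype=float)
    beta_old = np.linalg.norm(y)
    for _ in range(max_iter):
        g, grad = G(y), gradG(y)
        y = (grad @ y - g) * grad / (grad @ grad)   # HL-RF recursion
        beta = np.linalg.norm(y)
        if abs(g) < eps and abs(beta - beta_old) < delta:
            return y, beta
        beta_old = beta
    return y, np.linalg.norm(y)   # not converged within max_iter
```

For a linear limit state the recursion lands on the exact design point in one step, which makes a convenient sanity check.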
$$\boldsymbol{\alpha} = -\frac{\partial \beta}{\partial \mathbf{y}^{*}} \tag{3-58}$$
Thus the elements of the vector α are directly related to the derivatives of β with respect to the standard
normal variables. Relating these to the original variables and their statistical variation, a unit sensitivity
vector can be derived as

$$\boldsymbol{\gamma} = \frac{\mathbf{S}\mathbf{B}^{T}\boldsymbol{\alpha}}{|\mathbf{S}\mathbf{B}^{T}\boldsymbol{\alpha}|} \tag{3-59}$$

(Der Kiureghian and Ke, 1985), where S is the diagonal matrix of standard deviations of the input
variables. The elements of the vector γ may be referred to as sensitivity indices of individual variables.
The sensitivity indices can be used to improve computational efficiency. The variables with very low
sensitivity indices at the end of the first few iterations are treated as deterministic at their mean values
for the subsequent iterations of the search for the minimum distance. This significantly reduces the
amount of computation because practically only a few variables have been observed to have a significant
effect on the probability of failure. These sensitivity indices are also useful in reducing the size of
problems with random fields, in which the random fields are discretized into sets of correlated random
variables (Mahadevan and Haldar, 1991), and in reliability-based optimization (Mahadevan, 1992).
9. COMPUTER PROGRAMS
Numerous computer programs have been developed by researchers to implement the FORM/SORM
algorithms described in this chapter. Three of the commercially available programs are described here.
9.1. NESSUS
NESSUS (numerical evaluation of stochastic structures under stress), developed at the Southwest
Research Institute (1991) in San Antonio, Texas under sponsorship by NASA Lewis Research Center,
combines probabilistic analysis with a general-purpose finite element/boundary element code (Cruse et
al., 1988; Southwest Research Institute, 1991). Structural analysis is performed using either the
displacement method, the mixed-iterative formulation, or the boundary element method, and iterative
perturbation is used for sensitivity analysis. Solution capabilities include transient, nonlinear analysis for
classic von Mises plasticity, thermoviscoplasticity, and large deformation/displacement conditions, and
fatigue/fracture problems. The probabilistic analysis features an advanced mean value (AMV) technique:
(1) A closed-form linear approximation to the performance function is constructed in the original space,
using perturbation about the mean values of the random variables; (2) this closed-form approximation
is combined with an extension of the Rackwitz-Fiessler and Chen-Lind algorithms to find the most
probable point (design point) in the standard normal space. The transformation to equivalent normal
variables improves on the Chen-Lind three-parameter approximation with a least-squares scheme (Wu
and Wirsching, 1987); (3) deterministic analysis of the structure at the most probable point, coupled
with the result of the previous step, is used to estimate a second-order approximation of either the
failure probability or the cumulative distribution function of the structural response. The option is also
provided to repeat the above steps and obtain more accurate estimates of the most probable point and
failure probability. The method has been extensively demonstrated for space propulsion components
such as turbine blades and high-pressure ducts (e.g., Rajagopal et al., 1989). The program also includes
techniques such as fast convolution, and curvature-based adaptive importance sampling. System relia-
bility and risk assessment capabilities in the program use either (1) fault tree analysis combined with
adaptive importance sampling, or (2) a structural reanalysis procedure to account for progressive dam-
age. The program is available on Vax mainframes and SUN workstations.
9.2. PROBAN
PROBAN (PROBability ANalysis) was developed at Det Norske Veritas (Høvik, Norway), through
A.S. Veritas Research (Veritas Sesam Systems, 1991). It is available for APOLLO/HP, DEC, IBM, and
SUN computers.
PROBAN was designed to be a general probabilistic analysis tool. Particularly efficient methods are
available for computing small probabilities, which often arise in structural reliability problems. It can
be applied in many different areas, including marine and offshore structures, mechanical and aerospace
structures, civil engineering problems, and many other applications. PROBAN is capable of estimating
the probability of failure, using the FORM and SORM for a single event, unions, intersections, and
unions of intersections. PROBAN also contains a mean-based FORM, intended primarily for CPU-
intensive models. It has a library of standard probability distributions. The approximate FORM/SORM
results can be updated through importance sampling. The probability of general events can be computed
by Monte Carlo simulation and directional sampling. Probability distribution computations can be per-
formed by Monte Carlo simulation or Latin hypercube sampling. Sensitivity analysis by simulation is
also available.
9.3. CALREL
CALREL (CAL-RELiability) is a general-purpose structural reliability analysis program designed to
compute probability integrals of the form given by Eq. (3-2).
CALREL was developed at the University of California at Berkeley by Liu et al. (1989). It incor-
porates four general techniques for computing the probability of failure: (1) FORM, (2) SORM, (3)
directional simulation with exact or approximate surfaces, and (4) Monte Carlo simulation. CALREL
has a large library of probability distributions for independent as well as dependent random variables.
Additional distributions can be included through a user-defined subroutine. CALREL is written in FOR-
TRAN-77 and operates on IBM-PC or compatible personal computers, as well as on computers with
the Unix operating system.
The fundamental concept of reliability analysis and the historical development of reliability methods
are discussed in this chapter. Several reliability methods within the family of FOSM, FORM, and SORM
are discussed. Some of the commercially available computer programs are also identified.
The state of the art in the area of structural reliability analysis has improved significantly in the last
two decades. A considerable amount of work has been conducted in the areas of element-level and
system-level reliability estimations. The general area of risk-based engineering design is still growing
at a rapid rate. This area is being advanced significantly by the introduction of several risk-based design
codes that can be applied routinely in the design office. It needs to be pointed out that a considerable
amount of research work is still being conducted in the areas of system reliability, simulation, time-
dependent reliability analysis, and stochastic finite element analysis. The results of these investigations
need to be synthesized and adapted to simple, practical methods for realistic engineering applications.
REFERENCES
AMERICAN INSTITUTE OF STEEL CONSTRUCTION (AISC) (1986). Manual of Steel Construction: Load and Resistance
Factor Design. Chicago: American Institute of Steel Construction.
ANG, A. H.-S. (1973). Structural risk analysis and reliability-based design. Journal of Structural Division of Amer-
ican Society of Civil Engineers 99(ST9):1891-1910.
ANG, A. H.-S., and W. H. TANG (1975). Probability Concepts in Engineering Planning and Design, Vol. I: Basic
Principles. New York: John Wiley & Sons.
ANG, A. H.-S., and W. H. TANG (1984). Probability Concepts in Engineering Planning and Design, Vol. II: Design,
Risk, and Reliability. New York: John Wiley & Sons.
AYYUB, B. M., and A. HALDAR (1984). Practical structural reliability techniques. Journal of Structural Division of
American Society of Civil Engineers 110(8):1707-1724.
BENJAMIN, J. R., and C. A. CORNELL (1970). Probability, Statistics and Decisions for Civil Engineering. New York:
McGraw-Hill.
BICKEL, P. J., and K. A. DOKSUM (1977). Mathematical Statistics: Basic Ideas and Selected Topics. San Francisco:
Holden-Day.
BREITUNG, K. (1984). Asymptotic approximations for multinormal integrals. Journal of Engineering Mechanics
Division of American Society of Civil Engineers 110(3):357-366.
BREITUNG, K. (1989). Probability Approximations by Loglikelihood Maximization. Serie Sto. Nr. 6, Seminar für
angewandte Stochastik, Institut für Statistik und Wissenschaftstheorie. Munich, Germany: University of
Munich.
CANADIAN STANDARDS ASSOCIATION (CSA) (1974). Standards for the Design of Cold-Formed Steel Members in
Buildings. CSA S-136. Ottawa, Canada: Canadian Standards Association.
CHEN, X., and N. C. LIND (1982). A New Method for Fast Probability Integration. Paper No. 171. Waterloo,
Canada: University of Waterloo.
COMITE EUROPEAN DE BETON (CEB), Joint Committee on Structural Safety CEB-CECM-FIP-IABSE-IASS-RILEM
(1976). First Order Reliability Concepts for Design Codes. CEB Bulletin No. 112. Paris, France: Comite
European de Beton.
CORNELL, C. A (1969). A probability-based structural code. Journal of the American Concrete Institute 66(12):
974-985.
CRUSE, T. A., O. H. BURNSIDE, Y.-T. WU, E. Z. POLCH, and J. B. DIAS (1988). Probabilistic structural analysis
methods for select space propulsion system structural components (PSAM). Computers and Structures 29(5):
891-901.
DER KIUREGHIAN, A., and J.-B. KE (1985). Finite element-based reliability analysis of framed structures. In:
Proceedings of the 4th International Conference on Structural Safety and Reliability, Vol. 1. New York:
International Association for Structural Safety and Reliability, pp. 395-404.
DER KIUREGHIAN, A., and P.-L. LIU (1985). Structural Reliability under Incomplete Probability Information. Report
No. UCB/SESM-85/01. Berkeley, California: University of California.
DER KIUREGHIAN, A., H. Z. LIN, and S. F. HWANG (1987). Second-order reliability approximations. Journal of
Engineering Mechanics Division of American Society of Civil Engineers 113(8):1208-1225.
DIAS, J. B. (1990). Probabilistic Finite Element Methods for Problems in Solid Mechanics. Ph.D. Thesis. Palo
Alto, California: Stanford University.
DIAS, J. B., and J. C. NAGTEGAAL (1985). Efficient algorithms for use in probabilistic finite element analysis. In:
Advances in Aerospace Structural Analysis. O. H. Burnside and C. H. Parr, Eds. New York: American Society
of Mechanical Engineers, pp. 37-50.
DITLEVSEN, O. (1973). Structural Reliability and the Invariance Problem. Research Report No. 22. Waterloo,
Canada: University of Waterloo.
DITLEVSEN, O. (1979a). Generalized second moment reliability index. Journal of Structural Mechanics 7(4):435-
451.
DITLEVSEN, O. (1979b). Narrow reliability bounds for structural systems. Journal of Structural Mechanics 7(4):
453-472.
FIESSLER, B., H. J. NEUMANN, and R. RACKWITZ (1979). Quadratic limit states in structural reliability. Journal of
Engineering Mechanics Division of American Society of Civil Engineers 105(4):661-676.
HALDAR, A. (1981). Statistical methods. In: Numerical Methods in Geomechanics, NATO Advanced Study Institute
Series. J. B. Martins, Ed. Boston: D. Reidel Publishing, pp. 471-504.
HALDAR, A., and B. M. AYYUB (1984). Risk models for correlated non-normal variables. In: Proceedings of the
5th ASCE-EMD Specialty Conference. New York: American Society of Civil Engineers, pp. 1237-1240.
HANDA, K., and K. ANDERSON (1981). Application of finite element methods in the statistical analysis of structures.
In: Proceedings of the 3rd International Conference on Structural Safety and Reliability. Amsterdam, The
Netherlands: Elsevier, pp. 409-417.
HASOFER, A. M., and N. C. LIND (1974). Exact and invariant second moment code format. Journal of the Engi-
neering Mechanics Division of American Society of Civil Engineers 100(EM1):111-121.
HISADA, T., and S. NAKAGIRI (1985). Role of the stochastic finite element method in structural safety and reliability.
In: Proceedings of the 4th International Conference on Structural Safety and Reliability. New York: Inter-
national Association for Structural Safety and Reliability, pp. 385-394.
HOHENBICHLER, M., S. GOLLWITZER, W. KRUSE, and R RACKWITZ (1987). New light on first- and second-order
reliability methods. Structural Safety 4:267-284.
LIND, N. C. (1973). The design of structural design norms. Journal of Structural Mechanics 1(3):357-370.
LIU, P.-L., and A. DER KIUREGHIAN (1986). Optimization Algorithms for Structural Reliability Analysis. Report
UCB/SESM-86/09. Berkeley, California: University of California.
LIU, P.-L., H.-Z. LIN, and A. DER KIUREGHIAN (1989). CALREL. Berkeley, California: University of California.
LIU, W. K., T. BELYTSCHKO, and A. MANI (1985). Probabilistic finite elements for transient analysis in nonlinear
continua. In: Proceedings of the ASME Winter Annual Meeting. New York: American Society of Mechanical
Engineers, pp. 9-24.
MADSEN, H. 0., S. KRENK, and N. C. LIND (1986). Methods of Structural Safety. Englewood Cliffs, New Jersey:
Prentice-Hall.
MAHADEVAN, S. (1988). Stochastic Finite Element-Based Structural Reliability Analysis and Optimization. Ph.D.
Thesis. Atlanta, Georgia: Georgia Institute of Technology.
MAHADEVAN, S. (1992). Probabilistic optimum design of framed structures. Computers and Structures 42(3):365-
374.
MAHADEVAN, S., and A HALDAR (1991). Practical random field discretization in stochastic finite element analysis.
Structural Safety 9:283-304.
MELCHERS, R E. (1987). Structural Reliability Analysis and Prediction. New York: Halsted Press.
PALOHEIMO, E. (1973). Eine Bemessungsmethode, die sich auf variierende Fraktile gründet. In: Sicherheit von
Betonbauten. Berlin, Germany: Arbeitstagung des Deutsche Beton-Verein, pp. 91-100.
RACKWITZ, R. (1976). Practical Probabilistic Approach to Design. Bulletin No. 112. Paris, France: Comite Eur-
opean du Beton.
RACKWITZ, R., and B. FIESSLER (1976). Note on Discrete Safety Checking When Using Non-Normal Stochastic
Models for Basic Variables. Loads Project Working Session. Cambridge, Massachusetts: Massachusetts In-
stitute of Technology.
RACKWITZ, R., and B. FIESSLER (1978). Structural reliability under combined random load sequences. Computers
and Structures 9:489-494.
RAJAGOPAL, K. R., A. DEBCHAUDHURY, and J. F. NEWELL (1989). Verification of NESSUS code on space propul-
sion components. In: Proceedings of the 5th International Conference on Structural Safety and Reliability.
New York: American Society of Civil Engineers, pp. 2299-2306.
RAVINDRA, M. K., N. C. LIND, and W. W. SIU (1974). Illustrations of reliability-based design. Journal of Structural
Division of American Society of Civil Engineers 100(ST9):1789-1811.
ROSENBLATT, M. (1952). Remarks on a multivariate transformation. Annals of Mathematical Statistics 23(3):470-
472.
ROSENBLUETH, E., and L. ESTEVA (1972). Reliability Bases for Some Mexican Codes. ACI Publication SP-31:1-
41. Detroit, Michigan: American Concrete Institute.
1. INTRODUCTION
The interest in engineering simulation started in the early 1940s for the purpose of developing inex-
pensive techniques for analytically testing engineering systems by imitating their real behavior. These
methods are commonly called Monte Carlo simulation techniques. The principle behind the methods is
to develop a computer-based analytical model that predicts the behavior of a system. The model is then
evaluated to predict that behavior. If one or more parameters of the system
are random variables, the model is evaluated several times. Each evaluation (called a simulation cycle,
run, or trial) is based on a certain randomly selected set of input parameters of the system. Certain
analytical tools are used to assure the random selection of the input parameters according to their
respective probability distributions for each evaluation. As a result, several predictions of the system
behavior are obtained. Then, statistical methods are used to evaluate the moments and distribution types
of the output variables that describe the behavior of the system. This chapter discusses the use of Monte
Carlo simulation and advanced simulation methods with variance reduction techniques in structural
reliability assessment.
The analytical and computational steps that are needed for performing Monte Carlo simulation are
(1) definition of the system, (2) generation of input random variables, (3) evaluation of the model, (4)
statistical analysis of the resulting behavior, and (5) study of efficiency and convergence. The definition
of the system should include its boundaries, input parameters, output (or behavior) parameters, and
models that relate the input parameters to the output parameters. The accuracy of the results of simu-
lation is highly dependent on having an accurate definition for the system. It is common to assume the
system model in Monte Carlo simulation to be nonrandom. However, modeling uncertainty can be
incorporated in the analysis in the form of bias factors and additional variabilities, for example, coef-
ficients of variation. All critical parameters should be included in the model. The definition of the input
parameters should include their statistical or probabilistic characteristics, that is, knowledge of their
moments and distribution types. The input parameters are generated and these values should then be
substituted in the model to obtain output parameters. By repeating the procedure N times (for N sim-
ulation cycles), N sets of output parameters are obtained. Statistical methods can now be used to obtain,
54 Simulation-Based Reliability Methods
for example, the mean value, variance, or distribution types for the output parameters. The accuracy of
the results is expected to increase by increasing the number of simulation cycles. The convergence of
the simulation methods can be investigated by studying their limiting behavior. Also, the efficiency, and
thus the accuracy, of simulation methods can be increased by using variance reduction techniques. These
techniques are discussed in Sections 4.2 to 4.9.
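The five steps above, applied to a hypothetical output M/S (applied moment over section modulus), reduce to a few lines; the names and the input distributions are illustrative only:

```python
import numpy as np

def simulate_output_statistics(model, sample, N, seed=0):
    """Steps (2)-(4) of the procedure: generate N input sets, evaluate the
    (nonrandom) model once per simulation cycle, and estimate the moments
    of the output; the estimates stabilize as N grows."""
    rng = np.random.default_rng(seed)
    out = np.array([model(x) for x in sample(rng, N)])
    mean = out.mean()
    std = out.std(ddof=1)
    return mean, std, std / abs(mean)    # mean, standard deviation, COV
```

Repeating the run with increasing N (step 5) shows the standard error of the estimated mean shrinking roughly as 1/√N.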
2.1. Notations
When dealing with random variables, the random variables are denoted by capital letters (e.g., X, Y,
and Z) and the specific values they take are denoted by corresponding lower-case letters (e.g., x, y,
and z).
A Constant
a Model parameter
B Length
b Model parameter
c Model parameter
F Cumulative distribution function
f Probability density function
f_X Joint density function
g(.) Performance function, also a function used in random generation
h Sampling density function or importance function
I Integer value
I_f Failure indicator
Int Integer function
k Number of regions or control variables
L, l Load effect
M Margin of safety or performance function
Me Applied bending moment
m Number of random variables
N Number of simulation cycles
Nf Number of failures
n Number of random variables
Pf Probability of failure
P̂f Estimator of Pf
P Probability
R, r Resistance or strength or region
S Elastic section modulus
U, u Uniform random value
W, w Load intensity
X, x Random variable
X Vector of random variables
Y, y Yield stress
α Parameter of distribution
Φ Cumulative distribution function of standard normal variate
γ Parameter of distribution
μ Mean value
σ Standard deviation
2.2. Abbreviations
AS Adaptive sampling
ASCE American Society of Civil Engineers
ASM Advanced second moment
ASNE American Society of Naval Engineers
AV Antithetic variates
CDF Cumulative distribution function
CE Conditional expectation
COV Coefficient of variation
FORM First-order reliability method
FOSM First-order second moment
GCE Generalized conditional expectation
ICOSSAR International Conference on Structural Safety and Reliability
IS Importance sampling
LHS Latin hypercube sampling
MCS Monte Carlo simulation
RS Response surface
SORM Second-order reliability method
SS Stratified sampling
Var Variance
VRT Variance reduction techniques
As noted in Section 1, input random variables need to be generated according to their respective
probability distributions. Such random variable generations require random numbers that are uniformly
distributed between 0 and 1. Therefore the generation of uniformly distributed random numbers is
discussed first, followed by the generation of random variables.
Ii = (aIi-1 + b) - c Int[(aIi-1 + b)/c]   (for i = 1, 2, 3, ...)   (4-1)
where Int[(aIi-1 + b)/c] is the integer part of the result of the division, a is the multiplier, b is the increment, and c is the modulus. These model constants a, b, and c are nonnegative integers. The starting value I0 is called the seed (an arbitrary number), which should be provided by the user of the model. The value Ii is obtained by dividing (aIi-1 + b) by c and letting Ii be the remainder of the division. The random number Ui is defined as

Ui = Ii/c   (4-2)
The Ii value is normalized by dividing by c, because 0 ≤ Ii < c. The parameters of this recursive model should satisfy the following conditions: 0 < c, a < c, b < c, and I0 < c. It is evident from this recursive
model that the process is not random, because it can be repeated with the same results all the time. For
this reason, this process is commonly called pseudorandom number generation. Although the values are
not truly random, they would pass all statistical tests for randomness. The period of this generator is
less than or equal to c. For this reason and others, the value of c should be very large, for example,
c ≥ 10⁹.
In the special case when b = 0, linear congruential generators are called multiplicative generators.
If b > 0, they are called mixed generators.
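The recursive model of Eqs. (4-1) and (4-2) can be sketched as follows. The specific constants below (a = 16807, b = 0, c = 2³¹ − 1, the widely used Park-Miller "minimal standard" multiplicative generator) are illustrative choices, not values prescribed in this chapter:

```python
def lcg(seed, a=16807, b=0, c=2**31 - 1):
    """Linear congruential generator, Eqs. (4-1) and (4-2):
    I_i = (a*I_{i-1} + b) mod c, normalized as U_i = I_i / c.
    With b = 0 this is a multiplicative generator (Park-Miller constants)."""
    i = seed
    while True:
        i = (a * i + b) % c   # Eq. (4-1): remainder of the division by c
        yield i / c           # Eq. (4-2): pseudorandom number in [0, 1)

gen = lcg(seed=12345)
u = [next(gen) for _ in range(3)]   # three uniform pseudorandom numbers
```

Because the recursion is deterministic, restarting the generator with the same seed reproduces the same sequence, which is exactly the pseudorandom behavior described above.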
In addition to some computers and calculators that have random number generators, tables of random
numbers have also been published (Rand Corporation, 1955).
x = Fx⁻¹(u)   (4-3)

where Fx⁻¹ is the inverse of the cumulative distribution function of the random variable X. This procedure
is repeated as many times as required, using a different value of u each time. Because the range of
Fx(x) is [0, 1], a unique value for x is obtained all the time.
If X is a discrete random variable, then the value of the generated random variable, X, is determined as follows:

x = xi   such that   Fx(xi-1) < u ≤ Fx(xi)   (4-4)

where xi (i = 1, 2, 3, ..., m) are the m discrete values of the random variable X with a cumulative mass function Fx(x).
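The inverse transformation method for both the continuous case of Eq. (4-3) and the discrete case can be sketched as below. The exponential distribution is used only because its inverse CDF has a simple closed form; the function names are ours:

```python
import math
import random

def inverse_transform_exponential(u, rate=1.0):
    """Continuous case, Eq. (4-3): x = F^(-1)(u).  For the exponential
    distribution F(x) = 1 - exp(-rate*x), so F^(-1)(u) = -ln(1 - u)/rate."""
    return -math.log(1.0 - u) / rate

def inverse_transform_discrete(u, values, cmf):
    """Discrete case: return the smallest x_i whose cumulative mass
    F(x_i) = cmf[i] is at least u."""
    for x, F in zip(values, cmf):
        if u <= F:
            return x
    return values[-1]

u = random.random()
x_cont = inverse_transform_exponential(u, rate=0.5)
x_disc = inverse_transform_discrete(u, values=[1, 2, 3], cmf=[0.2, 0.7, 1.0])
```

Repeating the two calls with fresh values of u generates as many random values as required, one per uniform random number.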
(4-5)
If this condition is not satisfied, the value of y is rejected and a new pair of u and y values needs to
be generated. The procedure is repeated until an acceptable value of y is obtained. Then the x value is
taken equal to the acceptable y value. The overall procedure needs to be repeated until the required
number of random values for X is generated.
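The rejection procedure described above can be sketched as follows for a target density f bounded by c·g(y), where g is an easy-to-sample proposal density. The triangular target f(x) = 2x on [0, 1] and the function names are purely illustrative choices of ours:

```python
import random

def accept_reject(f, c, sample_g, g, n):
    """Acceptance-rejection sampling: draw y from the proposal density g and
    u uniformly on (0, 1); accept y as a sample of f when
    u <= f(y) / (c * g(y)), otherwise reject and generate a new (u, y) pair."""
    out = []
    while len(out) < n:
        y = sample_g()
        u = random.random()
        if u <= f(y) / (c * g(y)):   # acceptance condition
            out.append(y)
    return out

# Illustrative target: triangular density f(x) = 2x on [0, 1],
# bounded by c = 2 times the uniform proposal g(x) = 1.
samples = accept_reject(f=lambda x: 2.0 * x, c=2.0,
                        sample_g=random.random, g=lambda x: 1.0, n=1000)
```

The closer c·g(y) hugs f(y), the fewer (u, y) pairs are rejected, which is why the choice of the bounding density matters for efficiency.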
The performance function or safety margin for a structural member according to a specified failure
mode is given by

M = g(X1, X2, ..., Xn)   (4-6)

in which the Xi (i = 1, ..., n) are the n basic random variables (input parameters), with g(.) being the
functional relationship between the basic random variables and failure (or survival). The performance
function can be defined such that the limit state, or failure surface, is given by M = O. The failure event
is defined as the space where M < 0, and the survival event is defined as the space where M > O. Thus,
the probability of failure can be evaluated by the following integral:
Pf = ∫ ... ∫ fX(x1, x2, ..., xn) dx1 dx2 ... dxn   (4-7)

where fX is the joint density function of X1, X2, ..., Xn, and the integration is performed over the region where M < 0. Because each of the basic random variables has a unique distribution and they interact,
the integral of Eq. (4-7) cannot be easily evaluated. Monte Carlo computer simulation with or without
variance reduction techniques (VRT) can be used to estimate the probability of failure (discussed in
Sections 4.1 to 4.9). Other reliability assessment methods are described in Chapters 2, 3, and 5 of this
book.
Pf = Nf/N   (4-8)
where Nf is the number of simulation cycles in which g(.) < 0, and N is the total number of simulation cycles. As N approaches infinity, Pf approaches the true probability of failure. The accuracy of Eq. (4-8) can be evaluated in terms of its variance. For a small probability of failure and/or a small number of simulation cycles, the variance of Pf can be quite large. Consequently, it may take a large number of simulation cycles to achieve a specified accuracy. The variance of the estimated probability of failure can be computed by assuming each simulation cycle to constitute a Bernoulli trial. Therefore, the number of failures in N trials can be considered to follow a binomial distribution. Then the variance of the estimated probability of failure can be computed approximately as
Var(Pf) = Pf(1 - Pf)/N   (4-9)
It is recommended to measure the statistical accuracy of the estimated probability of failure by com-
puting its coefficient of variation as
COV(Pf) = √Var(Pf) / Pf   (4-10)
The smaller the coefficient of variation, the better the accuracy of the estimated probability of failure.
It is evident from Eqs. (4-9) and (4-10) that as N approaches infinity, Var(Pf) and COV(Pf) approach
zero.
Additional information about direct simulation for structural reliability assessment is provided by
Ayyub and Haldar (1984), Ang and Tang (1984), Harbitz (1983), and Melchers (1987).
Example 1: Flexural reliability of a beam using direct simulation. The performance function that describes the flexural behavior of a simply supported beam of span length √2·B supporting a uniform load W is given by
M = YS - WB²/4   (4-11)
where Y is the yield stress of the material of the beam, and S is the elastic section modulus. In this
example, failure is defined as yielding at the extreme material fibers of the cross-section of the beam.
This is a nonlinear performance function. The mean values and standard deviations of the basic random
variables are given in Table 4-1. Using direct Monte Carlo simulation, the random variables Y, S, W,
and B were randomly generated and substituted in the performance function. Failures were then counted
by monitoring the sign of the resulting evaluations of the performance function (negative means failure).
Then the probability of failure was estimated as Nf/N. Also, the COV(Pf) was computed. The number
Table 4-1. Statistical Characteristics of the Random Variables for Example 1

Random variable    Mean value      Coefficient of variation    Distribution type
Y                  38 ksi          0.05                        Normal
S                  100 in.³        0.05                        Normal
W                  0.3 kip/in.     0.25                        Normal
B                  180 in.         0.05                        Normal
of simulation cycles N was varied from 100 to 20000 to illustrate convergence of the simulation process.
The results are shown in Fig. 4-1.
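A direct Monte Carlo run for this example can be sketched as below, using the moments of Table 4-1 and taking the performance function as g = YS − WB²/4, the first-yield margin consistent with a simply supported span of √2·B under a uniform load W; this explicit form is our reading of Eq. (4-11), not a formula stated elsewhere in this chapter:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20000

# Table 4-1: means and coefficients of variation, all normal distributions.
Y = rng.normal(38.0, 0.05 * 38.0, N)    # yield stress (ksi)
S = rng.normal(100.0, 0.05 * 100.0, N)  # elastic section modulus (in.^3)
W = rng.normal(0.3, 0.25 * 0.3, N)      # uniform load intensity (kip/in.)
B = rng.normal(180.0, 0.05 * 180.0, N)  # length parameter (in.)

g = Y * S - W * B**2 / 4.0              # assumed reading of Eq. (4-11)
Nf = np.count_nonzero(g < 0)            # failures: negative performance function
pf = Nf / N                             # Eq. (4-8)
var_pf = pf * (1.0 - pf) / N            # Eq. (4-9), binomial assumption
cov_pf = np.sqrt(var_pf) / pf           # Eq. (4-10)
```

Rerunning the sketch with N varied from 100 to 20000 reproduces the kind of convergence behavior that Fig. 4-1 illustrates: the estimate stabilizes and its coefficient of variation falls as N grows.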
In the classic use of simulation-based methods (i.e., direct simulation), all the basic random variables
are randomly generated and Eq. (4-6) is evaluated. Failures are then counted depending on the resulting
sign of Eq. (4-6). The probability of failure is estimated as the ratio of the number of failures to the
total number of simulation cycles. Therefore, for smaller probabilities of failure, larger numbers of
simulation cycles are needed to estimate the probability of failure within an acceptable level of statistical
Figure 4-1. (a) Estimated probability of failure for example 1. (b) Coefficient of variation of estimated failure
probability for example 1.
error. The amount of computer time needed for this method is relatively large, whereas the computational
effort per simulation cycle is relatively small.
Variance reduction techniques offer an increase in the efficiency and accuracy of the simulation-
based assessment of structural reliability for a relatively small number of simulation cycles, in addition
to expediting convergence. However, the level of computational difficulty for each simulation cycle
increases. According to these variance reduction techniques, the variance of the estimated probability
of failure is reduced. In the following sections, some commonly used variance reduction methods are
described.
The probability of failure based on importance sampling is estimated as

Pf = (1/N) Σ_{i=1}^{N} Ifi fX(x1i, x2i, ..., xni) / hX(x1i, x2i, ..., xni)   (4-12)

where N is the number of simulation cycles, fX(x1i, x2i, ..., xni) is the original joint density function of the basic random variables evaluated at the ith generated values of the basic random variables, hX(x1i, x2i, ..., xni) is the selected joint density function of the basic random variables evaluated at the ith generated values, and If is the failure indicator function that takes the value 1 for failure and 0 for survival. In Eq. (4-12), hX(x) is called the sampling density function or the importance function.
Efficiency (and thus the required number of simulation cycles) depends on the choice of the sampling
density function. A number of procedures for selecting the sampling density functions have been sug-
gested (Madsen et al., 1986; Harbitz, 1983; Melchers, 1987, 1989; Ang et al., 1989; Bourgund and
Bucher, 1986). For an example problem with a failure probability of 0.001, about 100,000 simulation
cycles may be required by direct simulation, but only 200 cycles may be sufficient using importance
sampling (Melchers, 1987).
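The estimator of Eq. (4-12) can be illustrated on a one-dimensional toy problem: estimating Pf = P(X > 3) for a standard normal X by sampling from a normal density centered on the failure boundary. This particular choice of sampling density h and all names are ours, purely for illustration:

```python
import math
import random

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Normal probability density function."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def importance_sampling_pf(t, n=2000, seed=1):
    """Estimate Pf = P(X > t), X standard normal, via Eq. (4-12):
    sample from the importance function h = Normal(t, 1) centered on the
    failure boundary, and weight each failure indicator by f(x)/h(x)."""
    rnd = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rnd.gauss(t, 1.0)                             # draw from h
        if x > t:                                         # indicator If = 1
            total += normal_pdf(x) / normal_pdf(x, mu=t)  # weight f/h
    return total / n

pf = importance_sampling_pf(t=3.0)   # exact answer is 1 - Phi(3), about 1.35e-3
```

Because roughly half of the samples from h land in the failure region, a few thousand cycles suffice here, whereas direct simulation would need on the order of a million cycles for comparable accuracy at this failure probability.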
According to stratified sampling, the probability of failure is estimated as

Pf = Σ_{j=1}^{k} [P(Rj)/Nj] Σ_{i=1}^{Nj} Ifi   (4-13)
where P(Rj) is the probability of region Rj, Nj is the number of simulation cycles performed in region Rj, and Ifi is the indicator function as defined in Eq. (4-12) and evaluated at the ith simulation cycle.
This method allows the analyst to concentrate the simulation effort (i.e., perform more simulation cycles)
in important regions, for example, the failure regions, or to concentrate the effort on important basic
random variables. The failure region may not be known in advance; only a "guess" can be made.
Additional information about this method is provided by Law and Kelton (1982) and Schueller et al.
(1989).
In Latin hypercube sampling, the range [0, 1] is divided into N nonoverlapping intervals of equal probability, and a random value is obtained within the ith interval as

ui = (i - 1)/N + u/N   (4-14)
where u is a random number in the range [0, 1], and ui(i = 1, 2, ... , N) is the random value for the
ith interval. Once the ui(i = 1, 2, ... , N) values are obtained, then inverse transformation can be used
to generate values for the input random variables. Thus a set of N random values is generated for each
input random variable. One value from each set is picked randomly and substituted into the performance
function to decide whether the structure survives [g(X) ;::: 0] or fails [g(X) < 0]. This procedure is
repeated N times to determine the failure probability. Ayyub and Lai (1989, 1991) provide illustrative
examples of this method.
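Eq. (4-14) can be sketched as follows; the function name is ours, and the final shuffle supplies the random pairing across input variables described above:

```python
import random

def latin_hypercube_uniforms(N, seed=0):
    """Eq. (4-14): u_i = (i - 1)/N + u/N places one random value in each of
    N nonoverlapping intervals of equal probability; shuffling then allows
    one value to be picked at random for pairing with the other variables."""
    rnd = random.Random(seed)
    u = [(i + rnd.random()) / N for i in range(N)]  # 0-based i plays (i - 1)
    rnd.shuffle(u)
    return u

u_vals = latin_hypercube_uniforms(10)   # one value per interval [k/10, (k+1)/10)
```

Feeding each list of N stratified uniforms through the inverse transformation of Eq. (4-3) then yields a set of N values per input random variable, exactly as the procedure above describes.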
For example, for a case with a failure probability on the order of 10⁻⁹, direct simulation would require well over 10⁹ cycles, whereas the adaptive sampling method required only 400 cycles.
The margin of safety M can be written as

M = R - L   (4-15)

where R is the structural strength or resistance, and L is the corresponding load effect. Therefore, the
probability of failure, Pr, is given by
Pf = P(M < 0) = P(R < L)   (4-16)
For a randomly generated value of L (or R), say, li (or ri), the probability of failure is given by, respectively,

Pfi = P(R < li) = FR(li)   (4-17a)

and

Pfi = P(L > ri) = 1 - FL(ri)   (4-17b)
where FR and FL are the cumulative distribution functions of R and L, respectively. In this formulation R
and L are assumed to be statistically uncorrelated random variables. Thus, for N simulation cycles, the
mean value of the probability of failure is given by the following equation:
Pf = (1/N) Σ_{i=1}^{N} Pfi   (4-18)
The variance (Var) and the coefficient of variation (COV) of the estimated probability of failure are
given by
Var(Pf) = Σ_{i=1}^{N} (Pfi - Pf)² / [N(N - 1)]   (4-19)

and

COV(Pf) = √Var(Pf) / Pf   (4-20)
where Xk is the control variable. The failure state according to Eq. (4-22) is given by Xk < gk; the survival state is given by Xk ≥ gk. For the ith simulation cycle, the probability of failure can be computed as

Pfi = FXk(gki)   (4-23)
where FXk is the cumulative distribution function of Xk. This value of Pfi is used in Eqs. (4-18) to (4-20) to compute the mean, variance, and coefficient of variation of the failure probability. According
to this method, the variance of the estimated quantity is reduced by removing the variability of the
control variable. In addition, the method converges to the correct probability of failure in a relatively
small number of simulation cycles.
Example 2: Flexural reliability of a beam using conditional expectation. Example 1 is solved
here again, using the conditional expectation method. The control variable was selected to be W, because
it has the largest coefficient of variation. Therefore, Y, S, and B were randomly generated, and the
cumulative distribution function of W was used to compute the probability of failure at each simulation
cycle. Then the probability of failure was estimated as the average of the failure probabilities from all the simulation cycles (Eq. [4-18]). Also, the COV(Pf) was computed by Eq. (4-20). The number of simulation cycles N was varied from 100 to 10000 to illustrate convergence of the simulation process. The
results are shown in Fig. 4-2.
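Example 2 can be sketched as below, again taking the Example 1 performance function as g = YS − WB²/4 (our assumed reading of Eq. (4-11)). Failure then corresponds to W > 4YS/B², so each cycle contributes Pfi = 1 − FW(4ys/b²):

```python
import math
import random

rng = random.Random(1)
N = 5000
pf_i = []
for _ in range(N):
    y = rng.gauss(38.0, 1.9)       # Table 4-1 moments for Y, S, and B
    s = rng.gauss(100.0, 5.0)
    b = rng.gauss(180.0, 9.0)
    w_star = 4.0 * y * s / b**2    # failure when W exceeds w* (assumed form of g)
    z = (w_star - 0.3) / 0.075     # control variable W ~ Normal(0.3, 0.075)
    pf_i.append(0.5 * math.erfc(z / math.sqrt(2.0)))       # Pfi = 1 - FW(w*)

pf = sum(pf_i) / N                                         # Eq. (4-18)
var_pf = sum((p - pf) ** 2 for p in pf_i) / (N * (N - 1))  # Eq. (4-19)
cov_pf = math.sqrt(var_pf) / pf                            # Eq. (4-20)
```

Because each cycle contributes a smooth probability instead of a 0/1 failure count, the estimate converges in far fewer cycles than direct simulation, which is the variance reduction the text describes.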
Ayyub and Haldar (1984), White and Ayyub (1985), and Law and Kelton (1982) provide additional
information on the conditional expectation method.
Pfi = [Pfi(1) + Pfi(2)]/2   (4-24)
Then the mean value of probability of failure can be calculated by Eq. (4-18). The AV technique can
be used in combination with the nongeneralized or generalized conditional expectation VRT described
in Sections 4.6 and 4.8, respectively. The negative correlation can be achieved by using, for example, U and 1 - U in the inverse transformation method for generating the random variables, as previously discussed. The use of the antithetic variates method results in additional reduction in the variance of
the estimated quantity and expedited convergence. This method can be considered a special case of the
stratified sampling technique with two strata. This method has been used in conjunction with the importance sampling technique by Schueller et al. (1989). The antithetic variates VRT is described in
detail by Ayyub and Haldar (1984), Law and Kelton (1982), and White and Ayyub (1985).
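The antithetic variates idea of Eq. (4-24) can be illustrated on a one-dimensional toy problem, Pf = P(X > 1) for a standard normal X, using u and 1 − u in the inverse transformation; the function names are ours:

```python
import random
from statistics import NormalDist

nd = NormalDist()   # standard normal distribution

def antithetic_pf(t, n_pairs=2000, seed=2):
    """Antithetic variates: each cycle evaluates the failure indicator at u
    and at 1 - u (inverse transformation), and Eq. (4-24) averages the two
    negatively correlated results.  Failure is X > t, X standard normal."""
    rnd = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        u = rnd.random()
        x1 = nd.inv_cdf(u)              # first sample
        x2 = nd.inv_cdf(1.0 - u)        # antithetic sample
        pf1 = 1.0 if x1 > t else 0.0    # indicator for each sample
        pf2 = 1.0 if x2 > t else 0.0
        total += 0.5 * (pf1 + pf2)      # Eq. (4-24)
    return total / n_pairs              # Eq. (4-18)

pf = antithetic_pf(t=1.0)   # exact answer is 1 - Phi(1), about 0.159
```

When one sample of a pair falls deep in the safe region, its antithetic partner tends toward the failure region, so the average of each pair fluctuates less than two independent samples would.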
Figure 4-2. (a) Estimated probability of failure for example 2. (b) Coefficient of variation of estimated failure probability for example 2.
3. In the jth simulation cycle, the conditional random variables (Xi, i = 1, 2, ..., n, excluding the m control variables) are generated randomly. The probability of failure in the jth simulation cycle is given by

Pfj = P[g(x1, x2, ..., Xk1, Xk2, ..., Xkm, ..., xn) < 0]   (4-25)

Here only the Xk1, Xk2, ..., Xkm are random variables, and the remaining (n - m) variables are deterministic
(generated) values of the conditional variables. The function g(.) in the above equation is usually a simpler
expression consisting of only m random variables and the probability expression can be evaluated using
any suitable (or convenient) method. For example, the reliability methods described in Chapters 2 and 3
of this book, such as the first-order second-moment (FOSM) method, or the advanced second-moment
(ASM) method (Hasofer and Lind 1974), or the various simulation techniques described in Sections 4.1 to
4.7, or other structural reliability methods can be used for this purpose. The choice of the m random variables
of Xk should be based on the intended method for the evaluation of the probability expression. Care should
be exercised in selecting the m variables that result in simplifying the reliability evaluation in addition to
reducing the dimensionality of the problem from n to (n - m). The simplification can be, for example, in
the form of (1) reducing a nonlinear performance function into a linear function that is used in the proba-
bility expression, (2) a closed-form expression for evaluating the probability expression, and/or (3) removing
random variables with nonnormal probability distributions from the expression. These concepts are best
explained using Example 3.
4. The failure probability and its coefficient of variation can be determined by Eqs. (4-18) and (4-20),
respectively.
Ayyub and Chia (1992) provide additional information on the generalized conditional expectation
method.
Example 3: Flexural reliability of a beam using generalized conditional expectation. Consider
the first-yield failure mode of a structural steel section subjected to a bending moment loading. The
performance function is
M = YS - Me   (4-26)
where Y is the yield stress of the material, S is the elastic section modulus, and Me is the moment effect due to applied loading. Two cases, one in which all variables are normally distributed and a second in
which the variables are nonnormal, are considered. The statistical characteristics of the variables are
shown in Table 4-2 (Ang and Tang, 1984). These variables are assumed to be statistically uncorrelated.
This example is first solved using the conditional expectation method, and then solved using the gen-
eralized conditional expectation method.
Conditional Expectation Method: The probability of failure of the structural component according to the first-yield failure mode can be expressed as

Pf = P(M < 0) = P(YS - Me < 0)   (4-27)
Table 4-2. Statistical Characteristics of the Random Variables for Example 3

Variable    Mean value        Coefficient of variation    Case 1     Case 2
Y           40.00 ksi         0.125                       Normal     Lognormal
S           50.00 in.³        0.050                       Normal     Lognormal
Me          1,000.0 kip·in.   0.200                       Normal     Type I (largest)
The control random variable, in this case, is selected as the random variable Me, because it has the
largest coefficient of variation (COV).
In this example two cases are considered, normal random variables (case 1) and nonnormal random
variables (case 2), as shown in Table 4-2. For the normal case, the probability of failure for the ith
simulation cycle is given by
Pfi = FMe(yi si) = Φ(yi si)   (4-28)

where FMe is the cumulative distribution function of Me, and Φ is the cumulative distribution function of a normal distribution with mean of 1000 and coefficient of variation of 0.2 (see Table 4-2). The
generated values of Y and Sin Eq. (4-28) are denoted Yi and Si, respectively.
Similarly, for the nonnormal case,

Pfi = FMe(yi si) = exp{-exp[-α(yi si - γ)]}   (4-29)

in which α and γ are the parameters of the type I largest extreme value distribution for Me. The sample
mean and coefficient of variation (COV) of the failure probability were then determined using Eqs.
(4-18) and (4-20), respectively, and are shown in Table 4-3 for the normal and nonnormal probability
distributions.
For the purpose of comparison, Pf was recalculated using the advanced second-moment (ASM) method (see Chapter 3). The results are 1.1 × 10⁻³ and 3 × 10⁻³ for the normal and nonnormal probability distributions, respectively.
Table 4-3. Simulation Results for Example 3

Simulation method               Number of cycles    Estimated probability of failure, Pf    COV of estimated Pf

Case 1: Normal
Direct Monte Carlo              200,000             0.00128                                 0.0625
Conditional expectation (CE)    40,000              0.00118                                 0.0460
Generalized CE                  500                 0.00118                                 0.0380

Case 2: Nonnormal
Direct Monte Carlo              100,000             0.00325                                 0.0560
Conditional expectation (CE)    2,000               0.00319                                 0.0460
Generalized CE                  500                 0.00300                                 0.0240
Generalized Conditional Expectation Method: With S generated as the conditional random variable, the probability of failure for the jth simulation cycle is

Pfj = P(Ysj - Me < 0)   (4-30)

where sj is a randomly generated value of S. The probability expression was then evaluated, for the normal probability distributions (case 1), as follows:

Pfj = 1 - Φ[(μY sj - μMe) / √(sj² σY² + σMe²)]   (4-31)

where μ is the mean value, and σ is the standard deviation. For the nonnormal probability distributions
(case 2), such an expression is not available. Therefore the advanced second-moment (ASM) method
was used to determine Pfj. Then the mean value and coefficient of variation of the failure probability
were determined using Eqs. (4-18) and (4-20), respectively, for N simulation cycles. The resulting
statistical characteristics of Pf are shown in Table 4-3 for the normal and nonnormal probability distributions, respectively.
By inspecting the results shown in Table 4-3, the advantages of the GCE method combined with the
advanced second-moment (ASM) method in expediting convergence are evident. However, the com-
putational effort in each simulation cycle of the GCE method is larger than that in each simulation
cycle of the CE method. The increase in the computational effort per cycle can be considered insignif-
icant compared to the reduction in the number of cycles. The main limitation of the GCE method is
that the control random variables must be selected such that they are statistically uncorrelated with the
conditional random variables.
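The normal case of this example can be sketched as below, generating the conditional variable S in each cycle and evaluating the closed form of Eq. (4-31); the variable names are ours:

```python
import math
import random

rng = random.Random(3)
N = 500

# Table 4-2, case 1: Y ~ Normal(40, 5), Me ~ Normal(1000, 200), S ~ Normal(50, 2.5).
mu_y, sd_y = 40.0, 5.0
mu_m, sd_m = 1000.0, 200.0

pf_j = []
for _ in range(N):
    s = rng.gauss(50.0, 2.5)    # conditional random variable S
    beta = (mu_y * s - mu_m) / math.sqrt(s**2 * sd_y**2 + sd_m**2)
    pf_j.append(0.5 * math.erfc(beta / math.sqrt(2.0)))  # Eq. (4-31): 1 - Phi(beta)

pf = sum(pf_j) / N              # Eq. (4-18); compare the GCE row of Table 4-3
```

Each cycle reduces the nonlinear, three-variable problem to a closed-form evaluation in the two remaining normal variables, which is why 500 cycles suffice where direct simulation needed 200,000.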
It is evident from example 3 that the assessment of failure probability on the basis of a nonlinear
performance function can be reduced to averaging N evaluations of the probability of structural failure
according to a linear expression of the performance function. This transformation can be achieved by
carefully selecting the control random variables. The probabilistic evaluation of the linear expression
was performed, in example 3, using the advanced second-moment method. Other methods could have
been used to achieve this objective. The choice of the ASM method was for the purpose of illustrating
merging moment reliability methods with conditional expectation in Monte Carlo simulation. This con-
cept can be utilized in complex performance functions to transform them into computationally man-
ageable formats. Although example 3 had an explicit performance function, the GCE method can be
used to solve problems with nonexplicit performance functions also.
A conceptual extension of the above principle is to reduce a structural reliability assessment problem
that has a performance function with nonnormal probability distributions to N probabilistic evaluations,
based on a performance expression with only normal probability distributions. These evaluations in-
volving normal distributions can be determined under certain conditions using closed-form equations.
A combination of the above two concepts can be utilized in solving difficult structural reliability
problems.
5. CONCLUDING REMARKS
In this chapter, a critical review of simulation methods for structural reliability assessment is provided.
The reviewed methods include direct simulation, importance sampling, stratified sampling, Latin hy-
percube sampling, adaptive sampling, conditional expectation, antithetic variates, generalized conditional
expectation, and response surface methods. Several examples are presented to illustrate the methods,
their strengths, and their weaknesses. Also, the examples show the benefits of using variance reduction
techniques in expediting convergence and increasing the accuracy of estimating the probability of failure.
REFERENCES
ANG, A. H.-S., and W. H. TANG (1984). Probability Concepts in Engineering Planning and Design, Vol. II: Decision,
Risk and Reliability. New York: John Wiley & Sons.
ANG, G. L., A. H.-S. ANG, and W. H. TANG (1989). Kernel method in importance sampling density estimation. In:
Proceedings of the 5th International Conference on Structural Safety and Reliability, Vol. 2. A. H.-S. Ang,
M. Shinozuka, and G. I. Schueller, Eds. New York: American Society of Civil Engineers, pp. 1193-1200.
AYYUB, B. M., and C.-Y. CHIA (1992). Generalized conditional expectation for structural reliability assessment.
Structural Safety, 11(2):131-146.
AYYUB, B. M., and A. HALDAR (1984). Practical structural reliability techniques. Journal of Structural Engineering,
American Society of Civil Engineers 110(8):1707-1724.
AYYUB, B. M., and K.-L. LAI (1989). Structural reliability assessment using Latin hypercube sampling. In: Proceedings of the 5th International Conference on Structural Safety and Reliability, Vol. 2. A. H.-S. Ang, M. Shinozuka, and G. I. Schueller, Eds. New York: American Society of Civil Engineers, pp. 1171-1184.
AYYUB, B. M., and K.-L. LAI (1991). Selective sampling in simulation-based reliability assessment. International
Journal of Pressure Vessels and Piping 46(2):229-249.
BJERAGER, P. (1987). Probability integration by directional simulation. Journal of Engineering Mechanics, Amer-
ican Society of Civil Engineers 114(8):1285-1302.
BOURGUND, U., and C. G. BUCHER (1986). Importance Sampling Procedures Using Design Points-a User Manual.
Report No. 8-86. Innsbruck, Austria: Institute of Engineering Mechanics, University of Innsbruck.
BUCHER, C. G. (1988). Adaptive sampling-an iterative fast Monte Carlo procedure. Structural Safety 5:119-126.
BUCHER, C. G., and U. BOURGUND (1987). Efficient Use of Response Surface Methods. Report 9-87. Innsbruck,
Austria: Institute of Engineering Mechanics, University of Innsbruck.
DITLEVSEN, O., and P. BJERAGER (1987). Plastic Reliability Analysis by Directional Simulation. DCAMM Report
353. Lyngby, Denmark: Technical University of Denmark.
HARBITZ, A. (1983). Efficient and accurate probability of failure calculation by FORM-SORM and updating by importance sampling. In: Proceedings of the 5th International Conference on Applications of Statistics and Probability Theory in Civil Engineering. A. Augusti, A. Borri, and G. Vannucchi, Eds., pp. 825-836.
HASOFER, A. M., and N. C. LIND (1974). Exact and invariant second-moment code format. Journal of Engineering
Mechanics, American Society of Civil Engineers 100(EM1):111-121.
IMAN, R. L., and W. J. CONOVER (1980). Small sample sensitivity analysis techniques for computer models with
an application to risk assessment. Communications in Statistics, Theory and Methods A9(17):1749-1842.
IMAN, R. L., and M. J. SHORTENCARIER (1984). A FORTRAN 77 Program and User's Guide for the Generation
of Latin Hypercube and Random Samples for Use with Computer Models. NUREG/CR-3624, SAND83-
2365. Washington, D.C.: Nuclear Regulatory Commission.
KARAMCHANDANI, A., P. BJERAGER, and C. A. CORNELL (1989). Adaptive importance sampling. In: Proceedings
of the 5th International Conference on Structural Safety and Reliability. A. H.-S. Ang, M. Shinozuka, and
G. I. Schueller, Eds. New York: American Society of Civil Engineers, pp. 855-862.
LAW, A. M., and W. D. KELTON (1982). Simulation Modeling and Analysis. New York: McGraw-Hill.
MADSEN, H. 0., S. KRENK, and N. C. LIND (1986). Methods of Structural Safety. Englewood Cliffs, New Jersey:
Prentice-Hall.
MELCHERS, R. E. (1987). Structural Reliability Analysis and Prediction. London: Ellis Horwood.
MELCHERS, R. E. (1989). Improved importance sampling for structural system reliability calculation. In: Proceed-
ings of the 5th International Conference on Structural Safety and Reliability. A. H.-S. Ang, M. Shinozuka,
and G. I. Schueller, Eds. New York: American Society of Civil Engineers, pp. 1185-1192.
RAND CORPORATION (1955). A Million Random Digits with 100,000 Normal Deviates. New York: The Free Press.
RUBINSTEIN, R. Y. (1981). Simulation and the Monte Carlo Method. New York: John Wiley & Sons.
SCHUELLER, G. I., and R. STIX (1987). A critical appraisal of methods to determine failure probabilities. Structural
Safety 4:293-309.
SCHUELLER, G. I., C. G. BUCHER, U. BOURGUND, and W. OUYPORNPRASERT (1989). An efficient computational
scheme to calculate structural failure probabilities. Probabilistic Engineering Mechanics 4(1):10-18.
WHITE, G. J., and B. M. AYYUB (1985). Reliability methods for ship structures. Naval Engineers Journal, ASNE
97(4):86-96.
5
PROBABILISTIC FINITE ELEMENT
METHOD*
1. INTRODUCTION
It is becoming increasingly evident that traditional deterministic methods will not be sufficient to prop-
erly design advanced structures or structural components subjected to a variety of complex loading
conditions. Because of uncertainty in loading conditions, material behavior, geometric configuration,
and supports, the stochastic computational mechanics, which accounts for all these uncertain aspects,
must be applied to provide rational reliability analysis and to describe the behavior of the structure.
The fundamentals of stochastic computational mechanics and its application to the analysis of uncertain
structural systems are summarized and recapitulated in a book by Liu and Belytschko (1989).
Although the theory of statistics and structural reliability has been used successfully in modeling the uncertain nature of structures and load environments and in computing the probability of failure, its ap-
plication is usually limited to simple structures with linear constitutive behavior. Because of the com-
plexity in the geometry, external loads, and nonlinear material behavior, more advanced computational
tools, such as finite element methods (FEMs) or boundary integral equation methods (BIEMs) have to
be employed to provide the necessary computational framework for analyzing complex structures. The
combination of these advanced computational tools with the theory of statistics and structural reliability
has become a rational way for the safety assessment and uncertainty characterization of complex struc-
tures. In this chapter, attention is focused on the development of the probabilistic finite element method
(PFEM), which combines the finite element method with statistics and reliability methods, and its
application to linear and nonlinear structural mechanics problems and fracture mechanics problems. The
novel computational tool based on the stochastic boundary element method (SBEM) is also given for
the reliability analysis of a curvilinear fatigue crack growth.
*The support of NASA Lewis Grant No. NAG3-822 for this research and the encouragement of Dr. Christos Chamis are
gratefully acknowledged. This work was also supported in part by the Federal Aviation Administration (FAA) Center for Aviation
Systems Reliability, operated by the Ames Laboratory, U.S. Department of Energy, for the FAA under Contract No. W-7405-
ENG-82 for work by Iowa State University and Northwestern University.
The existing PFEMs have been applied to solve two types of problems: (1) determination of the
response uncertainty in terms of the means, variance, and correlation coefficients, and (2) determination
of the probability of failure associated with prescribed limit states. Although the second-order statistical moments of a response are not sufficient for a complete reliability analysis, these moments offer useful
statistical information and serve as a measure of reliability. Furthermore, because of the unavailability
of multivariate distribution functions of random variables in most problems, a more accurate reliability
analysis may not be feasible.
The perturbation method has been used extensively in developing PFEM because of its simplicity
and versatility. Cambou (1975) appears to have been the first to apply the first-order perturbation method
for the finite element solution of linear static problems with loading and system stochasticity. Baecher
and Ingra (1981) also used the same techniques for settlement predictions. The perturbation method in
conjunction with the finite element method has also been adopted by Handa and Anderson (1981) for
static problems of beam and frame structures, by Ishii and Suzuki (1987) for slope stability reliability
analysis, and by Righetti and Harrop-Williams (1988) for static stress analysis for soils. The accuracy,
convergence, and computational efficiency of the perturbation method have been compared with those
from the Neumann expansion method and direct Monte Carlo simulation (MCS) method (Shinozuka
and Yamazaki, 1988; Shinozuka and Deodatis, 1988). The PFEM based on the second-order perturbation
approximation has been introduced by Hisada and Nakagiri (1981, 1985) for static problems and for
eigenvalue problems.
Extensive research on the PFEM has been conducted by the authors and their colleagues at Northwestern
University. The PFEM based on the second-order perturbation has been developed to estimate
the statistical moments of the response for linear static problems (Liu et al., 1986a), nonlinear dynamic
problems (Liu et al., 1986b), and inelastic problems (Liu et al., 1987). The formulation based on the
single-field variational principle has been extended by Liu et al. (1988a) to the three-field Hu-Washizu
variational principle formulation, which has far greater versatility. The numerical instability resulting
from the secular terms in the perturbation has been removed by Liu et al. (1988b) on the basis of
Fourier analysis. The perturbation methods have been shown to provide efficient and accurate results
for small random fluctuations in the random parameters. An extensive review of the application of
perturbation methods in developing the PFEM has been given by Benaroya and Rehak (1988).
The finite element method coupled with the first- and second-order reliability methods (FORM and
SORM) has been developed by Der Kiureghian and Ke (1985, 1988) for linear structural problems and
by Liu and Der Kiureghian (1991) for geometrically nonlinear problems. The most critical step in this
method is the development of an efficient search algorithm for locating the point at which the response
surface is to be expanded in a first- or second-order Taylor series. This point is obtained by an iterative
optimization algorithm, which involves repeated computation of the limit state function and response
derivatives. In contrast to the method of direct differentiation (Der Kiureghian and Ke, 1988; Liu and Der
Kiureghian, 1991; Zhang and Der Kiureghian, 1991), the PFEM based on the perturbation approximation
in conjunction with the FORM has been developed by Besterfield et al. (1990, 1991) for the
reliability analysis of brittle fracture and fatigue. In a slightly different context, a PFEM has been
developed by Faravelli (1986, 1989) that couples a response surface approach with a deterministic finite
element formulation. A finite element simulation coupled with polynomial response surface fitting
has also been proposed by Grigoriu (1982). Using a deterministic finite element code and finite differences,
an advanced algorithm based on fast probability integration (FPI) has been developed by Wu et
al. (1990) to generate the entire cumulative distribution function (CDF) of the response, or part of it.
The performance of FPI based on either the advanced mean value method or the advanced mean value
first-order method has been demonstrated by Cruse et al. (1988) through the reliability analysis of
turbine blades.
In addition to the PFEM, the stochastic boundary element method (SBEM) has been developed and
adopted by researchers. An SBEM that combines the deterministic boundary element method with
perturbation expansions has been developed by Ettouney et al. (1989) and Dasgupta (1992) for the
determination of the statistical moments of both displacements and tractions. Most recently, the authors
have developed an SBEM that combines the mixed boundary integral equation method (Lua et al.,
1992c) with the FORM, for the study of probabilistic fatigue crack growth (Lua et al., 1992d).
This chapter concentrates on the PFEM based on second-order perturbation and first-order reliability
methods. The chapter is organized as follows. In Section 3, the representation and discretization of
random fields are presented. The development of the PFEM for the general linear transient problem
and nonlinear elasticity, using the Hu-Washizu variational principle, are described in Sections 4 and 5,
respectively. The computational aspects are discussed in Section 6. The application of PFEM to the
reliability analysis is given in Section 7; two examples, one on brittle fracture reliability and the second
on fatigue crack growth reliability, are discussed. A novel stochastic computational tool based on SBEM
is presented in Section 8. The final conclusions are drawn in Section 9.
2.1. Notations
a Acceleration vector
a_0 Initial crack length
a_f Final crack length
B Generalized gradient matrix
b Random vector
C Elasticity tensor
COV Coefficient of variation
Cov[.] Covariance operator
D Material response matrix
D Fatigue parameter
E[.] Expectation operator
g[.] Performance function
J_s Surface Jacobian
J_v Volume Jacobian
L Lagrange functional
N_i Shape function associated with node i
N_s Sample size used in Monte Carlo simulation
n Fatigue parameter
neq Total number of displacement equations
P Applied load
P_f Probability of failure
p_i Applied internal pressure
q Total number of random parameters
r Uncorrelated standard normal variables
T Fatigue life
T_s Service life
Var[.] Variance
2.2. Abbreviations

CDF Cumulative distribution function
FEM Finite element method
FORM First-order reliability method
FPI Fast probability integration
HGQ Hermite-Gauss quadrature
HWVP Hu-Washizu variational principle
MCS Monte Carlo simulation
PFEM Probabilistic finite element method
PHWVP Probabilistic Hu-Washizu variational principle
SBEM Stochastic boundary element method
SORM Second-order reliability method
3.1. Background
The randomness of a stochastic system can be described in three forms: random variables, random
process in space, and random process in time. The random process in space is also called a random
field. The aspects of random fields and their application to engineering problems are given by Vanmarcke
(1984). The spectral representation of random processes by computer simulation has been proposed by
Shinozuka (1987).
The spatial variability of mechanical properties of a system and the intensity of a distributed load
can conveniently be represented by means of random fields. Because of the discrete nature of the finite
element formulation, the random field must also be discretized into random variables. This process is
commonly known as random field discretization. Various methods have been developed for the repre-
sentation of random fields. They are the midpoint method (Hisada and Nakagiri, 1985; Der Kiureghian
and Ke, 1988; Yamazaki et al., 1988), the spatial averaging method (Vanmarcke and Grigoriu, 1983),
the series expansion method (Lawrence, 1987; Spanos and Ghanem, 1988), and the interpolation method
(Liu et al., 1986a). In this section, the interpolation method (Liu et al., 1986a) is described. In this
method, the random field is represented by a set of deterministic shape functions and the random nodal
values of the field. The size of a random field element is controlled by the correlation length of the
field and the stability of the probability transformation used in the reliability methods (FORM and
SORM). The random field mesh should be fine enough to capture the fluctuation of the random field. On
the other hand, the random field mesh should not be so fine that highly correlated stochastic variables
of adjacent elements cause numerical instability in the probability transformation, which is required in
the reliability methods (FORM and SORM). As suggested by Der Kiureghian (1985), two separate
meshes, one for the finite elements and one for the random field, have to be used in the numerical implementation.
Because the computational effort in the determination of response derivatives or sensitivities is pro-
portional to the number of random variables, it is desirable to use as few random variables as possible
to represent a random field. To achieve this goal, the transformation of the original random variables
(random nodal values) into a set of uncorrelated random variables has been introduced by Liu et al.
(1986a) through an eigenvalue orthogonalization procedure. Comparison with Monte Carlo simulation
demonstrates that a few of these uncorrelated variables, those with the largest eigenvalues, are sufficient
for an accurate representation of the random field. This technique, along with other computational aspects, is
presented in Section 6.
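The eigenvalue orthogonalization just described can be sketched numerically. The snippet below (Python; the exponential covariance model and node layout are hypothetical, chosen only for illustration) rotates the correlated nodal values into uncorrelated variables and keeps only the dominant eigenvalues:

```python
import numpy as np

# Sketch of the eigenvalue orthogonalization: the correlated nodal
# values b are rotated into uncorrelated variables c, and only the
# modes with the largest eigenvalues are kept.  The exponential
# covariance and the node layout are hypothetical.
def reduce_random_field(cov, frac=0.99):
    """Keep the fewest eigenmodes capturing `frac` of the total variance."""
    lam, psi = np.linalg.eigh(cov)           # ascending eigenvalues
    lam, psi = lam[::-1], psi[:, ::-1]       # reorder to descending
    n = int(np.searchsorted(np.cumsum(lam) / lam.sum(), frac)) + 1
    return lam[:n], psi[:, :n]

x = np.linspace(0.0, 1.0, 10)                # 10 random-field nodes
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)  # correlation length 2
lam, psi = reduce_random_field(cov)
# A strongly correlated field needs only a few dominant modes.
```

Because the correlation length here exceeds the domain size, a handful of modes capture nearly all of the variance, which is exactly the reduction the chapter exploits.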
b(x) = \sum_{i=1}^{q} N_i(x) b_i    (5-1)
where N_i(x) represents the shape functions and b_i the discretized values of b(x) at x_i, i = 1, ..., q, and
each x_i is a vector of nodal coordinates (the three spatial coordinates of each nodal point). It follows
from Eq. (5-1) that

\Delta b(x) = \sum_{i=1}^{q} N_i(x) \Delta b_i    (5-2)
The mean and covariance of the discretized field then follow from Eq. (5-1) as

E[b(x)] = \sum_{i=1}^{q} N_i(x) E[b_i]    (5-6)

and

\mathrm{Cov}[b(x_k), b(x_m)] = \sum_{i=1}^{q} \sum_{j=1}^{q} N_i(x_k) N_j(x_m) \mathrm{Cov}[b_i, b_j]    (5-7)

or, in terms of the density function,

\mathrm{Cov}[b(x_k), b(x_m)] = \int \{b(x_k) - E[b(x_k)]\} \{b(x_m) - E[b(x_m)]\} f(b) \, db    (5-8)

respectively, where f(b) is the multivariate probability density function, and x_k and x_m are any two points
in the domain of x.
From second-moment analysis, the mean of any function S[b(x), x] at any point x_k and the covariance
of the function between any two points x_k and x_m can be written as

E[S_k] \approx \bar{S}_k + \frac{1}{2} \sum_{i=1}^{q} \sum_{j=1}^{q} \frac{\partial^2 \bar{S}_k}{\partial b_i \, \partial b_j} \mathrm{Cov}[b_i, b_j]    (5-9)

and

\mathrm{Cov}[S_k, S_m] \approx \sum_{i=1}^{q} \sum_{j=1}^{q} \frac{\partial \bar{S}_k}{\partial b_i} \frac{\partial \bar{S}_m}{\partial b_j} \mathrm{Cov}[b_i, b_j]    (5-10)

where

S_k = S[b(x), x_k]    (5-11)
and the superposed bar implies evaluation at the mean b̄. The error in Eqs. (5-9) and (5-10) arises from (1) the
truncation of higher-order moments and (2) the discretization of the random field b(x) by the finite
vector b. If the randomness in b(x) is small, then the first error will be small for a smooth function and
the second-moment analysis is applicable. The error due to discretization in Eqs. (5-6) and (5-8) has
been studied by Liu et al. (1987).
When the random field discretization is coupled with a FEM discretization, as in the PFEM, q need
not be equal to the number of finite elements (NUMEL) and the shape function Ni(x) need not be the
same as the finite element interpolants for the displacement field. As indicated before, two meshes, one
depending on structural topology and the other on correlation length, can be employed to improve the
computational efficiency.
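The interpolation of Eq. (5-1) can be sketched as follows; the node locations and nodal values are hypothetical, and piecewise-linear shape functions stand in for whatever interpolants a particular random field mesh would use:

```python
import numpy as np

# Sketch of Eq. (5-1): b(x) = sum_i N_i(x) b_i with piecewise-linear
# shape functions on a hypothetical 1-D random-field mesh.  Node
# locations and nodal values are illustrative, not from the chapter.
nodes = np.array([0.0, 0.5, 1.0])        # random-field mesh nodes x_i
b_nodal = np.array([2.0, 3.0, 2.5])      # discretized nodal values b_i

def field(x):
    """Interpolated field b(x); N_i are the hat functions of np.interp."""
    return np.interp(x, nodes, b_nodal)

# By linearity, E[b(x)] interpolates E[b_i] the same way (Eq. 5-6).
```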
The probabilistic finite element method (PFEM) is used to study systems with parametric uncertainties
in both the unknown function and mathematical operators acting on it. The loads can be either deter-
ministic or random. In this section, the second-order perturbation is employed to develop a PFEM for
a general linear transient problem. By applying the second-order perturbation, the random linear system
equations can be replaced by a finite number of deterministic system equations up to second order. The
effective load in each of these equations depends on the randomness of the system and the solutions
of all the lower-order equations.
Because of space limitations, a review of the deterministic finite element method is not given here.
The state of the art of finite element techniques can be found in a review article by Noor (1991). Using
either the single-field variational principle or the Galerkin formulation, the discretized linear equations
of motion are

M a(b, t) + K(b) d(b, t) = f(b, t)    (5-12)
where M and K(b) are the (neq × neq) global mass and stiffness matrices, respectively; a(b, t), d(b, t),
and f(b, t) are the (neq × 1) nodal acceleration, displacement, and force vectors, respectively; neq is
the number of equations; and b is a q-dimensional discretized random variable vector, that is, b_i = b(x_i),
where x_i is the spatial coordinate vector. The mass is usually assumed to be deterministic, whereas the
probability distributions of the stiffness and external force are represented by a generalized covariance
matrix, Cov[b_i, b_j], i, j = 1, ..., q. It is worth noting that the stiffness matrix can be expressed in terms
of the generalized gradient matrix, B(x), and the material response matrix, D(b, x). In this formulation,
the random vector, b, can represent a random material property (e.g., Young's modulus) and/or a random
load.
The application of second-order analysis to develop a PFEM involves expanding all random functions
about the mean value of the random vector b, denoted by b̄, via Taylor series, and retaining only terms
up to second order. That is, for a small parameter ε, the random displacement function
d(b, t) is expanded about b̄ via a second-order perturbation as follows:

d(b, t) = \bar{d}(t) + \sum_{i=1}^{q} \bar{d}_{b_i}(t) \Delta b_i + \frac{1}{2} \sum_{i=1}^{q} \sum_{j=1}^{q} \bar{d}_{b_i b_j}(t) \Delta b_i \Delta b_j    (5-13)

where \bar{d}(t), \bar{d}_{b_i}(t), and \bar{d}_{b_i b_j}(t) represent the mean displacement, the first-order variation of displacement
with respect to b_i evaluated at b̄, and the second-order variation of displacement with respect to b_i and
b_j evaluated at b̄, respectively, and Δb_i represents the first-order variation of b_i about b̄_i. In a similar
manner, K(b), a(b, t), and f(b, t) are also expanded about b̄ via second-order perturbations. Substituting
the second-order perturbations of the random functions d(b, t), K(b), a(b, t), and f(b, t) into Eq. (5-12)
and collecting terms of order 1, ε, and ε² yields the following equations.
1. Zeroth-order equation:

M \bar{a}(t) + \bar{K} \bar{d}(t) = \bar{f}(t)    (5-15)

2. First-order equations (i = 1, ..., q):

M \bar{a}_{b_i}(t) + \bar{K} \bar{d}_{b_i}(t) = \bar{f}_{b_i}(t) - \bar{K}_{b_i} \bar{d}(t)    (5-16)

3. Second-order equation:

M \bar{a}_2(t) + \bar{K} \bar{d}_2(t) = F_2(\bar{d}, t)    (5-17)

where

F_2(\bar{d}, t) = \frac{1}{2} \sum_{i=1}^{q} \sum_{j=1}^{q} \left\{ \bar{f}_{b_i b_j}(t) - \bar{K}_{b_i b_j} \bar{d}(t) - 2 \bar{K}_{b_i} \bar{d}_{b_j}(t) \right\} \mathrm{Cov}[b_i, b_j]    (5-18)
The solution process for Eqs. (5-14) through (5-20) can be performed in parallel because only one
effective stiffness matrix needs to be formed. Therefore, the total solution requires one factorization of
the effective stiffness matrix and q + 2 forward reductions and back substitutions of an (neq X neq)
system of linear equations to obtain the zeroth-, first-, and second-order solutions.
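The "one factorization, q + 2 back-substitutions" structure can be illustrated on a small static analogue; the 3-DOF stiffness matrix and the effective loads below are hypothetical (q = 2), and stacking the right-hand sides as columns lets a single factorization serve all of them:

```python
import numpy as np

# Sketch of the solution structure of Eqs. (5-15)-(5-18): one effective
# stiffness matrix serves the zeroth-, first-, and second-order systems,
# so the q + 2 right-hand sides can share a single factorization.
# The 3-DOF matrices and loads below are hypothetical (q = 2).
K = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
f0 = np.array([1.0, 0.0, 0.0])               # zeroth-order load
f1 = np.array([[0.1, 0.0, 0.0],              # q first-order effective loads
               [0.0, 0.2, 0.0]]).T
f2 = np.array([0.01, 0.02, 0.0])             # second-order effective load

rhs = np.column_stack([f0, f1, f2])          # q + 2 = 4 right-hand sides
sols = np.linalg.solve(K, rhs)               # factorize once, back-solve 4 times
d0, d_b1, d_b2, d2 = sols.T                  # zeroth-, first-, second-order solutions
```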
To illustrate the performance of the PFEM, a simple two-degree-of-freedom spring-mass system is
discussed here (Liu et al., 1987). The computed results are compared with those obtained using
(1) Monte Carlo simulation (MCS) and (2) Hermite-Gauss quadrature (HGQ) schemes. The problem is
depicted in Fig. 5-1. A sinusoidal vector forcing function is used (Eq. [5-21]).
The random spring constants K1 and K2 are normally distributed with a coefficient of variation equal
to 0.05. The mean spring constants are 24 × 10^6 and 12 × 10^6, respectively. The deterministic masses
m1 and m2 are 0.372 and 0.248, respectively. A stiffness-proportional damping of 3% is included. The
probabilistic equations derived earlier are solved by the implicit Newmark-β method (Ma, 1986). The
mean amplitude d1 is depicted in Fig. 5-2 for all three numerical methods: PFEM, HGQ, and MCS.
The PFEM solution compares very well with the other two methods; in fact, the results from these methods
are so close that they are not distinguishable in the figure. For the variance of d1, the PFEM solution,
plotted in Fig. 5-3, seems to overshoot the results of the other two methods at large time. The ±3σ
bounds for the displacement d1 are plotted in Fig. 5-4.
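A minimal implicit Newmark-β (average acceleration) sketch of the mean system is given below; the forcing amplitude and frequency and the damping coefficient are placeholders, not the values used to produce Figs. 5-2 to 5-4:

```python
import numpy as np

# Implicit Newmark-beta (average acceleration) sketch for the mean
# two-DOF spring-mass system of Fig. 5-1.  The forcing amplitude and
# frequency and the damping coefficient are placeholders, NOT the
# values used to produce Figs. 5-2 to 5-4.
beta, gamma, dt, nsteps = 0.25, 0.5, 1e-5, 1000
M = np.diag([0.372, 0.248])                   # deterministic masses
k1, k2 = 24e6, 12e6                           # mean spring constants
K = np.array([[k1 + k2, -k2], [-k2, k2]])
C = 2e-6 * K                                  # placeholder stiffness-proportional damping

def force(t):
    """Sinusoidal vector forcing (hypothetical amplitude/frequency)."""
    return np.array([0.0, 100.0 * np.sin(2000.0 * t)])

K_eff = M / (beta * dt**2) + gamma / (beta * dt) * C + K
d = np.zeros(2)
v = np.zeros(2)
a = np.linalg.solve(M, force(0.0) - C @ v - K @ d)
for i in range(1, nsteps + 1):
    t = i * dt
    rhs = (force(t)
           + M @ (d / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
           + C @ (gamma / (beta * dt) * d + (gamma / beta - 1.0) * v
                  + dt * (0.5 * gamma / beta - 1.0) * a))
    d_new = np.linalg.solve(K_eff, rhs)
    a_new = (d_new - d - dt * v) / (beta * dt**2) - (0.5 / beta - 1.0) * a
    v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
    d, a = d_new, a_new
```

In the PFEM, the same integrator is applied to each of the zeroth-, first-, and second-order systems, which share the effective matrix K_eff.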
The probabilistic finite element method has been developed in the previous section using the single-field
variational principle. Because of the direct stiffness matrix approach used, it can be applied only
to a limited class of problems, with uncertainty in the loading and material properties. To handle
problems with randomness in the equilibrium equations, domain, and boundary conditions consistently,
the three-field Hu-Washizu variational principle is employed to develop the PFEM in this section. An
additional advantage of using the Hu-Washizu variational principle involves the elimination of the
locking phenomena (Belytschko and Bachrach, 1986) and suppression of hourglass modes (Belytschko
et al., 1984). Solution of the three stationary conditions for the compatibility relation, the constitutive law, and
equilibrium yields the variations in displacement, strain, and stress. Statistics such as the expectation,
autocovariance, and correlation of the displacement, strain, and stress are then determined.
Using matrix notation, the Hu-Washizu variational principle (HWVP) for nonlinear problems
adopted in this section is (see Liu et al., 1988c)

(5-22)
where ε, σ, and u are independent random field variables representing the nonsymmetric measure of
the strain, the first Piola-Kirchhoff stress, and the displacement, respectively; ψ is a nonlinear function of the
deformation gradient; and a superscript T denotes the transpose. In Eq. (5-22), Ω, ∂Ω_h, F, h, and ∇u
represent the domain, the traction boundary, the body force vector, the prescribed traction vector, and the nonsymmetric
part of the displacement gradient, respectively; δ denotes a virtual quantity. The surface and
volume integrals in Eq. (5-22) can be expressed via parametric representation:
d\Gamma = J_s \, dA    and    d\Omega = J_v \, dR    (5-23)

respectively, where J_s and J_v represent the surface and volume Jacobians, respectively, and R and A
represent the reference domain and boundary, respectively. Random domains and boundaries are incorporated
into the formulation through randomness in the gradient operator and Jacobians. The application
of second-order perturbation techniques in the HWVP involves the expansion of all random functions
about the mean value of the random field b(x), denoted by b̄(x), and retaining terms only up to second
order; that is, for a given small parameter [ε = the scale of randomness in b(x)], the random function
Figure 5-2. Comparison of the mean displacement at node 1, using the probabilistic finite element method (PFEM),
Hermite-Gauss quadrature (HGQ), and Monte Carlo simulation (MCS). (Note that the three solutions are so
close that they are not distinguishable.)
Figure 5-3. Comparison of the variance of displacement at node 1, using the probabilistic finite element method
(PFEM), Hermite-Gauss quadrature (HGQ), and Monte Carlo simulation (MCS).
Figure 5-4. Upper and lower bounds (±3σ) of the displacement at node 1, using the probabilistic finite element method (PFEM).
C(x, b) = C^0 + C' + C''    (5-24)

where the superscripts nought, prime, and double prime represent the random function evaluated at b̄,
the first-order variation due to variations in b, and the second-order variation, respectively. The first
elasticity tensor, C in Eq. (5-24), is given by

C = \frac{\partial^2 W}{\partial G \, \partial G}    (5-25)

where W is the strain energy density function and G is the deformation gradient. Similarly, the remaining
random functions ε, σ, F, h, J_s, J_v, ∇u, and δ(∇u)^T can also be expressed as second-order perturbations
(see Liu et al., 1988a). After substituting the second-order perturbations of all these random functions
into Eq. (5-22), the following three equations for the zeroth-, first-, and second-order nonlinear PHWVP
are obtained.
(5-26)

(5-27)

(5-28)

To discretize the random field, the elasticity tensor is interpolated with shape functions over the random field mesh:

C(x, b) = \sum_{I=1}^{q} \phi_I(x) C_I(b)    (5-29)

or

C(x, b) = C^0(x) + C'(x) + C''(x)    (5-30)

where φ_I(x) are the q shape functions; C_I^0 denotes the Ith nodal value of C evaluated at b̄; C' denotes
the first-order variation of C(x, b) due to the variations Δb_i; and C'' denotes the second-order variation.
The last two are then expanded in terms of the random variables b_i and given by

C' = \sum_{i=1}^{q} (C')_i \Delta b_i    (5-31)

and

C'' = \frac{1}{2} \sum_{i=1}^{q} \sum_{j=1}^{q} (C'')_{ij} \Delta b_i \Delta b_j    (5-32)
respectively. The factor 1/2 is included in order to be consistent with the second-order Taylor series
expansion. The nodal values (C')_i and (C'')_{ij} can be obtained by partial differentiation of C or by a
least-squares fit to the actual data. Similar definitions can be developed for the rest of the random
functions (see Liu et al., 1988a).
Substituting the above approximations of all random functions into the zeroth-order, first-order, and second-order
PHWVPs (Eqs. [5-26]-[5-28]), and using the three stationary conditions (strain-displacement,
stress-strain, and equilibrium), the zeroth-, first-, and second-order equations can be obtained (see Liu et
al., 1988a). The zeroth-order equations require an iterative solution technique, but the first-order and
second-order equations are linear. After determining the zeroth-, first-, and second-order solutions, the
expectations and autocovariance matrices of the displacements, strains, and stresses can be obtained.
The applicability and effectiveness of the PFEM for nonlinear problems was demonstrated by Liu
et al. (1988a) through the problem of a cantilever beam subjected to large deflection. The Saint Venant-Kirchhoff
model for nonlinear elasticity, with randomness in the external force, beam height, and material
properties, was considered. The probability distributions of displacement, strain, and stress were also
computed. The static elastic/plastic analysis of a turbine blade in the presence of random load, random
yield stress, and random length has also been performed by Liu et al. (1988d).
6. COMPUTATIONAL ASPECTS

To reduce the computational effort, the random variables can be transformed to uncorrelated normal
form through an eigenvalue problem, as shown below. The transformed random variables c satisfy

\mathrm{Cov}[c_i, c_j] = 0    (for i ≠ j)    (5-33)

and

\mathrm{Var}[c_i, c_j] = \mathrm{Var}[c_i]    (for i = j)    (5-34)
Therefore, the number of evaluations is proportional to q. The above is achieved through the
eigenproblem

\Omega \Psi = \Psi \Lambda    (5-35)

where the Ω and Λ matrices denote Cov[b_i, b_j] and Var[c_i, c_j], respectively; Ψ is a constant q × q
fundamental matrix with the properties

\Psi^T \Psi = I    (5-36)

and

b = \bar{b} + \Psi c,    c = \Psi^T (b - \bar{b})    (5-37), (5-38)

I is the q × q identity matrix and c is the transformed q × 1 vector of random variables. Thus, the
discretized random vector b is transformed to an uncorrelated random vector c, with the variances of c
given by the eigenvalues of Ω in Eq. (5-35).
With Eqs. (5-37) and (5-38), the mixed derivatives appearing in Section 5 reduce to second derivatives,
and Var[b_i, b_j] reduces to Var[c_i]. Thus, the mean of any function S[b(x), x] at any point x_k and
the covariance of the function between any two points x_k and x_m can be written as

E[S_k] \approx \bar{S}_k + \frac{1}{2} \sum_{i=1}^{q} \frac{\partial^2 \bar{S}_k}{\partial c_i^2} \mathrm{Var}[c_i]    (5-39)

and

\mathrm{Cov}[S_k, S_m] \approx \sum_{i=1}^{q} \frac{\partial \bar{S}_k}{\partial c_i} \frac{\partial \bar{S}_m}{\partial c_i} \mathrm{Var}[c_i]    (5-40)
respectively.
It is observed that for one-dimensional random fields, as the correlation length increases from zero
to a large value, the number n of dominant eigenvalues (n ≤ q) necessary to evaluate the mean and
covariance in Eqs. (5-39) and (5-40) to a specified accuracy decreases from q to 1. When the correlation
length is zero the random field is uncorrelated and all q eigenvalues are dominant. As the field is
uncorrelated, all q random variables are necessary to represent the randomness of the field. As the
correlation length increases the number of dominant eigenvalues decreases. Eventually, for a very large
correlation length the random field is closely correlated and there is just one dominant eigenvalue. As
the field is closely correlated, only one random variable, corresponding to the largest eigenvalue, is
sufficient to represent the randomness of the field. This feature, when present, can easily be exploited
to reduce the computations. The value of n can be chosen on the basis of the distribution of the
eigenvalues before solving the PFEM equations. The eigenvalues here can be interpreted as weighting
factors for the corresponding mode shapes necessary to represent the covariance structure; a large
eigenvalue means a dominant mode and vice versa. Results of the eigenvalue distribution and selection
of n, for a beam problem and a bar problem, are discussed in Liu et al. (1986a, 1987).
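The trend can be reproduced with a small numerical experiment; the exponential covariance model below is a hypothetical stand-in for whatever covariance structure a given problem has:

```python
import numpy as np

# Numerical illustration of the eigenvalue trend: the number of modes
# needed to capture 99% of the variance of an exponential covariance
# (a hypothetical model) drops from q toward 1 as the correlation
# length grows.
def dominant_modes(corr_len, q=20, tol=0.99):
    x = np.linspace(0.0, 1.0, q)
    if corr_len == 0.0:
        cov = np.eye(q)                      # uncorrelated field
    else:
        cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return int(np.searchsorted(np.cumsum(lam) / lam.sum(), tol)) + 1
```

For a zero correlation length all q eigenvalues are needed, while a correlation length much larger than the domain leaves essentially one dominant mode.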
Consider a typical function Π(c, d) involving the displacements d and the random variables c. Chain
differentiation yields

\frac{d\Pi}{dc_i} = \frac{\partial \Pi}{\partial c_i} + \left(\frac{\partial \Pi}{\partial d}\right)^T d_i'    (5-41)

where

d_i' \equiv \frac{\partial d}{\partial c_i}    (5-42)

Using the first-order equation of the PFEM in the transformed space, that is,

K d_i' = r_i'    (5-43)

the total derivative becomes

\frac{d\Pi}{dc_i} = \frac{\partial \Pi}{\partial c_i} + \left(\frac{\partial \Pi}{\partial d}\right)^T K^{-1} r_i'    (5-44)

Usually, in the direct method, the above equation is evaluated for each random variable c_i, involving n
solutions of the linear equation [Eq. (5-43)]. In the adjoint method, λ is selected to satisfy

K^T \lambda = \frac{\partial \Pi}{\partial d}    (5-45)

so that

\frac{d\Pi}{dc_i} = \frac{\partial \Pi}{\partial c_i} + \lambda^T r_i'    (5-46)

The adjoint problem, Eq. (5-45), is solved only once in this method. In the direct method, n solutions
of Eq. (5-43) are required. This is the advantage of the adjoint method over the direct method. Both
methods require n inner products with r_i', in Eqs. (5-41) and (5-46), respectively.
However, it has been shown that when the number of functions is more than the number of random
variables, the computational advantage of the adjoint method is lost (Liu et al., 1988d). By solving q
adjoint problems, the second-order sensitivities can also be evaluated. It should be noted that the adjoint
method is applicable to nonlinear problems as well, as the first- and second-order equations are still
linear.
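The direct/adjoint equivalence can be checked on a small hypothetical system; both routes below compute dΠ/dc_i for a linear response Π = sᵀd, and the adjoint route needs only one extra solve:

```python
import numpy as np

# Direct vs. adjoint sensitivities of a scalar response P = s.d with
# K d = f(c) and r_i' = df/dc_i.  All matrices and vectors here are
# hypothetical; the point is that the two routes agree while the
# adjoint route needs only one extra solve.
K = np.array([[4.0, -1.0], [-1.0, 3.0]])
s = np.array([1.0, 2.0])                     # dP/dd
r_prime = np.array([[1.0, 0.0],              # row i holds r_i' = df/dc_i
                    [0.5, 1.0]])

# Direct method: one linear solve per random variable c_i
direct = np.array([s @ np.linalg.solve(K, r) for r in r_prime])

# Adjoint method: a single solve with K^T, then inner products
lam = np.linalg.solve(K.T, s)
adjoint = r_prime @ lam
```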
The parallel implementation of the PFEM can be easily achieved in the solution of the first-order
equations (sensitivity analysis). As shown from Eqs. (5-14) to (5-20), only one effective stiffness matrix
needs to be formulated. Once the zeroth-order solution is obtained, q equations (Eq. [5-15]) can be
solved in parallel to determine the response derivatives. Multiple levels of parallelism can be achieved
if substructuring (Komzsik and Rose, 1991), domain decomposition (Chan et al., 1989), and operator
splitting (Sues et al., 1992a) are also employed in the PFEM.
The probability of failure is given by the multiple integral

P_f = \int_{g(b) \le 0} f_B(b) \, db    (5-48)

where f_B(b) is the multivariate density function of b. Two difficulties are associated with Eq. (5-48).
First, the domain of integration [g(b) ≤ 0] is an implicit function of the random vector b. Second,
standard numerical integration of this multiple integral becomes prohibitively complicated as the number of
random variables becomes large. Two approaches, namely, Monte Carlo simulation (MCS) and failure
surface approximation methods such as the first- or second-order reliability method (FORM or SORM),
have been employed extensively to evaluate Eq. (5-48). In the FORM, the limit state surface in the
standard normal space is represented by the tangent hyperplane at the design point. In the SORM, the
¹FORM and SORM are discussed in Chapter 3 and Monte Carlo simulation is discussed in Chapter 4 of this book.
limit state surface in the standard normal space is replaced by a quadratic surface that is tangent at
the design point. Although MCS is completely general, it is very expensive and time consuming for
small probabilities of failure, which are the major concern in reliability engineering. The FORM and SORM
are more accurate and efficient for extreme probabilities of failure (e.g., 0.0001 or 0.9999); however,
their implementation can be more complex. In this chapter, the FORM is applied to predict the reliability of
a flawed structural component.
In order to make use of the properties of the standard normal space (rotationally symmetric and
exponential decay), a transformation is introduced to map the original random variables b to a set of
standard, uncorrelated normal variables r (see Rosenblatt, 1952). Equation (5-48) in the r space becomes
P_f = \int_{g(r) \le 0} (2\pi)^{-q/2} \exp\left(-\frac{1}{2} r^T r\right) dr    (5-49)

where the superscript T denotes the transpose of a vector or a matrix, and g(r) {= g[b(r)]} is the performance function
in the transformed r space. The FORM approximates the integral in Eq. (5-49) as follows: first, the
point r* on the limit state surface [g(r) = 0] that has the minimum distance to the origin is found
through an iterative algorithm; then the limit state surface at the design point r* is replaced with the
tangent hyperplane

g(r) \approx \nabla g(r^*)^T (r - r^*)    (5-50)

which yields the first-order approximation

P_f \approx \Phi(-\beta)    (5-51)

where

\beta = -\frac{\nabla g(r^*)^T r^*}{\| \nabla g(r^*) \|}    (5-52)

and Φ(·) is the standard normal cumulative probability. The step of determining the most probable point
(r*) on the failure surface is the most critical in the reliability analysis. It generally requires the
development of an iterative optimization scheme that calculates the gradients of the performance
function.
In this chapter, the reliability index β is determined by solving the following optimization problem
in r space:

\beta = \min_{g(r) = 0} (r^T r)^{1/2}    (5-53)
The optimization can be solved using any general nonlinear optimization algorithm such as the HL-
RF method (Hasofer and Lind, 1974; Hohenbichler and Rackwitz, 1981; Rackwitz and Fiessler, 1978),
gradient projection method (Haug and Arora, 1979), and the modified HL-RF method (Der Kiureghian
and Liu, 1988). A fast convergence rate is essential in selecting an iteration method.
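A textbook form of the HL-RF iteration is sketched below; this is the generic algorithm, not the chapter's implementation, and the linear limit state is a hypothetical example:

```python
import numpy as np

# Textbook HL-RF iteration for the design point r* of g(r) = 0 in
# standard normal space (a generic sketch, not the chapter's code).
def hl_rf(g, grad_g, r0, tol=1e-10, maxit=100):
    r = np.asarray(r0, dtype=float)
    for _ in range(maxit):
        gv, gr = g(r), grad_g(r)
        r_new = (gr @ r - gv) * gr / (gr @ gr)   # HL-RF update
        if np.linalg.norm(r_new - r) < tol:
            return r_new
        r = r_new
    return r

# Hypothetical linear limit state g(r) = 3 - r1 - r2
g = lambda r: 3.0 - r[0] - r[1]
grad = lambda r: np.array([-1.0, -1.0])
r_star = hl_rf(g, grad, [0.0, 0.0])
beta = np.linalg.norm(r_star)                # reliability index
```

For a linear limit state the iteration converges in one step; for nonlinear states each iteration requires a fresh evaluation of the performance function and its gradient, which is where the PFEM response derivatives enter.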
The second-order reliability method, based on the second-order Taylor expansion of the failure sur-
face, is given by Fiessler et al. (1979), Breitung (1984), Der Kiureghian et al. (1987), and Tvedt (1983).
It is also discussed in Chapter 3 of this book.
The PFEM may be used for the computation of structural failure probabilities related to any type of
failure mode, such as yielding, plastic collapse, buckling, creep, fracture, and fatigue (Ghanem and
Spanos, 1991; Haldar and Mahadevan, 1991; Liu and Der Kiureghian, 1991; Haldar and Zhou, 1992).
A stochastic damage model for a multiphase material has been proposed by Lua et al. (1992a,b) to
quantify the inherent statistical distribution of the fracture toughness in the presence of random microcracks.
Examples of PFEM-based reliability analysis with respect to fracture and fatigue modes are
described in the next two sections.
K(b) = \begin{bmatrix} R(b) & C(b) \\ C(b)^T & E(b) \end{bmatrix}    (5-56)
In Eqs. (5-54) through (5-56), d and h are the regular displacement and external force vectors, respec-
tively; R, E, and C are the [neq X neq] regular stiffness matrix, the [2 X 2] stiffness matrix from the
enriched terms, and the [neq X 2] coupled stiffness matrix from the regular and enriched terms, re-
spectively. The other submatrices in Eq. (5-55) are
K(b) = \begin{bmatrix} K_I(b) \\ K_{II}(b) \end{bmatrix}    and    f_e(b) = \begin{bmatrix} f_I(b) \\ f_{II}(b) \end{bmatrix}    (5-57)

where the two terms f_I and f_II are zero if the enriched element is not on a loaded boundary. Equations
(5-54) through (5-57) are solved by condensing out the stress intensity factors (i.e., static condensation).
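The static condensation amounts to a Schur-complement solve on the partitioned system; the matrices below are randomly generated stand-ins, not real finite element data:

```python
import numpy as np

# Schur-complement sketch of the static condensation in Eqs. (5-54)-(5-57):
#   [[R, C], [C^T, E]] [d; k] = [h; f_e],   k = (K_I, K_II)
# All matrices are randomly generated stand-ins, not real FE data.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
R = A @ A.T + 10.0 * np.eye(5)               # regular stiffness (SPD)
C = rng.standard_normal((5, 2))              # regular/enriched coupling
E = np.array([[5.0, 0.5], [0.5, 4.0]])       # enriched-term stiffness
h = rng.standard_normal(5)                   # regular external force
f_e = np.array([1.0, 0.0])                   # enriched load terms

S = E - C.T @ np.linalg.solve(R, C)          # condensed (Schur) matrix
k = np.linalg.solve(S, f_e - C.T @ np.linalg.solve(R, h))  # SIF vector
d = np.linalg.solve(R, h - C @ k)            # recovered displacements
```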
For mixed mode I and mode II fracture, several kinds of fracture criteria have been summarized by
Wu and Li (1989). Among these criteria, the most widely used are the maximum principal stress criterion
proposed by Erdogan and Sih (1963) and the minimum strain energy density criterion by Sih (1974).
In the case of mixed mode fatigue, the fatigue laws are generally based on an equivalent mode I case
to simulate the actual mixed mode behavior. To be consistent with the mixed mode fatigue laws, the
maximum principal stress criterion (Erdogan and Sih, 1963) is applied here to determine the equivalent
mode I stress intensity factor. Thus, the performance function for mixed mode fracture can be
expressed as

g(b) = K_c - K_{eq}(b)    (5-58)

Equation (5-58) implies that fracture occurs when the equivalent mode I stress intensity factor, K_eq,
exceeds the critical value, K_c. The direction of crack growth, along which the hoop stress becomes
maximum, is given by

K^T \Theta = K_I \sin\theta + K_{II} (3\cos\theta - 1) = 0    (5-59)

where

\Theta = \begin{bmatrix} \sin\theta \\ 3\cos\theta - 1 \end{bmatrix}    (5-60)

In Eq. (5-60), the crack direction angle θ is measured from the current crack line. The relation between
the equivalent mode I stress intensity factor (K_eq) and the stress intensity factors K_I and K_II is given by

K_{eq} = \Lambda^T K    (5-61)

where

\Lambda = \begin{bmatrix} \cos^3(\theta/2) \\ -3 \cos^2(\theta/2) \sin(\theta/2) \end{bmatrix}    (5-62)
and θ is determined by Eq. (5-59). When only mode I or mode II fracture is present, Eq. (5-58) can
be rewritten as

(5-63)

where K_c is given by

(5-64)
In Eq. (5-64), K_Ic stands for the fracture toughness. As indicated in Section 7.1, the determination of
the reliability index for calculating the first-order probability of failure in the FORM is achieved by
solving an optimization problem with one constraint (the limit state condition). To incorporate other constraints,
such as the equations of equilibrium or the crack direction law (in the fatigue crack growth problem),
into the formulation, the method of Lagrange multipliers can be applied. The statement of the optimization
problem for brittle fracture is described in the following paragraphs.
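Before turning to the optimization statement, note that the crack-direction condition above reduces to a quadratic in tan(θ/2), so the angle and the equivalent mode I factor can be evaluated in closed form; the sketch below uses hypothetical K values:

```python
import numpy as np

# Sketch of Eqs. (5-59)-(5-62): the crack-direction condition
# K_I sin(t) + K_II (3 cos(t) - 1) = 0 reduces to a quadratic in
# tan(t/2), giving the angle and K_eq in closed form.  K values are
# hypothetical.
def crack_angle(KI, KII):
    """Max-hoop-stress crack direction, measured from the crack line."""
    if KII == 0.0:
        return 0.0                           # pure mode I grows straight ahead
    t = (KI - np.sqrt(KI**2 + 8.0 * KII**2)) / (4.0 * KII)
    return 2.0 * np.arctan(t)

def K_eq(KI, KII):
    """Equivalent mode I SIF along the max-hoop-stress direction."""
    th = crack_angle(KI, KII)
    return np.cos(th / 2.0) * (KI * np.cos(th / 2.0)**2
                               - 1.5 * KII * np.sin(th))

# Pure mode I: angle 0 and K_eq = K_I
```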
The nonlinear programming problem consists of determining the correlated random variables, b =
[b_1, ..., b_q]^T, and the generalized displacements, δ^T = [d^T, K^T], that minimize the distance from the
origin to the limit state surface in the uncorrelated standard normal space. The minimum distance is
termed the reliability index β (Eq. [5-53]). The minimization,

minimize r^T r    (5-65)

is subject to the following constraint:

g(b) ≤ 0    (5-66)

(i.e., the performance function being on the limit state surface or in the failure region is a constraint
in the optimization problem).
Equations (5-65) and (5-66) are converted to the Kuhn-Tucker problem (Arora, 1989) by defining
a Lagrange functional, L, of independent variables b, δ, μ, λ, and α as follows:

L = rTr + μT(f - Kδ) + λ(g + α²)    (5-67)

where μ is a Lagrange multiplier for equilibrium, λ ≥ 0 is a Lagrange multiplier for the inequality
constraint, and α is a slack variable that is introduced to ensure that g ≤ 0. Depending on the sign of
λ, the function to be minimized will increase or decrease with a change in g. In other words, if λ ≥
0, then rTr will decrease (i.e., minimize) while g ≤ 0 (Converse, 1970). The Kuhn-Tucker necessary
conditions for the minimization of Eq. (5-67) are obtained by setting the derivatives of the Lagrange
function with respect to the independent variables b, δ, μ, λ, and α to zero, that is,
∂L/∂b = ∂(rTr)/∂b + μT[∂(f - Kδ)/∂b] + λ ∂g/∂b = 0    (5-68)

∂L/∂δ = -μTK + λ ∂g/∂δ = 0    (5-69)

∂L/∂μ = f - Kδ = 0    (5-70)

∂L/∂λ = g + α² = 0    (5-71)

∂L/∂α = 2λα = 0    (5-72)
The optimization requires the solutions of Eqs. (5-68) through (5-72) for b, δ, μ, λ ≥ 0, and α. Equation
(5-70) is simply equilibrium; and Eqs. (5-71) and (5-72) can be simplified to eliminate the slack variable
α such that λg = 0 and g ≤ 0, which ensures that λ ≥ 0.
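The elimination of the slack variable can be seen on a one-dimensional toy problem (hypothetical, chosen only to exhibit λg = 0, g ≤ 0, and λ ≥ 0): minimize u² subject to g(u) = 1 - u ≤ 0.

```python
# Toy problem: minimize f(u) = u**2 subject to g(u) = 1 - u <= 0.
# Case lam = 0: the unconstrained minimum u = 0 gives g(0) = 1 > 0 (infeasible).
# Case g = 0: the constraint is active, so u* = 1; stationarity of
# L = u**2 + lam*(1 - u) gives dL/du = 2u - lam = 0, hence lam* = 2.
u_star = 1.0                     # active-constraint solution
lam_star = 2.0 * u_star          # from stationarity dL/du = 0
g_star = 1.0 - u_star

assert g_star <= 0.0             # feasibility
assert lam_star >= 0.0           # multiplier sign condition
assert lam_star * g_star == 0.0  # complementary slackness (lam * g = 0)
print(u_star, lam_star)          # prints: 1.0 2.0
```

The case split (either λ = 0 with the constraint inactive, or g = 0 with λ ≥ 0) is exactly what remains after the slack variable α is eliminated.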
Because δ and b are independent variables in the Lagrange function (see Eq. [5-67]), the partial
derivative of the second term with respect to b in Eq. (5-68) can be expressed as

∂(f - Kδ)/∂b = ∂f/∂b - (∂K/∂b)δ    (5-73)
To simplify the right-hand side of Eq. (5-73), the first-order probabilistic finite element equation (Eq.
[5-15]) is used. For the present static problem, Eq. (5-15) may be written as

K(∂δ/∂b) = ∂f/∂b - (∂K/∂b)δ    (5-74)

Substituting Eq. (5-74) into Eq. (5-73) yields

∂(f - Kδ)/∂b = K(∂δ/∂b)    (5-75)
Now multiplying each side of Eq. (5-75) by μT and using Eq. (5-69) in the right-hand side, we obtain

μT ∂(f - Kδ)/∂b = λ(∂g/∂δ)(∂δ/∂b)    (5-76)

Because the performance function depends on the generalized displacements δ only through the stress
intensity factors K, this becomes

μT ∂(f - Kδ)/∂b = λ(∂g/∂K)(∂K/∂b)    (5-77)

(5-78)

when λ ≠ 0. Here

(5-79)

(5-80)
In Eq. (5-80), ∂(rTr)/∂b is computed either explicitly or by finite difference, depending on whether the random
variables are normal or nonnormal. To perform the sensitivity analysis on the stress intensity factors,
namely ∂K/∂b, the probabilistic finite element method described in Section 4 can be used. Because we
are interested only in the sensitivity for the stress intensity factors, considerable computational effort
can be saved by using the adjoint method as described in Section 6.2. The iteration algorithm for the
brittle fracture reliability is given by Besterfield et al. (1990).
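As context for the iteration mentioned above, the following is a minimal sketch of a FORM iteration of the HL-RF type (Rackwitz and Fiessler, 1978). It handles only the limit state constraint in standard normal space, with no equilibrium constraint or Lagrange multipliers, and takes an arbitrary user-supplied limit state function, so it illustrates the projection idea rather than the algorithm of Besterfield et al. (1990).

```python
import math

def hlrf(g, grad_g, n, tol=1e-8, max_iter=100):
    """HL-RF iteration for the reliability index in uncorrelated
    standard normal space: repeatedly project onto the linearized
    limit state surface g(u) = 0."""
    u = [0.0] * n
    for _ in range(max_iter):
        gu = g(u)
        dg = grad_g(u)
        norm2 = sum(d * d for d in dg)
        # Linearize g about u and move to the closest point of the plane g = 0
        scale = (sum(d * ui for d, ui in zip(dg, u)) - gu) / norm2
        u_new = [scale * d for d in dg]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            u = u_new
            break
        u = u_new
    beta = math.sqrt(sum(ui * ui for ui in u))
    pf = 0.5 * math.erfc(beta / math.sqrt(2.0))  # first-order Pf = Phi(-beta)
    return beta, pf, u
```

For a linear limit state such as g(u) = 3 - u1 - u2, a single step recovers the design point (1.5, 1.5) and β = 3/√2 ≈ 2.12.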
To demonstrate the applicability of this approach to the brittle fracture reliability analysis, a single
edge-cracked beam subjected to a concentrated point load is considered (see Fig. 5-5) (Besterfield et
al., 1990). The problem constants are given in Table 5-1. Because of symmetry, 10 regular 9-node
elements and 2 enriched 9-node elements are depicted in the left half of the beam as shown in Fig.
5-5. The applied load is modeled with one random variable with a coefficient of variation of 0.1 and
the crack length is also modeled with one random variable with a coefficient of variation of 0.1. The
convergence criterion for the optimization is 0.001. The variance of the mode I stress intensity factor
with randomness in force, material, crack length, and the combination is presented in Table 5-2 for
the adjoint method. Also presented in Table 5-2 are the summaries of the numerical performance and
results of the reliability analysis (e.g., starting point, number of iterations, the failure point, reliability
index, and probability of failure). As shown in Table 5-2, with a 10% coefficient of variation in the
Figure 5-5. Model for single edge-cracked beam with an applied load.
load, material, and crack length, the mode I stress intensity factor varies by 10, 0.18, and 3.83%,
respectively.
Table 5-1. Problem Constants: Single-Edge Cracked Beam with an Applied Load
Coefficient of
Parameter Mean Standard deviation variation (%)
The most common law for fatigue crack growth is the Paris-Erdogan model (Paris and Erdogan, 1963), which gives
the fatigue life T by

T = ∫ (from ai to af) da / (D[ΔKeq(a)]^n)    (5-81)

where ai and af are the initial and final crack lengths, respectively; da is the random crack path; D and
n are primarily material parameters but can also depend on the loading and environmental effects; and
ΔKeq(a) is the range of equivalent mode I stress intensity factors, that is,

ΔKeq(a) = Keq,max(a) - Keq,min(a)    (5-82)

where Keq,min and Keq,max are the minimum and maximum equivalent mode I stress intensity factors
associated with the minimum and maximum cyclic applied stresses, respectively. If the minimum
equivalent mode I stress intensity factor is assumed to be zero, then

ΔKeq(a) = Keq,max(a)    (5-83)
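As an illustration of Eqs. (5-81) and (5-83), the fatigue life integral can be evaluated numerically. The geometry factor Y = 1.12 and all numerical values below are hypothetical illustration inputs, not data from the chapter's examples.

```python
import math

def delta_K(a, dsigma, Y=1.12):
    """Range of the (here, pure mode I) stress intensity factor,
    dK = Y * dsigma * sqrt(pi * a); Y = 1.12 is a hypothetical
    edge-crack geometry factor, not a value from this chapter."""
    return Y * dsigma * math.sqrt(math.pi * a)

def fatigue_life(a_i, a_f, D, n, dsigma, steps=20000):
    """Midpoint-rule evaluation of Eq. (5-81): T = int da / (D * dK(a)**n)."""
    T = 0.0
    da = (a_f - a_i) / steps
    for k in range(steps):
        a = a_i + (k + 0.5) * da
        T += da / (D * delta_K(a, dsigma) ** n)
    return T
```

Because ΔK is proportional to √a here, the integral also has a closed form for n ≠ 2, which can be used to check the quadrature.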
The direction of the crack can be considered to be a random function, which will depend on the material
properties, history of the loading, and the crack path. At each step, the statistics of the crack tip, as
reflected in this random function, in conjunction with the previous length of the crack and its orientation,
will be used to obtain the new configuration. On the basis of the maximum hoop stress criterion
(Erdogan and Sih, 1963), the crack growth direction Z(K, θ) given by Eq. (5-59) is also employed here.
The performance function for fatigue crack growth is given by
g = T - Ts    (5-84)
where Ts is the service life of the component. In other words, the component fails when the fatigue life
is less than the desired service life. The performance function could also be expressed in terms of a
critical crack length. The calculation of the reliability index by the first-order reliability theory is per-
formed in the same way as before, by solving a constrained optimization problem. The formulation and
numerical implementation of the fatigue crack growth reliability can be found in Besterfield et al. (1991).
To demonstrate the performance of the method for reliability analysis against failure due to fatigue
crack growth, a classic mode I fatigue problem is presented (see Besterfield et al., 1991). Figure 5-6
shows a finite rectangular plate with a single edge crack of length a subjected to a distributed load.
The problem constants and second-moment statistics are given in Table 5-3. Because of symmetry, 2
enriched 9-node elements and 23 regular 9-node elements are depicted on the upper half of the plate.
The reliability index is plotted versus the service life under the various types of uncertainties for the
reference solution (Bowie, 1964) and the solution obtained by the PFEM in Fig. 5-7a and b, respectively.
The same trends as the reference solution with the slight difference in the value of the reliability index
can be observed through comparison of Fig. 5-7a and b. This difference is due to the small numerical
error in calculating the stress intensity factor by finite element methods. As shown in Fig. 5-7a, for a
service life of 4 X 106 cycles, the reliability index is less for uncertainty in the initial crack length
(100% coefficient of variation) and stress (25% coefficient of variation) than for randomness in the final
crack length (10% coefficient of variation), fatigue parameter D (30% coefficient of variation), and
Figure 5-6. Model for single edge-cracked plate with an applied load.
Table 5-3. Problem Constants: Single Edge-Cracked Plate with a Distributed Load
Coefficient of
Parameter Mean Standard deviation variation (%)
Figure 5-7. Reliability index versus service life under various types of uncertainty: (a) reference solution (Bowie, 1964); (b) PFEM solution.
fatigue parameter n (2.5% coefficient of variation). When all five of the parameters are treated as random, the
combined effect is much greater than any one individual effect, as expected.
Because of the presence of three random processes in the expression of the response gradient, namely the mode I and mode II
stress intensity factors and the crack direction angle, the first-order response-surface model is employed
to determine the response sensitivity of these random processes. An iteration scheme based on the HL-
RF method (Rackwitz and Fiessler, 1978) is employed to find the most probable failure point (or design
point). Because of the high accuracy of the response gradient calculation, based on the direct differ-
entiation, fast convergence is obtained in the numerical iteration.
where μ is the shear modulus and ν is Poisson's ratio. The initial crack length (ai), external load
(τ), fatigue parameters (D, n), the defect geometry (xc, yc, rc), and the internal pressure (pi) resulting
Figure 5-8. A single edge-cracked plate with a random transformation inclusion subjected to a distributed load.
Table 5-4. Statistical Parameters and Distributions of Input Random Variables of the Example Problem in
Curvilinear Fatigue Crack Growth

Random variable           Mean               Standard deviation   Coefficient of variation (%)
ai (uniform with tail)    0.5833 × 10⁻² in.  0.3584 × 10⁻² in.    61.4
D (lognormal)             0.3770 × 10⁻⁹      0.1885 × 10⁻¹⁰       5.0
n (lognormal)             3.60               0.18                 5.0
τ (normal)                11.0 ksi           1.1 ksi              10.0
xc (uniform)              -0.25 in.          0.14433 in.          57.7
yc (uniform)              -0.40 in.          0.05774 in.          14.4
rc (uniform)              0.1375 in.         0.03608 in.          26.2
pi (uniform)              35.0 ksi           3.5 ksi              10.0
from the residual strain in the inclusion are assumed to be independent random variables with specified
probability density functions.
The statistical parameters of random input variables (mean, standard deviation, and coefficient of
variation [COV]) along with corresponding distribution functions are listed in Table 5-4. As shown in
Table 5-4, the initial crack size ai has the largest dispersion (COV = 60%). For the initial crack length
ai, a uniform distribution with a tail is employed here (see Fig. 5-9). The detection threshold, which is
equal to 7.5 X 10-3 (as shown in Fig. 5-9), represents the lower limit of an inspection device to detect
the presence of a small crack. Below the detection threshold the probability density is assumed uniform;
above the threshold the probability density decays linearly to zero, representing false negatives of the
inspection technique. For the purpose of verifying the accuracy of the stochastic BEM, a Monte Carlo
simulation (MCS) with a sample size Ns = 2000 is used.
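The "uniform with tail" density of Fig. 5-9 can be sampled by inverting its cumulative distribution function. The sketch below assumes the density is flat at height 88.889 up to the detection threshold 7.5 × 10⁻³ and then decays linearly to zero at 0.015 (the values shown in Fig. 5-9); under this interpretation the analytic mean is 0.5833 × 10⁻² in., matching Table 5-4.

```python
import math
import random

T_DET = 7.5e-3             # detection threshold (Fig. 5-9), assumed in inches
A_MAX = 0.015              # point where the density reaches zero (Fig. 5-9)
H = 2.0 / (T_DET + A_MAX)  # uniform height; equals 88.889 for these values

def sample_ai(u=None):
    """Inverse-CDF sample of the uniform-with-tail initial crack length."""
    p = random.random() if u is None else u
    if p <= H * T_DET:  # flat portion below the detection threshold
        return p / H
    # triangular tail: remaining mass beyond x is H*(A_MAX - x)**2 / (2*(A_MAX - T_DET))
    return A_MAX - math.sqrt(2.0 * (A_MAX - T_DET) * (1.0 - p) / H)
```

A large sample of `sample_ai()` values has mean close to 0.5833 × 10⁻², which is one quick consistency check between the assumed density and the tabulated statistics.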
The cumulative distribution function (CDF) of the fatigue life T obtained by the stochastic BEM is
presented in Fig. 5-10. The agreement of MCS and SBEM results shown in Fig. 5-10 demonstrates the
accuracy and efficiency of the stochastic BEM. As a rule of thumb (Bjerager, 1989), the sample size
necessary for MCS to obtain a probability estimate with good confidence is around 100/Pf. For small
probabilities of failure Pf (= 10⁻³ to 10⁻⁶), which are of major interest in reliability engineering, one
needs 10⁵ to 10⁸ Monte Carlo trials to achieve good confidence. The number of iterations in the sto-
chastic BIEM required to find the design point b* is only on the order of 15 to 20 for β = 3 to 5 (or
Figure 5-9. Uniform distribution with tail for the initial crack length ai (uniform density of height 88.889 below the detection threshold of 7.5 × 10⁻³, decaying linearly to zero at 0.015).
Figure 5-10. Comparison of CDF of the fatigue life T obtained by SBEM and MCS.
Pf = 0.001 to 0.3 × 10⁻⁶). Therefore the stochastic BEM based on the FORM has an overwhelming
advantage over the MCS for small probabilities of failure in terms of solution accuracy and efficiency.
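The numbers quoted above follow from the first-order relation Pf = Φ(-β) together with the 100/Pf rule of thumb (Bjerager, 1989); a quick check using the standard-normal tail written via the complementary error function:

```python
import math

def pf_from_beta(beta):
    """First-order probability of failure, Pf = Phi(-beta)."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

def mcs_trials_needed(beta, factor=100.0):
    """Rule-of-thumb MCS sample size, ~ factor / Pf (Bjerager, 1989)."""
    return factor / pf_from_beta(beta)
```

For β = 3 this gives Pf ≈ 1.35 × 10⁻³ and roughly 7 × 10⁴ trials; for β = 5, Pf ≈ 2.9 × 10⁻⁷ and about 3.5 × 10⁸ trials, consistent with the 10⁵ to 10⁸ range quoted above.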
The reliability index (β) versus the service life (Ts) is shown in Fig. 5-11 for the plate with and
without a microdefect. As shown in Fig. 5-11, the presence of a random transformation inclusion has a
detrimental effect on the fatigue life. The comparison of response sensitivities at the most probable
points (MPPs or design points) versus the probability of failure for both cases is plotted in Fig. 5-12.
As shown in Fig. 5-12, the presence of a random transformation inclusion changes the response sen-
sitivity of ai significantly. The comparison of the loci of the MPP of ai is shown in Fig. 5-13. The
presence of the random transformation inclusion changes the locus of the MPP of ai considerably, as shown
in Fig. 5-13. When the value of ai increases, the probability of failure Pf becomes large (see Fig. 5-13).
Figure 5-11. Comparison of reliability index.
Figure 5-12. Comparison of response sensitivity at design points.
This is the main reason why routine crack inspection is so important to avoid a large probability
of failure.

Figure 5-13. Comparison of the locus of ai at design points.

9. CONCLUSIONS

An overview of response analysis of stochastic structural systems by the PFEM, with emphasis on
second-order perturbation techniques, is provided. Because of the discrete nature of the finite element
formulation, the random field must also be discretized. Existing approaches for representation of random
fields are outlined. For an efficient characterization of the random field, the transformation of the original
random variables into a set of uncorrelated random variables is introduced through an eigenvalue
orthogonalization procedure. Both the single-field variational principle and the three-field Hu-Washizu
variational principle are employed to develop the PFEM for linear and nonlinear problems, respectively.
The computational aspects in the numerical implementation of the PFEM are also presented.
The accuracy and efficiency of the PFEM in quantifying the statistical moments of a stochastic system
are demonstrated through the example of a stochastic spring-mass system under sinusoidal excitation.
The results are in good agreement with Monte Carlo simulation (MCS). The computational efficiency
of the PFEM far exceeds the computational efficiency of the MCS. Because the PFEM discussed in
this chapter essentially involves solution of a set of deterministic problems, it is easily integrable into
any FEM-based code.
The PFEM coupled with the first-order reliability method is also presented for the reliability analysis.
The methodology consists of calculating the reliability index via an optimization procedure, which is
used to calculate the probability of failure. The PFEM provides a powerful tool for the sensitivity
analysis, which is required in an iterative optimization algorithm. Performance of the methodology
presented is demonstrated on a single edge-cracked beam with a concentrated load and a classic mode
I fatigue crack growth problem.
In addition to the PFEM, the stochastic boundary element method (SBEM), which combines the
mixed boundary integral equation with the first-order reliability method, is also presented for the cur-
vilinear fatigue crack reliability analysis. Because of the high degree of complexity and nonlinearity of
the response, direct differentiation coupled with the response-surface method is employed to determine
the response gradient. The reliability index and the corresponding probability of failure are calculated
for a fatigue-crack growth problem with randomness in the crack geometry, defect geometry, fatigue
parameters, and external loads. The response sensitivity of the initial crack length at the design point
is also determined to show its role in the fatigue failure. The results show that the initial crack length
is a critical design parameter. Because crack lengths below the threshold of an inspection limit are
likely to exhibit a large amount of scatter, it is imperative that the life expectancy of a structure
be treated from a stochastic viewpoint.
Probabilistic analysis is becoming increasingly important for the safety and reliability assessment of
aging structures and for tailoring new advanced materials. Because of the complexity in characterizing
material behavior, structural response, and failure mechanism, probabilistic mechanics problems are
computationally intensive and strain the resources of currently available computers. Since many sources
of parallelism are inherent in probabilistic mechanics problems, the development of a parallel computing
environment for probabilistic response analysis is the current trend in stochastic computational
mechanics.
REFERENCES
AKIN, J. E. (1976). The generation of elements with singularities. International Journal for Numerical Methods in
Engineering 10: 1249-1259.
ALIABADI, M. H., and D. P. ROOKE (1991). Numerical Fracture Mechanics. Southampton, England: Computational
Mechanics Publications.
ARORA, J. S. (1989). Introduction to Optimal Design. New York: McGraw-Hill, pp. 122-136.
BAECHER, G. B., and T. S. INGRA (1981). Stochastic FEM in settlement predictions. Journal of the Geotechnical
Engineering Division of American Society of Civil Engineers. 107(GT4):449-463.
BARSOUM, R. S. (1976). On the use of isoparametric finite elements in linear fracture mechanics. International
Journal for Numerical Methods in Engineering 10:25-37.
BELYTSCHKO, T., and W. E. BACHRACH (1986). Simple quadrilateral with high coarse-mesh accuracy. Computer
Methods in Applied Mechanics and Engineering 54:279-301.
BELYTSCHKO T., et al. (1984). Hourglass control in linear and nonlinear problems. Computer Methods in Applied
Mechanics and Engineering 43:251-276.
BENAROYA, H., and M. REHAK (1988). Finite element methods in probabilistic structural analysis. Applied Me-
chanics Review 41(5):201-213.
BESTERFIELD, G. H., W. K. LIU, M. A. LAWRENCE, and T. B. BELYTSCHKO (1990). Brittle fracture reliability by
probabilistic finite elements. Journal of Engineering Mechanics Division of American Society of Civil En-
gineers 116:642-659.
BESTERFIELD, G. H., W. K. LIU, M. LAWRENCE, and T. BELYTSCHKO (1991). Fatigue crack growth reliability by
probabilistic finite elements. Computer Methods in Applied Mechanics and Engineering 86:297-320.
BJERAGER, P. (1989). Probability computation methods in structural and mechanical reliability. In: Computational
Mechanics of Probabilistic and Reliability Analysis. W. K. Liu and T. Belytschko, Eds. Lausanne, Switzer-
land: Elme Press International, pp. 47-68.
BOWIE, O. L. (1964). Rectangular tensile sheet with symmetric edge cracks. Journal of Applied Mechanics 31.
BREITUNG, K. (1984). Asymptotic approximations for multinormal integrals. Journal of Engineering Mechanics
Division of American Society of Civil Engineers. 110:357-366.
CAMBOU, B. (1975). Application of first-order uncertainty analysis in the finite element method in linear elasticity.
In: Proceedings of the 2nd International Conference on Application of Statistics and Probability in Soil and
Structural Engineering. Aachen, Germany: Deutsche Gesellschaft für Erd- und Grundbau eV, Essen, FRG,
pp.67-87.
CHAN, T. F., R. GLOWINSKI, J. PERIAUX, and O. WIDLUND (1989). Domain decomposition methods for partial
differential equations. SIAM Journal.
CONVERSE, A. O. (1970). Optimization. New York: Robert Krieger Publishing, pp. 243-248.
CRUSE, T. A. (1988). Boundary Element Analysis in Computational Fracture Mechanics. Dordrecht, The Nether-
lands: Kluwer Academic Publishers.
CRUSE, T. A., Y-T. Wu, B. DIAS, and K. R. RAJAGOPAL (1988). Probabilistic structural analysis methods and
applications. Computer and Structures 30:163-170.
DASGUPTA, G. (1992). Stochastic finite and boundary element simulations. In: Proceeding of the 6th Specialty
Conference. New York: American Society of Civil Engineers, pp. 120-123.
DER KIUREGHIAN, A. (1985). Finite element methods in structural safety studies. In: Proceeding of the ASCE
Convention. New York: American Society of Civil Engineers.
DER KIUREGHIAN, A., and B.-J. KE (1985). Finite-element based reliability analysis of frame structures. In: Pro-
ceedings of the 4th International Conference on Structural Safety and Reliability, Vol. I. (Kobe, Japan). New
York: International Society for Structural Safety and Reliability, pp. 395-404.
DER KIUREGHIAN, A., and B.-J. KE (1988). The stochastic finite element method in structural reliability. Proba-
bilistic Engineering Mechanics 3:83-91.
DER KIUREGHIAN, A., and P.-L. LIU (1988). Optimization algorithms for structural reliability. In: Computational
Probabilistic Mechanics. New York: American Society of Mechanical Engineers, pp. 185-196.
DER KIUREGHIAN, A., H.-Z. LIN, and S. J. HWANG (1987). Second-order reliability approximations. Journal of
Engineering Mechanics Division of American Society of Civil Engineers 113:1208-1225.
ERDOGAN, F., and G. H. SIH (1963). On the crack extension in plates under plane loading and transverse shear.
Journal of Basic Engineering 85:519-527.
ETTOUNEY, M., H. BENAROYA, and J. WRIGHT (1989). Probabilistic boundary element methods. In: Computational
Mechanics of Probabilistic and Reliability Analysis. W. K. Liu and T. Belytschko, Eds. Lausanne, Switzer-
land: Elme Press International, pp. 142-165.
FARAVELLI, L. (1986). A response surface approach for reliability analysis. In: RILEM Symposium on Stochastic
Methods in Material and Structural Engineering.
FARAVELLI, L. (1989). Response surface approach for reliability analysis. Journal of Engineering Mechanics Di-
vision of American Society of Civil Engineers 115:2763-2781.
FIESSLER, B., H.-J. NEUMANN, and R. RACKWITZ (1979). Quadratic limit states in structural reliability. Journal of
Engineering Mechanics Division of the American Society of Civil Engineers 105:661-676.
GHANEM, R. G., and P. D. SPANOS (1991). Spectral stochastic finite element formulation for reliability analysis.
Journal of Engineering Mechanics Division of the American Society of Civil Engineers 117(10):2351-2372.
GIFFORD, L. N., and P. D. HILTON (1978). Stress intensity factors by enriched finite elements. Engineering Fracture
Mechanics 10:485-496.
GRIGORIU, M. (1982). Methods for approximate reliability analysis. Structural Safety 1:155-165.
HALDAR, A., and S. MAHADEVAN (1991). Stochastic FEM-based validation of LRFD. Journal of Structural En-
gineering Division of the American Society of Civil Engineers 117(5):1393-1412.
HALDAR, A., and Y. ZHOU (1992). Reliability of geometrically nonlinear frames. Journal of Engineering Mechanics
Division of the American Society of Civil Engineers 118(10):2148-2155.
HANDA, K., and K. ANDERSON (1981). Application of finite element methods in the statistical analysis of structures.
In: Proceedings of the 3rd International Conference on Structural Safety and Reliability. Amsterdam, The
Netherlands: Elsevier Science Publishing, pp. 409-417.
HASOFER, A. M., and N. C. LIND (1974). Exact and invariant second-moment code format. Journal of Engineering
Mechanics Division of the American Society of Civil Engineers 100: 111-121.
HAUG, E. J., and J. S. ARORA (1979). Applied Optimal Design: Mechanical and Structural Systems. 1st ed. New
York: John Wiley & Sons, pp. 319-328.
HENSHELL, R. D., and K. G. SHAW (1975). Crack tip elements are unnecessary. International Journal for Numerical
Methods in Engineering 9:495-507.
HISADA, T., and S. NAKAGIRI (1981). Stochastic finite element method developed for structural safety and reliability.
In: Proceedings of the 3rd International Conference on Structural Safety and Reliability. Amsterdam, The
Netherlands: Elsevier Science Publishing, pp. 395-408.
HISADA, T., and S. NAKAGIRI (1985). Role of the stochastic finite element method in structural safety and reliability.
In: Proceedings of the 4th International Conference on Structural Safety and Reliability (Kobe, Japan). New
York: International Society for Structural Safety and Reliability, pp. 385-394.
HOHENBICHLER, M., and R. RACKWITZ (1981). Non-normal dependent vectors in structural safety. Journal of
Engineering Mechanics Division of American Society of Civil Engineers 107:1227-1238.
ISHII, K., and I. SUZUKI (1987). Stochastic finite element method for slope stability analysis. Structural Safety 4:
111-129.
KOMZSIK, L., and T. ROSE (1991). Substructuring in MSC/NASTRAN for large scale parallel applications. Com-
puter Systems in Engineering 2(2/3):167-173.
LAWRENCE, M. A. (1987). Basis random variables in finite element analysis. International Journal for Numerical
Methods in Engineering 24:1849-1863.
LIU, P.-L., and A. DER KIUREGHIAN (1991). Finite element reliability of geometrically nonlinear uncertain struc-
tures. Journal of Engineering Mechanics Division of American Society of Civil Engineers 117(8):1806-1825.
LIU, W. K., and T. BELYTSCHKO, Eds. (1989). Computational Mechanics of Probabilistic and Reliability Analysis.
Lausanne, Switzerland: Elme Press International.
LIU, W. K., T. BELYTSCHKO, and A. MANI (1986a). Random field finite elements. International Journal for Nu-
merical Methods in Engineering 23:1831-1845.
LIU, W. K., T. BELYTSCHKO, and A. MANI (1986b). Probabilistic finite elements for nonlinear structural dynamics.
Computer Methods in Applied Mechanics and Engineering 56:61-81.
LIU, W. K., T. BELYTSCHKO, and A. MANI (1987). Applications of probabilistic finite element methods in elastic/
plastic dynamics. ASME Journal of Engineering for Industry 109:2-8.
LIU, W. K., G. H. BESTERFIELD, and T. BELYTSCHKO (1988a). Variational approach to probabilistic finite elements.
Journal of Engineering Mechanics Division of American Society of Civil Engineers 114:2115-2133.
LIU, W. K., G. H. BESTERFIELD, and T. BELYTSCHKO (1988b). Transient probabilistic systems. Computer Methods
in Applied Mechanics and Engineering 67:27-54.
LIU, W. K., T. BELYTSCHKO, and J. S. CHEN (1988c). Nonlinear version of flexurally superconvergent element.
Computer Methods in Applied Mechanics and Engineering 71:241-258.
LIU, W. K., A. MANI, and T. BELYTSCHKO (1988d). Finite element methods in probabilistic mechanics. Probabilistic
Engineering Mechanics 2(4):201-213.
LUA, Y. J., W. K. LIU, and T. BELYTSCHKO (1992a). A stochastic damage model for the rupture prediction of a
multi-phase solid. I. Parametric studies. International Journal of Fracture 55:321-340.
LUA, Y. J., W. K. LIU, and T. BELYTSCHKO (1992b). A stochastic damage model for the rupture prediction of a
multi-phase solid. II. Statistical approach. International Journal of Fracture 55:341-361.
LUA, Y. J., W. K. LIU, and T. BELYTSCHKO (1992c). Elastic interactions of a fatigue crack with a micro defect by
the mixed boundary integral equation method. International Journal for Numerical Methods in Engineering
36:2743-2759.
LUA, Y. J., W. K. LIU, and T. BELYTSCHKO (1992d). Curvilinear fatigue crack reliability analysis by stochastic
boundary element method. International Journal for Numerical Methods in Engineering 36:3841-3858.
MA, F. (1986). Approximate analysis of a class of linear stochastic systems with colored white noise parameters.
International Journal of Engineering Science 24(1):19-34.
MACKERLE, J., and C. A. BREBBIA (1988). The Boundary Element Reference Book. Southampton, England: Com-
putational Mechanics Publications.
MYERS, R. H. (1971). Response Surface Methodology. Boston, Massachusetts: Allyn and Bacon Inc.
NOOR, A. K. (1991). Bibliography of books and monographs on finite element technology. Applied Mechanics
Review 44(6):307-317.
PARIS, P. C., and F. ERDOGAN (1963). A critical analysis of crack propagation laws. Journal of Basic Engineering
85:528-534.
RACKWITZ, R., and B. FIESSLER (1978). Structural reliability under combined load sequences. Computer and
Structures 9:489-494.
RICE, J. R. (1968). A path independent integral and the approximate analysis of strain concentrations by notches
and cracks. Journal of Applied Mechanics 35:379-386.
RIGHETTI, G., and K. HARROP-WILLIAMS (1988). Finite element analysis of random soil media. Journal of Geo-
technical Engineering Division of American Society of Civil Engineers 114(1):59-75.
ROSENBLATT, M. (1952). Remarks on a multivariate transformation. Annals of Mathematical Statistics 23:470-
472.
SAOUMA, V. E. (1984). An automated finite element procedure for fatigue crack propagation analysis. Engineering
Fracture Mechanics 20:321-333.
SHINOZUKA, M. (1987). Stochastic fields and their digital simulation. In: Stochastic Methods in Structural Dynam-
ics. G. I. Schueller and M. Shinozuka, Eds. Boston, Massachusetts: Martinus Nijhoff, pp. 92-133.
SHINOZUKA, M., and G. DEODATIS (1988). Response variability of stochastic finite element systems. Journal of
Engineering Mechanics Division of American Society of Civil Engineers 114(3):499-519.
SHINOZUKA, M., and F. YAMAZAKI (1988). Stochastic finite element analysis: An introduction. In: Stochastic
Structural Dynamics, Progress in Theory and Applications. S. T. Ariaratnam, G. I. Schueller, and I. Elishakoff,
Eds. New York: Elsevier Applied Science, pp. 241-291.
SIH, G. C. (1974). Strain energy density factor applied to mixed mode crack problems. International Journal of
Fracture Mechanics 10:305-322.
SPANOS, P. D., and R. GHANEM (1988). Stochastic Finite Element Expansion for Random Media. Report NCEER-
88-0005. Houston, Texas: Rice University.
SUES, R. H., H.-C. CHEN, and L. A. TWISDALE (1991a). Probabilistic Structural Mechanics Research for Parallel
Processing Computers. Report CR-187162. Washington, D.C.: National Aeronautics and Space Administration.
SUES, R. H., H.-C. CHEN, C. C. CHAMIS, and P. L. N. MURTHY (1991b). Programming probabilistic structural analysis
for parallel processing computers. Paper presented at the AIAA/ASME/ASCE/AHS/ASC 32nd SDM Confer-
ence, Baltimore, Maryland, April 6-8, 1991.
SUES, R. H., H.-C. CHEN, and F. M. LAVELLE (1992a). The stochastic preconditioned conjugate gradient method.
Probabilistic Engineering Mechanics 7:175-182.
SUES, R. H., Y. J. LUA, and M. D. SMITH (1992b). Parallel Computing for Probabilistic Response Analysis of
High Temperature Composites. Report CR-26576. Washington, D.C.: National Aeronautics and Space
Administration.
TONG, P., T. H. H. PIAN, and S. J. LASRY (1973). A hybrid-element approach to crack problems in plane elasticity.
International Journal for Numerical Methods in Engineering 7:297-308.
TVEDT, L. (1983). Two Second Order Approximations to the Failure Probability. Norway: Det Norske Veritas.
Report RDIV/20-004-83.
VANMARCKE, E. (1984). Random Fields: Analysis and Synthesis, Second printing. Cambridge, Massachusetts: MIT
Press.
VANMARCKE, E., and M. GRIGORIU (1983). Stochastic finite element analysis of simple beams. Journal of Engi-
neering Mechanics Division of American Society of Civil Engineers 109:1203-1214.
Wu, X. F., and X. M. LI (1989). Analysis and modification of fracture criterion for mixed-mode crack. Engineering
Fracture Mechanics 34:55-64.
Wu, Y.-T., H. R. MILLWATER, and T. A. CRUSE (1990). Advanced probabilistic structural analysis method for
implicit performance functions. AlAA Journal 28:1663-1669.
YAMAZAKI, F. M., M. SlllNOZUKA, and G. DASGUPTA (1988). Neumann expansion for stochastic finite element
analysis. Journal of Engineering Mechanics Division of American Society of Civil Engineers 114(8):1335-
1354.
ZHANG, Y., and A. DER KIuREGHIAN (1991). Dynamic Response Sensitivity of Inelastic Structures. Technical Report
UCB/SEMM-91/06. Berkeley, California: Department of Civil Engineering, University of California.
6
PROBABILISTIC FRACTURE
MECHANICS
D. O. HARRIS
1. INTRODUCTION
Fracture mechanics is an engineering discipline that quantifies the conditions under which a load-bearing
body can fail due to the enlargement of a dominant crack contained in that body (Kanninen and Popelar,
1985). Such enlargement can occur over an extended period, due to cyclic loading and/or adverse
environmental effects. This subcritical growth of the dominant crack eventually leads to attainment of
critical conditions, at which point the crack grows rapidly in an unstable manner.
The technology of fracture mechanics for prediction of subcritical growth of cracks and final crack
instability is well established, with Kanninen and Popelar (1985), Broek (1982), and Anderson (1991)
providing examples of comprehensive books on the field.
The key ingredients in a deterministic fracture mechanics analysis are the initial crack size, crack
driving force solution (stress intensity factors for linear elastic problems), applied stresses, and material
properties describing the subcritical crack growth characteristics and conditions for final crack instability.
A conventional deterministic fracture mechanics analysis provides the time (or cycles) to failure for a
given set of initial (or current) conditions. Included as part of this process is evaluation of the critical
crack size. Many of the inputs to a fracture mechanics analysis are often subject to considerable scatter
or uncertainty. Hence, the results of the fracture mechanics analysis must be viewed with some skep-
ticism. Quite often, conservative bounds on inputs are employed, thereby providing a conservative
estimate of the time to failure and remaining life. This stacks conservatism on conservatism and may
provide an overly conservative and unrealistic result.
One way to provide a more realistic result is to consider some of the key inputs to be random
variables, and viewing the output as a statistical distribution of lifetime (rather than a single deterministic
value). This distribution of lifetime provides the component reliability as a function of time, and de-
cisions concerning replacement, design, inspection, etc., can be based on component reliability. Con-
sidering the input variables as random also eliminates the need to shift conservative "bounds" as
additional data (that may be outside the bounds) becomes available.
Probabilistic fracture mechanics (PFM) is fracture mechanics that considers some of the inputs to be
random variables. A prime example of a random input is initial crack size. This is seldom accurately
known and usually has a strong influence on lifetime. All other inputs, such as stresses, cycles, sub-
critical crack growth characteristics, and fracture toughness, can also be considered as random variables.
Another factor that is naturally incorporated into PFM analyses is the effects of preservice and in-
service inspections. These enter through the probability of detecting a defect by a given inspection
procedure as a function of its size and the probability of accurately sizing the defect and satisfactorily
repairing it.
In this chapter, the theoretical foundations of PFM are reviewed. Because PFM is based on deterministic fracture mechanics, a review of deterministic fracture mechanics is provided first for completeness. This is followed by a discussion of probabilistic aspects, such as characterization of random
variables, and procedures for obtaining failure probability results. Harris (1985) and Provan (1987)
provide earlier related reviews.
2.1. Notations

a  Crack depth
â  Indicated crack depth
a_c  Critical crack depth
a_i  Initial crack depth
a_repair  Crack depth above which repair is made
a_50  Median crack depth
B  Coefficient in the ε̇–S relation (Eq. [6-5])
C  Coefficient in fatigue crack growth relation
C*  Steady-state creep crack driving force
C(t)  Time-dependent creep driving force (Riedel, 1987)
C_T  Time-dependent creep crack driving force (Eq. [6-8])
C_T(ave)  Average value of C_T during a hold time
C_0  Constant in modified Forman crack growth relation
C_3  Creep crack growth rate coefficient (Eq. [6-7])
c  Semimajor axis of elliptical crack
d  Exponent in modified Forman crack growth relation
E  Young's modulus
F  Geometry function in expression for stress intensity factor
G  Equals K/Sh^{1/2} (Eq. [6-8])
g(U_i)  Failure curve in U_i space (performance function)
h  Wall thickness
h_1  Geometry function in expression for C*
J  Value of J-integral, crack driving force for nonlinear elastic solid
K  Stress intensity factor
K_c, K_Ic  Fracture toughness
K_max  Maximum K during a load cycle
K_min  Minimum K during a load cycle
K_o  Lower limit of K_Ic
n  Number of random variables
Basic methodologies for development of probabilistic fracture mechanics (PFM) models and generation
of numerical results are described. Because PFM has a strong deterministic basis, this basis is briefly
reviewed for completeness.
[Figure 6-1 flowchart: the as-fabricated crack size and location, combined with the inspection detection probability, give the initial crack size and location; together with the stress intensity factor solution, the subcritical crack growth characteristics of the material/environment, and the stress history, these yield crack growth as a function of time, cycles, etc.; the material properties for critical crack growth (K_Ic, J_Ic, T_mat, etc.) and the failure criterion (K > K_Ic, J > J_Ic, T_app > T_mat, etc.) determine the critical crack size.]
Figure 6-1. Basic components of a deterministic fracture mechanics model for prediction of crack growth and crack instability.
The behavior of cracks is usually governed by their strain energy release rate (the rate of release of
stored strain energy per unit area of crack extension). In linear elastic solids, this can be expressed in
terms of the stress intensity factor, K. For nonlinear elastic solids, which are often used to represent
elastic-plastic metals, the value of the J-integral describes the strain energy release rate. J and K also
control the strength of the crack tip singularity in nonlinear- and linear-elastic material, respectively.
The crack driving force depends on strain level, loading level and distribution, crack size, and body
geometry. For a through crack of length 2a in an infinite sheet subjected to all-around stress S far from
the crack, the stress intensity factor is given by the expression
K = S(πa)^{1/2}   (6-1)
For more complex geometries, the expression for K is similar, but contains factors related to crack
and body geometry. As an example, the stress intensity factor for the complete circumferential crack at
the inside diameter of an axially loaded cylinder shown in Fig. 6-2 is given by
K = S(πa)^{1/2} F(a/h, R_i/R_o)   (6-2)
where h, a, Ri , and Ro are as indicated in the figure. The function F is usually obtained by numerical
techniques, such as finite elements. Analogous results for F are provided for many geometries in handbooks, such as by Tada et al. (1985) and Murakami (1987). As an example, Fig. 6-3 provides results for F for the circumferentially cracked pipe shown in Fig. 6-2, as drawn from Mettu and Forman (1993).
Expressions analogous to Eq. (6-2) are available for J-integral solutions for power law hardening ma-
terials for a wide variety of geometries (Kanninen and Popelar, 1985; Anderson, 1991; Kumar et al.,
1981; Zahoor, 1989).
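As a numerical illustration of Eqs. (6-1) and (6-2), the following sketch evaluates the stress intensity factor; the geometry factor F = 1.1 used below is purely illustrative, and in practice the handbook tabulations (or a curve such as Fig. 6-3) would supply the value:

```python
import math

def k_center_crack(S, a):
    """Eq. (6-1): K = S*(pi*a)^(1/2) for a through crack of length 2a
    in an infinite sheet under remote stress S (K in ksi*in^(1/2)
    when S is in ksi and a in inches)."""
    return S * math.sqrt(math.pi * a)

def k_with_geometry(S, a, F):
    """Handbook form (cf. Eq. [6-2]): the infinite-sheet result scaled
    by a tabulated geometry function F, here supplied by the caller."""
    return F * k_center_crack(S, a)

# Illustrative numbers only: S = 20 ksi, a = 0.25 in., F = 1.1
print(k_center_crack(20.0, 0.25))        # about 17.7 ksi*in^(1/2)
print(k_with_geometry(20.0, 0.25, 1.1))
```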
Figure 6-3. Dimensionless stress intensity factor for a complete circumferential interior surface crack in a pipe
subjected to tension. (Source: Mettu and Forman [1994]. Copyright ASTM. Reprinted with permission.)
where ΔK = K_max − K_min during the fatigue cycle, R = K_min/K_max, and C, n, m, C_0, d, ΔK_0, p, K_c, and q are curve-fit parameters. This equation is a form of the modified Forman relation (Forman et al., 1988), which has been found to provide a good fit to a wide variety of materials. When m, q, and p are zero, this reduces to the well-known Paris relation

da/dN = C(ΔK)^n   (6-4)
Figure 6-4 provides an example of fatigue crack growth data for 2¼Cr-1Mo steel at 1100°F, as reported in Grunloh et al. (1992). In this case, Eq. (6-4) provides a fit to the data. The solid line corresponds to n = 2.14 and C = 1.23 × 10⁻⁸ (da/dN in inches per cycle and ΔK in ksi·in.^{1/2}).
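The cited fit can be exercised directly. The sketch below assumes a center-crack geometry (K = S√(πa), per Eq. [6-1]) so that the Paris relation can be stepped numerically; geometry and step size are illustrative choices:

```python
import math

def paris_rate(dk, C=1.23e-8, n=2.14):
    """da/dN = C*(dK)^n using the Fig. 6-4 fit
    (da/dN in in./cycle, dK in ksi*in^(1/2))."""
    return C * dk ** n

def cycles_to_grow(a0, a1, S, C=1.23e-8, n=2.14, da=1e-4):
    """Crude fixed-step integration of the growth law for a center
    crack (K = S*sqrt(pi*a)); returns cycles to grow from a0 to a1."""
    N, a = 0.0, a0
    while a < a1:
        dk = S * math.sqrt(math.pi * a)
        N += da / paris_rate(dk, C, n)
        a += da
    return N
```

A smaller crack-size step `da` refines the cycle-count estimate at the cost of more iterations.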
Another example of subcritical crack growth is that occurring as a result of high-temperature, time-
dependent deformation (creep) in metals (Kanninen and Popelar, 1985; Anderson, 1991; Riedel, 1987).
Considering the case of steady state creep, the creep strain rate is related to the stress as

ε̇ = BS^n   (6-5)

This is analogous to power law hardening for fully plastic materials (ε ∝ S^n), and the crack driving force for creeping solids can be obtained from the corresponding fully plastic J-integral solution. For
example, the creep crack driving force, C*, for the crack configuration in Fig. 6-2, can be expressed
as
C* = BS^{n+1} a h_1(α, n, γ) [ (√3/2)(2γ + 1) / ((1 − α)(2γ + 1 + α)) ]^{n+1}   (α = a/h, γ = R_i/h)   (6-6)

where h_1(α, n, γ) is a tabulated function determined by nonlinear finite element calculations (Kanninen and Popelar, 1985; Anderson, 1991; Kumar et al., 1981; Zahoor, 1989).
The rate at which a crack grows in a steady state creeping solid has been found to be related to C* (Kanninen and Popelar, 1985; Riedel, 1987). Figure 6-5 provides an example of such data for 2¼Cr-1Mo base metal at 1100°F as reported by Grunloh et al. (1992). The figure also shows as a solid line the following curve fit to the data

da/dt = C_3(C*)^q   (6-7)

The crack growth history is obtained by combining the driving force solution with Eq. (6-7) and integrating the resulting first-order ordinary differential equation. In practice, this is invariably done numerically.
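A minimal sketch of that numerical integration (forward Euler) is shown below; the driving-force function `cstar(a)` is a placeholder that the analyst would supply from Eq. (6-6) or a handbook solution:

```python
def integrate_creep_growth(a0, t_end, cstar, C3=0.0263, q=0.732, dt=1.0):
    """Forward-Euler integration of da/dt = C3*(C*)^q (Eq. 6-7),
    with cstar(a) returning the creep crack driving force at the
    current depth (units as in the Fig. 6-5 fit: a in inches, t in hours)."""
    a, t = a0, 0.0
    while t < t_end:
        a += C3 * cstar(a) ** q * dt
        t += dt
    return a

# Placeholder driving-force function, for illustration only
a_final = integrate_creep_growth(0.1, 1000.0, cstar=lambda a: 1e-4 * a)
```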
Equations (6-5) through (6-7) are suitable for steady creeping solids. The situation is somewhat more
complex for times following initial loading or for cyclic loading; because of the presence of an elastic
transient, the body is not undergoing steady state creep. To handle the case of an elastic transient
following initial loading, the C(t) parameter of Riedel (1987) or CT parameter of Saxena (1986) can be
used. These parameters can be expressed in terms of K and C*, with C_T given by Eq. (6-8).
In this case, t is the time since initial loading, and when t is large C_T approaches C* in value; β is a material constant (see Saxena, 1986); n is the creep exponent (Eq. [6-5]); E is Young's modulus; B is the creep coefficient (Eq. [6-5]); h is thickness; G is K/Sh^{1/2}; and G′ is the derivative of G with respect to a/h (a being crack depth).
Cyclic loading at elevated temperature (creep/fatigue) is an important problem, and the above treatment has been generalized to cover this situation (Grunloh et al., 1992; Yoon, 1990). This involves averaging C_T over the loading cycle (with t = 0 at the beginning of the cycle) to provide C_T(ave). (A slightly modified definition of C_T is used in this averaging procedure.) As reported in Grunloh et al. (1992), fatigue tests performed at elevated temperature with various hold times (t_h) revealed that the time-dependent component of the crack growth per cycle could be predicted by

(da/dN)_time = C_3[C_T(ave)]^q t_h   (6-9)

with C_3 and q being the same as measured for steady state creep crack growth (see Eq. [6-7]). The overall crack growth per cycle is then given by adding on the fatigue contribution (see Eq. [6-3] or [6-4]):

da/dN = (da/dN)_fatigue + (da/dN)_time   (6-10)

[Figure 6-4 plot: fatigue crack growth rate da/dN versus ΔK on logarithmic scales, with the fit da/dN = 1.23 × 10⁻⁸ ΔK^2.14 shown as a solid line.]
Figure 6-4. Fatigue crack growth data for 2¼Cr-1Mo steel base metal at 1100°F. (Source: Grunloh, H. J., et al. [1992]. An Integrated Approach to a Life Assessment of Boiler Pressure Parts, Vol. 4: BLESS Code User's Manual and Life Assessment Guidelines. Report on Project RP2253-10. R. Viswanathan, EPRI Project Manager. Palo Alto, California: Electric Power Research Institute. Copyright © 1992. Electric Power Research Institute. Reprinted with permission.)
The above treatment of creep/fatigue crack growth is easily expanded to include primary creep by incorporating such effects into C_T (Riedel, 1987; Grunloh et al., 1992). This involves the use of K-solutions and analogous J-solutions and is, therefore, amenable to a handbook approach.
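Under the assumption that the time-dependent term is the steady-state growth law integrated over the hold time, the per-cycle bookkeeping described above reduces to a one-line sum:

```python
def dadn_total(dadn_cyclic, ct_ave, t_hold, C3=0.0263, q=0.732):
    """Total growth per cycle: cyclic (fatigue) contribution plus the
    time-dependent creep contribution C3 * CT(ave)^q * t_hold."""
    return dadn_cyclic + C3 * ct_ave ** q * t_hold
```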
[Figure 6-5 plot: creep crack growth rate da/dt versus C_T for 2¼Cr-1Mo base metal at 1100°F under steady load, for three specimens, with the fit da/dt = 0.0263 C_T^0.732 shown as a solid line.]
variables include initial crack location and size (depth and length), fracture toughness, subcritical crack
growth characteristics, and stress levels and cycles. In addition, the effects of inspections can be included
through their influence on crack detection, sizing, and repair. In this section, selected data will be discussed
to provide examples of the characterization of the statistical distribution of fracture mechanics random
variables. This usually consists of gathering relevant information (from testing or the literature) and char-
acterizing its scatter by selecting the type of distribution and the parameters of the distribution. Characteri-
zation of uncertainty in inputs not subject to testing is also necessary. An example of this is uncertainty in
loads and boundary conditions that can result in uncertainty in calculated stresses or temperatures.
3.2.1. Fracture Mechanics Variables. The initial crack size distribution is one of the key inputs
to any PFM analysis. In many practical problems, buried or surface cracks of finite length are encoun-
tered. For fracture mechanics analysis purposes, such cracks are generally idealized as elliptical or semi-
or quarter-elliptical. Figure 6-6 schematically shows a buried elliptical crack in a plate of finite thickness.
Three numbers are required to describe this crack: a, c, and s. Each of these can be a random variable.
Stress intensity factor solutions for such cracks are available in the literature, especially for semielliptical
surface cracks (s = 0) (Newman and Raju, 1983).
Statistics on crack size and location are generally sparse. The dimension s is often taken as zero, so
as to concentrate on surface cracks, or the probability distribution of s is assumed to be uniform (Nilsson,
1977) or normal (Bruckner et al., 1983).
The crack depth (a) distribution is of primary concern, because a has a much stronger influence on
K than c. Information on the distribution of a is sparse, and such distributions depend on material,
thickness, welding procedure, etc. Probably the most familiar crack depth distribution is that drawn
from the Marshall (1976) report on nuclear power reactor pressure vessels. In this case, a is taken to
be exponentially distributed with a probability density function

p(a) = (1/λ)e^{−a/λ}   (6-11)

with λ = 0.246 in. A mean crack depth of 0.246 in. for vessel thickness approaching 10 in. is reasonable, but the use of this distribution (with the same value of λ) is not suitable for thinner materials, in which case other estimates or measurements must be made.
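The Marshall distribution of Eq. (6-11) is easy to exercise numerically; for example, the probability of a crack deeper than a given value is a one-line exceedance calculation:

```python
import math

LAM = 0.246  # in., mean crack depth of the Marshall (1976) distribution

def marshall_pdf(a, lam=LAM):
    """Eq. (6-11): p(a) = (1/lam) * exp(-a/lam)."""
    return math.exp(-a / lam) / lam

def prob_deeper_than(a0, lam=LAM):
    """Exceedance probability P(a > a0) = exp(-a0/lam)."""
    return math.exp(-a0 / lam)

print(prob_deeper_than(0.246))  # about 0.368 (one mean depth)
```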
A rare example of a crack size distribution based on observations is provided by Hudak et al. (1990),
who report observations of crack sizes in Inconel 718 weldments. Semielliptical surface cracks of length
2c and depth a were characterized, and Fig. 6-7 provides a histogram of crack depths. As reported in
Hudak et al. (1990), the depths were found to be lognormally distributed. This probability density function of crack depth is expressed as (Ang and Tang, 1975)

p(a) = [1/(μa(2π)^{1/2})] exp{−[ln(a/a_50)]²/(2μ²)}   (6-12)

with a_50 = 15.3 mils = 0.38 mm, and μ = 0.807. Figure 6-7 also plots this density function, and a good fit is observed.
The second dimension of crack size, c, is also important. Little information on the distribution of c is available, and the assumption is often made that the aspect ratio c/a is independent of a (Harris et al., 1981; Bruckner et al., 1983). This greatly simplifies the statistical description of crack size and appears to be a good approximation. Few data are available to check the goodness of this assumption. Hudak et al. (1990) also report data on c and the aspect ratio c/a. The data show a mode at c/a = 1, which corresponds to a semicircular crack, with 35 of 152 cracks having (a/2c) < 1/3 (or c/a > 3/2).

[Figure 6-7 plot: histogram of observed crack depths with the fitted lognormal density; a_50 = 15.3 mils, μ = 0.807.]
Figure 6-7. Histogram and corresponding lognormal probability density function for initial crack depths in Inconel 718 nickel-base alloy weldments. (Source: Hudak et al. [1990].)

Assuming c/a to be lognormal, which from the data appears to be reasonable, with the mode of the density function at 1, provides a median value of c/a of 1.126 and a second parameter (analogous to μ of Eq. [6-12]) of 0.344. The assumption that c/a is independent of a then allows the crack depth and length distribution to be defined in terms of p(a) and p(c/a).
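A numerical check of the Eq. (6-12) density is straightforward (units in inches; 15.3 mils = 0.0153 in.). The sketch below verifies the median property by simple quadrature:

```python
import math

def lognormal_pdf(a, a50=0.0153, mu=0.807):
    """Eq. (6-12): lognormal density with median a50 and
    log-standard-deviation mu."""
    return math.exp(-(math.log(a / a50)) ** 2 / (2.0 * mu * mu)) \
        / (mu * a * math.sqrt(2.0 * math.pi))

# Midpoint-rule integral from 0 to a50 should be close to 0.5
da = 1e-6
half = sum(lognormal_pdf(k * da + da / 2) for k in range(15300)) * da
print(half)  # close to 0.5, confirming a50 is the median
```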
The data of Hudak et al. (1990) are specific to weldments of Inconel 718 nickel-based alloy. Different weld procedures, materials, and thicknesses produce different crack size distributions, and care must be taken in estimating crack size distributions for different situations. Measurements such as those reported by
Hudak et al. (1990) are laborious and expensive. An alternative procedure is to back out an initial crack
size distribution from fatigue data. Sire and Harris (1992) provide an example. Another alternative
procedure is to construct probabilistic models of introduction of defects during the welding process,
such as reported by Bruckner and Munz (1984). This is briefly discussed in Section 8.
This discussion of defects concentrates on cracks in welds in metal, which is where the majority of
cracks are generally found. Of course, metal parts not containing welds would have significantly dif-
ferent defect size distributions (and frequency of occurrence). Information on such cases is virtually
nonexistent.
Another factor defining the initial crack sizes is the frequency of occurrence of cracks. The above
discussion provides the size distribution of cracks-given that a crack is present. The effect of volume
of material considered enters into the probability of a crack being present. Furthermore, in many cases,
the probability of a (macroscopic) crack being present is not close to unity, and a single crack can be
considered to be present for analysis purposes, with a size as described by the conditional crack size
distribution (i.e., given that a crack is present). If numerous cracks are present, then an extreme value-
type distribution of crack sizes may be suitable (Ang and Tang, 1984), or an approach analogous to
that used for ceramics may be called for (see Chapter 30).
The data of Hudak et al. (1990) on crack sizes discussed above do not include the length or volume
of sample material for the 152 cracks characterized, so an estimate of the crack frequency is not
possible from these data. The above distributions of a and cia are conditional on a crack being present
(big enough to detect by the procedures employed). Typically, the number of cracks in a given weld
or component is assumed to be Poisson distributed, with a specified mean frequency per unit length
or volume of weld. Such data are reviewed extensively in Harris et al. (1981), with considerable
additional data for welds having become available since that review. A wide range of mean frequencies
has been reported. Volume and size effects are handled through the mean frequency and volume
considered.
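With a Poisson occurrence model, the probability of at least one crack in a given amount of weld follows directly from the mean frequency; a one-line sketch (the frequency and weld length below are illustrative numbers only):

```python
import math

def prob_at_least_one(freq, amount):
    """Poisson occurrence model: P(at least one crack) = 1 - exp(-m),
    where m = mean frequency per unit length (or volume) times the
    length (or volume) of weld considered."""
    return 1.0 - math.exp(-freq * amount)

p = prob_at_least_one(0.1, 10.0)  # mean of one crack expected
```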
Material characteristics input to a fracture mechanics analysis are subject to considerable scatter, as
shown in the fatigue and creep crack growth data in Figs. 6-4 and 6-5. The scatter can be conveniently
characterized by considering "constants" in the crack growth laws to be random variables, and using
the data points as samples of these values. For instance, the data of Fig. 6-4 can be used in conjunction
with Eq. (6-4), to evaluate C for every data point (while holding the exponent n at 2.14). An analogous procedure can be performed on the data of Fig. 6-5. On the basis of this procedure, cumulative distributions of C and C_3 are constructed, as shown in Figs. 6-8 and 6-9, respectively. These figures are on lognormal probability paper, and show that the coefficient of the crack growth relations can be approximated as being lognormal. The parameters are given in Table 6-1.
This procedure for characterizing the scatter in crack growth properties is appealing in its simplicity and is justified in that it merely attempts to quantify scatter in test data. This appears to be closely related to prediction intervals (Weiss, 1989, p. 552), which are not the same as confidence intervals. Numerous other procedures for "stochastic modeling of fatigue crack growth" have been suggested, which are invariably more complex and involved than the procedures shown above. See, for instance, Sobczyk and Spencer (1991).
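The "constants as samples" procedure is straightforward to script. The sketch below backs a value of C out of each (ΔK, da/dN) data pair with the exponent n held fixed, then summarizes the scatter as a lognormal median and log-standard-deviation:

```python
import math
import statistics

def fit_lognormal_coeff(rates, dks, n=2.14):
    """For each data point, C_i = (da/dN)_i / dK_i**n; return the
    lognormal summary (median, mu) of the sampled coefficients."""
    logs = [math.log(r / dk ** n) for r, dk in zip(rates, dks)]
    median = math.exp(statistics.mean(logs))
    mu = statistics.pstdev(logs)
    return median, mu
```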
The fracture toughness is another material property that is often subject to considerable scatter. This is especially true for low- to intermediate-strength steels that undergo a ductile-to-brittle transition as temperature is lowered. The transition temperature can be increased by irradiation damage and temper embrittlement. The transition in toughness is accompanied by a change in the appearance of the fracture surface, with a flat fracture at low temperature and a rougher surface at higher, more ductile, temperature. Considerable scatter in toughness is often observed, especially in the transition regime. Denoting the fracture appearance transition temperature as FATT, the K_Ic versus T relation for a given material appears to be controlled by T′ = T − FATT, and T′ is referred to as the excess temperature (Viswanathan, 1989). As an example of characterization of the scatter in fracture toughness, Fig. 6-10 is a plot of the fracture toughness, K_Ic, as a function of excess temperature for steam turbine rotor steels. Also shown are various percentiles of the distribution of toughness as a function of temperature (Ammirato et al., 1988). These percentiles were determined by considering the distribution of K_Ic at a given excess temperature to be a three-parameter Weibull; that is, the probability density function is

p(K_Ic) = (η/K̄)[(K_Ic − K_o)/K̄]^{η−1} exp{−[(K_Ic − K_o)/K̄]^η}   (6-13)
[Figure 6-8 plot: cumulative probability of C on a lognormal scale for the 1100°F fatigue data, with fitted median C(50) = 1.23 × 10⁻⁸ and μ = 0.1701.]
Figure 6-8. Cumulative lognormal probability plot of fatigue coefficient C, for data from Figure 6-4. (Source: Grunloh, H. J., et al. [1992]. An Integrated Approach to a Life Assessment of Boiler Pressure Parts, Vol. 4: BLESS Code User's Manual and Life Assessment Guidelines. Report on Project RP2253-10. R. Viswanathan, EPRI Project Manager. Palo Alto, California: Electric Power Research Institute. Copyright © 1992. Electric Power Research Institute. Reprinted with permission.)
[Figure 6-9 plot: cumulative probability of C_3 on a lognormal scale for the 1100°F steady-load creep data (3 specimens, 34 data points), with fitted median C_3(50) = 0.0289, μ = 1.0317, and q = 0.732.]
Figure 6-9. Cumulative lognormal probability plot for creep crack coefficient C_3, for data from Figure 6-5. (Source: Grunloh, H. J., et al. [1992]. An Integrated Approach to a Life Assessment of Boiler Pressure Parts, Vol. 4: BLESS Code User's Manual and Life Assessment Guidelines. Report on Project RP2253-10. R. Viswanathan, EPRI Project Manager. Palo Alto, California: Electric Power Research Institute. Copyright © 1992. Electric Power Research Institute. Reprinted with permission.)
where the parameters K̄, η, and K_o are the following functions of T′:

K_o = 50 + 30 tanh[(T′ + 60)/98]
η = 2.05 + 0.35 tanh[(T′ + 150)/98]
K̄ = 56.7 + 46.4 tanh[(T′ + 60)/98]
Notes: Crack length (in.); K (ksi·in.^{1/2}); time (hr); C* (kips/in.·hr); C [(in./cycle)(ksi·in.^{1/2})^{−n}]; C_3 [(in./hr)(kips/in.·hr)^{−q}].
(stress intensity factors are in units of ksi·in.^{1/2} and temperatures in degrees F). The hyperbolic tangent functional form provides the correct shape of the curves as a function of T′, and the constants in the fit were obtained by (nonlinear) maximum likelihood estimates (Ang and Tang, 1975). A good representation of the scatter in the data is provided by this convenient curve fit, which has been employed in PFM analyses of large steam turbine rotors (Ammirato et al., 1988).
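The tanh fits above can be combined with the inverse Weibull CDF to read off toughness percentiles. In this sketch the third fitted parameter is taken as the Weibull scale, with K_o as the lower limit, which is an assumption consistent with the three-parameter form of Eq. (6-13):

```python
import math

def weibull_params(Tp):
    """Fitted parameter functions of excess temperature Tp = T - FATT
    (degrees F; stress intensities in ksi*in^(1/2))."""
    K0 = 50.0 + 30.0 * math.tanh((Tp + 60.0) / 98.0)    # lower limit
    eta = 2.05 + 0.35 * math.tanh((Tp + 150.0) / 98.0)  # shape
    Kbar = 56.7 + 46.4 * math.tanh((Tp + 60.0) / 98.0)  # scale (assumed)
    return K0, eta, Kbar

def kic_percentile(Tp, p):
    """Inverse CDF of the three-parameter Weibull:
    K_Ic(p) = K0 + Kbar * (-ln(1 - p))**(1/eta)."""
    K0, eta, Kbar = weibull_params(Tp)
    return K0 + Kbar * (-math.log(1.0 - p)) ** (1.0 / eta)
```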
The value of FATT may itself be a random variable, and it may be shifted by a variety of mechanisms, including irradiation damage and temper embrittlement. Such shifts may depend on composition and may also be subject to considerable scatter. The statistics of FATT values and their shifts can play an important role in probabilistic analyses of steam turbine rotors and reactor pressure vessels.
The stresses produced by various events (loads), and the probability of such events (loads) occurring,
can also be random variables as discussed extensively elsewhere in this handbook.
The above discussion concentrates on examples of characterization of scatter in material properties.
With the current level of understanding, these can be viewed as inherent scatter in the data, and the
generation of additional data will not reduce the scatter (variance). In contrast to this, there may be
random input variables for which only sparse data are available. In such instances, there may be con-
siderable uncertainty in the distribution type and parameters of the distribution. This uncertainty contributes to the estimated mean and variance, the values of which would most likely change as
additional data become available. Inherent randomness can be combined with uncertainty (due to lack
of information) to provide a failure probability that reflects both factors, or the two factors can be
[Figure 6-10 plot: fracture toughness K_Ic (ksi·in.^{1/2}) versus excess temperature, with data points and fitted Weibull percentile curves at p = 0.01, 0.50, 0.90, and 0.99, along with a lower-limit curve.]
Figure 6-10. Plot of fracture toughness data versus excess temperature along with percentiles of the fitted three-parameter Weibull distribution. (Source: Ammirato et al. [1988]. Life Assessment Methodology for Turbo-Generator Rotors, Vol. 1: Improvements to the SAFER Code Rotor Lifetime Prediction Software. Report No. CS/EL-5593. Palo Alto, California: Electric Power Research Institute. Copyright © 1988. Electric Power Research Institute. Reprinted with permission.)
considered individually to provide a statistical distribution of the failure probability (see Chapter 19 of this handbook); the statistical distribution reflects the uncertainties.
3.2.2. Effects of Inspection. The effects of inspection (and repair) on component reliability can
be effectively addressed by PFM when failure occurs due to the unchecked growth of an undetected
(or improperly sized or repaired) crack. Even though the effects of inspection do not fall within the
realm of fracture mechanics, such effects are briefly addressed here. Chapter 12 of this handbook discusses nondestructive examination reliability. Such reliabilities can be expressed in terms of inspection detection probabilities and uncertainties, which can be used, in conjunction with PFM, to analyze the effects of inspection on component reliability. The probability of detection of a crack of size a, POD(a), is a key input that varies with crack location and inspection procedures, among other things.
Values of POD are typically obtained by successive inspection of a series of samples with defects of
various sizes (not known to the inspector). Inspection uncertainty, which describes the crack size indicated by the inspection (â) for a given true size (a), forms another key ingredient in the proper disposition of detected defects (Johnson, 1976; Berens, 1989). For a given crack size, the probability of detection is POD(a). If the crack is detected, then p(â|a) provides the density function of indicated crack size. If â > a_repair, then some repair is performed. The repaired part may be as good as new (i.e., have a crack size distribution the same as that of a new part), or the repair procedure may produce a part better or worse than new. The inspection itself does not affect the reliability; only repairs or remedial actions performed as a result of the inspection have an effect. Detailed probabilistic models of repair are sparse.
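The effect of an inspection without repair on the crack size distribution can be sketched by thinning the prior density with the non-detection probability. The exponential POD curve and its parameter below are hypothetical placeholders, not a fit to any real inspection procedure:

```python
import math

def pod(a, a_half=0.05):
    """Hypothetical detection curve: POD rises from 0 toward 1 with
    crack size a (a_half is an illustrative scale, in inches)."""
    return 1.0 - math.exp(-a / a_half)

def surviving_density(prior_pdf, grid):
    """Density of cracks that escape detection (no repair modeled):
    proportional to prior(a) * (1 - POD(a)), renormalized on a
    uniform grid of crack sizes."""
    da = grid[1] - grid[0]
    w = [prior_pdf(a) * (1.0 - pod(a)) for a in grid]
    total = sum(w) * da
    return [v / total for v in w]
```

Thinning by 1 − POD(a) shifts the surviving population toward small cracks, which is the qualitative effect an inspection should have.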
Analysis of the effects of inspection is one of the major potential uses of PFM (Harris, 1992). As
discussed above, the initial crack size distribution has a strong influence on calculated failure proba-
bilities. Hence, knowledge of the initial crack size distribution is important for obtaining absolute values
of failure probabilities. However, if the relative failure probabilities due to time- or cycle-dependent
crack growth for alternative inspection strategies are desired, such relative values are not highly de-
pendent on the initial crack size distribution. The ratio of failure probabilities with and without inspec-
tion is virtually independent of the initial crack size distribution, being more dependent on detection
probabilities and crack growth characteristics (Harris and Lim, 1983).
An alternative approach to the analysis of the effects of inspection is to use the results of inspections
to update the probabilistic model, or to select the most realistic of several alternative probabilistic models
(Chapman, 1983; Fujimoto et al., 1989).
viously (Harris, 1985). Consider a through crack of length 2a in an infinite plate subjected to a stress
S far from the crack. In this case
K = S(πa)^{1/2}   (6-14)
(This simple geometry greatly simplifies the K-relation, as can be seen by comparing Equations [6-14]
and [6-2].) Let failure occur when K exceeds K_Ic. Consider the fatigue crack growth rate to be given by Eq. (6-4), which is repeated here for convenience:

da/dN = C(ΔK)^n   (6-4)

(The simplification of using this expression rather than the more precise modified Forman relation of Eq. [6-3] is obvious.)
The cycles to failure when the plate is subjected to a stress ranging from zero to S, for an initial crack size a_i, is given by the cycles to grow this crack to the critical size, a_c, which is given by

a_c = (1/π)(K_Ic/S)²   (6-15)
(Calculation of the critical crack size would be more involved for more complex geometries. For in-
stance, a K-solution such as Eq. [6-2] would require a trial and error approach to evaluation of ae .)
The cycles to failure is obtained by inserting Eq. (6-14) into Eq. (6-4) and integrating. Such a
procedure provides the following expression for crack size a after N cycles, for an initial crack size,
ai, and n = 4:

a(N) = ai/(1 − π²CS⁴aiN)    (6-16)

Setting a(N) equal to the critical size ac and solving for N then provides the cycles to failure:

Nf = (1/(π²CS⁴))(1/ai − 1/ac)    (6-17)
The failure probability within N cycles, for a given distribution of initial flaw size ai, cyclic stress S,
and fracture toughness KIc, is P(Nf < N), which is obtainable from the distribution of Nf. Even for the
simple expression (Eq. [6-17]), the distribution of Nf cannot be readily evaluated analytically.
In the case of ai being the only random variable, a closed-form expression for the failure probability
can be obtained. After N cycles, the probability of having a crack larger than ac is the probability of
initially having a crack of size larger than that which would grow to ac in N cycles. From the above
expressions, this is given by

P(Nf < N) = P[ai > ac/(1 + π²CS⁴acN)]    (6-18)

which, for the exponential initial crack size distribution of the example problem (mean λ; see Table 6-2), becomes

P(Nf < N) = exp[−ac/(λ(1 + π²CS⁴acN))]    (6-19)
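The closed-form result can be sketched numerically. The following is a minimal illustration, assuming an exponential initial crack size distribution and holding C, S, and KIc at representative values; the numbers are illustrative, not from the text:

```python
import math

# Closed-form failure probability sketch, assuming an exponential initial
# crack size distribution (mean lam) and deterministic C, S, and KIc.
# All numerical values are illustrative.
lam = 0.02     # mean initial crack size
S = 16.0       # cyclic stress
KIc = 30.0     # fracture toughness
C = 1.0e-9     # crack growth coefficient (n = 4)

a_c = KIc**2 / (math.pi * S**2)   # critical crack size, Eq. (6-15)

def failure_probability(N):
    """P(Nf < N): probability that ai exceeds the size that would
    just grow to the critical size a_c in N cycles."""
    a_star = a_c / (1.0 + math.pi**2 * C * S**4 * a_c * N)  # from Eq. (6-16)
    return math.exp(-a_star / lam)  # exponential upper-tail probability

probs = [failure_probability(N) for N in (1, 100, 10_000)]
```

Because the crack size that just grows to critical decreases as N increases, the computed probability increases monotonically with the number of cycles, as expected.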
If C, S, and Kic are also random variables, then an expression as simple as Eq. (6-19) cannot be
Probabilistic Fracture Mechanics 123
written, but multiple integration is involved. For all but the simplest of problems, the integration would
be performed numerically. Such integration can be tricky, and the integrand itself cannot be written
down for problems of more realistic complexity. The work of Tanaka (1989) provides an example of
an elaborate scheme built around integration and "stochastic crack growth." The point is soon reached
at which the immediate application of numerical procedures is called for.
An intermediate approach is possible that provides approximate results. This approach uses the
procedure often referred to as the "generation of system moments" (Hahn and Shapiro, 1967). Its
application to the example problem under discussion is described in Harris (1985). If Eq. (6-16) gives
the crack size at N cycles, and ai, S, C, and KIc are random variables, then the mean crack size after N
cycles is given by (Hahn and Shapiro, 1967, p. 229, Eq. [7-3])
ā(N) ≈ a(X̄1, X̄2, X̄3) + (1/2) Σi (∂²a/∂Xi²) σi²    (6-20)

with an analogous equation for the variance of a(N). In Eq. (6-20), a bar over a variable indicates mean
value; X1, X2, and X3 are ai, S, and C, respectively; the derivatives are evaluated at the mean values; and
σi denotes the standard deviation of Xi. Similar expressions for the mean and variance of the critical
crack size (āc, σc²) can also be obtained. If a(N)
and ac are assumed to be normally distributed, the failure probability is given by

Pf = Φ[(ā(N) − āc)/(σ²a(N) + σc²)^(1/2)]    (6-21)

where Φ(y) is the cumulative distribution function of a unit normal variate. In problems of realistic
complexity, expressions such as Eq. (6-20) cannot be written, and this method is difficult to implement.
The accuracy of this approximate analytical technique, when applied to the simple example problem,
is addressed following discussion of numerical techniques.
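The method of system moments can be sketched as follows. The derivatives are taken numerically rather than analytically, and while the means and standard deviations follow Table 6-2, the code is an illustrative approximation, not the chapter's implementation:

```python
import math
from statistics import NormalDist

# "Generation of system moments" sketch (after Hahn and Shapiro, 1967):
# second-order approximation of the mean and first-order approximation of
# the variance of a function of independent random variables, applied to
# the crack size a(N) of Eq. (6-16) and the critical size of Eq. (6-15).
# Derivatives are numerical; parameter values follow Table 6-2.

def a_of_N(ai, S, C, N):
    """Crack size after N cycles, Eq. (6-16) with n = 4."""
    return ai / (1.0 - math.pi**2 * C * S**4 * ai * N)

def moments(f, means, sds, h=1e-5):
    """Approximate mean and variance of f(X1, ..., Xm), independent Xi."""
    f0 = f(*means)
    mean, var = f0, 0.0
    for i, (m, s) in enumerate(zip(means, sds)):
        hp = h * abs(m)
        up = list(means); up[i] = m + hp
        dn = list(means); dn[i] = m - hp
        d1 = (f(*up) - f(*dn)) / (2.0 * hp)         # df/dXi at the means
        d2 = (f(*up) - 2.0 * f0 + f(*dn)) / hp**2   # d2f/dXi2 at the means
        mean += 0.5 * d2 * s**2                     # second-order mean term
        var += (d1 * s) ** 2                        # first-order variance term
    return mean, var

N = 10_000
m_a, v_a = moments(lambda ai, S, C: a_of_N(ai, S, C, N),
                   [0.02, 16.0, 1.13e-9], [0.02, 2.0, 6.04e-10])
m_ac, v_ac = moments(lambda KIc, S: KIc**2 / (math.pi * S**2),
                     [30.0, 16.0], [5.037, 2.0])
# Normal interference of a(N) and the critical size ac
P_f = NormalDist().cdf((m_a - m_ac) / math.sqrt(v_a + v_ac))
```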
The stress-strength interference method also provides analytical solutions for some simple prob-
lems, especially those involving only two random variables, for certain types of probability distributions
(Bloom, 1984). When analytical solutions are available, results can be obtained by hand within a few
minutes. The stress-strength interference method is discussed in Chapter 2 of this handbook.
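For two independent normal variables, the interference calculation reduces to a one-line formula; the strength and stress parameters below are illustrative:

```python
from statistics import NormalDist

# Stress-strength interference for two independent normal variables:
# failure when strength R falls below demand S, so P_f = P(R - S < 0).
# The means and standard deviations are illustrative.
mu_R, sd_R = 30.0, 5.0     # "strength" (e.g., toughness)
mu_S, sd_S = 16.0, 2.0     # "stress" (e.g., applied demand)

beta = (mu_R - mu_S) / (sd_R**2 + sd_S**2) ** 0.5   # safety index
P_f = NormalDist().cdf(-beta)                       # failure probability
```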
3.3.2. Numerical Techniques. Numerical techniques are generally needed for generation of results
for problems of reasonable complexity. The Monte Carlo technique, which is discussed in Chapter 4
of this handbook, has been most widely used in PFM analyses, because of its generality, ease of
implementation, and ability to handle possible dependencies between the input random variables. The
most generally quoted drawback to Monte Carlo simulation is the computer time involved, but the
computer expenses may be no higher than those associated with numerical integration, and this is a
diminishing problem in these days of rapidly increasing computer power at decreasing cost.
Conceptually, Monte Carlo techniques are straightforward to visualize. A value of each input random
variable is selected at random from its distribution. The randomly sampled input variables are used to
calculate a value of the dependent variable. This is repeated many times, and a histogram of the
dependent variable is constructed. Such a procedure is called Monte Carlo simulation and provides exact
results as the number of simulations becomes large. Increased accuracy merely requires more
simulations.
In the context of the example problem discussed in Section 3.3.1, values of ai, C, S, and KIc are
randomly sampled, and a value of Nf is calculated by use of Eq. (6-17). Each such computation is called
a trial, and a number of such trials is carried out. In this way a histogram of Nf values is constructed,
from which the statistical distribution of Nf is estimated. This provides P(Nf < N), that is, the probability
of failure of the structural component within N cycles. A single Monte Carlo trial is no more complex (or any simpler)
than the corresponding deterministic fracture mechanics analysis.
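A minimal Monte Carlo sketch of this procedure might look as follows, with distribution parameters taken from Table 6-2 but all other details (seed, trial count) purely illustrative:

```python
import math, random

# Monte Carlo sketch for the example problem: sample ai, S, KIc, and C
# from the Table 6-2 distributions, compute Nf from Eq. (6-17), and count
# the fraction of trials with Nf < N.  The sampling recipes are standard;
# the seed and trial count are arbitrary choices.
random.seed(1)

def sample_Nf():
    ai = random.expovariate(1.0 / 0.02)              # exponential, mean 0.02
    S = random.gauss(16.0, 2.0)                      # normal
    KIc = random.weibullvariate(32.07, 7)            # Weibull, b = 32.07, c = 7
    C = random.lognormvariate(math.log(1e-9), 0.5)   # lognormal, median 1e-9
    a_c = KIc**2 / (math.pi * S**2)                  # Eq. (6-15)
    if ai >= a_c:
        return 0.0                                   # fails on first load cycle
    return (1.0 / ai - 1.0 / a_c) / (math.pi**2 * C * S**4)  # Eq. (6-17)

N, trials = 10_000, 100_000
P_f = sum(sample_Nf() < N for _ in range(trials)) / trials
```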
In many instances, the probability of failure will be small. Hence, many trials would be required in
order to determine the failure probability accurately. This can be alleviated by selectively sampling from
the distributions of the input variables in order to draw from the tails of the distributions, which are
controlling the failure probabilities. This selective sampling is then compensated for by manipulation
of the numerically generated results. Such procedures are referred to as stratified or importance sampling
(see Chapter 4). As an example, if it is known that large initial crack sizes (which are present with low
probability) are required for a failure to occur, then it is a waste of effort to sample randomly from the
initial crack size distribution, because the vast majority of sampled cracks would be from the (much
more likely) small cracks that would not result in failure within the time frame of interest. Stratified
sampling would involve sampling only from the large crack end of the initial crack size distribution,
and then compensating the results by suitably factoring the probability of having an initial crack in the
portion of the distribution sampled. This procedure has been incorporated in the PRAISE code for
evaluation of piping reliability (Harris et al., 1981, 1992; Harris and Lim, 1982) and has provided
substantial savings in computer expense. Results for Monte Carlo simulation on the simple example
problem will be presented following discussion of another approximate technique for estimating failure
probabilities.
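The idea can be sketched for the example problem with ai as the only random variable. The truncation point a_min is a hypothetical choice, and the memoryless property of the exponential distribution makes tail-conditioned sampling trivial:

```python
import math, random

# Stratified/importance sampling sketch: sample ai only from the
# large-crack tail (ai > a_min) and weight the result by the tail
# probability.  The exponential distribution is memoryless, so a tail
# sample is just a_min plus an ordinary exponential sample.  a_min and
# the deterministic values are illustrative.
random.seed(2)
lam, a_min = 0.02, 0.08                    # mean crack size; truncation point
S, KIc, C, N = 16.0, 30.0, 1e-9, 10_000    # other variables deterministic

a_c = KIc**2 / (math.pi * S**2)            # critical crack size, Eq. (6-15)
w = math.exp(-a_min / lam)                 # P(ai > a_min): the tail weight

trials, failures = 20_000, 0
for _ in range(trials):
    ai = a_min + random.expovariate(1.0 / lam)       # tail-conditioned sample
    Nf = (1.0/ai - 1.0/a_c) / (math.pi**2 * C * S**4) if ai < a_c else 0.0
    failures += Nf < N
P_f = w * failures / trials   # unbiased here: every failure needs ai > a_min
```

With this choice of a_min, every failing trial requires an initial crack larger than a_min, so all failures are captured and the estimator agrees with the closed-form result while spending no trials on small, harmless cracks.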
In some instances, a faster means of estimating failure probabilities than Monte Carlo simulation is
desired. This is true especially when a single trial requires excessive computer time, for example, use
of three-dimensional finite elements with explicit crack modeling.
An alternative procedure for estimation of failure probability can be based on approximations to the
integral involved. These procedures are based on estimating the location of the most probable failure
point (MPFP) in the M-dimensional space of the M random input variables (Shinozuka, 1983), and
approximating the volume of the M-dimensional joint density function over the region of the space that
corresponds to failure. This approximation of the volume provides an estimate of the failure probability.
The procedures involved are often referred to as the Rackwitz-Fiessler algorithm (Rackwitz and Fies-
sler, 1978) and are discussed in some detail in Chapter 3 of this handbook.
The procedure is most easily explained for two random variables, both of which are normally dis-
tributed and independent of one another. Once this simple case is understood, it is relatively straight-
forward to advance to the case of M random variables that are not normally distributed, and then to
the case of correlated random variables. The two normal variables X1 and X2 (fracture toughness and
crack size, for instance) are normalized to unit normal variates, U1 and U2:

U1 = (X1 − X̄1)/σX1,  U2 = (X2 − X̄2)/σX2    (6-22)

The joint density function in X1-X2 space becomes an axisymmetric unit normal density function in
U1-U2 space. The failure condition can be rewritten in terms of X1 and X2, such as KIc − Kapplied <
0. This defines a curve on the X1-X2 plane, which can be transformed to a curve on the U1-U2
plane. This is depicted pictorially in Fig. 6-11. The failure condition on the U1-U2 plane can be
rewritten as
g(Ui) ≥ 0, safe;  g(Ui) < 0, failure    (6-23)

The failure probability is then approximated in terms of the distance β from the origin to the nearest
point of the failure curve (the most probable failure point):

Pf ≈ Φ(−β)    (6-24)

This result is true for any number of dimensions (random variables), and the "failure curve" is replaced
by a "failure hypersurface" in M-space. Hence, extension to M normal random variables is straight-
forward. (The distance β is known as the safety index or reliability index.)
The approximate procedures can also be extended to nonnormal variates by either (1) replacing the
nonnormal distributions with equivalent normal distributions in the vicinity of the MPFP, or (2) using
a Rosenblatt transformation to convert nonnormal distributions to unit normals (see Chapter 4). An
equivalent normal distribution at a given point is the normal distribution with a mean and standard
Figure 6-11. Pictorial representation of joint density function in unit variate space, showing failure curve and most
probable failure point (mpfp).
deviation selected to match the value of the density function and cumulative distribution of the non-
normal distribution at that point. This provides the normal distribution that matches the joint density
function and its slope at that point. A Rosenblatt transformation is a stretching of the random variable
to transform its scale so that its cumulative distribution is the same as a unit normal variate.
Correlations between random variables are treated by rotation of coordinates (to remove correlations)
and/or Rosenblatt transformations (see Chapter 4).
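The equivalent normal construction can be sketched as follows for a lognormal variable; the parameters and the matching point are illustrative:

```python
import math
from statistics import NormalDist

# Equivalent normal sketch: at a point x_star, find the normal
# distribution whose CDF and PDF both match those of a nonnormal
# variable there.  A lognormal (median med, log-standard deviation zeta)
# is used as the nonnormal example; all values are illustrative.
med, zeta = 1e-9, 0.5
x_star = 2e-9                       # matching point (e.g., a guessed MPFP)

def ln_cdf(x):
    """Lognormal cumulative distribution function."""
    return NormalDist().cdf(math.log(x / med) / zeta)

def ln_pdf(x):
    """Lognormal probability density function."""
    return NormalDist().pdf(math.log(x / med) / zeta) / (x * zeta)

z = NormalDist().inv_cdf(ln_cdf(x_star))      # matching normal quantile
sd_eq = NormalDist().pdf(z) / ln_pdf(x_star)  # matches the density value
mu_eq = x_star - sd_eq * z                    # matches the CDF value
```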
The remaining trick is to locate the MPFP in M-space. This is where the Rackwitz-Fiessler algorithm
enters, and is accomplished by an iterative procedure that involves guessing the location of the most
probable failure point, and then updating the guesses until some convergence criterion is satisfied (such
as a guess and its update being sufficiently close to one another). Let the failure curve be defined by
the equation
Figure 6-12 diagrammatically shows the steps involved. Equivalent normal distributions are used in the
approach shown in this Figure. An initial guess of the MPFP is made, (3 is evaluated, and the perform-
ance function g(U;) = 0 is evaluated at this point (go). The performance function is linearized at the
guessed MPFP, and (3 is solved for so that g = 0 (as it should if the guessed point is on the failure
curve). The linearization of g at the guessed point provides a linear equation for (3-which is the
updated value. If the updated (3 is sufficiently close to the previous guess for (3, the process is stopped.
Otherwise, using the new value of (3 and Uj , the process is repeated.
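A minimal iteration of this kind, for the two-normal-variable case described above, might look like the following; the means, standard deviations, and stress are illustrative, and the update formula is the standard HL-RF step rather than necessarily the exact scheme of Fig. 6-12:

```python
import math
from statistics import NormalDist

# Rackwitz-Fiessler (HL-RF) iteration sketch for two independent normal
# variables: X1 = fracture toughness, X2 = crack size, with failure when
# the applied K = S*sqrt(pi*X2) exceeds X1.  All values are illustrative.
mu = (30.0, 0.1)        # means of X1 (toughness) and X2 (crack size)
sd = (5.0, 0.02)
S = 40.0                # fixed far-field stress

def g(u1, u2):
    """Performance function in unit-variate space; g < 0 is failure."""
    x1, x2 = mu[0] + sd[0] * u1, mu[1] + sd[1] * u2
    return x1 - S * math.sqrt(math.pi * x2)

def grad(u1, u2, h=1e-6):
    """Numerical gradient of g."""
    return ((g(u1 + h, u2) - g(u1 - h, u2)) / (2 * h),
            (g(u1, u2 + h) - g(u1, u2 - h)) / (2 * h))

u = (0.0, 0.0)          # initial guess: the mean point
for _ in range(50):
    g0, (g1, g2) = g(*u), grad(*u)
    t = (g1 * u[0] + g2 * u[1] - g0) / (g1 * g1 + g2 * g2)  # HL-RF step
    u_new = (t * g1, t * g2)
    if math.dist(u, u_new) < 1e-7:
        u = u_new
        break
    u = u_new

beta = math.hypot(*u)                 # safety index
alpha = (u[0] / beta, u[1] / beta)    # direction cosines of the MPFP vector
P_f = NormalDist().cdf(-beta)         # approximate failure probability
```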
The underlying mathematics is not reproduced here, but is discussed in Madsen et al. (1986) and
Melchers (1987), as well as in Chapter 3 of this handbook. Also, Ang and Tang (1984) provide many
numerical examples, with results for intermediate iterations, along with background information on the
mathematics. The iterative process generally converges rapidly (if it converges at all), and the number
of calls to the underlying deterministic model (performance function) is related to the number of deriv-
atives (∂g/∂Ui) (= number of random variables) times the number of iterations. This number is usually
much less than the number of Monte Carlo trials that would be required to obtain the same results. The
iterative procedure has less of an advantage over Monte Carlo when the number of random variables
increases. The burden of Monte Carlo simulation is generally not significantly increased by more random
variables.
An intermediate product of these calculations is the values of the direction cosines of the β vector,
which is the vector from the origin to the most probable failure point in reduced unit variate space.
These direction cosines are denoted as αi in Fig. 6-12, and serve as measures of the sensitivity of the
failure probability to the input random variables. For instance, if the β vector is perpendicular to an axis
Ui (i = 1, 2, ..., or M), β is not influenced by Ui, the angle between Ui and β is 90°, and the direction
cosine is zero. Madsen et al. (1986, p. 54) state that β is approximately altered by a factor of
1/(1 − αi²)^(1/2) if σi (the standard deviation of Xi) is set equal to zero. Hence, if αi is small, little error is
introduced by considering Xi to be deterministic and equal to its mean value. The direction cosines
provide sensitivity measures which are a useful by-product of the use of the Rackwitz-Fiessler iterative
procedure.
The procedure has not been widely used in fracture mechanics, or related life prediction methodol-
ogies. This is probably because of its relative newness and lack of familiarity among fracture mechanics
practitioners, and because Monte Carlo procedures are usually suitable, as computer time is generally
not excessive. Wu et al. (1987) and Bruckner (1987) provide examples of fracture mechanics-
related applications, which are not numerous. The procedure is approximate and is not a replacement
for Monte Carlo simulation. Monte Carlo can be made as accurate as desired by performing sufficient
trials. The procedure does generally provide reasonable accuracy with substantial computer savings.
Results for the simple example problem discussed in Section 3.3.1 were generated by Monte Carlo
simulation and by the Rackwitz-Fiessler iteration technique (using equivalent normal distributions). The
direction cosines were generated in the process. Table 6-2 summarizes the random variables for the
example problem, their distribution types, and parameters of the distributions.
Figure 6-12. Diagrammatic representation of steps in Rackwitz-Fiessler algorithm for locating most probable
failure point. (The flowchart linearizes g at the guessed MPFP, solves for an updated β, updates the ui, checks
whether βnew is close enough to βold to satisfy convergence, and on convergence outputs β, the αi, and the ui.)
Table 6-2. Random Variables for the Example Problem

Variable   Distribution   Probability density function                 Parameters            Mean          Standard deviation
ai         Exponential    (1/λ)exp(−ai/λ)                              λ = 0.02              0.02          0.02
S          Normal         [1/(σ(2π)^1/2)]exp[−(S − S̄)²/(2σ²)]          S̄ = 16, σ = 2         16            2
KIc        Weibull        (c/b)(KIc/b)^(c−1)exp[−(KIc/b)^c]            b = 32.07, c = 7      30            5.037
C          Lognormal      [1/(Cμ(2π)^1/2)]exp{−[ln(C/C50)]²/(2μ²)}     C50 = 10⁻⁹, μ = 0.5   1.13 × 10⁻⁹   6.04 × 10⁻¹⁰
The performance function employed was g = ac − a(N), with failure when g < 0; this is a rearrangement
of the failure condition Nf < N, using Eq. (6-16) for a in terms of the input variables. The failure
probability was evaluated as a function of the number of fatigue cycles, N, starting with N = 1.
Corresponding results by Monte Carlo simulation (2 × 10⁵ samples) and the "method of system
moments" (Eq. [6-21]) were also generated. The method of system moments and Rackwitz-Fiessler
iterations are particularly simple for this problem, because the necessary derivatives can be obtained
analytically. In addition, analytical results, treating ai as the only random variable and all the other
variables as deterministic with their values fixed at their mean values (as given in Table 6-2), were
generated by use of Eq. (6-19). Figure 6-13 summarizes the results. This figure shows that the Rackwitz-
Fiessler and Monte Carlo results agree very well. The system moments results are inaccurate especially
at points away from the median [P(N) = 0.50]. The analytical results (with ai as the only random
variable) agree very well with the Rackwitz-Fiessler and Monte Carlo results for cycles greater than
about 10,000. This indicates that the initial crack size is the most influential random variable for cycles
greater than this value.
No effort was made to quantify the computer time required for the various procedures, but the Monte
Carlo simulations required about 100 times as much computer time as the Rackwitz-Fiessler iterations.
However, the 2 × 10⁵ Monte Carlo trials required only tens of seconds on a 486, 33-MHz personal
computer.
Figure 6-14 presents the direction cosines as a function of the number of cycles, N. These results
show that the direction cosine for ai is large for all N, hence it is always influential. C has little influence
for small values of N, and KIc has little influence for large values of N. This is physically reasonable.
On the first loading cycle, the fatigue crack growth coefficient C has no influence. Once many load
cycles are applied, C (fatigue) becomes important. For large values of N, the fracture toughness (KIc)
has no effect, because the crack spent most of its life growing at small da/dN when a was small, and
the life is only weakly influenced by the critical crack size (which is controlled by KIc). Figure 6-14
shows the direction cosine for ai is the largest (in absolute value) for N greater than about 3000. This
is consistent with the observation in Fig. 6-13 that the results when ai is the only random variable agree
closely with the other results for large N. The direction cosines demonstrate the expected results and
should provide useful sensitivity measures in problems with more subtle interactions between random
variables.
The Rackwitz-Fiessler iterative approach appears to be capable of providing a speedy alternative to
Monte Carlo simulation for PFM calculations. The approach does have its limitations, such as a difficulty
in treating the effects of actions taken during the life of the structural component being analyzed, such
as repairs made as the result of in-service inspections, or midlife changes to operating procedures made
as a result of simulated operating experience.
Probabilistic finite element methods and probabilistic boundary element methods have also been used
for complex PFM problems. These methods are discussed in Chapter 5 of this handbook.
Figure 6-13. Cumulative failure probability as a function of the number of fatigue cycles for example problem, as
generated by various methods.
4. RECENT ADVANCES
Advances have occurred on many fronts involving probabilistic fracture mechanics, both in the under-
lying deterministic fracture mechanics and in the probabilistic treatments.
Figure 6-14. Direction cosines for the four random variables of the example problem, as a function of the number
of fatigue cycles.
of the stress intensity factors with corresponding solutions for isotropic materials allows maximum use
to be made of existing handbooks (Tada et al., 1985; Murakami, 1987). Cases in which the scale of
inhomogeneity is appreciable are more complex, both in characterizing the crack driving force and the
response of the materials.
The use of J-integrals for treatment of cyclic and monotonic loading into the plastic regime has
provided a major advancement beyond linear-elastic fracture mechanics (Kanninen and Popelar, 1985;
Anderson, 1991). To date, such methodologies have been largely restricted to plane strain or plane
stress. Recent J-solutions for semielliptical cracks (Yagawa et al., 1986; Zahoor, 1989) allow extensions
to three-dimensional problems. Constraint effects in conjunction with J-integrals have led to an increased
ability to predict the stability of realistic surface cracks in elastic-plastic metals (see, e.g., discussions
in Wang et al. (1991) and references cited therein).
Advances have also been made in understanding the growth of "small cracks" (ASTM, 1992).
Improved understanding in this area is important in bridging the gap between crack initiation and crack
growth. Bridging this gap would allow combinations of probabilistic fatigue analysis (as discussed in
Chapter 7 of this handbook), and probabilistic fracture mechanics (as discussed in this chapter). This
is important in the development of comprehensive models of failure of cyclically loaded materials that
are initially free of macroscopic defects.
5. COMPUTER SOFTWARE
As mentioned in Section 3.3.2, numerical techniques are required for PFM, if for no other reason than that
the underlying deterministic analyses require numerical procedures. Once Monte Carlo simulation or Rack-
witz-Fiessler procedures are involved, a computer is mandatory for all but unrealistically simple problems.
Software is available for deterministic analysis of crack growth in problems of realistic complexity, using accurate crack growth laws. For instance, the growth of a crack with a K-relation
such as given in Eq. (6-2) governed by a crack growth law such as Eq. (6-3), is usually analyzed by a
numerical procedure that considers crack growth to be constant during a small interval of cycles, with
updates on K (and da/dN) as the crack extends.
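Such a cycle-block procedure can be sketched as follows, using the infinite-plate K of Eq. (6-14) so that the numerical result can be checked against the closed form of Eq. (6-16); all values are illustrative:

```python
import math

# Cycle-block integration sketch: hold da/dN constant over a block of
# cycles, then update K (and the growth rate) as the crack extends.
# The infinite-plate K of Eq. (6-14) is used so the result can be
# checked against the closed form of Eq. (6-16); values are illustrative.
S, C, n = 16.0, 1e-9, 4
ai, N_total, block = 0.02, 10_000, 50   # initial size; total cycles; block size

a = ai
for _ in range(N_total // block):
    dK = S * math.sqrt(math.pi * a)     # Delta-K for zero-to-S loading
    a += C * dK**n * block              # growth rate held constant over the block

a_exact = ai / (1.0 - math.pi**2 * C * S**4 * ai * N_total)   # Eq. (6-16)
```

For this smooth growth law, even a coarse 50-cycle block reproduces the closed-form crack size to well under one percent.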
For cracks growing in complex stress fields, numerical evaluation of stress intensity factors is often
performed by numerical integration of influence functions (Cruse and Besuner, 1975). Convenient curve
fits to influence function results generated by numerical procedures are used in the numerical integration.
Hence, even evaluation of K often involves numerical procedures. The numerical procedures for eval-
uation of K for crack geometries that have not been studied before is a field unto itself and is not
addressed here. Such procedures include finite elements (Hsu, 1986), boundary collocation (Gross et
ai., 1964), and boundary integral equations (Cruse, 1975).
Computer programs for numerical analysis of crack growth are often custom written for a specific
problem, but general-purpose software is available. Table 6-3 summarizes software currently publicly
available for analysis of crack growth and instability. This table provides a selection and is
most likely not complete.
Table 6-3. Summary of Publicly Available Software for Deterministic Analysis of Crack Growth and
Instability
Name    Cycle- or time-dependent growth    Crack stability    Ref.    Available from
Table 6-4. Summary of Publicly Available Software for Probabilistic Fracture Mechanics

Name         Application                                                  Ref.                         Available from
pc-PRAISE    Stress corrosion and fatigue crack growth in commercial     Harris et al. (1981, 1992)   Lawrence Livermore National Laboratory (Livermore, CA)
             power reactors
SAFER        Creep-fatigue crack growth in steam turbine rotors;         Ammirato (1988)              Electric Power Research Institute (Palo Alto, CA)
             includes thermal and stress analysis
BLESS        Creep-fatigue crack initiation and growth in boiler         Grunloh et al. (1992)        Electric Power Research Institute (Palo Alto, CA)
             components; includes thermal and stress analysis
PACIFIC      Fatigue crack growth                                        Dedhia and Harris (1988)     Failure Analysis Associates (Menlo Park, CA)
PROBAN       Comprehensive structural reliability code; includes some                                 DNV Industrial Services (Houston, TX)
             fracture mechanics; can assist in development of
             inspection strategies
RRing-Life   Crack initiation and growth due to stress corrosion         Riccardella et al. (1991)    Electric Power Research Institute (Palo Alto, CA)
             cracking in electrical generator retaining rings
VISA         Pressurized thermal shock in nuclear reactor pressure       Simonen et al. (1986)        COSMIC Code Center (Atlanta, GA)
             vessels
NESSUS       Comprehensive structural reliability code; includes some    Millwater et al. (1992)      Southwest Research Institute (San Antonio, TX)
             fracture mechanics
Computer programs for PFM analysis are often custom written for a specific application. However, some PFM software is publicly available,
and Table 6-4 summarizes such software. Interestingly, none of the deterministic codes of Table 6-3
has a probabilistic counterpart in Table 6-4. Table 6-4 includes only probabilistic codes with a strong
fracture mechanics component. Other codes exist for probabilistic structural reliability analysis that do
not specifically consider fracture mechanics, and are not included in Table 6-4. Table 6-4 is probably
incomplete, but is considered to be a representative sample of available software.
6. DATA
Probabilistic fracture mechanics analyses generally require considerably more data than are necessary
for the corresponding deterministic problem. In deterministic analyses, it is generally desired to char-
acterize mean or upper bound values or lines, rather than the distribution types and parameters of the
distribution.
Data on distributions of material properties can be generated in the laboratory and/or gathered from
the literature. Figures 6-4 and 6-5 provide examples of data for crack growth characterizations. Figure
6-10 is an example of statistical characterization of fracture toughness based on compilation of data
reported in the literature. Uncertainties in distribution type and parameters of the distribution due to
sparseness of data should be kept in mind, but have rarely been considered in PFM analyses.
Data on load histories or spectra can be gathered experimentally, such as directly from strain gages,
or can be based on engineering models that employ some well-characterized underlying forcing function,
such as wave spectra in the ocean or statistical distribution of wind loads based on past data. A number
of application chapters in this handbook (for example, Chapters 24 to 26) discuss probabilistic models
for various types of loads acting on various types of structures.
Data on inspection reliability and uncertainty can be gathered in the laboratory or from the literature.
Chapter 11 provides details in this area. Generation of laboratory data is preferable, but reasonable
estimates can often be made from results reported in the literature (e.g., see Rummel et al., 1989).
Finally, data on the initial crack size distribution and location are important to any PFM analysis.
The importance of initial crack size that is demonstrated in the simple example problem of Section
3.3.2 is typical, and the initial crack size distribution forms a key input to any PFM analysis. Unfor-
tunately, information on initial crack size distributions is sparse and expensive to gather. Data such as
shown in Fig. 6-7 are desirable, but are the exception and are rarely available. Furthermore, the trans-
ferability of such data to other weld procedures and materials is poorly understood.
Estimates of initial crack sizes can be made on the basis of past experience and engineering judgment,
or by back calculation from failure data, such as the example presented in Sire and Harris (1992).
Section 8 discusses future trends in PFM, including estimation of initial defect distributions from models
of welding or the results of inspections.
7. APPLICATIONS
Fracture mechanics is a general area that has wide applicability. It is basically suitable for situations in
which failure occurs because of the unchecked growth of a dominant crack. Probabilistic fracture me-
chanics is also then widely applicable, and has, in fact, been used in a variety of fields, including
ceramics and a wide range of metals. A number of examples are provided elsewhere in this handbook
(see, e.g., Chapters 5, 22, 25, and 30), and only a minimal overview is provided here. Harris and Balkey
(1993) and the proceedings of a recent Symposium on Reliability Technology (ASME, 1992) contain
numerous applications covering a wide range of industries.
The work of Sire et al. (1992) provides an example of the use of PFM for inspection planning and
repair of container ships. In this case, deck doubler plates were welded to the deck of a fleet of container
ships during the process of adding cargo bays. Figure 6-15 shows a cross-section of the ship and a
close-up of a deck doubler plate weld. Soon after being placed in service, cracks were observed to
occur in the deck doubler plates at the welds. The cracks were found to have initiated from large internal
weld defects, such as lack of fusion at the butt welds.
The simplest repair scheme would have involved removing the entire length of doublers, rewelding
the butt joints, and then completing the associated fillet welds. However, the time and cost of this
procedure were prohibitive. An effective repair strategy was needed by which the cost of initial in-
spection and repair could be balanced against the cost of intermittent, periodic inspections and the
probability of detecting new cracks that would result in further disruption of operation.
The problem at hand was a complex one, involving several technical and economic issues. The
technical issues involved ultrasonic inspection procedures, repair schemes, welding procedures, and
alternative joint geometries for main deck doubler butt joints. The economic factors included inspection
time and interval, lost service time required for repairs, the costs associated with future crack devel-
opment, and the potential for catastrophic failure. Several weld repair alternatives were considered in
an effort to increase the predicted reliability of the joint.
The above issues, combined with an improved inspection procedure, were aimed at the reduction of
the initial flaw size and the reduction of residual stresses. The order of importance of each parameter
affecting the butt weld life was not known in advance and had to be determined through analysis. In
addition, several of the input variables, namely, the initial flaw size distribution, the material properties,
and the stress state, were random in nature.
A PFM approach is ideally suited to this type of life prediction/maintenance optimization problem,
and such an approach was developed to predict the expected life of doubler butt joints for any set of
input variables such as flaw size distribution, flaw detection criteria, flaw rejection size, weld fracture
toughness, joint residual stress, and inspection interval.
A probabilistic model of deck doubler butt weld lifetime based on fracture mechanics analysis of
fatigue growth of preexisting weld defects was constructed. The model was based on a deterministic
fatigue crack propagation model. Initial crack size, maximum cyclic stress level, and fracture toughness
were taken to be random.
Figure 6-15. Midship section showing location of main deck, sheer strake, and bottom doublers, with close-up of
cross-section through doubler butt weld. (Source: Sire et al. [1992]. Reprinted with permission.)
The initial flaw size distribution was estimated from the reported results of early ultrasonic inspec-
tions. The estimates were based on the assumption that all the reported indications in the vicinity of
the bottom of the doubler plate were defects. The inspection provided two points on the cumulative
crack depth distribution. Assuming the crack depth to be lognormally distributed allowed the parameters
of the distribution to be evaluated.
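The fitting step can be sketched as follows; the two (depth, cumulative probability) points are hypothetical stand-ins, not the ship inspection data:

```python
import math
from statistics import NormalDist

# Lognormal fit from two points on the cumulative crack-depth
# distribution: ln(a) = mu + sigma*z at each point gives two linear
# equations in mu and sigma.  The two (depth, probability) points below
# are hypothetical, not from the ship study.
(a1, p1), (a2, p2) = (1.0, 0.90), (3.0, 0.99)   # depth (e.g., mm), CDF value

z1, z2 = NormalDist().inv_cdf(p1), NormalDist().inv_cdf(p2)
sigma = (math.log(a2) - math.log(a1)) / (z2 - z1)   # log-standard deviation
mu = math.log(a1) - sigma * z1                      # log-mean

def crack_cdf(a):
    """Fitted lognormal cumulative distribution of initial crack depth."""
    return NormalDist().cdf((math.log(a) - mu) / sigma)
```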
The following nondetection probabilities were employed:
The detection size is the depth of cracks defined in the inspection procedure as requiring repair-if
detected. This nondetection probability distribution defines the detection size to be equal to the rejection
size. Although rather simplistic, this detection/rejection probability model was sufficient to demonstrate
trends in the final failure distributions.
The fatigue stress state consists of two components, namely, the static or mean stress and the dynamic
or cyclic stress. Mean stress consists of the still-water bending stress and the residual stress built
into the joint due to the fabrication and welding process. Cyclic stresses result from relatively low-
frequency (less than 0.3 Hz) wave-induced bending and high-frequency motions (slamming or whip-
ping). Statistical representation of the former is routinely employed in naval architecture (Bishop and
Price, 1974). The cyclic stress peaks were assumed to follow a Rayleigh distribution due to the Gaussian
distribution of the sea elevation. The root-mean-square value of the distribution was estimated as a
function of the significant wave height. In the class of container ships under consideration, bending
stresses induced by slamming and whipping are significant because of their flared bows, and a statistical
distribution of cyclic stress due to slamming and whipping was estimated.
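Sampling such a stress-peak model is straightforward by inverse-CDF. In the sketch below, the mapping from significant wave height to stress RMS is a hypothetical linear coefficient, not the actual transfer function of the study:

```python
import math
import random

def rayleigh_peaks(sigma_rms, n, rng):
    """Draw n cyclic-stress peaks from a Rayleigh distribution with
    scale (RMS) parameter sigma_rms, by inverting the Rayleigh CDF."""
    return [sigma_rms * math.sqrt(-2.0 * math.log(1.0 - rng.random()))
            for _ in range(n)]

def stress_rms_from_wave_height(hs_m, c_mpa_per_m=4.0):
    """Hypothetical linear mapping from significant wave height (m) to the
    RMS of wave-induced bending stress (MPa); c is a placeholder constant."""
    return c_mpa_per_m * hs_m

rng = random.Random(7)
peaks = rayleigh_peaks(stress_rms_from_wave_height(6.0), 10000, rng)
mean_peak = sum(peaks) / len(peaks)   # theory: sigma_rms * sqrt(pi/2)
```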
The objective of the analysis was to provide data to predict the relative improvement in reliability
for various combinations of inspection/rejection crack sizes, inspection plans, and residual stresses.
Figure 6-16 presents failure probability results for 20 years of operation with varying inspection
schedules and rejection crack sizes. The results consistently show that in-service inspection (lSI) is
beneficial, with at least an initial inspection being especially effective. For an inspection/rejection crack
size of 1 mm, just one initial inspection reduces the predicted number of failures in 20 years by one
order of magnitude. The influence of periodic inspections can further reduce the predicted failures by
up to another two orders of magnitude.
The influence of various inspection plans is pronounced at inspection/rejection crack sizes greater
than 1 mm. As shown in Fig. 6-16, for larger crack sizes there is a great benefit to multiple repetitive
inspections throughout the life of the ship, whereas there is less difference between various other
schemes in which inspections cease after several years. The results of Fig. 6-16 are for an inspection
reliability of 95%. Inspection reliability means that all of the cracks with depths less than the inspection/
rejection crack size will remain unrepaired; 95% of the cracks with depth equal to or greater than the
inspection/rejection crack size will be repaired.
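In simulation form, this inspection-reliability rule reduces to a few lines. The repair-quality depth below is a placeholder assumption; the published model's treatment of repaired cracks is not reproduced here:

```python
import random

def apply_inspection(crack_depths_mm, rejection_size_mm,
                     reliability=0.95, repaired_depth_mm=0.1, rng=None):
    """Post-inspection crack population: depths below the inspection/rejection
    size always remain; depths at or above it are repaired (reset to a small
    repair-quality depth) with probability equal to the inspection reliability."""
    rng = rng or random.Random(0)
    out = []
    for a in crack_depths_mm:
        if a >= rejection_size_mm and rng.random() < reliability:
            out.append(repaired_depth_mm)   # detected and repaired
        else:
            out.append(a)                   # below rejection size, or missed
    return out

cracks = [0.3, 0.8, 1.5, 2.2, 4.0]
after = apply_inspection(cracks, rejection_size_mm=1.0)
```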
Results of PFM analyses showed that butt joint life is greatly increased by frequent in-service in-
spections, especially early in the life of the ship. Factors that improve butt weld life are, in order of
importance, reduction of allowable weld defect size, improved repair weld quality, periodic inspections,
and reduction of residual stresses.
On the basis of the results of the PFM analyses, inspection schedules and inspection/rejection crack
sizes were recommended for the most effective technical and economical means to increase the relia-
bility of the butt joints of the fleet, without penalties to the structural integrity of the ships. The number
of weld failures has been observed to be lower than predicted. This is attributed to conservatism in the
analysis. However, relative ranking and order of magnitude of improvements is representative of actual
experience. Repaired ships have been in service for more than 2 years, without any reports of doubler
cracking.
The BLESS code provides another example of the use of PFM for estimating the reliability of
pressurized components. BLESS is an acronym for boiler evaluation and simulation system, and is a
computer code for life estimation of headers and piping in fossil-fired power plants (Grunloh et al.,
1992). The code can treat piping and ligaments in headers, such as shown schematically in Fig. 6-17.
Deterministic or probabilistic lifetime estimations can be made. Figure 6-18 shows a representative
header cross-section, which is considered as an example; attention is concentrated on ligament GH,
with the dimensions as shown in Fig. 6-18. The operating history is considered to consist of
successive transients of nominally 1000-hr duration, which are composed of heat-up, steady operation,
and cool-down. Table 6-5 summarizes the pressure, temperature, and flow rate history of the transient.
A 50°F temperature difference between the header and tube steam is treated.
The BLESS code has the capability of estimating the stress and temperature field within the ligament,
including the effects of plasticity and creep. Using the stress-temperature-time history, creep and
fatigue damage are calculated using Larson-Miller creep damage based on Robinson's rule and fatigue
damage based on Miner's rule. A probabilistic treatment is employed that allows the probability of
crack initiation to be estimated as a function of time by use of Monte Carlo simulation. Crack initiation
due to oxide notching can also be considered. Once a crack has initiated, its growth is treated, also in
a probabilistic manner, using the CT(ave) approach of Section 3.1. A probabilistic treatment of crack
growth is used, based on data such as are included in Figs. 6-4, 6-5, 6-8, and 6-9. In fact, these figures
500~--~----~--~----'---~----'---~~---r----~---'
No Inspection
400
Initial inspection only
300
Inspections:
0, 0.5, 1, 1.5, 2 years
200
100
o
o 2
Figure 6-16. Predicted number of failures in 20 years for various inspection schedules and rejection crack depths.
(Source: Sire et al. [1992]. Reprinted with permission.)
are the default properties for 2¼Cr-1Mo base metal and weld metal used in the BLESS code. Failure
is considered to occur when the crack depth reaches a user-specified fraction of the ligament width.
Figure 6-19 presents the cumulative ligament failure probability as a function of time for the
example problem. The default material properties in BLESS for 2¼Cr-1Mo base metal were employed.
The stress and temperature were considered to be normally distributed with means equal to the deter-
ministically defined values and coefficients of variation (standard deviation divided by mean) of 0.05
and 0.01, respectively. The failure probability is below 10⁻³ for times less than about 17 years and
reaches 10⁻¹ at about 35 years. This result is for a single ligament. If such results for all ligaments in
the header were combined, the overall probability of ligament failure somewhere in the header would
be much higher.
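The probabilistic treatment just described can be mimicked in outline. Everything numerical below (the Larson-Miller fit, the fatigue curve, and the mean operating conditions) is an invented placeholder; only the structure, normally distributed stress and temperature with coefficients of variation 0.05 and 0.01 and creep-plus-fatigue damage summed per nominal 1000-hr transient, follows the example:

```python
import math
import random

def transient_damage(stress_ksi, temp_R, hours=1000.0, cycles=1):
    """Creep plus fatigue damage for one nominal 1000-hr transient.
    Creep: Robinson time fraction hours/t_rupture, with rupture life from an
    assumed Larson-Miller fit T*(20 + log10 t_r) = 42000 - 5000*log10(S).
    Fatigue: Miner cycle fraction from an assumed power-law fatigue curve."""
    lmp = 42000.0 - 5000.0 * math.log10(stress_ksi)   # hypothetical fit
    t_rupture = 10.0 ** (lmp / temp_R - 20.0)
    n_allow = 1.0e12 * stress_ksi ** -4.0             # hypothetical curve
    return hours / t_rupture + cycles / n_allow

def life_hours(rng, mean_stress=10.0, mean_temp=1460.0):
    """One Monte Carlo trial: sample stress (COV 0.05) and temperature
    (COV 0.01), then find the hours at which accumulated damage reaches 1."""
    stress = rng.gauss(mean_stress, 0.05 * mean_stress)
    temp = rng.gauss(mean_temp, 0.01 * mean_temp)     # absolute temp, Rankine
    return 1000.0 / transient_damage(stress, temp)

rng = random.Random(3)
lives = sorted(life_hours(rng) for _ in range(2000))
p_fail_20yr = sum(1 for h in lives if h < 20.0 * 8766.0) / len(lives)
```

Sorting the sampled lives gives the cumulative failure probability curve directly, the analogue of Fig. 6-19.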
The BLESS results shown in Fig. 6-19 were generated on a 486, 50-MHz personal computer with
5000 Monte Carlo trials. About 1 min of computer time was required per trial. Hence, about 3.5 days
of computer time was required; the run was performed over a weekend. However, failure probabilities
smaller than the approximately 10⁻⁴ value in the example are usually of interest, in which case more
trials would be necessary. This results in even more computer time being required.
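The trial-count burden follows from the binomial standard error of a simulated probability: resolving p to a relative half-width eps at normal confidence factor z takes about z²(1-p)/(p·eps²) trials. A quick sketch of what that implies at the 1 min/trial rate quoted above:

```python
import math

def trials_needed(p, rel_err=0.1, z=1.96):
    """Monte Carlo trials so the 95% confidence half-width of an estimated
    probability p equals rel_err * p, using the binomial variance p(1-p)/N."""
    return math.ceil(z * z * (1.0 - p) / (p * rel_err * rel_err))

n = trials_needed(1e-4)                  # resolve p = 1e-4 to +/-10%
cpu_years = n * 1.0 / (60 * 24 * 365)    # at one minute per trial
```

Roughly 3.8 million trials, i.e., several years at one minute per trial, which illustrates why direct simulation becomes impractical at small failure probabilities.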
The failure probability as a function of time provides more information than a single estimated
lifetime, based on best-estimate or worst-case conditions. The lifetime for the example problem based
on median properties is 65.5 years, compared to a median lifetime of 67 years from the probabilistic
results. This shows how a best estimate may provide unfounded optimism, because the failure probability is uncomfortably high by the time the best-estimate lifetime is reached. Alternatively, a worst-case analysis stacks conservatism on conservatism and may be overly pessimistic.
Figure 6-17. Schematic representation of a header, with illustration of tubes and ligaments. (Source: Grunloh, H. J., et al. [1992]. An Integrated Approach to a Life Assessment of Boiler Pressure Parts, Vol. 4: BLESS Code User's Manual and Life Assessment Guidelines. Report on Project RP2253-10. R. Viswanathan, EPRI Project Manager. Palo Alto, California: Electric Power Research Institute. Copyright © 1992. Electric Power Research Institute. Reprinted with permission.)
Figure 6-18. Schematic of header cross-section for example problem. (Source: Grunloh, H. J., et al. [1992]. An Integrated Approach to a Life Assessment of Boiler Pressure Parts, Vol. 4: BLESS Code User's Manual and Life Assessment Guidelines. Report on Project RP2253-10. R. Viswanathan, EPRI Project Manager. Palo Alto, California: Electric Power Research Institute. Copyright © 1992. Electric Power Research Institute. Reprinted with permission.)
Table 6-5. Summary of History for Single Transient Type Considered to Occur Repetitively in Header Ligament Example Problem

Time (hr) | Pressure (psi) | Header flow rate (thousands of lbs/hr) | Tube flow rate (thousands of lbs/hr) | Header steam temperature (°F) | Tube steam temperature (°F)
The use of a probabilistic analysis provides a means of incorporating uncertainty and material scatter
into results, with run/retire/replace decisions then based on an acceptable level of reliability. This, of
course, raises the question of acceptable level of reliability, but perspective can be gained by compar-
isons with past experiences with similar or related components.
The BLESS code results, as well as those for the ship deck doubler plates and other examples cited,
demonstrate the usefulness of PFM in addressing questions concerning the reliable operation of struc-
tures and components.
8. FUTURE TRENDS
Future trends in PFM are expected to include increased usage to provide results for inclusion in decision
tree and fault tree analyses of system reliability. This includes generation of results for various candidate
inspection strategies that are used in cost/risk optimization of inspections. As a part of this development,
it is expected that PFM codes will evolve to provide the more detailed information often needed when
the output from such codes is to be used as inputs to decision trees. For instance, the PRAISE code
provides failure probabilities for a given inspection history, but does not separate the failure probabilities
of pipes with detected cracks from pipes without detected cracks. The latter information is desirable for
decision tree analysis of benefit and costs for various candidate inspection schedules.
A key input to such analyses, as for any PFM analysis, is the initial crack size distribution. As
discussed elsewhere in this chapter, crack size information is sparse and usually expensive to generate.
To overcome the difficulty in estimating initial crack size distributions, a credible technique for obtaining
[Figure: log-scale plot of cumulative failure probability versus time in years.]
Figure 6-19. Cumulative ligament failure probability as a function of time for example problem.
such estimates is highly desirable. Future progress in this area is crucial, and future trends point to at
least two alternatives. One is to calculate flaw sizes and frequencies in weldments knowing the weld
lay-up procedure and inspections performed between weld passes. Progress has been made in this area
(Chapman, 1992) for the case of welds in ferritic steels, although the work has yet to receive much
attention. Models have been developed and benchmarked against the results of field inspections. Results
to date are reported to be encouraging, and further advancement along these lines would be useful in
a wide variety of industries.
A second alternative to estimating initial flaw size distributions and frequencies that holds promise
for the future is to base such estimates on the results of inspections. This requires a well-characterized
inspection system, both in its detection probability [POD(a)] and inspection uncertainty [p(â|a)]. Foulds
et al. (1992) and Ammirato et al. (1992) provide examples of efforts in this area, which hold promise
for the relatively straightforward development of crack size distributions for a wide variety of material
and welding processes.
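The core of such an estimate is a simple correction: the expected number of true flaws in a size bin is the detected count divided by the probability of detection at that size. The POD curve shape and the counts below are hypothetical:

```python
def pod(a, a50=1.0, beta=4.0):
    """Assumed log-logistic probability-of-detection curve POD(a);
    a50 is the depth detected half the time (placeholder parameters)."""
    return 1.0 / (1.0 + (a50 / a) ** beta)

def true_flaw_histogram(detected_counts, bin_centers_mm):
    """Correct a histogram of detected flaw sizes for imperfect detection:
    the expected true count in a bin is detected / POD at that size."""
    return [n / pod(a) for n, a in zip(detected_counts, bin_centers_mm)]

detected = [5, 12, 9, 3]          # detected flaws per depth bin
centers = [0.5, 1.0, 2.0, 4.0]    # bin-center depths, mm
true_counts = true_flaw_histogram(detected, centers)
```

Note how the correction inflates the shallow-flaw bins most: poor detection of small flaws is exactly where the inferred distribution is least certain, and where the sizing uncertainty ignored in this sketch matters most.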
An area closely related to inspection is repair. Past analyses have generally assumed that detected
cracks are repaired, without consideration of sizing inaccuracies and the potential for improper dispo-
sition of detected cracks. Such considerations will probably be more realistically treated in the future
as PFM continues to progress. An additional factor is the effect of repair, which mayor may not result
in a part as good as new. Repairs may even introduce additional damage, and, with a finite probability,
result in a less reliable part than was the case before repair was attempted.
In critical applications, such as nuclear reactor pressure vessels, it may be desirable to include effects
on fracture toughness distribution beyond those accounted for by combining all data, such as was done
for the results shown in Fig. 6-10. "Fine tuning" of the fracture toughness distribution, based on refined
knowledge of important variables, may allow a better (and tighter) estimate of the random variables to
be made. In the case of commercial power reactors, refinements of the fracture toughness distribution P(KIc) may be possible knowing
details of nickel content, sulfur content, irradiation levels, etc. Such knowledge would allow refined
estimates of pressure vessel reliability to be made, based on improved estimates of the statistical dis-
tribution of fracture toughness.
The use of prediction intervals (Weiss, 1989) (rather than confidence intervals on the mean) holds
promise for future development of random variables that include both uncertainty due to sparse data
and inherent randomness of the data. For example, the data analysis shown in Figs. 6-4 and 6-8 for
fatigue crack growth could be placed on a firmer statistical footing, which would also quantify uncer-
tainty due to limited data, by use of prediction intervals rather than the procedure described in Section
3.2.1, which does not consider the sample population used to generate the parameters of the distribution.
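For a normal sample of size n the contrast is direct: the confidence interval on the mean has half-width t·s/sqrt(n), whereas the prediction interval for the next observation has half-width t·s·sqrt(1 + 1/n), which stays wide no matter how well the mean is pinned down. A sketch with hypothetical log crack-growth-rate data:

```python
import math
from statistics import mean, stdev

def prediction_and_confidence(sample, t_crit):
    """Two-sided prediction interval for the next observation and confidence
    interval on the mean, for a normal sample with Student-t critical value."""
    n = len(sample)
    xbar, s = mean(sample), stdev(sample)
    half_pi = t_crit * s * math.sqrt(1.0 + 1.0 / n)   # prediction interval
    half_ci = t_crit * s / math.sqrt(n)               # confidence interval
    return (xbar - half_pi, xbar + half_pi), (xbar - half_ci, xbar + half_ci)

# Hypothetical log10(da/dN) values from ten crack-growth tests.
log_rates = [-7.92, -8.05, -7.88, -8.11, -7.95,
             -8.02, -7.90, -8.08, -7.99, -7.96]
t_95_df9 = 2.262   # Student t, two-sided 95%, 9 degrees of freedom
pi, ci = prediction_and_confidence(log_rates, t_95_df9)
```

For n = 10 the prediction interval is sqrt(11), about 3.3, times wider than the confidence interval, which is the point of the paragraph above: both the inherent scatter and the uncertainty due to limited data survive in the interval.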
REFERENCES

American Society of Mechanical Engineers (ASME) (1992). Reliability Technology-1992. AD-Vol. 28. New York: American Society of Mechanical Engineers.
American Society for Testing and Materials (ASTM) (1992). Small Crack Test Methods. ASTM STP 1149. Philadelphia: American Society for Testing and Materials.
AMMIRATO, F. V., G. H. WILSON, C. H. WELLS, D. HARRIS, D. JOHNSON, A. WARNOCK, R. ROBERTS, and B. SOMERS (1988). Life Assessment Methodology for Turbo-Generator Rotors, Vol. 1: Improvements to the SAFER Code Rotor Lifetime Prediction Software. Report No. CS-5593. Palo Alto, California: Electric Power Research Institute.
AMMIRATO, F., L. BECKER, J. LANCE, V. DIMITRIJEVIC, and S. N. LIN (1992). Flaw distribution and use of ISI data in RPV integrated evaluations. In: Proceedings of the 11th International Conference on NDE in the Nuclear and Pressure Vessel Industries. Metals Park, Ohio: ASM International, pp. 111-119.
ANDERSON, T. L. (1991). Fracture Mechanics: Fundamentals and Applications. Boca Raton, Florida: CRC Press.
ANG, A. H.-S., and W. H. TANG (1975). Probability Concepts in Engineering Planning and Design, Vol. I: Basic Principles. New York: John Wiley & Sons.
ANG, A. H.-S., and W. H. TANG (1984). Probability Concepts in Engineering Planning and Design, Vol. II: Decision, Risk, and Reliability. New York: John Wiley & Sons.
BAO, G., S. HO, Z. SUO, and B. FAN (1992). The role of material orthotropy in fracture specimens for composites. International Journal of Solids and Structures 29(9):1105-1116.
BERENS, A. P. (1989). NDE reliability data analysis. In: Metals Handbook, Vol. 17: Nondestructive Evaluation and Quality Control, 9th ed. S. R. Lampman, Tech. Ed. Metals Park, Ohio: ASM International, pp. 689-701.
BISHOP, R. E. D., and W. G. PRICE (1974). Probabilistic Theory of Ship Dynamics. London: Chapman and Hall.
BLOOM, J. M. (1984). Probabilistic fracture mechanics-a state-of-the-art review. In: Advances in Probabilistic Fracture Mechanics. C. (Raj) Sundararajan, Ed. New York: American Society of Mechanical Engineers, pp. 1-19.
BROEK, D. (1982). Elementary Engineering Fracture Mechanics, 3rd ed. The Hague, The Netherlands: Martinus Nijhoff.
BRUCKNER, A. (1987). Numerical methods in probabilistic fracture mechanics. In: Probabilistic Fracture Mechanics and Reliability. J. W. Provan, Ed. Boston: Martinus Nijhoff, pp. 351-386.
BRUCKNER, A., and D. MUNZ (1984). A statistical model of crack formation in welds. Engineering Fracture Mechanics 19(2):287-294.
BRUCKNER, A., R. HABERER, D. MUNZ, and R. WEILEIN (1983). Reliability of the Steel Containment of a Nuclear Power Plant Using Probabilistic Fracture Mechanics. ASME Paper No. 83-PVP-86. New York: American Society of Mechanical Engineers.
CHAPMAN, O. J. V. (1983). A statistical approach to the analysis of ISI data using the Bayes methods. Paper D-ln. In: Proceedings of the 7th Structural Mechanics in Reactor Technology Conference. Stanley H. Fistedis, Ed. Amsterdam, The Netherlands: North-Holland.
CHAPMAN, O. J. V. (1992). Private communication. Derby, England: Rolls Royce and Associates.
CRUSE, T. A. (1975). Boundary-Integral Equation Method: Computational Application in Applied Mechanics. T. A. Cruse and F. J. Rizzo, Eds. New York: American Society of Mechanical Engineers.
CRUSE, T. A., and P. M. BESUNER (1975). Residual life prediction for surface cracks in complex structural details. Journal of Aircraft 12(4):369-375.
DEDHIA, D. D., and D. O. HARRIS (1988). PACIFIC: Probabilistic Analysis Code Including Fatigue Induced Cracking. Menlo Park, California: Failure Analysis Associates, Inc.
FORMAN, R. G., V. SHIVAKUMAR, J. C. NEWMAN, JR., S. M. PIOTROWSKI, and L. C. WILLIAMS (1988). Development of the NASA/FLAGRO computer program. In: Fracture Mechanics: Eighteenth Symposium. ASTM STP 945. D. T. Read and R. P. Reed, Eds. Philadelphia: American Society for Testing and Materials, pp. 781-803.
FOULDS, J. R., E. L. KENNEDY, S. BASIN, and S. T. ROSINSKI (1992). Flaw distribution development from vessel ISI data. In: Proceedings of the 11th International Conference on NDE in the Nuclear and Pressure Vessel Industries. Metals Park, Ohio: ASM International, pp. 101-118.
FUJIMOTO, Y., H. ITAGAKI, I. HIROSHI, S. ITOH, H. ASADA, and M. SHINOZUKA (1989). Bayesian reliability analysis of structures with multiple components. In: Proceedings of the 5th International Conference on Structural Reliability and Safety. A. H.-S. Ang, M. Shinozuka, and G. I. Schueller, Eds. New York: American Society of Civil Engineers, pp. 2143-2146.
GROSS, B., J. E. SRAWLEY, and W. F. BROWN, JR. (1964). Stress Intensity Factors for a Single-Edge-Notch Tension Specimen by Boundary Collocation of a Stress Function. Report No. TN D-2395. Washington, D.C.: National Aeronautics and Space Administration.
GRUNLOH, H. J., R. H. RYDER, A. GATTUSO, J. M. BLOOM, D. R. LEE, C. C. SCHULTZ, D. D. SUTHERLAND, D. O. HARRIS, and D. D. DEDHIA (1992). An Integrated Approach to Life Assessment of Boiler Pressure Parts, Vol. 4: BLESS Code User's Manual and Life Assessment Guidelines. Report on Project RP2253-10. Palo Alto, California: Electric Power Research Institute.
HAHN, G. J., and S. S. SHAPIRO (1967). Statistical Models in Engineering. New York: John Wiley & Sons.
HARRIS, D. O. (1985). Probabilistic fracture mechanics. In: Pressure Vessel and Piping Technology-A Decade of Progress. C. Sundararajan, Ed. New York: American Society of Mechanical Engineers, pp. 771-791.
HARRIS, D. O. (1992). Probabilistic fracture mechanics with application to inspection planning and design. In: Reliability Technology-1992. T. A. Cruse, Ed. New York: American Society of Mechanical Engineers, pp. 57-76.
HARRIS, D. O., and K. R. BALKEY (1993). Probabilistic considerations in the life extension and aging of pressure vessels and piping. In: Pressure Vessels and Piping Technology for the 90s. New York: American Society of Mechanical Engineers, pp. 245-269.
HARRIS, D. O., and E. Y. LIM (1982). Applications of a fracture mechanics model of structural reliability to the effects of seismic events on reactor piping. Progress in Nuclear Energy 10(1):125-159.
HARRIS, D. O., and E. Y. LIM (1983). Applications of a probabilistic fracture mechanics model to the influence of in-service inspection on structural reliability. In: Probabilistic Fracture Mechanics and Fatigue Methods: Applications for Structural Design and Maintenance. ASTM STP 798. J. M. Bloom and J. C. Ekvall, Eds. Philadelphia: American Society for Testing and Materials, pp. 19-41.
HARRIS, D. O., E. Y. LIM, and D. D. DEDHIA (1981). Probability of Pipe Fracture in the Primary Coolant Loop of a PWR Plant, Vol. 5: Probabilistic Fracture Mechanics Analysis. Report No. NUREG/CR-2189. Washington, D.C.: Nuclear Regulatory Commission.
HARRIS, D. O., C. J. BIANCA, E. D. EASON, L. D. SALTER, and J. M. THOMAS (1987). NASCRAC: A computer code for fracture mechanics analysis of crack growth. In: Proceedings of the 28th Structures, Structural Dynamics, and Materials Conference, Part I. Paper No. 87-0847. New York: American Institute of Aeronautics and Astronautics, pp. 662-667.
HARRIS, D. O., D. D. DEDHIA, and S. C. LU (1992). Theoretical and User's Manual for pc-PRAISE. Report No. NUREG/CR-5864. Washington, D.C.: Nuclear Regulatory Commission.
HSU, T. R. (1986). The Finite Element Method in Thermomechanics. Boston, Massachusetts: Allen & Unwin.
HUDAK, S. J., JR., R. C. MCCLUNG, M. L. BARTLETT, J. H. FITZGERALD, and D. A. RUSSELL (1990). A Comparison of Single-Cycle Versus Multiple-Cycle Proof Testing Strategies. Contractor Report No. 4318. Washington, D.C.: National Aeronautics and Space Administration.
JOHNSON, D. P. (1976). Inspection uncertainty: The key element in nondestructive testing. Materials Evaluation 34(6):121.
KANNINEN, M. F., and C. H. POPELAR (1985). Advanced Fracture Mechanics. New York: Oxford University Press.
KUMAR, V., M. D. GERMAN, and C. F. SHIH (1981). An Engineering Approach for Elastic-Plastic Fracture Analysis. Report No. NP-1931. Palo Alto, California: Electric Power Research Institute.
MADSEN, H. O., S. KRENK, and N. C. LIND (1986). Methods of Structural Safety. Englewood Cliffs, New Jersey: Prentice-Hall.
MARSHALL, W. (Ed.) (1976). An Assessment of the Integrity of PWR Vessels. Report of a Study Group chaired by W. Marshall. London: H. M. Stationery Office.
MELCHERS, R. E. (1987). Structural Reliability Analysis and Prediction. New York: Halsted Press.
METTU, S. R., and R. G. FORMAN (1994). Analysis of circumferential cracks in circular cylinders using the weight function method. In: Fracture Mechanics: Twenty-Third Symposium. ASTM STP 1189. R. Chona, Ed. Philadelphia: American Society for Testing and Materials (in press).
MILLWATER, H., Y. T. WU, T. TORNG, B. THACKER, D. RIHA, and C. P. LEUNG (1992). Recent developments of the NESSUS probabilistic structural analysis computer program. In: Proceedings of the 33rd AIAA/ASME/AHS/ASC Structures, Structural Dynamics and Materials Conference. Paper No. AIAA-92-2411. New York: American Institute of Aeronautics and Astronautics, pp. 614-624.
MURAKAMI, Y. (1987). Stress Intensity Factors Handbook. Oxford, England: Pergamon Press.
NEWMAN, J. C., JR., and I. S. RAJU (1983). Stress intensity factor equations for cracks in three-dimensional finite bodies. In: Fracture Mechanics: Fourteenth Symposium, Vol. I: Theory of Analysis. ASTM STP 791. J. C. Lewis and G. Sines, Eds. Philadelphia: American Society for Testing and Materials, pp. 238-265.
NILSSON, F. (1977). A model for fracture mechanics estimation of the failure probability of reactor pressure vessels. In: Proceedings of the Third International Conference on Pressure Vessel Technology, Part II: Materials and Fabrication. New York: American Society of Mechanical Engineers, pp. 593-601.
PROVAN, J. W., Ed. (1987). Probabilistic Fracture Mechanics and Reliability. Boston: Martinus Nijhoff.
QUINONES, D. F., W. L. SERVER, and B. F. BEAUDOIN (1988). DA/DN: A Computer Program for Pipe Fatigue Crack Growth. Report No. NP-5720. Palo Alto, California: Electric Power Research Institute.
RACKWITZ, R., and B. FIESSLER (1978). Structural reliability under combined random load sequences. Computers and Structures 9:489-497.
RICCARDELLA, P. C., S. S. TANG, G. J. LICINA, W. R. BROSE, T. P. SHERLOCK, J. STEIN, and L. NOTTINGHAM (1991). Development of a generator retaining ring life assessment code. Presented at the EPRI Steam Turbine and Generator NDE, Life Assessment, and Maintenance Workshop, Charlotte, North Carolina, July 1991. Palo Alto, California: Electric Power Research Institute.
RIEDEL, H. (1987). Fracture at High Temperatures. Berlin: Springer-Verlag.
RUMMEL, W. D., G. L. HARDY, and T. D. COOPER (1989). Applications of NDE reliability to systems. In: Metals Handbook, Vol. 17: Nondestructive Evaluation and Quality Control, 9th ed. S. R. Lampman, Tech. Ed. Metals Park, Ohio: ASM International, pp. 674-688.
SAXENA, A. (1986). Creep crack growth under non-steady-state conditions. In: Fracture Mechanics, Vol. 17. ASTM STP 905. J. H. Underwood, R. Chait, C. W. Smith, D. P. Wilhem, W. A. Andrews, and J. C. Newman, Eds. Philadelphia: American Society for Testing and Materials, pp. 185-201.
SHINOZUKA, M. (1983). Basic analysis of structural safety. ASCE Journal of Structural Engineering 109(3):721-740.
SIMONEN, F. A., K. I. JOHNSON, A. M. LIEBETRAU, D. W. ENGEL, and E. P. SIMONEN (1986). VISA-II: A Computer Code for Predicting the Probability of Reactor Pressure Vessel Failure. NUREG/CR-4486. Washington, D.C.: Nuclear Regulatory Commission.
SIRE, R. A., and D. O. HARRIS (1992). Probabilistic fracture mechanics modelling of microelectronic component reliability. In: Advances in Electronics Packaging 1992, Vol. 2. W. T. Chen and H. Abe, Eds. New York: American Society of Mechanical Engineers, pp. 991-997.
SIRE, R. A., J. E. KOKARAKIS, C. H. WELLS, and R. K. TAYLOR (1992). A probabilistic structure life prediction system for container ship repairs and inspections. International Journal of Pressure Vessels and Piping 50:297-315.
SOBCZYK, K., and B. F. SPENCER, JR. (1991). Random Fatigue: From Data to Theory. New York: Academic Press.
TADA, H., P. C. PARIS, and G. R. IRWIN (1985). The Stress Analysis of Cracks Handbook, 2nd ed. St. Louis, Missouri: Paris Productions.
TANAKA, H. (1989). Stochastic properties of semi-elliptical surface cracks based on Newman-Raju's K-expression. Engineering Fracture Mechanics 34(1):189-200.
TSURUI, A., J. NIENSTEDT, G. I. SCHUELLER, and H. TANAKA (1989). Time variant structural reliability analysis using diffusive crack growth models. Engineering Fracture Mechanics 34(1):153-167.
VISWANATHAN, R. (1989). Damage Mechanisms and Life Assessment of High-Temperature Components. Metals Park, Ohio: ASM International.
WANG, Y.-Y., D. M. PARKS, W. R. LLOYD, W. G. REUTER, and J. EPSTEIN (1991). Elastic-plastic deformation in surface-cracked pipes: Experimental and numerical analysis. Journal of Applied Mechanics 58:895-903.
WEISS, N. A. (1989). Elementary Statistics. Reading, Massachusetts: Addison-Wesley.
WU, Y.-T., O. H. BURNSIDE, and J. DOMINGUEZ (1987). Efficient probabilistic fracture mechanics analysis. In: Proceedings of the Fourth International Conference on Numerical Methods in Fracture Mechanics. A. R. Luxmoore, D. R. J. Owen, Y. P. S. Rajapakse, and M. F. Kanninen, Eds. Swansea, U.K.: Pineridge Press, pp. 85-100.
YAGAWA, G., H. UEDA, and Y. TAKAHASHI (1986). Numerical and experimental study of ductile fracture of plate with surface crack. In: Fatigue and Fracture Assessment by Analysis and Testing. S. K. Bhandari, S. Y. Zamrick, and M. K. Au-Yang, Eds. New York: American Society of Mechanical Engineers, pp. 43-48.
YOON, K. B. (1990). Characterization of Creep Fatigue Crack Growth Using the Ct Parameter. Ph.D. Thesis. Atlanta, Georgia: Georgia Institute of Technology.
ZAHOOR, A. (1989). Ductile Fracture Handbook. Report No. NP-6301-D. Palo Alto, California: Electric Power Research Institute.
7
PROBABILISTIC FATIGUE
ANALYSIS
P. H. WIRSCHING
1. INTRODUCTION
Fatigue is one of the most important failure modes to be considered in mechanical and structural design.
It has been stated that fatigue accounts for more than 80% of all observed service failures in mechanical
and structural systems. Moreover, fatigue and fracture failures are often catastrophic; they may come
without warning and may cause significant property damage, as well as loss of life. Many cases of
critical component fractures are observed in applications in which failures previously had not been
encountered.
Available information indicates that many fatigue failures result from poor details. Moreover, the
increased use of high-strength materials, the fatigue strength and fracture toughness of which are not
commensurate with the increased static strength, has resulted in more fatigue and fracture failures in
recent years. Furthermore, many structures are expected to perform in increasingly severe environments,
such as the ocean, where various combined environmental and cyclic loading conditions exist. In ad-
dition, costs associated with constructing and testing large-scale models to validate engineering analyses
are rapidly increasing.
Many sources of uncertainty in the fatigue analysis process exist. They include the following, to
mention a few.
1. The fatigue phenomenon is unpredictable, as evidenced by enormous statistical scatter in laboratory data,
with cycles-to-failure data having coefficients of variation (COV) typically ranging from 30 to 40% and
sometimes as high as 150%.
2. Extrapolation of laboratory data to engineered systems often requires many assumptions.
3. Geometry of the component, for example, defects and discontinuities in welded joints, complicates the
prediction of initiation and propagation of fatigue cracks.
4. Environmental processes that produce fatigue loading on systems may not be well defined. There is likely
to be significant uncertainty associated with models of the environment.
5. The dynamic force on a system produced by a given environment may not be accurately known.
6. The oscillatory stress causing fatigue at a detail, produced by a force on the system, contains uncertainty
in the stress analysis procedures.
7. Effects of temperature, corrosion, and so on, on fatigue strength are not well known.
Facing these uncertainties, engineers must make decisions regarding the integrity of components with
respect to fatigue. Therefore, a probabilistic and statistical approach utilizing developments in proba-
bilistic design theory seems particularly relevant.
2.1. Notations
2.2. Abbreviations
Two important phases in the fatigue process are (1) crack initiation and (2) crack propagation or sub-
critical crack growth. Depending on the nature of the structure and the service loads applied to it, either
crack initiation or subcritical crack growth, or both, may be important in assessing structural
performance.
For engineering applications, crack initiation refers to the formation of cracks that are easily detect-
able with the use of available nondestructive evaluation techniques, rather than to the beginning of
microstructural cracking. The crack initiation period thus defined may consume a substantial percentage
of the usable fatigue life in high-cycle fatigue problems, in which the oscillatory stress is relatively low.
On the other hand, when stress fluctuations are high or when cracks, notches, and other stress raisers
are present, fatigue cracks appear quite early and a significant portion of the service life of the com-
ponent may be spent in propagating the crack to a critical size. The two phases are of equal importance,
order of magnitude-wise, in low-cycle fatigue (total life less than 100,000 cycles). In welds and certain
other structural details, in which some defects are practically unavoidable because of the fabrication
process, crack propagation may begin with virtually the first load application.
The first step in fatigue reliability analysis is to define the engineering models. Theory and application
of the fatigue/fracture equations are now well documented in a wide variety of excellent references,
including those by Nelson (1978), Dowling (1979), Gurney (1979), Schijve (1979), Fuchs and Stephens
(1980), Collins (1981), Broek (1984), Rolfe and Barsom (1987), and Hertzberg (1989), as well as
numerous other works published by the American Society for Testing and Materials (ASTM) and
Society of Automotive Engineers (SAE).
Crack growth relations rely on the use of the change in the stress intensity factor as a function of
the crack growth rate, da/dn, where a is the crack depth and n is the number of cycles. Undoubtedly,
the most popular crack growth law is that of Paris (1964), but other models have been proposed and
studied (Miller and Gallagher, 1981; Ortiz et al., 1988). Of particular interest is the behavior of small
cracks at low stress levels because, in reality, structures experience many cycles at these conditions.
Reports by Leis et al. (1983) and Burnside et al. (1984) provide a wealth of information. Studies by
Hudak (1981) suggest that the use of an initiation model coupled with a Paris propagation model having
no threshold will produce accurate life predictions in real structures.
Figure 7-1. Obtaining engineering data to establish the fatigue strength of a material. (Schematic: constant-amplitude tests on smooth specimens; log stress amplitude, or stress range, S versus log cycles to failure, N, showing the distribution of fatigue life at a given stress, the distribution of fatigue strength at a given cycle life, and the endurance limit for steels.)
Because of the scatter in the S-N data, the events of failure are uncertain. Therefore, reliability methods are appropriate for
fatigue analysis and design.
In many cases, the S-N data exhibit a linear trend in log-log space, implying a model of the form

NS^m = A    (7-1)

where m and A are empirical constants that must be determined from the data using least-squares analysis. This form typically provides a good model for (1) welded joint data, (2) the low- or high-cycle component of the general strain-life model, and (3) fatigue strength as defined by the fracture mechanics model.
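A minimal sketch of this least-squares fit (pure Python, with hypothetical data generated noise-free from assumed values m = 3 and A = 10^12, so the fit recovers them exactly):

```python
import math

def fit_sn_curve(S, N):
    """Least-squares fit of N * S**m = A (Eq. 7-1) in log-log space.

    Taking logs: log N = log A - m log S, a straight line with
    slope -m and intercept log A.
    """
    x = [math.log10(s) for s in S]
    y = [math.log10(n) for n in N]
    k = len(x)
    xbar = sum(x) / k
    ybar = sum(y) / k
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx
    m = -slope                    # slope of log N versus log S is -m
    log_A = ybar - slope * xbar   # intercept is log A
    return m, 10.0 ** log_A

# Hypothetical, noise-free data generated from m = 3, A = 1.0e12.
S_data = [100.0, 150.0, 200.0, 300.0, 400.0]
N_data = [1.0e12 / s ** 3 for s in S_data]
m_fit, A_fit = fit_sn_curve(S_data, N_data)
```

With real, scattered test data the same regression gives the least-squares estimates of m and A discussed in the text.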
Smooth specimen, constant-amplitude fatigue tests on steel have shown an "endurance limit," SE, a
value of stress below which fatigue will not occur. Some models, particularly those used in mechanical
engineering practice (Shigley, 1977), use an endurance limit as shown in Fig. 7-1, with the knee at 106
cycles. Sometimes the model consists of more than one straight line segments having shallower slopes
out in the high-cycle range.
More generally, the S-N relationship, even when plotted in log-log space, is nonlinear. A wide
variety of empirical forms are employed. One example is
(7-2)
The parameters n, a_0, a_1, a_2, and γ are determined from the data. Another model is the general strain-life relationship described below.
ε_a = (σ'_f/E)(2N)^b + ε'_f(2N)^c    (7-3)

where ε_a is the strain amplitude (specimens are typically strain cycled with a prespecified strain range), E is the modulus of elasticity, σ'_f is the fatigue strength coefficient, b is the fatigue strength exponent, ε'_f is the fatigue ductility coefficient, and c is the fatigue ductility exponent.
The general strain-life model is shown in Fig. 7-2. The stable hysteresis loop defines the stress and strain that are applied to the specimen; Δε is the total strain range (fixed in the test), Δε_p is the plastic strain range, and Δε_e is the elastic strain range. In the case where a mean stress, S_o, is present, the term σ'_f can be replaced by (σ'_f − S_o). The empirical constants have been tabulated for a wide variety of materials (e.g., in the Fatigue Design Handbook; Society of Automotive Engineers, 1988).
The first term on the right-hand side, equal to the elastic strain range, dominates the expression in
the high-cycle range. The second term, equal to the plastic strain range, dominates the expression in
the high-strain low-cycle region.
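The strain-life model can be exercised numerically; the sketch below solves it for N by bisection, using hypothetical material constants (placeholders, not values from this chapter):

```python
import math

# Hypothetical strain-life constants: modulus E in MPa, fatigue strength
# coefficient SF in MPa, exponents B_EXP and C_EXP, fatigue ductility
# coefficient EF.  Illustrative placeholders only.
E, SF, B_EXP, EF, C_EXP = 206800.0, 900.0, -0.09, 0.5, -0.6

def strain_amplitude(N):
    """General strain-life model: e_a = (SF/E)(2N)^b + EF(2N)^c.
    The first (elastic) term dominates at high N, the second
    (plastic) term at low N."""
    return (SF / E) * (2.0 * N) ** B_EXP + EF * (2.0 * N) ** C_EXP

def life_for_strain(ea, lo=1.0, hi=1.0e9):
    """Solve strain_amplitude(N) = ea by bisection in log space;
    the model is monotonically decreasing in N."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if strain_amplitude(mid) > ea:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

N_life = life_for_strain(0.002)   # cycles to failure at e_a = 0.002
```

Because both exponents are negative, the strain amplitude decreases monotonically with N and the root is unique, which is why simple bisection suffices.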
Local strain analysis is the method used to predict fatigue life when an oscillatory random load is
applied (Fig. 7-2) and there is cyclic plasticity at a notch or point of stress concentration. This method
of life prediction is described by Dowling et al. (1977) and Dowling (1979), and in the Fatigue Design
Handbook (Society of Automotive Engineers, 1988).
Figure 7-2. The general strain-life model and local strain analysis. (Schematic: external loading P(t), cyclic stress-strain response, and the strain-life curve of strain amplitude versus cycles to failure, N.)
The stress intensity factor range is

ΔK = Y(a)S√(πa)    (7-4)
in which S is the applied stress range; Y(a) is a finite geometry correction factor, which may depend
on a; and a is the crack depth for a surface flaw or half-width for a penetration flaw. The geometry
factor depends on the crack size, structural geometry, and applied far-field stress. The process is illus-
trated in Fig. 7-3.
It has been found from experimental data that the crack growth rate, da/dn, and the stress intensity factor range, ΔK, can be modeled as shown in Fig. 7-3. The central region is governed by the Paris law (Paris, 1964),

da/dn = C(ΔK)^m    (7-5)
in which C and m are empirical constants. These depend on such factors as the mean cycling stress,
the test environment, and the cycling frequency. Data are provided, for example (for materials for flight
vehicles), in the Damage Tolerant Design Handbook (U.S. Air Force Material Laboratory, 1988).
A convenient equation for cycles to failure, N, can be derived by integrating Eq. (7-5) from an initial crack length, a_0, to a critical crack length, a_c (at which the crack growth becomes unstable), and for n from 0 to N. Assuming that ΔK_th = 0 (where ΔK_th is the threshold stress intensity factor range at which the crack starts to grow) and that Y is independent of a, the result of the integration of Eq. (7-5) is, for m ≠ 2,

N = (a_0^(1 − m/2) − a_c^(1 − m/2)) / [(m/2 − 1)C(YS√π)^m]    (7-6)
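As a sanity check on this closed-form integration of the Paris law (valid when Y is constant and m ≠ 2), one can integrate Eq. (7-5) numerically and compare; the crack sizes and Paris constants below are illustrative placeholders only:

```python
import math

def paris_life_closed(a0, ac, C, m, Y, S):
    """Closed-form N from integrating da/dn = C*(Y*S*sqrt(pi*a))**m
    between a0 and ac, assuming Y constant and m != 2."""
    coeff = (m / 2.0 - 1.0) * C * (Y * S * math.sqrt(math.pi)) ** m
    return (a0 ** (1.0 - m / 2.0) - ac ** (1.0 - m / 2.0)) / coeff

def paris_life_numeric(a0, ac, C, m, Y, S, steps=200000):
    """Trapezoidal integration of dn = da / [C * (dK)**m] as a cross-check."""
    def growth_rate(a):
        dK = Y * S * math.sqrt(math.pi * a)   # Eq. (7-4), constant Y
        return C * dK ** m
    h = (ac - a0) / steps
    total = 0.0
    for i in range(steps):
        a_left = a0 + i * h
        a_right = a_left + h
        total += 0.5 * h * (1.0 / growth_rate(a_left) + 1.0 / growth_rate(a_right))
    return total

# Illustrative values only: crack sizes in meters, stress range in MPa.
N_closed = paris_life_closed(0.001, 0.02, C=1.0e-11, m=3.0, Y=1.12, S=100.0)
N_numeric = paris_life_numeric(0.001, 0.02, C=1.0e-11, m=3.0, Y=1.12, S=100.0)
```

The two estimates agree closely because the integrand a^(−m/2) is smooth over [a_0, a_c]; when Y varies with a, the numerical route is the one that generalizes.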
In a typical fatigue test, the stress level S (independent variable) is chosen and the cycles to failure, N,
is observed. Generally, given K specimens available for testing, various levels of stress are chosen in
order to construct an S-N relationship (e.g., Fig. 7-1). The analytical problem is to translate the data
set (S_i, N_i), i = 1, ..., K, into a statistical summary or synthesis that can be used for design purposes. A
least-squares or maximum likelihood analysis can be employed to obtain estimates of the parameters
of the models chosen.
As suggested in Fig. 7-1, there is considerable scatter in fatigue test data. Probabilistic analysis is
used to manage this uncertainty, and thus cycles to failure, N, at any stress level, S, can be considered
to be a random variable. Similarly, stress S given N could be modeled as a random variable.
For design purposes, three different approaches can be taken: (1) the least-squares curve is used as
the design curve. A suitable factor of safety is employed to ensure a safe design; (2) a design curve is
defined on the safe side of the data, for example, a lower bound curve for the data of Fig. 7-1. Again
an appropriate factor of safety is used; (3) one or more of the parameters are modeled as random
variables, reflecting statistical uncertainty in the parameter estimators, as well as the scatter inherent in
the fatigue process. The latter is required for a reliability analysis in which all uncertainties associated
with the fatigue design process are accounted for.
Over the years, there have been numerous articles and some books written on the topic of statistical
analysis of fatigue data. An ASTM conference was devoted to the subject (Little and Ekvall, 1981).
The standards published by the ASTM provide a guideline for analysis of fatigue data (American Society
for Testing and Materials, 1987); the latest version is being reviewed and updated. In general, analysis
of the fatigue process is complicated by the following facts: (1) the log-log transformation will not
always linearize the data; (2) the data tend to be heteroscedastic as the scatterband of life at a given
stress broadens at lower stress levels; and (3) there will be some runouts, or censored data. These
issues have been addressed by Schmee and Hahn (1979), Nelson (1984), and Hinkle and Emptage
(1991).
This chapter focuses on the reliability approach. In this light, some comments with regard to statistical
analysis of fatigue data are provided. First, it is difficult to derive distributions of the model parameters
from the data. In general, the parameters will be correlated. A simple approach, assuming that all of
the uncertainty is lumped into one random variable, was pursued by Wirsching and Hsieh (1980). For
example, in the linear model case for welded joints, Wirsching (1984) assumes that the fatigue strength
coefficient A (see Eq. [7-1]) is a random variable and that the exponent m is constant. It is easily shown
from Eq. (7-1) that, if cycles to failure N has a lognormal distribution with coefficient of variation C_N, A is also lognormal with coefficient of variation C_A = C_N. However, the general problem of constructing
a statistical model is still under consideration.
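The equality C_A = C_N is easy to confirm numerically: from Eq. (7-1), A = NS^m, so at a fixed test stress S, A is a scaled copy of N and necessarily carries the same coefficient of variation. A small simulation with hypothetical numbers:

```python
import math
import random

def sample_cov(xs):
    """Sample coefficient of variation: standard deviation / mean."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)
    return math.sqrt(var) / mean

# N lognormal with median 1.0e6 cycles and COV 0.5 (hypothetical values);
# for a lognormal X with COV C, sd(ln X) = sqrt(ln(1 + C**2)).
rng = random.Random(3)
zeta = math.sqrt(math.log(1.0 + 0.5 ** 2))
N_samples = [1.0e6 * math.exp(zeta * rng.gauss(0.0, 1.0)) for _ in range(100000)]

# A = N * S**m at a fixed test stress is just a scaled copy of N.
m_exp, S_test = 3.0, 100.0
A_samples = [n * S_test ** m_exp for n in N_samples]
cov_N = sample_cov(N_samples)
cov_A = sample_cov(A_samples)   # identical to cov_N up to rounding
```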
Second, it has been observed by this author (using a wide variety of fatigue data) that the lognormal
distribution provides a better fit to cycles-to-failure (the random variable N) data than other common
two-parameter models. Others who monitor databanks (e.g., for the gas turbine engine companies) claim
that the lognormal is seldom rejected in a hypothesis test. This is a fortuitous result, as there is a large
body of well-developed statistical theory available for analyzing variables having normal and lognormal
distributions.
The Weibull distribution is popular, but unpublished studies by this author show that the Weibull
does poorly in competition with the lognormal. There appears to be no physical or mathematical reason
for this. Some designers like to use the three-parameter Weibull to model cycles to failure, and this
model is likely to fit better than the two-parameter lognormal. But it may be that the three-parameter
lognormal, introducing a location parameter just like the three-parameter Weibull, would do even better.
There are, however, a number of theoretical and practical reasons for avoiding three-parameter (or
higher) distributions. This controversy is not likely to be resolved soon.
The magnitude of uncertainty in fatigue strength exceeds that of most other physical phenomena.
Table 7-1 provides typical values of the coefficient of variation that are observed in fatigue data. Realize
that the fatigue process is extremely complicated, and that the scatter in the data depends on a large
number of factors, not the least of which is the material itself. Table 7-1 does, however, give a sense
of the amount of uncertainty in fatigue and the relevance of using probabilistic methods to manage that
uncertainty. For default values, COVs of 0.50 for cycles to failure at a given stress level and 0.10 for fatigue strength at a given cycle life are not unreasonable.
The issue of predicting fatigue under random stress processes has attracted a great deal of attention
(e.g., Sobczyk and Spencer, 1991). The test data on random fatigue are limited as, unfortunately, most
fatigue data and empirical models based on these data were developed using constant-amplitude tests.
In real life, however, stress processes are typically as shown in Fig. 7-4. The problem is how to use
the mountains of available constant-amplitude data to predict fatigue under variable-amplitude stresses.
In general, the "random fatigue" problem is extremely complicated. The sequence, or specific his-
tory, of loading can be important when there are large differences in amplitudes of adjoining cycles.
But what designers need for "routine" applications is a simple formulation that produces life predictions
that are reasonably accurate. Such an algorithm was first published by Miner (1945), although it was
subsequently discovered that a similar rule had been proposed earlier by Palmgren (1924). Other pro-
posed models have been summarized by Collins (1981).
For stress histories that are typical of vibratory structural responses to environments, Miner's rule
seems to work reasonably well. Wirsching (1980, 1984) summarized random fatigue tests and proposed
a statistical model for damage at failure, Δ. For purposes of reliability analysis, it is reasonable to model Δ as being lognormal with a median value of 1.0 and COV of 0.30. This COV represents strength modeling error associated with the use of Miner's rule.
Miner's linear damage accumulation rule can be expressed as follows: Consider n cycles of a variable-amplitude process. Let S_i be the stress range (or amplitude) of the ith cycle. Define damage as

D = Σ_{i=1}^{n} 1/N(S_i)    (7-7)

where N(S_i) is the number of cycles to failure at stress level S_i (taken from the constant-amplitude S-N curve). In the special case in which the S-N curve is given as NS^m = A, it is easily shown that
D = (n/A)E(S^m)    (7-8)

where E(S^m) is the expected value of S^m and n is the number of applied cycles. For the special case in which the distribution of stress ranges is Rayleigh (the stress time history is a stationary Gaussian narrow-band process), if A is based on range,

D = (n/A)(2√2 σ)^m Γ(m/2 + 1)    (7-9)

where Γ(·) is the gamma function and σ is the RMS of the stress process. If A is based on stress amplitude, eliminate the first "2" in the expression.
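A direct simulation can confirm this closed-form damage expression: draw Rayleigh-distributed stress ranges for a narrow-band Gaussian process of RMS σ and sum 1/N(S_i) cycle by cycle. All numerical values below are hypothetical:

```python
import math
import random

def miner_damage_closed(n, A, m, sigma):
    """Closed form: D = (n/A)*(2*sqrt(2)*sigma)**m * Gamma(m/2 + 1),
    for Rayleigh stress ranges with A based on range."""
    return (n / A) * (2.0 * math.sqrt(2.0) * sigma) ** m * math.gamma(m / 2.0 + 1.0)

def miner_damage_simulated(n, A, m, sigma, seed=1):
    """Miner's rule directly: D = sum over cycles of 1/N(S_i), with
    N(S) = A/S**m and S_i = 2*a_i, where a_i are Rayleigh peaks of a
    narrow-band Gaussian process with RMS sigma."""
    rng = random.Random(seed)
    D = 0.0
    for _ in range(n):
        u = 1.0 - rng.random()                       # uniform in (0, 1]
        a = sigma * math.sqrt(-2.0 * math.log(u))    # Rayleigh peak
        S = 2.0 * a                                  # stress range
        D += S ** m / A                              # 1/N(S)
    return D

D_closed = miner_damage_closed(100000, 1.0e12, 3.0, 50.0)
D_sim = miner_damage_simulated(100000, 1.0e12, 3.0, 50.0)
```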
When the stress process is wide band, as shown in Fig. 7-4, it is not so obvious how to identify the
number of cycles to be used with Miner's rule. One approach is the equivalent narrow-band method.
For a wide-band process, the RMS (standard deviation of a zero mean process) value and the rate of
zero crossings are computed. Assuming that narrow-band processes having the same RMS and rate of
zero crossings cause the same damage (see Fig. 7-4), the closed-form expressions of Eqs. (7-8) and
(7-9) apply. The rainflow method is an alternative approach to counting the number of cycles of a wide-band process. Among fatigue experts, it is now recognized that the rainflow method provides the most accurate model (Fuchs and Stephens, 1980; Almar-Ness, 1985). Using the rainflow method, an empirical correction factor to the equivalent narrow-band process was developed by Wirsching and Light (1980) and refined by Ortiz and Chen (1987) and Lutes and Larsen (1990).

Figure 7-4. Examples of random stress processes that can produce metal fatigue.
7. RELIABILITY ANALYSIS
Let Ns be the service life, or any other life for which reliability or a probability of failure estimate is
required. Let N be a random variable denoting cycles to failure. Clearly N will, in general, be a function
of X, a vector of design factors including stress and other variables. The limit state function can be
written as

g(X) = N(X) − N_s    (7-10)

This form is commonly used when the stress process is random and Miner's rule is employed.
Another form of the limit state function is

g(U, V) = R(U) − S(V)    (7-11)

where U and V are vectors of design factors; R is the fatigue strength at life N_s, the distribution of
which is illustrated in Fig. 7-1; and S is the stress range (or amplitude). This approach is used frequently
when the stress is constant amplitude and the application is high-cycle fatigue, for example, in the
endurance range for steel.
Because the reliability analysis for the limit state function given above is standard (the same analysis
techniques apply for any type of failure mode including fatigue), any of the standard reliability analysis
methods can be employed. A summary of available algorithms for reliability analysis, most of them
automated, is given in Table 7-2.
There have been a large number of published works on fatigue reliability methods over the years.
In 1982, The American Society of Civil Engineers (ASCE), Committee on Fatigue and Fracture Reli-
ability, published a state-of-the-art summary on fatigue reliability in the Journal of the Structural Di-
vision (ASCE, 1982). For the most part, reliability analyses have been performed employing basic
probability concepts and the methods given in Table 7-2. Attempts to address the problem in a com-
prehensive sense, treating all relevant design factors as random variables, have been made by Munse
et al. (1983) using a Weibull format and by Wirsching (1984) and Wirsching and Chen (1988) using a
lognormal format. A summary of probabilistic and statistical methods in fatigue analysis is provided by
Madsen et al. (1986), Jiao and Moan (1992), and Wirsching et al. (1991). The following examples
illustrate how reliability methods can be employed for fatigue.
and (3) Miner's rule is valid. Fatigue damage is given as (see Eq. [7-8])

D = (n/A)E(S^m)    (7-12)

Define the equivalent stress range as

S'_e = [E(S^m)]^(1/m)    (7-13)

The prime indicates "best estimate." Introducing stress modeling error through the random variable B, the actual equivalent stress range is

S_e = BS'_e    (7-14)

where B accounts for all of the uncertainties associated with the analysis of translating the model of the environment to stresses at fatigue-sensitive points. Thus, damage becomes

D = (n/A)B^m S'_e^m    (7-15)

At failure (limit state), D = Δ when n = N, where Δ is damage at failure (see above) and N is the total number of cycles to failure,

N = ΔA/(B^m S'_e^m)    (7-16)
Table 7-2. Summary of Structural Reliability Methods for Probability of Failure Estimates of Components and Systems

Analytical methods
1. Mean value first-order second-moment (MVFOSM; Cornell, 1969)
2. Hasofer-Lind generalized safety index (Hasofer and Lind, 1974)
3. First-order reliability methods (FORMs)
   a. Limit states represented by tangent hyperplanes at design points in transformed standard normal space (Madsen et al., 1986)
   b. Rackwitz-Fiessler algorithm (Rackwitz and Fiessler, 1978)
4. Second-order reliability methods (SORMs)
   a. Limit states represented by hyperparaboloids at design points in transformed standard normal space (Madsen et al., 1986)
   b. Wu/FPI algorithm (Wu and Wirsching, 1987)
5. Advanced mean value (AMV) method (Wu et al., 1990)

Monte Carlo simulation
1. Direct Monte Carlo
2. Importance sampling (Shinozuka, 1983)
3. Domain-restricted sampling (Harbitz, 1986)
4. Adaptive sampling (Bucher, 1988)
5. Directional sampling (Bjerager, 1990)
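As a minimal illustration of direct Monte Carlo (the first entry above), the sketch below estimates P_f = P(R ≤ S) for a limit state of the form g = R − S with lognormal strength R and stress S, and compares it with the closed-form lognormal result; the medians and COVs are hypothetical:

```python
import math
import random

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pf_exact(med_R, cov_R, med_S, cov_S):
    """Closed form for P(R <= S) with lognormal R and S:
    g = ln R - ln S is normal, so P_f = Phi(-beta)."""
    zR2 = math.log(1.0 + cov_R ** 2)
    zS2 = math.log(1.0 + cov_S ** 2)
    beta = math.log(med_R / med_S) / math.sqrt(zR2 + zS2)
    return phi(-beta)

def pf_direct_mc(med_R, cov_R, med_S, cov_S, trials=200000, seed=2):
    """Direct Monte Carlo estimate of P(R <= S): count failures in
    random draws of (R, S)."""
    zR = math.sqrt(math.log(1.0 + cov_R ** 2))
    zS = math.sqrt(math.log(1.0 + cov_S ** 2))
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        R = med_R * math.exp(zR * rng.gauss(0.0, 1.0))
        S = med_S * math.exp(zS * rng.gauss(0.0, 1.0))
        if R <= S:
            fails += 1
    return fails / trials

# Hypothetical strength and stress statistics.
pf_mc = pf_direct_mc(100.0, 0.10, 60.0, 0.20)
pf_cf = pf_exact(100.0, 0.10, 60.0, 0.20)
```

Direct sampling needs many trials when P_f is small, which is exactly what the importance-sampling and domain-restricted methods in the table are designed to mitigate.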
Assume that Δ, A, and B are lognormally distributed random variables. Then N will also have a lognormal distribution. There will be a closed-form solution for the probability of a fatigue failure prior to the end of the intended service life N_s,
P_f = P(N ≤ N_s)    (7-17)

P_f = Φ(−β)    (7-18)
where Φ is the standard normal distribution function and β is the safety index (reliability index), defined for this limit state as (see Chapter 2 of this book)

β = ln(Ñ/N_s)/σ_lnN    (7-19)

where Ñ is the median of N and

σ_lnN = √(ln[(1 + C_Δ²)(1 + C_A²)(1 + C_B²)^(m²)])    (7-20)
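Assuming the lognormal format of Wirsching (1984), in which β = ln(Ñ/N_s)/σ_lnN with σ_lnN² = ln[(1 + C_Δ²)(1 + C_A²)(1 + C_B²)^(m²)], the safety index follows directly from the medians and COVs; the statistics below are hypothetical:

```python
import math

def fatigue_safety_index(N_med, N_s, C_delta, C_A, C_B, m):
    """Lognormal-format safety index for N = Delta*A/(B**m * S**m):
    sigma_lnN**2 = ln[(1 + C_delta^2)(1 + C_A^2)(1 + C_B^2)^(m^2)],
    beta = ln(N_med / N_s) / sigma_lnN, and P_f = Phi(-beta)."""
    s2 = (math.log(1.0 + C_delta ** 2)
          + math.log(1.0 + C_A ** 2)
          + m ** 2 * math.log(1.0 + C_B ** 2))
    beta = math.log(N_med / N_s) / math.sqrt(s2)
    pf = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))   # Phi(-beta)
    return beta, pf

# Hypothetical statistics: median life 2.0e8 cycles, service life 1.0e7,
# and COVs in the range of the defaults suggested earlier.
beta, pf = fatigue_safety_index(2.0e8, 1.0e7, C_delta=0.30, C_A=0.50,
                                C_B=0.25, m=3.0)
```

Note how the COV of the stress modeling error B is amplified by m², which is why stress uncertainty dominates the fatigue safety index.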
The design S-N curve is taken as

NS^m = A_0    (7-22)

where A_0 defines the design S-N curve on the lower or safe side of the data. It is assumed that the design curve and the median curve, defined by Ã, have the same slope m. Define the scatter factor as

λ = Ã/A_0    (7-23)
λ = exp(z_0 σ_lnA)    (7-24)

where

σ_lnA = √(ln(1 + C_A²))    (7-25)

and z_0 is the number of standard deviations (on a log basis) that the design curve is to the left of the median curve. Typically, z_0 is chosen as 2 or 3.
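For example, assuming σ_lnA = √(ln(1 + C_A²)) for a lognormal A, the scatter factor for C_A = 0.50 and z_0 = 2 works out as follows (the median value of A is a hypothetical placeholder):

```python
import math

# Scatter factor lambda = exp(z0 * sigma_lnA), with
# sigma_lnA = sqrt(ln(1 + C_A**2)) for a lognormal A.
# C_A, z0, and the median A are illustrative values only.
C_A = 0.50
z0 = 2.0
sigma_lnA = math.sqrt(math.log(1.0 + C_A ** 2))
scatter = math.exp(z0 * sigma_lnA)   # lambda = (median A) / A_0
A_median = 1.0e12
A_design = A_median / scatter        # design-curve coefficient A_0
```

So placing the design curve two log-standard-deviations below the median divides the S-N coefficient by roughly 2.6 in this case.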
A safe design is ensured by requiring the safety check expression

D ≤ Δ_0    (7-26)

where Δ_0 is the target damage factor, essentially playing the role of a safety factor. For example, Δ_0 = 0.50 for a routine design and Δ_0 = 0.10 for components for which the consequences of failure are serious.
Given the target safety index, β_0, and statistics of the design parameters, the target damage factor can be derived by combining Eqs. (7-20), (7-22), (7-23), and (7-26),

(7-27)
An example is presented in Fig. 7-5. The statistics given in Fig. 7-5 are considered to be typical for a welded joint (or a detail in which the initiation life is small) in a structure that is exposed to a natural environment. The target damage level, Δ_0, is plotted as a function of the target safety index, β_0. Note that for a typical value, β_0 = 3.0, the target damage level Δ_0 ≈ 0.20.
Figure 7-5. Target damage level as a function of the target safety index (plotted for C_Δ = 0.30, median B = 0.90, C_B = 0.25).
8. CONCLUSIONS
Because of relatively large uncertainties in fatigue design factors, use of reliability methods is partic-
ularly appropriate for quantifying structural performance for the fatigue failure mode. All of the available
structural reliability methods are relevant for the purpose of performing probability of failure estimates
or developing reliability-based design criteria. The lognormal format is particularly useful for the elementary case where fatigue strength is defined by the form NS^m = A, in which m and A are empirical constants, N is cycles to failure, and S is stress range (or amplitude). The advanced mean value method
is employed effectively when a computer code is required to evaluate the cycle life, for example, when
local strain analysis is used or when a closed-form expression for the integration of the crack growth
relationship is not available.
REFERENCES
ALMAR-NESS, A., Ed. (1985). Fatigue Handbook: Offshore Structures. Trondheim, Norway: Tapir Publishers.
American Society of Civil Engineers (ASCE), Committee on Fatigue and Fracture Reliability (1982). Fatigue and fracture reliability: A state-of-the-art review. Journal of the Structural Division of American Society of Civil Engineers 108:3-104.

Figure 7-6. Flow chart for the advanced mean value (AMV) method. (Input: the response function N = N(u), explicit or implicit, with u a vector of basic variables, and the mean and standard deviation of each u_i, i = 1, 2, ..., k. Step 1 evaluates N at the mean values and at perturbed values of each u_i, a total of k + 1 function evaluations; the AMV step then improves the CDF of N by reevaluating N at the design point for each probability level.)
American Society for Testing and Materials (ASTM) (1987). Metals-mechanical testing. In: ASTM Standards, Vol. 03.01. Philadelphia: American Society for Testing and Materials.
BJERAGER, P. (1990). On computation methods for structural reliability analysis. Structural Safety 9:79-96.
BROEK, D. (1984). Elementary Engineering Fracture Mechanics. The Hague, Netherlands: Martinus Nijhoff.
BUCHER, C. O. (1988). Adaptive sampling-an iterative fast Monte Carlo procedure. Structural Safety 5:119-126.
Local strain analysis example: external loading P(t), cyclic plasticity at the notch, and fatigue strength defined by the strain-life curve, with constants K_t (stress concentration factor) 3.0, E (modulus of elasticity) 206,800 MPa, b (fatigue strength exponent) −0.0843, c (fatigue ductility exponent) −0.9126, s (strain hardening exponent) 0.108, and A (strain hardening coefficient) 1518 MPa.

Figure 7-8. Distribution function of cycles to failure (MVFO and AMVFO estimates plotted as z = Φ^(−1)(F) versus cycles to failure).
BURNSIDE, O. H., S. J. HUDAK, E. OELKERS, K. CHAN, and R. J. DEXTER (1984). Long Term Corrosion Fatigue
of Welded Marine Steels. SSC-326. Ship Structure Committee, U.S. Coast Guard. Springfield, Virginia:
National Technical Information System.
COFFIN, L. F. (1954). A study of the effect of cyclic thermal stresses in a ductile material. Transactions ASME 76:931-950.
COLLINS, J. A. (1981). Failure of Materials in Mechanical Design. New York: McGraw-Hill.
CORNELL, C. A. (1969). A probability-based structural code. Journal of the American Concrete Institute 66:974-
985.
DOWLING, N. E. (1979). Fatigue at Notches and the Local Strain and Fracture Mechanics Approaches. ASTM
STP 677. Philadelphia: American Society for Testing and Materials.
DOWLING, N. E., W. R. BROSE, and W. K. WILSON (1977). Notched member fatigue life predictions by the local
strain approach. In: Fatigue under Complex Loadings: Analysis and Experiments. Warrendale, Pennsylvania:
Society of Automotive Engineers, pp. 55-84.
FUCHS, H. O., and R. I. STEPHENS (1980). Metal Fatigue in Engineering. New York: John Wiley & Sons.
GURNEY, T. R. (1979). Fatigue of Welded Structures, 2nd ed. New York: Cambridge University Press.
HARBITZ, A. (1986). An efficient sampling method for probability of failure calculations. Structural Safety 3:109-
115.
HASOFER, A. M., and N. C. LIND (1974). Exact and invariant second-moment code format. Journal of the Engi-
neering Mechanics Division of American Society of Civil Engineers 100:111-121.
HERTZBERG, R. W. (1989). Deformation and Fracture Mechanics of Engineering Materials. New York: John Wiley
& Sons.
HINKLE, A. J., and M. R. EMPTAGE (1991). The use of transformations in the analysis of fatigue life data. Fatigue
and Fracture of Engineering Materials 14:591-600.
HUDAK, S. J. (1981). Small crack behavior and the prediction of fatigue life. Journal of Engineering Materials
and Technology 103:26-35.
JIAO, G., and T. MOAN (1992). Reliability based fatigue and fracture criteria for welded offshore structures.
Engineering Fracture Mechanics 41:271-282.
LEIS, B., M. F. KANNINEN, A. T. HOPPER, J. AHMAD, and D. BROEK (1983). A Critical Review of the Short Crack
Problem in Fatigue. AFWAL-TR-83-4019. Wright Patterson Air Force Base, Ohio: Materials Laboratory.
LITTLE, R. E., and J. C. EKVALL (1981). Statistical Analysis of Fatigue Data. ASTM STP 744. Philadelphia:
American Society for Testing and Materials.
LUTES, L. D., and C. E. LARSEN (1990). Improved spectral method for variable amplitude fatigue prediction.
Journal of Structural Division of American Society of Civil Engineers 116:1149-1164.
MADSEN, H. O., S. KRENK, and N. C. LIND (1986). Methods of Structural Safety. Englewood Cliffs, New Jersey:
Prentice-Hall.
MANSON, S. S. (1954). Behavior of Materials under Conditions of Thermal Stress. NACA TN-2933. Cleveland,
Ohio: NASA/Lewis.
MILLER, M. S., and J. P. GALLAGHER (1981). An analysis of several fatigue crack growth rate (FCGR) descriptions.
In: Fatigue Crack Growth Measurement and Data Analysis. ASTM STP 738. Philadelphia: American Society
for Testing and Materials, pp. 205-251.
MINER, M. A. (1945). Cumulative damage in fatigue. Transactions ASME 67:A159-A164.
MUNSE, W. H., T. W. WILBUR, M. L. TELLALIAN, K. NICOLL, and K. WILSON (1983). Fatigue Characterization
of Fabricated Ship Structural Details for Design. SSC-318. Ship Structure Committee, U.S. Coast Guard.
Springfield, Virginia: National Technical Information System.
NELSON, D. V. (1978). Cumulative Fatigue Damage in Metals. Ph.D. dissertation. Stanford, California: Stanford
University.
NELSON, W. (1984). Fitting of fatigue curves with nonconstant standard deviation to data with runouts. Journal of
Testing and Evaluation 12:69-77.
Probabilistic Fatigue Analysis 165
ORTIZ, K., and N. K. CHEN (1987). Fatigue damage prediction for stationary wide-band stresses. Presented at the
Fifth International Conference on the Applications of Statistics and Probability in Civil Engineering, June
8-12, 1987, Vancouver, Canada.
ORTIZ, K., C. J. KUNG, and H. L. PERNG (1988). Modeling fatigue crack growth resistance, dn/da. In: Probabilistic
Methods in Civil Engineering. P. D. Spanos, Ed. New York: American Society for Civil Engineers, pp. 21-
25.
PALMGREN, A. (1924). Die Lebensdauer von Kugellagern. Zeitschrift des Vereines Deutscher Ingenieure 68(14):
339.
PARIS, P. C. (1964). The fracture mechanics approach to fatigue. In: Fatigue, an Interdisciplinary Approach. J. J.
Burke, N. L. Reed, and V. Weiss, Eds. Syracuse, New York: Syracuse University Press, pp. 107-132.
RACKWITZ, R., and B. FIESSLER (1978). Structural reliability under combined random load sequences. Journal of
Computers and Structures 9:489-494.
ROLFE, S. T., and J. M. BARSOM (1987). Fracture and Fatigue Control in Structures. Englewood Cliffs, New
Jersey: Prentice-Hall.
SCHIJVE, J. (1979). Four lectures on fatigue crack growth. Engineering Fracture Mechanics 11:167-221.
SCHMEE, J., and G. HAHN (1979). A simple method for regression analysis with censored data. Technometrics 21:
417-431.
SHIGLEY, J. E. (1977). Mechanical Engineering Design. New York: McGraw-Hill.
SHINOZUKA, M. (1983). Basic analysis of structural safety. Journal of Structural Division of American Society of
Civil Engineers 109:721-740.
SOBCZYK, K., and B. F. SPENCER (1991). Random Fatigue: From Data to Theory. New York: Academic Press.
Society of Automotive Engineers (1988). Fatigue Design Handbook. Publication No. AE-10. Warrendale, Penn-
sylvania: Society of Automotive Engineers.
U.S. Air Force Material Laboratory (1988). Damage Tolerant Design Handbook. Wright Patterson Air Force Base,
Ohio: Material Laboratory.
WIRSCHING, P. H. (1980). Fatigue reliability in welded joints of offshore structures. International Journal of Fatigue
2:77-83.
WIRSCHING, P. H. (1984). Fatigue reliability for offshore structures. Journal of Structural Division of American
Society of Civil Engineers 110:2340-2356.
WIRSCHING, P. H., and Y. N. CHEN (1988). Consideration of probability based fatigue design criteria for marine
structures. Marine Structures 1:23-45.
WIRSCHING, P. H., and S. HSIEH (1980). Linear model in probabilistic fatigue design. Journal of the Engineering
Mechanics Division of American Society of Civil Engineers 106:1265-1278.
WIRSCHING, P. H., and M. C. LIGHT (1980). Fatigue under wide band random stresses. Journal of the Structural
Division of American Society of Civil Engineers 106:1593-1607.
WIRSCHING, P. H., and Y.-T. WU (1985). Probabilistic and statistical methods of fatigue analysis and design. In:
Pressure Vessel and Piping Technology: A Decade of Progress-1985. New York: American Society of
Mechanical Engineers, pp. 793-820.
WIRSCHING, P. H., T. Y. TORNG, and W. S. MARTIN (1991). Advanced fatigue reliability analysis. International Journal of Fatigue 13:389-394.
Wu, Y.-T., and P. H. WIRSCHING (1987). A new algorithm for structural reliability estimation. Journal of Engi-
neering Mechanics of American Society of Civil Engineers 113:1319-1336.
Wu, Y.-T., H. R. MILLWATER, and T. A. CRUSE (1990). An advanced probabilistic structural analysis method for
implicit performance functions. AIAA Journal 28:1663-1669.
8

PROBABILISTIC ANALYSIS OF STRUCTURAL SYSTEMS
FRED MOSES
1. INTRODUCTION
The application of probabilistic methods for codifying structural reliability in the design and checking
of individual structural components is well established. This follows the decades-long development of
rational criteria for selection of safety factors on the basis of economy and risk. National and interna-
tional regulatory and specification groups have incorporated new design manuals based on probabilistic
criteria. These include civil, highway, marine, and geotechnical applications; materials including steel,
aluminum, and concrete; and structural components such as beams, columns, connections, footings, and
piles.
It has been recognized that the incorporation of reliability in design specifications has two major
limitations.
1. The actual structure (or system) reliability may differ significantly from the individual component reliabilities. Thus, economic optimization of the structure cannot incorporate the economic consequences of failure in the component design. Further, Bayesian calibration of the reliability formulation to update statistical parameters from observed performance is limited as long as the analysis deals with component risk while the observations actually reflect the system risk.
2. Most of the reported actual failures in structures are not a consequence of the overload or understrength
phenomenon checked in a specification. Rather, failures mostly occur because of accidents and human
errors. Emphasis in recent years is on promoting structure safety with attention to structure redundancy and
toughness as well as design review and inspection processes. These developments will benefit from a system
risk analysis and optimum allocation of both costs and component risks to achieve a safe optimum structure.
Structure system risk analysis is quite complex in comparison to component risk assessments, al-
though the same requirements are evident, such as an accurate statistical database, probabilistic descrip-
tions of random variables, and calculations of risks. The major difference is that system analysis requires
the formulation and identification of the numerous potential collapse modes and their combination into
a single assessment of system risk. For example, in some structures, several components must fail in a
sequence before there is overall structure damage or failure. In other structure configurations or with
different materials, the failure of any single one of many significant members may lead to catastrophic
consequences.
Examples of system studies include fatigue assessment of tension leg platform (TLP) marine struc-
tures, assessment of fixed offshore steel jacket structures, definition of redundancy criteria for highway
bridge structures, and evaluation of seismic safety of structures. Because of the considerable economic
implications, much attention in system analyses has been directed toward evaluation
of existing structures. These include both intact and damaged structures, to determine if reliability target
levels can be satisfied by existing system reserve even when single components may have inadequate
safety margins. Such approaches, even in a deterministic format, have been considered in highway
structures for many years.
This chapter presents some system models and highlights developments as well as present limitations.
In all areas, the intent is to focus analysis on the goals of structure reliability, which are to influence
design decisions and promote optimal allocation of material and inspection resources.
2.1. Notations
2.2. Abbreviations
3.3.1. System geometry. Geometry is often looked on as the main descriptor of system formula-
tion. Numerous solutions have been published for series and parallel systems, which are distinct ide-
alized representations of the geometry. A series system analogous to a "weakest-link" chain fails if
any link fails. Statically determinate structures and independent subsystems such as cladding, super-
structure, and foundation of a building are examples of series systems.
A parallel system implies that each component participates independently in carrying load, such that
upon failure of one member portions of its load or load increments can be transferred to adjacent
members in the parallel network. In fact, several codes refer to this geometric interpretation as a target
for acceptable design and fulfillment of "redundancy" requirements. Actually, some studies, including
common cause failure mode analysis in seismically excited nuclear power systems as well as other
structures, have confirmed that a parallel array of members, by itself, may not ensure reserve capacity.
In some instances, the parallel system can even be less reliable than a series system.
3.3.2. Material performance. The second element in system performance is material response.
This aspect refers particularly to behavior following the attainment of the limit state of an element.
Different material models have been studied in the context of system reliability, including elastic plastic
(ductile), elastic brittle, strain hardening (in which loads may increase after component yielding), and
semibrittle, in which loads gradually decrease following the limit state condition. Because of the com-
plexity of load redistribution models, finite element analysis is needed to predict behavior and charac-
terize the system performance.
3.3.3. Statistical correlation. The third system characteristic in the modeling is perhaps the most
difficult to quantify, namely, the statistical correlation of either loads or resistances. Correlation between
component resistances arises as a result of common material sources, similar fabrication, inspection,
testing and control procedures, and even a uniform interpretation of specification guidelines by the
designer. Load correlation arises because of common sources of environmental excitation simultaneously
affecting many resisting components, including seismic, thermal, wind, and other inputs as well as
tolerance limits influencing dead load and residual load effects.
For a series system, there is no difference between ductile or brittle components because any failure
causes system collapse. Increasing the correlation between failure mode events decreases the system
probability of failure because the survival of any component means there is less chance of other com-
ponents failing. In the limit, when correlation approaches 1.0, the failure probability of the series system
approaches the failure probability of the individual component with the highest probability of failure.
This would also be the case, for example, for a series system in which the load uncertainty (coefficient
of variation [COV]) is much greater than resistance uncertainty. Risk of failure then depends essentially
on whether a certain load level is exceeded and is independent of the number of links in the weakest-
link series system.
In a parallel system, the nature of the component failure, that is, whether it is ductile or brittle or in
between, has a significant effect on the system reliability. This is because a parallel system does rely
on load sharing but, more importantly, following any component yielding there can be load redistribution
in which members that are not fully loaded carry additional portions of the loading. Therefore, in the
case of parallel systems, increasing the statistical correlation between element strengths increases the
probability of system failure. This is in contrast to series systems and is because, when the correlation
is small, any scatter in component strengths of a parallel system allows weaker components to be
"balanced out" by stronger components. Increasing the correlation takes away from this balancing.
This characteristic is important in such applications as yarn materials, fiber-reinforced composites, or
even buildings supported by rows of columns or piles.
Similarly, if the load COV is greatly above the component strength COV, then much of the benefit
of parallel or so-called "fail-safe" systems is lost. This is especially true if optimization methods are
used to size components so that all elements are fully utilized under a given load application. In such
cases, there is little possibility of load redistribution and the system failure probability depends almost
entirely on the distribution of the loading variable.
Codes that reward such parallel geometry, such as the Guide Specification for Safe Life Assessment
of Steel Bridges (AASHTO, 1990), which specifies higher allowable fatigue stresses for parallel load
paths, should be reviewed. The actual benefits of parallel paths may be a fortuitous consequence of
design practices such as making all members the same size, rather than an inherent consequence of
parallel geometry.
Parallel systems are also affected by whether components are ductile or brittle. In some instances,
such as low- to average-strength COV and moderate safety factors, a parallel brittle system may be less
safe than a corresponding nonredundant single load path structure. The reason is that the parallel brittle
system may have more elements whose failure can trigger system collapse because the redistribution
capability may be small. Parallel brittle composites and yarn materials may still be effective, however,
if their individual component safety factors are large. In such instances, there are sufficient safety
margins to allow for redistribution in the event of a single strand failure, so that system reliability is
greater than a single element reliability. This benefit depends strongly on the redistribution mechanics.
For example, closely spaced fibers in a composite may cause high stress concentrations following a
single failure so that there is a cascading effect or a "progressive" collapse following an element failure
with little or no system benefits.
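The contrast between ductile and brittle parallel behavior can be sketched with a short Monte Carlo experiment (a hypothetical equal-load-sharing bundle with invented strength and load statistics, not an example from this chapter). With equal load sharing, a brittle bundle's capacity is the best load level reached during the failure sequence, which is never more than the ductile capacity, the sum of member strengths:

```python
import random

def bundle_capacity(strengths):
    # Equal load sharing, brittle members: with strengths sorted ascending,
    # after the k weakest members have broken the remaining n - k share the
    # load equally, so the stage capacity is (n - k) * s[k]; the bundle
    # capacity is the best stage reached.
    s = sorted(strengths)
    n = len(s)
    return max((n - k) * s[k] for k in range(n))

def simulate(n_elem=5, trials=20_000, seed=1):
    rng = random.Random(seed)
    fail_brittle = fail_ductile = 0
    for _ in range(trials):
        # hypothetical statistics: member strength ~ N(10, 2), load ~ N(35, 5)
        r = [rng.gauss(10.0, 2.0) for _ in range(n_elem)]
        s = rng.gauss(35.0, 5.0)
        fail_brittle += bundle_capacity(r) < s
        fail_ductile += sum(r) < s       # ductile bundle: full redistribution
    return fail_brittle / trials, fail_ductile / trials

pf_brittle, pf_ductile = simulate()
```

For these numbers the brittle failure probability is more than an order of magnitude above the ductile one, illustrating why member postfailure behavior matters so much in parallel systems.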
3.4. Examples
The system failure probability for a series system can be written as follows:

P_f = 1 - \int_0^\infty \prod_{i=1}^{n} [1 - F_{R_i}(s)] \, f_S(s) \, ds \qquad (8-1)

where R_i is the resistance of the ith component in the series, and S is the load variable. The variable n
is the number of components in series, F_{R_i}(s) is the cumulative distribution function of R_i at R_i = s, and
f_S(s) is the probability density function of S at S = s. Survival is seen to depend on the survival of all
the components.
In a simple parallel load-sharing case, the system resistance R can be written as

R = \sum_{i=1}^{n} R_i \qquad (8-2)

and the corresponding system failure probability as

P_f = P(R < S) = \int_0^\infty F_R(s) f_S(s) \, ds \qquad (8-3)

where S is the load variable. System failure probability is reduced as the COV of R decreases with the
number of components in parallel, provided these component strengths are statistically independent.
(Both Eqs. [8-1] and [8-3] assume statistical independence between component failures.)
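Equations (8-1) and (8-3) can be evaluated directly for normally distributed strengths and load. The sketch below (with invented means and standard deviations) integrates Eq. (8-1) by the trapezoidal rule for identical independent components; for Eq. (8-3) with R a sum of independent normal strengths, the integral collapses to a closed form:

```python
import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def norm_pdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def pf_series(n, mu_r, sig_r, mu_s, sig_s, steps=4000):
    # Eq. (8-1) for n identical, independent strengths, integrated by
    # the trapezoidal rule over +/- 8 load standard deviations.
    lo, hi = mu_s - 8.0 * sig_s, mu_s + 8.0 * sig_s
    h = (hi - lo) / steps
    surv = 0.0
    for j in range(steps + 1):
        s = lo + j * h
        w = 0.5 * h if j in (0, steps) else h
        surv += w * (1.0 - norm_cdf(s, mu_r, sig_r)) ** n * norm_pdf(s, mu_s, sig_s)
    return 1.0 - surv

def pf_parallel_ductile(n, mu_r, sig_r, mu_s, sig_s):
    # Eq. (8-3) with R = sum of n independent normal strengths:
    # R - S is normal, so Pf = P(R - S < 0) in closed form.
    mu = n * mu_r - mu_s
    sig = math.sqrt(n * sig_r ** 2 + sig_s ** 2)
    return norm_cdf(0.0, mu, sig)

pf1 = pf_series(1, 10.0, 1.5, 6.0, 1.0)   # single component
pf5 = pf_series(5, 10.0, 1.5, 6.0, 1.0)   # five weakest links
```

Note that the five-link series probability exceeds the single-component value but stays below the union bound 5·pf1, because the shared load variable correlates the component failure events.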
In a similar way different arrangements of series and parallel members can be analyzed to give
system failure probability. Figure 8-1 shows the ratio of system to component failure probability for
different numbers of members (k) and increasing strength coefficient of variation (COV). This ratio
increases with increasing COV: the higher the COV, the greater the chance that one of the series
components will fail. In Fig. 8-1c, the ratio of system to component failure probability is shown as a
function of correlation coefficient between component failures. As the coefficient approaches 1, the
Figure 8-1. Series system model: Ratio of system to element probability of failure, plotted against the number of components k for correlation coefficients from 0 to 1.
system failure probability approaches the member failure probability regardless of the number of
components.
Parallel elements are shown in Fig. 8-2, with the ratio of system failure probability to component
failure probability decreasing with number of components and increasing with higher correlation co-
efficient. Although the series and parallel systems in Figs. 8-1 and 8-2 are idealized cases, a realistic
Figure 8-2. Parallel system model: Ratio of system to element probability of failure, plotted against the number of components k for correlation coefficients from 0 to 1.
application is shown in Fig. 8-3, which models the supports for a tension leg offshore oil platform.
There are n cables in series making up a tendon where there are, in turn, m tendons in parallel resisting
the load. Each element has a failure probability, p, which is assumed independent. Reliability is im-
proved by increasing the length of the failure path, which means increasing the number of components
that must fail before the structure fails. This is done by increasing the number of components in parallel.
However, increasing the structure redundancy also increases the number of possible element failure
modes that may initiate the cascading effect leading to system collapse. Some preliminary observations
were that the greatest marginal benefit occurs when the system has two to three degrees of
redundancy. These may be determined by examining results (Gorman, 1985) as shown in
Fig. 8-4 for both series-parallel and parallel-series models.
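The series-parallel tendon model can be sketched numerically. In this deliberately simplified version (independent element failures and no load redistribution between tendons, which the figures do account for), a tendon of n series elements fails if any element fails, and the system fails only if all m tendons fail:

```python
def pf_series_parallel(p, n, m):
    """m tendons in parallel, each a series of n elements with independent
    element failure probability p; the system is taken to fail only when
    every tendon has failed (post-failure load redistribution between
    tendons is ignored in this sketch)."""
    q = 1.0 - (1.0 - p) ** n   # failure probability of one tendon (weakest link)
    return q ** m

# marginal benefit of added tendons for, e.g., p = 1e-3 and n = 10 elements
pf_by_m = [pf_series_parallel(1e-3, 10, m) for m in (1, 2, 3, 4)]
```

Each added tendon multiplies the system failure probability by the tendon failure probability q, so the absolute improvement diminishes rapidly, consistent with the observation that two to three degrees of redundancy capture most of the benefit.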
4.1. Simulation¹
The idealized geometries described above are useful for only a few realistic cases. In general, struc-
tures possess a mixture of series and parallel members, as well as different degrees of member ductility
and strength correlation. For example, Fig. 8-5 shows a model of an offshore steel jacket structure.
¹A detailed discussion of simulation techniques, with emphasis on component reliability analysis, may be found in Chapter
4 of this book. The general principles and procedures discussed there are applicable to system reliability analysis also.
[Figure 8-3: tendon system model for a tension leg platform — m tendons in parallel, each with n tendon elements in series, subject to level variation and to current (fatigue, extreme loading).]
Parts of it can be modeled by ideal series and parallel networks, but in fact there are many different
element failure types that must be integrated into the system probability of failure.
In this formulation it is assumed herein that there is available a deterministic analysis model of the
structure. That is, if the loads, geometry, and component strength variables are all specified determin-
istically, then it is possible to determine from the analysis model whether the structure has failed or
survived. This applies whether the system is linear or nonlinear. If such a model is available, then
one approach for estimating system risk is by direct simulation. Samples of structures and loads can be
generated by Monte Carlo methods. The best estimate for system probability of failure is the percentage
of samples that fail. That is,
P_f = \int G(x) f(x) \, dx \qquad (8-4)
where G(x) is an indicator function that equals 1 if the sample structure fails and 0 otherwise. x is the
vector of the random variables and f(x) is their joint density function.
Large numbers of samples are required in most Monte Carlo applications because P_f is small. Improved
efficiency can be gained in some cases by using importance sampling, as described below.
Another approach is similar to experiment design. Samples are generated for preassigned values of the
random variables, for example, mean, mean plus and minus one standard deviation, mean plus and
minus two standard deviations, etc. Probabilities are assigned to each sample on the basis of the dis-
tribution function. These weighting probabilities are used to estimate the system probability of failure,
by summing probabilities over the samples in which the structure fails.
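As a minimal sketch of direct simulation of Eq. (8-4), consider a hypothetical two-member ductile system with invented statistics; the estimator is simply the fraction of sampled structures for which the indicator G(x) equals 1:

```python
import random

def g_fails(r1, r2, s):
    # indicator G(x) of Eq. (8-4) for a hypothetical two-member ductile
    # system: failure when the combined capacity is below the load
    return r1 + r2 < s

def direct_mc(trials=100_000, seed=7):
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        r1 = rng.gauss(10.0, 1.0)   # invented member strengths
        r2 = rng.gauss(10.0, 1.0)
        s = rng.gauss(14.0, 2.0)    # invented load
        fails += g_fails(r1, r2, s)
    return fails / trials

pf_hat = direct_mc()   # exact value for these statistics is Phi(-6/sqrt(6)) ~ 0.007
```

Because the exact failure probability here is below 10⁻², on the order of 10⁵ samples are needed just to collect a few hundred failures, which is precisely the cost that importance sampling aims to reduce.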
[Figure 8-4: system reliability R = [1 - (1 - p)^n]^m plotted against the number of elements, for m = 1, 2, 3 and element probabilities p = 0.5 and p = 0.9.]
[Figure 8-5: offshore steel jacket structure model, with element failure types including pile-to-jacket connections, structural joints, bracing, piles, axial soil failure, and lateral soil failure.]
including load and strength. Using regression analysis, a failure function g is fit to the response at these
points. A safety index (β) can then be computed in the usual manner from the function g, using the
distributions of the random variables. It is important to make sure that the region of the failure point,
x*, is accurately fit by the regression analysis. Otherwise, the process must be iterative in fitting g in
the assumed region of the sample space and then updating g with additional points selected near x*.
(The response function method [response surface method] is described in Chapter 3 of this book.) This
approach has been used by Baker and Turner (1992) for finding safety indices (β's) for jack-up rigs.
These analyses need highly complex and time-consuming time history studies to verify failure given
sample points describing the sea state, etc. Similarly, the author developed response functions (Moses
et al., 1993) for continuous-span highway bridge girder systems, in which large displacements in the
nonlinear response region controlled failure. Significant nonlinearities occur, partly accompanied by
member unloading. Each analysis check used significant computer time, so that the regression fit for g
could not utilize a large number of sample points.
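The response surface steps above can be sketched as follows, with an invented implicit_model standing in for the expensive structural analysis and one-sigma offset sampling points fitting a linear surface; a real application would iterate the fit in the region of the failure point x*:

```python
import math

def implicit_model(r, s):
    # stand-in for the expensive structural analysis (hypothetical model);
    # the margin g goes negative at failure
    return r - s - 0.02 * s * s

mu_r, sig_r = 20.0, 2.0   # invented statistics
mu_s, sig_s = 10.0, 2.0

# evaluate the "expensive" model at the mean and at one-sigma offsets,
# then fit a linear response surface g ~ a0 + a1*r + a2*s by differences
g0 = implicit_model(mu_r, mu_s)
a1 = (implicit_model(mu_r + sig_r, mu_s) - g0) / sig_r
a2 = (implicit_model(mu_r, mu_s + sig_s) - g0) / sig_s

# safety index of the fitted linear surface for independent normal variables
beta = g0 / math.sqrt((a1 * sig_r) ** 2 + (a2 * sig_s) ** 2)
```

Only three model evaluations are consumed here, which is the attraction of the method when each analysis is a time-consuming nonlinear or time history computation.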
been found. The modeling leads to a failure mode expression for each mode (j) of the form
g_j = \sum_i a_{ji} R_i - \sum_k b_{jk} S_k \qquad (8-5)
Individual collapse modes are correlated through common resistance or load variables in the failure
mode equations. Research (Moses and Stahl, 1978; Moses, 1982; Moses and Rashedi, 1983; Nordal
et al., 1987) has led to inclusion of different element resistance behavior, including ductile, brittle, and
various nonlinear responses. A general method for deriving the g expressions has been developed. It
has two parts. The first is to arrive at a modal expression for any specific failure path. The second part
is to enumerate the significant modes that may affect the overall system reliability.
For small-scale structures and elastoplastic behavior, the identification and enumeration of failure
modes expressed in terms of loads and component resistances are straightforward. Linear programming
methods may be used to arrive at failure mode expressions similar to Eq. (8-5) given above (Rashedi
and Moses, 1986). For structures with more complex nonlinear behavior, techniques similar to the
incremental analyses described in Section 5 are now available.
P_f = \int G(x) \frac{f(x)}{p(x)} \, p(x) \, dx \qquad (8-7)
A sampling density introduced by Fu and Moses (1987, 1988) is known as the weighted general
normal sampling density (WGNSD), and it defines a weighted sum of M subdensities corresponding to
the M significant failure modes that are identified.
p(x) = \sum_{m=1}^{M} w_m p_m(x) \qquad (8-9)

where p_m(x) are normal densities, each with the mth design point x*_m as its mean vector and with the
same covariance matrix as the original density f(x). Weighting factors, w_m, are obtained by solving the
following M linear equations:
\frac{p(x_j^*)}{f(x_j^*)} = \frac{p(x_1^*)}{f(x_1^*)}, \quad j = 2, \ldots, M; \qquad \sum_{m=1}^{M} w_m = 1 \qquad (8-11)
These equations force the sampling density p(x) to have the same ratios at design points as the
original density f(x). This approach is based on the concept that the optimal sampling density is pro-
portional to the original density (Fu and Moses, 1987). Several examples (Fu and Moses, 1988; Fu et
al., 1989; Liu and Moses, 1992a) have shown that the WGNSD is both efficient and accurate for large
systems and also for systems with high modal correlation. Results show that importance sampling with
just 1000 samples gives values of system failure probability comparable to those provided by conven-
tional Monte Carlo simulation using 10,000,000 samples. General rules for using importance sampling
are not possible, however; research continues in this area (Ibrahim and Cornell, 1988; Liu and Moses,
1992a; Karamchandani et al., 1989).
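A one-dimensional sketch of this kind of multimodal importance sampling follows. It is a deliberately simplified stand-in for the WGNSD: the two failure modes of a standard normal variable are invented, and the weighting equations are replaced by the approximation (valid for well-separated design points) that w_m is proportional to f(x*_m):

```python
import math, random

def f_pdf(x):                    # original standard normal density f(x)
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# two invented failure modes of x ~ N(0, 1), design points at the thresholds
b1, b2 = 3.0, -3.5
def G(x):
    return 1.0 if (x > b1 or x < b2) else 0.0

# for well-separated design points the weighting equations reduce
# approximately to w_m proportional to f(x_m*)
w1 = f_pdf(b1) / (f_pdf(b1) + f_pdf(b2))
w2 = 1.0 - w1

def p_pdf(x):                    # mixture sampling density p(x) of Eq. (8-9)
    return w1 * f_pdf(x - b1) + w2 * f_pdf(x - b2)

rng = random.Random(3)
N = 1000
total = 0.0
for _ in range(N):
    mean = b1 if rng.random() < w1 else b2   # draw from the mixture
    x = rng.gauss(mean, 1.0)
    total += G(x) * f_pdf(x) / p_pdf(x)      # weighted indicator, Eq. (8-7)
est = total / N

exact = (1.0 - norm_cdf(b1)) + norm_cdf(b2)  # true P_f for this toy problem
```

With only 1000 samples the estimate lands within a few percent of the exact value near 1.6 × 10⁻³, whereas direct Monte Carlo at that probability level would need millions of samples for comparable accuracy.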
5. INCREMENTAL LOADING MODEL

On the basis of the above discussion, a general system formulation requires models to recognize different
geometries and redundancies, and varying types of component postfailure stiffness. An incremental
loading model has been developed (Moses, 1982; Gorman, 1985; Moses and Stahl, 1978; Moses and
Rashedi, 1983; Nordal et al., 1987) in order to investigate failure modes for such structures. These
incremental loading models utilize existing structure analyses and at each load level permit some com-
ponent to change stiffness, often a characteristic of a component failure mode. The stiffness analysis is
continually updated in this approach. Since the component resistances are random variables, the se-
quence of component failures cannot be predicted. Figure 8-6a shows the load response for a structure
with ductile members; Fig. 8-6b is for a structure with some brittle members. In the latter case, each
peak shown corresponds to a separate g function. This leads to a fault tree model in which failure
sequences branch out. An example (Moses and Rashedi, 1983) is shown in Fig. 8-7.
For a large framework, the potential number of modes or branches can be enormous and therefore
some logical rules for selecting paths must be prescribed. Each path leads to another failure expression,
gj. These expressions can be obtained by summing load increments in terms of member resistances.
The steps in enumerating the system modes are as follows.
Figure 8-6. Load response models for (a) ductile and (b) brittle systems.
[1985] have also utilized branch and bound methods for selecting probabilistically significant modes and
applied these techniques to structures that exhibit the formation of plastic hinges.)
3. After selecting a component for a failure sequence, the stiffness matrix should be altered to reflect that
failure.
4. Following the new load redistribution an additional component should be selected for the failure path.
5. This process is continued until the system fails either by reaching a maximum load level or else by attaining
such large displacements that the structure is no longer serviceable.
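The incremental steps above can be sketched for a toy equal-load-sharing brittle bundle, in which "altering the stiffness matrix" reduces to deleting the failed member and redistributing the load; repeating the procedure with sampled resistances would generate the branches of the fault tree:

```python
def incremental_failure_path(strengths):
    """Incremental loading sketch for an equal-load-sharing bundle of
    brittle members: raise the load until the most utilized member hits
    its capacity, delete it from the (here trivial) stiffness model,
    redistribute, and repeat until no members remain (collapse)."""
    intact = dict(enumerate(strengths))
    path, capacity = [], 0.0
    while intact:
        n = len(intact)
        weakest = min(intact, key=intact.get)
        level = intact[weakest] * n     # total load triggering that failure
        capacity = max(capacity, level) # best load level sustained so far
        path.append(weakest)
        del intact[weakest]             # brittle: member drops out entirely
    return path, capacity

path, cap = incremental_failure_path([8.0, 10.0, 12.0])  # invented strengths
```

With deterministic strengths a single sequence results; with random resistances the ordering changes from sample to sample, which is exactly why the fault tree of Fig. 8-7 branches.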
The incremental analysis must be repeated to obtain other failure sequences as illustrated in the fault
tree given in Fig. 8-7. These fault tree procedures may be aided by several considerations. The structural
analysis following a component failure may be extrapolated, using the initial intact analysis, and a
reanalysis algorithm introduced similar to those used in structural optimization routines. Also, often two
or three failed components in a path may be sufficiently accurate because there is deterioration of the
stiffness matrix leading to large displacements.
Incremental analysis models have been applied, especially to large offshore platform structures (Nor-
dal et al., 1987). Also, the models have been extended to include a variety of component behavior,
including strain-softening and work-hardening members. Other applications have been reported for
bridge structures (Liu and Moses, 1991).
[Figure 8-7: fault tree of failure sequences (Moses and Rashedi, 1983), each branch yielding a failure mode expression of the form of Eq. (8-5), such as g = 0.667R_3 + 0.555R_8 - 1.0S_1, etc.]
structures for consideration. For example, the Ontario Highway Bridge Design Code (Ontario Ministry
of Transportation and Communications, 1983) states that structures should have multiple load paths.
No criteria are given, however, for this definition, nor is it explained how tied arches, single-column
supports, or even suspension bridges can be built using these criteria. Other codes, such as those of
Switzerland or several other European countries, discuss "hazard scenarios." That is, an agreement is
made between the owner and designer to explore and mitigate, when necessary, possible failure scenarios
beginning with some component failure that may then cascade into a collapse mode.
Such code approaches, however, are mostly subjective without a strong analytical base. For example,
the rod holding the walkway that subsequently collapsed in the Hyatt Hotel in Kansas City was clearly
a single load path system. Most engineers would view a parallel stringer highway bridge as having
multiple load paths. Yet, if this stringer system were optimized under the extreme design load case,
then in fact there may not be any redundancy in the sense of additional capacity to withstand additional
extreme overloads.
Simply counting load paths as suggested in some codes does not always ensure redundancy. In many
structure designs it has been observed that redundancy is not the result of deliberate design decisions
or rules but rather a consequence of other causes. For example, fabrication requirements may dictate
180 Probabilistic Analysis of Structural Systems
uniform member sizes, or a symmetric load pattern may be used. Thus, reserve strength is built into
many structures but without any quantification or analysis.
One criterion proposed for both offshore and highway bridge systems is a "systems factor" in the
design checking (Ghosn and Moses, 1992). Typically, a component would be checked by the equation

\phi_s \phi_r R_n \ge \gamma_d D + \gamma_L L \qquad (8-12)

Equation (8-12) represents the typical load and resistance factor design (LRFD) format and notation
now found in most North American and European codes (i.e., φ_r is the resistance factor, γ_d and γ_L are
load factors for the dead and live loads D and L, respectively, and R_n is the nominal resistance). φ_s is
added here as a system factor. The system factor must be calibrated by code writers after examining
representative structure configurations. In work being done (Moses et al., 1993) for highway structures,
the system factor is based on both the ductility of a member and the structure geometry.
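A checking equation of the form of Eq. (8-12) is simple to express in code; the factor values below are illustrative placeholders, not values taken from any particular code:

```python
def lrfd_check(r_n, d, l, phi_r, phi_s=1.0, gam_d=1.25, gam_l=1.75):
    """Eq. (8-12): phi_s * phi_r * R_n >= gam_d * D + gam_l * L.
    All factor values here are placeholders for illustration only."""
    return phi_s * phi_r * r_n >= gam_d * d + gam_l * l

# a system factor below 1.0 penalizes a nonredundant configuration:
ok_redundant = lrfd_check(r_n=400.0, d=100.0, l=120.0, phi_r=0.9, phi_s=1.0)
ok_nonredundant = lrfd_check(r_n=400.0, d=100.0, l=120.0, phi_r=0.9, phi_s=0.85)
```

In this illustration the same member passes the check when the system factor is 1.0 but fails it at 0.85, forcing the designer of the nonredundant configuration to provide a larger section.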
Calibration of system factors can be done by comparing system reliability indices (β values) com-
puted for representative structures. A target may be selected so that the system β exceeds the member
β by some desired margin. By reviewing different representative designs, system factors can be intro-
duced that depend on member ductility and member geometry, for example, number of load paths. It
is obviously difficult to generalize such design system factors.
Alternatively, it has been suggested that system factors based on a deterministic nonlinear structure
analysis be used. Member strengths are set equal to their mean values and the loading scaled until the
structure collapses. The deterministic ratio of the collapse load divided by the load at which the first
member fails is some measure of the system overload capacity. This ratio can be further correlated to
system β by using the load COV and typical strength COV. This procedure, which in actuality identifies
only a single collapse mode, is accurate only if load COV is greater than strength COV.
The simplified approaches to introducing system target reliability with a system factor applied to
member design checking have some limitations. The first is economic, because the cost of increasing
strength and reliability of some components, such as connections, is usually much lower than increasing
main member betas. The second limitation is that only overload conditions are usually considered in
the member design checking. In fact, as discussed further below, the major concern for system failure
is the consequence of an accident scenario. That is, some component(s) are damaged as a result of fire,
fatigue, or other causes and the structure must still be safe against collapse. In such cases, the system
redistribution is of major concern and generalizations using component system factors may be difficult.
system requires a high member resistance to crushing, which in turn depends on reinforcement ratio,
confinement steel, etc.
Similar conclusions were also reported (Zettlemoyer, 1988) for an analysis of behavior of members
and joints in offshore structures. The response patterns that follow a member reaching a failure limit
state can be difficult to predict and depend on the relative ratios of axial load and moment about the
two axes. Such behavior is important in determining whether a structure will act as a weakest-link
system or as a parallel fail-safe system.
6.3. Optimization
Optimization is discussed in Chapter 16 of this handbook. It suffices to note here some of the criteria
that may be invoked for finding an "optimum" structure using system reliability models. The conven-
tional optimization is to select member sizes to minimize weight or cost while satisfying a constraint
such that the system reliability is above some target level. Alternatively, the weight can be fixed and
the reliability maximized. Other types of problems include the selection of an optimum geometry; that
is, what level of redundancy, chosen through topological variables, will optimize the cost and satisfy
the system reliability constraint?
Another major ingredient in system reliability optimization is to consider simultaneously both over-
load situations and accident scenarios. That is, find the "best" structure, in terms of member topology
and member sizes, that achieves both a high reliability against overload as well as acceptable reliabilities
given an accident scenario has been initiated. Some of these examples are discussed below.
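As a toy version of the first formulation (invented statistics; Chapter 16 treats the real problem), the smallest member size scale meeting a system reliability target, with weight taken as proportional to that scale, can be found by a simple search:

```python
import math

def system_beta(scale, n=4, mu_r=10.0, sig_r=1.5, mu_s=30.0, sig_s=4.0):
    # ductile parallel system with invented statistics: R is the sum of n
    # scaled member strengths, so the margin R - S is normal
    mu = n * scale * mu_r - mu_s
    sig = math.sqrt(n * (scale * sig_r) ** 2 + sig_s ** 2)
    return mu / sig

def minimum_weight(target_beta=3.5, step=0.01):
    """Smallest size scale (taken as proportional to weight) whose system
    reliability index meets the target, found by grid search; system_beta
    is monotone increasing in the scale, so the first hit is the minimum."""
    scale = step
    while system_beta(scale) < target_beta:
        scale += step
    return scale

scale = minimum_weight()
```

A realistic formulation would size members individually, include connection costs, and add constraints for damaged (accident) conditions, but the structure of the problem, minimize weight subject to a system β constraint, is the same.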
6.4. Inspection
It has often been stated that a major source of structural failures is human errors and conditions not
normally considered in the design calculations. To mitigate such sources of failure requires consideration
of quality assurance (QA) and quality control (QC) procedures. In production systems, QA/QC methods
typically prescribe sampling techniques that balance the costs of testing with the likelihood of error
disclosure. Structural QA/QC is more complex than in production systems because of the great variety
of consequences associated with errors in analysis, fabrication, and construction.
A potential major area for utilizing system models is in the allocation of resources for quality
assurance and inspection. The incremental loading model discussed in Section 5 can guide such resource
allocation strategies because it identifies the consequence of member failures conditional on an individ-
ual member failure. The resulting fault tree in the incremental model can be used to identify failure
paths that significantly affect the system reliability. The component failures that initiate such failure
paths are likely candidates for higher degrees of quality control and inspection during the design process
(checking of calculations) and during the subsequent fabrication and even material testing.
Examples of such studies on several frameworks have been presented by Rashedi and Moses (1988).
For given load cases, the results identified components whose failure caused a significant reduction in
residual system β values.
6.5. Evaluation
Two of the major areas of application of system reliability have been with respect to offshore plat-
forms and highway bridges. One of the motivating factors has been the continuing requirement for
evaluation of existing designs during their service life. Reasons for reevaluation include the following.
1. Changes in design requirements make it appear that older designs may not be adequate. That is, elements
and components show utilization ratios exceeding 1.0 when checked by new standards. Often system ca-
pacity is brought into the analysis to show the structure is adequate.
2. Deterioration of the structure may have caused some reduction in member capacities. This situation also
requires that full-system capacities be considered.
3. Changes in structure use may lead to increased capacity needs. Strengthening is often not economically feasible.
4. Changes in safety criteria may lead to an investigation of system capabilities. For example, structures are
now required to be redundant to withstand possible accident scenarios beyond anything considered in the
original design.
The added attention being paid to the infrastructure in the United States often leads to requirements
for the evaluation of existing structures and not just attention to replacement structures. The most
noteworthy example is probably the highway bridge system, in which each of the more than 600,000
highway structures in the United States is supposed to be inspected and load rated every 2 years.
Significant numbers of deficient structures have been revealed in various published surveys, due pri-
marily to the higher bridge design loads now in existence and also to the levels of deterioration expe-
rienced by existing structures as a result of environmental and other causes of decay.
Redundancy and system considerations may be directly involved in such evaluations, especially when
fatigue lives are being checked. The AASHTO provisions do distinguish between single and multiple
load paths in selecting allowable fatigue stress levels. Because the new codes also strongly restrict such
single load path systems, there have been a number of instances in which existing single load path
structures have been recommended for replacement, regardless of loading analysis. The fatigue life issue
was studied (Moses et al., 1990), which led to AASHTO (1990) specifications for evaluating remaining
safe lives. Typical stringer bridges were examined and target reliability indices (β values) were
prescribed for single load path (around 3.5) and for multiple load path (about 2). These member checks
for fatigue lead to similar levels of system risk against failure.
A further study (Verma and Moses, 1989) considered the strength evaluation of existing steel and
concrete bridges. This also led to an AASHTO Guide specification that may be used for the biannual
ratings that are needed. It uses a load and resistance factor format. Target β values for components were
set at about 3.5 for nonredundant single load path systems and at about 2.5 for redundant structures. The
latter are typically parallel stringer bridges. These have sufficient reserve capacity because the members
have the same strength and not all members are simultaneously loaded by the same load condition. Hence,
reserve capacity is always present and system reliability exceeds member reliability levels.
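The target β values above can be translated into notional failure probabilities through the standard normal distribution, Pf = Φ(−β). A minimal sketch (the mapping is standard; the labels are only illustrative):

```python
from statistics import NormalDist

def failure_probability(beta: float) -> float:
    """Notional failure probability implied by a reliability index: Pf = Phi(-beta)."""
    return NormalDist().cdf(-beta)

# Target indices cited in the text for bridge evaluation:
for label, beta in [("nonredundant single load path (beta = 3.5)", 3.5),
                    ("redundant structure (beta = 2.5)", 2.5)]:
    print(f"{label}: Pf = {failure_probability(beta):.1e}")
```

Note how a one-unit change in β shifts the notional failure probability by more than an order of magnitude, which is why target β values are a convenient calibration currency.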
Typical spans and load statistics were used to calibrate the corresponding load and resistance factors
for the evaluation ratings. A system factor is implicit in these evaluations. These factors are not nec-
essarily the same values used in the AASHTO LRFD bridge design specifications. This is because of
different load statistics appropriate for evaluation (2-year exposure) compared to lifetime statistics for
design, and also because of additional information gained for evaluation from a field inspection of the
as-built structure. Economics also plays a role because the cost of increasing section sizes in a new
design is marginally much lower than the cost of strengthening or replacing existing deficient structures.
Further intensive study of evaluation techniques is underway for offshore marine structures. This is
because of the aging population of such structures and also because of changes in design and safety
criteria. In many individual project analyses, system reserves have been introduced to add to the overall
reliability to show acceptance.
In the cases examined, the weight of the structure needed to reach a target system β is lower for the
nonredundant diagonal-braced structure than for the redundant X-braced structure. If, however, accident scenarios are
considered then a different situation arises. Bridge accidents arise, for example, as a result of collision
of oversized vehicles (believed by some to be the major source of structural failures in bridges), fatigue
brittle fracture, and corrosion. In offshore structures, accident scenarios in members have been reported
to result from fatigue as well, but also from boat collision, fire, and especially dropped objects.
The system reliability remaining after such an accident event has been termed the residual reliability
(Liu and Moses, 1991). It should incorporate the exposure period in the load model corresponding to
the likely time between the occurrence of the accident and detection (which may mitigate loss of life
and property) and repairs that would restore the original strength. The residual system reliability is
different from the reserve reliability, which corresponds to the system reliability for the intact structure
and lifetime exposure period. It represents the overload situation normally encountered in design
practice.
For statically determinate structures, the residual reliability may be zero, meaning no strength exists
after an accident. The use of these different reliabilities is the best way of selecting optimum geometries
and levels of redundancy. Some examples are described in the next section. In general, the system
failure probability should be approximated as

Pf(system) ≈ Pf(overload-intact system) + Σ P(accident event) × Pf(residual reliability)    (8-13)

where Pf(overload-intact system) is the risk usually developed from a system reliability analysis of the undamaged
structure, Pf(residual reliability) is the risk given that the accident event has occurred, and P(accident event) is the
corresponding probability of occurrence of the accident. The summation should be made over all possible
accident scenarios. The approximation in this system expression in Eq. (8-13) is due to the correlation
of failure events that may arise in which the same member sequences may occur in different accident
scenarios as well as in the intact system.
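Eq. (8-13) can be sketched directly as a sum over accident scenarios. The numerical values below are hypothetical, chosen only to illustrate the bookkeeping:

```python
def system_failure_probability(pf_intact, accident_scenarios):
    """Eq. (8-13) as a sum: intact-system overload risk plus, over all accident
    scenarios, P(accident event) times the conditional residual failure risk."""
    return pf_intact + sum(p_acc * pf_res for p_acc, pf_res in accident_scenarios)

# Hypothetical numbers, for illustration only:
scenarios = [(1e-3, 0.02),   # e.g., boat collision: (P(accident), Pf(residual))
             (5e-4, 0.30)]   # e.g., dropped object
pf_system = system_failure_probability(1e-5, scenarios)   # -> 1.8e-4
```

Even with a very reliable intact structure, a single poorly surviving accident scenario can dominate the total, which is the point of carrying residual reliability separately.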
On the basis of examples to date the calculation of this overall system failure probability is straight-
forward. The major obstacle in its implementation is in obtaining sufficient data on (1) the occurrence
of accident events and their possible severity (e.g., accidents could involve only partial damage rather
than total failure of members), and (2) the exposure periods between accident, detection, and repair.
Failure probability is greatly influenced by the exposure period when dealing with extreme environ-
mental events.
Despite these data limitations, this formulation of system risk has potential for isolating the most
important objectives that most people consider when they say risk, safety, and redundancy.
Figure 8-8. Two-dimensional platform framework with X-bracing and horizontal members.
Figure 8-9. Two-dimensional platform framework with X-bracing (without horizontal members).
7. CONCLUDING REMARKS
System reliability plays an important role in expressing the goals of structure safety. Codes based on
member design factors such as the newly published LRFD or partial factor limit state codes relate only
to members and components. These codes need to be augmented by considerations of the system
consequences of member failure.
The formulation of system models must account for the potentially large number of failure modes,
the statistical correlation between loadings and between member strengths, performance of members
after reaching their limit state condition (e.g., ductile or brittle), and geometry. Merely counting parallel
load paths can lead in some cases to unconservative assessments and in other cases to unrealistic
requirements.
Formulation of system models through fault tree searches is feasible even for large structures, such
as offshore platforms. The main limitation, as always in reliability applications, is sufficient data, on
the one hand (such as correlation values), and accurate member postfailure performance predictions on
the other.
System reliability may be used to formulate partial system design factors for generic types of struc-
tures such as multistringer bridges. It can also be used to evaluate specific geometries and design
capacities of structures. Its greatest potential, however, may be in resolving issues related to accident
scenarios. Most failures are due to such unintended usages, errors, or omissions and are not due to
design loading exceeding resistance. Increasingly, codes are recommending that engineers review the
potential consequences of hazards in terms of overall damage to the structure. System reliability presents
a tool for identifying which scenarios are important and should lead to design changes.
Further development is needed to make system models more accurate and consistent and also more
accessible to designers for making risk-benefit tradeoffs.
REFERENCES
AASHTO (1990). Guide Specifications for Safe Life Assessment of Steel Bridges. Washington, D.C.: American Association of State Highway and Transportation Officials.
Baker, M. J., and P. C. Turner (1992). A unified solution approach to structural system reliability analysis. In: Proceedings of the Sixth Conference on Probabilistic Mechanics and Structural Reliability. New York: American Society of Civil Engineers.
Cornell, C. A. (1967). Bounds on the reliability of structural systems. Journal of the Structural Division, ASCE 93(1):171-200.
Ditlevsen, O. (1979). Narrow reliability bounds for structural systems. Journal of Structural Mechanics 7:435-451.
Fu, G. K., and F. Moses (1987). A sampling distribution for system reliability assessment. In: Proceedings of the First IFIP Conference on Reliability and Optimization of Structural Systems. P. Thoft-Christensen, Ed. Berlin, Germany: Springer-Verlag.
Fu, G. K., and F. Moses (1988). Importance sampling methods in structural system reliability. In: Proceedings of the Fifth ASCE Specialty Conference on Probabilistic Mechanics and Structural Reliability. P. Spanos, Ed. New York: American Society of Civil Engineers.
Fu, G. K., D. Verma, and F. Moses (1989). Advanced simulation methods in system reliability. In: Computational Mechanics of Probabilistic and Reliability Analysis. W. K. Liu and T. Belytschko, Eds. Lausanne, Switzerland: Elme Press International.
Ghosn, M., and F. Moses (1992). Calibration of redundancy factors for highway bridges. In: Proceedings of the Sixth ASCE Specialty Conference on Probabilistic Mechanics and Structural Reliability. New York: American Society of Civil Engineers.
Gorman, M. R. (1985). Resistance modeling. In: Short Course on Structural Reliability of Offshore Platforms. New York: American Society of Civil Engineers.
Ibrahim, Y., and C. A. Cornell (1988). Experiences with applications of importance sampling in structural reliability. In: Proceedings of the Fifth ASCE Specialty Conference on Probabilistic Mechanics and Structural Reliability. New York: American Society of Civil Engineers.
Karamchandani, A., P. Bjerager, and C. A. Cornell (1989). Adaptive importance sampling. In: Proceedings of the International Conference on Structural Safety and Reliability (ICOSSAR). New York: American Society of Civil Engineers.
Liu, Y. W., and F. Moses (1991). Bridge design with reserve and residual reliability constraints. Journal of Structural Safety 11:29-42.
Liu, Y. W., and F. Moses (1992a). Use of importance sampling constraints in system optimization. In: Proceedings of the Sixth ASCE Specialty Conference on Probabilistic Mechanics and Structural Reliability. New York: American Society of Civil Engineers.
Liu, Y. W., and F. Moses (1992b). Truss optimization including reserve and residual reliability constraints. Computers and Structures 42(3).
Moses, F. (1982). System reliability applications in structural engineering. Journal of Structural Safety 1(1):3-13.
Moses, F. (1990). New directions and research needs in system reliability research. Journal of Structural Safety 7:93-100.
Moses, F., and Y. W. Liu (1992). Methods of redundancy analysis for offshore platforms. In: Proceedings of the Offshore Mechanics and Arctic Engineering Symposium. New York: American Society of Civil Engineers.
Moses, F., and M. R. Rashedi (1983). The application of system reliability to structural safety. In: Proceedings of the Fourth International Conference on Applications of Statistics and Probability in Soil and Structural Engineering (Florence, Italy). Bologna, Italy: Pitagora Editrice.
Moses, F., and B. Stahl (1978). Reliability analysis format for offshore structures. Proceedings of the Offshore Technology Conference (Houston, Texas). Dallas, Texas: Offshore Technology Conference Publications.
Moses, F., S. Raju, and C. Schilling (1990). Reliability calibration of fatigue design and evaluation procedures. Journal of Structural Engineering, ASCE 116:1356-1369.
Moses, F., N. Khedekar, and M. Ghosn (1993). System reliability of redundant structures using response functions. In: Proceedings of the International Conference on Structural Safety and Reliability (ICOSSAR).
Murotsu, Y., H. Okada, S. Matsuzaki, and S. Katsura (1985). Reliability assessment of marine structures. In: Proceedings of the Offshore Mechanics and Arctic Engineering Symposium. New York: American Society of Civil Engineers.
Nordal, H., C. A. Cornell, and A. Karamchandani (1987). A structural system reliability case study of an eight-leg steel jacket offshore production platform. In: Proceedings of the Marine Structural Reliability Symposium, Arlington, Virginia.
Ontario Ministry of Transportation and Communications (1983). Ontario Highway Bridge Design Code. Downsview, Ontario, Canada: Ontario Ministry of Transportation and Communications.
Ramachandran, K. (1985). New reliability bounds for series systems. In: Proceedings of the International Conference on Structural Safety and Reliability (ICOSSAR) (Kobe, Japan). I. Konishi, A. H.-S. Ang, and M. Shinozuka, Eds. New York: International Association for Structural Safety and Reliability.
Rashedi, M. R., and F. Moses (1986). Applications of linear programming to structural system reliability. Computers and Structures 24(2).
Rashedi, M. R., and F. Moses (1988). Identification of failure modes in system reliability. Journal of Structural Engineering, ASCE 114(7).
Verma, D., and F. Moses (1989). Calibration of a bridge strength evaluation code. Journal of Structural Engineering, ASCE 115(6).
Zettlemoyer, N. (1988). Developments in ultimate strength technology for simple tubular joints. In: Proceedings of the UEG Conference, OTJ.
9
PROBABILISTIC STRUCTURAL
MECHANICS IN SYSTEM AND
PLANT RISK ASSESSMENT
1. INTRODUCTION
Structures in industrial facilities such as power, chemical, and manufacturing plants as well as in com-
plex engineered products such as aircraft, ships, and space vehicles interface and interact with other
mechanical, electrical, and electronic components that, together with the structures, form the engineering
systems. 1 Structural failures could affect the system performance as well as the performance of the
nonstructural components that interface with the structures. Similarly, nonstructural component failures
could create excessive loads on the structures and thus adversely affect their performance. Thus struc-
tural and nonstructural components affect each other's performance as well as the system performance.
A comprehensive system reliability and risk assessment2 should therefore consider structural reliabilities
as part of the evaluation.
Moreover, in complex systems with redundancies, failure of a single structure may not necessarily
produce any system malfunction. Two or more structural failures or a structural failure and one or more
nonstructural component failures may be needed to produce system malfunction. Structural failures and
their probabilities are therefore best considered within the totality of the system, and not as isolated
incidences. Reductions in structural failure probabilities and the benefits of such reductions should be
considered within the context of the system as a whole.
System reliability and risk assessment is usually performed by system reliability engineers and the
necessary structural reliability data are provided by structural reliability engineers. Although structural
reliability engineers do not usually perform the system reliability analysis, a basic knowledge of the
fundamentals of system reliability analysis and how structural reliability information is integrated into
system reliability analysis will greatly improve the interaction and communication between structural
1Structural systems are discussed in Chapter 8. Here we discuss engineering systems. A structure consisting of a number of
structural components (structural elements) is called a structural system, for example, a truss or a frame. Engineering systems
are different from structural systems. An engineering system may consist of structures (or structural components) as well as
nonstructural components such as mechanical, electrical, and electronic equipment.
2System reliability and risk assessment is sometimes referred to as probabilistic risk assessment (PRA) or quantitative risk
assessment (QRA) in the literature.
reliability engineers and system reliability engineers. Also, a basic knowledge of system reliability
analysis would make structural engineers look at structural reliabilities within the overall context of
systems, and that is a healthy perspective.
The interaction between structural reliability engineers and system reliability engineers should not
be viewed or treated as a mere exchange of data. They may, and should, engage in fruitful dialog
about how structural reliabilities affect the system reliability and how improvements in selected struc-
tural reliabilities could improve the latter. Practicality and cost of improving structural reliabilities may
be discussed and, together, the structural and system engineers may be able to design optimal structures
and systems that provide the best system performance at the least cost.
This chapter provides an introduction to system reliability and risk assessment methods and discusses
some applications in which structural reliabilities are used in system reliability and risk assessment.
2.1. Notations
F Ordinate of the system fragility curve (or undesired event fragility curve)
f Ordinate of the structural fragility curve
H Ordinate of the hazard curve
Ik Vesely-Fussell measure of importance of the kth basic event with respect to system risk
I′k Vesely-Fussell measure of importance of the kth component with respect to system risk
ik Vesely-Fussell measure of importance of the kth basic event with respect to the top event
i′k Vesely-Fussell measure of importance of the kth component with respect to the top event
PI Probability of the initiating event
Pj Probability of the jth minimal cut set
PT Probability of the top event
PH Probability of the undesired event due to hurricanes
Pk Probability of the kth basic event
R Total risk
Rn Risk due to the nth undesired event
V Hurricane wind speed
2.2. Abbreviations
Some of the widely used methods of system reliability and risk assessment, namely the failure modes
and effects analysis, fault tree analysis, and event tree analysis, are discussed briefly in this section.
Other methods, not discussed here, include the reliability block diagram approach (Green and Bourne,
1972) and the GO method (Gateley et al., 1968).
3More detailed discussion on failure modes and effects analysis may be found in Sundararajan (1991).
Table 9-1. Sample Entries in a Failure Modes and Effects Analysis Sheet
Component | Failure mode | Cause of failure | Annual failure probability | Effect of failure | Criticality of effect
of the building; equipment in the vicinity of the wall could be damaged; injury and death possible;
plant may have to be shut down for a week or more."
Criticality of effects: Criticality may be recorded in a number of ways, depending on the information
available about the effects of the failure. If no quantitative information is available about property
damage and fatalities, a qualitative ranking system such as the following may be used.
I-Insignificant: very little property damage and very little effect on the system; no injuries or fatalities
II-Minor: some property damage and/or some effect on system functions and reliability; no injuries or
fatalities
III-Major: significant property damage and/or a significant effect on system functions and reliability; no or minor
injuries; no fatalities
IV-Critical: fatalities and/or major injuries; there may or may not be property damage; system functions and
reliability may or may not be affected
One may establish other schemes of criticality ranking and use them consistently in a project. If
quantitative information about the effects of the failure is available, that information may be noted in
terms of property damage, injuries, and fatalities. Indirect damage such as pollution should be stated in
dollar amounts or some other measure. Property damage and indirect damage such as pollution may be
lumped together as economic loss.
In addition to the above six items, some FMEA sheets include columns for failure detection (how
the failure will first become apparent to operating personnel), safety features (provisions built into the
system that would reduce the failure probability or mitigate the effects of the failure), and remarks.
A sample FMEA sheet is given in Table 9-1. Some other examples of FMEA sheets may be found
in Gangadharan et al. (1975) and the American Society of Mechanical Engineers (1991).
Risk of a component failure is given by the product of its failure probability and the consequences
(economic loss and/or fatalities). Risk may be stated as, for example, $10,000 and 10⁻¹ fatalities per
year.
If (1) all the component failures are included in the FMEA, (2) all system-level failures are caused
by single-component failures (i.e., there is no possibility of system-level failures by combinations of
two or more component failures), and (3) the component failures are statistically independent of each
other, then the total probability of system-level failures is given by the sum of the individual component
failure probabilities and the system risk is given by the sum of the individual component risks.
Component failures may be ranked according to their contribution to the system failure probability
or according to their contribution to the risk. These two rankings may not necessarily be the same.
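Under the three assumptions above, the system totals and the two rankings reduce to simple sums and sorts over the FMEA rows. A sketch, with entirely hypothetical entries:

```python
# Each FMEA row: (component, annual failure probability, economic loss ($), fatalities).
# The entries are hypothetical, for illustration only.
fmea_rows = [
    ("masonry wall",  1.0e-4, 500_000, 0.5),
    ("support frame", 2.0e-5, 100_000, 0.1),
]

# Under the three assumptions in the text (all failures listed, single-failure
# causes only, statistical independence), system totals are simple sums:
p_system_failure = sum(p for _, p, _, _ in fmea_rows)            # 1.2e-4 per year
risk_dollars     = sum(p * loss for _, p, loss, _ in fmea_rows)  # $52 per year
risk_fatalities  = sum(p * fat for _, p, _, fat in fmea_rows)    # 5.2e-5 per year

# The two rankings need not agree: by failure probability vs. by risk contribution.
rank_by_probability = sorted(fmea_rows, key=lambda r: r[1], reverse=True)
rank_by_dollar_risk = sorted(fmea_rows, key=lambda r: r[1] * r[2], reverse=True)
```

With these numbers the two rankings happen to coincide; a cheap but frequent failure and a rare but catastrophic one would pull them apart.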
Failure modes and effects analysis is a forward logic approach because it progresses from component
failures to system failures and consequences. It is also an inductive approach because it induces the
effects of component failures.
A method known as the hazard operability method (HAZOP method) has been used in the process
industry and also by the U.S. Department of Energy. This method is similar to FMECA and readers are
referred to Lee (1980) for further details.
3.3.1. Fault tree construction. The basic concepts of fault tree construction may best be explained
through the following simple example. A system diagram for the operation of an electric motor is shown
in Fig. 9-1. We are interested in constructing a fault tree for the system-level undesired event, 'motor
overheats.'
First, we place the undesired event at the top of the tree, within a rectangle. This is the top event of
the tree. We ask the question, "How can the motor overheat?" The motor can overheat either because of
(1) an internal malfunction of the motor, or because of (2) excessive current supplied to the motor.
These two events are therefore placed under the top event and connected by an OR gate. 'Motor
malfunction' is a basic failure for which we have the failure probability. We place all basic failures
within circles and they are called basic events.
We treat the event 'excessive current to motor' as an intermediate event and expand it further.
(Intermediate events are placed within rectangles.) The motor may receive excessive current if (1) the
fuse fails closed, and (2) there is excessive current in the circuit. These two events are therefore placed
under the intermediate event and connected by an AND gate. Fuse failure (fuse fails closed) is a basic
failure. It is therefore placed in a circle. 'Excessive current in circuit' is an intermediate event and we
expand it further.
Excessive current in the circuit is either due to (1) a short circuit in the wiring, or due to (2) a power
surge in the power supply. We treat both these events as basic events and place them in circles. They
are connected by an OR gate. This completes the fault tree. The tree is shown in Fig. 9-2.
This simple example brings out the basic logic behind fault tree construction and introduces basic
events, intermediate events, top events, AND gates, and OR gates. There are many other types of events
and gates. Complex fault trees may contain hundreds of basic events and gates, and could be many
pages long. Computer programs are available for fault tree construction.
A system could have more than one undesired event; for example, (1) system malfunction resulting
in its shutdown, (2) explosion, (3) fire, and (4) poisonous gas leak, etc. A separate fault tree may be
constructed for each undesired event.
3.3.2. Qualitative fault tree analysis. Qualitative fault tree analysis consists of the determination
of minimal cut sets and/or minimal path sets. Each combination of basic events that is sufficient and
necessary to cause the top event to occur is called a minimal cut set. A fault tree may have a number
of minimal cut sets. Each minimal cut set may contain one or more basic events.
There are three minimal cut sets for the example fault tree shown in Fig. 9-2. They are
'More detailed discussion on fault tree analysis, including a list of computer programs, may be found in Sundararajan (1991).
Figure 9-1. System diagram for the electric motor: power supply, switch, fuse, and wire.
1. 'motor malfunction'
2. 'short circuit in wiring' AND 'fuse fails closed'
3. 'power surge' AND 'fuse fails closed'
We are able to deduce these minimal cut sets just by examining the fault tree. This is not possible for
large fault trees. Some complex fault trees may have hundreds of minimal cut sets. Formal methods
and computer programs are available for minimal cut set determination.
Figure 9-2. Fault tree for the top event 'motor overheats.'
M2 = (B4, B7, B8)

which means that the second minimal cut set M2 contains the basic events B4, B7, and B8. In other
words, the second minimal cut set represents the combination of basic events B4, B7, and B8. That is,
occurrence of basic events B4, B7, and B8 would result in the top event of the fault tree. These three
basic events are called the elements of minimal cut set M2.
Minimal cut sets are more widely used than minimal path sets and so we do not discuss minimal
path sets here.
3.3.3. Quantitative fault tree analysis. Probability of the top event is computed during quanti-
tative fault tree analysis. Statistical dependencies between basic events, if any, should be considered
during fault tree quantification. The top event probability may be computed directly from the fault tree
or through the minimal cut sets. Computer programs are available for both approaches. If the top event
probability has to be computed repeatedly with different sets of basic event probabilities (parametric
studies), then quantitative analysis through minimal cut sets is more economical.
Risk due to an undesired event (top event) is given by the product of the undesired event probability
and its consequences (economic losses and fatalities). If the system has more than one undesired event
associated with it, the total system risk is equal to the sum of the risks over all the undesired events.
This summation procedure is valid if two or more undesired events do not occur at the same time. Such
an assumption may be made if the probability of two or more undesired events occurring at the same
time is very small compared to the probability of each undesired event. Throughout this chapter we
make this assumption.
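Quantification through minimal cut sets can be sketched for the motor example: each cut set probability is the product of its basic-event probabilities (independence assumed), and the top event probability is approximated by their sum (a first-order, rare-event approximation, not the exact union probability). The basic-event probabilities below are hypothetical:

```python
from math import prod

def cut_set_probability(cut_set, p):
    """Probability of a minimal cut set: the product of its basic-event
    probabilities (statistical independence assumed)."""
    return prod(p[e] for e in cut_set)

def top_event_probability(cut_sets, p):
    """First-order (rare-event) approximation: sum of the minimal cut set
    probabilities."""
    return sum(cut_set_probability(c, p) for c in cut_sets)

# Minimal cut sets of the motor example (Fig. 9-2); probabilities are hypothetical:
p = {"motor malfunction": 1e-4, "fuse fails closed": 1e-3,
     "short circuit in wiring": 5e-3, "power surge": 1e-2}
cut_sets = [("motor malfunction",),
            ("short circuit in wiring", "fuse fails closed"),
            ("power surge", "fuse fails closed")]
p_top = top_event_probability(cut_sets, p)   # 1e-4 + 5e-6 + 1e-5 = 1.15e-4
```

The single-event cut set dominates here, which anticipates the importance rankings of the next section.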
3.3.4. Importance ranking. Fault tree analysis results may be used to rank the basic events ac-
cording to their importance in causing the top event. A number of importance measures are available.
1. Vesely-Fussell measure
2. Birnbaum measure
3. Criticality measure
4. Upgrading function measure
5. Barlow-Proschan measure
6. Sequential contributory measure
Each measure has its own benefits and limitations and each measure is well suited for some types
of applications. The Vesely-Fussell measure is used widely and we limit our discussion to this measure
only.5 Details on other measures may be found in Lambert (1975) and Vesely et al. (1983). Computer
programs are available to compute these measures and rank the basic events according to these measures.
Basic event ranking according to one measure may not necessarily be the same as the ranking according
to another measure.
First, let us discuss the Vesely-Fussell measure (V-F measure) of importance of basic events with
respect to the top event probability of the fault tree. (Equations presented in this chapter for the V-F
measure of importance assume statistical independence between basic events.) The V-F measure for
the kth basic event with respect to top event probability is given by (Lambert, 1975)

ik = (Σj Pj) / PT    (9-1)
where Pj is the probability of the jth minimal cut set (explained at the end of this paragraph) and PT is
the probability of the top event (determined by quantitative fault tree analysis). The summation is over
all the minimal cut sets that contain the kth basic event. Probability of a minimal cut set is given by
the product of the probabilities of the basic events in that minimal cut set.
As an example, consider a fault tree that contains five basic events B1, B2, B3, B4, and B5. Let the
minimal cut sets of the fault tree be M1 = (B2), M2 = (B1, B3, B4), and M3 = (B1, B4, B5). The V-F
measure of importance of the fourth basic event with respect to top event probability is given by

i4 = (P2 + P3) / PT = (P1P3P4 + P1P4P5) / PT
where Pk is the probability of the kth basic event, Pj is the probability of the jth minimal cut set, and
PT is the probability of the top event.
In general, the basic event probabilities could be functions of time. In that case Pj, PT, and ik will
also be functions of time.
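The worked example above can be sketched numerically. Eq. (9-1) sums the probabilities of the minimal cut sets containing the basic event of interest; only M2 and M3 contain B4. The basic-event probabilities below are hypothetical, and the top event probability uses the rare-event (sum of cut sets) approximation:

```python
from math import prod

def vesely_fussell(k, cut_sets, p, p_top):
    """Eq. (9-1): V-F importance of basic event k = (sum of the probabilities of
    the minimal cut sets that contain k) / PT. Independence assumed."""
    return sum(prod(p[e] for e in c) for c in cut_sets if k in c) / p_top

# The worked example from the text: M1 = (B2), M2 = (B1, B3, B4), M3 = (B1, B4, B5).
# Basic-event probabilities are hypothetical:
p = {1: 1e-3, 2: 1e-4, 3: 2e-3, 4: 5e-3, 5: 1e-3}
cut_sets = [(2,), (1, 3, 4), (1, 4, 5)]
p_top = sum(prod(p[e] for e in c) for c in cut_sets)   # rare-event approximation

i4 = vesely_fussell(4, cut_sets, p, p_top)   # only M2 and M3 contain B4
i2 = vesely_fussell(2, cut_sets, p, p_top)   # only M1 contains B2
```

Here B2 dominates (its single-event cut set carries nearly all of PT), so i2 is close to 1 while i4 is small, despite B4 appearing in two cut sets.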
As discussed earlier, there could be more than one undesired event associated with a system and
each undesired event may have a specific risk associated with it. The basic events (basic failures of the
components of the system) may be ranked according to their contributions to the system risk (Sundar-
arajan, 1992b). The V-F measure of importance of the kth basic event with respect to system risk is
given by
Ik = (Σn ik,n Rn) / R    (9-2)

where ik,n is the V-F measure of importance of the kth basic event with respect to the nth undesired
event probability, Rn is the risk due to the nth undesired event, and R is the total system risk. The
summation is over all the undesired events associated with the system. If the risk includes both economic
losses and fatalities, basic events may be ranked according to economic losses and fatalities separately.
Some components may have more than one failure mode associated with them. Sometimes each
failure mode is treated as a separate basic event, and so a component may have more than one basic
event associated with it. In such cases, either each component or each basic event may be ranked
according to its importance. Whether a component has one or more basic events, the basic event ranking
does not change; Eqs. (9-1) and (9-2) apply.
The V-F measure of importance of the jth component with respect to top event probability is given
by
i′j = Σk ik    (9-3)
where ik is the V-F measure of importance of the kth basic event with respect to top event probability.
The summation is over all the basic events associated with the jth component.
Similarly, the V-F measure of importance of the jth component with respect to system risk is given
by
I′j = Σk Ik    (9-4)

where Ik is the V-F measure of importance of the kth basic event with respect to system risk. The
summation is over all the basic events associated with the jth component.
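Eqs. (9-3) and (9-4) reduce to a sum over the basic events belonging to the component. A minimal sketch with hypothetical importance values:

```python
def component_importance(basic_event_importances):
    """Eqs. (9-3)/(9-4): a component's V-F importance is the sum of the V-F
    importances of its associated basic events (one per failure mode)."""
    return sum(basic_event_importances)

# A component with two failure modes, i.e., two basic events (hypothetical values):
i_component = component_importance([0.12, 0.03])   # -> 0.15
```

A component with a single basic event simply inherits that event's importance, so component and basic-event rankings coincide in that case.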
Basic events and/or components are ranked in descending order according to their importance values.
Either the importance with respect to a specific undesired event (top event) probability or with respect
to system risk is used, as appropriate.
What is the practical significance of the V-F importance ranking with respect to top event proba-
bility? One by one, if each basic event probability is decreased by a specific percentage (say, 10 or
20%) and the effect of each such decrease on the top event probability is computed, the reductions in
the top event probability will be in the same order as the basic event ranking. That is, a reduction of
10% in the probability of the mth ranked basic event will produce more of a reduction in the top event
probability than a 10% reduction in the probability of the nth ranked basic event if m < n. Therefore,
if the costs of decreasing the probabilities of the mth and nth ranked basic events by 10% are the same,
then it is prudent to decrease the probability of the higher ranked basic event than the probability of
the lower ranked basic event.
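The component-level ranking described above can be sketched numerically. The basic-event importances iₖ are assumed to have already been obtained from a prior quantitative fault tree analysis; the component names, failure modes, and numerical values below are hypothetical.

```python
# Sketch of component-level V-F importance and ranking (Eq. 9-3).
# Basic-event importances are assumed given; all names/values are hypothetical.

def component_importance(basic_event_importance, component_of):
    """Sum basic-event V-F importances over the events of each component."""
    totals = {}
    for event, imp in basic_event_importance.items():
        comp = component_of[event]
        totals[comp] = totals.get(comp, 0.0) + imp
    return totals

def rank_descending(importance):
    """Rank components (or basic events) in descending order of importance."""
    return sorted(importance, key=importance.get, reverse=True)

# A pump with two failure modes (two basic events) and a valve with one.
i_k = {"pump-fails-to-start": 0.12, "pump-fails-to-run": 0.08, "valve-stuck": 0.15}
component_of = {"pump-fails-to-start": "pump",
                "pump-fails-to-run": "pump",
                "valve-stuck": "valve"}

I_j = component_importance(i_k, component_of)
ranking = rank_descending(I_j)   # the pump (0.20) outranks the valve (0.15)
```

Note that the valve's single basic event (0.15) outranks either pump basic event alone, yet the pump outranks the valve at the component level; this is why component ranking and basic event ranking can differ.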
6. More detailed discussion on event tree analysis may be found in reports by the Nuclear Regulatory Commission (1975,
1983).
7. It is possible to construct a fault tree for the complete plant and analyze it. Such a fault tree may become unduly large for
analysis if the plant contains a large number of components. Event trees become useful in such cases. Even when the plant fault
tree is not unduly large, some analysts prefer to analyze individual systems separately and then combine them through event
trees.
[Event tree diagram: an initiating event with probability P_I branches into A-success/A-failure and then B-success/B-failure. The resulting event sequences range in consequence from insignificant, to $80,000 damage with no deaths or injuries, to $300,000 damage and 4 deaths.]

It is assumed that only one initiating event occurs at a
time. This assumption may be made if the probability of two or more initiating events occurring at the
same time is small compared to the individual initiating event probabilities.
Event tree analysis is a forward logic approach because it progresses from system level to plant
level. This is also an inductive approach.
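Event-sequence probabilities for a simple tree with one initiating event and two mitigating systems A and B can be enumerated directly; the probabilities below are hypothetical.

```python
# Enumerate event sequences for a two-system event tree.
# P_I is the initiating event probability; P_A and P_B are the (hypothetical)
# failure probabilities of mitigating systems A and B.
P_I = 1e-2
P_A, P_B = 1e-3, 5e-3

sequences = {
    ("A-success", "B-success"): P_I * (1 - P_A) * (1 - P_B),
    ("A-success", "B-failure"): P_I * (1 - P_A) * P_B,
    ("A-failure", "B-success"): P_I * P_A * (1 - P_B),
    ("A-failure", "B-failure"): P_I * P_A * P_B,
}

# The sequences are mutually exclusive and exhaustive, so their probabilities
# must sum to the initiating event probability.
assert abs(sum(sequences.values()) - P_I) < 1e-15
```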
Structural engineering aspects of system reliability and risk assessment are discussed by Sundararajan
(1992a). That paper forms the basis for parts of this section. In the context of the integration of structural
reliabilities into system reliability, two types of structural failure scenarios (situations) should be
considered.
These two scenarios warrant somewhat different types of system reliability analysis.
1. Determine the annual probability of the event as a function of its magnitude. (This step is referred to as
the hazard analysis).
2. Determine structural failure probabilities at different magnitudes of the event (structural fragility analysis).
3. Determine the system failure probability at different magnitudes of the event (system fragility analysis).
4. Determine the annual system failure probability due to all possible magnitudes of the event (system relia-
bility analysis).
5. Determine the annual system risk due to the event (system risk assessment).
[Figure 9-4. Sample hazard curve for hurricanes: annual probability H (log scale, 10⁻⁷ to 10⁻¹) versus hurricane wind velocity (80 to 120 mph); H₁ is the ordinate at V = V₁.]
4.2.1. Hazard analysis. Probabilities of different magnitudes of the hazardous event are plotted
as a hazard curve. A sample hazard curve for hurricanes is shown in Fig. 9-4. Similar curves may be
drawn for tornadoes, earthquakes, and explosions. The horizontal axis of the curve would be wind
speed, peak ground acceleration, and peak pressure in the case of tornadoes, earthquakes, and explosions,
respectively.
Hazard analysis for natural events is based on historical data, mathematical models of the natural
phenomenon, and/or expert opinion (McDonald, 1983; Coats and Murray, 1985; Bernreuter et al., 1987;
Reed and Ferrell, 1987; EPRI, 1989). Hazard analysis for internal explosions (explosion within the
system or plant) is based on historical data in similar plants, expert opinion, and system reliability
analysis. Hazard analysis for external explosions (explosions outside the plant) is also based on similar
considerations.
Hazard analysis is not necessarily a structural engineering task. Hazard curves for natural events are
usually developed by seismologists and meteorologists.
4.2.2. Structural fragility analysis. First the loads imposed on each structure at different mag-
nitudes of the event are determined, and then the structural failure probabilities at these loads are
determined. The results are presented in the form of structural fragility curves (Fig. 9-5). The probability
f₁ on the vertical axis of Fig. 9-5 is the conditional probability of structural failure given that a hurricane
of wind speed V₁ has occurred. Each structure in the plant may have a different fragility curve.
Structural fragility curves are developed through structural reliability analysis and expert opinion.
Sufficient historical data are seldom available to develop fragility curves. Development of fragility
curves for earthquakes is discussed in Chapter 19 and fragility curves for tornadoes and hurricanes are
discussed in Chapter 20.
4.2.3. System fragility analysis. System reliability analyses are conducted at different magnitudes
of the event. For example, in the case of hurricanes, system reliability analyses are conducted at wind
speeds V = 80, 90, 100 mph, and so on. System failure probability or undesired event probability is
determined at each of these wind speeds. To determine the undesired event probability at V = V₁, a
quantitative fault tree analysis is conducted with basic event probabilities at V = V₁.

[Figure 9-5. Sample structural fragility curve for hurricanes: conditional structural failure probability f (0 to 1) versus hurricane wind velocity V (80 to 120 mph); f₁ is the ordinate at V = V₁.]

Many of the basic
event probabilities associated with mechanical, electrical, and electronic components may be independent
of wind speed because wind forces may not affect their failure probabilities. On the other hand,
many basic event probabilities associated with structures are dependent on wind speed. Structural failure
probabilities at wind speed V = Vᵢ are taken from the structural fragility curves and used in the system
reliability analysis. What we obtain from the system reliability analysis is the conditional probability.
For example, in Fig. 9-6, F is the conditional probability of undesired event occurrence given that a
hurricane of wind speed V has occurred. The system reliability analysis is repeated for different wind
speeds and a graph of undesired event probability (or system failure probability) versus hurricane wind
speed is plotted. This graph is known as the system fragility curve or the undesired event fragility curve
(Fig. 9-6).
[Plot: the undesired event fragility curve (0 to 1) and the structural fragility curve, plotted together against hurricane wind velocity V (80 to 120 mph); F₁ is the undesired event ordinate at V = V₁.]

Figure 9-6. Sample system (undesired event) fragility curve for hurricanes.

4.2.4. System reliability analysis. We have thus far computed conditional probabilities of the
undesired event at different hurricane wind speeds. Our goal is to compute the undesired event prob-
ability due to any hurricane (of any wind speed) that might occur during a year. This probability is
computed as follows.
1. Divide the hurricane hazard curve into a number of equal intervals of length ΔV. About 10 intervals would
be sufficient in most cases. The ith interval is from Vᵢ to Vᵢ₊₁, where Vᵢ₊₁ = Vᵢ + ΔV.
2. Compute the annual probability of hurricanes with wind speeds between Vᵢ and Vᵢ₊₁. This value is given by

Pᵢ = H(Vᵢ) − H(Vᵢ₊₁)     (9-5)

where H(Vᵢ) is the ordinate of the hazard curve (Fig. 9-4) at V = Vᵢ.
3. Determine the conditional probability of the undesired event given that a hurricane of wind speed Vᵢ has
occurred. This value, F(Vᵢ), is the ordinate of the undesired event fragility curve (Fig. 9-6) at V = Vᵢ.
Similarly, determine F(Vᵢ₊₁) also.
4. The annual probability of the undesired event due to hurricanes of all possible wind speeds is given by

P = Σᵢ Pᵢ [F(Vᵢ) + F(Vᵢ₊₁)]/2     (9-6)
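The four steps above can be sketched numerically. The hazard and fragility functions below are hypothetical smooth stand-ins for the curves of Figs. 9-4 and 9-6; only the interval discretization follows the procedure in the text.

```python
# Combine a hurricane hazard curve H(V) with a system fragility curve F(V).
# H and F are hypothetical stand-ins; the discretization follows steps 1-4.
import math

def H(v):
    """Annual probability of a hurricane exceeding wind speed v (hazard curve)."""
    return 1e-1 * math.exp(-(v - 80.0) / 10.0)

def F(v):
    """Conditional undesired-event probability at wind speed v (fragility curve)."""
    return min(1.0, max(0.0, (v - 80.0) / 40.0))

def annual_undesired_event_probability(v_lo=80.0, v_hi=120.0, n=10):
    dv = (v_hi - v_lo) / n            # step 1: equal intervals
    total = 0.0
    for i in range(n):
        vi = v_lo + i * dv
        vj = vi + dv
        p_i = H(vi) - H(vj)           # step 2: probability of winds in [vi, vj]
        total += p_i * 0.5 * (F(vi) + F(vj))   # step 4: weight by average fragility
    return total

P = annual_undesired_event_probability()
```

With these stand-in curves the annual undesired event probability works out to roughly a quarter of the annual probability of any hurricane, reflecting the low fragility at the most frequent (low) wind speeds.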
5. APPLICATIONS
Some specific industrial applications in which probabilistic structural mechanics played a role in the
system reliability and risk assessment are discussed in the following subsections.
A number of internal event PRAs of nuclear plants have been conducted since the Reactor Safety Study;
for example, PRAs have been conducted by the Commonwealth Edison Company (1981), Power Authority
of the State of New York (1982), Consolidated Edison Company of New York (1982), Houston Lighting
and Power Company (1989), and Nuclear Regulatory Commission (1989), to name just a few. The Nuclear
Regulatory Commission (1983) published a guide to performing nuclear plant PRAs in 1983. The analysis
procedure used in these PRAs is essentially the same as the one used in the Reactor Safety Study.
Because most structural failures are not included in the system reliability analysis and even the few
pressure vessel and piping failure probabilities included are estimated from historical data, probabilistic
structural mechanics (PSM) plays only a minor role in these internal event probabilistic risk assessments.
(There are a few special cases of internal event PRAs in which PSM played a significant role. One
such case is described in Section 5.3.)
Internal event PRAs have also been conducted for some nonnuclear plants. Again, structural failures
are seldom included because of their very low probabilities compared to the failure probabilities of
mechanical, electrical, and electronic components.
Internal event PRAs are used not only to estimate the risk but also to rank components according
to their contributions to the risk. If structures are to be ranked, they should be included in the PRA
even if their failure probabilities are much lower than those of nonstructural components.
Structural reliability engineers and system reliability engineers can combine their expertise to develop
economical methods of risk reduction in complex technological systems.
Pressurized thermal shock is initiated by undesired events such as loss of coolant accidents (LOCAs)
or steam generator tube ruptures (SGTRs) (these undesired events are called transients in the nuclear
power industry). If one of these transients occurs, a number of safety systems come into action auto-
matically or by operator intervention to mitigate the effects of the transient. It is possible (although
remote) that some of these systems malfunction or that operators make mistakes. Depending on which
systems/operators function correctly and incorrectly, a number of different scenarios (event sequences)
result. These event sequences and their probabilities are determined by event tree analysis. A different
event tree is constructed for each transient. (The transient is the initiating event of the tree.) Each event
tree may have dozens of event sequences.
A transient, in conjunction with some system malfunctions and/or operator errors, may produce severe
temperature drop and pressure rise in the primary system of the reactor. The pressure and temperature
changes as a function of time are determined by thermal-hydraulic analysis. These pressure and tem-
perature time histories depend on which systems and/or operators malfunctioned. Therefore each event
sequence of the event trees has a pressure and temperature time history associated with it.
One area most adversely affected by the pressure rise and temperature drop is the beltline region of
the reactor pressure vessel (RPV). This region of the RPV has reduced fracture toughness because of
irradiation. The cool temperature also reduces the fracture toughness. Under such a weakened condition,
tensile stresses caused by the pressure increase could propagate any existing flaws in the pressure vessel
and thus possibly breach the integrity of the vessel. Probabilistic fracture mechanics techniques are used
to compute the failure probability of the reactor pressure vessel. A number of simulation techniques for
the probabilistic fracture analysis of reactor pressure vessels under PTS conditions are discussed by
Gamble and Strosnider (1981), Sundararajan (1982), Balkey and Furchi (1984), and Witt (1984).
The PTS risk assessment consists of the following steps (Turner et al., 1984).
1. System reliability engineers perform event tree analysis for each transient and identify the possible event
sequences and their probabilities. Let pᵢ,ⱼ be the probability of the ith event sequence due to the jth transient
(initiating event).
2. Thermal hydraulic engineers determine the pressure and temperature time histories for each event sequence.
Approximate, conservative methods are used at this stage of the analysis.
3. Structural engineers compute the failure probability of the reactor pressure vessel for each pressure and
temperature time history. Approximate, conservative methods are used at this stage of the analysis. Let fᵢ,ⱼ
be the conditional failure probability given the ith event sequence of the jth transient.
4. The probability of reactor pressure vessel failure due to the ith event sequence of the jth transient is
pᵢ,ⱼ fᵢ,ⱼ, and the total vessel failure probability is

P = Σᵢ Σⱼ pᵢ,ⱼ fᵢ,ⱼ
5. Identify the event sequences that contribute the most to P. Usually only a few event sequences will be
dominant contributors. Many dozens of event sequences will be found to be insignificant contributors.
6. Repeat steps 2 to 4 for the dominant event sequences identified in step 5. More accurate thermal hydraulic
analyses and probabilistic fracture mechanics analyses are conducted at this stage of the analysis. This
provides a more accurate reactor pressure vessel failure probability than step 4.
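Steps 1 to 5 can be sketched in miniature as follows. The transients, event-sequence probabilities p, and conditional vessel failure probabilities f are all hypothetical.

```python
# PTS risk scoping in miniature: combine event-sequence probabilities p with
# conditional vessel failure probabilities f. All numbers are hypothetical.
# Keys are (transient, sequence-number) pairs.
p = {("LOCA", 1): 2e-4, ("LOCA", 2): 5e-6, ("SGTR", 1): 1e-4}
f = {("LOCA", 1): 1e-4, ("LOCA", 2): 3e-2, ("SGTR", 1): 2e-5}

# Step 4: contribution of each sequence, and the total vessel failure probability.
contribution = {seq: p[seq] * f[seq] for seq in p}
P = sum(contribution.values())

# Step 5: dominant sequences, largest contributor first; these are the ones
# that would be reanalyzed with more accurate methods in step 6.
dominant = sorted(contribution, key=contribution.get, reverse=True)
```

Note how an unlikely sequence with high conditional failure probability can dominate: here the second LOCA sequence contributes most even though its sequence probability is the smallest.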
If the PTS-induced reactor pressure vessel failure probability thus computed is within acceptable
levels, no further action is necessary. If not, modifications must be made to the systems or operating
procedures to reduce the risk (Turner et al., 1984; Moylan et al., 1987). Effective communication
between system, thermal hydraulics, and structural engineers is necessary to arrive at viable and cost-
effective modifications. Feedback between these engineers about how modifications to systems or op-
erating procedures would change event sequence probabilities, the pressure rise, temperature drop, and
fracture toughness of the beltline region of the vessel, and how the changes in the pressure rise, tem-
perature drop, and fracture toughness would affect the reactor vessel failure probability, is essential to
develop effective and economical risk reduction strategies.
The emphasis of the discussion here is on the synergism of structural reliability analysis and system
reliability analysis to develop a unified PTS risk assessment procedure. Structural reliability aspects are
only briefly noted. More details on the probabilistic fracture mechanics and structural reliability analysis
aspects of PTS risk assessment may be found in Chapter 22.
Benefits include not only the reduction in direct monetary losses but
also any decrease in pollution, decrease in public risk, and other intangible benefits. Within the context
of risk reduction projects, the primary cost is hardware, maintenance, repair, operation, and management;
benefits are primarily reduction in fatalities, property damage, and plant downtime. There could be
secondary costs and benefits and they should be considered in the cost-benefit analysis.
If the cost and benefit could be stated in some monetary units (say, dollars), then a straightforward
comparison could be made. A risk reduction scheme is considered cost effective if the benefit is greater
than the cost. However, putting a monetary value on injuries, fatalities, and environmental damage could
be a controversial issue.
If the cost and benefit are stated in monetary terms, both the benefit and cost should be stated in
present dollars. Methods of converting future costs and revenues to present dollars may be found in
economics books (e.g., Grant et al., 1982). An Electric Power Research Institute report by Cohn et al.
(1979) on value-impact analysis also discusses the conversion of future costs to present dollars, taking
into account inflation, interest rates, and other factors.
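A minimal sketch of present-value conversion with a constant annual discount rate follows; the amounts and the 8% rate are hypothetical, and a real analysis would also account for inflation and other factors, as noted above.

```python
# Discount a single future amount to present dollars at a constant annual rate.
# Hypothetical amounts and rate; see engineering-economy texts for refinements.
def present_value(future_amount, annual_rate, years):
    """Present value of a single amount incurred `years` from now."""
    return future_amount / (1.0 + annual_rate) ** years

# A $100,000 loss expected 10 years from now, discounted at 8% per year:
pv = present_value(100_000, 0.08, 10)   # about $46,300
```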
The general concepts and procedure of cost-benefit analysis may be illustrated by the following
hypothetical example adapted from Sundararajan and Gupta (1991).
During a routine inspection of an industrial plant, a number of pipe support anchor bolts were found
to have been improperly installed. Removing these bolts and installing new ones is a time-consuming
and costly effort. Total cost was estimated at $350,000. A cost-benefit analysis was performed to decide
whether to replace the bolts or leave them as they were.
The plant consisted of six pipelines and a number of mechanical and electrical equipment items. Each
pipeline was supported by a number of supports and each support was anchored into the foundation by
four to eight anchor bolts. Thus the plant had hundreds of anchor bolts and some of these bolts were
improperly installed. The improperly installed bolts increased the failure probabilities of the pipelines.
Failure of these pipelines would result in some equipment damage and plant shutdown. The estimated
cost of such an event is $2,000,000. Replacement of the improperly installed anchor bolts would de-
crease the failure probabilities of pipelines and thus the probability of equipment damage and plant
shutdown.
The cost-benefit analysis of replacing the improperly installed anchor bolts consisted of the following
steps.
1. Failure probability of each pipeline was computed with the existing (improperly installed) anchor bolts and
with replaced (properly installed) anchor bolts. Results are summarized in Table 9-2.
2. A fault tree for the system was constructed (Fig. 9-7). P1 to P6 refer to the six pipeline failures, and N1
to N6 refer to failures of other equipment or groups of equipment. Failure probabilities of the equipment
are given in Table 9-3. Equipment failure probabilities are not affected by anchor bolt replacement.
3. A quantitative fault tree analysis was performed with the failure probabilities of pipelines with improperly
installed anchor bolts (failure probabilities in column 2 of Table 9-2). Probability of the top event (equipment
damage and plant shutdown) thus computed was 9.6 × 10⁻³/year. The plant is expected to operate for
another 35 years. Therefore the top event probability over the remaining life is 3.36 × 10⁻¹. Cost of
equipment damage and plant shutdown is $2,000,000. The risk over the remaining life is given by the
product of the top event probability and its consequences. Therefore the risk is $672,000.
4. The quantitative fault tree analysis was repeated with the failure probabilities of pipelines with anchor bolts
replaced (failure probabilities given in column 3 of Table 9-2). The top event probability thus computed
was 1.2 × 10⁻³/year. Risk over the remaining life was computed as before and was found to be equal to
$84,000.
5. Reduction in risk over the remaining life is the difference between the risks computed in steps 3 and 4,
and is equal to $588,000. That is, the benefit of replacing the improperly installed anchor bolts is $588,000.
The cost of this replacement is $350,000. So the net benefit is $238,000. Because there is a positive net
benefit, it was recommended that the anchor bolts be replaced.
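The arithmetic of steps 3 to 5 can be reproduced directly from the numbers above:

```python
# Steps 3-5 of the anchor-bolt example, using the top event probabilities
# obtained from the two fault tree analyses in the text.
def risk_over_remaining_life(annual_top_event_prob, remaining_years, consequence):
    """Risk = (top event probability over remaining life) x consequence cost."""
    return annual_top_event_prob * remaining_years * consequence

risk_existing = risk_over_remaining_life(9.6e-3, 35, 2_000_000)   # step 3: $672,000
risk_replaced = risk_over_remaining_life(1.2e-3, 35, 2_000_000)   # step 4: $84,000

benefit = risk_existing - risk_replaced   # step 5: $588,000
net_benefit = benefit - 350_000           # $238,000 > 0, so replace the bolts
```

Multiplying the annual probability by the number of remaining years follows the approximation used in the example; for small annual probabilities it is close to the exact value 1 − (1 − p)³⁵.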
Table 9-2. Failure probabilities of the pipelines (per year)

Pipeline    With improperly installed bolts    With replaced bolts
P1          3 × 10⁻³                           1 × 10⁻⁴
P2          3 × 10⁻³                           1 × 10⁻⁴
P3          1 × 10⁻³                           1.5 × 10⁻⁴
P4          2 × 10⁻³                           1.5 × 10⁻⁴
P5          4 × 10⁻³                           2 × 10⁻⁴
P6          2 × 10⁻³                           1 × 10⁻⁴
[Fault tree diagram; the top event is equipment damage and plant shutdown, with inputs P1 to P6 and N1 to N6.]
Figure 9-7. Fault tree for the hypothetical cost-benefit analysis example.
Table 9-3. Failure probabilities of the equipment

Equipment    Failure probability
N1           1 × 10⁻²
N2           1 × 10⁻²
N3           6 × 10⁻⁵
N4           3 × 10⁻³
N5           3 × 10⁻³
N6           4 × 10⁻⁴
The foregoing example is straightforward because the risk was only financial loss and there were no
possible injuries or fatalities. If injuries and fatalities are involved, a conservative monetary value has
to be assigned for them.
In addition to "yes" or "no" decisions on repair/replacement questions, cost-benefit analysis may
also be used for choosing between alternate risk-reduction schemes. The net benefits of alternate
schemes are computed and the scheme providing the highest net benefit is selected.
There is always some uncertainty associated with the estimation of failure probabilities and conse-
quences (economic losses and fatalities). These uncertainties become particularly important in cost-
benefit analysis if the failure probabilities are very low and the consequences are very high. Such is
the case in postulated nuclear power plant accidents. It is customary in such cases to compute the upper
and lower bound values (or 95% confidence bounds) for the risk in addition to the best estimate. These
bounds and the best estimate value are used when comparing alternate risk reduction schemes. For
further details on uncertainty analysis, readers are referred to the PRA Procedures Guide (Nuclear
Regulatory Commission, 1983).
More details on cost-benefit analysis may be found in Cohn et al. (1979), Huberlin et al. (1983),
Dasgupta and Pearce (1972), Mishan (1973), and Sassone and Schaffer (1978). The first two reports
deal with cost-benefit analysis in the context of nuclear power plant risk analysis and the others discuss
cost-benefit analysis in a general context.
6. CONCLUDING REMARKS
The importance of integrating structural reliabilities into system and plant risk assessment and the
methods of doing so are discussed in this chapter. A number of applications are also discussed. The
applications include internal event, external event, and PTS risk assessments, prioritization of structures
for in-service inspection, and cost-benefit analysis.
The vast majority of applications to date are in the nuclear power industry. But some of the methods
developed in the nuclear power industry are being adapted for use in nonnuclear industries. The coming
years should see a gradual increase in the use of structural-cum-system reliability analysis techniques
in the fossil power, petroleum, and process industries.
REFERENCES
American Society of Mechanical Engineers (1992). Risk-Based Inspection-Development of Guidelines, Vol. 2: Light
Water Reactor Nuclear Power Plant Components. New York: American Society of Mechanical Engineers.
BALKEY, K. R., and E. L. FURCHI (1984). Probabilistic fracture mechanics sensitivity study for plant specific
evaluations of reactor vessel pressurized thermal shock. In: Advances in Probabilistic Fracture Mechanics.
C. Sundararajan, Ed. New York: American Society of Mechanical Engineers, pp. 71-85.
BERNREUTER, D. L., J. B. SAVY, and R. W. MENSING (1987). Comparison of Seismic Hazard Estimates Obtained
by Using Alternative Seismic Hazard Methodologies, NUREG/CR Report. Washington, D.C.: Nuclear Reg-
ulatory Commission.
COATS, D. W., and R. C. MURRAY (1985). Natural Phenomena Hazards Modeling Project: Extreme Wind/Tornado
Hazard Models for Department of Energy Sites. UCRL-53526. Livermore, California: Lawrence Livermore
National Laboratory.
COHN, M., J. A. DRACUP, R. C. ERDMANN, E. HUGHES, and J. von HERRMANN (1979). Value-Impact Analysis.
Palo Alto, California: Electric Power Research Institute.
Commonwealth Edison Company (1981). Zion Probabilistic Safety Study. Chicago: Commonwealth Edison
Company.
Consolidated Edison Company of New York (1982). Indian Point Probabilistic Safety Study. New York: Consol-
idated Edison Company of New York.
CUMMINGS, G. E. (1986). Summary Report on the Seismic Safety Margins Research Program. NUREG/CR-4431.
Washington, D.C.: Nuclear Regulatory Commission.
DASGUPTA, A. K., and D. W. PEARCE. (1972). Cost-Benefit Analysis: Theory and Practice. London: Macmillan
Press.
ELLINGWOOD, B., and T. A. REINHOLD (1982). Tornado Damage Risk Assessment. NUREG/CR-2944. Washington,
D.C.: Nuclear Regulatory Commission.
EPRI (1989). Probabilistic Seismic Hazard Evaluations at Nuclear Power Plant Sites in the Central and Eastern
United States: Resolution of the Charleston Earthquake Issue. EPRI NP-6395-D. Palo Alto, California:
Electric Power Research Institute.
GAMBLE, R. M., and J. STROSNIDER (1981). An Assessment of the Failure Rate for the Beltline Region of PWR
Pressure Vessels during Normal Operation and Certain Transient Conditions. NUREG-0778. Washington,
D.C.: Nuclear Regulatory Commission.
GANGADHARAN, A. C., G. D. GUPTA, and I. BERMAN (1975). Reliability evaluation of a sodium heated steam
generator. In: Reliability Engineering in Pressure Vessels and Piping. A. C. Gangadharan, Ed. New York:
American Society of Mechanical Engineers, pp. 51-68.
GATELEY, W., D. STODDARD, and R. L. WILLIAMS (1968). GO: A Computer Program for the Reliability Analysis
of Complex Systems. Kaman Sciences Corporation.
GRANT, E. L., W. G. IRESON, and R. S. LEAVENWORTH (1982). Principles of Engineering Economy. New York:
John Wiley & Sons.
GREEN, A. E., and A. J. BOURNE (1972). Reliability Theory. New York: Wiley-Interscience.
HALL, R. E., M. A. AZARM, and J. L. BOCCIO (1984). The identification of the safety importance of seismically
sensitive nuclear power plant components through the use of probabilistic risk assessment. In: Seismic Events
Probabilistic Risk Assessment. P. Y. Chen and C. I. Grimes, Eds. New York: American Society of Mechanical
Engineers, pp. 45-51.
HUBERLIN, S. W., et al. (1983). A Handbook for Value-Impact Assessment. NUREG/CR-3568. Washington, D.C.:
Nuclear Regulatory Commission.
HOSSER, D., and H. LIEMERSDORF (1991). Seismic risk analyses in the German risk study-Phase B. Nuclear
Engineering and Design 128:259-268.
Houston Lighting and Power Company (1989). South Texas Project Probabilistic Safety Assessment. Houston:
Houston Lighting and Power Company.
KIREMIDJIAN, A. (1985). Seismic Risk to Major Industrial Facilities. Palo Alto, California: Stanford University.
KIREMIDJIAN, A., K. ORTIZ, R. NIELSEN, and B. SAFAVI (1985). Seismic Risk to Major Industrial Facilities. Report
No. 72. Palo Alto, California: Stanford University.
LAMBERT, H. E. (1975). Fault Trees for Decision Making in Systems Analysis. UCRL-51829. Livermore, California:
Lawrence Livermore National Laboratory.
LEES, F. (1980). Loss Prevention in the Process Industries: Hazard and Operability (HAZOP) Methods. Boston:
Butterworths.
MCDONALD, J. R. (1983). A Methodology for Tornado Hazard Probability Assessment. NUREG/CR-3058. Wash-
ington, D.C.: Nuclear Regulatory Commission.
MISHAN, E. J. (1973). Cost-Benefit Analysis. New York: Praeger.
MOYLAN, M. E., K. R. BALKEY, C. B. BOND, and V. A. PERONE (1987). Reactor vessel life extension. ASME
Paper 87-PVP-15. New York: American Society of Mechanical Engineers.
Nuclear Regulatory Commission (1975). Reactor Safety Study: An Assessment of Accident Risks in U.S. Commercial
Nuclear Power Plants (WASH-1400). NUREG-75/014. Washington, D.C.: Nuclear Regulatory Commission.
Nuclear Regulatory Commission (1982). NRC Staff Evaluation of Pressurized Thermal Shock. Policy Issue SECY-
82-465. Washington, D.C.: Nuclear Regulatory Commission.
Nuclear Regulatory Commission (1983). PRA Procedures Guide: A Guide to the Performance of Probabilistic Risk
Assessment for Nuclear Power Plants. NUREG/CR-2300. Washington, D.C.: Nuclear Regulatory
Commission.
Nuclear Regulatory Commission (1987). Format and Content of Plant-Specific Pressurized Thermal Shock Safety
Analysis Reports for Pressurized Water Reactors. Regulatory Guide 1.154. Washington, D.C.: Nuclear Reg-
ulatory Commission.
Nuclear Regulatory Commission (1989). Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants.
NUREG-1150. Washington, D.C.: Nuclear Regulatory Commission.
Philadelphia Electric Company (1983). Severe Accident Risk Assessment for Limerick Generating Station. Phila-
delphia: Philadelphia Electric Company.
Power Authority of the State of New York (1982). Indian Point Probabilistic Safety Study. New York: Power
Authority of the State of New York.
RAVINDRA, M. K, and W. H. TONG (1991). Seismic risk analysis of conventional and chemical facilities. In:
Proceedings of the International Conference on Probabilistic Safety Assessment and Management. G. Apos-
tolakis, Ed. Beverly Hills, California, pp. 881-885.
REED, J. W., and W. L. FERRELL (1987). Extreme Wind Analysis for the Turkey Point Nuclear Plant. NUREG/CR-
4762. Washington, D.C.: Nuclear Regulatory Commission.
Research Triangle Institute (1981). Extreme Wind Risk Analysis of the Indian Point Nuclear Generating Station.
Report No. 44T-2171. Raleigh, North Carolina: Research Triangle Institute.
SASSONE, P. G., and W. A. SCHAFFER (1978). Cost-Benefit Analysis: A Handbook. New York: Academic Press.
SUNDARARAJAN, C. (1981). Probabilistic Assessment of Risks due to Natural Hazards. San Francisco: Impell
Corporation.
SUNDARARAJAN, C. (1982). A Simulation Technique for the Probabilistic Fracture Analysis of Reactor Vessels
under Pressurized Thermal Shock. San Francisco: Impell Corporation.
SUNDARARAJAN, C. (1991). Guide to Reliability Engineering: Data, Analysis, Applications, Implementation, and
Management. New York: Van Nostrand Reinhold.
SUNDARARAJAN, C. (1992a). Structural engineering aspects of plant risk assessment. In: Proceedings of the Process
Plant Safety Symposium. New York: American Institute of Chemical Engineers, pp. 940-950.
SUNDARARAJAN, C. (1992b). Plant Risk Assessment and Components Prioritization. Humble, Texas: EDA
Consultants.
SUNDARARAJAN, C., and P. GUPTA (1991). Structural Reliability Applications in Process Plant Risk Management.
Humble, Texas: EDA Consultants.
SUNDARARAJAN, C., T. DESMOND, R. D. WHEATON, and A. GHOSE (1981). Seismic Risk Assessment in the Nuclear
Industry. San Francisco: EDS Nuclear, Inc.
SUNDARARAJAN, C., V. LEE, and S. CHENG (1990). Risk-Based Pipe Weld Inspections in Process Plants. Humble,
Texas: EDA Consultants.
TURNER, R. L., K. R. BALKEY, and J. H. PHILLIPS (1984). A plant specific risk scoping study of reactor vessel
pressurized thermal shock. In: Advances in Probabilistic Fracture Mechanics. C. Sundararajan, Ed. New
York: American Society of Mechanical Engineers, pp. 87-103.
TWISDALE, L. A., and W. L. DUNN (1983). Probabilistic analysis of tornado wind risks. American Society of Civil
Engineers Journal of the Structural Division 109(2):468-488.
VESELY, W. E., T. C. DAVIS, R. S. DENNING, and N. SALTOS (1983). Measures of Risk Importance and Their
Applications. NUREG/CR-3385. Washington, D.C.: Nuclear Regulatory Commission.
Vo, T. V., B. F. GORE, E. J. ESCHBACH, and F. A. SIMONEN (1989). Probabilistic risk assessment based guidance
for piping inservice inspection. Nuclear Technology 88:13-20.
Vo, T. V., B. W. SMITH, F. A. SIMONEN, and S. R. DOCTOR (1990). Development of generic in-service inspection
priorities for pressure boundary systems. Nuclear Technology 92:291-299.
WITT, F. J. (1984). Development and applications of probabilistic fracture mechanics for critical nuclear reactor
components. In: Advances in Probabilistic Fracture Mechanics. C. Sundararajan, Ed. New York: American
Society of Mechanical Engineers, pp. 55-70.
10
HUMAN ERRORS AND
STRUCTURAL RELIABILITY
R. E. MELCHERS
1. INTRODUCTION
This chapter is concerned with human error as it affects the products of the structural engineering
profession and the construction industry. Yet it must be said at the outset that the structural engineering
industry has an excellent record in achieving structural safety and structural serviceability. The risk of
death as a result of structural failure while a structure is in use is very low, as indicated in Table
10-1. The statistics for structural failure therein refer to buildings, bridges, etc., but even extending the
definition to the structural components of aircraft, trains, aerospace vehicles, and motor cars will not
change the conclusions significantly. It is clear that only very occasionally do significant structural
failures occur. Why, then, should we be concerned about structural reliability and in particular the
influence of human error on structural reliability?
The reasons are twofold. One is concern with the safety of new, perhaps inherently risky ventures
and the safety of personnel using them and the other is concern with the general safety of members of
society. New forms of bridge construction and extension of techniques beyond previous applications
are well known to have been the apparent cause of failure in the past. Similarly, development of ever
more complex systems, both structural and nonstructural, has been accompanied by occasional, sometimes
spectacular, failures. Examples include the Tay bridge, the Tacoma Narrows bridge, and the West
Gate bridge and nonstructural systems such as the Flixborough and Bhopal chemical plants and the
Chernobyl nuclear plant. The implication is that whenever new or particularly hazardous systems are
being designed, constructed, and used, there should be particular interest in their safety and this applies
to structural engineering as much as to complex systems more generally.
Society is interested in structural reliability only in the sense that a structural failure with significant
consequences shatters confidence in the stability and continuity of one's surroundings. Buildings,
bridges, and other such structures are seen as "rock solid," "strong," and very much part of our
permanent environment. History shows that buildings and bridges usually last a long time: perhaps on
the order of hundreds of years. Society does not expect structures to fail. However, it is much less
surprised at deaths due to motor car accidents and accepts aircraft crashes somewhat less easily. Clearly,
there is a difference in expectations: the risk levels for buildings and bridges are usually associated with
involuntary risk (i.e., the background risk associated with day-to-day living) and are much lower than
the risk associated with voluntary activities, such as travel, mountain climbing, deep sea fishing, or
those associated with an occupation.
As most structural engineers know, structural failure is actually quite common. Detailed investigation
of any structure will reveal some observable defects, perhaps even almost immediately on construction.
Usually, however, there are no immediate or significant short-term consequences, although there may
be long-term problems such as those due to corrosion and fatigue. It seems that structural failures are
really only of interest to society when the overt consequences of failures are sufficiently serious. Thus,
even if an almost insignificant error leads to a serious consequence, the importance attached to it will
be very large indeed. Conversely, major errors in structural engineering may occur without anyone ever
becoming aware of them unless there is a structural failure or serious defect. It should be evident,
therefore, that it is not the nature of the error that is committed, but rather the consequences, that govern
not only society's perception of the safety of structures but also what is recorded in history. It is clear,
also, that figures such as those in Table 10-1 reflect consequences, given that failure has occurred. These
two aspects, the failure event and its consequences, cannot be separated. The seriousness of conse-
quences will color any statistics on the reasons for structural failure, a matter perhaps not as widely
recognized as it should be.
Our interest herein is with situations in which human error may not be discounted in assessing the
reliability of a system. For structural engineers this might become necessary when dealing with a novel
structural design or with a novel construction technique (e.g., a new type of offshore structure, or a
new form of bridge). For others it might be necessary when developing proposals for particularly
hazardous facilities, such as nuclear power facilities, chemical plants, or liquefied petroleum gas depots.
One of the difficult aspects of structural reliability theory is relating the observed rates
of failure of structures to the numbers calculated by reliability theory. The discrepancy has been noted
many times (Brown, 1979; Ellingwood, 1987). Typically, annual rates calculated using high-quality
descriptions of loading, material strength, etc. are one to two orders of magnitude lower than those
[Table 10-1. Columns: Activity; approximate death rate (x 10^-9 deaths/hr of exposure); estimated typical exposure (hr/year); typical risk of death (x 10^-6/year)*]
*Values rounded.
Source: Melchers, R. E. (1987b). Structural Reliability Analysis and Prediction. Chichester, England: Ellis Horwood/John Wiley
& Sons. Reprinted with permission.
observed. Little can really be said about these comparisons because the "observed" database is very
scanty indeed and is in any case of doubtful validity owing to (1) poor recording, and (2) inhomogeneity
due to differing design standards with time and location (Kupfer and Rackwitz, 1980). It is clear,
however, that human error plays an important part in the failure of structures and other complex systems,
a matter widely recognized and the subject of a number of conferences (Schneider, 1983; Nowak, 1986)
as well as some review papers (Lind, 1983a; Ellingwood, 1987).
Section 3 of this chapter reviews aspects of several surveys of structural failures and draws some
implications. The matter of "gross error," so dominant in the earlier literature on human error, is
discussed, as is the related issue of "unimaginables."
Section 4 looks at the nature of human error and how its description has been approached by different
professional groups. Various classifications have been given and different degrees of quantitation have
been accorded some types of errors. Significantly, from the point of view of structural engineering, little
attention has so far been given to cognitive errors, that is, those concerned with thinking processes.
The question of intent in relation to action taken is also raised.
Section 5 deals with the question of whether human errors can be incorporated sensibly in prob-
abilistic analyses. In other words, can human errors be modeled in some way, or do they defy rational
description? Although there are schools of thought that suggest that rational description is not possible,
and others that claim only a "fuzzy set"-based description is possible, the approach taken herein is
that probabilistic descriptions are both possible and rational, provided we do not expect these descrip-
tions to be necessarily simple or all embracing. There may be many matters that are not well understood
and that can be described, at best, only subjectively, but this is not considered herein to invalidate
probabilistic descriptions. A pragmatic approach to probability is taken. Philosophical discussions about
the interpretation of probability have raged for many years without much practical result-these dis-
cussions, although interesting, are best left to others (Barnett, 1973).
Having nailed our colors to the mast, so to say, we explore in Sections 5 through 10 the modeling
of the structural engineering design-construction-use system incorporating the effect of human error,
using ideas also employed in industrial psychology or ergonomics. Most attention is directed toward
the process of design, as most research has been performed in this area. Also, some attention is directed
toward modeling of checking processes. These models may then be combined with more conventional
structural reliability analysis procedures to produce estimates of structural reliability incorporating as-
pects of human behavior and human error. Comments about research directions and needs close the
chapter.
2.1. Notations

D      Failure domain
D'     Safe domain
G(.)   Limit state function
Q      Load random variables
R(.)   Resistance random variables
Ro(.)  Resistance random variables in the absence of human error
X      Vector of basic random variables
f(.)   Probability density function
Pf     Probability of failure
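As a pragmatic illustration of the notation above, the probability of failure Pf = P[G(X) <= 0] can be estimated by simple Monte Carlo sampling over the basic variables X. The sketch below assumes a two-variable limit state G = R - Q with invented normal distributions for resistance and load; the parameter values are assumptions made for this sketch, not data from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Illustrative limit state G(X) = R - Q: failure when resistance R
# falls below load Q.  The distribution parameters are assumptions.
R = rng.normal(loc=300.0, scale=30.0, size=n)   # resistance samples
Q = rng.normal(loc=200.0, scale=25.0, size=n)   # load samples
G = R - Q

# Pf is the fraction of samples falling in the failure domain D = {G <= 0}.
pf = np.mean(G <= 0.0)
print(f"estimated Pf = {pf:.2e}")
```

With these assumed parameters the margin G is normal with mean 100 and standard deviation of about 39, so the estimate should come out near 5 x 10^-3.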
2.2. Abbreviations

3. SURVEYS OF STRUCTURAL FAILURES
The success of a particular structure depends on the effectiveness of its design, documentation, and
construction and also on the manner in which the structure is employed and maintained. Much of this
is concerned with the actions (or inactions) of human beings. To some extent this is revealed by surveys
of structural "accidents" (see Table 10-2). It is seen that, in general, errors committed in the execution
of the processes required for planning and design of structural projects are of considerable importance
and that construction errors are of only slightly less importance. It is notable that for some structural
types, failure during construction is an important part of the failure statistics (Table 10-3). Not shown
here are the statistics for failure due to long-term deterioration such as reinforcement corrosion, spalling,
surface abrasion, and cracking. Such information is much more difficult to obtain although there is
some evidence that actual collapse or cases of considerable damage account for 20-30% of all failures
(see Table 10-4). When individual failure cases are examined in more detail (Table 10-5), it is seen that
ignorance, negligence, and carelessness as well as lack of knowledge are major factors leading to
structural failure but that several other causes also exist. These matters are, clearly, a direct result of
human input or lack of it-they are human errors.
Importantly, the data suggest that so-called unimaginables or unforeseeable events occur only very
rarely. This means that only very seldom does a completely new phenomenon occur: that is, a phenom-
enon that could not have been predicted by the designers (or constructors). Even when failure events
Table 10-3. Time of failure occurrence (percentages*)

                          Buildings
                          (housing,       Industrial   Highway
Time period               offices, etc.)  buildings    construction
During construction             53            35           69
During occupation               43            64           29
During demolition                4             1            2

*All percentages sum vertically.
Source: Adapted from Hauser (1979).
Table 10-4. Consequences of structural failure (percentages)

Collapse                        25            63           20
Loss of safety (distress)       35            63           40
Loss of serviceability          40*           37           40

*Considered to be underrepresented.
Source: Adapted from Melchers et al. (1983).
are considered to be in this novel category, it may be simply that the knowledge was not available to
the designers or was ignored. A case in point is the behavior of the Tacoma Narrows bridge under wind
load conditions. There were antecedents during the 1800s for the "galloping" behavior observed during
the hours preceding the failure of this bridge, but these do not appear to have been known to its designers
(Sibly and Walker, 1977) or were ignored by the profession generally (Brown, 1986). It follows that
the availability of information and proper research and its recording in accessible locations are all
essential preconditions to successful engineering. Also, designers and constructors (and the profession
generally) must be on the alert for possible "new" conditions not predicted by existing design standards
and thinking. History shows that there is an unfortunate trend for designers and others to believe that
past experience can be linearly extrapolated (Petroski, 1985). How these matters are to be tackled in
practice is not at all clear (Knoll, 1986), and little empirical research appears to be available.
Some of the early discussions about human error appear, now, to have been excessively preoccupied
by the notion of gross errors (Schneider, 1983). They were defined as large deviations from commonly
accepted practice or the result of matters totally overlooked during design or construction, but their
definition always caused difficulty. Probably a better way of considering this matter is to recognize that
the term gross error was more concerned with outcomes rather than attempts to study in detail the cause
of the subsequent event(s). In this way the issue of unimaginables (see above) becomes relevant and
some gross errors were undoubtedly of this type. Equally, other gross errors are simply the combination
of a set of individual events, either not predicted or considered so unlikely as to have been ignored.
The need to have a (large) number of individual errors occurring to attain the more uncommon forms
of structural failure has been canvassed by Lind (1983a).
For individual actions or tasks, the concept of a gross error can be accommodated readily through
the use of a probability density function description of task performance, with the extreme tail(s) of the
distribution function describing gross error. A combination of one or more of such extreme events may
then lead to a gross error-type failure, depending on the sensitivity of the structure to such events. This
approach to human error is described in a little more detail in Sections 6 through 8.
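The tail-of-the-distribution view of gross error can be sketched numerically. In the snippet below, task performance is modeled (purely as an illustrative assumption) by a lognormal factor E on the intended value, and outcomes in the extreme lower tail, here below 60% of the intended value, are counted as gross error. The distributional form, the sigma, and the 0.6 threshold are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Task performance as a continuous factor E on the intended value.
# The central part of the density describes ordinary variability;
# realizations in the extreme lower tail are read as "gross error".
E = rng.lognormal(mean=0.0, sigma=0.15, size=1_000_000)
p_gross = np.mean(E < 0.6)
print(f"fraction of outcomes counted as gross error: {p_gross:.1e}")
```

Whether such a tail realization actually produces a gross error-type failure then depends on the sensitivity of the structure to the event, as discussed in the text.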
Investigations of many failure cases (both structural and nonstructural) suggest that the organizational
setting in which design, construction, and use of a facility occur can have a major influence on the
safety and/or adequacy of the facility. Matters such as poor communications, the nature of the man-
agement structure, the decision-making processes, interdepartment, interoffice, and interpersonal rival-
ries, factional infighting, or as Pugsley (1973) would have it, the "climate" of the organization, are all
important factors. Only more recently has this broad issue started to receive the attention it deserves.
It should be evident that the problem of human error is not restricted to structural engineering. The
aerospace industry, in particular, has been at the forefront of investigating and recording the effectiveness
with which humans perform particular tasks, such as monitoring, tracking, and responding. The work
involved here is essentially a man-machine system and has developed into the discipline of ergonomics.
This type of work should be of great interest to structural engineering and in particular to construction.
It has less relevance perhaps for the design and documentation phases that are particularly critical to
civil engineering projects.
4.1. General
The social and psychological nature of human error is complex and still rather poorly understood.
It is not possible within this chapter to give much detail, but reference might be made to some aspects
of human error in the context of psychology. Much of this stems from the ergonomics work, which
itself stems from the study of the interaction between humans and machine (Chapanis, 1959; McCor-
mick, 1964; Reason, 1990). Others with somewhat different perspectives include Drury and Fox (1975),
Kletz (1985), and Rasmussen (1976).
It will be useful to give a somewhat "engineering-flavored" view of the nature of human error. In
particular, this will be helped by looking at some of the classifications that have been given in the
literature. Many of these were preconditioned by the industry involved and tend, in general, to focus
on the operation of a system (e.g., chemical plant, nuclear power plant, aircraft) rather than the design-
construction-use (or operation) sequence so important in structural engineering. The categorization
given here takes a broad view. It considers (1) organizational errors, (2) procedural errors, and (3)
cognitive errors. First, however, a comment about intent.
viewpoints, including the effectiveness of the organizational system. Turner argued that a number of
preconditions must exist for disasters to occur.
Perrow (1984) argued that some man-made systems are so complex that a general overview of the
system by any one individual is no longer possible and that this presents a potential problem when
there are tight linkages or "coupling" between various subgoals and/or targets.
The design of organizational systems to ensure a high level of system safety has gradually come to
be recognized as an appropriate response to the complexity of systems. An example here is the require-
ments imposed by the U.K. Health and Safety Executive on the offshore industry following the findings
of the Cullen report on the Piper Alpha disaster (Department of Energy, 1990). The preparation of a
safety case, previously required for other hazardous process industries, is now required for new projects
in the U.K. sector of the North Sea (and elsewhere also). It is an argued and documented case setting out
the measures, such as systems and organizational structures, to be taken or put in place to achieve
acceptable notional levels of personnel safety. Hence from the viewpoint of regulatory agencies the
safety case is concerned with the auditing of procedures and organizational structures and their likely
effectiveness. Although these requirements are aimed mainly at ensuring the safety of personnel, rather
than the avoidance of system failure per se, the underlying recognition is that a safety culture must be
fostered at all levels within an organization, not least at management level. Typically this requires (1) an
organization sensitive to the outcome of its actions, (2) commitment by all concerned to this sensitivity,
(3) establishment of appropriate standards, procedures, and rules, (4) feedback to appropriate personnel,
(5) a nonpunitive attitude toward achievement of safety and other goals, and (6) flexibility within the
organization to deal with new problems in appropriate ways (Pidgeon et al., 1990; Rivas and Rudd,
1975; Turner, 1989).
One can be excused for questioning how such an ideal organization might be attained and whether
it can ever achieve the desired high degree of safety. The first part can be achieved, of course, through legislation
and a government watchdog such as the Health and Safety Executive in the United Kingdom. The
effectiveness of such a course of action remains to be determined, particularly because other, only
slightly different, cultures have not followed suit, relying perhaps on the self-regulation of the industry
and the threat of civil legal redress (e.g., as in the United States). Are such approaches appropriate for
"ensuring" structural safety?
In most countries the organizational systems in which a structural engineering project will be
designed, constructed, and operated are well established (Cibula, 1971). Typically the organizational
system is focused on the safety of users of the structure and comprises several components, described below.
Codes of practice or mandatory codes set out the collective wisdom and experience of the relevant
profession(s), based both on the experience gained from past failures and on assumed knowledge of
risk acceptability by society for similar projects. This includes allowance for certain types of human
error (mainly slips, etc.). Codes are also useful in helping to avoid particular errors of conceptualization
but they have little influence on the effectiveness of the execution of the design and of construction.
Some degree of control over these aspects can be achieved by the checking and inspection systems.
However, there appears to be little objective information about the effectiveness of existing checking
systems or the degree of checking that is optimal.
Design codes play a central role in the assurance of structural safety. Typically, a design code specifies
load factors (and/or factors of safety or partial factors), and in the case of modern codes is calibrated
to allow for uncertainties in loading definition, in resistance descriptions, and in various other parameters
(such as the modeling of previous experimental work by theoretical, mathematical, and other models
used in design). Although some allowance is made for variability in matters such as workmanship,
design codes are concerned primarily with safety assessment or verification in a "perfect" environment,
that is, one in which no significant human errors are assumed to occur. It follows, therefore, that design
codes define minimum requirements but do not necessarily guarantee adequate execution either in design
or construction. Hence, the satisfaction of design and other codes is a necessary but not sufficient
condition for structural safety (Bosshard, 1979). It also follows that any estimate of structural safety
obtained through the satisfaction of code requirements can be viewed only as a "nominal" or "formal"
measure of structural safety, a matter to which we shall return.
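In its simplest form, the code verification just described reduces to a partial-factor inequality: the factored load effect must not exceed the factored resistance. The factors and characteristic values below are invented for illustration and are not taken from any particular code.

```python
# Simplest partial-factor design check: factored load effect must not
# exceed factored resistance.  All numbers are illustrative assumptions.
gamma_q = 1.5          # load partial factor
gamma_r = 1.15         # resistance partial factor
q_k = 200.0            # characteristic load effect
r_k = 380.0            # characteristic resistance

design_ok = gamma_q * q_k <= r_k / gamma_r
print("design check passed" if design_ok else "design check failed")
```

Note that satisfying such a check says nothing about errors committed in arriving at the characteristic values themselves, which is precisely the "necessary but not sufficient" point made above.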
The legal sanctions system has already been mentioned. It has a passive role. The threat of litigation
and the prospect of possible deregistration or loss of professional standing (or worse) is equivalent to
a pressure for self-regulation, but it is also a lottery. Normally, legal action will be instituted only if a
failure event of significant magnitude occurs and the likelihood of a "pay-out" is sufficiently high (this
applies, in general, also to class actions). Yet for a significant event to occur it is usually necessary for
a considerable number of errors to have been committed before the usual structural safety factors are
exceeded (Lind, 1983a). It follows that a designer may play a lottery, erring on the unsafe side of
conventional practice to be economically competitive and hoping not to commit sufficient errors to
erode the usual factors of safety sufficiently to cause failure.
In a sense, the above organizational system is a "megasystem" within which other organizations
such as designers, consultants, and contractors must operate. There is little formalized control over the
organizational systems of these participants, nor would it be sensible to impose such control unless it can
be demonstrated that there are clearcut advantages for the existence of such controls for society as a
whole. To date such a case does not appear to have been made.
Skill-based behavior covers routine, highly practiced actions (e.g., driving a car or operating a
crane). This behavior seldom involves much conscious effort. Rule-based behavior requires more effort
because memorized or written procedures must be followed (such as complex arithmetic tasks, use of
rules). Such behavior requires a longer response time and is more prone to error than is skill-based
behavior. Both have some application in routine applications in structural engineering design and in
construction.
Knowledge-based behavior is that which involves complex cognitive processes, such as is associated
with problem solving in unfamiliar situations. The greater complexity results in increased response time
and a higher likelihood of error. Data for skill-based response and for procedural responses are relatively
easy to obtain, but it is more difficult to obtain data for cognitive response or behavior. The latter
is also more likely to be subject to various kinds of "environmental" influences and pressures (Reason,
1990).
Working mainly in the aerospace industry, Swain (1978) suggested a somewhat different classification
for human errors.
It is evident that much of the concern here is with stimulus-response behavior, typified in the man-
machine interface situations common in process control, aircraft flying, etc. It is also assumed that
the operator is physically and mentally able to carry out the task required. Hence these types of tasks
and their related errors might be classified also as psychomotor tasks. Again, errors associated with
knowledge-based behavior have direct relevance to structural engineering and to construction of projects.
Clearly, the various descriptions above suggest that errors might be modeled as discrete events (Harris
and Chaney, 1969; Melchers, 1977). It is important to note, however, that in certain situations the
occurrence of an error itself is not a sufficient description because the size or magnitude of the error
may have a bearing on the outcome. Hence modeling of an error requires, in general, that the size also
be considered.
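The discrete-event view, in which an error either occurs or not and, if it occurs, has a random magnitude, can be grafted onto a conventional reliability calculation. In the sketch below, an error occurs with an assumed probability and, when it does, scales the error-free resistance Ro by a random factor; every parameter value here is an invented assumption, not measured human-error data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Error occurrence modeled as a discrete (Bernoulli) event; given
# occurrence, the magnitude of the error is itself random.
p_error = 0.01
occurs = rng.random(n) < p_error
magnitude = rng.normal(loc=0.6, scale=0.2, size=n)   # factor on Ro given error
E = np.where(occurs, magnitude, 1.0)

R0 = rng.normal(loc=300.0, scale=30.0, size=n)       # error-free resistance
Q = rng.normal(loc=200.0, scale=25.0, size=n)        # load
pf_with_error = np.mean(E * R0 - Q <= 0.0)
pf_error_free = np.mean(R0 - Q <= 0.0)
print(f"Pf with error model: {pf_with_error:.2e}, error-free: {pf_error_free:.2e}")
```

Even a small occurrence probability materially increases the notional failure probability in this example, illustrating why the size of an error, not only its occurrence, must be modeled.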
There is a school of thought, for example, Elms and Turkstra (1992), which suggests that human action
cannot be modeled like technical matters and that psychometric studies on one individual are not readily
transferable to another. They also note that much of the information on human behavior would be fuzzy
rather than crisp and a good deal would be anecdotal. Others, including the author, believe that it is
possible for human errors to be modeled within the framework of probability theory. Dealing with the
second point first, we simply note that it is accepted within the conventional subjective probability
framework that uncertainty ("fuzziness") arising from different sources can be considered legitimately
in probabilistic analysis. It may be, and indeed is likely, that the analysis of uncertainties related to
human behavior and organizational matters is much more complex than that due to physical quantities,
but the existence of different types of uncertainties does not, per se, invalidate the analysis. Further,
anecdotal material may be of direct interest when no other information is available. Apart from its use
in identification of uncertainties and helping to describe them, anecdotal information can help in reaching
decisions as to whether further or more detailed investigations are warranted. The objection to the use
of probabilistic models of human error has at its heart the controversial question as to whether prob-
ability theory can represent all types of uncertainty or whether other techniques such as "fuzzy logic"
are to be preferred. It is not proposed to pursue that matter here but simply to note that in the author's
view nothing is gained by moving outside the conventional probabilistic framework.
Turning now to the first point, we note the considerable research effort in areas such as safety
engineering, psychology, and sociology. Further, if the psychometric studies on one individual are not
readily transferable to another, then the discipline of ergonomics would be of no value and the enormous
commitment to aerospace safety assurance, for example, would be founded on false premises. Ergo-
nomics is predicated on the assumption that there are strong commonalities in human behavior and that
such behavior can be measured and documented (Chapanis, 1959). Naturally variability from one in-
dividual to another will occur. Such variability may be considerably greater than the variability found,
for example, in the strength of structural steel. But this does not invalidate the use of such uncertainty
information in mathematical or probabilistic models. Nor is it likely that human behavior is fundamentally
unpredictable. In a statistical sense, most individuals will respond to a particular stimulus or a
particular situation in much the same way. Totally unpredictable behavior is extremely rare and is
therefore little different from the unimaginables mentioned earlier (and of which there is further dis-
cussion below).
It is well known that human behavior is not necessarily governed by ideas of optimization. The
concept of "satisficing" (Simon, 1957, 1969) plays an important role, and suggests that human behavior
tends to be such that a set of perceived pay-offs govern behavior. Generally such pay-offs are based
on incomplete or even selected information. The decision is thus a "satisfactory" rather than an optimal
one. It is also clear that human behavior is influenced by environmental factors and by selective
perception and that such perceptions may be considerably removed from the truth, but there has been no
suggestion that there are large, totally irrational components.
Undoubtedly, human error modeling is more complex than that for physical parameters, and the
modeling must account, at least in principle, for many factors. This might involve the modeling of
the interaction between individual behavior and the organizational structure or "climate" but in many
cases a simplified model will be adopted, in the same way that simplified models are used for certain
poorly understood physical processes.
Probably underlying the objection to including human errors in structural reliability estimates is the
complexity of attempting to do so. It is likely that there are many more factors influencing human
behavior than there are factors influencing the behavior of a typical physical component in a system.
But this does not invalidate reliability analyses incorporating human error factors.
From a practical point of view, it is imperative that human error information be at least considered
for incorporation in any risk analysis. A risk analysis without accounting for human error has little
absolute meaning, although in some circumstances this may be sufficient. One role is as a relative
measure of nominal safety in the manner widely employed for structural design code calibration. How-
ever, it is important to recognize that where human error cannot be ignored entirely, relative risk
assessments are valid only if the human error component can be assumed to be approximately the same
or proportional for each of the systems under consideration. In these circumstances the human error
contribution relates directly to the "human error-free" reliability assessment, relativity is maintained,
and ratios of relative risk are directly comparable with the (undetermined) ratios of absolute risk (Mel-
chers, 1987b). In general this situation does not hold when comparing the risks inherent in quite different
structural engineering designs or when comparing a structural risk with the risk involved in a nuclear
power plant, an offshore facility, or a liquefied petroleum gas facility. Under these conditions, measures
of absolute risk should be used.
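The condition under which relative ("nominal") risk comparisons remain valid can be stated in one line: if the human-error contribution multiplies each error-free estimate by an (approximately) common factor k, ratios are preserved. The numbers below are arbitrary assumptions chosen only to show the identity.

```python
# Two systems compared by their "human error-free" notional risks.
# If human error scales both by roughly the same factor k, the ratio
# of absolute risks equals the ratio of nominal risks.
pf_free_a = 1.0e-5
pf_free_b = 4.0e-5
k = 25.0                      # assumed common human-error multiplier

pf_a = k * pf_free_a
pf_b = k * pf_free_b
print(pf_a / pf_b, pf_free_a / pf_free_b)   # the two ratios coincide
```

When the multiplier differs between the systems being compared, as between a building and a nuclear plant, this identity fails and absolute risk measures are needed.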
It is important to note also that the precise numerical outcome of a comprehensive risk analysis is
not necessarily of great significance. In many cases the process of deriving the outcome and the need
to properly consider all possible failure modes, and the influence of human error at various stages in
the project, will have been of major benefit. It is likely that such analysis will have identified items
that might otherwise have been overlooked and that it will have forced a more detailed consideration
of items the significance of which might have been over- or undervalued. Indeed, the recognition that
the process of analysis has a value perhaps greater than the analysis outcome is well established in the
area of probabilistic risk analysis in the process and nuclear industries (International Study Group, 1985).
It is considered that essentially similar arguments hold for the risk analysis of complex structural
systems.
We should recognize that any model of a real-world situation is no more than a model. Mathematical
models are based on the assumption that the real world can be disaggregated into components and that
the collection of modeled components may be used to model the real world. Any "holistic" aspects
are thereby ignored. This observation is not new and has been much discussed in the system modeling
literature (although perhaps it has been of little interest to engineers). It follows that the degree of detail
is at all times within the discretion of those doing the modeling and the output of the model is controlled
largely by the modelers.
The extent to which human error is incorporated in a reliability analysis is a modeling decision. One
cannot expect good results from a model that ignores human error when all the evidence indicates its
importance. The aim should be to develop the most appropriate model given the level of understanding,
the possible decision implications, and the available resources.
6. SYSTEM MODELING
Much information about the causes of structural failure can be obtained from insurance company
records, newspaper reports, and committees of inquiry (Melchers et al., 1983). The accuracy and the degree
of detail vary considerably with source, as shown in Table 10-7. Most illuminating are the reports
stemming from properly constituted inquiries. Often these deal with organizational and other nontechn-
ical matters and are, therefore, a significant source of information for our purposes. A number of authors
have attempted to summarize the various lessons that can be drawn from such reports (Matousek and
Schneider, 1976; Walker, 1981; Schneider, 1983). This has led to summaries such as given in Tables
10-2 through 10-5. Attempts have also been made to provide frameworks (derived from artificial intelligence ideas or fuzzy set theory) for indicating when human error is likely to be critical to the reliability of a structural project (Pugsley, 1973; Blockley, 1977, 1986; Shibata, 1986).
The link between committing error during one or more of the processes of design, construction, etc.
and structural failure is not always clear from the available historical studies. Often errors are committed
in seemingly simple tasks. In other cases, a large number of errors have had to be committed before
structural failure occurred (Lind, 1983a). Although critical errors have been pinpointed in various studies
of individual failure cases, it has proved almost impossible to extract from such cases a categorization
of critical detailed errors. To some extent the relatively poor understanding of error occurrence and its
significance for structural failure is reflected in the lack of understanding of what is required for effective
error control.
Prime source                                     Evaluators                     Estimated reliability   Effect on
                                                                                of evaluation           profession
------------------------------------------------------------------------------------------------------------------------
Formal reports (e.g., Royal Commission)          Engineers and lawyers          Very high               High
"In-house" reports (not published widely)        Engineers                      High                    Medium
  (e.g., for insurance purposes)
Newspaper reports                                Nonengineers                   Unreliable              Very low
Individual observation (formally reported)       Engineer/nonengineer           Medium                  Medium
Individual observation (not formally reported)   Engineers                      Medium                  Sporadic/uneven
Formalized data banks                            Engineers (with nonengineers)  Medium/high             Very low as yet,
                                                                                                        potentially high

Source: Melchers, R. E., M. J. Baker, and F. Moses (1983). Evaluation of experience. In: IABSE Workshop on Quality Assurance within the Building Process. Zurich, Switzerland: International Association for Bridge and Structural Engineering (IABSE), pp. 9-30. Reprinted with permission.
Rather than expecting to extract all useful information about the influence of human error from
historical antecedents an alternative and more positive approach is to attempt to develop models to
describe the relationship between human error and structural reliability. Various such models, at various
levels of sophistication, have been reported (Rackwitz, 1977, 1986; Lind, 1983a; Ditlevsen and Hasofer,
1983; Melchers, 1978, 1984; Baker and Wyatt, 1979). Efforts have also been made to model the sen-
sitivity of structural failure probability to changes in one or more parts of the structure, the change
being postulated as that due to human error (Nowak, 1979; Nowak and Carr, 1985; Frangopol, 1986).
The most comprehensive attempts at modeling have been to investigate directly the relationship
between human behavior, human error, and structural performance. Fundamentally, such an approach
is difficult, relying on sociological and psychological aspects as well as on more established ergonomics
ideas together with simulation of the structural design and construction processes. It is a long-term proposition, only gradually yielding results, and it is not necessarily seen to have application to every project. Some of this work is described in Sections 7 through 10.
It should be evident that the most significant effect of human error is on the strength or resistance
of the structure. This is because the actual applied loadings to the structural system are external to the
design-construction-use system. Similarly, only rarely is an unpredictable (not unpredicted) loading
applied to a structure through human error. This would involve cases such as structural abuse (as in
industrial buildings) and sabotage or acts of war. All other effects of human error modify the resistance
of the structure either through the conceptualization process, the design process, documentation, or
construction. Schematically, this might be indicated as the modification of the probability density func-
tion representing some resistance parameter (see Fig. 10-1). Also, it is important to recognize that human
error leading to system overstrength is of little relevance to the present discussion; it is only human
error leading to understrength or unsatisfactory performance of the whole system that is critical. This
does not mean that local overstrength possibilities can be ignored, as a little reflection will show.
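The modification sketched in Fig. 10-1 can be illustrated as a mixture density: with some probability the resistance parameter is scaled down by an error factor, and otherwise its original density applies. The normal shapes and every number in this sketch are illustrative assumptions, not data from the chapter:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def resistance_pdf(r, p_error=0.05, error_factor=0.7, mu=100.0, sigma=10.0):
    """Mixture density for the error-affected resistance R_E:
    with probability p_error the resistance is scaled by error_factor
    (understrength), otherwise the original N(mu, sigma) density applies.
    Shapes and numbers are illustrative assumptions only."""
    f_original = normal_pdf(r, mu, sigma)
    # Scaled variable e*R is again normal, N(e*mu, e*sigma):
    f_error = normal_pdf(r, error_factor * mu, error_factor * sigma)
    return (1 - p_error) * f_original + p_error * f_error

# The error branch thickens the lower tail of the resistance density:
print(resistance_pdf(70.0) > normal_pdf(70.0, 100.0, 10.0))
```

Only the lower tail matters here, which is exactly the point made above: error leading to overstrength barely changes the failure estimate, while the understrength branch dominates it.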
In attempting to integrate human error information into a reliability calculation procedure, it is
necessary to have both a mathematical framework for so doing and appropriate data. Although several
approaches have been previously discussed in the structural engineering literature (Rackwitz, 1977;
Melchers, 1978; Nowak, 1979; Lind, 1983a), the basic idea is that the probability of failure, including human error, is given by

P_f = (1 - P_E) P_0 + P_E P_1    (10-1)

where P_E is the probability of human error occurrence, P_0 is the (conditional) probability of system failure without human error occurrence, and P_1 is the (conditional) probability of system failure given human error occurrence.
Figure 10-1. Modification of resistance probability density function for human error and human intervention effects: the original density f_R(r) is modified for human error effects, with R_E = E·R. (Source: Melchers [1987b]. Reprinted with permission.)
P_f = Σ_{i=0} P_i P_{f|i}    (10-2)

where P_i is the occurrence probability of the ith independent error state, and P_{f|i} is the conditional probability of system failure given state i, with i = 0 denoting the system with no error content.
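As a numerical sketch (all probabilities below are hypothetical, chosen only for illustration), the combination over error states in Eq. (10-2) can be evaluated directly; with a single error state it reduces to the two-term form of Eq. (10-1):

```python
# Total probability of failure over independent error states, Eq. (10-2):
#   P_f = sum_i P_i * P_{f|i},  with i = 0 the no-error state.
# All numerical values below are hypothetical, for illustration only.

def failure_probability(state_probs, cond_failure_probs):
    """P_f = sum_i P_i * P_{f|i}; the state probabilities must sum to 1."""
    assert abs(sum(state_probs) - 1.0) < 1e-9
    return sum(p * pf for p, pf in zip(state_probs, cond_failure_probs))

# Two states (no error / error) recover the form of Eq. (10-1):
p_E = 0.01            # probability of human error occurrence (assumed)
P0, P1 = 1e-5, 1e-2   # conditional failure probabilities (assumed)
pf = failure_probability([1 - p_E, p_E], [P0, P1])
print(pf)             # equals (1 - p_E)*P0 + p_E*P1
```

Even with a small error rate, the error term p_E·P1 dominates the no-error term here, which is the numerical argument for not ignoring human error in the reliability estimate.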
This formulation is sufficient for just one failure mode and must be extended for more than one.
This requires a system analysis procedure (cf. Kupfer and Rackwitz, 1980; Melchers, 1979; Ellingwood,
1987) to develop the structural failure modes from errors in design, construction, etc. The manner in
which this aspect has been investigated by the author and others is through the use of event trees using
both binary branching and a random variable approach to error magnitude. For structural engineering
reliability calculations it is important that a proper account is kept of error magnitude and the magnitudes
of resulting structural component or system strengths. A schematic event tree for a generic part of a
design process is shown in Fig. 10-2 (Melchers, 1989). This may be simplified to the basic unit shown
in Fig. 10-3, which represents a binary decision tree and the variability model. By using such elements
together with a complete understanding of the process it is possible to develop a complete event tree
representing the process and all possible outcomes from it. All the possible combinations of decisions
and outcomes can each be described by a vector of probability density functions for structural resistance
R. In the conventional use of event trees, the outcomes are associated only with a point estimate of
probability. It follows that the approach adopted for the present work is more general but also much
more complex than the conventional event tree approach, because convolution of probability information
from one step to the next is required. This might be done using well-known first-order second moment
(FOSM) methods. However, it has been found more convenient to employ Monte Carlo simulation.
This also allows specific elements of design code computation to be carried out without the need to
Figure 10-2. Typical event tree (decision tree); A = calculated resistance. (Source: Melchers, R. E. [1989]. Human error in structural design task. Journal of Structural Engineering, ASCE 115(ST7):1795-1807. Reprinted with permission from the American Society of Civil Engineers.)
simplify the rules to suit FOSM techniques. Details of the procedure have been described in a number
of publications (Melchers, 1989; Stewart, 1990, 1991a, 1992a,b,c). The amount of computation required
for the simulation is considerable, even for a relatively simple design process or a simple structural
system.
By repeating the analysis as in a Monte Carlo approach, estimates may be made of the probability distributions for the resistance vector R_E, which includes the effect of human error. The calculation of the probability of structural failure then follows as

P_f = ∫_D f(x) dx

where f(x) is the joint probability density function of the basic variables X; D is the failure domain described by G(Q, R_E) ≤ 0; R_E is a function of the basic variables X; and Q represents the load random variables.
It is important to note that Q represents the actual loading on the structure and not the design load set. If R_E is replaced by R_0, the resistance random variable vector in the absence of human error components, the usual (nominal) probability of system failure is obtained.
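A minimal Monte Carlo sketch of this comparison (the distributions, rates, and the single multiplicative error state are this sketch's assumptions, not data from the cited studies): sample the actual load Q and the resistance, inject a human-error understrength factor with probability p_error, and count limit-state violations G = R − Q ≤ 0:

```python
import random

def estimate_pf(n=200_000, p_error=0.02, error_factor=0.6, seed=42):
    """Monte Carlo estimate of P(G = R - Q <= 0), without (nominal) and with
    a human-error state that scales resistance by error_factor.
    All distributions and rates are illustrative assumptions."""
    rng = random.Random(seed)
    fail_nominal = fail_with_error = 0
    for _ in range(n):
        q = rng.normalvariate(40.0, 8.0)    # actual load Q (not the design load set)
        r0 = rng.normalvariate(60.0, 6.0)   # error-free resistance R_0
        # Resistance R_E: understrength factor applied with probability p_error
        r_e = r0 * (error_factor if rng.random() < p_error else 1.0)
        fail_nominal += (r0 - q) <= 0.0
        fail_with_error += (r_e - q) <= 0.0
    return fail_nominal / n, fail_with_error / n

pf_nominal, pf_with_error = estimate_pf()
print(pf_nominal, pf_with_error)   # the error-inclusive estimate is the larger one
```

Replacing R_E by R_0 corresponds to the fail_nominal counter: the gap between the two estimates is the contribution of the modeled error state.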
The model used to represent a design or construction sequence must include all relevant aspects of
the process being modeled. This means that all error types must be included and allowance should be
made for the factors that affect those error rates. In most cases insufficient information is available at
present to describe fully all the various effects: this means that average rates will need to be used and
that the results should be subject to sensitivity analyses.
An important aspect of the modeling is that analysis of various processes shows that in practice
various checking mechanisms occur. For example, in construction, the "look" of some item may alert those involved to the possibility of a design error or of a drawing misinterpretation error. This means that modeling of
internal as well as external checking and inspection processes is also required. Models to represent various
types of checking processes have been developed-these are described in more detail in section 9.
It must be possible to verify the outcome predicted by the generalized event tree model of the process.
One way in which this might be done is to compare the outcomes of the simulations with survey results
for the actual process as obtained from practising engineers and others. Evidently this is not practical
for construction processes and for major design exercises, but it can be done for relatively limited
Figure 10-3. Binary decision (error of omission, error of commission) and variability model as component for event tree. (Source: Melchers, R. E. [1989]. Human error in structural design task. Journal of Structural Engineering, ASCE 115(ST7):1795-1807. Reprinted with permission from the American Society of Civil Engineers.)
component parts of a larger project. Such limited tasks, still composed of a large number of individual actions or "microtasks," have been termed "macrotasks" in this chapter.
7. MICROTASK STUDIES
To make the above procedures work, it is necessary to have available error occurrence probabilities for
various procedures, as well as the more conventional data for structural reliability calculations. Data on
simple cognitive and psychomotor tasks (e.g., button pushing and dial reading) exist in the human
factors literature (Harris and Chaney, 1969; Meister, 1966) and in the aircraft industry (Swain, 1978).
Typically, reliability rates are in the range 0.9-0.999, depending on the complexity and subjective
difficulty of the task, with 0.99 being a reasonable mean, although it is well known that environmental,
psychological, and organizational factors can have a major influence on these rates (Poulton, 1971).
"Performance-modifying factors" have been suggested in the ergonomics literature to account for these
influences, but at present these are only a rough measure of the changes that might be brought about
in practice. Probably a more useful approach is to perform a sensitivity analysis to ascertain the effect
of changes in the human error rates on the overall assessment.
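Such a sensitivity analysis is straightforward once a combination rule such as Eq. (10-1) is adopted. Sweeping the task reliability over the quoted 0.9-0.999 range (the conditional failure probabilities below are hypothetical) shows how strongly the overall estimate can depend on the assumed error rate:

```python
def pf_given_error_rate(p_error, pf_no_error=1e-5, pf_with_error=1e-2):
    """Eq. (10-1)-style combination; conditional probabilities are
    hypothetical values chosen for illustration."""
    return (1 - p_error) * pf_no_error + p_error * pf_with_error

# Task reliabilities 0.9-0.999 correspond to error rates 0.1-0.001:
for reliability in (0.9, 0.99, 0.999):
    p_error = 1 - reliability
    print(reliability, pf_given_error_rate(p_error))
```

In this sketch a hundredfold change in the assumed error rate shifts the overall failure probability by nearly two orders of magnitude, which is why average rates should be accompanied by sensitivity checks rather than reported alone.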
Much research continues to be carried out on some types of tasks, principally those of interest to
the nuclear and aerospace industries, but as noted earlier there is a lack of information in the cognitive
area (Embrey, 1976). In the studies to be described, it was found to be necessary to obtain error rates
for such apparently simple operations as calculator computation, table reading or lookup, and table
interpolations. These are all elementary tasks necessary in the conventional design process (Melchers
and Harrington, 1982, 1984; Stewart, 1987). Details of these studies can be obtained from the literature
or summaries (Melchers, 1989; Stewart and Melchers, 1988). Suffice it to note here that data were
obtained both from practising engineers and from large-scale surveys on engineering students in dif-
fering institutions. Although there were some differences, these did not invalidate the use of data
obtained from later year engineering students performing reasonably simple tasks.
In general, the available data are insufficient to develop verifiable probabilistic models. A range of
models can be fitted empirically (Stewart, 1992a) and some models have been adopted on the basis of
information in other industries, such as the lognormal model for the performance of experienced op-
erators in the aerospace industry (Swain and Guttman, 1983; Stewart, 1992b). In general, however,
better fundamental understanding of the mechanism of error causation still is required to enable sound
theoretical models to be postulated. It is not necessarily useful to employ models that are convenient
for calculation purposes but have no apparent theoretical basis, such as the loglinear models proposed
by Lin and Hwang (1992). Similarly, models based on the assumption of errors being random events
over some task or a time interval (Kupfer and Rackwitz, 1980; Nessim and Jordaan, 1983) are not
supported by the available data, which suggest that the error rate is closely related to task complexity.
For example, in calculator use the error rate increases with the number of mathematical operations
required (Melchers and Harrington, 1982, 1984). Related work in other areas of research can be useful
here. Thus the error rate in keyboard entry of data has some relevance to error rates in calculator use.
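The dependence of the error rate on the number of operations can be made concrete with a simple model, under the (strong, and here merely assumed) hypothesis that the elementary operations fail independently: if each operation succeeds with probability r, a task of n operations is error-free with probability r^n, so the task error rate grows with complexity:

```python
def task_error_probability(n_operations, per_op_reliability=0.99):
    """P(at least one error) = 1 - r**n, under the assumption that the
    n elementary operations fail independently. Illustrative only."""
    return 1.0 - per_op_reliability ** n_operations

# Error probability grows with the number of operations in the task:
for n in (1, 5, 20):
    print(n, task_error_probability(n))
```

Under this assumption, a 20-operation calculation at 99% per-operation reliability already carries roughly an 18% chance of containing at least one error, consistent with the observed trend that longer calculator sequences produce more errors.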
Because of the expense and effort required, macrotask studies for design processes have been carried
out only for a limited range of tasks, including (1) loading determination, given basic site information
and structural configuration, (2) design criterion selection given basic project information, and (3) struc-
tural steel member design (a rafter in a rigid portal bent). In each case mailed survey questionnaires
were employed, despite their obvious drawbacks, as the only practical approach. In all cases, certain
criteria were specified and the respondent was requested to complete the task, being told only in the
vaguest terms that the research project was concerned with an investigation of design processes and
that the respondents should spend no more time on their responses than would be usual in their esti-
mation for the type of task they were asked to perform. The response rate was not high, presumably
due to the amount of work involved, but those responses that were received proved to be useful. In
particular they showed the manner of working, including false steps and corrections of mistakes and
minor errors, information that was later used also in the development of checking models. The responses
were carefully scrutinized for obvious incompetence and then standardized to allow comparison and
the estimation of statistical parameters. Details of this and related work have been given elsewhere and
are not described here (Melchers, 1989; Stewart and Melchers, 1988).
With the inclusion into the system model also of a self-checking process model (described in more
detail in the next section), it was possible to use the microtask data in the design process model and to
compare the outcome with the macrotask results. A typical comparison for the design of a rafter of a
rigid frame bent is shown in Figure 10-4. The hatched histogram is obtained from the simulation
procedure described in Section 6, whereas the unhatched histogram is that obtained from the macrotask
survey. It is evident that the comparison is not close, but is not unreasonable given the small sample
size of the survey and the inherent uncertainties in the simulated results. By adjusting the data used in
the simulated result it might be possible to match more closely the survey results, but this was considered
inappropriate. The given results are based entirely on a priori best estimates of all data: a similar
approach was used for the other two macrotasks studied (Stewart, 1987).
Figure 10-4. Comparison of macrotask survey results (unhatched) and simulated results (hatched): relative frequency of bending moment [kNm] at Nodes 1, 2, and 3. (Source: Melchers, R. E. [1989]. Human error in structural design task. Journal of Structural Engineering, ASCE 115(ST7):1795-1807. Reprinted with permission from the American Society of Civil Engineers.)
It is of interest to note that when the simulation process is continued to include also the actual
selection of a member size, based on calculated internal actions (bending moment, axial force, and
shear force combinations) and their various worst combinations, the outcome in terms of member size
is rather less sensitive to the occurrence of human error than Figure 10-4 would suggest. Even with
errors, most design simulations produced about the correct member size. Where the most appropriate member size is close to the bottom of the range of stock sizes available, the influence of human error is more likely to lead to oversizing than undersizing, as would be expected (Stewart and Melchers, 1988,
1989a).
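The oversizing bias near the bottom of a stock range can be illustrated with a toy selection rule (the stock list and all numbers are invented for this sketch): the design picks the smallest stock section whose capacity meets the calculated demand, so an erroneous demand changes the outcome only when it crosses a stock boundary, and below the bottom size it changes nothing at all:

```python
# Hypothetical stock capacities (e.g., section moment capacities in kNm):
STOCK = [50, 80, 120, 180, 270, 400]

def select_member(demand):
    """Smallest stock section with capacity >= demand (toy selection rule)."""
    for capacity in STOCK:
        if capacity >= demand:
            return capacity
    return STOCK[-1]   # demand above the range: take the largest section

# Correct demand near the bottom of the range:
true_demand = 45.0
correct = select_member(true_demand)

# A large underestimation still yields the same (bottom) size, while a
# comparable overestimation jumps to a larger section:
print(select_member(0.5 * true_demand) == correct)   # undersizing absorbed
print(select_member(1.5 * true_demand) > correct)    # oversizing
```

With the demand near the bottom stock size, underestimation is absorbed by the smallest available section whereas overestimation forces a step up, which is the asymmetry noted above.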
The simulation procedure outlined in Section 6 has been applied, to date, to only a limited number
of problems. These include structural design of a steel portal frame (Melchers, 1989), steel beam design
(Stewart, 1990), design loading (Stewart and Melchers, 1988), safe load tables (Stewart, 1991b), and
reinforced concrete design (Stewart, 1992c). The application to construction processes is discussed
briefly in Section 10.
9.2. Self-Checking
There is some evidence (Rabbitt, 1978) to suggest that in self-checking, errors of omission (i.e.,
failure to perform a task) are detected less often than errors of commission (i.e., incorrect performance
of a task). Presumably this is a function of the type of task, but in procedural situations, such as in
operating an aircraft, this is not surprising. Whether it applies more generally to design-type situations
requires further investigation.
An indication of the rate of self-checking for errors of commission was obtained by examination of
over 800 individual responses in examination scripts of undergraduate engineering students (Stewart
and Melchers, 1989a). Each response was carefully examined for evidence of correction, which, given
examination conditions, was unlikely to have been prompted by other than self-checking. The results
suggest that self-checking detects only smaller or minor errors that may occur in calculations (Norman,
1981) and that it cannot safeguard adequately against errors due to misconceptions, oversights, or
misunderstandings (Grill, 1984). It seems that the latter type of error in particular is the result of a
deliberate and conscious decision, which, once taken, is seldom doubted by the decision maker.
Figure 10-5. Checking efficiency as a function of checking time. (Source: Stewart, M. G., and R. E. Melchers [1989a]. Checking models in structural design. Journal of Structural Engineering, ASCE 116(ST6):1309-1324. Reprinted with permission from the American Society of Civil Engineers.)
subsumed in self-checking.) Again, little information existed about the effectiveness of overview check-
ing. Some indication was obtained from a survey conducted among practising structural engineers.
The respondents were asked to rate 11 different, simple, structural designs in terms of simple de-
scriptors of their structural adequacy for strength, given the loading conditions. It was stated clearly
that the decisions were to be based on personal judgment and to be made without calculations or other aids. Although there was no way of ensuring that this request was adhered to, the results are interesting nevertheless. A typical result is given in Fig. 10-7, which shows the variation of the judgment
"safe" with changing error magnitude (Stewart and Melchers, 1989a). The data were also analyzed for
the decision as to whether the member was considered oversized and a result generally similar to that
shown in Fig. 10-7 was obtained. There appeared to be no relationship between time taken and decision
made, or, somewhat surprisingly, between experience and decision made. This is despite the conventional
wisdom that a number of factors influence error control measures (Ingles and Nawar, 1983; Ingles,
1986) and that lack of experience is a major factor in contributing to structural failure (Matousek and
Schneider, 1976; Walker, 1981). However, an independent analysis of structural failure suggests that
experience may not be such an important parameter (Blockley, 1977).
Figure 10-6. Typical "independent detailed design checking" efficiency model as a function of error magnitude. (Source: Stewart, M. G., and R. E. Melchers [1989a]. Checking models in structural design. Journal of Structural Engineering, ASCE 116(ST6):1309-1324. Reprinted with permission from the American Society of Civil Engineers.)
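A curve of the kind shown in Fig. 10-6 can be sketched with a logistic function; the shape and its parameters here are assumptions of this illustration, not the model fitted by Stewart and Melchers:

```python
import math

def detection_probability(error_magnitude, steepness=4.0, offset=0.5):
    """Assumed logistic checking-efficiency model: the probability that a
    check detects an error rises with the deviation |m - 1| of the error
    factor m from the error-free value 1. Parameters are illustrative only."""
    deviation = abs(error_magnitude - 1.0)
    return 1.0 / (1.0 + math.exp(-steepness * (deviation - offset)))

# Near-correct values are rarely flagged; gross errors usually are:
print(detection_probability(1.1))   # small error, low detection probability
print(detection_probability(0.2))   # gross understrength error, high detection
```

This matches the qualitative observation above: checking is effective against gross errors but cannot safeguard adequately against small deviations, misconceptions, or oversights.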
The importance of human error during construction should not be underestimated (Brown and Yin,
1988) (see also Table 10-2). There is some evidence that in the United States at least, construction error
is related strongly to subcontractors, who perform much of the actual work (Eldukair and Ayyub,
1991). Compared to structural design, however, construction in general is a much less standardized
activity. However, for particular sections of the industry, such as reinforced concrete building construction, or steel structures, considerable commonalities in activity occur, rendering process modeling feasible. Hence it should be possible, given sufficient resources and sufficient understanding of the processes involved, to perform simulation of the processes including the influences of human error.

Figure 10-7. Comparison of "safe" model and survey data (plotted ± one standard deviation). (Source: Stewart, M. G., and R. E. Melchers [1989a]. Checking models in structural design. Journal of Structural Engineering, ASCE 116(ST6):1309-1324. Reprinted with permission from the American Society of Civil Engineers.)
To date, few attempts have been made to use models to simulate construction errors. Yamamoto and
Ang (1985) have discussed the reliability of braced excavation systems in construction and, in an
approach not unlike that described in Section 6 for design studies, Hadipriono and Lin (1986) used a
simple fault tree model to analyze bridge falsework failures. These were considered to be due to enabling
and triggering events (construction errors), each partly due to human error and described by a censored
Poisson process model, with mean error rates and other factors selected by consultation with highway
officials.
Detailed modeling of the construction process for in situ reinforced concrete beams together with
modeling of human error and inspection have been performed by Stewart (1992b). He used a procedure
similar to that described in Section 6 for the study of design errors. As expected, he found that human
error tends to lower reliability of the beam when subjected to applied loads but that inspection during
construction increases it. Nevertheless, there remains a significant difference between the reliability estimate so obtained and the nominal reliability estimate (i.e., the reliability estimate without consideration of human error).
Within the simulation framework sketched in the above sections, it is clear that a great variety of better
quality information might be necessary to improve the predictions. This includes error rates and their
probabilistic description, the influence of various factors affecting error rates, and the influence of
organization and culture. We have argued already that simplistic models, such as point estimates, are
not likely to be useful and that good-quality descriptions are essential. It is important, however, that
efforts not be misdirected. Attention should be given to those matters having the greatest influence on
the outcome.
Available research suggests that organizational factors have been involved in many man-made dis-
asters. However, there appear to have been no control experiments or observations to ascertain how
successful projects have been despite their organizations and their "climate." Further, the importance
of organizational factors might well vary with different engineering systems, with process-type industries
more vulnerable than some others, such as structural engineering, for which much design is circum-
scribed by design standards, checking systems, etc., and for which construction is also subject to much
control and inspection. Further, for construction, there is often an element of prior warning that errors
are being committed. Also, the consequences of failure during construction do not necessarily have the
same implication(s) for personnel safety as has, for example, the failure of chemical processes. This
suggests that, perhaps, the structural engineering/construction industry is somewhat more robust against
human error consequences than some other industries.
As Lind (1986) has suggested, it is not sufficient to limit attention to only one or two error control
strategies-many strategies must be used because some are more effective against some types of errors
than others. The work of Stewart (1987) has focused on design checking processes, but there are many
other matters still awaiting research. For example, just how effective is the threat of legal sanction and
how does this depend on the legal system itself? Are there differences in engineering reliability between
the United States and Great Britain, or the United States and Germany or France, say, attributable to
their differing legal systems? Or are there other differences sufficient to mask or confuse the compar-
isons? Also, is there sound evidence that supports the popular supposition that experience, or staff
selection, or training has an influence on structural reliability and, if so, can we suggest appropriate
levels of such measures?
12. CONCLUSION
In this chapter we have reviewed some aspects of the importance and influence of human error in
structural engineering and in particular for structural reliability assessment. Trends in the techniques
being developed for including human error effects in reliability assessment were sketched and an in-
dication given of the state of the art in this area. It was noted that there is not universal agreement that
human aspects can be integrated in structural reliability assessment. However, the view was put herein
that reliability measures without such input can at best be used to make relative comparisons of risk
and even such comparisons are not valid between systems or structures differing significantly from each
other. In this sense the aspect of human error and human performance is very much part of the proper
modeling of the system under consideration: the degree, intensity, and extent of modeling is a matter
for careful consideration by analysts. The techniques described herein enable some degree of human
error modeling and the modeling of certain control measures.
REFERENCES
ARAFAH, A. M., and A. S. NOWAK (1986). Sensitivity analysis for human errors. In: Modeling Human Errors in
Structural Design and Construction. A. S. Nowak, Ed. New York: American Society of Civil Engineers, pp.
170-178.
BAKER, M. J., and T. A. WYATT (1979). Methods of reliability analysis for jacket platforms. In: Proceedings of the
2nd International Conference on Behavior of Offshore Structures. Cranfield, U.K.: British Hydrodynamics
Research Association, pp. 499-520.
BARNETT, V. D. (1973). Comparative Statistical Inference. New York: John Wiley & Sons.
BLOCKLEY, D. I. (1977). Analysis of structural failures. Proceedings of the Institution of Civil Engineers, Part I
62:51-74.
BLOCKLEY, D. I. (1986). An AI tool in the control of structural safety. In: Modeling Human Errors in Structural
Design and Construction. A. S. Nowak, Ed. New York: American Society of Civil Engineers, pp. 99-115.
BOSSHARD, W. (1979). Structural safety-a matter of decision and control. In: IABSE Surveys. Zurich, Switzerland:
International Association of Bridge and Structural Engineering, pp. 1-27.
BROWN, C. B. (1979). A fuzzy safety measure. Journal of Engineering Mechanics Division, ASCE 105(EM5):855-
872.
BROWN, C. B. (1986). Incomplete design paradigms. In: Modeling Human Error in Structural Design and Con-
struction. A. S. Nowak, Ed. New York: American Society of Civil Engineers, pp. 8-12.
BROWN, C. B., and X. YIN (1988). Errors in structural engineering. Journal of Structural Engineering, ASCE
114(ST4):2575-2593.
CHAPANIS, A. (1959). Research Techniques in Human Engineering. Baltimore, Maryland: Johns Hopkins Press.
CIBULA, E. (1971). The Structure of Building Control-an International Comparison. British Royal Society Current
Paper 28/71. London: Her Majesty's Stationery Office.
Department of Energy (1990). Report on the Public Inquiry into the Piper Alpha Disaster. Report No. CM1310.
London: Her Majesty's Stationery Office.
DITLEVSEN, O., and A. M. HASOFER (1983). Design Decision Model Considering Mistake Proneness. Report No.
273. Lyngby, Denmark: The Technical University of Denmark.
DRURY, C. G., and J. G. Fox (1975). Human Reliability in Quality Control. London: Taylor and Francis.
ELDUKAIR, A. A., and B. M. AYYUB (1991). Analysis of recent United States structural and construction failures.
Journal of Performance of Constructed Facilities, ASCE 5(1):57-73.
ELLINGWOOD, B. (1987). Design and construction error effects on structural reliability. Journal of Structural En-
gineering, ASCE 113(ST2):409-422.
ELMS, D. G., and C. J. TURKSTRA (1992). A critique of reliability theory. In: Engineering Safety. D. I. Blockley,
Ed. London: McGraw-Hill, pp. 427-445.
EMBREY, D. E. (1976). Human Reliability in Complex Systems: An Overview. Report No. NCSR-R10. London:
United Kingdom Atomic Energy Authority.
FRANGOPOL, D. M. (1986). Combining HE in risk assessment. In: Modeling Human Errors in Structural Design
and Construction. A. S. Nowak, Ed. New York: American Society of Civil Engineers, pp. 144-159.
GRILL, L. (1984). Present trends and relevant applications to increase reliability of structures. In: Proceedings of
the Seminar on Quality Assurance, Codes, Safety and Risk in Structural Engineering and Geomechanics.
Victoria, Australia: Monash University, pp. 85-90.
HADIPRIONO, F. C., and C. S. S. LIN (1986). Errors in construction. In: Modeling Human Errors in Structural
Design and Construction. A. S. Nowak, Ed. New York: American Society of Civil Engineers, pp. 57-64.
HARRIS, D. H., and F. B. CHANEY (l969). Human Factors in Quality Assurance. New York: John Wiley & Sons.
HAUSER, H. (1979). Lessons from European failures. Concrete International, ACI 11(12):21-25.
INGLES, O. G. (1986). Where should we look for error control? In: Modeling Human Errors in Structural Design
and Construction. A. S. Nowak, Ed. New York: American Society of Civil Engineers, pp. 13-21.
INGLES, O. G., and G. NAWAR (1983). Evaluation of engineering practice in Australia. In: Quality Assurance within
Building Process. Report No. 47. J. Schneider, Ed. Vienna: International Association for Bridge and Struc-
tural Engineering.
International Study Group (1985). Risk Analysis in the Process Industries. International Study Group Report.
London: Institution of Chemical Engineers.
KLETZ, T. A. (1985). An Engineer's View of Human Error. London: Institution of Chemical Engineers.
KNOLL, F. (1986). Checking techniques. In: Modeling Human Errors in Structural Design and Construction. A. S.
Nowak, Ed. New York: American Society of Civil Engineers, pp. 26-42.
KUPFER, J. and R. RACKWITZ (1980). Models for human error and control in structural reliability. In: Final Report
of the 11th Congress. Zurich, Switzerland: International Association for Bridge and Structural Engineering,
pp. 1019-1024.
LIN, Y.-L., and S.-L. HWANG (1992). The application of the log-linear model to quantify human errors. Reliability
Engineering and System Safety 37:157-165.
LIND, N. C. (1983a). Models of human error in structural reliability. Structural Safety 1:167-175.
LIND, N. C. (1983b). Management of gross errors. In: Proceedings of the 4th International Conference on Appli-
cations of Statistics and Probability in Soil and Structural Engineering, Bologna, Italy: Pitagora Editrice,
pp. 669-681.
LIND, N. C. (1986). Control of human error in structures. In: Modeling Human Errors in Structural Design and
Construction. A. S. Nowak, Ed. New York: American Society of Civil Engineers, pp. 122-127.
MCCORMICK, E. J. (1964). Human Factors Engineering. New York: McGraw-Hill.
MATOUSEK, M., and J. SCHNEIDER (1976). Untersuchungen zur Struktur des Sicherheitsproblems bei Bauwerken.
Bericht No. 59. Birkhauser Verlag, Institut fur Baustatik und Konstruktion, ETH Zurich (see also Hauser,
1979).
MEISTER, S. (1966). Human factors in reliability. In: Reliability Handbook. W. G. IRESON, Ed. New York: McGraw-
Hill.
MELCHERS, R. E. (1977). Influence of organization on project implementation. Journal of Construction Division,
ASCE 103(4):611-625.
MELCHERS, R. E. (1978). The influence of control processes in structural engineering. Proceedings of the Institution
of Civil Engineers, Part II 65:791-807.
MELCHERS, R. E. (1979). Selection of control levels for maximum utility of structures. In: Proceedings of the 3rd
International Conference on Applications of Statistics and Probability in Soil and Structural Engineering.
Randwick, Australia: Unisearch Limited, pp. 839-849.
236 Human Errors and Structural Reliability
MELCHERS, R. E. (1984). Human Error in Structural Reliability Il-Review of Mathematical Models. Research
Report 311984. Victoria, Australia: Monash University.
MELCHERS, R. E. (1987a). Human errors, human intervention and structural safety predictions. In: IABSE Pro-
ceedings. Report No. P-119-1987. Zurich, Switzerland: International Association for Bridge and Structural
Engineering, pp. 177-190.
MELCHERS, R. E. (1987b). Structural Reliability Analysis and Prediction. Chichester, England: Ellis Horwood/John
Wiley & Sons.
MELCHERS, R. E. (1989). Human error in structural design task. Journal of Structural Engineering, ASCE 115(ST7):
1795-1807.
MELCHERS, R. E., and M. V. HARRINGTON (1982). Human Error in Simple Design Tasks. Research Report 3/1982.
Victoria, Australia: Monash University.
MELCHERS, R. E., and M. V. HARRINGTON (1984). Human Error in Structural Reliability I-lnvestigation of Typical
Design Tasks. Research Report 2/1984. Victoria, Australia: Monash University.
MELCHERS, R. E., M. 1. BAKER, and F. MOSES (1983). Evaluation of experience. In: IABSE Workshop on Quality
Assurance within the Building Process. Zurich, Switzerland: International Association for Bridge and Struc-
tural Engineering, pp. 9-30.
NESSIM, A. M., and I. JOROAAN (1983). Decision making for error control in structural engineering. In: Proceedings
of the 4th International Conference on Applications of Statistics and Probability Theory. Bologna, Italy:
Pitagora Editrice, pp. 713-728.
NORMAN, D. A. (1981). Categorisation of action slips. Psychological Review 88(1):1-15.
NowAK, A. S. (1979). Effect of human error on structural safety. Journal of the American Concrete Institute 76(9):
959-972.
NowAK, A. S., Ed. (1986). Modeling Human Errors in Structural Design and Construction (Proceedings of the
National Science Foundation Workshop). New York: American Society of Civil Engineers.
NOWAK, A. S., and R. I. CARR (1985). Sensitivity analysis of structural errors. Journal of Structural Engineering,
ASCE 111(8):1734-1746.
PERROW, C. (1984). Normal Accidents: Living with High-Risk Technologies. New York: Basic Books.
PETROSKY, H. (1985). To Engineer Is Human. New York: St. Martin's Press.
PIDGEON, N. F., J. STONE, D. I. BLOCKLEY, and A. B. TURNER (1990). Management of safety through lessons from
case histories. In: Safety and Reliability in the 90's: Will Past Experience or Prediction Meet or Needs?
M. H. Walter and R. F. Cox, Eds. London: Elsevier Applied Science Publishers, pp. 201-216.
POULTON, C. (1971). Skilled performance and stress. In: Psychology at Work. P. B. WARR, Ed. Harmondsworth,
U.K.: Penguin Books.
PUGSLEY, A. C. (1973). The prediction of proneness to structural accidents. Structural Engineering 51(6):195-196.
RABBITT, P. (1978). Detection of errors by skilled typists. Ergonomics 21(11):945-958.
RACKWITZ, R. (1977). Note on the Treatment of Errors in Structural Reliability. Report No. SFB 96. Munich,
Germany: Berichte zur Sicherheitstheorie der Bauwerke.
RACKWITZ, R. (1986). Human errors in design and structural failure. In: Modeling Human Errors in Structural
Design and Construction. A. S. Nowak, Ed. New York: American Society of Civil Engineers, pp. 216-224.
RASMUSSEN, 1. (1976). The role of the man-machine interface in systems reliability. In: Generic Techniques in
System Reliability Assessment. E. J. Henley and 1. W. Lynn, Eds. Leyden: Noordhoff, pp. 315-323.
RASMUSSEN, J. (1979). What Can Be Learned from Human Error Reports? Riso Report N-17-79. Riso, Denmark:
Riso National Laboratory.
REASON, J. (1990). Human Error. Cambridge, England: Cambridge University Press.
RIVAS, J. R., and D. R. Ruoo (1975). Man-machine synthesis of a disaster-resistant system. Operations Research
23(1):2-21.
ROUSE, W. B. (1985). Optimal allocation of system development resources to reduce and/or tolerate human error.
IEEE Transactions on System, Man and Cybernetics 15(5):620-630.
Human Errors and Structural Reliability 237
Royal Commission (1971). Report of Royal Commission into the Failure of West Gate Bridge. Victoria, Australia:
Government Printers.
SCHNEIDER, 1., Ed. (1983). Quality Assurance within the Building Process. Report No. 47. Zurich, Switzerland:
International Association of Bridge and Structural Engineering.
SHIBATA, H. 1986. The role of regulatory documents in controlling HE. In: Modeling Human Errors in Structural
Design and Construction. A. S. NowAK, Ed. New York: American Society of Civil Engineers, pp. 43-56.
SIBLY, P. G., and A. C. WALKER (1977). Structural accidents and their causes. Proceedings of the Institution of
Civil Engineers, Part 162:191-208.
SIMON, H. A. (1957). Models of Man. New York: John Wiley & Sons.
SIMON, H. A. (1969). The Science of the Artificial. Cambridge, Massachusetts: MIT Press.
STEWART, M. G. (1987). Control of Human Errors in Structural Design. Ph.D. Thesis. Newcastle, Australia: Uni-
versity of Newcastle.
STEWART, M. G. (1990). Human error in steel beam design. Civil Engineering Systems 7(2):94-1Ol.
STEWART, M. G. (1991a). Probabilistic risk assessment of quality control and quality assurance measures in struc-
tural design. IEEE Transactions on Systems, Man and Cybernetics 21(5):1000-1007.
STEWART, M. G. (1991b). Safe load tables: A design aid in the prevention of human error. Structural Safety 10:
269-282.
STEWART, M. G. (1992a). Modeling human error rates for human reliability analysis of a structural design task.
Reliability Engineering and System Safety 36:171-180.
STEWART, M. G. (1992b). A human reliability analysis of reinforced concrete beam construction. Civil Engineering
Systems 9(3):227-247.
STEWART, M. G. (1992c). Simulation of human error in reinforced concrete design. Research in Engineering Design
4(1):51-60.
STEWART, M. G., and R. E. MELCHERS (1988). Simulation of human error in a design loading task. Structural
Safety 5(4):285-297.
STEWART, M. G., and R. E. MELCHERS (1989a). Checking models in structural design. Journal of Structural En-
gineering, ASCE 116(ST6):1309-1324.
STEWART, M. G., and R. E. MELCHERS (1989b). Error control in member design. Structural Safety 6(1):11-24.
STEWART, M. G., and R. E. MELCHERS (1989c). Decision model for overview checking of engineering designs.
International Journal of Industrial Ergonomics 4:19-27.
SWAIN, A. D. (1978). Estimating Human Error Rates and Their Effects on System Reliability. Report No. SAND-
77-1240. Albuquerque, New Mexico: Sandia National Laboratory.
SWAIN, A. D., and H. E. GUTTMAN (1983). Handbook of Human Reliability Analysis with Emphasis on Nuclear
Power Plant Applications. Washington, D.C.: Nuclear Regulatory Commission.
TuRNER, A. B. (1976). The organization and inter-organizational development of disasters. Administrative Sciences
Quarterly 21:378-397.
TuRNER, A. B. (1978). Man-Made Disasters. London: Wykeham Press.
TuRNER, A. B. (1989). How can we design a safe organization? Paper represented at the 2nd International Con-
ference on Industrial Organization and Crisis Management, November 3-4, 1989, Leonard M. Stem School
of Business, New York University (New York, NY).
WALKER, A. C. (1981). Study and analysis of the first 120 failure cases. In: Structural Failure in Buildings. London:
Institution of Structural Engineers, pp. 15-39.
YAMAMOTO, M., and A. H.-S. ANo (1985). Significance of gross errors on reliability of structures. In: Proceedings
of the 4th International Conference on Structural Safety and Reliability (ICOSSAR), (Kobe, Japan), Vol. 3.
New York: International Association for Structural Safety and Reliability, pp. 669-674.
ZAKAy, D., and S. WOOLER (1984). Time pressure, training and decision effectiveness. Ergonomics 27(3):273-284.
11
NONDESTRUCTIVE EXAMINATION RELIABILITY
F. A. SIMONEN
1. INTRODUCTION
Structural components are inspected to establish their integrity and thereby ensure their safety and
reliability. These inspections can be performed during the fabrication process (preservice inspection),
after installation, or during the operational life of the component (in-service inspection). Many methods
are available for nondestructive examinations (NDEs) and the corresponding cost and effectiveness of
these methods in detecting structural degradation can vary widely. This chapter describes some of the
methods commonly used for NDE, discusses some of the factors that govern the reliability of these
methods, and presents representative data from research programs that have quantified the reliability of
NDE. The discussion emphasizes inspection of pressure vessels and piping systems at commercial
nuclear power plants, because of the high level of research activity in this industry.
There are many available NDE methods for assessing the integrity of structural components. The
method selected for a given application is based on the type of degradation that is to be detected, and
also on practical limitations such as the cost of performing the inspection. This chapter discusses the
advantages and limitations of alternative NDE methods. The focus is on the detection of cracklike
defects in metallic components, using ultrasonic testing (UT) methods, because recent efforts have
quantified the reliability of UT methods, and have resulted in data that can be used as inputs to prob-
abilistic structural mechanics calculations.
2. ABBREVIATIONS
AE Acoustic emission
ASM American Society for Metals
ASME American Society of Mechanical Engineers
ASNT American Society for Nondestructive Testing
BPVC Boiler and pressure vessel code
DAC Distance amplitude correction
The NDE methods used for assessing the integrity of structural components range from simple visual
examinations to those involving "high technology" such as ultrasonics coupled with advanced signal
processing techniques. Some of the common NDE methods and their strengths and limitations are
described below. Additional information can be found in other publications such as American Society for Metals (ASM, 1989) and Bush (1983).
the AE events reliably and to discriminate the valid events from other sources of acoustic energy. The sizing of defects by AE methods is relatively difficult. However, in many cases AE offers the potential
to monitor known defects continuously for evidence of growth that may occur during the operation of
the component.
The objective of an NDE program is to enhance the reliability of structural components by detecting
flaws that can degrade the strength of the components. Once flaws of structural significance have been detected, corrective actions can be taken to prevent service failures. These corrective actions can consist of changing the operation of the power plant, or repairing, retiring, or replacing the
component. Ideally an inspection should detect all flaws. The inspection should also correctly charac-
terize their size, location, orientation, etc., so that only those flaws of structural significance will be
repaired, and other flaws will be correctly classified as being benign. In addition, the ideal NDE system
will never incorrectly indicate the presence of flaws that do not actually exist. Nor will the ideal system
incorrectly characterize flaws so that significant flaws are left unrepaired or benign flaws are unneces-
sarily repaired.
Given that real NDE systems perform imperfect inspections, a number of measures are commonly
used to quantify the reliability of a given NDE system (e.g., equipment, personnel, and procedures) for
a given application (e.g., ultrasonic examination of welds in stainless steel piping). Some of these
measures are defined in the following discussion.
Errors in flaw sizing can be systematic (the measured size is consistently larger or smaller than the actual size) or random in nature. Sizing errors are typically made
up of a combination of systematic and random errors. The random types of errors are often described
by statistical distributions. A reliable NDE method should have both a high detection probability and
an acceptable sizing accuracy.
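The decomposition of sizing error into a systematic and a random part can be illustrated with a small simulation. This is a sketch under assumed values: the bias and scatter figures below are hypothetical illustrations, not data from this chapter.

```python
import random
import statistics

# Illustrative assumption: a method that oversizes flaws by 0.05 in. on
# average (systematic error), with normally distributed random scatter
# of 0.02 in. standard deviation.
BIAS = 0.05      # systematic sizing error (in.)
SCATTER = 0.02   # standard deviation of random sizing error (in.)

def simulate_sizing(true_depth, n, rng):
    """Return n simulated depth measurements for one flaw: the true depth
    plus the systematic bias plus a random error drawn for each reading."""
    return [true_depth + BIAS + rng.gauss(0.0, SCATTER) for _ in range(n)]

rng = random.Random(42)          # fixed seed for a repeatable illustration
readings = simulate_sizing(0.30, 1000, rng)
# Mean of many readings approaches true depth + bias; scatter remains.
print(round(statistics.mean(readings), 3))   # near 0.30 + 0.05 = 0.35
print(round(statistics.stdev(readings), 3))  # near 0.02
```

Averaging many readings reveals the systematic bias but cannot remove the random component from any single measurement, which is why random sizing errors are described by statistical distributions.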
The reliability of an NDE method can be described by a probability of detection (POD) curve that gives
the probability that the method will detect a specific type of flaw as a function of the size of the flaw.
Such curves are useful for structural reliability calculations, but difficult to quantify, because POD is a
function of a large number of variables other than flaw size. These variables fall into two general
Figure 11-1. Typical form of a probability of detection curve (probability of detection, in percent, versus flaw size).
categories: (1) variables associated with the particular NDE method and detail of its application, and
(2) variables associated with the particular component and type of flaws that are to be detected.
versus short flaws. However, these trends may be optimistic, because they are not supported by cases of actual detection trials that compare detection of long and short flaws (Taylor et al., 1989). In practice,
common cause effects can result in nondetection for each scan. For such cases the flaw length has no
significant effect on POD. Therefore, caution should be exercised in the application of models that
increase PODs for longer flaws.
5.2.3. Flaw type. Flaws in structural components have various metallurgical origins; and the de-
tectability of flaws of different types, but with the same physical dimensions, can be very different. In general, volumetric flaws (voids, nonmetallic inclusions, etc.) are more readily detected than
cracks, although cracks are usually more significant from the standpoint of structural integrity. The
discussion of NDE methods (Section 3) describes the relative capabilities of specific NDE methods to
detect different types of flaws. Even for one category of flaw (such as cracks) there can be significant
differences in detection performance between, say, fatigue cracks and stress corrosion cracks. In per-
forming field inspections the equipment, procedures and data interpretation are often optimized for one
type of flaw that is most expected or of greatest concern for the component being examined. In such
cases, other types of defects, should they be encountered, may be difficult to detect (i.e., to interpret as a defect) with the procedures being used. Therefore, it is recommended that the POD curves used in
probabilistic structural mechanics calculations be specific to the structural failure mechanism and NDE
method being addressed.
5.2.4. Flaw location. A flaw may extend to one or the other surface of the component or may
be confined entirely within the wall thickness of the component. Flaw location can have a significant
influence on detection. For example, surface examinations by visual or PT methods will never detect
subsurface flaws. In other inspection cases, the testing or examination (e.g., ultrasonic and eddy current)
is performed with access only to one surface, and the sensitivity of the methods can be significantly
degraded for flaws on the opposite surface or for flaws embedded within the wall of the component.
Knowledge of the most likely failure mechanism is often used to optimize the NDE method and pro-
cedure to maximize the detection for flaws expected at critical locations. For example, inspections may
focus on inner surfaces, because flaws at this location are more likely to initiate and will also grow
more rapidly due to the corrosive environments and/or due to enhanced levels of embrittlement.
5.2.5. Flaw orientation. For cracks and other defects, detection probabilities as well as the sig-
nificance of the flaws to structural integrity depend on the flaw orientation (e.g., the flaw is normal or
parallel to the surface of the component). An effective NDE method should detect flaws that are normal
to the surface of the component and/or normal to the directions of maximum stresses in the component.
In many cases, the most expected naturally occurring flaws (e.g., laminations and inclusions) will be
in the parallel direction, and thus detection of such flaws will be of little significance to structural
integrity. Nevertheless, such flaws in parallel orientation can be of concern, because these flaws may
interfere with the reliable detection of other (smaller) flaws that are significant to structural integrity.
5.2.6. Material type. Flaw detection can be more difficult in some materials than in other mate-
rials, sometimes to such an extent that some NDE methods become totally ineffective for certain ma-
terials. For example, the internal discontinuities inherent to fiber-reinforced composites make the NDE
methods that are reliable for metals ineffective for composites. Another example is that of cast stainless
steels, which have a metallurgical structure of large, oriented grains that results in a very low inspection
reliability for ultrasonic methods. In comparison to the cast material, wrought stainless steels (including
associated weld materials) are inspectable, but still not to the same level of reliability possible for
otherwise identical ferritic steel components. In conclusion, care must be taken in extrapolating NDE
reliability trends from one material to another material, because many materials can interfere with
transmission of the signals originating from the flaws of interest.
5.2.7. Access to inspection location. Limitations relating to ease of access can arise from many
causes. In some cases inspection can be performed only from one surface (inside or outside a pipe),
whereas the flaws of interest are on the far side or opposite surface. For thick sections or complex
component configurations, the associated loss of NDE sensitivity may become unacceptable. In the
inspection of welds, obstructions may limit access of the transducer to only one side of the weld, and
NDE sensitivity may be reduced for flaws located on the far side of the weld.
Effective inspection of many components often requires disassembly to gain access. For practical
reasons disassembly may be precluded, and this will compromise the effectiveness of the inspection. In
other cases the disassembly may be possible, but the potential damage to the structure from the disas-
sembly process may exceed the potential benefits to be gained by the inspection. Such considerations
will dictate that inspections be performed only when the equipment is disabled for repairs or scheduled
maintenance.
5.2.8. Surface conditions. Inspection effectiveness can be degraded by surface conditions. Ex-
amples are surface roughness associated with weld deposits, cladding, or surface deposits from the
build-up of contamination. For ultrasonic inspection, surface roughness can increase the noise level and
reduce the signal level in the recorded signals by interfering with acoustic coupling between the trans-
ducer and component to be examined. For eddy current inspection, thin layers of contamination (e.g.,
copper deposits) can significantly influence the electromagnetic signal. For visual examinations, surface
conditions can obscure cracklike flaws, either by covering the flaws or by making it difficult to distin-
guish cracks from scratches and other surface irregularities.
5.2.9. Extraneous signals. There are often extraneous signals that can decrease the overall signal-
to-noise ratio or even give false indications of defects that are difficult to discriminate from real flaws.
Concerns with such factors as surface condition and large-grained materials are related to extraneous
signals. In other cases there are geometric and metallurgical features that are difficult to distinguish
from flaws in a component. Weld root and counterbore geometries are examples of such features.
5.2.10. Human factors. Issues relating to human factors as they influence inspection reliability
have been well recognized and discussed, for example by Wheeler et al. (1986). Section 4.3 of this
chapter discusses the concept of ROC curves that describe the need to balance a desire for reliable flaw
detection with a low tolerance for false calls. Probability of detection curves generated under laboratory
conditions may not reproduce the same motivational factors that will exist under field conditions, where
unnecessary repairs must be avoided along with the costs of lost production during the time needed to
make the repairs. Field inspections are often performed under time constraints and in hostile environments
(heat, humidity, poor lighting, protective clothing, confined spaces, etc.) that may not be imposed when
data for published POD curves were generated. All of these factors will influence NDE personnel and
can degrade their performance. Another influencing factor is that flaws are rarely encountered during
field inspections, whereas the inspection personnel encounter flaws in a large fraction of the specimens
being examined in laboratory trials. For this reason, the decision-making process in the field inspection
may be less disposed to declare the presence of flaws in actual components as compared to inspection
of laboratory samples.
6. MULTIPLE EXAMINATIONS
Detection probabilities such as those established from round robin tests describe the performance for a single
inspection by a prescribed NDE method. Actual components are often inspected repeatedly by different
inspection teams, at different times over the life of the component, and by different NDE methods
(visual, ultrasonic, eddy current, etc.). Each specific examination may or may not detect a given flaw;
and if detected, the flaw may be sized differently in each examination (Yang and Donath, 1983).
Probabilistic structural mechanics methods can be used to predict the overall impact of multiple
examinations. However, assumptions must be made in structural mechanics calculations regarding the
statistical independence of individual examinations. In this regard, detection of a given flaw is only
partly governed by random errors in the detection process. Often a given flaw will have characteristics
that make it either particularly difficult (or easy) to detect and size. For example, the surface of a fatigue
crack could be tightly closed at the time the inspections are performed by a state of residual stresses
acting on the surfaces of the crack. Such a condition would consistently inhibit crack detection for each
of the multiple inspections.
The most optimistic assumption that can be made is that the outcome of each inspection is independent of other inspections. The net detection probability of n inspections, where POD_i denotes the detection probability of the ith inspection, is then

POD(net) = 1 - (1 - POD_1)(1 - POD_2) ... (1 - POD_n)
In the most pessimistic case, the outcome of each inspection will be highly correlated with the outcomes of the other inspections. The probability of at least one of the multiple inspections detecting the flaw can be estimated as the maximum of the probabilities for the individual inspections, so that

POD(net) = max(POD_1, POD_2, ..., POD_n)
This relationship can be applied to the scenario of periodic inspection of a growing crack such that
as the crack depth increases, there will be an increasing probability of the crack being detected at the
time of each successive inspection. The governing detection probability is that for the final inspection
when the crack has its greatest depth.
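The two bounding assumptions above can be written compactly in code. This is a minimal sketch; the function names are illustrative, not the chapter's notation.

```python
import math

def pod_independent(pods):
    """Net detection probability when the inspections are statistically
    independent: the flaw is missed only if every inspection misses it."""
    miss = math.prod(1.0 - p for p in pods)
    return 1.0 - miss

def pod_correlated(pods):
    """Pessimistic bound when inspection outcomes are fully correlated:
    the net probability is that of the best single inspection."""
    return max(pods)

inspections = [0.5, 0.6, 0.7]   # per-inspection PODs (illustrative values)
print(round(pod_independent(inspections), 3))  # 1 - 0.5*0.4*0.3 = 0.94
print(round(pod_correlated(inspections), 3))   # 0.7
```

The gap between the two bounds (0.94 versus 0.7 in this example) shows why the independence assumption must be justified before credit is taken for repeated inspections.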
Another critical consideration is the decision-making logic used to reach a conclusion that the crack actually exists, based on the findings of the multiple inspections. The two equations given above both assume that a flaw will be considered detected if any one of the individual inspections successfully detects the flaw. This is the most conservative logic: it maximizes crack detection, but tends to give
a greater number of false calls. In field practice, the initial detection of a flaw may lead to the component
being reinspected, typically by a different inspection team and/or by a different NDE technique. As a
result of such confirmatory inspections, some actual flaws eventually may be declared as not detected.
In the extreme case, a single negative finding from a large number of confirmatory inspections could
be sufficient to produce a declaration of nondetection. In this case, the overall detection probability
could be significantly less than any of the individual detection probabilities.
Over the last 20 years, efforts have been made to determine the reliability of NDE as practiced in the
nuclear power industry and elsewhere, and to take actions to improve NDE reliability. Early findings
showed a relatively low level of NDE reliability, even though the inspection methods often complied
with the minimum standards of existing codes as published by such organizations as the American
Society of Nondestructive Testing (ASNT) and the American Society of Mechanical Engineers (ASME).
Subsequent efforts have produced improved codes and standards, and the reliability of NDE as practiced
in the field has increased.
The traditional NDE practice has used well-defined geometric defects (e.g., machined notches and
flat-bottomed holes) to develop NDE procedures and for calibrations of equipment at the time of field
Nondestructive Examination Reliability 247
inspections. However, there were no additional requirements to use representative service-type defects
(i.e., cracks) for training and demonstrations of capability. To correct this situation, both the nuclear
and aerospace industries have made continuing efforts to enhance the level of NDE reliability. These
efforts have included a number of round robin inspection studies that used specimens with service-type
defects to establish NDE reliability. The round robin data have shown large team-to-team variations in
the detection and sizing of flaws. As shortcomings have been noted, the nuclear industry has responded
with steps to strengthen minimum requirements, such as in the ASME Code (ASME, 1992), in order
to improve the inspection of reactor pressure vessels and piping systems. These long-term efforts have
led to continuing improvements in the reliabilities of ultrasonic NDE inspections.
The original requirements for NDE methods in the ASME Section XI Boiler and Pressure Vessel
Code (ASME, 1974) were highly prescriptive in nature. However, more recently, ASME Section XI
has adopted Appendix VIII (Cowfer, 1989), which follows a performance demonstration approach
through which inspection organizations must qualify the performance of equipment, procedures, and
personnel. In the new approach, inspection teams must achieve passing scores in tests of their capabil-
ities to detect simulated service-type flaws in a matrix of samples that simulate conditions in reactor
pressure vessels and piping. A passing score requires detection of a statistically significant fraction of
the flaws in the sample set, while maintaining an acceptably low rate of false calls. The performance
demonstrations also require that a team attains passing scores on flaw-sizing capability.
Performance demonstrations provide a basis to identify those NDE methods that are most reliable,
and those whose reliability is unacceptable. However, current performance demonstrations in the ASME
Section XI Code require only a specified overall POD level for the collection of flaws in the sample
set. The sample sets have a range of flaw sizes, beginning with the smallest size that is considered to
be structurally significant. As now practiced, performance demonstrations are not designed to generate
a full POD curve as a function of flaw depth, as is needed for purposes of probabilistic structural
mechanics calculations. To obtain a full, statistically based POD curve, additional detection data beyond
the minimum demanded by current performance demonstration tests will be required. Nevertheless,
lacking such a complete set of data, one can still estimate POD curves. However, this requires consid-
erable engineering judgment and extrapolations from the currently available base of detection data as
generated from performance demonstrations.
A number of research studies have produced information on the reliability of NDE methods to detect
and size flaws. For example, Fig. 11-2 shows some early work of Packman et al. (1968), who developed
a reliability index for characterizing flaws (cracks) in aircraft structures for various nondestructive
examination methods. As indicated, the inspection methods exhibit a range of reliability levels with
ultrasonics and magnetic particles having a reliability in the 80% range, X-ray performing well below
50%, and dye penetrant performing at a level intermediate between ultrasonics and X-ray. A more recent
and complete summary of detection capabilities can be found in an ASM Metals Handbook article by
Rummel et al. (1989).
Efforts by the nuclear power industry have quantified the reliability of the ultrasonic NDE methods
that are used to inspect reactor pressure vessels and piping. Figure 11-3 is based mainly on such studies
in the nuclear power industry. For convenience these curves are expressed as probability of nondetection
(PND), which is equal to 1.0 - POD. A wide range of detection probabilities, as a function of crack
depth a, is evident. All curves show an improved detection probability for deeper cracks, with the better
procedures giving detection probabilities of greater than 90% for cracks deeper than 1.0 in. However,
248 Nondestructive Examination Reliability
[Figure: reliability index (0 to 1.0) plotted against actual crack length, 2c (0 to 0.45 in.), for the four NDE methods.]
Figure 11-2. Comparison of four NDE techniques on reliability of flaw indications in steel cylinders.
[Figure: probability of nondetection (PND) plotted against crack depth (in.) on logarithmic scales, with curves spanning roughly 10^-4 to 0.99; one curve, from PNL (NUREG/CR-4486), represents near-surface inspection of vessels.]
Figure 11-3. Probability of nondetection of a crack as a function of its depth for an ultrasonic inspection.
all procedures have a greater than 50% probability of missing relatively small defects of depths less
than 0.1 in.
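The PND/POD relationship above, and the general trend of detection improving with crack depth, can be sketched with a simple parametric model. The log-logistic curve shape and the parameter values below are illustrative assumptions, not values taken from the studies cited:

```python
import math

def pod(a, a50=0.15, slope=4.0):
    """Probability of detection for a crack of depth a (in.).
    A log-logistic curve shape is assumed here for illustration; a50 is the
    depth detected with 50% probability and slope controls steepness.
    Neither parameter is taken from the studies cited in the text."""
    if a <= 0.0:
        return 0.0
    return 1.0 / (1.0 + (a50 / a) ** slope)

def pnd(a, **kwargs):
    """Probability of nondetection: PND = 1.0 - POD."""
    return 1.0 - pod(a, **kwargs)

# Detection improves with depth: small defects are likely missed,
# deep cracks are almost always found.
for depth in (0.05, 0.1, 0.5, 1.0):
    print(f"a = {depth:4.2f} in.  POD = {pod(depth):.3f}  PND = {pnd(depth):.3f}")
```

With these illustrative parameters the model reproduces the qualitative behavior described above: a better-than-even chance of missing defects shallower than 0.1 in., and detection probabilities well above 90% for deep cracks.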
Data from the Second Program for Inspection of Steel Components (PISC-II) trials (Nichols and
Crutzen, 1988) are presented in Figs. 11-4 and 11-5 to illustrate some of the factors that govern the
detection of defects in the specific case of reactor pressure vessels (ferritic steel vessels with wall
thickness up to 10 in.). Comparison of Figures 11-4 and 11-5 shows the improvement of NDE reliability
achieved as the industry developed special procedures to replace the original 20% distance amplitude
correction (DAC) approach of the ASME Boiler and Pressure Vessel Code. These plots also show how
characteristics of the cracks (other than depth) are a major factor in NDE reliability. Volumetric defects,
such as slag and lack of fusion, are relatively easy to detect. Smooth cracks with sharp edges, such as
those produced by fatigue, are much more difficult to detect.
Efforts to measure and improve the reliability of NDE methods continue to be supported by the U.S.
Nuclear Regulatory Commission (Doctor et al., 1990) and the Electric Power Research Institute (1989a,
b). Kennedy and Foulds (1990) report on the status of studies directed to reactor pressure vessels. Other
efforts have addressed reactor piping and steam generator tubes.
Table 11-1 lists a number of other useful sources of data on inspection reliability that relate to
various types of pressure boundary components in nuclear power plants. It is noted that each of these
studies has addressed a specific inspection method (e.g., ultrasonic and eddy current) for detecting a
specific type of metallurgical degradation (e.g., fatigue cracking, stress corrosion cracking, wall thin-
ning). The objective has been to establish the reliability of the methods, to identify the need for im-
provements in reliability, and to document the benefits gained from improved methods.
Data on flaw sizing capabilities are less extensive than data for detection probability. Figure 11-6
[Figure: probability of nondetection (PND) plotted against defect size for vessel plate 3 (DDP), using the ASME 20% DAC procedure; curve A, "difficult" cracks; curve B, "easy" cracks; curve C, volumetric defects.]
Figure 11-4. Probability of nondetection of defects in vessels by ultrasonic inspection, using ASME Boiler and
Pressure Vessel Code requirements. (Source: Nichols, R. W., and Crutzen, S., Eds. [1988]. Ultrasonic Inspection
of Heavy Section Steel Components: The PISC-l/ Final Report. London and New York: Elsevier Applied Science
Publishers. Reprinted with permission.)
indicates the data relating true versus measured crack depth for inspection of reactor pressure vessels
from the PISC-II trials (Nichols and Crutzen, 1988). These curves show a relatively high level of NDE
reliability, which corresponds to the best of the teams that participated in the PISC-II study.
Figure 11-7 illustrates the case of intergranular stress corrosion cracking (IGSCC) of stainless steel
piping, in which a wide range of sizing capabilities was observed for the 17 teams participating in a
sizing round robin (Electric Power Research Institute, 1989b). Many sizing techniques were used by
the different teams. Subsequent efforts within the nuclear power industry have required inspection teams
to pass performance demonstration requirements (see ASME Code Section XI, Appendix VIII) to assure
that all teams will size flaws with a capability comparable to the better teams of Fig. 11-7.
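Sizing performance of the kind summarized for the PISC-II trials (slope, intercept, correlation coefficient, mean and RMS error of measured versus true depth) amounts to an ordinary linear-regression summary. A minimal sketch, using made-up depth data rather than any of the round robin results:

```python
import math

def sizing_statistics(true_depths, measured_depths):
    """Linear-regression summary of sizing performance (measured vs. true
    depth) of the kind reported for the PISC-II trials: slope, intercept,
    correlation coefficient, mean error, and RMS error."""
    n = len(true_depths)
    mx = sum(true_depths) / n
    my = sum(measured_depths) / n
    sxx = sum((x - mx) ** 2 for x in true_depths)
    syy = sum((y - my) ** 2 for y in measured_depths)
    sxy = sum((x - mx) * (y - my) for x, y in zip(true_depths, measured_depths))
    slope = sxy / sxx
    intercept = my - slope * mx
    corr = sxy / math.sqrt(sxx * syy)
    errors = [y - x for x, y in zip(true_depths, measured_depths)]
    mean_err = sum(errors) / n
    rms_err = math.sqrt(sum(e * e for e in errors) / n)
    return {"slope": slope, "intercept": intercept, "corr": corr,
            "mean_error": mean_err, "rms_error": rms_err}

# Hypothetical true vs. measured depths (mm) for one inspection team.
stats = sizing_statistics([10.0, 20.0, 40.0, 60.0], [12.0, 19.0, 42.0, 58.0])
```

A perfect sizing procedure would give slope 1, intercept 0, correlation coefficient 1, and zero RMS error; departures from those values quantify systematic and random sizing error.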
A set of POD curves has been given in this chapter. These POD curves from round robin inspection
trials can serve as benchmarks for estimating NDE reliability for field inspections. In some cases the
types and sizes of flaws and inspection conditions for the round robins may be similar to those found
under field conditions, and the inspection procedures of the round robins may be reasonable simulations
of the actual field conditions. In such cases, POD curves from round robin studies can be used as input
to structural mechanics calculations.
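When hit/miss detection data are available from round robin trials or performance demonstrations, a full POD curve is commonly obtained by fitting a parametric model to the binary outcomes. The sketch below assumes a logistic model in log crack depth, fitted by plain gradient ascent; the inspection data are entirely made up:

```python
import math

def fit_pod_hit_miss(depths, hits, iters=5000, lr=0.5):
    """Fit POD(a) = 1 / (1 + exp(-(b0 + b1*ln a))) to binary hit/miss data
    by maximizing the logistic log-likelihood with gradient ascent.
    The logistic-in-log-depth model is a common assumption, not a method
    prescribed by the chapter."""
    b0, b1 = 0.0, 1.0
    n = len(depths)
    for _ in range(iters):
        g0 = g1 = 0.0
        for a, hit in zip(depths, hits):
            x = math.log(a)
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += hit - p          # gradient of log-likelihood w.r.t. b0
            g1 += (hit - p) * x    # gradient w.r.t. b1
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

def pod(a, b0, b1):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * math.log(a))))

# Hypothetical round robin outcomes: shallow cracks mostly missed, deep ones found.
depths = [0.05, 0.07, 0.1, 0.15, 0.2, 0.3, 0.5, 0.8, 1.0, 1.5]
hits   = [0,    0,    0,   1,    0,   1,   1,   1,   1,   1]
b0, b1 = fit_pod_hit_miss(depths, hits)
```

The fitted curve rises with depth and interpolates a detection probability for any crack size, which is the form of input a probabilistic structural mechanics calculation needs.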
In many cases, the combination of structural geometry, material, inspection method, inspection procedure, and inspection personnel will fall outside the scope of an applicable POD curve. It is not possible
[Figure: probability of nondetection (PND) plotted against crack depth (0.01 to 10.0 in.) for vessel plate 3 (DDP), using all special procedures; curve A, "difficult" cracks; curve B, "easy" cracks; curve C, volumetric defects.]
Figure 11-5. Probability of nondetection of defects in vessels by ultrasonic inspection using special procedures.
(Source: Nichols, R. W, and Crutzen, S., Eds. [1988). Ultrasonic Inspection of Heavy Section Steel Components:
The PISC-II Final Report. London and New York: Elsevier Applied Science Publishers. Reprinted with permission.)
Table 11-1. Status of Reliability Studies for ASME Section XI Inspection Methods

1. Component: Beltline of reactor pressure vessel. Inspection method: Ultrasonics using manual procedures and past ASME Section XI practices. Damage mechanism: Cracking embedded within thickness of plate. Organization responsible: PISC-I (Plate Inspection Steering Committee); European Organization for Economic Cooperation and Development. Reliability of method: Detection capability was marginal; much less reliable than for the procedures used in the subsequent PISC-II trials. Ref.: PISC (1980).

2. Component: Beltline and nozzles of PWR reactor pressure vessels. Inspection method: Ultrasonics. Damage mechanism: Near-surface cracking caused by deposition of weld cladding, volumetric weld defects, voids, and porosity. Organization responsible: PISC-II (Program for Inspection of Steel Components); European Organization for Economic Cooperation and Development with participation of other organizations, including U.S. Nuclear Regulatory Commission. Reliability of method: Detection reliability relatively good; sizing capability relatively poor. Ref.: Nichols and Crutzen (1988).

3. Component: Beltline/plates of reactor pressure vessel. Inspection method: Ultrasonics, using methods proposed as sufficiently reliable for British regulatory requirements. Damage mechanism: Cracks near vessel inner surface, and within cladding. Organization responsible: Risley Nuclear Laboratories (Risley, United Kingdom); defect detection trials (DDT). Reliability of method: Effective detection reliability and good sizing capability were demonstrated. Ref.: Watkins et al. (1982).

4. Component: PWR primary coolant piping, carbon steel. Inspection method: Ultrasonics. Damage mechanism: Fatigue cracking. Organization responsible: Pacific Northwest Laboratory for U.S. Nuclear Regulatory Commission; piping inspection round robin. Reliability of method: Very good reliability (POD > 90%) demonstrated by all participating teams. Ref.: Doctor and Heasler (1984).

5. Component: BWR piping, wrought stainless steel. Inspection method: Ultrasonics. Damage mechanism: Intergranular stress corrosion cracking. Organization responsible: Electric Power Research Institute. Reliability of method: Early results of an ongoing effort showed poor sizing capability. Ref.: Dau (1983).

6. Component: BWR piping, wrought stainless steel. Inspection method: Ultrasonics. Damage mechanism: Intergranular stress corrosion cracking. Organization responsible: Pacific Northwest Laboratory for U.S. Nuclear Regulatory Commission; mini-round robin. Reliability of method: Only the best teams demonstrated adequate performance in detecting flaws; the majority of teams had unacceptable performance. All teams were unreliable in sizing flaws. Ref.: Heasler et al. (1990).

7. Component: PWR primary coolant piping, centrifugally cast stainless steel. Inspection method: Ultrasonics. Damage mechanism: Fatigue cracking. Organization responsible: Pacific Northwest Laboratory for U.S. Nuclear Regulatory Commission. Reliability of method: None of the participating teams demonstrated reliable detection for the coarse-grained material. Ref.: Doctor and Heasler (1984).

8. Component: Steam generator tubing. Inspection method: Eddy current (ET); also ultrasonics and profilometry to limited extent. Damage mechanism: Pitting, wall thinning, denting, and cracking. Organization responsible: Pacific Northwest Laboratory for U.S. Nuclear Regulatory Commission. Reliability of method: Relatively good reliability for detection and sizing of wall thinning and pitting; relatively poor reliability for cracking. Ref.: Kurtz et al. (1990).

9. Component: Steam generator tubing. Inspection method: Eddy current. Damage mechanism: Stress corrosion cracking, intergranular attack, wastage, pitting. Organization responsible: PISC-III (Program for Inspection of Steel Components); European Organization for Economic Cooperation and Development. Reliability of method: Multiyear effort with round robin testing underway.

10. Component: Steam generator tubing. Inspection method: Eddy current. Damage mechanism: Wall thinning, pitting, denting, cracking, etc. Organization responsible: Electric Power Research Institute. Reliability of method: Future data will be based on a round robin interpretation of existing ET signals from actual steam generator inspections.

11. Component: Various nuclear and nonnuclear components. Inspection method: Ultrasonics. Damage mechanism: Cracking, slag inclusions, machined notches, and other types of defects. Organization responsible: Pacific Northwest Laboratory for U.S. Nuclear Regulatory Commission. Reliability of method: A wide range of reliability in detection and sizing is indicated by data in this comprehensive survey report. Ref.: Bush (1983).

12. Component: Aircraft structures with emphasis on fastener joints. Inspection method: Ultrasonics. Damage mechanism: Cracks at fastener holes. Organization responsible: Lockheed-Georgia Company with participation of Air Force maintenance facilities; a large-scale inspection round robin popularly known as "have cracks will travel." Reliability of method: Best teams demonstrated effective inspections, but large team-to-team variations were noted. Ref.: Boisvert et al. (1981).

Note: PWR, pressurized water reactor; BWR, boiling water reactor.
in this chapter to provide a comprehensive compilation of POD curves that cover all situations of interest
to probabilistic structural mechanics calculations.
Therefore this chapter has discussed the various NDE methods and the factors that influence their
reliability, in order to assist in the assignment of suitable NDE reliability levels for structural mechanics
calculations. Uncertainties in estimated POD curves should be addressed through sensitivity calculations.
It is also recommended that the structural analyst discuss the inspection application with the NDE
specialists to arrive at suitable estimates of NDE reliability. In the end, these estimates will be based
on data when such data are available, but will require exercising judgment to make realistic estimates
where such data are lacking.
Inspections are performed to prevent structural failures by detecting degradation, and thus permitting
repair or remediation of degradation before it progresses to a critical level. The benefits or efficiency
of an inspection strategy can be quantified by using a parameter such as "factor of improvement,"
which is defined as the ratio

    Factor of improvement = (failure probability without inspection) / (failure probability with inspection)

If no inspections are performed, the factor of improvement is equal to one. For a perfect inspection
strategy (i.e., all potential failures are eliminated by inspection), the factor of improvement would
become infinite. Probabilistic structural mechanics evaluations indicate that a factor of improvement
approaching 10 (Simonen, 1990; Thomas, 1979) should be considered to be a reasonable goal for an
inspection strategy, and in general such a strategy will entail a rigorous level of inspection. It should
[Figure: ultrasonically measured crack depth (mm) plotted against true crack depth (mm). Statistical summary: slope 0.879; intercept 2.238; correlation coefficient 0.958; error of estimate 4.923. General information: mean error -0.084; standard deviation 5.415; RMS error 5.416; population 32.]
Figure 11-6. Sizing performance for best performing single team in PISC-II trials for plate 3.
also be recognized that some inspections may result in damage to the structure (e.g., damage from
disassembly of components, or unnecessary and/or faulty repairs as a result of false calls). For such
cases, the factor of improvement could have a value of less than one.
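Once failure probabilities with and without inspection have been estimated, the factor of improvement defined above is a one-line calculation. A minimal sketch; the probability values used are arbitrary:

```python
def factor_of_improvement(p_fail_no_insp, p_fail_with_insp):
    """Factor of improvement: failure probability without inspection divided
    by failure probability with inspection."""
    if p_fail_with_insp == 0.0:
        # Perfect inspection strategy: all potential failures eliminated.
        return float("inf")
    return p_fail_no_insp / p_fail_with_insp

# No inspection leaves the failure probability unchanged: factor = 1.
# Cutting an (arbitrary) 1e-4 failure probability to 1e-5 gives a factor of 10,
# the level cited above as a reasonable goal for an inspection strategy.
no_benefit = factor_of_improvement(1e-4, 1e-4)
goal = factor_of_improvement(1e-4, 1e-5)
# An inspection that damages the structure can make things worse (factor < 1).
harmful = factor_of_improvement(1e-4, 2e-4)
```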
Factors of improvement can be defined in terms of the reliability of an overall structure or system.
More often, they are defined in a more limited context, such as the reliability of a single weld in a
structure. Because monitoring of all potential failure sites is generally not possible, strategies are de-
veloped to maximize the benefits (i.e., factor of improvement) for the limited number of inspections
that are feasible from the economic standpoint. Factors of improvement are governed by three major
elements of an in-service inspection strategy:
1. Inspection technique
2. Sampling plan
3. Inspection frequency
Figure 11-7. Results of EPRI IGSCC round robin. (Linear regression lines for 17 teams in IGSCC sizing round
robin. The teams used many different sizing techniques for sizing IGSCC in stainless steel pipe welds. The ultra-
sonically measured depth is plotted against the true depth. The teams were numbered arbitrarily, as indicated by
the numbers on the right side.) (Source: Copyright © 1989. Electric Power Research Institute (1989b). Accuracy
of Ultrasonic Flaw Sizing Techniques for Reactor Pressure Vessels, EPRI NP-6273. Palo Alto, California: Electric
Power Research Institute. Reprinted with permission.)
Nondestructive examination reliability as discussed in previous sections is but one of the factors that
govern the factor of improvement for a given inspection strategy.
Figure 11-8 shows how the three elements of the inspection strategy must interact to result in timely
detection of degradation. The inspection technique consists of the method (radiography, ultrasonics,
etc.), the specific equipment used, the inspection procedures, and the skill level of the inspection per-
sonnel. The sampling plan refers to the number of locations within the structure that are actually
inspected. A sampling plan might consist of all welds in the structure, a random sample of these welds,
or a specific selection of welds based on considerations of consequences and/or likelihood of failure.
Inspection frequency refers to how often the inspections are performed (e.g., once a year, once every
10 years). In other cases, inspections are performed only at such times that equipment is disassembled
for purposes of maintenance activities.
An effective inspection strategy achieves a balance between the three elements indicated by the
circles of Fig. 11-8. For example, the very best inspection technique (i.e., probability of detection
approaching one) will not be effective if the inspection frequency is inadequate, such that cracks can
initiate and grow to critical sizes over time periods between successive inspections.
Probabilistic structural mechanics methods are particularly useful for establishing tradeoffs between
improved NDE sensitivity and more frequent inspections. In this regard probabilistic structural me-
chanics provides the only method to compare relative benefits of alternative strategies. Service failures
are usually of low frequency and as such provide insufficient empirical data to compare factors of
improvement for different inspection strategies. The objective of probabilistic calculations is to guide
a decision-making process by comparing candidate inspection strategies that have not yet been
implemented.
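The tradeoff between NDE sensitivity and inspection frequency can be illustrated with a toy Monte Carlo model. Everything here is an assumption for illustration (linear crack growth, a constant POD per inspection attempt, arbitrary rates); it is not a model from the chapter:

```python
import random

def failure_probability(pod_value, interval, trials=20000, life=40.0,
                        growth=0.05, a_crit=1.0, seed=1):
    """Toy model: a crack initiates at a random time during the service life,
    grows linearly at `growth` in./yr, and causes failure on reaching depth
    a_crit unless an inspection (constant detection probability pod_value,
    performed every `interval` years) finds and repairs it first."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        t0 = rng.uniform(0.0, life)          # crack initiation time (yr)
        t_fail = t0 + a_crit / growth        # time crack reaches critical depth
        if t_fail > life:
            continue                         # never becomes critical in service
        detected = False
        t = interval
        while t < t_fail:                    # inspections before failure time
            if t > t0 and rng.random() < pod_value:
                detected = True              # crack found and repaired
                break
            t += interval
        if not detected:
            failures += 1
    return failures / trials

# A mediocre POD applied frequently can outperform a high POD applied rarely.
p_none = failure_probability(0.0, 10.0)       # no effective inspection
p_good_rare = failure_probability(0.9, 10.0)  # POD 0.9 every 10 years
p_fair_often = failure_probability(0.5, 2.0)  # POD 0.5 every 2 years
```

Comparisons of this kind, with realistic crack initiation, growth, and POD models in place of the toy assumptions, are how probabilistic structural mechanics ranks candidate inspection strategies before any is implemented.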
[Figure 11-8: diagram of the three interacting elements of an inspection strategy (inspection technique, sampling plan, and inspection frequency), applied to a weld joint.]
The methods of probabilistic structural mechanics can be used to quantify the benefits of in-service
inspection by predicting the reductions in failure rates associated with given inspection programs. In
this section we discuss some philosophical viewpoints about the objectives of in-service inspections
(ISIs) and how inspections do or do not impact structural integrity.
In this discussion we should make clear that NDE does not by itself reduce probabilities of failure;
rather it is the repairs made as a result of NDE that actually improve structural reliability. Nevertheless,
the discussion in this chapter attributes reductions in failure probabilities to inspections, with the
implication that needed corrective measures are taken as a result of ISI findings.
Repairs or corrective actions can take many forms. A simplified viewpoint is often taken that detection
of degradation by ISI leads to a "perfect" repair of the component. In many cases the repair might
involve replacement of the component, or a repair might involve removal of a crack with associated
rewelding. In other cases the corrective action will involve operational changes that will prevent failures
of the components. Operational changes could involve mitigating stresses, reducing the severity of a
corrosive environment, or rigorous monitoring of the degradation to ensure that the component is re-
moved from operation before failure occurs. In other cases no corrective action is taken on the basis
of a conservative fracture mechanics evaluation that shows that a detected flaw is benign because its
size, location, orientation, etc., are such that the probability of the flaw causing failure is negligible.
Chapman (1983) has taken the view that inspections can serve two quite different purposes. In some
cases, inspections can be performed on a sample basis, which only increases confidence in structural
reliability with the inspections having no actual impact on the reliability; in other cases inspections can
be sufficiently extensive to actually reduce the failure probabilities.
When inspections are performed to increase confidence, it is assumed that the inspection program's
NDE reliability, inspection sample size, and inspection intervals are insufficient to reduce the failure
probability by a meaningful extent. Nevertheless, such a limited inspection program is useful in in-
creasing confidence that the failure rates for the components of concern are in fact low. For example,
an unexpected increase in failure probability might imply the widespread development of cracking as
a precursor event prior to a catastrophic failure of the component. For this scenario, a limited inspection
would be sufficient to provide confidence that widespread cracking was not occurring in the population
of components of concern.
In the view of Chapman and Booth (1985), adverse inspection findings can reduce confidence in
previously estimated low failure probabilities, and such findings would lead to more intensive inspection.
In this case, the objective of the more extensive inspection program would correspond to the second
of Chapman's objectives; that is, inspections are performed to actually reduce the failure probability.
Another philosophical viewpoint is that inspections are one element in a "defense in depth" strategy
that ensures structural integrity. Inspection is not considered to be the front-line basis for structural
integrity. This is consistent with the often-stated concept that one cannot inspect quality into a product,
but that inspection provides information needed to make corrections and improvements to a production
process. From this philosophical viewpoint, probabilistic structural mechanics calculations will be overly
simplistic and may greatly underestimate the potential benefits of inspections. In this context, changes
in a production process, mitigation of an adverse operating environment, selection of more appropriate
materials of construction, replacement of components of defective design, or improvement to maintenance
practices can greatly reduce failure probabilities by impacting the reliability of a large group of
components other than those that are actually inspected. In such cases, the early detection of structural
degradation can be the reason that corrective actions are taken. However, it is difficult to use probabilistic
structural mechanics to quantify the reliability improvements that can be specifically allocated to in-
service inspection, as opposed to the improvement associated with corrective actions that address the
root causes.
In this chapter, we have discussed the subject of NDE reliability from the standpoint of probabilistic
structural mechanics. It has been seen that knowledge of NDE reliability can permit improved calcu-
lations of structural reliability to be performed. On the other hand, structural reliability calculations can
permit the benefits of alternative inspection strategies to be quantified and, thus, permit optimum
strategies to be selected for in-service inspection programs. A range of NDE methods has been described,
along with the factors that impact the reliability of given methods when applied to specific structural
applications. We have noted the ongoing efforts by NDE specialists to quantify and improve inspections
as practiced under field conditions. Examples of NDE reliability data have been presented and the reader
has been pointed to more extensive sources of such data. The limitations of such data have been
emphasized, along with guidance on how to make realistic estimates of NDE reliability in the usual
situation, for which application specific data are lacking. Finally, the role of NDE as a potential means
of reducing structural failure probabilities has been discussed from a philosophical perspective. It has
been noted that NDE is most often applied to inspect structural systems on a sample basis. Use of NDE
to actually reduce structural failure probabilities implies that the inspections use NDE methods with
high reliability, and that the inspections are focused on locations that make the major contribution to
the overall failure probability of the system. More limited inspections do not have a direct impact on
failure probabilities, but provide confidence or lack of confidence in the reliability of the inspected
structure. In this way, augmented levels of inspection, when necessary, can permit corrective actions to
the operation, maintenance, or replacement of components to be made in a timely manner.
REFERENCES
ASM (1989). Metals Handbook, 9th ed., Vol.17: Nondestructive Evaluation and Quality Control. Metals Park,
Ohio: ASM International.
ASME (American Society of Mechanical Engineers) (1974). Section XI Rules for In-Service Inspection of Nuclear
Power Plant Components, ASME Boiler and Pressure Vessel Code. New York: American Society of
Mechanical Engineers.
ASME (American Society of Mechanical Engineers) (1992). Section XI Rules for In-Service Inspection of Nuclear
Power Plant Components, ASME Boiler and Pressure Vessel Code. New York: American Society of
Mechanical Engineers.
BOISVERT, B. W., W. H. LEWIS, and W. H. SPROAT (1981). Uniform Qualification of Military and Civilian Non-
destructive Inspection Personnel. LG81 WP7254-003. Marietta, Georgia: Lockheed Georgia Company.
BUSH, S. H. (1983). Reliability of Nondestructive Examination, Vols. 1-3. NUREG/CR-3110 (prepared by Pacific
Northwest Laboratory). Washington, D.C.: Nuclear Regulatory Commission.
CHAPMAN, O. J. V. (1983). A statistical approach to the analysis of lSI data using the Bayes method. In: Proceedings
of the 7th Structural Mechanics in Reactor Technology (SMiRT) Conference, Vol. D. Amsterdam, The Neth-
erlands: North-Holland Physics Publishing.
CHAPMAN, O. J. V., and A. BoOTH (1985). Confidence through sample lSI. In: Proceedings of the 8th Structural
Mechanics in Reactor Technology (SMiRT) Conference, Vol. M. Amsterdam, The Netherlands: North-Holland
Physics Publishing.
COWFER, D. (1989). Basis/background for ASME Code Section XI proposed Appendix VIII: Ultrasonic examination
performance demonstration. In: Nondestructive Evaluation: NDE Planning and Application. New
York: American Society of Mechanical Engineers, pp. 1-5.
DAU, G. J. (1983). Ultrasonic Sizing Capability of IGSCC and its Relation to Flaw Evaluation Procedures. Char-
lotte, North Carolina: Electric Power Research Institute (NDE Center).
DOCTOR, S. R., and P. G. HEASLER (1984). A pipe inspection round robin test. In: Proceedings of the 6th Inter-
national Conference on NDE in the Nuclear Industry. Metals Park, Ohio: American Society for Metals, pp.
563-568.
DOCTOR, S. R., J. D. DEFFENBAUGH, M. S. GOOD, E. G. GREEN, P. G. HEASLER, L. D. REID, F. A. SIMONEN, J. C.
SPANNER, T. T. TAYLOR, and T. V. VO (1990). NDE reliability and SAFT-UT final development. Nuclear
Engineering and Design 18:359-374.
Electric Power Research Institute (1989a). Evaluation of Flaw Sizing Techniques for Heavy Sections. Report
RPI570-2. Palo Alto, California: Electric Power Research Institute.
Electric Power Research Institute (1989b). Accuracy of Ultrasonic Flaw Sizing Techniques for Reactor Pressure
Vessels. Report NP-6273. Palo Alto, California: Electric Power Research Institute.
HARRIS, D. O., E. Y. LIM, and D. D. DEDHIA (1981). Probability of Pipe Fracture in the Primary Coolant Loop
of a PWR Plant, NUREG/CR-2189, Vol. 5 (prepared by Lawrence Livermore National Laboratory). Wash-
ington, D.C.: Nuclear Regulatory Commission.
HEASLER, P. G., T. T. TAYLOR, J. C. SPANNER, S. R. DOCTOR, and J. D. DEFFENBAUGH (1990). Ultrasonic Inspection
Reliability for Stress Corrosion Cracks: A Round Robin Study of the Effects of Personnel, Procedures,
Equipment and Crack Characteristics. NUREG/CR-4908 (prepared by Pacific Northwest Laboratory). Wash-
ington, D.C.: Nuclear Regulatory Commission.
KENNEDY, E. L., and J. R. FOULDS (1990). Nuclear Reactor Pressure Vessel Flaw Distribution Development Phase
I-NDE Capability and Sensitivity Analyses. SAND89-7148 (prepared by Failure Analysis Associates, Inc.).
Albuquerque, New Mexico: Sandia National Laboratories.
KURTZ, R. J., R. A. CLARK, E. R. BRADLEY, W. M. BOWEN, P. G. DOCTOR, R. H. FERRIS, and F. A. SIMONEN
(1990). Steam Generator Tube Integrity Program/Steam Generator Group Project-Final Project Summary
Report. NUREG/CR-5117 (prepared by Pacific Northwest Laboratory). Washington, D.C.: Nuclear Regula-
tory Commission.
NICHOLS, R. W., and CRUTZEN, S., Eds. (1988). Ultrasonic Inspection of Heavy Section Steel Components: The
PISC-II Final Report. London and New York: Elsevier Applied Science.
PACKMAN, P. F., et al. (1968). The Applicability of a Fracture Mechanics Nondestructive Testing Design Criterion.
Report AFML-TR-68-32. Dayton, Ohio: Air Force Materials Laboratory, Wright-Patterson Air Force Base.
PISC (1980). Analysis of the PISC Trials Results for Alternative Procedures. Plate Inspection Steering Committee
Report No.6. EOR 6371 ED.
RUMMEL, W. D. (1983). Considerations of quantitative NDE and NDE reliability improvement. In: Progress in
Quantitative Nondestructive Evaluation, Vol. 2A D. O. Thompson and D. F. Chimenti, Eds. New York:
Plenum Press, pp. 19-35.
RUMMEL, W. D., G. L. HARVEY, and T. D. COOPER (1989). Applications of NDE reliability to systems. In: Metals
Handbook, 9th ed., Vol. 17: Nondestructive Evaluation and Quality Control. Metals Park, Ohio: ASM
International.
SIMONEN, F. A. (1990). An evaluation of the impact of inservice inspection on stress corrosion cracking of BWR
piping. In: Codes and Standards and Applications for Design and Analysis of Pressure Vessel and Piping
Components. New York: American Society of Mechanical Engineers, pp. 187-193.
SIMONEN, F. A., and H. H. WOO (1984). Analyses of the Impact of Inservice Inspection Using a Piping Reliability
Model, NUREG/CR-3869 (prepared by Pacific Northwest Laboratory). Washington, D.C.: Nuclear Regulatory
Commission.
SIMONEN, F. A., K. I. JOHNSON, A. M. LIEBETRAU, D. W. ENGEL, and E. P. SIMONEN (1986). VISA-II: A Computer
Code for Predicting the Probability of Reactor Pressure Vessel Failure, NUREG/CR-4486 (prepared by
Pacific Northwest Laboratory). Washington, D.C.: Nuclear Regulatory Commission.
EXPERT OPINION IN PROBABILISTIC STRUCTURAL MECHANICS

C. (RAJ) SUNDARARAJAN

1. INTRODUCTION
Engineers use their experience and judgment routinely in the informal safety evaluations of structures
but formalized methods of collecting, analyzing, and aggregating the opinions of a number of experts
are relatively new to probabilistic structural mechanics (PSM). Use of formal expert opinion surveys
in the quantitative assessment of structural reliabilities is the subject of this chapter.
Expert opinion surveys have been used for many years in military intelligence, medicine, weather
forecasting, and economics, with differing levels of success. Forecasts and outcomes of military intel-
ligence are seldom publicized and therefore no conclusions can be made about the success of expert
opinion in military intelligence matters. Expert opinion seems to have a good level of success in
medicine and weather forecasting. In economics, on the other hand, the level of success is not good.
Predictions of economic outlook such as inflation rates, interest rates, and stock prices have been erratic.
Considerable research has been done on how best to conduct expert opinion surveys but most of the
work relates to areas other than PSM (Mosleh et al., 1987; Meyer and Booker, 1991; Cooke, 1991b).
Conclusions of these research projects may not necessarily be directly applicable to PSM but could be
adapted. This chapter discusses those aspects of expert opinion survey procedures that are applicable
to PSM.
2.1. Notations
A Aggregated value of Ai
Ai 5% confidence level estimate of failure probability by ith expert
B Aggregated value of Bi
Bi 95% confidence level estimate of failure probability by ith expert
F Aggregated value of Fi
262 Expert Opinion in Probabilistic Structural Mechanics
2.2. Abbreviations
We start with a discussion of the biases in expert opinion because an understanding of biases is essential
for developing and organizing an effective expert opinion survey. There are two major categories of
biases: motivational bias and cognitive bias.
1. Overconfidence: An expert is usually overconfident (Fischhoff, 1982; Morgan and Henrion, 1990; Cooke,
1991a). For example, if a mean and standard deviation are to be estimated, experts tend to underestimate
the standard deviation. If a best estimate and 90% confidence bounds are to be estimated, experts tend to
estimate narrower bounds than what they actually should be.
2. Personal knowledge: Experts tend to overestimate the probability of rare events if they had personally
observed such an event at some time in the past. For example, an engineer may overestimate the failure
probability of a very reliable structure if he or she had personal knowledge of the failure of such a structure,
irrespective of the fact that the failure was the only one to occur in many years.
3. Perception of low- and high-probability events: In weather forecasting, experts are found to underestimate
the probability of low-probability events and overestimate the probability of high-probability events (Mur-
phy and Winkler, 1977). This could also be true in structural failure probability estimates.
4. Severe consequence events: In the medical field, experts are found to overestimate the probability of events
with severe consequences. In PSM applications, this may mean that experts will overestimate the probability
of low-probability failures in high-consequence applications such as in nuclear power plants. This, however,
is in contradiction to the preceding observation that low-probability events are underpredicted.
5. Familiarity: Engineers and scientists tend to give more credibility to an analysis procedure or computer
model with which they are familiar, as opposed to other procedures and models (Bonano et al., 1989).
Results from familiar procedures and models are considered more accurate than they really are. Thus an
expert opinion is biased toward the results of a single procedure even if more accurate procedures are
available. This type of bias could be reduced by providing the experts with results from a number of
alternate procedures and models during the preelicitation briefing.
6. Relative size of population: Sometimes, in estimating failure probabilities, engineers tend to consider the
number of failures only, without accounting for the total population of the items (Tversky and Kahneman,
1982). For example, if an engineer has witnessed more weld failures in piping than in pressure vessels, he
or she may predict a higher failure probability for piping welds even though, in fact, the reason for the
larger number of piping weld failures could simply be that there are many more piping welds than pressure
vessel welds in that plant.
7. Anchoring: Some experts fix on (anchor to) an opinion (e.g., a failure probability value) and refuse to revise
that opinion even when new information becomes available or conditions change.
Both motivational and cognitive biases can be reduced from the aggregated results of the survey
during the analysis and aggregation of the expert estimates. All the failure probability estimates are
tabulated or plotted and if a few of the estimates fall well away from the others, it could be an indication
of bias.¹ Check to see whether those who provided biased estimates have a conflict of interest. If so,
such estimates may be dropped before aggregation. Alternatively, those estimates may be discussed with
the experts to see if there is a rationale for them. It is possible that those who made those estimates have
superior knowledge about the problem and so their estimates are more accurate than the other estimates.
There is also another possibility: Some of the experts may be making their estimates on the basis of
assumptions, definitions, boundary conditions, etc. that are different from those of the other experts.
This situation could be avoided by discussing the assumptions, definitions, etc. with each expert, or
with all the experts as a group, and making sure that all of them understand the problem correctly.
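The tabulate-and-inspect screening described above can be automated. The sketch below is a hypothetical helper, not from the chapter: it works in log10 space, since failure probabilities span orders of magnitude, and flags any estimate lying well away from the group median. (The footnote in this section points to the multiple range test as a more rigorous alternative.)

```python
import math
import statistics

def flag_suspect_estimates(estimates, k=3.0):
    """Flag failure-probability estimates that fall well away from the others.

    Screens in log10 space and flags any estimate lying more than `k`
    scaled median-absolute-deviations from the group median.  A flagged
    estimate is only a *candidate* for bias; it must still be discussed
    with the expert who provided it before being dropped or retained.
    """
    logs = [math.log10(e) for e in estimates]
    center = statistics.median(logs)
    mad = statistics.median([abs(x - center) for x in logs])
    if mad == 0.0:
        return [False] * len(estimates)  # all estimates essentially agree
    return [abs(x - center) / (1.4826 * mad) > k for x in logs]
```

A flagged estimate would then be checked for a conflict of interest, or for differing assumptions and boundary conditions, exactly as described above.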
A flow diagram of a typical expert opinion survey is given in Fig. 12-1. Each of the tasks in the flow
diagram is discussed in Sections 8 to 13. Prior to discussing the different tasks, personnel requirements
for leading the survey and the different modes of elicitation are discussed in Sections 6 and 7,
respectively.
6. PROJECT PERSONNEL
The number of key project personnel required to conduct an expert opinion survey depends on the
scope of the survey and the mode of elicitation. In smaller surveys for eliciting failure probability
estimates of a small group of related structures, a single engineer knowledgeable about those structures,
statistical methods of aggregating the expert estimates, and the fundamentals of expert opinion elicitation
may act as the sole survey leader. If an engineer with these qualifications is not available, two individ-
uals-one who is knowledgeable about the structures under consideration and another with expertise
in statistical methods and expert opinion survey procedures-may be needed. The former is known as
the substantive leader (or substantive analyst) and the latter is known as the normative leader (or
normative analyst). In multidisciplinary surveys, for example, those used in probabilistic risk assessment
of nuclear power plants (which require expert opinion in such diverse fields as nuclear physics, structural
engineering, thermal-hydraulics, and atmospheric sciences), a number of substantive leaders and one or
¹A more rigorous mathematical approach to detect bias is to use the multiple range test (Milton and Arnold, 1986).
more normative leaders would be required to select experts, prepare questionnaires, brief experts, mod-
erate group meetings, and analyze, aggregate, and document the expert opinion.
7. MODES OF ELICITATION
Expert opinion may be elicited through mail, one-to-one interviews, group meetings, individual tele-
phone calls, conference telephone calls, video conferences, or computer networks. A combination of
these modes may also be employed. For example, a group meeting of experts is conducted first, at
which substantive and normative leaders brief the experts and give them the questionnaire and sup-
porting technical documents. The experts go back to their offices and return the completed questionnaire
by mail.
In some projects the complete survey is conducted by mail. Briefing materials and questionnaire are
mailed to the experts and they mail their opinions back to the survey leaders. When the number of
experts participating in the survey is in the hundreds, economy may dictate this mode of elicitation.
Figure 12-1. Flow diagram of a typical expert opinion survey. (Tasks shown: selection of experts; preelicitation briefing and discussions; elicitation; analysis and aggregation; feedback to experts, iterated if needed; final documentation.)
Conference telephone calls and video conferences are alternatives to group meetings. Although these
methods are economical, they lack personal interaction between the experts and between experts and
survey leaders. As is discussed in Sections 12 and 14, lack of personal interaction has advantages and
disadvantages.
With the proliferation of personal computers, computer networking has also emerged as a mode
of elicitation (Sundararajan and Gupta, 1991). Briefing materials and questionnaires from the survey
leaders to the experts and the completed questionnaires from the experts to the survey leaders are
transmitted through the network. Analysis and aggregation of the opinions and the final documentation
of the survey are all done on the computer itself. Some projects assemble a group meeting first for
briefing and discussions among the experts and then conduct the rest of the survey through computer
networking.
8. SELECTION OF EXPERTS
Experts participating in structural failure probability surveys should have a number of years of expe-
rience in the analysis, design, or testing of the types of structures being considered. Sometimes aca-
demics and researchers in related areas are also included. The experts participating in the survey need
not necessarily be knowledgeable in probabilistic structural mechanics but a knowledge of the basic
concepts such as the relationship between failure probability, factor of safety, and material property
variations would be helpful. (Such fundamental concepts could be discussed in the preelicitation
briefing.)
How many experts should be included in the survey? The author recommends at least 3 experts but
preferably 6 to 12. Every effort should be made to include experts with diverse backgrounds, experience,
and employment. Some surveys have elicited opinions from more than 100 experts. In the opinion of
the author, quality is more important than quantity. Individuals lacking the necessary expertise could
degrade the quality and accuracy of the aggregated failure probability estimates.
If the survey involves the estimation of the failure probabilities of a variety of structures (e.g.,
concrete shear walls, steel frames, instrument racks, piping, and pressure vessels), then at least 3 experts
(preferably 6 to 12) in each type of structure should be surveyed. Some experts could be knowledgeable
in more than one type of structure and provide estimates for the failure probabilities of all those
structures.
9. PREELICITATION BRIEFING AND DISCUSSION
The preelicitation briefing is conducted in person at a group meeting, or the briefing materials are
mailed to the experts. Videoconferences are used in some surveys. The briefing is conducted by one or
more survey leaders. The briefing may include the following information.
The level of briefing depends on the complexity of the technical issues and the survey. Sometimes
specialists from industry, universities, and research laboratories are invited to make presentations on 1
or more of the above 15 items. Copies of technical reports and technical papers or summaries or lists
of such documents are also given to the experts.
In addition to the briefing, a discussion among the experts is also desirable. The discussion, mod-
erated by a survey leader, gives an opportunity for the experts to exchange information. In surveys on
structural failure probabilities, discussion of failure modes is common and useful. Approximations in
the structural analysis, conservatism in the design specifications, material property variations, load var-
iations, design and construction errors, field and test data, and structural reliability analysis are some
of the other topics of discussion.
If the briefing is by mail, experts are given sufficient time to study the materials and answer the
questionnaire. If the briefing is at a group meeting, the experts may be asked to provide their responses
to the questionnaire immediately after the briefing and discussion, the next day, or weeks or even months
after the briefing. If the technical issues are complex and much briefing material is given, then a longer
period of time between briefing and elicitation is necessary. This gives an opportunity for the experts
to study the briefing materials, perform their own literature surveys and analyses, if necessary, and also
discuss the problem with colleagues.
10. ELICITATION
10.1. Questionnaire
The organization, formulation, and the specific wording of the questions are important. Both substantive
and normative leaders of the survey participate in the preparation of the questionnaire.
If the scope of the survey is elicitation of structural failure probabilities, one or more of the following
estimates are usually requested.
• Mean (or median) of the failure probability (this is the expert's best estimate)
• The 90% confidence bounds for the failure probability (this reflects the level of confidence the
expert has on his or her best estimate)
• Probability distribution type (normal, lognormal, exponential, etc.); this is a difficult item and
some surveys do not request it
• Standard deviation of the distribution (this is also a difficult parameter to estimate [Booker and
Meyer, 1991]; some surveys do not request it but derive it from the 90% confidence bounds and
the mean)
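The last bullet notes that the standard deviation can be derived from the best estimate and the 90% confidence bounds rather than elicited directly. As one hedged illustration, if the failure probability is modeled as lognormal (a common assumption in this field), with the best estimate read as the median and the bounds as the 5% and 95% quantiles, the derivation is:

```python
import math

Z90 = 1.645  # standard normal quantile for a symmetric 90% interval

def log_sigma_from_bounds(a, b):
    """Log-standard deviation implied by 90% bounds (a, b) of a lognormal."""
    return (math.log(b) - math.log(a)) / (2.0 * Z90)

def lognormal_mean_and_sd(median, a, b):
    """Mean and standard deviation of a lognormal failure probability,
    given its median and its 90% confidence bounds."""
    mu = math.log(median)
    sigma = log_sigma_from_bounds(a, b)
    mean = math.exp(mu + 0.5 * sigma ** 2)
    sd = mean * math.sqrt(math.exp(sigma ** 2) - 1.0)
    return mean, sd
```

For example, bounds of 10⁻⁵ and 10⁻³ around a median of 10⁻⁴ imply a log-sigma of about 1.4, and a mean noticeably above the median, as expected for a right-skewed distribution.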
In addition to providing the aforementioned estimates, the experts should also be requested to provide
their rationale for the estimates. The following information is often requested.
1. Failure modes considered and, optionally, failure probability estimate for each mode
2. Causes of failure (e.g., design errors, construction errors, corrosion, excessive load)
3. Technical basis for the estimate (e.g., field data, test results, structural reliability analysis results); the
database or reference should be specified
4. Sources of uncertainty in the failure probability estimate (e.g., limited amount of field data, differences
between test conditions and field conditions, approximations in the reliability analysis technique)
10.2. Decomposition
Instead of considering the cumulative effect of all contributing factors in a single step, the effect of each factor
is estimated separately and then combined.²
During the preelicitation briefing, survey leaders explain decomposition principles and advantages
to the experts and, sometimes, recommend suitable decompositions for the problems at hand.
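Under the lognormal assumption mentioned in the footnote, decomposed effects combine multiplicatively: the median of the total effect is the product of the factor medians, and for independent factors the log-standard deviations combine in root-sum-square fashion. A minimal sketch (the function name is illustrative):

```python
import math

def combine_lognormal_factors(medians, log_sigmas):
    """Combine independent multiplicative factors, each lognormal.

    The product of independent lognormal variables is itself lognormal,
    with median equal to the product of the factor medians and
    log-standard deviation equal to the root-sum-square of the
    individual log-standard deviations.
    """
    total_median = math.prod(medians)
    total_sigma = math.sqrt(sum(s * s for s in log_sigmas))
    return total_median, total_sigma
```

This lets each expert judge one factor at a time, which is usually easier than judging the combined effect directly.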
10.3. Communication
Each expert's opinion is communicated to the survey leader by mail, by telephone, through computer
networks, or at one-to-one meetings. In some surveys the experts give their opinions openly at a group
meeting. This, of course, does not provide confidentiality and could introduce motivational-type biases,
as discussed in Section 4. Some surveys provide total anonymity; experts send their opinions by mail
anonymously.
11. ANALYSIS AND AGGREGATION
11.1. Aggregation
A number of methods are available for aggregating expert opinion. Some of those that are applicable
to failure probability estimates are discussed here.
Let the number of experts be n. Let the ith expert's best estimate³ and 90% confidence bounds be
Fi, Ai, and Bi, respectively. Mean, median, or geometric mean may be used as the aggregate for each
of these three parameters. A further refinement may be made to the aggregation procedure by assigning
weights (Wi) to each expert and computing the weighted averages, where 0 < Wi < 1. Weight for each
expert is assigned on the basis of self-weighting by each expert, weighting by the other experts in the
group, weighting by peers who are not part of the survey, or weighting by survey leaders.
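These aggregation choices can be sketched as follows; `aggregate` below is a hypothetical helper computing the (optionally weighted) arithmetic and geometric means of the experts' best estimates:

```python
import math

def aggregate(estimates, weights=None):
    """Weighted arithmetic and geometric means of expert estimates.

    Weights default to equal and are normalized to sum to 1, so any
    positive relative weights may be supplied.
    """
    n = len(estimates)
    if weights is None:
        weights = [1.0] * n
    total = sum(weights)
    weights = [w / total for w in weights]
    arith = sum(w * f for w, f in zip(weights, estimates))
    geo = math.exp(sum(w * math.log(f) for w, f in zip(weights, estimates)))
    return arith, geo
```

The geometric mean is often preferred for failure probabilities because it is less dominated by a single high estimate when the values span orders of magnitude.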
In lieu of mean, median, or geometric mean, the following aggregation rule may be used (Seaver,
1978; Bordley, 1982). The aggregate value F is given by
F = g/(g + h)   (12-1)

where

g = (F1 × F2 × ··· × Fn)^(1/n)   (12-2)
²A lognormal distribution for the failure probability is assumed in seismic fragility analysis (see Chapter 19). Therefore the
median value of the total effect is equal to the product of the median values of the individual effects. Similar combination
procedures can also be derived for other distributions.
³Either mean or median; specify which.
h = [(1 − F1) × (1 − F2) × ··· × (1 − Fn)]^(1/n)   (12-3)
Equation (12-1) can be used to aggregate Ai and Bi also. Seaver (1978) has shown Eq. (12-1) to be a
good rule for aggregating probability estimates.
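Equations (12-1) to (12-3) translate directly into code. A sketch, assuming n equally credible experts:

```python
import math

def multiplicative_aggregate(estimates):
    """Seaver/Bordley multiplicative rule of Eq. (12-1):
    F = g / (g + h), where g is the geometric mean of the estimates
    (Eq. 12-2) and h the geometric mean of their complements (Eq. 12-3).
    """
    n = len(estimates)
    g = math.prod(estimates) ** (1.0 / n)
    h = math.prod(1.0 - f for f in estimates) ** (1.0 / n)
    return g / (g + h)
```

When all experts agree, the rule returns their common estimate; for the small probabilities typical of structural failure it behaves much like a geometric mean.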
If the experts estimate probability distributions for the failure probability value (instead of best
estimates and 90% confidence bounds), then the probability distributions are aggregated according to
methods presented by Winkler (1981), Genest and Zidek (1986), Lindley and Singpurwalla (1986), or
Lind and Nowak (1988).
11.2. Display
Each expert's estimate and the aggregated value are displayed in tabular or graphical form. A good
display can help the experts assimilate the survey results quickly and effectively and thus make rational
revisions of their own estimates in the next iteration. Some forms of graphical displays are described
here.
11.2.1. Range Display. If the number of experts is small, then each expert's probability estimate
can be displayed in a single figure (see Fig. 12-2). The circle indicates the best estimate (Fi) and the
end points of the line are the 90% confidence bounds (Ai and Bi). Although the identity of who provided
which estimate is not revealed, each expert can see his or her own estimate in comparison with
the others.
If more than one failure probability estimate is made (e.g., failure probabilities of a number of
different structures), then failure probability estimates of each structure are displayed in a separate figure.
11.2.2. Histogram. Histograms are used to display a summary of the probability estimates. A
histogram for Fi is shown in Fig. 12-3. Similar histograms can also be constructed for Ai and Bi.
Histograms display the central tendency and the dispersion of the estimates.
Histograms can be used whether the number of experts is small or large.
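Because the estimates span orders of magnitude, histogram bins are naturally taken per decade of failure probability. A small illustrative helper (not from the chapter) for tallying such bins:

```python
import math
from collections import Counter

def decade_histogram(estimates):
    """Count estimates per decade of failure probability.

    Key -4, for example, collects values in [1e-4, 1e-3).
    """
    bins = Counter(math.floor(math.log10(e)) for e in estimates)
    return dict(sorted(bins.items()))
```

The resulting counts convey the central tendency and dispersion of the estimates, as a figure like Fig. 12-3 does graphically.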
11.2.3. Line, Box, and Circle Display. The line, box, and circle display (also known as box and
whisker display) (Vo et al., 1991) is particularly useful if summaries of failure probability estimates for
a number of structures are to be displayed together so that the failure probabilities of different structures
can be compared quickly. A sample display is shown in Fig. 12-4. It shows the summary of failure
probability estimates of 10 different piping segments in a nuclear power plant.
The following information is displayed for each structure.
12. FEEDBACK
Once the experts' estimates are aggregated, the consolidated results are fed back to the experts. Sometimes
the individual estimates are also fed back to the experts in addition to the consolidated results of
the survey. In either case it is best not to reveal which expert provided which estimate. Not only the
failure probabilities but also a summary of the rationale and technical bases supplied by the experts are
communicated to the experts. Such information may include the following.
The experts review the feedback information and may revise their original estimates, if necessary.
This iterative process of feedback and revision may continue until the differences between the experts
have narrowed substantially. Most surveys, however, stop after the first iteration.
In some surveys the individual estimates are fed back to the experts without anonymity at a group
meeting. The group discusses the individual estimates and rationale, expressing support or disagreement,
Figure 12-2. Sample range display of expert estimates (failure probability, 10⁻⁶ to 10⁻²). Aggregated results: Mean of F = 9.5 × 10⁻⁴, Median of F = 5.0 × 10⁻⁴, Std. dev. of F = 4.3 × 10⁻³; Mean of A = 1.5 × 10⁻⁴, Median of A = 7.5 × 10⁻⁵; Mean of B = 4.6 × 10⁻³, Median of B = 1.0 × 10⁻³.
Figure 12-3. Sample histogram of best estimates (number of experts versus failure probability, best estimate F). Aggregated results: Mean = 8.3 × 10⁻⁴, Median = 5.0 × 10⁻⁴, Std. dev. = 2.4 × 10⁻³.
Figure 12-4. Sample line, box, and circle display, showing failure-rate summaries (failures/yr) for pipe segments such as: accumulator first-isolation valve to RCS loop; pipe segment between first and second isolation valves; accumulator discharge to RHR connection; accumulator discharge to sample/drain lines; accumulator to supply valves; supply valve to RWST isolation valve. (Source: Vo, T. V., P. G. Heasler, S. R. Doctor, F. A. Simonen, and B. F. Gore [1991]. Estimates of rupture probabilities for nuclear power plant components: Expert judgment elicitation. Nuclear Technology 96:259-271. Copyright 1991 by the American Nuclear Society, La Grange Park, Illinois. Reprinted with permission.)
and tries to reach a consensus under the moderation of the survey leader. The advantage of this approach
is that the experts have the opportunity to listen to each expert's estimate and the rationale. Also, each
expert has the opportunity to hear supporting and opposing views from the other experts. This could
lead to a more "intelligent and educated" revision of the initial estimate and a consensus position. The
drawback is that one or a few individuals could dominate the discussion and "force" their views on
the others. This would result in final consensus estimates that are closer to the dominant individual's
views irrespective of the merits of those views.
13. DOCUMENTATION
Complete and full documentation is important because anyone who uses the survey results (e.g., plant
risk analysts who use the structural failure probabilities in the plant risk model) may want to know the
rationale and bases for the expert estimates. Also, if at a later date the plant risk assessment is reviewed,
the reviewers would want to know the rationale and bases for the estimates.
Documentation is also important if the expert estimates are to be revised at a later date because new
data become available or conditions relating to the problem change. The original experts (or new ones)
may be called on to make revised estimates and the documentation of the initial survey will be useful.
Documentation should include the following.
14. THE DELPHI METHOD
The Delphi method (Linstone and Turoff, 1975) is a special case of the expert opinion survey procedures
described in the preceding sections. It is one of the most widely used methods. The Institute of Electrical
and Electronics Engineers (IEEE, 1977) claims that over 2500 surveys have been conducted using the
Delphi method.
In the Delphi method, the selected experts mail their failure probability estimates anonymously. A
summary of the estimates (or the summary, individual estimates, and rationale) is mailed back to the
experts and the experts mail their revised estimates anonymously. This process is repeated until the
differences between individual estimates are considered reasonable by the survey leaders. This approach
is particularly attractive if the number of experts is large and the budget does not allow for group
meetings. Some survey organizers also consider the fact that there is no interaction between the experts
as an advantage because no one can dominate or unduly influence the experts. (Some consider the lack
of interaction a drawback.) Anonymity offered by this procedure is also considered an advantage by
some; experts give their opinions without fear of what their employers or other experts or survey leaders
may think of their opinion.
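The stopping condition ("differences ... considered reasonable by the survey leaders") must be made concrete in practice. One hypothetical rule, not prescribed by the chapter, is to stop once all estimates agree within a chosen number of orders of magnitude:

```python
import math

def spread_in_decades(estimates):
    """Range of the estimates in orders of magnitude (log10 units)."""
    return math.log10(max(estimates)) - math.log10(min(estimates))

def delphi_converged(estimates, max_decades=1.0):
    """Survey-leader stopping rule sketch: stop iterating once all
    estimates lie within `max_decades` orders of magnitude of each other."""
    return spread_in_decades(estimates) <= max_decades
```

Such a rule keeps the decision to end the iteration explicit and reproducible, though survey leaders may well apply judgment beyond any single number.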
Although the Delphi method is popular, there are some expert opinion specialists who advise against
it in many situations (Meyer and Booker, 1991). They believe that the method is prone to more biases
than other approaches. The lack of direct preelicitation briefing is also considered a drawback by some.
15. APPLICATIONS
Some expert opinion surveys related to failure probability estimation are described here. A major example
is the NUREG-1150 severe accident risk study (NRC, 1989), in which issues were selected for expert
opinion elicitation on the basis of the following criteria.
1. Preliminary studies indicated that the issue had a significant impact on nuclear power plant PRAs.
2. Very little or no data from field experience, laboratory tests, or computational models were available, or
there was no consensus or broad agreement about the results from field experience, laboratory tests, or
computational models.
On the basis of these criteria, a number of issues were identified. These issues covered a wide spectrum
of disciplines including systems analysis, structural engineering, thermal hydraulics, and nuclear physics.
A total of 38 experts was selected and grouped into 5 panels representing 5 broad categories of
issues. One of the panels, consisting of four experts, was on containment structure reliability.
All the experts were briefed at the first group meeting by normative leaders of the survey. The
briefing included information on the expert opinion survey procedure and subjective assessment of
probabilities. Biases and decomposition principles were two of the topics covered during the briefing.
A second group meeting was then held at which technical issues were presented by substantive
leaders of the survey. This briefing covered information on available data sources, computational models,
and experimental results. Specialists from universities, government agencies, national laboratories, and
industry also made presentations on specific technical issues. The experts were also provided with a
number of technical reports and papers.
Expert opinion elicitation took place at the third group meeting held a few months after the second
meeting. This interval was used by the experts to study the technical reports and other information they
received at the briefings. Some of the experts also performed their own analyses to gain a better
understanding of the subject and make more informed judgments. Experts documented their studies and
analyses in the form of brief summaries and/or extensive reports. These summaries and reports were
circulated among the experts within each of the five panels.
At the third group meeting, each expert presented his or her opinion about the data sources, exper-
imental results, and computational models available in literature. The experts also presented the results
of any analyses they performed. However, the experts did not present their failure probability or other
numerical estimates. Finally, they discussed and reached consensus on the elicitation variables; that is,
what numerical estimates were to be made. For example, for the containment reliability issue, the four
experts on containment reliability agreed that failure probabilities of three distinct failure modes, namely,
leak, rupture, and catastrophic rupture, should be estimated. They also reached consensus on a definition
of each failure mode so that there was no ambiguity.
The group meeting ended with the consensus on elicitation variables. The survey leaders then met
with each expert separately in a quiet room and received the expert's numerical estimates and rationale.
Survey leaders made written records of the estimates and rationale. This written record was later sent
to the expert for his or her signature to assure that the expert's opinion was correctly documented.
Finally, the numerical estimates were aggregated by simple averaging.
Detailed discussions of the expert opinion survey procedure used in the NUREG-1150 project may
be found in Wheeler et al. (1989), Harper et al. (1989), Hora and Iman (1989), and Ortiz et al. (1991).
16. CONCLUSION
Expert opinion is playing an increasingly important role in probabilistic structural mechanics. When
sufficient objective, quantitative information from field experience, test data, or structural reliability
analysis is not available, analysts turn to expert opinion for structural failure probability estimates.
Even when directly related objective data and results are not available, experts are able to integrate and
interpolate any related data and qualitative information to make educated estimates of the failure prob-
ability. How good are the expert estimates of failure probabilities? It is difficult to say because what is
usually estimated is the very low failure probabilities of highly reliable structures. Sufficient field data
to verify the expert estimates may take many years to accumulate. Can we compare the estimates with
existing failure data? The experts are aware of the existing data and their estimates are, in part, based
on these data. Therefore a comparison with existing data will not provide a true measure of the accuracy
of the estimates. In spite of the lack of validation, expert opinion can be useful when other methods of
failure probability prediction are impractical or as a complement to the other methods.
REFERENCES
ARMSTRONG, J. S. (1985). Long-Range Forecasting: From Crystal Ball to Computer. New York: John Wiley &
Sons.
ARMSTRONG, J. S., W. B. DENNISTON, and M. M. GORDON (1975). Use of the decomposition principle in making
judgments. Organizational Behavior and Human Performance 14:257-263.
BONANO, E. J., S. HORA, R. L. KEENEY, and D. VON WINTERFELDT (1989). Elicitation and Use of Expert Judgment
in Performance Assessment for High-Level Radioactive Waste Repositories. NUREG/CR-5411. Washington,
D.C.: Nuclear Regulatory Commission.
BOOKER, J. M., and M. A. MEYER (1990). Common problems in the elicitation and analysis of expert opinion
affecting probabilistic safety assessments. In: Proceedings of the CSNI Workshop on PSA Applications and
Limitations. NUREG/CP-0115. Washington, D.C.: Nuclear Regulatory Commission.
BOOKER, J. M., and M. A. MEYER (1991). A framework for using expert judgment as data. Statistical Computing
and Statistical Graphics Newsletter 2(1).
BORDLEY, R. F. (1982). A multiplicative formula for aggregating probability assessments. Management Science
28:1137-1148.
COMER, M. K., D. A. SEAVER, W. G. STILLWELL, and C. D. GADDY (1984). Generating Human Reliability
Estimates Using Expert Judgments. NUREG/CR-3688. Washington, D.C.: Nuclear Regulatory Commission.
COOKE, R. M. (1991a). Expert Judgment Study on Atmospheric Dispersion and Deposition. Report No. 91-81.
Delft, the Netherlands: Delft University of Technology.
COOKE, R. M. (1991b). Experts in Uncertainty: Expert Opinion and Subjective Probability in Science. Oxford,
England: Oxford University Press.
EMBREY, D. E. (1983). The Use of Performance Shaping Factors and Quantified Expert Judgment in the Evaluation
of Human Reliability: An Initial Appraisal. NUREG/CR-2986. Washington, D.C.: Nuclear Regulatory
Commission.
EPRI (1986). Seismic Hazard Methodology for the Central and Eastern United States, Vol. 1. Report No.
NP-4726. Palo Alto, California: Electric Power Research Institute.
FISCHHOFF, B. (1982). Debiasing. In: Judgment under Uncertainty: Heuristics and Biases. D. Kahneman, P. Slovic,
and A. Tversky, Eds. Cambridge, England: Cambridge University Press.
GENEST, C., and J. V. ZIDEK (1986). Combining probability distributions: a critique and an annotated bibliography.
Statistical Science 1:114-148.
GEORGE, L. L., and R. W. MENSING (1980). Using subjective percentiles and test data for estimating fragility
functions. In: Proceedings of the DOE Statistical Symposium (Berkeley, California). Washington, D.C.: De-
partment of Energy.
HARPER, F. T., et al. (1989). Evaluation of Severe Accident Risks: Quantification of Major Input Parameters:
Experts' Determination of Structural Response Issues, Vol. 2, Part 3. NUREG/CR-4551. Washington, D.C.:
Nuclear Regulatory Commission.
HORA, S., and R. L. IMAN (1989). Expert opinion in risk analysis: The NUREG-1150 methodology. Nuclear Science
and Engineering 102:323-331.
HORA, S., D. VON WINTERFELDT, and K. TRAUTH (1991). Expert Judgment on Inadvertent Human Intrusion into
the Waste Isolation Pilot Plant. Report No. SAND-90-3063. Albuquerque, New Mexico: Sandia National
Laboratory.
IEEE (1977). IEEE Guide to the Collection and Presentation of Electrical, Electronic, and Sensing Component
Reliability Data for Nuclear-Power Generating Stations. New York: John Wiley & Sons.
LIND, N. C., and A. S. NOWAK (1988). Pooling expert opinions on probability distributions. ASCE Journal of
Engineering Mechanics 114(2):328-341.
LINDLEY, D. V., and N. D. SINGPURWALLA (1986). Reliability (and fault tree) analysis using expert opinions. Journal
of the American Statistical Association 81(393):87-90.
LINSTONE, H. A, and M. TUROFF (1975). The Delphi Method: Techniques and Applications. Reading, Massachu-
setts: Addison-Wesley.
MENSING, R. W. (1981). Seismic Safety Margins Research Program: Phase I Final Report-the Use of Subjective
Input, Vol. 10. NUREG/CR-2015. Washington, D.C.: Nuclear Regulatory Commission.
MERKHOFER, M. W., and A. K. RUNCHAL (1989). Probability encoding: Quantifying judgmental uncertainty over
hydrologic parameters for basalt. In: Proceedings of the Conference on Geostatistical Sensitivity and Un-
certainty Methods for Ground Water Flow and Radionuclide Transport Modeling. B. E. Buston, Ed. Colum-
bus, Ohio: Battelle Press, pp. 629-648.
MEYER, M., and J. BOOKER (1991). Eliciting and Analyzing Expert Judgment: A Practical Guide. New York:
Academic Press.
MILLER, G. A. (1956). The magical number seven, plus or minus two: some limits on our capacity for processing
information. Psychological Review 63:81-97.
MILTON, J. S., and J. C. ARNOLD (1986). Probability and Statistics in the Engineering and Computer Sciences.
New York: McGraw-Hill.
MOHAMMADI, J., A. LONGINOW, and T. A. WILLIAMS (1991). Evaluation of system reliability using expert opinions.
Structural Safety 9:227-241.
MORGAN, M., and M. HENRION (1990). Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and
Policy Analysis. Cambridge, England: Cambridge University Press.
MOSLEH, A., et al. (1987). Methods for Elicitation and Use of Expert Opinion in Risk Assessment. NUREG/CR-
4962. Washington, D.C.: Nuclear Regulatory Commission.
MURPHY, A. H., and R. L. WINKLER (1977). Reliability of subjective probability forecasts of precipitation and
temperature. Journal of the Royal Statistical Society (Series C) 26(1):41-47.
NRC (1989). Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants. NUREG-1150. Washing-
ton, D.C.: Nuclear Regulatory Commission.
ORTIZ, N. R., T. A. WHEELER, R. J. BREEDING, S. HORA, M. A. MEYER, and R. L. KEENEY (1991). Use of expert
judgment in NUREG-1150. Nuclear Engineering and Design 126:313-331.
SEAVER, D. A (1978). Assessing Probability with Multiple Individuals: Group Interaction versus Mathematical
Aggregation. SSRI Research Report 78-3. Los Angeles: University of Southern California.
SEAVER, D. A, and W. G. STILLWELL (1983). Procedures for Using Expert Judgment to Estimate Human Error
Probabilities in Nuclear Power Plant Operations. NUREG/CR-2743. Washington, D.C.: Nuclear Regulatory
Commission.
STILLWELL, W. G., D. A SEAVER, and J. P. SCHWARTZ (1982). Expert Estimation of Human Error Probabilities in
Nuclear Power Plant Operations: A Review of Probability Assessment and Scaling. NUREG/CR-2255. Wash-
ington, D.C.: Nuclear Regulatory Commission.
SUNDARARAJAN, C. (1988). A Fuzzy Set Approach to Seismic Safety Assessment. Report to the Nuclear Regulatory
Commission, Washington, D.C.
SUNDARARAJAN, C. (1995). Uncertainties in piping frequency analysis. International Journal for Fuzzy Sets and
Systems (accepted for publication).
SUNDARARAJAN, C., and P. GUPTA (1991). Procedures for a Computerized Expert Opinion Survey. Humble, Texas:
EDA Consultants.
TRAum, K. M., R. P. RECHARD, and S. HORA (1991). Expert judgment as input to waste isolation pilot plant
performance assessment calculations: Probability distributions of significant parameters. In: Mixed Waste-
Proceedings of the 1st International Symposium, Baltimore, Maryland.
TVERSKY, A, and D. KAHNEMAN (1982). Evidential impact of base rates. In: Judgment under Uncertainty: Heu-
ristics and Biases. Cambridge, England: Cambridge University Press.
Expert Opinion in Probabilistic Structural Mechanics 279
VO, T. V., P. G. HEASLER, S. R. DOcrOR, F. A. SIMONEN, and B. F. GORE (1991). Estimates of rupture probabilities
for nuclear power plant components: Expert judgment elicitation. Nuclear Technology 96:259-27l.
WHEELER, T. A., S. C. HoRA, W. R. CRAMOND, and S. D. UNWIN (1989). Analysis of Core Damage Frequency
from Internal Events: Expert Judgment Elicitation, Vol. 2. NUREG/CR-4550. Washington, D. C.: Nuclear
Regulatory Commission.
WINKLER, R. L. (1981). Combining probability distributions from dependent information sources. Management
Science 27(4):987-997.
13
FUZZY SETS IN PROBABILISTIC
STRUCTURAL MECHANICS
FABIAN C. HADIPRIONO
1. INTRODUCTION
Almost 30 years ago, Zadeh (1965) introduced the concept of fuzzy sets. In ordinary set theory, an
element is either a member or not a member of a set (in logic, it is indicated by either 1 or 0). In the
fuzzy set, whether or not an element belongs to a set may be expressed not just by 1 or 0, but by any
value between 0 and 1, indicating different degrees of membership of the element. For example, 0
indicates nonmembership, 0.3 a weak membership, 0.9 a strong membership, and 1.0 a crisp membership.
Building on Zadeh's work, Yager (1986) offers the following definition:
A fuzzy set is a generalization of the ideas of an ordinary or crisp set. A fuzzy subset can be seen as a predicate
whose truth values are drawn from the unit interval, I = [0, 1], rather than the set {0, 1} as in the case of an
ordinary set. Thus a fuzzy subset has as its underlying logic a multi-valued logic.
Hence, in the context of probability theory, although the fuzzy set concept has a basis similar to that
of the classic probability concept, the former departs somewhat from the latter. Furthermore, conventional
set theory is based primarily on objective information, whereas fuzzy set theory operates on the
premise of subjective judgment. Thus, probability assessments are often characterized by
probabilistic phrases, such as probable, likely, unlikely, about 0.4, and approximately 10⁻², which are
often difficult to quantify. They are linguistic probability values assessed by subjective judgment. They
reflect the way humans think and express a certainty value in an imprecise but useful way.
The use of these expressions and phrases is abundant in a multitude of disciplines; however, their
usefulness was not fully capitalized on until the introduction of the fuzzy set concept. In terms of probability,
the need for linguistic phrases for uncertainties emerges when crisp probabilities are not available,
or when the event of interest is subjective in nature.
Blockley (1975, 1977), Brown (1979, 1980), and Yao (1981) were among the first to apply fuzzy
set concepts to probabilistic structural mechanics. Later, Brown and Yao (1983) demonstrated the fuzzification
of objective information and the use of fuzzy sets to assess structural damage. Concurrently,
Shiraishi and Furuta (1983) used the fuzzy set concept for structural reliability assessment. Wu et al.
(1984) assessed the safety of earth dams on the basis of results of preliminary inspections. Brown et
al. (1984) demonstrated the use of fuzzy sets in the seismic safety assessment of buildings. Shibata (1987)
applied this concept to evaluate failure modes of building components and piping damaged by earthquakes.
Another study was conducted by Frangopol et al. (1988) to predict seismic hazard on the basis
of randomness and imprecision. Hadipriono (1985, 1987) applied this concept to assess the performance
and safety of temporary structures. Boissonade et al. (1985), Dong et al. (1986), and Chiang et al.
(1987) are some of the other researchers who employed fuzzy sets in structural safety assessments.
Despite these applications, the integration of the fuzzy set concept into structural mechanics is
relatively new. This may be due to both its subjective nature and its departure from
ordinary set theory. Uncertainties in engineering (or, for that matter, in structural mechanics) are no
different from those in other areas. Many of these uncertainties have objective characteristics, but many
others are established, often more appropriately, on the basis of subjective judgment.
As an example, in some types of problems the range (lower and upper limit) of, say, a uniform
probability distribution function of a variable is often determined more realistically through the use of
subjective judgment. As another example, it would be more realistic and convenient for someone to
say that the chance of a house in San Francisco being severely damaged by the next earthquake is
highly probable rather than, say, 98%. The probability that a house is severely damaged during an
earthquake cannot be defined precisely because there is no sharp dividing line between severe damage
and no severe damage. Furthermore, objective information concerning the variables used to establish
the probability is often not quantifiable. Hence, for cases such as this, for which there is no sharp
boundary between failure and no failure and/or for which sufficient quantifiable objective data are not
available, a vague response such as highly probable would be more appropriate. Fuzzy set concepts can
be used to analyze such cases.
2.1. Notations

∨j μPj(p)   Maximum value of all μPj(p) for j = 1, 2, ...
∧j μPj(p)   Minimum value of all μPj(p) for j = 1, 2, ...
∪ (j = 1, ..., m)   Total union of membership functions
∩ (j = 1, ..., n)   Total intersection of membership functions
2.2. Abbreviations
Ant. Antecedent
ASCE American Society of Civil Engineers
Cons. Consequent
ITFM Inverse truth functional modification
MPD Modus ponens deduction
MTD Modus tollens deduction
TFM Truth functional modification
3. MEMBERSHIP FUNCTION
Zadeh (1965) indicated that human perception of many real-world problems is characterized by
imprecision. In probabilistic terms, the probability of occurrence of a fuzzy event, for example, severe
weather or moderate damage, or the fuzzy event itself, is imprecise. Therefore, a fuzzy event A is
determined by pairs of a fuzzy element and its membership value, as shown in Eq. (13-1):

A = [μA(aj)|aj] (j = 1, ..., n) = [μA(a1)|a1, μA(a2)|a2, ..., μA(an)|an];    (13-1)
(A ⊂ 𝒜, ∀aj ∈ 𝒜, j = 1, 2, ..., n)
in which aj is the fuzzy element; μA(aj) is the membership function of A, representing the degree of
membership of A; vertical bars are delimiters; and the fuzzy set A is a subset of the universe of discourse
𝒜, for all aj contained in 𝒜. A universe of discourse is analogous to a sample space in conventional
probability theory. A membership value μA(aj) can take any value in the range [0, 1]. Equation (13-1)
shows a discrete form. For a continuous form, the membership function μA(aj) is a continuous function
of a. For simplicity, the discrete membership function is used throughout this chapter. The membership
function represents the degree of belief one has in the fuzzy element aj. Equation (13-1) is sometimes
referred to as a fuzzy expression, fuzzy term, or fuzzy variable.
Characteristically, fuzzy calculus and operations emphasize the operation of membership functions.
Numerous operations have been developed since the introduction of fuzzy sets; however, for the purpose
of this chapter, only some basic operations are discussed. Readers interested in further details may see
Kaufmann and Gupta (1982), Klir and Folger (1988), Zadeh (1975a,b,c), and Zimmermann (1991).
Zadeh (1975c) attempted to formulate probability terms, such as likely, not likely, and unlikely. The
term likely can be represented by the following fuzzy set in the universe of discourse PROBABILITY:

likely = [0.0|a1, 0.1|a2, 0.2|a3, ..., 0.9|a(n-1), 1.0|an]    (13-2)
Here, the membership values range from 0.0 to 1.0. The fuzzy elements aj, j = 1, 2, ..., n, represent the
probability values; for example, n = 11, and a1 = 0, a2 = 0.1, ..., an = 1.0. One can say that the
probability value an has a crisp membership (membership value of unity), a(n-1) a very strong membership,
..., a3 a weak membership, a2 a very weak membership, and a1 nonmembership
in the fuzzy set.
Zadeh distinguished the terms not likely and unlikely by their membership functions:

μnot likely(pj) = 1 - μlikely(pj)    (13-3)
μunlikely(pj) = μlikely(1 - pj)    (13-4)

Hedged terms are obtained by concentration and dilation of the membership function:

μvery likely(pj) = [μlikely(pj)]²    (13-5)
μfairly likely(pj) = [μlikely(pj)]^(1/2)    (13-6)

Using Eqs. (13-5) and (13-6), the terms very likely and fairly likely yield

very likely = [0.0|a1, 0.01|a2, 0.04|a3, ..., 0.81|a(n-1), 1.0|an]    (13-7)
fairly likely = [0.0|a1, 0.32|a2, 0.45|a3, ..., 0.95|a(n-1), 1.0|an]    (13-8)
The membership values of very likely decrease much faster than those of likely, whereas the membership
values of fairly likely decrease more slowly than those of likely.
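These hedge operations are simple to compute. A Python sketch, assuming the discrete PROBABILITY universe of Eq. (13-2), in which the membership value of each element equals the element itself:

```python
# Discrete fuzzy set "likely" on the PROBABILITY universe (Eq. 13-2):
# elements a_j = 0.0, 0.1, ..., 1.0, each with membership value equal to a_j.
likely = {round(0.1 * j, 1): round(0.1 * j, 1) for j in range(11)}

def very(fuzzy_set):
    # Concentration hedge, Eq. (13-5): square each membership value.
    return {a: round(mu ** 2, 2) for a, mu in fuzzy_set.items()}

def fairly(fuzzy_set):
    # Dilation hedge, Eq. (13-6): take the square root of each membership value.
    return {a: round(mu ** 0.5, 2) for a, mu in fuzzy_set.items()}

very_likely = very(likely)      # membership values fall off faster than likely
fairly_likely = fairly(likely)  # membership values fall off more slowly
```

Hedges compose; for example, applying very twice gives (likely)⁴, the pattern used later in the chapter for extremely high.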
Note that the universe of discourse PROBABILITY may contain other probabilistic terms, such as
high, medium, and low. Each term can be considered as a fuzzy variable.
Universes of discourse other than PROBABILITY may contain their own fuzzy variables. Consider
the following example. Suppose that we are interested in the effect (or impact) of an event on a structure
(e.g., impact of a truck on a bridge deck). We wish to know the impact force on the structure. Lacking
any measurements or analytical computations, we may make an approximate estimate (guess) on the
basis of previous experience with such impact forces. IMPACT is a universe of discourse that can
be characterized by terms such as severe, moderate, or slight, and each term is defined by a fuzzy
expression. For example,
severe = [0.1|10,000 lb, 0.5|12,000 lb, 0.8|14,000 lb, 1.0|16,000 lb]    (13-9)
slight = [0.1|16,000 lb, 0.5|14,000 lb, 0.8|12,000 lb, 1.0|10,000 lb]    (13-10)
In the above equations, we assume that slight is the opposite of severe. Equations (13-3) through
(13-6) can be used to show the relationships between severe, slight, not severe, very severe, and fairly
severe.
Frequently, two fuzzy terms are represented by the same membership functions. Here, we call these
terms parallel. For example, severe impact and high probability are parallel if

μsevere(ij) = μhigh(pj),  j = 1, 2, ..., n    (13-11)

where ij and pj are the fuzzy elements of the universes of discourse IMPACT and PROBABILITY,
respectively.
Applications of the fuzzy set concept in probabilistic structural mechanics call for basic operations, such
as fuzzy relation, composition, and composite fuzzy relation. They are explained and illustrated with
numerical examples in the next sections.
4. FUZZY RELATION
Fuzzy sets may be related to one another. The relation is defined as a fuzzy subset of the Cartesian
product. An example is a severe impact of a load that can be related to a very high probability of
structural component failure (i.e., a severe impact results in a very high probability of structural com-
ponent failure). The term severe is contained in the universe of discourse IMPACT whereas very high
is contained in PROBABILITY. Suppose that fuzzy sets I and P represent severe and very high, re-
spectively; meanwhile I and cP are the universes of discourse for IMPACT and PROBABILITY, such
that, I C I, and P C CP. The relation between I and P is defined by
where X is the Cartesian product, which yields the following membership function of RIP:
where l.liz) and fJ.p(p) are the membership functions of I and P, respectively; i and p are the fuzzy
elements of I and P, respectively; and the symbol 1\ denotes minimum between related membership
values.
Suppose that severe is defined as

severe = [0.1|i1, 0.5|i2, 0.8|i3, 1.0|i4]    (13-14)

and that high is parallel to severe. The membership function of very high is then defined by Eq. (13-5) as

μvery high(p) = [μhigh(p)]²    (13-15)
From a practical standpoint, i may represent the degree of impact of loads, for example, erection
stresses that are often excluded in design or not readily quantifiable. It may also represent the extent
of the effect (impact) of conditions, such as poor workmanship and human error, on the resistance or
strength of a structural component. The fuzzy element p may represent the exponent of a probability
of failure, such as in 10⁻ⁿ, where n determines the magnitude of the probability. Here, depending on
the type of a structure or its components, one may assess a probability value as very high. For example,
n = 3 for nuclear facilities may be considered very high, whereas for a warehouse it is not very high.
An example of a very high probability of failure is shown below:

very high = [0.01|p1, 0.25|p2, 0.64|p3, 1.00|p4]    (13-16)
Then, using Eqs. (13-12) and (13-13), the relation between the two fuzzy sets (Eqs. [13-14] and
[13-16]) as defined by μIP(i, p) is found as

RIP = I × P     p1    p2    p3    p4
  i1          0.01  0.10  0.10  0.10
  i2          0.01  0.25  0.50  0.50
  i3          0.01  0.25  0.64  0.80
  i4          0.01  0.25  0.64  1.00
(13-17)
As an example, the value 0.50 in Eq. (13-17), which corresponds to i2 and p3, is obtained by taking
the minimum of the membership value of i2 in Eq. (13-14) and that of p3 in Eq. (13-16).
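The min operation of Eq. (13-13) amounts to a table of pairwise minima. A Python sketch, with membership values for severe and very high matching the example worked above:

```python
# Fuzzy relation R_IP = I x P (Eqs. 13-12/13-13):
# mu_IP(i, p) = min(mu_I(i), mu_P(p)) for every pair of elements.
severe = [0.1, 0.5, 0.8, 1.0]        # mu over impact elements i1..i4
very_high = [0.01, 0.25, 0.64, 1.0]  # mu over probability elements p1..p4

def relation(mu_i, mu_p):
    # Cartesian-product relation as a matrix of pairwise minima.
    return [[min(a, b) for b in mu_p] for a in mu_i]

R_IP = relation(severe, very_high)
# The entry for i2 and p3 is min(0.5, 0.64) = 0.50, as noted in the text.
```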
Further, the use of a fuzzy relation to represent an implication rule has been suggested by many
(Zadeh, 1965; Mamdani and Assilian, 1975). The relation RIP (Eq. [13-17]), for example, may represent
the implication rule "IF IMPACT is severe, THEN PROBABILITY is very high." Such a proposition
may be rewritten as

(I ⊂ ℐ) ⊃ (P ⊂ 𝒫)    (13-18)

where (I ⊂ ℐ) is the premise, ⊃ means "implies," and (P ⊂ 𝒫) is the goal. However, a proposition
may have more than one implication rule. Consider, for example, the proposition: "IF IMPACT is slight,
THEN PROBABILITY is low; ELSE IF IMPACT is fairly severe, THEN PROBABILITY is high; else, etc."
This proposition can be rewritten as

(I1 ⊂ ℐ) ⊃ (P1 ⊂ 𝒫); ELSE (I2 ⊂ ℐ) ⊃ (P2 ⊂ 𝒫); ...; ELSE (Im ⊂ ℐ) ⊃ (Pm ⊂ 𝒫)    (13-19)

Here, I1, I2, ..., Im and P1, P2, ..., Pm may represent, for example, I1 = slight, I2 = fairly severe, Im =
severe, P1 = low, P2 = high, and Pm = very high.
The complete implication rule, RIP, is the total relation between ∪Ij and ∪Pj, and is defined by (see
Mamdani and Assilian, 1975)

RIP = ∪ (j = 1, ..., m) (Ij × Pj)    (13-20)

where ∪ denotes the union of all membership functions of RIjPj. The membership functions of RIP are
defined as

μIP(i, p) = ∨j [μIj(i) ∧ μPj(p)]    (13-21)
5. FUZZY COMPOSITION
A composition of fuzzy relations is performed by composing fuzzy sets and/or fuzzy relations into a
common universe of discourse. Suppose that, in addition to RIP, one is interested in the relation between
a large magnitude of an event and a fairly severe impact (effect) on a structural component. Assume
that large and fairly severe are represented by M and I1, respectively. Hence, their relation is defined as

RMI1 = M × I1    (13-22)

Composing RMI1 and RIP into the common universe of discourse ℐ yields

RMI1∘IP = RMI1 ∘ RIP    (13-23)

with membership function

μMI1∘IP(m, p) = ∨i [μMI1(m, i) ∧ μIP(i, p)]    (13-24)

where ∘ denotes fuzzy composition and ∨ is the maximum value of all the related membership values.
Suppose that large is parallel to severe (represented by Eq. [13-14]) and defined as

large = [0.1|m1, 0.5|m2, 0.8|m3, 1.0|m4]    (13-25)

The fuzzy set for fairly severe is found following Eq. (13-6) as

fairly severe = [0.32|i1, 0.71|i2, 0.89|i3, 1.00|i4]    (13-26)
Using Eqs. (13-12) and (13-13), the relation between large and fairly severe is obtained as

RMI1 = M × I1     i1    i2    i3    i4
  m1            0.10  0.10  0.10  0.10
  m2            0.32  0.50  0.50  0.50
  m3            0.32  0.71  0.80  0.80
  m4            0.32  0.71  0.89  1.00
(13-27)
Equations (13-23) and (13-24) are used to establish the composition between RMI1 and RIP as follows:

RMI1∘IP     p1    p2    p3    p4
  m1      0.01  0.10  0.10  0.10
  m2      0.01  0.25  0.50  0.50
  m3      0.01  0.25  0.64  0.80
  m4      0.01  0.25  0.64  1.00
(13-28)

The operation to obtain Eq. (13-28) is similar to a matrix operation, except that multiplication is replaced
by minimum and summation by maximum. For example, the value 0.25 in Eq. (13-28) (corresponding
to m3 and p2) is obtained by first comparing the values in the third row of Eq. (13-27) with the
corresponding values in the second column of Eq. (13-17), then taking their minima, and
subsequently taking the maximum of these minima. In essence, the matrix in Eq. (13-28) represents
the relation RMP between magnitude and probability.
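The max-min operation just described can be sketched directly in Python (matrix values as discussed in the text for Eqs. [13-27] and [13-17]):

```python
# Max-min composition (Eq. 13-24): like matrix multiplication, with
# multiplication replaced by minimum and summation by maximum.
def compose(R, S):
    rows, inner, cols = len(R), len(S), len(S[0])
    return [[max(min(R[i][k], S[k][j]) for k in range(inner))
             for j in range(cols)] for i in range(rows)]

R_MI1 = [[0.10, 0.10, 0.10, 0.10],   # relation between magnitude and impact
         [0.32, 0.50, 0.50, 0.50],
         [0.32, 0.71, 0.80, 0.80],
         [0.32, 0.71, 0.89, 1.00]]
R_IP = [[0.01, 0.10, 0.10, 0.10],    # relation between impact and probability
        [0.01, 0.25, 0.50, 0.50],
        [0.01, 0.25, 0.64, 0.80],
        [0.01, 0.25, 0.64, 1.00]]

R_MP = compose(R_MI1, R_IP)
# The entry for m3 and p2 is 0.25, as computed step by step in the text.
```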
In another case, one may be interested in assessing the probability of failure in relation to small and
large magnitudes of an event. First, let us obtain the probability of failure in relation to a small magnitude,
M1, by composing small with RMI1∘IP. This is performed as follows:

P1 = M1 ∘ RMI1∘IP    (13-29)

where small is defined by Eq. (13-4) as the mirror image of large:

small = [0.1|(1 - m1), 0.5|(1 - m2), 0.8|(1 - m3), 1.0|(1 - m4)]    (13-30)

Note that 1 - mk represents the fuzzy elements of M1. Assume that M1 = {m1, m2, m3, m4}; it follows
that 1 - m1 = m4, 1 - m2 = m3, and so on. Then M1 can be rewritten as

small = [1.0|m1, 0.8|m2, 0.5|m3, 0.1|m4]    (13-31)

The composition of Eq. (13-29) then yields

P1 = [0.01|p1, 0.25|p2, 0.50|p3, 0.50|p4]    (13-32)

In comparison with the value of high in Eq. (13-11), P1 has a linguistic probability value of much less
than high.
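Composing a fuzzy set with a relation uses the same max-min rule. A sketch, with the matrix and set values as in the discussion above:

```python
# Composition of a fuzzy set with a fuzzy relation (Eq. 13-29):
# mu_P(p) = max over m of min(mu_M(m), R(m, p)).
def compose_set(mu, R):
    return [max(min(mu[m], R[m][p]) for m in range(len(R)))
            for p in range(len(R[0]))]

R_MP = [[0.01, 0.10, 0.10, 0.10], [0.01, 0.25, 0.50, 0.50],
        [0.01, 0.25, 0.64, 0.80], [0.01, 0.25, 0.64, 1.00]]

small = [1.0, 0.8, 0.5, 0.1]   # mirror image of large
large = [0.1, 0.5, 0.8, 1.0]

P_small = compose_set(small, R_MP)  # a value much less than high
P_large = compose_set(large, R_MP)  # recovers the memberships of very high
```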
The probability of failure associated with large magnitude is found in the same way, by composing
large (Eq. [13-25]) with RMI1∘IP. This yields

P = large ∘ RMI1∘IP = [0.01|p1, 0.25|p2, 0.64|p3, 1.00|p4]    (13-33)

that is, a very high probability of failure. Now consider the inverse problem, in which the relation
between impact and probability is unknown. The problem is then to find a fuzzy relation X satisfying

RMI1 ∘ X = RMI1∘IP    (13-34)

where X ⊂ ℐ × 𝒫. Here, RMI1 ⊂ ℳ × ℐ and RMI1∘IP ⊂ ℳ × 𝒫 are known fuzzy relations.
Owing to the fuzziness of a fuzzy relation or composition, solutions to such a problem are treated
differently from those of conventional matrix operations. To solve such problems, the following
composite fuzzy relation equation applies (Sanchez, 1976):

X = R′MI1 α RMI1∘IP    (13-35)

where R′MI1 is the transpose of the fuzzy relation RMI1, and α is the composite fuzzy set operator, such that

μX(i, p) = ∧m [μMI1(m, i) α μMI1∘IP(m, p)]    (13-36)
where

μMI1 α μMI1∘IP = 1,  for μMI1 ≤ μMI1∘IP    (13-37)
μMI1 α μMI1∘IP = μMI1∘IP,  for μMI1 > μMI1∘IP    (13-38)

Solving Eq. (13-35) yields

X = R′MI1 α RMI1∘IP     p1    p2    p3    p4
  i1                  0.01  0.25  1.00  1.00
  i2                  0.01  0.25  0.64  1.00
  i3                  0.01  0.25  0.64  1.00
  i4                  0.01  0.25  0.64  1.00
(13-39)
Because of the max-min characteristic of fuzzy composition, the solution X may not be the same as
RIP in Eq. (13-17); however, RIP ⊂ X, where X is the largest solution. Note that the composition of
RMI1 and X should give the same result as RMI1∘IP (Eq. [13-28]), or

RMI1 ∘ X = RMI1∘IP    (13-40)
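Sanchez's α operator is straightforward to implement. The sketch below computes the largest solution X from the matrices discussed above:

```python
# Sanchez's composite fuzzy relation (Eqs. 13-36 to 13-38): find the largest
# X with R_MI1 o X = R_MI1oIP, via alpha(a, b) = 1 if a <= b, else b.
def alpha(a, b):
    return 1.00 if a <= b else b

def alpha_compose(R, S):
    # X(i, p) = min over m of alpha(R(m, i), S(m, p));
    # indexing R by [m][i] plays the role of the transpose R'.
    m_range = range(len(R))
    return [[min(alpha(R[m][i], S[m][p]) for m in m_range)
             for p in range(len(S[0]))] for i in range(len(R[0]))]

R_MI1 = [[0.10, 0.10, 0.10, 0.10], [0.32, 0.50, 0.50, 0.50],
         [0.32, 0.71, 0.80, 0.80], [0.32, 0.71, 0.89, 1.00]]
R_MP = [[0.01, 0.10, 0.10, 0.10], [0.01, 0.25, 0.50, 0.50],
        [0.01, 0.25, 0.64, 0.80], [0.01, 0.25, 0.64, 1.00]]

X = alpha_compose(R_MI1, R_MP)
# Every entry of R_IP is bounded above by the corresponding entry of X,
# illustrating the subset property R_IP within X.
```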
Various probabilistic structural mechanics problems may involve the relation and composition of
values from the universes of discourse FREQUENCY, MAGNITUDE, IMPACT, and PROBABILITY. Suppose
that, in addition to RMI1 and RIP, a frequency of occurrence F is related to the magnitude of an
event. This relation is shown as

RFM = F × M    (13-41)

Subsequently, frequency may be related to the probability of failure, as shown in the following
composition:

RFP = RFM ∘ RMI1 ∘ RIP    (13-42)
For example, assume that a low frequency is related to a large magnitude of an event. Suppose that
low (the mirror image of high) frequency is represented by the following fuzzy set:

low = [1.0|f1, 0.8|f2, 0.5|f3, 0.1|f4]    (13-43)

Also suppose that large magnitude is represented by Eq. (13-25); then the relation RFM yields

RFM = F × M     m1    m2    m3    m4
  f1          0.10  0.50  0.80  1.00
  f2          0.10  0.50  0.80  0.80
  f3          0.10  0.50  0.50  0.50
  f4          0.10  0.10  0.10  0.10
(13-44)
Through the composite fuzzy relation in Eq. (13-42), and using Eqs. (13-17), (13-27), and (13-44)
for RIP, RMI1, and RFM, respectively, the fuzzy relation RFP3 between the low frequency F and a probability
of structural failure P3 is found as

RFP3 = RFM ∘ RMI1 ∘ RIP    (13-45)

which yields

RFP3     p1    p2    p3    p4
  f1   0.01  0.25  0.64  1.00
  f2   0.01  0.25  0.64  0.80
  f3   0.01  0.25  0.50  0.50
  f4   0.01  0.10  0.10  0.10
(13-46)
Moreover, the probability of structural failure associated with a low-frequency event can be found
from

P3 = F ∘ RFP3    (13-47)

which yields

P3 = [0.01|p1, 0.25|p2, 0.64|p3, 1.00|p4]    (13-48)

or, in words, a very high probability of structural failure (compare with Eq. [13-33]).
Consequently, through composite fuzzy relation operations, each fuzzy relation may be found in
terms of the others. Notice that, as expected, RIP (Eq. [13-17]) is a subset of RIP3 (Eq. [13-52]).
The fuzzy formulas in the above equations incorporate the basic operations found in numerous
applications. For example, they may be useful in assessing the urgency of measures
to prevent concrete damage (Hadipriono and Lai, 1986).
Thus far, fuzzy information has been represented by pairs of fuzzy elements and their related
membership values. These pairs can also be shown graphically. For many users, it is easier to represent
linguistic probabilities, such as probable and unlikely, in graphical form. In fact, numerous graphical
fuzzy set models have been introduced and employed in various disciplines.
Owing to their subjective nature, fuzzy set models may take various shapes and forms. However, they all
relate the membership values μ(xi) to the fuzzy elements xi. The subject of developing membership
functions through empirical procedures has been discussed by many researchers (Wallsten et al., 1986;
Norwich and Turksen, 1983). The models discussed here generally have three characteristics: positive,
negative, and neutral. Examples of terms denoting positive characteristics are low probability, not severe
impact, and small magnitude. For example, not severe impact (the complement of severe impact) can
be interpreted as close to slight impact, and thus has a positive characteristic. On the contrary, terms
such as not very low probability, very severe impact, and large magnitude have characteristics opposite
to the former, and thus are considered to have negative characteristics. Terms such as fair, medium, and
moderate have neutral characteristics. When constructing and using fuzzy set models care should be
taken to maintain the consistency of these values.
For simplicity, we classify fuzzy set models into "translational" and "rotational" models (certainly,
there may be an overlap between these classes). In translational models, a linguistic value changes when
shifted horizontally. Figure 13-1 shows a translational model constructed on the basis of subjective
assessments (in a continuous form).
[Figure: membership functions μ versus probability p for the fuzzy values HIGH, VERY HIGH, FAIRLY
HIGH, LOW, VERY LOW, FAIRLY LOW, NOT VERY HIGH, NOT VERY LOW, and MEDIUM.]
Figure 13-1. Examples of translational models for fuzzy probability values.
Triangular and bell-shaped translational models are commonly found in the literature. The hedges,
such as not very, very, and fairly, are often (but not always) determined by Eqs. (13-3), (13-5), and
(13-6), respectively. As an example, in Fig. 13-1, very high probability is represented by the following
fuzzy set:

very high = [1.0|1.0, 0.64|0.9, 0.25|0.8, 0.01|0.7, 0.0|0.6, ..., 0.0|0.0]    (13-53)
Translational models have been used by many researchers in various applications. In their fuzzy
reliability analysis, Shiraishi and Furuta (1983) employed these models to evaluate subjective uncer-
tainties, such as omissions, mistakes, incorrect modeling, and construction errors. Elsewhere, triangular-
shaped translational models (like medium in Fig. 13-1) were used to determine the performance of
constructed facilities (Hadipriono, 1988a). In an earlier work on assessing falsework performance during
construction operations, Hadipriono (1985) found that assessment results are not sensitive to small
variations of the models (i.e., small variations in membership values). This lack of sensitivity is advan-
tageous because it essentially accommodates the variations commonly found in subjective judgments.
Characteristically, a rotational model represents a linguistic value by a linear or nonlinear
line connecting one or two "rotational" points at the end(s) of the line. These models are often called
"ramp functions." An example is the term likely, shown in Fig. 13-2.
The terms not likely, very likely, and fairly likely are almost always found by using Eqs. (13-3),
(13-5), and (13-6).
Figure 13-2. Examples of rotational models for fuzzy probability values.
[Figure 13-3: angular model in a half-circular universe of discourse, with absolutely true at 90°,
undecided at 0°, and absolutely false at -90°.]
Rotational models were used in numerous applications, including the analysis of structural failures
(Blockley, 1977), structural safety assessment (Blockley, 1980), and damage assessment of protective
structures (Hadipriono and Ross, 1991).
Angular models with rotational characteristics have also been developed (Hadipriono and Sun, 1990).
However, unlike the previous rotational models, angular models are defined in a half-circular universe
of discourse, as shown in Fig. 13-3. A linguistic value is represented by a line or by its respective
angle. The horizontal axis, that is, λ = 0°, represents a value of undecided. The vertical line with λ =
π/2 = 90° represents absolutely true, whereas that with λ = -π/2 = -90° represents absolutely false.
The positive values, such as very true, true, and fairly true, are represented by lines or angles between
λ = 0° and λ = π/2 = 90°. The negative values, such as very false, false, and fairly false, are represented
by lines or angles between λ = 0° and λ = -π/2 = -90°. Here the hedges, such as very and fairly,
are established by increasing the angle. In our example, true = λ = π/4 = 45°, fairly true =
λ = π/8 = 22.5°, very true = λ = 3π/8 = 67.5°, and so forth.
In fuzzy logic operations, because of their simplicity, angular models can be used more conveniently
than the other models described before. Furthermore, interpretation and ranking of linguistic values can
be performed easily. The use of rotational models in fuzzy logic operations is explained next.
Fuzzy logic operations were introduced by Zadeh (1975a,b,c) and developed further by Giles (1979)
into a formal system for fuzzy reasoning that is capable of dealing with degrees of belief and inconsistent
evidence. Baldwin and Pilsworth (1980) investigated various implication rules for use in modeling a
given situation by approximate reasoning with fuzzy logic. Another study by Baldwin and Guild (1980)
resulted in a new approach to reasoning that allows imprecise premises to consistently produce imprecise
conclusions. Blockley (1977) was among the first to apply fuzzy sets and fuzzy logic to structural
engineering. Inspired by these fuzzy logicians, Hadipriono (1987) applied fuzzy logic to assess safety
and performance in structural and construction engineering. Details of this concept can be found in the
above references. Their application in the context of linguistic probability is described below.
Fuzzy logic operations described here involve truth values. These values can modify the values of
other universes of discourse. The operation is called truth functional modification (TFM). Here, TFM
is a logic operation that modifies the membership function of a fuzzy set with a known truth value.
Consider again the impact of an event on the failure probability of a structural component. For example,
if impact is very severe, then failure probability is extremely high. Let us first consider the "if" statement
"impact is very severe." Assume that this statement has a truth value, T, where T is a subset of the
universe of discourse TRUTH (commonly called truth space), represented by 𝒯. Suppose that very
severe is represented by a fuzzy set I. Hence, the following proposition applies:

(ℐ is I) is T;  (I ⊂ ℐ, T ⊂ 𝒯)    (13-55)

Note that if the truth T is true, intuitively, one can say that the value of impact is very severe; on the
other hand, if T is false, the value for impact is the mirror image of very severe. However, in many
cases, for other truth values, particularly when hedges are introduced (e.g., fairly true, not quite true,
or rather false), the solution is not straightforward. The TFM operation solves this problem by
establishing a modified value for impact, I1, whose membership function is

μI1(i) = μT(t),  with t = μI(i)    (13-56)
The solution to this problem can be carried out through a graphical procedure, as shown in Fig. 13-4,
using rotational models. In this illustration, we assume that T is false. First, we plot very severe on the
right diagram, which represents the universe of discourse for impact, and false on the left diagram, which
represents the truth space. Note that the left diagram is rotated 90° counterclockwise so that the axis
that represents the fuzzy elements of false coincides with the axis representing the membership values of
very severe. This follows from Eq. (13-56), where t = μI(i). This also means that for any given element
i of very severe, we can obtain the corresponding element t of false. Knowing both t and false, we can
find the membership function μT(t), represented by the horizontal axis in the left diagram. Since μI1(i) =
μT(t) (Eq. [13-56]), the membership function μI1(i) is found on the same axis. For example, i = 0.9
yields μI(i) = 0.81 on the right diagram. This yields t = 0.81 on the left diagram, which in turn results
in μT(t) = 0.19. Using Eq. (13-56), μI1(i) = 0.19. Successive plotting of the elements i and their
corresponding membership values μI1(i) yields the new value I1, or very slight, the mirror image of very severe.
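The TFM rule is a direct function composition, μI1 = μT ∘ μI. A Python sketch, with very severe and false modeled as the continuous curves μ(i) = i² and μ(t) = 1 - t (assumptions consistent with the worked example above):

```python
# Truth functional modification (Eq. 13-56): mu_I1(i) = mu_T(mu_I(i)).
def tfm(mu_I, mu_T, samples):
    return {i: mu_T(mu_I(i)) for i in samples}

very_severe = lambda i: i ** 2   # assumed membership curve for very severe
false = lambda t: 1.0 - t        # assumed rotational model for false

universe = [round(0.1 * k, 1) for k in range(11)]
I1 = tfm(very_severe, false, universe)
# At i = 0.9: mu_I(i) = 0.81, so mu_I1(i) = 0.19, as in the worked example.
```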
Inverse truth functional modification (ITFM) is the inverse of TFM. It is a logic operation that obtains
the truth value of a conditional proposition. Consider again the "if" statement, "impact is very severe,"
of the proposition in Eq. (13-55). Suppose that information indicates that "impact is very slight."
Intuitively, one can say that the truth value of the statement is false. However, information with other
impact values, such as fairly severe, severe, or not very slight, may not provide a straightforward answer.
Now suppose information shows that "impact is severe." The proposition can be written as

(ℐ is I) is T1, given that ℐ is I2    (13-57)

where I = very severe and I2 = severe. This yields the following answer:

μT1(t) = μI2(i),  with t = μI(i)    (13-58)
The solution to this problem can be found by a graphical procedure, as shown in Fig. 13-5. The horizontal
and vertical axes of the right diagram represent the fuzzy elements and membership values, respectively.
Note that because the left diagram is rotated 90° counterclockwise, its horizontal and vertical axes
represent the membership values and fuzzy elements of truth values, respectively. Here, the values of very
severe (I) and severe (I2) are first plotted on the right diagram. As in Eq. (13-59), the truth element t
is equal to the membership function μI(i); they should therefore lie on the same vertical axis. Hence,
for each membership value of I, the corresponding element of T1 is also known. Because the membership
value of T1 equals that of I2, for any given i of both I and I2, we can find the corresponding element and
membership value of T1. For example, i = 0.7 on the right diagram yields μI(i) = 0.49 and μI2(i) = 0.7.
From Eq. (13-59), μI(i) = 0.49 corresponds to t = 0.49 on the vertical axis. On the left diagram, μI2(i) =
0.7 corresponds to μT1(t) = 0.7. This procedure is repeated for different values of i to obtain the truth
value T1 on the left diagram of Fig. 13-5. Here T1 is found as fairly true.
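ITFM can likewise be sketched numerically; here very severe and severe are modeled as the assumed continuous curves μ(i) = i² and μ(i) = i:

```python
# Inverse TFM: recover the truth value T1 of the statement "impact is very
# severe" given the evidence "impact is severe". The truth element is
# t = mu_I(i), and its membership is mu_T1(t) = mu_I2(i).
def itfm(mu_I, mu_I2, samples):
    return {round(mu_I(i), 2): mu_I2(i) for i in samples}

very_severe = lambda i: i ** 2   # assumed membership curve for very severe
severe = lambda i: i             # assumed membership curve for severe

samples = [k / 100 for k in range(101)]
T1 = itfm(very_severe, severe, samples)
# At i = 0.7: t = 0.49 with membership 0.7; since sqrt(0.49) = 0.7, T1
# follows the curve of fairly true, matching the graphical result.
```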
An implication rule that relates an impact to failure probability of a structural component may exist in
the form of an "if-then" statement. Consider again the rule: "if impact load is very severe, then the
probability of structural failure is extremely high." Suppose evidence or information shows that the
Fuzzy Sets in Probabilistic Structural Mechanics 295
"impact load is severe." Then solving for the value of failure probability is of interest. Both propositions
can be rewritten as antecedents 1 and 2.
where I, I1, and P represent very severe, severe, and extremely high, respectively. Note that extremely
high is equated as very, very high or (high)^4 (Blockley, 1980). Through the use of the ITFM, Ant. 1
becomes
We already know from the ITFM that T is fairly true (see Section 9). Our interest is to find T1, the
truth of P. Giles (1979) employed the Lukasiewicz implication relation operation to obtain T1. He
defined the Lukasiewicz truth relation, denoted as L, of Ant. 1 in universes 𝒯 and 𝒯1 as having a
membership function of

μL(t, t1) = min(1, 1 - t + t1)    (13-62)

where t and t1 are the truth elements of T and T1, in universes 𝒯 and 𝒯1, respectively.
The membership function of T1 is given by

μT1(t1) = ∨t [μL(t, t1) ∧ μT(t)]    (13-63)
Graphically, (1 - t + t1) is shown in Fig. 13-6 as diagonal lines with varying value of t1. For
example, the longest diagonal represents (1 - t) for t1 = 0; the next diagonal line shows (1.2 - t) for
t1 = 0.2; (1.4 - t) for t1 = 0.4; and so on. Equation (13-63) indicates that for a given value of t1, the
intersection between (1 - t + t1) and T will establish the membership function of T1. As an example,
for t1 = 0, the intersection between (1 - t + t1) and T yields μT1(t1) = 0.62 (Fig. 13-6). Knowing both
t1 and μT1(t1), T1 can be found. The procedure involves a successive plotting of horizontal lines drawn
from these intersections to the corresponding t1.
Knowing the truth value T1, the TFM operation described earlier can be used to obtain P1, which is
the modified value of P. The result yields the following consequent:
The process of obtaining P1 from antecedents 1 and 2 is called the modus ponens deduction (MPD)
technique. Figure 13-7 shows the MPD procedure, where the left diagram is rotated 90° counterclock-
wise. The new value for probability of failure is represented by curve P1. Linguistically, one may say
that failure probability is between high and very high. However, this applies only to higher fuzzy
elements; for lower elements there seems to be indecisiveness. This is because the truth elements of
T1 become t1 = 0 when they reach μT1(t1) = 0.62.
Blockley's shorthand notation is used to summarize the MPD process:
Step 1: T = ITFM(I|I1)
Step 2: T1 = ∨[L ∧ ITFM(I|I1)]
Step 3: P1 = TFM(P, T1)
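The three steps can be sketched numerically. In the sketch below, the Lukasiewicz relation of Eq. (13-62) and the sup-min composition of Eq. (13-63) are discretized on a grid; taking fairly true as μT(t) = √t is our assumption, made only so the numbers can be checked against Fig. 13-6:

```python
import math

# Discretized Step 2 of the MPD process:
#   mu_T1(t1) = max over t of min(mu_L(t, t1), mu_T(t)),
# with the Lukasiewicz relation mu_L(t, t1) = min(1, 1 - t + t1).

def mpd_truth(mu_T, n=101):
    grid = [k / (n - 1) for k in range(n)]
    mu_T1 = []
    for t1 in grid:
        best = 0.0
        for t in grid:
            L = min(1.0, 1.0 - t + t1)          # Eq. (13-62)
            best = max(best, min(L, mu_T(t)))   # Eq. (13-63)
        mu_T1.append(best)
    return grid, mu_T1

# "Fairly true" taken as mu_T(t) = sqrt(t) (an assumption):
grid, mu = mpd_truth(lambda t: math.sqrt(t))
# mu[0] is about 0.62, matching the intersection read off Fig. 13-6
# for t1 = 0; mu is non-decreasing and reaches 1 at t1 = 1.
```

The successive intersections of Fig. 13-6 correspond to the inner max-of-min loop.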
Another indecisiveness often experienced in MPD operations is when falsification occurs in the
conditional proposition. In the above example, this could happen when the values of I and I1 have
opposite characteristics (one negative and the other positive). For example, if I is very severe and I1 is
not severe or slight, then T in Fig. 13-6 will have characteristics and directions similar to the diagonal
Figure 13-6. An illustration of the Lukasiewicz implication operation for MPD.
Figure 13-7. The MPD procedure (curves shown: T = fairly true, I = very severe, I1 = severe, T1, and the resulting P1).
lines. Consequently, finding the intersection between T and the diagonal lines is not straightforward.
Should this be the case, the final result, P1, in Fig. 13-7 will be undecided (a horizontal line with
membership values equal to one). Despite this, one could take advantage of the trait. When used in an
application (e.g., in a quality control measure), such indecisiveness may serve as a cutoff line.
Suppose that the rule in Eq. (13-60) holds true (i.e., if impact load is very severe, then the probability
of failure is extremely high). Hence, any evidence of impact load indicating negative characteristics will
produce a failure probability with negative characteristics. However, once the failure probability becomes
undecided, one can be sure that the characteristics of the impact load have changed (e.g., impact load
becomes not so severe).
The MPD technique is particularly useful when evidence or information is related to the "if" state-
ment of an implication rule. However, in many cases, evidence could be related to the "then" statement
of the rule. In this case, we recommend the fuzzy modus tollens deduction technique.
Suppose that, in relation to the rule "if impact load is very severe, then the probability of structural
failure is extremely high," the evidence shows that "failure probability is not very high." We are
interested in finding the value of the impact load. The process used to obtain the new value of the
impact load is called modus tollens deduction (MTD) logic. First, both the rule and evidence are
rewritten as follows:
where I, P, and P1 represent very severe, extremely high, and not very high, respectively. Note that Eq.
(13-3) yields
(13-67)
The membership function of the Lukasiewicz truth relation, L, of Ant. 1 in universes 𝒯 and 𝒯1 is given
by

μL(t, t1) = min(1, 1 + t - t1)    (13-68)

where t and t1 are the truth elements of T and T1, in universes 𝒯 and 𝒯1, respectively. The membership
function of T1 is given by

μT1(t1) = ∨t [μL(t, t1) ∧ μT(t)]    (13-69)
Next, a graphical solution is performed in a way similar to that of MPD. However, the diagonal
lines (1 + t - t1) are shown as the mirror image of those in MPD. The longest diagonal represents
(1 + t - 1.0), for t1 = 1.0. The next lines are (0.1 + t) for t1 = 0.9, (0.2 + t) for t1 = 0.8, and so on.
Equation (13-69) was used to produce the truth relation lines in Fig. 13-8.
Finally, the TFM operation is performed to obtain I1, the new impact value. Linguistically, I1 in Fig.
13-9 can be described as approximately not very severe. The entire process of obtaining I1 shown in
Fig. 13-9 is summarized as follows:
Figure 13-8. Truth relation lines produced by Eq. (13-69) for the MTD operation.
Step 1: T = ITFM(P|P1)
Step 2: T1 = ∨[L ∧ ITFM(P|P1)]
Step 3: I1 = TFM(I, T1)
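Step 2 can again be discretized, now with the mirrored Lukasiewicz relation of Eq. (13-68); the grid size and the membership function chosen for T below are illustrative assumptions:

```python
# Discretized Step 2 of the MTD process, with the mirrored Lukasiewicz
# relation mu_L(t, t1) = min(1, 1 + t - t1) of Eq. (13-68):
#   mu_T1(t1) = max over t of min(mu_L(t, t1), mu_T(t))   (Eq. 13-69)

def mtd_truth(mu_T, n=101):
    grid = [k / (n - 1) for k in range(n)]
    return grid, [
        max(min(min(1.0, 1.0 + t - t1), mu_T(t)) for t in grid)
        for t1 in grid
    ]

# With a falsified truth value T, here taken as mu_T(t) = 1 - t
# (an assumption), the composition gives a non-trivial T1:
grid, mu = mtd_truth(lambda t: 1.0 - t)
# mu starts at 1 for t1 = 0 and decreases as t1 grows; a positive
# (non-falsified) T instead drifts toward the undecided horizontal
# line described in the text.
```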
Contrary to the MPD method, MTD requires a falsification process in its ITFM operation in order
to produce a result. Otherwise, the result (in this example, the impact value I1) will be indecisive. Such
indecisiveness is represented by the top horizontal line in the right diagram, where the membership
values are equal to one. As in MPD, one could take advantage of this trait; for example, for assessing
the performance of a structural component during a quality control process (Hadipriono, 1987). How-
ever, in this case, indecisiveness that occurs should alert us that the probability value has changed from
its positive characteristics (in the above example, not very high) to negative (e.g., fairly high).
Graphical solutions for MPD and MTD can be obtained by a simple computer program (Hadipriono,
1988b). The results can be interpreted and ranked in relation to other known values (see, e.g., the
interpretation of P1 in Fig. 13-7 and the interpretation of I1 in Fig. 13-9). Furthermore, these models can
be used effectively to accommodate the implication relation in "if-then" rules, and therefore the use
of MPD and MTD techniques in fuzzy reasoning expert systems is appealing. Despite the existence of
traits related to the indecisiveness of the result, they do not pose potential problems in constructing
such rules in expert systems. These traits do not seem to appear when angular models are used.
Unlike the rotational models used in Sections 10 and 11, fuzzy operations using angular fuzzy set
models are simplified and performed through basic geometry (Hadipriono and Sun, 1990). The logic
operations TFM, ITFM, MPD, and MTD are described below.
Figure 13-9. The MTD procedure (curves shown: P = extremely high, T, T1, I, and the resulting I1).
The TFM logic operation is defined by Eq. (13-56) and for convenience is rewritten as follows:

I1 = TFM(I, T)    (13-70)

Note that here, I, T, and I1 represent both the linguistic value and the angle. The membership functions
of I, T, and I1 are given by

μI(i) = i tan I,  μT(t) = t tan T,  μI1(i) = i tan I1    (13-71)

It follows that

μI1(i) = μT[μI(i)] = i tan I tan T    (13-72)

and

tan I1 = tan I tan T    (13-73)

Using Eq. (13-73) one can obtain the TFM solution through a simple tangential equation. Let us
return to our previous "if-then" rule, where I = very severe = -67.5° and T = false = -45°. Equation
(13-73) yields I1 = 67.5°, or very slight (Fig. 13-10), which is the mirror image of very severe. The
result is the same as that from the previous graphical TFM operation using the conventional rotational
model.
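A minimal numerical sketch of Eq. (13-73), with the function name our own:

```python
import math

# Angular TFM (Eq. 13-73): tan(I1) = tan(I) * tan(T),
# with linguistic values represented directly by angles in degrees.

def tfm(I_deg, T_deg):
    return math.degrees(math.atan(
        math.tan(math.radians(I_deg)) * math.tan(math.radians(T_deg))))

# Worked example from the text:
# I = very severe = -67.5 deg, T = false = -45 deg
# gives I1 = 67.5 deg, i.e., very slight (the mirror image of very severe).
```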
The ITFM is derived from Eq. (13-59), which is rewritten as follows:

T = ITFM(I|I1)    (13-74)

Substituting Eq. (13-71) into Eq. (13-74), the membership function of T is

μT(t) = μT[μI(i)] = μT[i tan I] = i tan I tan T  and  μT(t) = ∨i [μI1(i)] = ∨i [i tan I1]    (13-75)

Equating the two expressions gives

tan T = tan I1 / tan I    (13-76)
Hence, if I = very severe = -67.5° and I1 = severe = -45°, then Eq. (13-76) yields T = 22.5°, which
is fairly true. Figure 13-11 shows the result, which is again the same as shown in the graphical ITFM
process using rotational models.
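The same check works for the inverse operation, Eq. (13-76); the function name is our own:

```python
import math

# Angular ITFM (Eq. 13-76): tan(T) = tan(I1) / tan(I).

def itfm_angular(I_deg, I1_deg):
    return math.degrees(math.atan(
        math.tan(math.radians(I1_deg)) / math.tan(math.radians(I_deg))))

# Worked example: I = very severe = -67.5 deg, I1 = severe = -45 deg
# gives T = 22.5 deg, i.e., fairly true.
```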
Here, as before, the MPD logic operation includes TFM and ITFM, which are employed in con-
junction with the Lukasiewicz implication rule. Consider again the antecedents (I ⊂ ℐ) ⊃ (P ⊂ 𝒫) and
(I1 ⊂ ℐ), which were then modified to become [I|I1 is T] ⊃ [P is T1]. This leads to the consequent
[P1], which is to be solved. The membership functions of the fuzzy sets are defined as

(13-77)

However, for angular models, we need to define a new Lukasiewicz truth implication relation, μL(t1, t).
According to Giles (1979), the traditional Lukasiewicz truth implication relation of A and B is defined
as follows:
Figure 13-11. Inverse truth functional modification solution, using angular models.
where (A), (B), and (A ⊃ B) are the membership functions of A, B, and A ⊃ B, respectively. Hence, the
truth implication relation in angular models is given as

(13-80)

At the intersection of μL(t1, t) and μT(t) in Eq. (13-78), the following equation applies:
(13-81)
(13-84)

By using the TFM operation, such as in Eq. (13-72), the membership function of P1 is

by ITFM,

(13-88)

and

tan P1 = (tan P tan I1)/(tan I + tan I1)    (13-89)
Returning to our example, I = very severe = -67.5°, I1 = severe = -45°, and P = extremely high =
-78.75°. Note that because in these models angles are used in fuzzy operations, the hedges in linguistic
values are established on the basis of these angles. To illustrate, here we assume that fairly A = 22.5°;
A = 2 × fairly A; very A = 3 × fairly A; and absolutely A = 4 × fairly A. Because we assume that
extremely high is between very high and absolutely high, it is equated as 3.5 × fairly high = -78.75°.
Then, using Eq. (13-89), P1 = -55.82°, which is between high and very high (Fig. 13-12). This result
is about the same as that from the graphical MPD approach using rotational models.
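Equation (13-89) can be checked directly; a minimal sketch, assuming the angle conventions stated above:

```python
import math

# Angular MPD (Eq. 13-89): tan(P1) = tan(P) * tan(I1) / (tan(I) + tan(I1)).

def mpd_angular(I_deg, I1_deg, P_deg):
    t = lambda d: math.tan(math.radians(d))
    return math.degrees(math.atan(
        t(P_deg) * t(I1_deg) / (t(I_deg) + t(I1_deg))))

# Worked example: I = very severe = -67.5 deg, I1 = severe = -45 deg,
# P = extremely high = 3.5 * (-22.5) = -78.75 deg
# gives P1 of about -55.8 deg, between high (-45) and very high (-67.5).
```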
In MTD logic, we would consider anew the antecedents (I ⊂ ℐ) ⊃ (P ⊂ 𝒫) and (P1 ⊂ 𝒫) in Eq.
(13-65), where the fact is related to the "then" statement in the first antecedent. The ITFM, Lukasiewicz
rule, and TFM operations result in a modified proposition [I is T1] ⊃ [P|P1 is T] and the consequent [I1].
Note that here T is the truth value of P given P1, and T1 is the truth value of I. Finding T1 and
subsequently I1 is our interest.
The membership functions of the fuzzy sets are defined as before according to Eq. (13-77). The
membership function of the Lukasiewicz truth implication relation μL(t1, t) is defined as in Eq. (13-78).
However,

(t, t1 ≥ 0)    (13-90)

At the intersection of μL(t1, t) and μT(t), Eq. (13-90) applies. Therefore (t - t1) = t tan T and
Figure 13-12. Modus ponens deduction solution, using angular models.
and
(13-97)
As in our previous example, the following are given: I = very severe = -67.5°, P = extremely high =
-78.75°, and P1 = not very high. Note that not very high is equated as (90° + very high), or P1 = 22.5°.
Equation (13-97) yields I1 = 10.4°, which is approximately not very severe. Figure 13-13 shows the
result, which is consistent with that from the conventional rotational models.
Notice the ease of using a simple tangential formula in both MPD and MTD operations. The results
of numerous exercises performed by Hadipriono and Sun (1990) for various characteristics of linguistic
values show good agreement with those using the conventional rotational models. Furthermore, be-
cause a linguistic value is represented by a straight line (or an angle), its interpretation and ranking relative
to other values are simple and straightforward. In addition, the traits of indecisiveness that frequently
appear in the rotational models do not seem to appear here. However, because this model is relatively
new compared to the others, much research still needs to be done.
Applications of the fuzzy set concept in probabilistic structural mechanics have been provided by many
researchers. In the following sections, two applications are presented: (1) merging of objective and subjective
information using the fuzzy set concept and (2) assessment of the probability of structural failure using the
fuzzy logic concept. References to these and other applications are given in Section 1.
If there are q factors, then the complete subjective information R_GE is defined by
Brown suggests that n represents a crisp singleton (i.e., a fuzzy set with a single membership, having
a value of unity), N0, expressed as

N0 = [1/n]    (13-102)

where n is treated as a fuzzy element with a crisp membership (membership value = 1). Fuzzification
of N0 is performed by expanding the memberships of N0, and subsequently establishing a fuzzified
probability N. A reasonable assumption is that the probability measure may range about n and vary
in increments of 1. Hence, the membership of N is

(13-103)

where n_i varies from n - k to n - m, where k, m, and n are positive integers. These bounds represent
the range of interest of the probability measure. For example, if interest is in the range of three
orders of magnitude above p, then k = 0 and m = 3. Hence, N has the fuzzy elements n, n - 1, n -
2, and n - 3. Interest will be in establishing the membership function of N.
Further, a fuzzy relation between the effects (E) and the fuzzified probability (N) can be established
in the form of multiple implication rules. That is, "If E1 then N1, else if E2 then N2, etc." (Here E_i is
grave, not grave, etc., and N_i is high, low, etc.). These rules can be written as
(13-104)
(13-108)
N (the fuzzified N0) is established by selecting the fuzzifying kernel K(n), where K(n) is a subset of
R_GN. Selection of K(n) is a subjective procedure. One approach to selecting K(n) is noted in the following
numerical example.
As an illustration, consider the following numerical example (for simplicity, only two membership
values are given):
Using Eqs. (13-98), (13-99), (13-104), and (13-105), the following relations are found (writing only
the membership functions):
Then, Eqs. (13-100), (13-101), (13-106), and (13-107) yield the union of the total relations

R_GE = [0.4  0.3]    and    R_EN = [0.6  0.4]    (13-112)
       [0.1  0.8]           [0.3  0.9]
Brown (1980) selected the fuzzy kernel K(n) from R_GN on the basis of the most pessimistic viewpoint,
which is [0.3 0.8], corresponding to g2. Hence if, for example, n = 6, then using Eq. (13-102) the
objective information becomes N0 = [1/6]. Suppose that interest is in one order of magnitude higher than
p. Using Eq. (13-103) for k = 0 and m = 1, the fuzzified objective information (failure probability)
becomes N = [0.3/6, 0.8/5]. This means that there is a much stronger membership for n = 5 than for n =
6 or, in other words, there is considerable support for a failure probability one order of magnitude higher
than the objective information. Clearly, the influence of subjective information cannot be neglected.
Had we chosen the less pessimistic kernel [0.4 0.4], corresponding to g1 in Eq. (13-113), then
N = [0.4/6, 0.4/5]. This shows an equal membership for n = 5 and n = 6; this is a less conservative
failure probability measure. Somewhat more complex numerical examples may be found in Brown
(1980) and Brown et al. (1984).
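The kernels quoted above can be reproduced with a max-min composition of the two relations in Eq. (13-112). The sketch below assumes the standard sup-min composition R_GN = R_GE ∘ R_EN:

```python
# Max-min (sup-min) composition of fuzzy relations, used here to form
# R_GN = R_GE o R_EN from the matrices of Eq. (13-112).

def max_min(R, S):
    return [[max(min(R[i][k], S[k][j]) for k in range(len(S)))
             for j in range(len(S[0]))]
            for i in range(len(R))]

R_GE = [[0.4, 0.3],
        [0.1, 0.8]]
R_EN = [[0.6, 0.4],
        [0.3, 0.9]]

R_GN = max_min(R_GE, R_EN)
# R_GN = [[0.4, 0.4], [0.3, 0.8]]: the row for g1 is the less pessimistic
# kernel [0.4 0.4]; the row for g2 is the pessimistic kernel [0.3 0.8].
```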
AND the likelihood of a new presently unknown effect in the material is low
AND the likelihood of a new presently unknown effect due to structural form is low
AND the likelihood of a new presently unknown loading effect is low
B1: the notional probability of failure (assuming a perfect calculation model) is fairly high
C1: the likelihood of a new presently unknown effect in the material is not low
D1: the likelihood of a new presently unknown effect due to structural form is very high
E1: the likelihood of a new presently unknown loading effect is fairly low
The following MTD technique is used to determine the probability of failure of the structure in any
limit state. We start with the following:
Ant. 1: A ⊃ B    (13-114)
where A, B, and B1 represent low, low, and fairly high, respectively. Suppose that Baldwin's models were used to
determine A'_B, that is, the probability of failure in any limit state in relation to information B1. Then,
using the MTD technique described in Section 11, Eq. (13-114) is solved as shown in Fig. 13-14. Note
that in this example, A = B = low. The result, A'_B, may be considered as less than fairly high.
Ant. 1: A ⊃ C
Ant. 3: C1    (13-115)
Ant. 1: A ⊃ D
Ant. 4: D1    (13-116)
Ant. 1: A ⊃ E
Ant. 5: E1    (13-117)
where C = D = E = low and C1 = not low, D1 = very high, and E1 = fairly low.
Through a similar procedure, the solutions of Eqs. (13-115) through (13-117) for A'_C, A'_D, and A'_E are
shown in Figs. 13-15 through 13-17, respectively. The total probability of failure in any limit state is
established by taking the total intersection of the membership functions of all A'_i shown in Figs. 13-14
through 13-17, as follows:

(i = B, C, D, E)    (13-118)

The final result in Fig. 13-18 essentially shows the worst probability from Figs. 13-14 through 13-
17 because of the union operation in Eq. (13-118). Here A'(TOTAL) may be considered as between
fairly high and high. Notice that, as mentioned in Section 11, the MTD procedure using Baldwin's models
follows a falsification process. Hence, the MTD will produce results only when the values of the THEN
statement and the assessment (e.g., B and B1) have opposite characteristics. For example, Figs. 13-14
through 13-16 show various results for A'_i. On the other hand, Fig. 13-17 shows a value of undecided.
Figure 13-14. Modus tollens deduction operation for A ⊃ B and B1 (curves: A = B = low, B1 = fairly high, A'_B).
Figure 13-15. Modus tollens deduction operation for A ⊃ C and C1 (curves: A = C = low, C1 = not low, A'_C).
This is because of the positive characteristics of both E (low) and E1 (fairly low). However, the
undecided value does not govern the final result.
Let us use the angular models described in Section 13 to solve the above problem. The linguistic values
low, fairly low, not low, fairly high, and very high are defined following the models shown in Fig.
13-3 as having angles 45°, 22.5°, -45°, -22.5°, and -67.5°, respectively (see Fig. 13-19). To solve
Figure 13-16. Modus tollens deduction operation for A ⊃ D and D1 (curves: A = D = low, D1 = very high, A'_D).
Figure 13-17. Modus tollens deduction operation for A ⊃ E and E1.
Hence A'_B = -16.3°. A similar procedure was performed to solve A'_C, A'_D, and A'_E from Eqs. (13-115),
(13-116), and (13-117), which yields -26.6°, -35.3°, and 35.3°, respectively. Taking the worst con-
dition results in A'(TOTAL) = -35.3°, which may be considered as between fairly high and high (see
Fig. 13-20). Note that the result is the same as before, but the problem was solved more conveniently
Figure 13-18. Total failure probability of a structure in the MTD application example (curves: A = low, A'(TOTAL)).
Figure 13-19. Angular models for linguistic values in the MTD application example (absolutely low = 90°, undecided = 0°).
by using a simple formula. Furthermore, the undecided value that appears in the former model does
not appear here.
14. DISCUSSIONS
The relatively new fuzzy set concept is still developing. Much has yet to be explored in order to
capitalize on its applications, particularly in the area of probabilistic structural mechanics. Advantages of
the fuzzy set concept have been described throughout this chapter. Yet disadvantages exist because of
the newness of this concept in the area of probability. For example, owing to its fuzziness, assigning
membership functions with few data may lead to erroneous results.
Let us consider again fuzzy sets G1 and E1 in Eq. (13-110), representing the values of gravity and
effect, respectively. The relation R_G1E1 is given in Eq. (13-111). Notice that the composition of G1 or
G2 with R_G1E1 or R_G2E2 does not, in general, recover the original value of E1 or E2 (in Eq. [13-110]).
This is shown here:
A solution to this problem was provided by Pedrycz (1983). Essentially, Pedrycz investigates a
performance index Q, which is the sum of squared errors between the membership values of E_i and
E'_i. This in turn yields an iteration scheme. Pedrycz then employed Newton's iteration method to obtain
R'_GiEi, a modified value of R_GiEi, which yields the recovery of E_i. The same procedure can be employed
to recover the original value of E_i when performing max-min composition between G_i and the total
relation R_GE. For example, Boissonnade et al. (1985) employed the Pedrycz method to successfully
recover the original values of E_i in Brown's work (1980).
However, in the example shown in Eqs. (13-110) through (13-113), the Pedrycz approach (or any
other approach for that matter) will not recover the original value of E_i. This is because the fuzzy sets
used in the example consist of only two membership values, and hence they lack some fuzziness.
Another area to be explored is the use of fuzzy numbers to represent a probability value. Consider
probabilistic expressions approximately 0.5 and roughly about 0.8, which can be represented by fuzzy
numbers "0.5" and "0.8," respectively. These fuzzy numbers can be represented equally well, for
example, by a membership function as follows:
approximately 0.5 = "0.5" = [0.7/p = 0.4, 1.0/p = 0.5, 0.8/p = 0.6]    (13-122)
roughly about 0.8 = "0.8" = [0.6/p = 0.7, 1.0/p = 0.8, 0.8/p = 0.9]    (13-123)
Zadeh (1968) argued that the probability of a fuzzy event is the expectation of its membership
function. Hence, the probability mass function of p_i in Eqs. (13-122) and (13-123), for example, is
given by

(13-124)

"P" = Σ_i p_i μP(p_i) / Σ_i μP(p_i)    (13-125)
Equation (13-125) can be used to obtain the fuzzy number from the membership function. For
example, consider fuzzy events A and B, having the values of "0.5" and "0.8", represented by Eqs.
(13-122) and (13-123), respectively. Assuming that these events are independent, one may intuitively
perceive that an event C, representing the occurrence of both A and B, has a probability of "0.4." Such
a solution can also be found on the basis of Zadeh's (1975a) extension principle for the algebraic
product of two fuzzy variables:
μAB(p) = ∨ over p1·p2 = p of [μA(p1) ∧ μB(p2)]    (13-126)
The use of Eq. (13-126) to determine the algebraic product of Eqs. (13-122) and (13-123) yields
Equation (13-125) then yields the approximate probability value of "0.4." Notice that Eqs. (13-122) through
(13-127) illustrate the potential use of fuzzy algebra. Research related to the application of fuzzy num-
bers in structural engineering is underway. For example, the approximate probability approach was used
to assess the probability of cumulative damage of bridge components (Hadipriono and Wong, 1987).
Dong et al. (1986) developed a fuzzy algebraic approach using triangular models with some applications
in seismic safety.
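As a numerical check, the product of the two fuzzy numbers in Eqs. (13-122) and (13-123) and the defuzzified value of Eq. (13-125) can be sketched as follows (discrete sup-min extension principle over the listed supports; function names are our own):

```python
# Algebraic product of two discrete fuzzy numbers via the extension
# principle (Eq. 13-126), followed by the ratio of Eq. (13-125).

def fuzzy_product(A, B):
    """A, B: dicts mapping element -> membership. Returns the product set."""
    C = {}
    for p1, m1 in A.items():
        for p2, m2 in B.items():
            p = round(p1 * p2, 10)
            C[p] = max(C.get(p, 0.0), min(m1, m2))  # sup-min
    return C

def defuzzify(C):
    """Weighted-average value of Eq. (13-125)."""
    return sum(p * m for p, m in C.items()) / sum(C.values())

approx_05 = {0.4: 0.7, 0.5: 1.0, 0.6: 0.8}   # Eq. (13-122)
about_08 = {0.7: 0.6, 0.8: 1.0, 0.9: 0.8}    # Eq. (13-123)

C = fuzzy_product(approx_05, about_08)
# defuzzify(C) is about 0.41, i.e., the approximate probability "0.4".
```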
Presently, most fuzzy set applications in engineering are in control processes, but many are also
related to probabilistic aspects of failed components. In probabilistic structural mechanics, the fuzzy set
concept is used both because of the frequent use of subjective judgment in assessing failure probabilities
and because of the fuzziness of damage/failure characteristics. Despite the usefulness of this concept in
this area, much is still to be explored and improved. The operations, models, and applications introduced
in this chapter are not exhaustive. Rather, they represent the theory and applications of recent studies
and may be considered as the basics of fuzzy probabilistic structural mechanics.
REFERENCES
BALDWIN, J. F., and N. C. F. GUILD (1980). Feasible algorithms for approximate reasoning using fuzzy logic. Fuzzy
Sets and Systems 3:225-251.
BALDWIN, J. F., and B. W. PILSWORTH (1980). Axiomatic approach to implementation for approximate reasoning
with fuzzy logic. Fuzzy Sets and Systems 3:193-219.
BLOCKLEY, D. I. (1975). Predicting the likelihood of structural accidents. Proceedings of Institution of Civil En-
gineers 59(2):659-668.
BLOCKLEY, D. I. (1977). Analysis of structural failures. Part I. Proceedings of Institution of Civil Engineers 62:
51-74.
BLOCKLEY, D. I. (1980). The Nature of Structural Design and Safety. New York: John Wiley & Sons.
BOISSONNADE, A. C., W. DONG, H. C. SHAH, and F. S. WONG (1985). Identification of fuzzy systems in civil
engineering. In: Proceedings of International Symposium on Fuzzy Mathematics in Earthquake Researches.
Feng Deyi and Liu Xihui, Eds. Beijing, China: Seismological Press, pp. 46-71.
BROWN, C. B. (1979). A fuzzy safety measure. ASCE Journal of Engineering Mechanics 105(EM5):855-872.
BROWN, C. B. (1980). The merging of fuzzy and crisp information. ASCE Journal of Engineering Mechanics
106(EM1):123-133.
BROWN, C. B., and J. T. P. YAO (1983). Fuzzy sets and structural engineering. ASCE Journal of Structural Engi-
neering 109(5):1211-1225.
BROWN, C. B., J. L. JOHNSON, and J. J. LOFTUS (1984). Subjective seismic safety assessment. ASCE Journal of
Structural Engineering 110(9):2212-2233.
CHIANG, W. L., W. M. DONG, and F. S. WONG (1987). A comparative study of probabilistic and fuzzy set method.
Probabilistic Engineering Mechanics 2(2):82-91.
DONG, W. M., H. C. SHAH, and F. S. WONG (1986). Fuzzy computations in risk and decision analysis. Journal of
Civil Engineering Systems 2:210-208.
FRANGOPOL, D. M., K. IKEJIMA, and K. HONG (1988). Seismic hazard prediction using a probabilistic-fuzzy ap-
proach. Structural Safety 5: 109-117.
GILES, R. (1979). A formal system for fuzzy reasoning. Fuzzy Sets and Systems 2:233-257.
HADIPRIONO, F. C. (1985). Assessment of falsework performance using fuzzy set concepts. Structural Safety 3(1):
47-57.
HADIPRIONO, F. C. (1986). Modus tollens deduction technique for component safety assessment. International
Journal of Civil Engineering for Practicing and Design Engineers 5(4):255-268.
HADIPRIONO, F. C. (1987). Approximate reasoning for falsework safety assessment. Structural Safety 4(2):131-
140.
HADIPRIONO, F. C. (1988a). Fuzzy set concept for evaluating performance of constructed facilities. ASCE Journal
of Performance of Constructed Facilities 2(4):209-225.
HADIPRIONO, F. C. (1988b). Fuzzy Reasoning Expert System (FRES) for Assessing Damage of Protective Structures,
a Report Presented to Universal Energy Systems. No. S-760-6MG-054 (Subproject under United States Air
Force). Columbus, Ohio: Ohio State University.
HADIPRIONO, F. C., and J. Y. LAI (1986). Assessment of urgency measure to prevent further damage of concrete
components. Structural Safety 4(1):49-62.
HADIPRIONO, F. C., and T. J. ROSS (1991). A rule-based fuzzy logic deduction technique for damage assessment
of protective structures. Fuzzy Sets and Systems 44(3):459-468.
HADIPRIONO, F. C., and K. M. SUN (1990). Angular fuzzy set models for linguistic values. Journal of Civil
Engineering Systems 7(3): 148-156.
HADIPRIONO, F. C., and B. K. H. WONG (1987). Cumulative damage studies of highway bridges. Journal of Civil
Engineering Systems 4(4):199-205.
KAUFMANN, A., and M. M. GUPTA (1982). Introduction to Fuzzy Arithmetic: Theory and Applications. New York:
Van Nostrand Reinhold.
KLIR, G., and T. FOLGER (1988). Fuzzy Sets, Uncertainty, and Information. Englewood Cliffs, New Jersey: Prentice-
Hall.
MAMDANI, E. H., and S. ASSILIAN (1975). An experiment in linguistic synthesis with a fuzzy logic controller.
International Journal of Man-Machine Studies 7:1-13.
NORWICH, A. M., and I. B. TURKSEN (1983). The construction of membership function. In: Fuzzy Set and Possibility
Theory: Recent Development. R. R. Yager, Ed. London: Pergamon Press, pp. 61-67.
PEDRYCZ, W. (1983). Numerical and applicational aspects of fuzzy relational equations. Fuzzy Sets and Systems
11:1-18.
SANCHEZ, E. (1976). Resolution of composite fuzzy relation equations. Information and Control 30:38-48.
SHIBATA, H. (1987). An idea to bridge between fuzzy expression and engineering numerical expression. In: Pro-
ceedings of International Symposium on Fuzzy Systems and Knowledge Engineering. Guangzhou Guiyang,
China. Addendum: 1-13, Guangdong, China: Higher Education Publishing House.
SHIRAISHI, N., and H. FURUTA (1983). Reliability analysis based on fuzzy probability. ASCE Journal of Engineering
Mechanics 109(6):1445-1459.
WALLSTEN, T. S., D. V. BUDESCU, A. RAPOPORT, R. ZWICK, and B. FORSYTH (1986). Measuring the vague
meanings of probability terms. Journal of Experimental Psychology: General 115(4):348-365.
Wu, T. H., M. D. CHEN, and M. M. MOSSAAD (1984). Assessment of the safety of earth dams. Structural Safety
2:145-157.
YAGER, R. R. (1986). An introduction to fuzzy set theory. In: Application of Fuzzy Set Theory in Human Factors.
W. Karwowski and A. Mital, Eds. Amsterdam: Elsevier Science Publishers, pp. 29-39.
YAO, J. T. P. (1981). Damage assessment of existing structures. ASCE Journal of Engineering Mechanics 106(EM4):
785-799.
ZADEH, L. A. (1965). Fuzzy sets. Information and Control 8:338-353.
ZADEH, L. A. (1968). Probability measures of fuzzy events. Journal of Mathematical Analysis and Applications
23:421-427.
ZADEH, L. A. (1975a). The concept of a linguistic variable and its application to approximate reasoning. Part I.
Information Sciences 9:199-249.
ZADEH, L. A. (1975b). The concept of a linguistic variable and its application to approximate reasoning. Part II.
Information Sciences 9:301-357.
ZADEH, L. A. (1975c). The concept of a linguistic variable and its application to approximate reasoning. Part III.
Information Sciences 9:43-80.
ZIMMERMAN, H. J. (1991). Fuzzy Set Theory and Its Applications. Norwell, Massachusetts: Kluwer Academic.
14
NEURAL NETWORKS IN
PROBABILISTIC STRUCTURAL
MECHANICS
1. INTRODUCTION
Neural networks are among the more recent developments in computing. This chapter discusses how
neural networks can be used in probabilistic structural mechanics. Because many engineers working in
the field of probabilistic structural mechanics would have only a vague understanding of neural net-
works, a brief discussion of neural networks, including background, development, fundamental concepts,
and a mathematical description is provided in Sections 3 through 6. This is followed by an actual
application of neural networks in probabilistic structural mechanics, namely, the ranking of pipe welds
in a power plant according to their failure probabilities.
2.1. Notations
2.2. Abbreviations
3. NEURAL NETWORKS
A brief introduction to neural networks is provided in this section. This introduction is restricted to just
one particular (albeit popular) form of neural network. More details on neural networks may be found
in books such as that by Rumelhart and McClelland (1986).
3.1. Background
The starting point for neural network research lay in the work of physiologists, psychologists, and
other cognitive scientists who were trying to understand the brain and its functions. They were trying
to explain, among other things, how the brain worked, the location of the memory, the process of
learning, and why the brain is so good at pattern recognition.
The popular view is that the brain is a kind of computer; indeed, computers are often referred to as
electronic brains. The brain is, however, a unique type of computer, unlike any other. Among the
important features are the following:
recognize faces the process is subconscious but similar. The process of input, output, correction, ad-
justment, and repetition seems to be efficient. Once the learning is completed the subsequent recognition
process is almost instantaneous despite the vast amount of information stored and the relative slowness
of the brain. The recognition of faces from partial or fuzzy views is also achieved with little difficulty.
(Figure: schematic of a biological neuron, showing the axon and a synapse.)
(1982) by the simple expedient of inserting one (or more) intermediate (hidden) layers between the
input and output layers. The training difficulty was worsened, however, because there were no target
values known for the hidden neurons, unlike the output neurons. Interest revived in the 1980s (Hopfield, 1982), and the
introduction of the back-propagation algorithm (Rumelhart and McClelland, 1986) enabled one to work back from the difference between the
predicted and target outputs and modify the various connection strengths so as to reduce those differences.
Larger problems were now capable of being tackled and soon real industrial problems were being
attempted.
4. COMPUTER ANALOGS
Rather than attempt to mimic the brain by using electronic components, it is simpler to use computer
simulations to produce an analog of the neural network, which is itself a model of the brain. Strictly
speaking one should use a parallel processing computer, but for the small models under consideration
in the work discussed in this chapter a PC286 proved adequate. The lack of parallelism does not in any
case prevent the system from working; it merely makes it operate more slowly.
To produce an analog of the brain, a neural network, one must make several choices of representation:
• Network architecture
• Rule for combining signals
• Transfer function
• Learning rule
Architecture concerns the number of neurons in the system and the arrangement of connections
between the neurons. In neural networks the neurons are usually grouped in several layers. The input
layer, with each input feeding into a single input neuron (each differing in behavior from the other
neurons), and the output layer, where each neuron produces a single output, are fairly well constrained
by the physical inputs and outputs. The number of neurons in the hidden layers is, on the other hand,
relatively unconstrained.
As we have mentioned previously, at least one hidden layer is essential if problems other than the
most trivial are to be solved. It has been shown theoretically that no more than two hidden layers are
needed for any problem. Two layers appear to offer a more general capability than one layer; two
hidden layers have been used in the networks used in the work discussed in this chapter.
The choice of number of neurons in the hidden layers is almost an art: too few neurons and no fit
to the data will be found; too many and any data can be trivially fitted. The optimum number is usually
chosen by experience tempered by trial and error. The objective is to have just enough neurons to find
the underlying pattern concealed within the data, which can then be used in a predictive capacity.
The topology of interneuron connections is an even more complex problem. Connections between
neurons may go forward, backward, sideways, or even skip layers altogether. There may even be self-
connections in which a neuron output connects directly to its input (this is a non-brain-like feature).
The number of outputs from each neuron is subject to choice.
In the simulations discussed in this chapter, the following features (which are quite common in neural
network studies) are used: neurons are connected fully forward, each neuron is connected to every
neuron in the next layer, and no other connections are made. Figure 14-2 shows an example of such a
neural network.
The nodes in the input layer differ from those in the other layers in that no processing of the signals
takes place within them; the signals merely branch out to the first hidden layer of neurons. The nodes
in the other layers are actual neurons with several functions.
The signals between each connected pair of neurons are weighted by a factor unique to that con-
nection. As will be seen later, the values of these weights are the driving force behind the capability
of the network. The weights are the equivalent of the synapses in the brain.
The dendritic function of the neuron is in combining the various incoming signals and passing on
one signal. Several combining functions have been tried, for example, input, sum of inputs, or weighted
sum of inputs. The last option, in which each input signal is multiplied by the weight associated with
that connection before summing over all the inputs to that neuron, is very popular. It is in fact the
option chosen in our simulations.
The interpretive function of the neuron is represented by a nonlinear function that transforms and
scales the signal before transmitting it. The sigmoid function is commonly chosen (as here) but step
functions of various kinds have been tried.
Scaling of the signals is important to keep the calculations within bounds. It is usual to scale the
inputs to lie in the range 0 to 1 (the range -1 to 1 is also used). Because the weights are uncontrolled, as are
the number of inputs to any neuron, the weighted sum may assume values from -∞ to +∞. The sigmoid
function, f(x) = 1/[1 + exp(-x)], lies within 0 to 1 for any value of x. A direct consequence of this is
that the outputs are also scaled in the range 0 to 1.
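This scaling property is easy to verify. A short sketch in plain Python (the function name is ours, not the chapter's):

```python
import math

def sigmoid(x: float) -> float:
    """Sigmoid transfer function f(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

# Whatever the weighted sum, the transmitted signal stays within (0, 1).
for x in (-20.0, -1.0, 0.0, 1.0, 20.0):
    y = sigmoid(x)
    assert 0.0 < y < 1.0
print(sigmoid(0.0))  # 0.5
```

A zero weighted sum maps to the midpoint 0.5; large positive or negative sums saturate toward 1 or 0 without ever reaching them.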
The learning process may be either supervised, in which target outputs are provided for the system
to emulate, or unsupervised, in which the system seeks to find a pattern among a large amount of input
data. Supervised learning is used in our system and the back-propagation algorithm is invoked. Other
techniques, such as simulated annealing, have also been tried.
To recapitulate, the neural networks used in the work discussed in this chapter have two hidden
layers with full forward connection between neurons. The combining rule is the weighted sum and the
sigmoid transfer function is used. Learning is supervised and the back-propagation algorithm is used.
For many years artificial intelligence has been synonymous with expert systems, which has meant in
practice knowledge (or rule)-based systems. In such systems the knowledge engineer extracts from the
expert or experts a set of rules that are obeyed in the situation under examination. This process is
extremely difficult and requires great skill on the part of the knowledge engineer. A further complication
is that experts often make judgments that are not amenable to expression in the form of rules. Everyone
is familiar with the auto mechanic who listens to an engine running and states that a particular com-
ponent is worn. When asked how he knows this to be the case he replies, "it sounds like it." Trying
to extract the underlying rule or rules for use in an expert system is almost impossible in such cases.
Neural networks, on the other hand, excel at such tasks. To take the auto engine example one would
use as inputs a set of sound traces (in a suitably digitized form) and the opinion of the expert as to the
state of the engine corresponding to each trace. The neural network would be trained using this data
set and would eventually "learn" the underlying pattern corresponding to each fault. When subsequently
presented with a new sound trace the network would make a prediction as to the possible fault state
for that engine.
It should be noted that the network behaves in a manner similar to the expert in this case: a set of
inputs leads to a prediction without a rule being identified. With the neural network software discussed
in this chapter it is often possible to identify some fairly crude rules because one can identify which
inputs are important in predicting the various outputs. This rule extraction process is somewhat vague
but helps the engineer to gain an understanding of the system.
Two technical examples of the expert system and neural network approaches are as follows:
1. A solar flare forecasting expert system used 700 rules, took 1 man-year of effort to develop, and 5 min to
process an input. The neural network version was trained in 1 week and ran in milliseconds per input.
2. A burnout correlation took 1 man-year to develop the rule-based way but only 4 weeks, using a neural
network; the results are indistinguishable.
6. MATHEMATICAL DESCRIPTION
Consider a general neuron identified by the index j. Converging on this neuron are several (Ninp) input
signals I[1], I[2], ..., I[Ninp] with weights W[1, j], W[2, j], ..., W[Ninp, j].
The weighted sum combination rule produces a signal within the neuron, X[j], where

X[j] = Σ(i = 1 to Ninp) I[i]·W[i, j]    (14-1)
The sigmoid transfer function produces an output signal from the neuron, O[j], where

O[j] = 1/[1 + exp(-X[j])]    (14-2)
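Equations (14-1) and (14-2) translate directly into code. A minimal sketch in plain Python (function and variable names are ours):

```python
import math

def neuron_output(inputs, weights):
    """Output O[j] of one neuron: weighted-sum combination (Eq. 14-1)
    followed by the sigmoid transfer function (Eq. 14-2)."""
    x = sum(i * w for i, w in zip(inputs, weights))  # X[j], Eq. (14-1)
    return 1.0 / (1.0 + math.exp(-x))                # O[j], Eq. (14-2)

# Three input signals converging on a single neuron.
I = [0.2, 0.5, 0.9]
W = [0.1, -0.4, 0.3]
print(neuron_output(I, W))  # about 0.52
```

Each incoming signal is multiplied by the weight of its connection before summing, exactly as in the combining rule chosen for the simulations above.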
We can now consider a simple neural network with one middle layer. There are Ninp inputs I[i; n]
(i = 1 to Ninp), Nmid neurons in the middle layer, and Nout outputs O[k; n] (k = 1 to Nout). The output
layer also contains Nout target outputs T[k; n] corresponding to the input training set. There are Ntrs sets
of inputs and targets in the training set (n = 1 to Ntrs).
In what follows the index i refers to the input layer and ranges from 1 to Ninp; j refers to the middle
layer and ranges from 1 to Nmid; k refers to the output layer and ranges from 1 to Nout; n refers to the
training set and ranges from 1 to Ntrs.
For neurons in the middle layer (j = 1 to Nmid) we have

X[j; n] = Σ(i = 1 to Ninp) I[i; n]·Winp[i, j]

where Winp[i, j] are the input weights. The scaled input from the middle layer, M[j; n], is given by

M[j; n] = 1/[1 + exp(-X[j; n])]

The output layer follows in the same way, with output weights Wout[j, k]:

O[k; n] = 1/[1 + exp(-Σ(j = 1 to Nmid) M[j; n]·Wout[j, k])]
In the learning process we attempt to minimize the total error over the whole training set by adjusting
the various weights in the system. Initially all weights are chosen at random within the range -1 to
+ 1. The inputs from the first member of the training set, n = 1, are run through the network and the
outputs O[k; n] are calculated as described above. Comparison with the corresponding target T[k; n]
produces a contribution to the error associated with the initial set of weights. The rest of the training
set is presented to the network in sequence. Finally, the root-mean-square error is computed over all
outputs and all members of the training set:

E = sqrt{ [1/(Nout·Ntrs)] Σ(n) Σ(k) (O[k; n] - T[k; n])² }
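The root-mean-square error over all outputs and all training-set members can be sketched in plain Python (names are ours):

```python
import math

def rms_error(outputs, targets):
    """RMS error over all outputs k and all training examples n:
    sqrt( mean over n, k of (O[k; n] - T[k; n])**2 )."""
    sq = [(o - t) ** 2
          for out_n, tgt_n in zip(outputs, targets)
          for o, t in zip(out_n, tgt_n)]
    return math.sqrt(sum(sq) / len(sq))

# Two training examples (n = 1, 2), two outputs each (k = 1, 2).
O = [[0.9, 0.2], [0.4, 0.8]]
T = [[1.0, 0.0], [0.5, 1.0]]
print(rms_error(O, T))  # about 0.158
```

A single scalar of this kind is what the training process tries to drive down by adjusting the weights.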
The weights are adjusted by the back-propagation algorithm, described below, and the process is
repeated until the error is reduced to some prescribed level. The network is then said to be trained and
the final set of weights is saved for use in the predictive mode of the network. In this mode a set of
inputs is presented to the network, which calculates the outputs O[k] as above. Whereas the training
might take weeks, a single calculation in the predictive mode takes only milliseconds.
The back-propagation algorithm is invoked at the end of each forward pass through the network.
For each training example n the algorithm uses the outputs O[k; n] to produce corrections to the various
weights. When each training set has been completed, the corrections are summed and the weights
adjusted before the next forward pass.
The Winp and Wout on the left-hand side of Eqs. (14-13) and (14-14) are the new values (after the
training section) and those on the right-hand side are the old values (before the training section).
The parameter Q in the last two equations is adjustable and governs the approach to convergence of
the algorithm. If Q is too small then convergence will be slow; if Q is too large the correction will
overshoot and convergence will again be slow. Optimizing the value of Q is something of an art, but
experience soon gives one a feel for good values. A further parameter, known as momentum, is often
used to accelerate convergence of the back-propagation algorithm; the interested reader will find details
of this, and other features, in standard texts on neural networks (Rumelhart and McClelland, 1986).
Suffice it to say that the above equations should enable anyone to develop on a PC a simple neural
network capable of solving real problems.
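As the text suggests, the equations above suffice for a small working implementation. The following sketch (plain Python; one middle layer as in Section 6; a constant bias input of 1 is appended at each layer, an implementation convenience not present in the chapter's equations; all names are ours) trains on the exclusive-or problem by back-propagation:

```python
import math, random

random.seed(1)
sig = lambda x: 1.0 / (1.0 + math.exp(-x))

# Fully forward-connected network; weights start at random in (-1, +1).
Ninp, Nmid, Nout = 2, 3, 1
Winp = [[random.uniform(-1, 1) for _ in range(Nmid)] for _ in range(Ninp + 1)]
Wout = [[random.uniform(-1, 1) for _ in range(Nout)] for _ in range(Nmid + 1)]

def forward(inp):
    inp = list(inp) + [1.0]                       # bias input
    mid = [sig(sum(inp[i] * Winp[i][j] for i in range(Ninp + 1)))
           for j in range(Nmid)]
    midb = mid + [1.0]                            # bias into output layer
    out = [sig(sum(midb[j] * Wout[j][k] for j in range(Nmid + 1)))
           for k in range(Nout)]
    return mid, out

def rms(data):
    sq = [(forward(inp)[1][k] - tgt[k]) ** 2
          for inp, tgt in data for k in range(Nout)]
    return math.sqrt(sum(sq) / len(sq))

# Training set: exclusive-or, a task a single layer cannot solve.
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
Q = 0.5                                           # convergence parameter

err0 = rms(data)
for epoch in range(5000):
    for inp, tgt in data:
        mid, out = forward(inp)
        inpb, midb = list(inp) + [1.0], mid + [1.0]
        # Back-propagation: error terms for the output and middle layers.
        d_out = [(out[k] - tgt[k]) * out[k] * (1 - out[k])
                 for k in range(Nout)]
        d_mid = [mid[j] * (1 - mid[j]) *
                 sum(d_out[k] * Wout[j][k] for k in range(Nout))
                 for j in range(Nmid)]
        for j in range(Nmid + 1):
            for k in range(Nout):
                Wout[j][k] -= Q * d_out[k] * midb[j]
        for i in range(Ninp + 1):
            for j in range(Nmid):
                Winp[i][j] -= Q * d_mid[j] * inpb[i]

print("RMS error before/after training:", err0, rms(data))
```

The parameter Q here plays the role of the adjustable convergence parameter discussed above: too small and training crawls, too large and the corrections overshoot.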
The above discussions have indicated the power of neural networks in extracting a pattern, trend, or
correlation from a mass of data, and speedily predicting the result given a new set of inputs. The
prediction speed is a consequence of the training process, in which all the hard work is done. The
prediction of the failure probability of pipework under varying operating conditions is an example
application.
this is achieved via a structural reliability and risk analysis (SRRA) model. A diagrammatic represen-
tation of this SRRA model is given in Fig. 14-3. Separate computer programs are used for the different
parts of the analysis, with all the results being finally put together to derive the probability of failure
for a single weld or pipe location. The input data for the analysis are both complex and detailed; it
therefore requires a considerable effort and variety of expertise to complete the task. The objective of
this work was to see if a neural network could be developed that could be used instead of the full
SRRA model. If the neural network could be taught to do this ranking with fuzzy or imprecise data as
input, then this would provide a quick initial screening for the whole plant. Furthermore, this initial
screening could be carried out by engineers who possessed a detailed knowledge of the plant but did
not have the detailed expertise in the individual areas needed for the SRRA.
Because the teaching (training) data is restricted to a particular type of plant the exercise is limited
to the subset of pipe welds and the conditions applicable to that type of plant. Although this limits the
applicability of the final neural network it does not detract from the value of the exercise.
(Figure 14-3: diagrammatic representation of the SRRA model; blocks include the conditional probability of failure, the defect distribution and density, and the resulting probability of failure.)
The first area was easily dealt with; a pipe weld can be adequately described by its outside diameter,
its thickness (i.e., the pipe wall thickness), and its material (no transition welds were included).
For the second area the input to the SRRA model required detailed knowledge of how the weld was
constructed and inspected. However, because of the limited scope of this exercise, and because the
welds in question were all of a similar nature (i.e., multipass manual metal arc or submerged arc), we
decided to see if giving the basic weld details would be sufficient. Our personal belief is that there is
an implicit assumption in this, namely, that the defect size distribution is reasonably constant as a
function of the percentage wall thickness and that the defect density is a function of the weld volume.
A question we also believed could affect the above, and to which the answer should be known, was
whether or not the weld was made on site or at the vendor (in the construction being considered several
piping systems were prewelded and inspected by the vendor before being taken to the site as a complete
unit).
This then left the weld inspection, which would surely affect the defects entering service. Again the
SRRA required detailed knowledge about the inspection in order to simulate an inspection efficiency as
a function of different defect types and their through-wall depth. For the neural network this was
simplified to basic questions about what type of inspection was carried out.
The third area is about the transients the weld experiences during its lifetime. Again it is perhaps
unfortunate that the exercise was not larger in scope, as it became clear that in all areas of the piping
there was always one dominant transient. It was, therefore, decided to select only this worst or most
dominant transient and to describe the basic features of this cycle. These basic features were selected
to be the following:
The fourth and final area addresses the question of what size of defect will lead to failure, that is,
the critical defect size. Although the SRRA model is capable of taking into account the possibility of
a cleavage or ductile failure, the failures for all the pipe work welds in this exercise were ductile.
Because the exercise was also carried out for one type of plant, the operating pressure was also constant.
Thus any difference in the hoop or longitudinal pressure stress would be adequately defined for the
neural network via the tube diameter and wall thickness. The only principal variable that we felt would
affect the critical defect size was therefore just the bending moment.
The final set of questions for the engineer to answer is given in Table 14-1. It can be seen that some
are factual, some simply require a knowledge of the plant operation, and some require a degree of
judgment as well as a knowledge of the plant.
In reality the terms "high," "low," etc. and even the form of the questions were set by the engineers
during this normalizing process.
(Table 14-1. Questions for the engineer, with columns for the question and comments.)
7.5. Results
After a few trials around the entire system we at last obtained a converging set that seemed adequate.
The program itself took about 1000 iterations to converge, which required 4 to 6 hr of running
time on a personal computer (usually overnight). Figure 14-4 shows how the target and predicted values
compare for the set of training values. It can be seen that the neural network had learned to recognize
the SRRA output probability of failure from the fuzzy input data within about plus or minus half a
decade over a range of several decades.
The test runs were then put through the neural network, and Fig. 14-5 shows how well the network,
in these cases, predicted the SRRA output. What Fig. 14-5 shows is that the errors around the perfect
fit in Fig. 14-4 are a reflection of how the network would perform in a predictive way with some new
set of input data.
The neural network described in Section 7 proved to be reliable within the limitations of the training
data. Indeed, the network has been used on a second plant. The failure probability ranking of the welds
in this plant proved to be consistent and there was felt to be no need to use the full SRRA model other
than for a few confidence-building comparisons. The exercise was completed in a fraction of the time
normally taken, and paid for the network in that single application. A hidden advantage also emerged:
the commitment generated by involving the practising engineers in both the creation and running
of the network.
Figure 14-4. Comparison between SRRA target and neural network output for training values (log-scale axes, output versus target failure probability, 10^-7 to 10^-2).
The example in Section 7 is related to data from a theoretical analysis and the question then arises
as to the ability of the technique to work from raw data. The difficulty here is with obtaining even the
fuzzy or imprecise data needed to train the network. If, however, a fuzzy set of input variables can be
assembled for a given database, it would seem from this experience that the neural network could
possibly provide a significant increase in the value of the data.
It could be argued that because the neural network is doing little more than function fitting a set of
vague inputs to a set of outcomes, then normal multidimensional surface fitting routines can be used.
Indeed, the authors have used both methods for a given problem and identified the same dominant input
variables by both techniques. The difference is that in the surface fitting technique the form of the
functional equations must be input, whereas for the neural network this is not necessary. In the neural
network the knowledge from the outcomes is being distributed over the input variables to derive the
relationship. In this way weak or strong interactions between variables are recognized, linear or nonlinear
relationships are recognized, etc; the problem is that the user has no knowledge of these. It is this last
statement that may alienate many engineers. It is in our nature as engineers (and at least one of the
authors considers himself an engineer) to want to know why something works and to be able to inter-
rogate any model to satisfy this knowledge. In the application here, the knowledge resides in the SRRA
and so there is little problem. However, if a neural network is to be used in what could be termed a
purer form, then it will be necessary to have faith in the neural network, that is, to have faith in its
ability to learn the underlying physical relationship, which it will not then render up for anything but
a superficial interrogation! Despite this final shortcoming (if it may be called a shortcoming), the authors
believe that neural networks are a tool with which engineers will need to equip themselves in the future.
Figure 14-5. Comparison between SRRA target and neural network output for nontraining values (log-scale axes, output versus target failure probability).
REFERENCES
HOPFIELD, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings
of the National Academy of Sciences (USA) 79:2554-2558.
MINSKY, M., and S. PAPERT (1969). Perceptrons: An Introduction to Computational Geometry. Cambridge, Massa-
chusetts: MIT Press.
RUMELHART, D. E., and J. L. MCCLELLAND (1986). Parallel Distributed Processing. Cambridge, Massachusetts: MIT
Press.
TAGUCHI, G. (1976). System of Experimental Design, Vols. 1 and 2. [Translated into English by D. Clausing, 1987.]
Lanham, Maryland: UNIPUB/Kraus International.
15
PROBABILITY-BASED DESIGN
CODES
1. INTRODUCTION
This chapter presents code development procedures. The design code plays a central role in the building
process. It specifies the requirements for the designer so that a minimum acceptable safety level is
provided. The current codes specify load models (design loads) and resistance (design load carrying
capacity). Safety reserve is implemented through conservative load and resistance factors.
The major steps in the development of a probability-based code are reviewed. They include the
scope, objective, demand function, space metric, and format.
The procedure is demonstrated by the development of a load and resistance factor design (LRFD)
code for girder bridges. Using the available statistical models of load and resistance, reliability indices
are calculated for bridge girders designed according to the current code. The design formula is then
modified to provide for good fit to the target reliability level. The presented approach was successfully
applied in bridge code development in the United States and Canada (Nowak and Lind, 1979; Nowak,
1992). Probability-based codes (LRFD codes) have also been developed for steel structures (AISC,
1986; CISC, 1974; CEC, 1984), concrete structures (ACI, 1977), timber structures (ASCE, 1992), and
offshore oil platform structures (API, 1989).
2.1. Notations
D Dead load
DA Dead load, asphalt-wearing surface
Df Demand function
d Constant in the probability of failure formula
Fy Yield stress
L Live load
M β-Metric function
PF Probability of failure
Q Load effect
R Load carrying capacity (resistance)
T Weighted average measure of closeness
UT Utility
V Coefficient of variation
Zx Plastic section modulus
β Reliability index
βT Target reliability index
η Constant in load factor formula
λ Bias factor (mean-to-nominal ratio)
ΔCT Increase of total cost
φ Resistance factor
γ Load factor
2.2. Abbreviations
Realization of a structure involves an interaction of many different trades and professions. The major
players are owner, designer, contractor, and user. There are many conflicting interests. The owner invests
money and is interested in a maximization of the profit. This means a low cost of material and labor.
The designer is hired by the owner. The role of the designer is to deliver the calculations and drawings.
The owner puts pressure on the designer to minimize the costs, which means minimization of material
and labor. The designer is also expected to design a safe structure. Failure due to underdesign may
have severe legal consequences. The role of a contractor is similar to that of the designer. A contractor
is hired by the owner and is expected to follow the design documentation prepared by the designer.
The contractor's interest, like the owner's, is to minimize the costs of material and labor. However, use
of substandard materials and unqualified labor is illegal. Finally, the user is interested in safe and
comfortable living/working/operation. This means expensive materials and high labor costs. The design
code is a set of requirements that must be satisfied by the designer so that the designed structure will
have a minimum acceptable safety level. The central role of a code is shown in Fig. 15-1.
The acceptability levels in building codes have evolved through the centuries. Structural failures have
always been undesirable events. They occur because of ignorance, negligence, greed, physical barriers,
and sometimes acts of God. Furthermore, longer spans, heavier loads, and new materials bring increased
risk. Historically, the approach to risk related to construction has been subject to considerable variation.
The oldest preserved building code is Hammurabi's code from ancient Babylonia. It dates back
almost 4000 years and is on display in the Louvre Museum in Paris. The responsibilities were clearly
determined. If the building collapses and kills the owner, the builder is put to death. If the son of the
owner dies, the son of the builder is put to death, and so on. In the middle ages, the construction of
large structures (churches and towers) was done by skilled craftsmen. Safety was provided by compar-
ison to existing successful realizations. The learning process was based on trial-and-error practice. At
the present time, failures still provide information that is valuable in the development of design require-
ments for future structures. However, the development of structural analysis and material sciences has
provided a basis for the modern approach. Loads and resistance parameters are treated as random
variables. The uncertainties are quantified using the available statistical data and procedures.
The code provides requirements for the minimum acceptable safety level. The provisions are expressed
in terms of formulas and procedures. A typical design requirement is a load and resistance factor design
(LRFD) formula.
Σ γi·Qi ≤ φ·R    (15-1)

where Qi is load component i, γi is the load factor for load component i, R is the resistance (load carrying capacity),
and φ is the resistance factor. The design formula is developed by the code committee. It is then the
Figure 15-1. Design code and parties involved in the building process.
designer's responsibility to make sure that, for given load and resistance factors (specified in the code),
design loads and resistance satisfy Eq. (15-1). Another example of a code requirement is the specified
minimum spacing between the diaphragms in a girder bridge.
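The design check of Eq. (15-1) is simple to express in code. A minimal sketch with purely illustrative numbers (the load and resistance factors below are hypothetical, not taken from any particular code):

```python
def lrfd_ok(loads, load_factors, resistance, phi):
    """Check the LRFD design requirement of Eq. (15-1):
    sum of gamma_i * Q_i  <=  phi * R."""
    factored_load = sum(g * q for g, q in zip(load_factors, loads))
    return factored_load <= phi * resistance

# Illustrative load effects for dead and live load (e.g., kN*m).
dead, live = 120.0, 80.0
gammas = [1.25, 1.75]   # hypothetical load factors for D and L
print(lrfd_ok([dead, live], gammas, resistance=350.0, phi=0.9))  # True
```

For given factors specified in the code, the designer's task reduces to verifying that this inequality holds for the governing load combination.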
4. CODE LEVELS
From the reliability analysis point of view, there are four levels of design codes (Madsen et al., 1986).
Level I codes use deterministic design formulas. The safety margin is introduced through central safety
factors (ratio of design resistance to design load) or partial safety factors (load and resistance factors).
In level II codes, the design acceptance criterion is closeness to the target reliability index or other
safety related parameter. Level III codes require a full reliability analysis. The acceptance criterion is
closeness to the optimum reliability level (or probability of failure). Finally, level IV codes use the total
expected cost as the optimization criterion. The acceptable design maximizes the utility function (dif-
ference between the benefits and costs).
Current design practice is based on level I codes. However, level II methods are used for the de-
velopment of code parameters (load and resistance factors). Levels III and IV are of practical use only
in advanced research.
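Level II criteria revolve around the reliability index β. For the textbook case of statistically independent, normally distributed load effect Q and resistance R (an illustrative assumption made here, not a definition taken from this chapter), β = (μR − μQ)/√(σR² + σQ²) and PF = Φ(−β). A sketch:

```python
import math

def beta_normal(mu_r, sig_r, mu_q, sig_q):
    """Reliability index for independent normal R and Q:
    beta = (mu_R - mu_Q) / sqrt(sig_R**2 + sig_Q**2)."""
    return (mu_r - mu_q) / math.sqrt(sig_r**2 + sig_q**2)

def failure_probability(beta):
    """PF = Phi(-beta), via the standard normal CDF."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

# Illustrative means and standard deviations of resistance and load effect.
b = beta_normal(mu_r=400.0, sig_r=40.0, mu_q=250.0, sig_q=30.0)
print(b, failure_probability(b))  # beta = 3.0, PF about 1.35e-3
```

A level II acceptance criterion would then compare such a β against the target βT rather than compute PF for the full (generally non-normal) problem.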
5.1. Scope
A code is developed for a class of structures. It is important to determine that class by identifying
the range of parameters covered and not covered by the code. These parameters can include type of
material (steel, concrete, wood, plastic), type of function (office, apartment, hotel, hospital, highway
bridge, railway bridge), span length (short, medium, long), structural type (frame, beam, column, con-
nection), thickness of components (hot-rolled steel, cold formed), and type of connection (welded,
riveted, bolted).
To avoid unintentional misuse of the code provisions, the code-writing committee should clearly
specify the scope. The scope is a parameterized set of structures, and the set of parameters is called the
data space. It can be narrow (anchor bolts used in a concrete wall in a nuclear power plant) or wide
(all types of bridges).
An example of a code with a specified scope is a bridge design code. For each parameter, the range
is either listed as a discrete set or provided in the form of an interval (specified using numerical values
or formula). With regard to function, the parameter set may include highway, railway, transit guideway,
pedestrian, and others. Structural types can be girder, slab, truss, arch, frame, cantilever, cable-stayed,
and suspension. Materials used for bridge construction can include steel, reinforced concrete, preten-
sioned concrete, posttensioned concrete, wood, glued laminated wood, and stressed wood. The code
may specify the method of analysis for bridges: simple girder distribution factors (as specified by
AASHTO, 1992), two-dimensional analysis, three-dimensional analysis, finite element analysis, or other
numerical procedures.
The code may deal with various limit states. Limit states must be clearly defined, with the major
parameters identified and acceptance criteria determined in the form of limit state functions. For ex-
ample, the ultimate limit states (ULSs) may include flexural capacity, shear, compression, and tension.
Serviceability limit states (SLSs) are determined depending on material and structural type: cracking,
deflection, vibration, and excessive permanent deformation. Fatigue limit state (FLS) may govern the
design. Fatigue load and resistance can be expressed in terms of number of load cycles (in the case of
highway bridges this can be in terms of number of trucks).
The major codes used in the United States vary considerably with regard to scope. The following
codes for building structures cover resistance only (loads are specified in other documents).
Load components include dead load, live load (static and dynamic), environmental forces (temper-
ature, wind, earthquake, ice pressure, water pressure), and special forces (for buildings, e.g., fire, gas
explosion, and for bridges, e.g., emergency braking, collision). Loads for buildings are specified in the
following codes:
For bridges, loads and resistance are specified in one document issued by the American Association
of State Highway and Transportation Officials (AASHTO, 1992).
The scope of a code is a compromise between simplicity and closeness to the objective. It is desirable
to cover a wide range of structures by the same provisions. Such a simple code is easier for the designer
and the probability of error (use of a wrong design formula) is reduced. On the other hand, it is difficult
to achieve the target safety level for all structures covered by the code. If safety is close to the target
value for one group of components, then another group can be overdesigned or underdesigned. If the
scope is narrowed down to structures with similar parameters, then it is easier to satisfy the required
safety criterion.
The current trend is to cover load and resistance by the same code. Furthermore, modern codes must
provide a rational basis for the comparison of materials (steel, concrete, and wood). From the designer's
point of view, it is convenient to use the same load and resistance factors for all design cases, for
example, to always use one factor for dead load (regardless of material).
The current trend in the selection of code objectives is to specify the target reliability index βT. The
target safety value can be determined for a class of structures, components, and/or limit states. The
optimum value of βT depends on the expected cost of failure and the cost of upgrading (the cost of
increasing the safety reserve). Therefore, in current codes there are considerable differences in βT. For
example, consider steel beams and connections (fasteners, e.g., bolts). Should the reliability of beams be
the same as that of fasteners? The reliability of hot-rolled sections depends on Zx, Fy, and
thickness-to-width ratios. For a given value of Fy, to increase β (the reliability index) of a beam, Zx must
be increased, which means an increase of weight (cross-section). For fasteners, β can be increased by
adding a bolt or bolts, which is usually less expensive than increasing the beam size. Therefore, it costs
less to increase the reliability index β for fasteners than for beams. Safety can be considered a
commodity, and it is cheaper in the case of bolts. Therefore, typical values of β are between 3 and 4 for
beams, but 6 to 8 for fasteners.
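The practical meaning of these index levels is easier to see as failure probabilities. A minimal sketch, assuming the usual first-order relation PF = Φ(−β), with Φ the standard normal distribution function:

```python
from math import erfc, sqrt

def failure_probability(beta):
    """P_F = Phi(-beta), computed via the complementary error function."""
    return 0.5 * erfc(beta / sqrt(2.0))

# Typical targets: beams (beta = 3 to 4) versus fasteners (beta = 6 to 8).
for beta in (3.0, 4.0, 6.0, 8.0):
    print(f"beta = {beta:.0f}  ->  P_F = {failure_probability(beta):.2e}")
```

For β = 3 the failure probability is about 1.3 × 10⁻³, while for β = 8 it is of the order of 10⁻¹⁶; each unit increase in β buys roughly one to two orders of magnitude in PF, which is why the higher target for fasteners is affordable only because their safety is cheap to buy.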
A family of prescribed target reliability indices is clearly the only currently possible and acceptable
objective for a code. Collectively, the βT values of the objective are called the target reliability index
function.
A reliability index may be associated with any stochastic system that can attain two states: failure
and nonfailure. Ideally, one would like to assign a reliability index to an entire structure. However,
there are many essentially different modes of failure that are not all equivalent. This is reflected in the
safety checks required for each failure mode. The only practical alternative seems to be to associate a
prescribed target reliability index with each safety check. Without undue complication, the target reliability
index may vary with loading (a function of the load ratio, e.g., the live-to-dead load ratio), type of failure
mode (shear, flexure, buckling, etc.), and material. Moreover, it may vary within a "single" failure mode
(e.g., different for long, short, and intermediate columns). Whether such a variation is permissible and
desirable is a matter for the code committee to decide. For example, the committee may consider whether
the reliability of beams should be independent of the D/L ratio. If there is no valid reason to prescribe
different reliability, the target reliability should be constant.
As a guide to selection of target reliability index, the past performance of codes in service is most
valuable. The index can be calculated (given the appropriate statistical data) for any structural member
and safety check, using the models of loads and resistance. From the reliability indices thus computed
for existing code values, a target index is selected, as a function or a constant.
In particular, it is possible (but normally a rather pointless exercise) to select β values of the "old"
code as target for the new code. This makes sense, however, when the code change is meant as a change
in form and not content, for example, as when a new analysis formula is proposed to replace an old
one.
(15-2)
where M is a function, called the β-metric, Df is the demand function, and s is the integration parameter.
It is important that a code committee have an idea of how the code is likely to be used. If, as seems
unavoidable, the target reliability cannot be met exactly, and the distance from the target must of necessity
vary, it must be known which structural data occur most frequently so that the target reliability can
be met as closely as possible for those data. The end result of the calibration is not sensitive to details
of variation of the demand function, and rather crude estimates based on sampling from past designs
would normally suffice in practice. In general, it can be said that the closer the target can be met, the
less important is the shape of the demand function.
The β-metric can be chosen in many reasonable ways. For example, it may be a weighted least-squares
fitting of β to βT over the data domain. Alternatively, one may seek to minimize the numerically
maximum relative error max|βT − β|, and so on.
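Both choices of β-metric can be written down directly. A small sketch with hypothetical β values and demand weights (none of the numbers are from the chapter):

```python
def weighted_lsq(betas, beta_t, weights):
    """Weighted least-squares closeness of beta to beta_T over the data domain."""
    return sum(w * (b - beta_t) ** 2 for b, w in zip(betas, weights)) / sum(weights)

def minimax(betas, beta_t):
    """Minimax alternative: the maximum absolute deviation |beta_T - beta|."""
    return max(abs(beta_t - b) for b in betas)

# Hypothetical indices computed for four representative designs,
# weighted by how often each design occurs in practice.
betas, weights, beta_t = [3.2, 3.6, 3.9, 3.4], [0.4, 0.3, 0.2, 0.1], 3.5
print(weighted_lsq(betas, beta_t, weights), minimax(betas, beta_t))
```

The weighted form lets frequent designs dominate the fit, whereas the minimax form guards the worst case regardless of how rarely it occurs.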
An appropriate measure for the β-metric can be obtained from the viewpoint of the utility of structures,
that is, by drawing attention to the difference of consequence of overdesign and underdesign
(Lind and Davenport, 1972). Neglecting maintenance and demolition costs, the total cost of a structure
against a single limit state can be expressed as
CT = CI + CF PF (15-3)

in which CT, CI, and CF are the total cost, initial cost, and failure cost, respectively, whereas PF is the
probability of failure. Member sizes are selected to satisfy an inequality of the form

R ≥ Q (15-4)

in which Q is the load effect in a particular mode of failure and R is the corresponding resistance.
The initial cost CI can be fitted with good accuracy, at least in the neighborhood of the target
reliability index βT, by

CI = a + bβ (15-5)

where a and b are constants, and β is the reliability index. The probability of failure PF can be
approximated by

PF = c exp(−dβ) (15-6)
in which c and d are constants. Assuming that the value of risk (the consequence of failure, i.e., the
failure cost) is independent of the reliability index β, the total cost is given as

CT = a + bβ + CF c exp(−dβ) (15-7)
The total cost CT is equivalent to the negative utility −UT. The relationship between CT (= −UT)
and β given by this equation is presented in Fig. 15-2, in which the skewness indicates the difference in
the consequences of overdesign and underdesign for the utility. Furthermore, assuming that the target
safety level βT is optimal, the simplified equation for the increment of total cost from the optimum is

ΔCT = b[(β − βT) + (exp(−d(β − βT)) − 1)/d] (15-8)

where ΔCT is the increase of total cost due to the difference between β and βT. Note that ΔCT = 0
when β = βT. Then a measure of closeness, M1,

(15-9)
might be employed. It was found that the constant d changes only slightly, from 4.2 to 5.2, as β goes
from 4.0 to 5.0, provided that PF is equal to Φ(−β), where Φ is the standard normal probability
function.
For a small β − βT, the approximated measure of closeness, M2, is obtained:

(15-10)
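The cost model of Eqs. (15-5) to (15-8) can be checked numerically. In the sketch below all the constants a, b, c, d, and CF are illustrative, not taken from the chapter; the optimum βT follows from dCT/dβ = 0, and the asymmetry of the penalty around βT reproduces the skewness noted for Fig. 15-2:

```python
from math import exp, log

# Illustrative constants: C_I = a + b*beta (15-5), P_F = c*exp(-d*beta) (15-6).
a, b, c, d, C_F = 100.0, 10.0, 1.0, 4.4, 1.0e7

def total_cost(beta):
    """C_T = C_I + C_F * P_F  (Eqs. 15-3 and 15-7)."""
    return a + b * beta + C_F * c * exp(-d * beta)

# Stationarity dC_T/dbeta = 0 gives the optimum target index.
beta_t = log(C_F * c * d / b) / d

# Penalty for missing the target by +/- 0.5: underdesign costs more.
under = total_cost(beta_t - 0.5) - total_cost(beta_t)
over = total_cost(beta_t + 0.5) - total_cost(beta_t)
print(f"beta_T = {beta_t:.2f}, penalty(-0.5) = {under:.1f}, penalty(+0.5) = {over:.1f}")
```

With these constants βT comes out near 3.5, and underdesigning by 0.5 costs about 4.4 times as much as the same overdesign, which is why measures of closeness that penalize the two sides differently are attractive.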
Many other measures of closeness could be considered. If it is desired to penalize larger deviations
from the objective, the following closeness measure, M3, might be employed:

(r ≥ 2) (15-11)
The design formula requires the factored resistance to exceed the total factored load:

Σ γiQni ≤ φRn

where the factored load is equal to Σ γiQni, Qni is the nominal (design) value of load component i, and
γi is load factor i; the factored resistance is φRn, where Rn is the nominal (design) value of resistance
and φ is the resistance factor.
An example of nominal (design) load, mean load, and factored load is shown in Fig. 15-3. In some
developments of design codes, the factored load was taken so that the shaded area (see Fig. 15-3) was
the same for all load components (e.g., dead load and live load).
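The equal-shaded-area idea can be sketched directly: each factored load is the value exceeded with the same probability. Assuming normally distributed load components with illustrative means and coefficients of variation (not the chapter's data):

```python
from statistics import NormalDist

def factored_load(mean, cov, p_exceed):
    """Load value exceeded with probability p_exceed (normal load model)."""
    return NormalDist(mean, cov * mean).inv_cdf(1.0 - p_exceed)

# Same 2% exceedance probability for two hypothetical components.
dead = factored_load(mean=100.0, cov=0.10, p_exceed=0.02)
live = factored_load(mean=60.0, cov=0.18, p_exceed=0.02)
print(f"factored dead load = {dead:.1f}, factored live load = {live:.1f}")
```

The more variable component automatically receives the larger relative margin above its mean (here about 21% for the 10% COV dead load versus 37% for the 18% COV live load), mirroring the spread between dead and live load factors in actual codes.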
A similar example showing nominal resistance, mean resistance, and factored resistance is presented
in Fig. 15-4. The actual value of the resistance factor is determined by calibration, with the objective
of obtaining β = βT.
The presented approach is demonstrated in the development of a load and resistance factor design
(LRFD) bridge design code (Nowak and Lind, 1979; Nowak et al., 1987; Nowak, 1992). The work
involved the development of load models, resistance models, limit states, and acceptance criteria. This
description deals with the reliability-related aspects only.
Figure 15-3. Nominal load, mean load, and factored load. (Probability density function of a load component, with the mean, nominal, and factored values marked.)
The design provisions are developed for the ultimate limit states (ULSs) of flexural capacity (bending
moment) and shear. Calculations are performed for spans from 30 to 200 ft (9 to 60 m) and girder
spacing from 4 to 12 ft (1.2 to 3.6 m).
Figure 15-4. Nominal resistance, mean resistance, and factored resistance. (Probability density function of the resistance R, with the mean, nominal, and factored values marked.)
Table 15-1. Statistical parameters of load components

Load component                Bias factor      Coefficient of variation
Dead load:
  Factory-made components     1.03             0.08
  Cast-in-place components    1.05             0.10
  Asphalt wearing surface     mean = 3.5 in.   0.25
Live load and dynamic load    1.0-1.8          0.18
The most common structural types are steel girders, reinforced concrete T-beams, and prestressed con-
crete AASHTO-type girders.
In the present study, the calculations are performed for the three types of material (steel, reinforced
concrete, and prestressed concrete). The spans cover the range from 30 to 200 ft (9 to 60 m). Girder
spacings considered include 4, 6, 8, 10, and 12 ft (1.2, 1.8, 2.4, 3.0, and 3.6 m).
In the AASHTO (1992) design formula, D is the dead load effect, L is the live load effect, I is the
dynamic load effect, Rn is the load-carrying capacity, and φ is the resistance factor. Resistance factors
specified by AASHTO (1992) are
shown in Table 15-3.
The reliability analysis is performed for steel girders, reinforced concrete T-beams, and prestressed
concrete girders. The results are shown in Figs. 15-5 to 15-7 for moments and in Figs. 15-8 to 15-10
for shears. There is considerable variation in the β values, caused by the variation in the statistical
parameters of load and resistance.
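The β values behind these figures come from detailed load and resistance models (Nowak, 1992). The simplest first-order version of such a calculation, for a linear limit state g = R − Q with independent normal variables and illustrative numbers (not the chapter's data), is:

```python
from math import sqrt

def beta_first_order(mean_r, cov_r, mean_q, cov_q):
    """Reliability index for g = R - Q, with R and Q independent and normal."""
    sr, sq = cov_r * mean_r, cov_q * mean_q
    return (mean_r - mean_q) / sqrt(sr * sr + sq * sq)

# Hypothetical girder: mean resistance 60% above the mean total load effect.
print(beta_first_order(mean_r=160.0, cov_r=0.10, mean_q=100.0, cov_q=0.18))
```

Changing the span or girder spacing changes the live-to-dead load ratio and hence mean_q and cov_q, which is exactly why β varies across the design space under a fixed design formula.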
Selection of βT can be based on the β values for the current code, evaluation of the performance of
existing structures, and engineering judgment. The selection process should also involve representatives of
the owners, designers, contractors, and maintenance engineers. If the performance of existing structures is
acceptable, then βT should be at the lower level of the calculated reliability index spectrum. If the
current provisions result in overdesign, βT may be reduced. For structures with no performance
evaluation data available, βT can be selected by comparison to similar structures or conditions.
The cost of safety differs among structures and structural parts. For example, increasing the safety
level in a connection costs less than increasing safety in the beam itself. A connection also generally fails
in a brittle, catastrophic manner. Therefore, target reliability indices for connections should be relatively
higher. Such considerations must be taken into account in the selection of βT values. In the present
study, the target reliability index βT = 3.5.
Figure 15-5. Reliability indices for steel girders designed according to AASHTO (1992), moments. (Plot of reliability index versus span, 0 to 200 ft, for girder spacings s = 4 to 12 ft.)
Figure 15-6. Reliability indices for reinforced concrete T-beams designed according to AASHTO (1992), moments. (Plot of reliability index versus span, 0 to 200 ft, for girder spacings s = 4 to 12 ft.)
Figure 15-7. Reliability indices for prestressed concrete girders designed according to AASHTO (1992), moments. (Plot of reliability index versus span, 0 to 200 ft, for girder spacings s = 4 to 12 ft.)
It is preferred to use the same load factor for each load component in all design cases (for any
material). Therefore, load factors are determined first. Then various values of the resistance factor are
tried. The code value of φ corresponds to the best fit of β to βT.
The factored load component γiQni, where γi is load factor i and Qni is the nominal value of load
component i, can be determined so that the probability of being exceeded is the same for all load
components in the considered load combination. Then the load factor γi can be calculated as follows:

γi = λi(1 + η Vi)

where λi is the mean-to-nominal ratio for load i, Vi is the coefficient of variation, and η is a constant
for all load components.
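A sketch of this relation using the Table 15-1 statistics (the value η = 2.0 here is illustrative, not the calibrated one):

```python
def load_factor(bias, cov, eta):
    """gamma_i = lambda_i * (1 + eta * V_i)."""
    return bias * (1.0 + eta * cov)

eta = 2.0  # illustrative constant, common to all load components
components = [("factory-made dead load", 1.03, 0.08),
              ("cast-in-place dead load", 1.05, 0.10),
              ("live + dynamic load", 1.20, 0.18)]
for name, bias, cov in components:
    print(f"{name}: gamma = {load_factor(bias, cov, eta):.2f}")
```

Even with this crude η, the resulting factors (about 1.19, 1.26, and 1.63) land close to the adopted 1.25 and 1.60, which is how a plot such as Fig. 15-11 is read: pick η, then read off all γi at once.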
In the new code under consideration, the design live load and dynamic load are changed compared
to AASHTO (1992). The bias factor for the new live load (including dynamic load) is 1.1 to 1.2. For
dead loads the statistical parameters can be taken from Table 15-1. The resulting relationship between
η and γi is shown in Fig. 15-11. The selected values of the load factors are 1.25 for dead load (except
asphalt), 1.50 for the asphalt wearing surface, and 1.60 for live load (static and dynamic). Therefore,
the design formula for bridges is

1.25D + 1.50DA + 1.60(L + I) ≤ φRn

where D is the dead load (except asphalt), DA is the asphalt wearing surface load, L is the static portion
of the live load, I is the dynamic portion of the live load, Rn is the nominal resistance, and φ is the
resistance factor.
Figure 15-8. Reliability indices for steel girders designed according to AASHTO (1992), shears. (Plot of reliability index versus span, 0 to 200 ft, for girder spacings s = 4 to 12 ft.)
Figure 15-9. Reliability indices for reinforced concrete T-beams designed according to AASHTO (1992), shears. (Plot of reliability index versus span, 0 to 200 ft, for girder spacings s = 4 to 12 ft.)
Figure 15-10. Reliability indices for prestressed concrete girders designed according to AASHTO (1992), shears. (Plot of reliability index versus span, 0 to 200 ft, for girder spacings s = 4 to 12 ft.)
Figure 15-11. Relationship between the constant η (1.50 to 2.50) and the load factors γi for the load components: live load, and dead load for asphalt, cast-in-place, and factory-made components.
For each material several resistance factors are considered. For a given φ, the corresponding nominal
resistance is

Rn = [1.25D + 1.50DA + γ(L + I)]/φ (15-16)

The reliability indices calculated for bridges designed using the three considered types of material
are shown in Figs. 15-12 to 15-14 for moments and Figs. 15-15 to 15-17 for shears. Two values of the
live load factor are considered, γ = 1.60 and γ = 1.70. The best fit to the target reliability level, βT =
3.5, is obtained for the live load factor γ = 1.60 and the resistance factors presented in Table 15-4.
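The calibration loop itself, try a φ, recompute β over the designs, and keep the φ closest to βT, can be sketched with a toy first-order model (all statistical parameters here are illustrative, not the study's data):

```python
from math import sqrt

def beta_for(phi, gamma=1.60, bias_r=1.12, cov_r=0.10, bias_q=1.05, cov_q=0.18):
    """beta for a design satisfying gamma*Q_n <= phi*R_n (first-order normal model)."""
    q_n = 1.0                 # normalized nominal load effect
    r_n = gamma * q_n / phi   # nominal resistance required by the design formula
    mr, mq = bias_r * r_n, bias_q * q_n
    return (mr - mq) / sqrt((cov_r * mr) ** 2 + (cov_q * mq) ** 2)

beta_t = 3.5
best_phi = min((0.85, 0.90, 0.95, 1.00), key=lambda p: abs(beta_for(p) - beta_t))
print(f"best phi = {best_phi}, beta = {beta_for(best_phi):.2f}")
```

A larger φ means a smaller required Rn and hence a lower β; the code value of φ is simply the trial value whose β falls closest to βT.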
7. CONCLUDING REMARKS
The theory of probability and statistics provides a convenient tool for the development of rational design
and evaluation criteria. The major steps in the development of a code include definition of the scope,
code objective, frequency of demand, selection of space metric, and code format. The scope of the code
should be clearly identified to avoid ambiguities and improper use. In current codes, the objective is
closeness to the preselected target reliability level, often expressed in terms of the reliability index.
Analysis of the frequency of demand allows for identification of the most important safety checks. The
space metric serves as a measure of closeness to the target safety level. Usually, underdesign is penalized
more than overdesign. Selection of the code format is important from the user's point of view.
The formulated approach is demonstrated in an example of a code developed for the design of
highway bridges. An existing code is based on deterministic analysis. The new provisions are derived
using the available statistical data on loads and resistance parameters. Load and resistance factors are
calculated so that the reliability indices for bridges designed by the new code are close to a preselected
target value. The probability-based approach results in a uniform safety level for various spans and
materials.
Figure 15-12. Reliability indices for steel girders designed according to the New Code, moments. (Plot of reliability index versus span for φ = 0.95 and 1.00 and live load factors γ = 1.60 and 1.70.)
Figure 15-13. Reliability indices for reinforced concrete T-beams designed according to the New Code, moments. (Plot of reliability index versus span for φ = 0.85 and 0.90 and live load factors γ = 1.60 and 1.70.)
Figure 15-14. Reliability indices for prestressed concrete girders designed according to the New Code, moments. (Plot of reliability index versus span for φ = 0.95 and 1.00 and live load factors γ = 1.60 and 1.70.)
Figure 15-15. Reliability indices for steel girders designed according to the New Code, shears. (Plot of reliability index versus span for φ = 0.95 and 1.00 and live load factors γ = 1.60 and 1.70.)
Figure 15-16. Reliability indices for reinforced concrete T-beams designed according to the New Code, shears. (Plot of reliability index versus span for φ = 0.90 and 0.95 and live load factors γ = 1.60 and 1.70.)
Figure 15-17. Reliability indices for prestressed concrete girders designed according to the New Code, shears. (Plot of reliability index versus span for φ = 0.90 and 0.95 and live load factors γ = 1.60 and 1.70.)
REFERENCES
AASHTO (American Association of State Highway and Transportation Officials) (1992). Standard Specifications
for Highway Bridges, 15th ed. Washington, D.C.: American Association of State Highway and Transportation
Officials.
ACI (American Concrete Institute) (1977). Building Code Requirements for Reinforced Concrete. ACI 318-77.
Detroit, Michigan: American Concrete Institute.
AISC (American Institute of Steel Construction) (1986). Manual for Steel Construction, Load and Resistance
Factor Design. Chicago, Illinois: American Institute of Steel Construction.
API (American Petroleum Institute) (1989). Recommended Practice 2A-LRFD [draft]. Dallas, Texas: American
Petroleum Institute.
ASCE (American Society of Civil Engineers) (1992). Load and Resistance Factor Design: Specification for En-
gineered Wood Construction. New York: American Society of Civil Engineers.
CEC (Commission of European Communities) (1984). Common Unified Rules for Steel Structures. EUROCODE
No.3. Brussels, Belgium: Commission of the European Communities.
CISC (Canadian Institute of Steel Construction) (1974). Steel Structures for Buildings-Limit States Design. Stand-
ard CSA S16.1-1974. Rexdale, Ontario, Canada: Canadian Institute of Steel Construction.
HWANG, E.-S., and A. S. NOWAK (1991). Simulation of dynamic load for bridges. ASCE Journal of Structural
Engineering 117(5):1413-1434.
LIND, N. C., and A. G. DAVENPORT (1972). Towards practical application of structural reliability theory. In: Prob-
abilistic Design of Reinforced Concrete Buildings. ACI SP-31. Detroit, Michigan: American Concrete Insti-
tute. pp. 63-110.
MADSEN, H. 0., S. KRENK, and N. C. LIND (1986). Methods of Structural Safety. Englewood Cliffs, New Jersey:
Prentice-Hall.
NOWAK, A. S. (1992). Calibration of LRFD Bridge Design Code. Report UMCE 92-25, NCHRP Project 12-33.
Ann Arbor, Michigan: University of Michigan.
NOWAK, A. S., and Y.-K. HONG (1991). Bridge live load models. ASCE Journal of Structural Engineering 117(9):
2757-2767.
NOWAK, A. S., and N. C. LIND (1979). Practical bridge code calibration. Journal of the Structural Division,
American Society of Civil Engineers 105(12):2497-2510.
NOWAK, A. S., J. CZERNECKI, J. ZHOU, and R. KAYSER (1987). Design Loads for Future Bridges. FHWA Project,
Report UMCE 87-1. Ann Arbor, Michigan: University of Michigan.
RACKWITZ, R., and B. FIESSLER (1978). Structural reliability under combined random load sequences. Computers
and Structures 9:489-494.
TABSH, S. W., and A. S. NOWAK (1991). Reliability of highway girder bridges. ASCE Journal of Structural
Engineering 117(8):2373-2388.
16
RELIABILITY-BASED OPTIMUM STRUCTURAL DESIGN*
DAN M. FRANGOPOL
1. INTRODUCTION
Today, in modern structural design, the ultimate goal is generally to find the best possible solution
without compromising structural reliability. Toward this aim several reliability-based design codes have
been proposed and are currently used in the United States, Canada, Asia, Australia, and Europe for
buildings, bridges, and offshore platforms. These codes are calibrated using advanced structural relia-
bility techniques to provide uniform and consistent safety levels over all structural elements (e.g., beams,
columns, connections) that are designed by the same code provisions. However, uniform reliability of
structural elements does not assure uniform reliability of structural systems. Depending on the type of
structural topology, material, configuration, joint behavior, and correlations, the reliability of a structural
system can be vastly different (Ang, 1989). Therefore, considerable research has been focused on
structural system reliability assessment both in code work and in specific structural investigations for
design criteria selection, concept evaluation, as well as on inspection and maintenance strategies. It is
assumed in this chapter that we know how to evaluate both element (also referred to as component)
and system reliabilities with respect to various limit states.¹
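The gap between element and system reliability is easy to demonstrate for the simplest case, a weakest-link (series) system of statistically independent elements; the sketch below is only an illustration of the point, not one of the system models cited in this chapter:

```python
from math import erfc, sqrt
from statistics import NormalDist

def p_fail(beta):
    """Element failure probability P_F = Phi(-beta)."""
    return 0.5 * erfc(beta / sqrt(2.0))

def series_system_beta(element_betas):
    """System index for an independent series system: it fails if any element fails."""
    p_survive = 1.0
    for b in element_betas:
        p_survive *= 1.0 - p_fail(b)
    return -NormalDist().inv_cdf(1.0 - p_survive)

# Ten elements, each designed to a uniform beta of 3.5.
print(f"system beta = {series_system_beta([3.5] * 10):.2f}")
```

Ten elements at β = 3.5 give a system β of about 2.83: uniform element reliability does not carry over to the system, and parallel (fail-safe) behavior or correlation between elements shifts the answer again.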
To fulfill the requirement of finding the best possible design without compromising structural relia-
bility, optimization theory and methods must also be used. During the last three decades theory and
methods of structural optimization have developed significantly. The demands for lightweight structures
(particularly in aerospace applications), efficient use of materials (particularly fiber composites), and
energy conservation in various transportation systems have been strong driving forces behind these
developments (Schmit, 1984). However, despite the demonstrated growth of structural optimization,
most of it has been cast in a deterministic format. A major limitation of the deterministic optimization
*The financial support of the National Science Foundation under Grants MSM-8618108, MSM-8800882, and MSM-9013017
for RBSO research at the University of Colorado at Boulder is gratefully acknowledged. Also gratefully acknowledged is the
collaboration and support of the writer's associates: A. Al-Harthy, G. Fu, S. Hendawi, M. Iizuka, S. Katsuki, M. Klisinski,
Y.-H. Lee, R. Nakib, and K. Yoshida, all of whom made contributions to the development of RBSO theory, software, and/or
applications.
¹Methods of component and system reliability analysis are described in Chapters 2 to 8 of this book.
approach is that the uncertainties in loads, resistances, and structural response are not included in the
optimization process. Therefore, deterministic optimized structures have inconsistent reliability levels.
Usually, they exhibit higher failure probabilities and less redundancy than those designed by reliability-
based design codes. Consequently, a balance must be developed between the reliability needs of the
structure and the aims of reducing its cost. Clearly, this requires the combination of reliability-based
design and optimization. This combination has led to a new design perspective from which reliability-
based structural optimization (RBSO) evolved.
This chapter focuses on structural design based on both reliability and optimization. It presents (1)
a brief historical background of RBSO, (2) a short review of problem types and basic formulations,
multicriteria and damage-tolerant formulations, and computational methods including sensitivity analy-
sis, and (3) RBSO examples.
2.1. Notations

q Iteration number
Rn Set of real numbers
UT Total expected utility
V Volume of structural members; also coefficient of variation
W Linear approximation of C
W⁰ Allowable value of W
x Vector of design variables
x^q Estimate of the optimum point at the qth iteration
xi Design variable
Y Feasible space
αq Step size at the qth iteration
β Reliability index
β⁰ Allowable value of β
βDMG Reliability index of the damaged system
β⁰DMG Allowable value of βDMG
βe Reliability index of the element
β⁰e Allowable value of βe
βejk Reliability index of element k with respect to limit state j
β⁰ejk Allowable value of βejk
βINT Reliability index of the intact system
β⁰INT Allowable value of βINT
βs Reliability index of the system
β⁰s Allowable value of βs
βsj Reliability index of the system with respect to limit state j
β⁰sj Allowable value of βsj
Φ Cumulative distribution function of the standard normal variate
ρ Correlation coefficient
2.2. Abbreviations
3. HISTORICAL BACKGROUND
Seven decades ago, long before structural reliability and optimization techniques were born, Forsell
(1924) formulated the design process as a total cost (including the initial cost, maintenance cost, and
the expected value of the cost of failure) minimization. This total cost criterion along with failure
probability evaluation methods were investigated during the 1950s by Johnson (1953), Ferry-Borges
(1954), Freudenthal (1956), and Paez and Torroja (1959), among others.
By 1960, along with the introduction of the deterministic approach to structural optimization set
forth by Schmit (1960), in which finite element structural analysis is coupled to nonlinear mathematical
programming to create automated optimum design capabilities, the concept of RBSO was applied to
structural weight minimization for a specified reliability (Hilton and Feigen, 1960; Switzky, 1964) and
cost (Kalaba, 1962). It is noteworthy that these contributions were published in aerospace journals and
conferences, as the desire to reduce structural weight without compromising structural reliability was
particularly strong in the aerospace industry.
The early work of Freudenthal (1956) and the subsequent development by Freudenthal et al. (1966)
provided the reliability bases of structural analysis and design. Using these bases, Cornell (1967) for-
mulated bounds on reliability of structural systems, and Moses and Kinser (1967), Moses (1969), and
Moses and Stevenson (1970) presented methods for incorporating reliability analysis into optimum
design of structural systems.
Parallel with the development of deterministic structural optimization methods based on mathematical
programming techniques during the 1960s, exemplified by the work of Gellatly and Gallagher (1966),
among others, a major effort to introduce probability-based design code formats was made by Cornell
(1969a,b). Subsequently, significant contributions to probabilistic design code formats were made by
Lind (1971), Rosenblueth and Esteva (1972), Ditlevsen (1973), Ang and Cornell (1974), and Hasofer
and Lind (1974), among others. These contributions, however, were limited to structural elements.
Moses (1974) presented a description for extending probabilistic design code formats from structural
elements to structural systems. Specifically, partial safety factors were proposed to relate element to
system reliability for both weakest-link and fail-safe structures. Parallel with the development of the
fourth generation of computers, continuing interest in the development of RBSO stimulated work in
structural design decisions (Turkstra, 1970), optimum design of structures with a minimum expected
cost criterion (Rosenblueth and Mendoza, 1971; Mau and Sexsmith, 1972), and matrix structural reli-
ability analysis and reliability-based design (Vanmarcke, 1971). A contribution by Moses (1973) sur-
veyed how reliability analysis can be incorporated into optimization methods and presented an example
of RBSO of reinforced concrete beams.
Since 1973 the RBSO field has matured considerably. It is not possible to include in this chapter all
the contributions to RBSO during the past two decades. However, the following references and the
literature cited in them contain the work of almost all the researchers in the field.
Review papers and chapters in books on RBSO: Rojiani and Bailey (1984), Frangopol (1985c, 1991), Thoft-
Christensen and Murotsu (1986), Thoft-Christensen (1991), Frangopol and Moses (1994)
Proceedings and special issues of international journals on RBSO: Thoft-Christensen (1987a, 1989), Frangopol
(1989), Frangopol and Corotis (1990), Der Kiureghian and Thoft-Christensen (1991), Rackwitz and Thoft-
Christensen (1992)
Articles on RBSO: Moses (1977), Parimi and Cohn (1978), Carmichael (1981), Surahman and Rojiani (1983),
Frangopol (1984a,b, 1985a,b,d; 1986a,b), Feng and Moses (1986a,b), Rosenblueth (1986), Ishikawa and Iizuka
(1987), Rackwitz and Cuntze (1987), Thoft-Christensen and Sørensen (1987), Soltani and Corotis (1988), Fu
and Frangopol (1990a,b), Kim and Wen (1990), Murotsu and Shao (1990), Nakib and Frangopol (1990a,b),
Murotsu et al. (1992)
Papers in conference proceedings on RBSO: Frangopol and Rondal (1976), Casciatti and Faravelli (1985),
Frangopol (1986b, 1987, 1993), Bourgund (1987), Enevoldsen et al. (1989), Frangopol and Fu (1989, 1990),
Frangopol and Nakib (1990), Frangopol and Iizuka (1991a,b, 1992a,b), Frangopol et al. (1991), Mahadevan
and Haldar (1991), Tao et al. (1992), Thoft-Christensen (1991, 1992)
Reports and Ph.D. Theses on RBSO: Furuta (1980), Sørensen (1986, 1987, 1988), Kim and Wen (1987),
Sørensen and Thoft-Christensen (1987), Thoft-Christensen (1987b), Sørensen and Enevoldsen (1989),
Enevoldsen and Sørensen (1990), Enevoldsen et al. (1990), Enevoldsen (1991), Iizuka (1991), Shao (1991)
Most of the above references, representing contributions to RBSO during the 1973-1993 time period,
were based on developments in structural reliability and/or structural optimization during the same time
interval. These developments were reported in the following.
Books on structural optimization: Spillers (1975), Haug and Arora (1979), Gill et al. (1981), Kirsch (1981),
Lev (1981), Reklaitis et al. (1983), Atrek et al. (1984), Farkas (1984), Osyczka (1984), Rao (1984), Vanderplaats
(1984a), Haftka and Kamat (1985), Save and Prager (1985), Mota Soares (1987), Arora (1989a), Borkowski
and Jendo (1990), Eschenauer et al. (1990)
Books on structural reliability: Ditlevsen (1981), Thoft-Christensen and Baker (1982), Ang and Tang (1984),
Augusti et al. (1984), Vanmarcke (1984), Yao (1985), Madsen et al. (1986), Thoft-Christensen and Murotsu
(1986), Melchers (1987), Frangopol (1989)
Survey papers on structural optimization: Ashley (1982), Vanderplaats (1982), Templeman (1983), Schmit
(1984), Levy and Lev (1987), Arora (1989b)
Survey papers on structural reliability: Grimmelt and Schueller (1982), Moses (1982), Shinozuka (1983),
Ditlevsen and Bjerager (1986), Bjerager (1989)
International journals on optimization: Engineering Optimization, Structural Optimization
International journals on structural safety and probabilistic mechanics: Structural Safety, Probabilistic Engi-
neering Mechanics
Specialized conferences on structural safety and reliability, probabilistic mechanics, and/or reliability-based
optimization: International Conference on Structural Safety and Reliability (ICOSSAR) (1969-Washington,
D.C.; 1977-Munich, Germany; 1981-Trondheim, Norway; 1985-Kobe, Japan; 1989-San Francisco, Cal-
ifornia; 1993-Innsbruck, Austria); International Conference on Applications of Statistics and Probability in
Civil Engineering (ICASP) (1971-Hong Kong; 1975-Aachen, Germany; 1979-Sydney, Australia; 1983-
Florence, Italy; 1987-Vancouver, Canada; 1991-Mexico City, Mexico); ASCE Specialty Conference on Prob-
abilistic Mechanics and Structural and Geotechnical Reliability (1969-Purdue University, Lafayette, Indiana;
1974-Stanford University, Stanford, California; 1979-Tucson, Arizona; 1984-Berkeley, California; 1988-
Blacksburg, Virginia; 1992-Denver, Colorado); and IFIP WG 7.5 Working Conference on Reliability and
Optimization of Structural Systems (1987-Aalborg, Denmark; 1988-London, England; 1990-Berkeley, Cal-
ifornia; 1991-Munich, Germany; 1993-Takamatsu-shi, Kagawa, Japan).
Within the reliability-based design philosophy structural optimization problems are generally classified
according to the nature of design variables x (Frangopol, 1985c; Enevoldsen, 1991) as follows: sizing
optimization (e.g., cross-sectional dimensions, moments of inertia), configuration (also referred to as
shape) optimization (e.g., coordinates of joints in a frame or truss system, fiber orientation angles in a
composite material), topology optimization (e.g., number of spans in a bridge, number of joints and/or
members in a truss system), and total optimization (e.g., structure and material types). Most RBSO
studies deal exclusively with the first and lowest category in this hierarchy of structural optimization
problems. However, it should also be noted that a few RBSO studies have addressed configuration optimization
problems for truss systems (Furuta, 1980; Murotsu and Shao, 1990), monotower platforms (Enevoldsen
et al., 1989), and fiber-reinforced composite structures (Shao, 1991; Murotsu et al., 1992).
To compare feasible design alternatives it is necessary to formulate an objective. Many objective
functions have been proposed in RBSO. These include cost and utility functions that should be mini-
mized and maximized, respectively, as follows:
min: Ct = Ci + Cf Pf (16-1)

max: Ut = B - Ci - L (16-2)

where Ct is the total expected cost of the structure, Ci is the initial cost, which is a function of the
vector of design variables x, Cf is the expected cost of failure, Pf is the probability of failure of the
structure, which is also a function of x, Ut is the total expected utility of the structure, B is the benefit
derived from the existence of the structure, and L is the expected loss due to failure. As shown by
Rosenblueth (1986), in some cases the benefit B can also be a function of the vector x.
Because of the difficulties in associating monetary values to all failure consequences (e.g., placing
monetary value on human life and injuries), the total expected cost and utility objectives are of limited
interest for practical purposes. As an alternative, the RBSO problem can be reformulated as
min: C (16-3)

s.t.: Pf ≤ Pf^0 (16-4)

where C is the expected cost of the structure without considering costs of human life loss and/or injuries,
Pf^0 is the allowable probability of structural failure, and s.t. means subject to. Side constraints of the form

xi^l ≤ xi ≤ xi^u (i = 1, 2, ..., n) (16-5)

can also be imposed, in which the superscripts l and u denote lower and upper bounds, respectively.
The RBSO problem can also be formulated as

min: Pf (16-6)

s.t.: C ≤ C^0 (16-7)

where C^0 is the allowable cost of the structure without considering costs of human life loss and/or
injuries. In the above formulation the side constraints (Eq. [16-5]) may also be added to the cost
constraint (Eq. [16-7]).
Using first-order reliability methods (FORMs), the probabilities Pf and Pf^0 are related to the reliability
indices β and β^0, respectively, as follows:

β = Φ⁻¹(1 - Pf) (16-8)

β^0 = Φ⁻¹(1 - Pf^0) (16-9)

where Φ⁻¹(1 - Pf) and Φ⁻¹(1 - Pf^0) are the values of the standard normal variate at the probability
levels Pf and Pf^0, respectively. Reliability-based structural optimization can be formulated both at the
system level [i.e., βs = Φ⁻¹(1 - Pfs), βs^0 = Φ⁻¹(1 - Pfs^0)] and at the element level [i.e., βe = Φ⁻¹(1 -
Pfe), βe^0 = Φ⁻¹(1 - Pfe^0)], where the subscripts s and e denote system and element, respectively.
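The relation between β and Pf in Eqs. (16-8) and (16-9) can be sketched in a few lines. The use of Python's `statistics.NormalDist` is an illustrative choice, not something prescribed in the chapter:

```python
# Sketch of the FORM relation of Eqs. (16-8)-(16-9): beta = Phi^{-1}(1 - Pf).
from statistics import NormalDist

_std = NormalDist()  # standard normal variate

def beta_from_pf(pf: float) -> float:
    """Reliability index corresponding to a failure probability."""
    return _std.inv_cdf(1.0 - pf)

def pf_from_beta(beta: float) -> float:
    """Failure probability corresponding to a reliability index."""
    return 1.0 - _std.cdf(beta)

# Example: an allowable failure probability of 1e-5 corresponds to a
# target reliability index of about 4.26.
print(round(beta_from_pf(1.0e-5), 2))
```

The same two functions serve at the system and element levels; only the probability plugged in changes.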
The general RBSO problem at the system level, using FORMs, can be stated as one of finding design
variables subjected to both reliability and side constraints, so that the objective function is minimized:

min: C(x) (16-10)

s.t.: βsj ≥ βsj^0 (j = 1, 2, ..., m) (16-11)

xi^l ≤ xi ≤ xi^u (i = 1, 2, ..., n) (16-12)

The corresponding problem at the element level is

min: C(x) (16-13)

s.t.: βejk ≥ βejk^0 (j = 1, 2, ..., m; k = 1, 2, ..., p) (16-14)

xi^l ≤ xi ≤ xi^u (i = 1, 2, ..., n) (16-15)

where p is the number of elements in the structure and βejk is the reliability index of the element k with
respect to limit state j.
Equations (16-10)-(16-12) and (16-13)-(16-15) are generally considered as basic RBSO formula-
tions. They were used for both cross-section and configuration optimizations and solved with varying
degrees of success. Additional RBSO formulations (e.g., RBSO considering element and system limit
states, multicriteria RBSO, damage-tolerant RBSO, time-variant RBSO) have been addressed in the
literature (see, e.g., Frangopol and Moses, 1994, and references cited therein). Owing to space limitations
only multicriteria and damage-tolerant RBSO formulations are presented herein.
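A minimal sketch of the basic sizing formulation of Eqs. (16-10)-(16-12): minimize weight subject to a reliability constraint β ≥ β^0 and side constraints on the design variable. The single tension member, its statistics, and the coarse one-dimensional search are all hypothetical illustrations, not an example from the chapter:

```python
# Sizing optimization under a FORM reliability constraint (illustrative).
import math

MU_S, SIGMA_S = 100.0, 20.0     # load effect S ~ N(100, 20) (assumed, kN)
FY_MEAN, FY_COV = 250.0, 0.10   # yield stress statistics (assumed, MPa)
BETA0 = 3.5                     # allowable reliability index
A_LO, A_HI = 0.1, 20.0          # side constraints on the area (cm^2)

def beta(area_cm2: float) -> float:
    """FORM index for the linear limit state g = R - S with normal R and S."""
    mu_r = FY_MEAN * area_cm2 / 10.0      # resistance in kN (assumed units)
    sigma_r = FY_COV * mu_r
    return (mu_r - MU_S) / math.sqrt(sigma_r**2 + SIGMA_S**2)

# Weight is proportional to area, so the optimum is simply the smallest
# feasible area: scan the side-constraint interval and keep the first
# design satisfying beta >= beta0 (beta is monotone in the area here).
areas = [A_LO + i * 0.001 for i in range(int((A_HI - A_LO) / 0.001))]
a_opt = min(a for a in areas if beta(a) >= BETA0)
print(round(a_opt, 3), round(beta(a_opt), 2))
```

In realistic problems the one-line scan is replaced by a nonlinear programming algorithm, and β comes from a FORM analysis rather than a closed form.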
Nondeterministic structural systems under stochastic loads are generally designed under safety, servicea-
bility, and economy requirements. The single objective RBSO problem has been well discussed in the
literature and basic formulations such as Eqs. (16-10)-(16-12) and (16-13)-(16-15), or a combination of
these, have been presented. In general, a solution to a single objective RBSO problem is unique and will
not provide enough information for decision making in the design process (Frangopol and Iizuka, 1992a,b).
Nowadays, in structural system reliability-based design there often exists more than one objective,
usually conflicting (e.g., system cost and probability of system collapse), which must be considered
simultaneously in the optimization process. Therefore, to find the optimum solution (also referred to as
the Pareto optimum or noninferior solution) multicriteria optimization (also referred to as vector, multi-
objective, or Pareto optimization) techniques have to be used. In this section a brief presentation of
multicriteria RBSO formulation is given.
The multicriteria RBSO optimization formulation for structural system design under uncertainty was
set forth by Fu and Frangopol (1990b). This formulation introduced the idea and indicated the feasibility
of solving RBSO problems under multiple criteria (also referred to as objectives).
The problem is formulated as follows:

min: f(x), s.t.: x ∈ Y (16-16)

where f and x are the objective and design variable vectors, respectively, and Y is the feasible space of
x given by a set of constraints expressed in the abbreviated form (Osyczka, 1984; Duckstein, 1984)

Y = {x ∈ R^n : g(x) ≤ 0, h(x) = 0} (16-17)

where R^n is the n-dimensional space of real numbers, g is the vector of q inequality constraints, and h is the vector of
r equality constraints. The objective vector f(x) consists of t elementary objective functions,
f(x) = [f1(x), f2(x), ..., ft(x)]^T,
which are certain characteristics of the structural system to be designed under uncertainty, such as total
cost, repair cost, maintenance cost, material volume or weight, and system probabilities of collapse, first
yielding, and plastic hinge occurrences. The n components of the design variable vector x,
x = [x1, x2, ..., xn]^T (16-20)
are parameters to be determined in the optimization process. For such a multicriteria optimization
problem, a solution is defined as a vector x*, which belongs to the space Y and under which none of
the objectives can be further reduced without causing an increase in at least another objective (Koski,
1984). In general the vector x* exists and is called the Pareto optimal (also referred to as strongly
noninferior or nondominated) solution (Duckstein, 1984). The corresponding vector objective f(x*) is
called the minimal criterion solution or Pareto optimal objective.
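The Pareto-optimality definition above can be sketched as a dominance filter: an objective vector is nondominated if no other feasible vector is at least as good in every objective and strictly better in at least one. The candidate designs below are hypothetical (volume, collapse probability) pairs:

```python
# Nondominated (Pareto) filtering of a set of objective vectors,
# all objectives being minimized.
def pareto_front(objectives):
    """Return the nondominated subset of a list of objective tuples."""
    def dominates(u, v):
        return (all(a <= b for a, b in zip(u, v))
                and any(a < b for a, b in zip(u, v)))
    return [u for u in objectives
            if not any(dominates(v, u) for v in objectives)]

# Hypothetical (volume, collapse probability) pairs for candidate designs:
designs = [(10.0, 1e-3), (12.0, 1e-4), (11.0, 5e-4), (12.5, 1e-4), (15.0, 1e-4)]
print(pareto_front(designs))
```

The designs (12.5, 1e-4) and (15.0, 1e-4) are filtered out: (12.0, 1e-4) is no worse in either objective and strictly better in volume.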
Fu and Frangopol (1990b), Iizuka (1991), and Frangopol and Iizuka (1991a, 1992a,b) proposed a
multicriteria RBSO formulation based on the minimization of a four-objective vector:
(16-21)
where V is the total volume of structural members (assuming that the structure is made of the same
material), PfCOL is the system probability of plastic collapse, PfYI.o is the system probability of first
yielding, and PfDFM is the system probability of excessive elastic deformation.
As shown by Frangopol and Klisinski (1992), most system reliability-based optimization research deals
with design of structural systems under ultimate and/or serviceability limit states requirements only. For
most structures, serviceability limit states are related to disruption of the functional use due to excessive
deformation or initiation of component damage, and the ultimate limit states are related to plastic
collapse or instability occurrences. In many practical situations it is desired to control the performance
of structural systems associated with intermediate stages of behavior (i.e., between initiation of unser-
viceability and final collapse). These intermediate stages of behavior are characterized by conditions
affecting one or more elements of a structural system, failure of one or a group of members of a
structure, and more than one plastic hinge occurrence in a highly indeterminate structure (Arora et al.,
1980; Feng and Moses, 1986b; Frangopol and Fu, 1989; Fu and Frangopol, 1990a; Frangopol et al.,
1991; Liu and Moses, 1991; Frangopol and Klisinski, 1992; Frangopol and Moses, 1994). According
to Arora et al. (1980) a structure is called damage tolerant if it continues to perform its basic function
even after it sustains a specified level of damage.
As pointed out by Frangopol and Fu (1989) and Liu and Moses (1991), in order to obtain an
acceptable optimal structure the RBSO approach should consider the performance over the expected
lifetime of a structural system. Consequently, both reserve (intact system) and residual (damaged system)
reliability requirements must be considered simultaneously in the RBSO process.
Frangopol and Fu (1989) defined system reserve reliability PRSV as one minus the probability of
system collapse of the intact structure PfINT, and system residual reliability PRES as one minus the
probability of system collapse of the damaged structure PfDMG, as follows:

PRSV = 1 - PfINT (16-22)

PRES = 1 - PfDMG (16-23)
Certain levels of both system reserve reliability and system residual reliability, PRSV^0 and PRES^0, respectively, are usually required over the expected lifetime of a structure. The former, PRSV^0 (allowable
system reserve reliability), addresses the reliability requirement for the intact structure and the latter,
PRES^0 (allowable system residual reliability), addresses the reliability requirement for the damaged
structure.
Using a single-objective RBSO format, the optimization formulation of a damage-tolerant structure
is as follows:

min: C(x) (16-24)

s.t.: PfINT ≤ PfINT^0 (16-25)

PfDMG ≤ PfDMG^0 (16-26)

xi^l ≤ xi ≤ xi^u (i = 1, 2, ..., n) (16-27)

or, equivalently, in terms of reliability indices,

min: C(x) (16-28)

s.t.: βINT ≥ βINT^0 (16-29)

βDMG ≥ βDMG^0 (16-30)

where βINT and βDMG are, respectively, related to PfINT and PfDMG by Eq. (16-8), and βINT^0 and βDMG^0 are,
respectively, related to PfINT^0 and PfDMG^0 by Eq. (16-9).
Frangopol and Fu (1989) and Fu and Frangopol (1990a,b) proposed to solve the damage-tolerant
RBSO problem in a multicriteria format as follows:

min: f(x) = [V(x), PfINT(x), PfDMG(x)]^T, s.t.: x ∈ Y (16-32)

where x is the design variable vector to be determined, and V, PfINT, and PfDMG are objectives to be
minimized simultaneously.
Two excellent review and assessment papers on computational design optimization methods and computational methods for reliability analysis have been presented by Arora (1989b) and Bjerager (1989),
respectively. For this reason and also because computational methods for reliability analysis are dis-
cussed in Chapters 2 to 7 of this book, this section concentrates mostly on computational methods for
multicriteria RBSO and stresses the key role of sensitivity analysis in RBSO.
Most algorithms for solving the above nonlinear programming problems generate a sequence of improved estimates of the optimum of the form

x^(q+1) = x^q + αq d^q (16-34)

where q is the iteration number, x^(q+1) is the estimate of the optimum point x*, αq is the step size, and
d^q is the search direction. Primal methods, based on both linear convergence (e.g., feasible directions
method, gradient projection method, generalized reduced gradient method) and superlinear convergence
(e.g., with and without potential constraint strategies, hybrid methods), and transformation methods
(e.g., penalty function methods, conjugated Lagrangian or multiplier methods) have both been used
with various degrees of success in conjunction with first- and second-order reliability methods (FORMs
and SORMs). Simulation reliability methods, such as importance sampling and directional simulation,
will soon find a place in RBSO.
Approaches to find the Pareto optimal solution to a deterministic multicriteria optimization problem,
including weighting, ε-constraint, compromise programming, and goal programming methods, have been
proposed in the literature (Duckstein, 1984; Koski, 1984; Osyczka, 1984; Eschenauer et al., 1990).
Presented herein is an original strategy for searching the solutions to the multicriteria RBSO problems
whose objectives are defined by Eqs. (16-21) and (16-32). This strategy was proposed in Frangopol and Fu
(1989) and Fu and Frangopol (1990b). It consists of the following steps (Fu and Frangopol, 1990b).
Step 1. Choosing ranges on upper bounds of system failure probability objectives: In reliability-based
optimization problems of structural engineering interest, conservative assumptions need to be made with respect
to the evaluation of structural system reliabilities. For this reason, the upper bounds (e.g., Ditlevsen, 1979) on
the probabilities PfCOL, PfYLD, PfDFM, PfINT, and PfDMG must be used. In the problem of structural optimization
considering system reliabilities, it is not always necessary to find the complete solution set, but only the part
that is sensible for further consideration in the final decision making. From the experience of the structural
engineering profession, for example, it is often possible to predetermine the ranges (Pi^l, Pi^u) on the upper
bounds Pfi^u(x) of the structural system failure probability objectives Pfi(x).
Restricting the ranges for upper bounds on system reliabilities will eliminate part of the optimal objectives from
consideration. For instance, a structural system failure probability with regard to ultimate collapse initiating
from the intact state, PfINT, is usually not allowed to be higher than 10^-2, although such a value may be produced by a
feasible Pareto optimal solution. By restricting the set of Pareto optimal objectives in this way, the Pareto
solution set to be found is also reduced in size. It should be noted that wrong or inappropriate choices of the
ranges (Pi^l, Pi^u) will result in a waste of effort in finding the optimal solution.
A legitimate question is why the lower end of the range for Pfi^u(x) should be restricted. As Osyczka (1984)
shows, in multicriteria optimization problems "optimized" means finding a "good" solution considering all
the objective functions simultaneously. Of course, in multicriteria RBSO a good solution will balance in an
optimum manner the system reliability and cost (e.g., weight) requirements. To balance these conflicting
requirements properly, it is necessary neither to overdesign nor to underdesign the system.
Step 2. Solving biobjective optimization problems: For a problem with several objectives contained in the
vector f(x) it is often helpful to investigate first the interactive behavior between any two of the objectives
without considering the others. These investigations can be performed in a series of biobjective optimizations
as follows: min[fi(x) and fj(x)], (i, j = 1, 2, ..., t; i ≠ j), subject to x ∈ Y. It is obviously possible to have
various pairs of the objectives in this step. The number of such pairs depends on what the designers desire to
know. Each such biobjective optimization will provide information on the interactive relation between the
two objectives without disturbance by the others. Technically, the Pareto solution with respect to these two
objectives can be found by the ε-constraint method (Osyczka, 1984). This method converts one of the two
objectives to a constraint, as follows: min fi(x), subject to x ∈ Y and fj(x) ≤ εj,
and conducts a series of the resulting single-objective optimizations by varying the upper limit εj of the constraint
converted from the objective. The solutions of these biobjective optimizations provide fundamental
insights into the original optimization problem and are frequently helpful in understanding the comprehensive
results produced in the next step.
Step 3. Solving multicriteria RBSO by the ε-constraint method: In this final step, one of the t objectives in the
vector f(x) is kept as the objective to be minimized and the remaining t - 1 objectives are transferred to
constraints. As shown in Fu and Frangopol (1990b), this single objective is optimized subject to a group
of given values εj for the t - 1 transferred constraints. The single-objective optimizations in steps 2 and 3 can
be performed with available approaches for nonlinear optimizations (e.g., Vanderplaats, 1984a).
The three-step solution technique described above constitutes an organized strategy for solving mul-
ticriteria RBSO problems. In this searching process, a satisfactory solution may be obtained on the basis
of different preferences given to the criteria, or a new direction to search for a preferred solution can
be determined.
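The three-step strategy, reduced to a biobjective toy problem, can be sketched by sweeping the upper limit ε of the failure-probability constraint and solving the resulting single-objective problem each time (Step 3 with t = 2). The one-variable member model and all numbers are illustrative assumptions:

```python
# Epsilon-constraint sketch: minimize weight with Pf converted to a
# constraint Pf <= eps, traced over a series of eps values.
import math
from statistics import NormalDist

def pf(area: float) -> float:
    """Failure probability of a hypothetical member, decreasing with area."""
    mu_r, sigma_r = 25.0 * area, 2.5 * area        # assumed resistance stats
    beta = (mu_r - 100.0) / math.sqrt(sigma_r**2 + 400.0)
    return 1.0 - NormalDist().cdf(beta)

def min_weight_given_pf(eps: float) -> float:
    """Single-objective solve: smallest area (~ weight) with pf(area) <= eps."""
    lo, hi = 4.01, 50.0                            # bisection on the monotone pf
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if pf(mid) > eps else (lo, mid)
    return hi

# Tracing the weight-versus-failure-probability trade-off by varying eps:
for eps in (1e-2, 1e-3, 1e-4, 1e-5):
    print(eps, round(min_weight_given_pf(eps), 2))
```

Each pass is one single-objective optimization; collecting the solutions over the swept ε values yields a discretized Pareto front for the pair of objectives.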
Sensitivity analysis is used to "establish a measure of the way each response quantity varies with changes in the parameters that define
the system." In RBSO sensitivity analysis seeks rates of change of predicted optimum solutions with
respect to changes in the preassigned deterministic and/or probabilistic parameters. As shown by Fran-
gopol (1985a), other important factors to be considered in the sensitivity of RBSO solutions are such
things as the influence of approximations used in formulating the objective functions, of the reliability
methods used for computing the system reliabilities, and of the optimization techniques themselves. In
RBSO there are three principal reasons for needing this information (Frangopol, 1985a).
First, if the statistical parameters (e.g., mean values, coefficients of variation, coefficients of corre-
lation) or specified reliability levels are modified after RBSO is complete this will provide the designer
with a measure of what effect such changes will have on the optimized structure. Because there is
limited statistical information on loads and strengths, approximations cannot be avoided in arriving at
the statistical parameters associated with these random variables. The critical parameters can be iden-
tified through sensitivity analysis. This will allow for an efficient control of errors, in specifying the
required level of approximation for each parameter of interest, so that a reasonable compromise between
reality and simplicity could be achieved in RBSO.
Second, the information provided by sensitivity analysis of the optimum solution to changes in the
objective function, in the probabilistic method to determine structural system reliability, or in the op-
timization technique, can be effectively used by the designer to rationalize the computational effort and
to develop a reasonable level of confidence in the final design.
Third, the sensitivity analysis information is valuable for indicating new paths of research in
reliability-based optimization (Moses, 1979), and also is useful in multilevel and multidiscipline design
(Sobieszczanski-Sobieski et al., 1982; Vanderplaats, 1984b).
The sensitivity of the objective function with respect to change in problem parameters is also of
interest in RBSO. As shown by Kirsch (1981), it is always desirable, from a practical point of view, to
select an objective function that is both sensitive to variations in the design variables and representative of
the most important cost components.
Various optimum design sensitivity methods and algorithms for both deterministic optimization and
reliability-based optimization are available in the literature (Sobieszczanski-Sobieski et al., 1982; Schmit
and Cheng, 1982; Schmit, 1984; Vanderplaats, 1984b; Madsen et al., 1986; Bjerager and Krenk, 1989;
Enevoldsen, 1991). These include methods based on Kuhn-Tucker conditions, first-order methods based
on gradient information, and second-order methods based on second derivatives of the objective and
constraints. Two types of sensitivities may be obtained in an RBSO problem (Enevoldsen, 1991): (1)
sensitivity of both the objective function, dC/dx, and the constraints, dβsj/dx and/or dβejk/dx, with respect to the
design variables, and (2) sensitivities with respect to all remaining deterministic and probabilistic
parameters in the description of the structure, loads, and structural behavior (e.g., geometric properties,
correlation coefficients, coefficients of variation), dC/dp, dβsj/dp and/or dβejk/dp, where p is the problem
parameter vector not included in the design variable vector x.
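A minimal sketch of a type-(2) sensitivity: the derivative of the reliability index with respect to a distribution parameter that is not a design variable. For the linear limit state g = R - S with independent normals, β = (μR - μS)/sqrt(σR² + σS²), so dβ/dμR has the closed form 1/sqrt(σR² + σS²); a central finite difference is shown alongside as the generic numerical fallback. All numbers are assumed:

```python
# Sensitivity of a FORM index with respect to a probabilistic parameter.
import math

SIGMA_R, SIGMA_S, MU_S = 15.0, 20.0, 100.0   # assumed statistics

def beta(mu_r: float) -> float:
    """Closed-form index for g = R - S with independent normal R and S."""
    return (mu_r - MU_S) / math.sqrt(SIGMA_R**2 + SIGMA_S**2)

def dbeta_dmur_fd(mu_r: float, h: float = 1e-4) -> float:
    """Central-difference sensitivity d(beta)/d(mu_R)."""
    return (beta(mu_r + h) - beta(mu_r - h)) / (2.0 * h)

analytic = 1.0 / math.sqrt(SIGMA_R**2 + SIGMA_S**2)
print(round(dbeta_dmur_fd(180.0), 6), round(analytic, 6))
```

For nonlinear limit states no such closed form exists, and the finite-difference (or analytic FORM-gradient) route is the practical one.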
As previously mentioned, several sensitivity methods and algorithms are currently available for
RBSO. Some of these algorithms work acceptably well for particular problems taking into account the
possible discontinuity of the sensitivities, yet considerable research work must be performed to improve
their robustness and implementation as an integral part of advanced computer codes for both single and
multicriteria RBSO.
8. NUMERICAL EXAMPLES
The most common examples used in the RBSO literature over the last three decades are single-limit state
sizing optimization of steel frames and trusses under static loading. It is interesting to note that in the last
15 years the state of the practice in RBSO has advanced considerably by providing more realistic and
complex numerical examples on reinforced concrete structures (Parimi and Cohn, 1978; Surahman and
Rojiani, 1983; Frangopol, 1985d), multiple limit states (Parimi and Cohn, 1978; Frangopol, 1985c, 1986b;
Frangopol and Fu, 1990), time-varying loads (Kim and Wen, 1987, 1990), inspection and repair of structural
systems (Thoft-Christensen and Sørensen, 1987), damage-tolerant structures (Frangopol and Fu, 1989; Iizuka,
1991; Liu and Moses, 1991), multicriteria requirements (Fu and Frangopol, 1990b; Frangopol and Iizuka,
1991a,b, 1992a,b; Iizuka, 1991), shape of structural systems (Furuta, 1980; Enevoldsen et al., 1989; Murotsu
and Shao, 1990; Shao, 1991), configuration of material systems (Shao, 1991; Murotsu et al., 1992), and
adaptive geometries of intelligent systems (Shao, 1991). In some of the numerical examples on multicriteria
RBSO, decision support spaces and/or procedures for risk-based decision making were also indicated (Fu
and Frangopol, 1990a,b; Frangopol and Iizuka, 1991a; Iizuka, 1991). In this manner, RBSO becomes a
useful support tool for optimum structural design trade-offs under uncertainty.
In this section selected numerical examples developed by the writer and his associates over the last
decade are presented. Other examples on RBSO may be found in various contributions listed in References.
a. min: W
   s.t.: Pf1 ≤ Pf1^0
         Pf2 ≤ Pf2^0
         MB ≥ 0, MC ≥ 0

b. min: W
   s.t.: Pf1 ≤ Pf1^0
         MB ≥ 0, MC ≥ 0

c. min: W
   s.t.: Pf2 ≤ Pf2^0
         MB ≥ 0, MC ≥ 0

where W is the objective function, Pf1 is the probability of plastic collapse of the frame, Pf2 is the
probability of frame unserviceability by first plastic hinge occurrence, and Pf1^0 and Pf2^0 are allowable
values of Pf1 and Pf2, respectively.
Formulation (a) is similar to that given by Eqs. (16-10)-(16-12), and formulations (b) and (c) are
similar to that given by Eqs. (16-3)-(16-4).
Figure 16-1 presents a comparison of the formulations (a), (b), and (c), in which the allowable
probabilities of collapse and unserviceability are specified: Pf1^0 = 10^-5 and Pf2^0 = 10^-2. The two-
dimensional mean design space shown in Fig. 16-1 has two reliability constraint curves, such that all
points on the curves Pf1 = 10^-5 and Pf2 = 10^-2 have the probabilities of plastic collapse and unserviceability equal to the prescribed values 10^-5 and 10^-2, respectively. Also shown is the linear objective
function, W, that should be minimized (W = 4300 is shown; lines corresponding to other values of W
will run parallel to this line). The boundary of the feasible design space (which contains points representing allowable probabilities of failure) depends on the optimization formulation as follows: boundary
a2bc1de2 corresponds to the multiconstraint formulation (a), and boundaries a1bc1de1 and a2bc2de2 correspond to the single-constraint optimization formulations (b) and (c), respectively. Consequently, if
formulation (a) or (b) is chosen, the optimum solution (the point at which W is tangent to the Pf
constraint) is the design point c1 (the collapse constraint is critical at the optimum). If formulation (c) is
chosen, the optimum solution is the design point c2. It is interesting to note that (1) both reliability
constraints are satisfied as equalities at two design points (b and d); (2) the points lying on the curve
bc2d have unacceptable reliabilities with regard to plastic collapse (Pf1 ≥ 10^-5); and (3) the points lying
on the curves a1b and de1 have unacceptable reliabilities with regard to unserviceability (Pf2 ≥ 10^-2).
Figures 16-2 and 16-3 show minimum weight solutions for various sets of specified risk levels against
plastic collapse and unserviceability, respectively. Figure 16-2 presents solutions for different values of
Pf1^0 when Pf2^0 is kept constant, whereas Fig. 16-3 presents solutions for different values of Pf2^0 when
Pf1^0 is kept constant.
Table 16-1 summarizes these solutions for each of the three different optimization formulations (a),
(b), and (c) presented previously. Table 16-1 indicates the possible fallacy in a single-constraint optimization approach. For example, the solutions marked with double-prime and prime superscripts have
unacceptable reliabilities with regard to unserviceability and plastic collapse, respectively.
Another possible RBSO formulation, similar to that given by Eqs. (16-6) and (16-7), is
[Figure 16-1 shows the mean design space (mean value of column strength MC versus mean value of beam strength MB) with the two reliability constraint curves for plastic collapse occurrence (Pf^0 = 10^-5) and first plastic hinge occurrence (Pf^0 = 10^-2), and the optimum solution W = 4268.4 kNm^2 at MB = 274 kNm, MC = 191 kNm.]
Figure 16-1. Design space with two reliability constraints: Minimum weight optimization. (Source: Frangopol,
D. M. [1985c]. Structural optimization using reliability concepts. Journal of Structural Engineering, ASCE 111(11):
2288-2301. Reprinted with permission from the American Society of Civil Engineers.)
[Figure 16-2 shows the plastic collapse occurrence constraint curves and the corresponding optimum solutions (W, MB, MC) for several values of the allowable probability of plastic collapse; e.g., W = 4697.2 kNm^2 (MB = 299.0 kNm, MC = 213.4 kNm) and W = 3853.1 kNm^2 (MB = 250.0 kNm, MC = 169.1 kNm).]
Figure 16-2. Effect of allowable probability level Pf1^0 on optimum solution. (Source: Frangopol, D. M. [1985c].
Structural optimization using reliability concepts. Journal of Structural Engineering, ASCE 111(11):2288-2301.
Reprinted with permission from the American Society of Civil Engineers.)
[Table 16-1 lists, for each figure, the specified probabilities of collapse (Pf1^0) and unserviceability (Pf2^0) and the optimum solutions under the three formulations: (a) Pf1 ≤ Pf1^0 and Pf2 ≤ Pf2^0, (b) Pf1 ≤ Pf1^0, and (c) Pf2 ≤ Pf2^0.]
"b, means point b, in Fig. 16-2; numbers in parentheses represent W (in kN·m2); prime and double prime superscripts indicate
unacceptable reliabilities with regard to collapse and unserviceability, respectively.
Source: Frangopol, D. M (1985c). Structural optimization using reliability concepts. Journal of Structural Engineering, ASCE
111(11):2288-2301. Reprinted with permission from the American Society of Civil Engineers.
[Figure 16-3 shows the first plastic hinge occurrence constraint curves and the corresponding optimum solutions (W, MB, MC) for several values of the allowable probability of unserviceability; e.g., W = 5304.7 kNm^2 (MB = 333 kNm, MC = 246.8 kNm) and W = 3500.0 kNm^2 (MB = 215 kNm, MC = 168.8 kNm).]
Figure 16-3. Effect of allowable probability level Pf2^0 on optimum solution. (Source: Frangopol, D. M. [1985c].
Structural optimization using reliability concepts. Journal of Structural Engineering, ASCE 111(11):2288-2301.
Reprinted with permission from the American Society of Civil Engineers.)
d. max: β1
   s.t.: W ≤ W^0
         MB ≥ 0, MC ≥ 0

where β1 = Φ⁻¹(1 - Pf1) is the collapse reliability index of the frame, and W^0 is the allowable value of W.
On the basis of formulation (d), which requires the maximization of the collapse reliability index β1
when the maximum structural weight W^0 is prescribed, Fig. 16-4 shows sensitivity results to determine
the effect on the optimum solution due to changes in W^0. An interesting conclusion is that the ratio of
the mean values of the beam-to-column optimum plastic moments, which represents the optimum distribution of material in the frame, is almost insensitive to changes in the prescribed weight W^0. It should
be pointed out that the calculations for Figs. 16-1 to 16-4 are based on computing the probability of plastic collapse from the upper
bound method given by Cornell (1967).
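The upper-bound idea can be sketched with the elementary first-order bounds on the probability of collapse of a structure with several collapse modes, max_i Pfi ≤ Pf ≤ Σ_i Pfi (the right side is the union bound). The mode probabilities below are hypothetical:

```python
# First-order (uni-modal) bounds on the system collapse probability from
# the probabilities of the individual collapse modes (illustrative values).
mode_pf = [2.0e-6, 5.0e-6, 1.0e-6]   # assumed collapse-mode probabilities

lower = max(mode_pf)   # attained when the modes are fully dependent
upper = sum(mode_pf)   # union bound; tight when the modes are disjoint
print(lower, upper)
```

Tighter bi-modal bounds (e.g., Ditlevsen, 1979) additionally require the joint probabilities of pairs of modes.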
a. min: W1
   s.t.: β1 = βbot.fl. ≥ β^0
         β2 = βtop.fl. ≥ β^0
         β3 = βshear1,unstiff. ≥ β^0
         β4 = βshear2,unstiff. ≥ β^0
         β6 = βconc.slab ≥ β^0
         g7 ≤ 0, g11 ≤ 0, g12 ≤ 0

where W1 is the weight of steel, β1 and β2 are reliability indices with respect to maximum stress in the
bottom and top flanges, respectively, β3 and β4 are reliability indices with respect to maximum shear
forces, β6 is the reliability index with respect to maximum compressive stress in the concrete slab, and
g7 ≤ 0, g11 ≤ 0, and g12 ≤ 0 are constraints on dimensions given in AASHTO (1983), namely, aspect
ratio, flange local buckling, and width of bottom flange to depth of the web ratio, respectively.
[Figure 16-4 shows the collapse reliability index β1 versus the ratio of mean strengths MB/MC for prescribed weights W^0 = 4250 to 5250 kNm^2 (loads P = 80 kN, H = 20 kN; V(P) = 0.20, V(H) = 0.15, V(M) = 0.10; ρ(MB, MC) = 1, ρ(P, H) = 0); the optimum ratio MB/MC stays between about 1.37 and 1.44 while β1 ranges from 4.24 to 5.29.]
Figure 16-4. Effect of allowable total structural weight W^0 on optimum solution. (Source: Frangopol, D. M.
[1985c]. Structural optimization using reliability concepts. Journal of Structural Engineering, ASCE 111(11):2288-
2301. Reprinted with permission from the American Society of Civil Engineers.)
Reliability-Based Optimum Structural Design 369
min: W₂
where W₂ is the weight of steel, β₅ is the reliability index with respect to maximum shear force, and
g₈ ≤ 0, g₉ ≤ 0, and g₁₀ ≤ 0 are aspect ratio constraints given in AASHTO (1983).
For solving the RBSO formulations (a) and (b), the deterministic optimization program ADS (Van-
derplaats, 1986) was linked to the structural reliability analysis program RELTRAN (Lee et al., 1993).
For comparison purposes, the deterministic numerical example considered by Dhillon and Kuo (1991)
is treated herein in an RBSO context. It consists of an 80-ft (24.38-m) single-span, composite-hybrid
plate girder. The five deterministic parameters (Y₁ = deck slab thickness; Y₃ = girder span length; Y₄ =
modular ratio; Y₁₂ = effective width of deck slab; and Y₁₃ = unit weight of steel) and the eight random
variables (Y₂ = yield strength of top flange steel; Y₅ = dead load excluding girder weight; Y₆ = maximum
live load moment including impact; Y₇ = maximum live load shear including impact; Y₈ = yield strength
of bottom flange steel; Y₉ = yield strength of web steel; Y₁₀ = compressive strength of concrete; and Y₁₁ =
superimposed dead load moment) are all defined in Table 16-2.
Table 16-2. Deterministic parameters and random variables

Notation   Deterministic parameter or random variableᵃ   Unit
Y₁         7.0                                           in.
Y₂         37.8; 0.10                                    ksi
Y₃         80.0                                          ft
Y₄         8.0
Y₅         0.865; 0.04                                   kips/ft
Y₆         1119; 0.214                                   kips·ft
Y₇         59.7; 0.214                                   kips
Y₈         105; 0.11                                     ksi
Y₉         37.8; 0.10                                    ksi
Y₁₀        3.4; 0.18                                     ksi
Y₁₁        137.3; 0.09                                   kips·ft
Y₁₂        84.0                                          in.
Y₁₃        0.49                                          kips/ft³
ᵃRandom variables are listed as mean; coefficient of variation.
For a fixed deck-slab thickness (Y₁ = constant), the objective function to be minimized is the weight of the
steel plate girder alone, expressed for the unstiffened case as
where X₁ is the width of the top flange, X₂ is the thickness of the top flange, X₃ is the thickness of the
web, X₄ is the depth of the web, X₅ is the width of the bottom flange, X₆ is the thickness of the bottom
flange, X₇ is the width of the stiffener, X₈ is the thickness of the stiffener, and X₉ is the stiffener spacing.
Consequently, the design variable vector x has six (i.e., X₁ to X₆) and nine (i.e., X₁ to X₉) components
for the unstiffened and stiffened cases, respectively. All the random variables (i.e., Y₂ and Y₅ to Y₁₁) are
considered independent and normally distributed. Furthermore, the allowable reliability indices β₁⁰,
β₂⁰, … are all considered to have the same value of 3.5.
Table 16-3 shows the results of the optimization process for both the unstiffened and stiffened cases.
The optimum solutions were obtained using the following options available in the computer program
ADS (Vanderplaats, 1986): sequential linear programming, the modified method of feasible direc-
tions, and search bounds followed by polynomial interpolation. The optimum solution for the unstiffened
case was obtained on a VAX 6000-510 in 2.83 sec (CPU time) after 6 iterations and 49 function
evaluations, and that of the stiffened case in 3.11 sec after 6 iterations and 50 function evaluations. The
discrete solutions (available steel dimensions close to the optimum solutions) are also shown in Table
16-3.
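The structure of such a weight-minimization under reliability constraints can be sketched in miniature. The toy model below (all parameter values hypothetical; this is not the ADS/RELTRAN computation) finds the smallest member area satisfying β ≥ 3.5 for a single linear limit state g = σy·A − S with independent normal variables:

```python
# Illustrative sketch only: minimize area A subject to beta(A) >= beta0
# for the linear limit state g = sigma_y*A - S (normal, independent).
import math

MU_SY, COV_SY = 21.0, 0.10   # yield stress: mean (kN/cm^2), c.o.v. (hypothetical)
MU_S, COV_S = 100.0, 0.20    # load effect: mean (kN), c.o.v. (hypothetical)
BETA0 = 3.5                  # allowable reliability index

def beta(area: float) -> float:
    """First-order reliability index of g = sigma_y*area - S."""
    mu_r, sd_r = MU_SY * area, COV_SY * MU_SY * area
    sd_s = COV_S * MU_S
    return (mu_r - MU_S) / math.hypot(sd_r, sd_s)

def min_area(beta0: float, lo: float = 1e-3, hi: float = 100.0) -> float:
    """Smallest area with beta(area) >= beta0, by bisection; beta is
    monotonically increasing in area over this range."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if beta(mid) >= beta0:
            hi = mid
        else:
            lo = mid
    return hi

a_opt = min_area(BETA0)
print(f"optimum area = {a_opt:.3f} cm^2, beta = {beta(a_opt):.3f}")
```

With several limit states, as in formulations (a) and (b), the active constraint at the optimum is the one whose β equals its allowable value.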
where gCi, gDj, and gYk are performance functions, and gCi ≤ 0, gDj ≤ 0, and gYk ≤ 0 are failure mode
expressions with regard to collapse, excessive deformation, and yielding occurrence, respectively. The
above three system failure probabilities are conservatively estimated by Ditlevsen's (1979) upper bound.
The eight random variables considered in this problem are the five plastic moment capacities (Mᵢ;
i = 1, 2, …, 5) and the three loads (Sᵢ; i = 1, 2, 3). They are assumed to have independent normal
distributions. The component plastic moment capacities are related to component areas by an approxi-
mation (Grierson and Cameron, 1984):
(i = 1, 2, …, 5)
where σᵢ is the random yielding stress of component i; it contributes randomness to the component
plastic moment capacity Mᵢ. The mean and the coefficient of variation of the yielding stress are assumed
to be 21 kN/cm² and 0.10, respectively. The loads S₁, S₂, and S₃ have mean values of 1.0, 1.5, and 2.5
kN, respectively; their coefficients of variation are 0.20, 0.15, and 0.15, respectively. Young's modulus
for all the members is assumed to be deterministic, E = 20,000 kN/cm². The 28 failure modes (mech-
anisms) for PfCOL identified by Cohn (1972) are used here. The linear elastic deformations at three
sections are considered for the evaluation of PfDFM, namely, section 7 (in the horizontal direction), and
[Figure 16-5 residue: frame with member areas A₁ to A₅ and plastic moments M₁ to M₅, loads S₁ (horizontal), S₂ and S₃ (vertical), numbered critical sections, and span dimensions 400 cm, 400 cm, 800 cm, 400 cm.]
Figure 16-5. One-story, two-bay frame: Geometry and loading. (Source: Fu, G., and D. M. Frangopol [1990b].
Reliability-based vector optimization of structural systems. Journal of Structural Engineering, ASCE 116(8):2141-
2161. Reprinted with permission from the American Society of Civil Engineers.)
sections 3 and 6 (both in the vertical direction). They are regarded as excessive if they exceed 4, 4, and 6
cm, respectively. The evaluation of PfYLD involves consideration of the elastic moments at 12 critical
sections (six column sections and six beam sections) of the frame shown in Fig. 16-5. They are either
at a connection or at a load-acting section. The system probability of first yielding occurrence is com-
puted considering that at least one of those 12 moments may exceed the elastic moment capacity of the
corresponding critical section. The elastic section moduli of component sections are approximately
related to the corresponding area Aᵢ as indicated by Fu and Frangopol (1988).
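The Ditlevsen (1979) narrow bounds used for these system failure probabilities can be sketched as follows. The mode probabilities and joint probabilities below are hypothetical, and the failure modes are assumed ordered by decreasing probability:

```python
# Sketch of Ditlevsen's (1979) narrow bounds for a series system, given
# component failure probabilities p[i] and joint probabilities pij[i][j].

def ditlevsen_bounds(p, pij):
    """Return (lower, upper) bounds on the series-system failure
    probability; components ordered by decreasing p."""
    lower = p[0]
    upper = p[0]
    for i in range(1, len(p)):
        # lower bound: add the part of p[i] not covered by earlier modes
        lower += max(0.0, p[i] - sum(pij[i][j] for j in range(i)))
        # upper bound: subtract the largest pairwise overlap
        upper += p[i] - max(pij[i][j] for j in range(i))
    return lower, upper

# Three hypothetical failure modes:
p = [1e-4, 5e-5, 2e-5]
pij = [[0.0] * 3 for _ in range(3)]
pij[1][0] = pij[0][1] = 1e-5
pij[2][0] = pij[0][2] = 4e-6
pij[2][1] = pij[1][2] = 2e-6

lo, up = ditlevsen_bounds(p, pij)
print(f"{lo:.3e} <= Pf,sys <= {up:.3e}")
```

The chapter's computations use only the upper bound, which is the conservative choice when the bound replaces the exact system probability in a constraint.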
According to the previously proposed solution search strategy (see Section 7.1), the three upper
bounds on system failure probabilities are chosen to be considered only in the following ranges:
Because excessive deformation and yielding initiation are related to disruption of the functional use
of the structure, larger probabilities of occurrence may be tolerated than in the case of collapse.
Three biobjective optimization problems are solved first in order to investigate the interactive influ-
ence of objective function pairs, namely,
The results of these biobjective optimizations are presented in Fu and Frangopol (1990b). Figure 16-6
presents the decision support space for the four-objective vector optimization problem associated with
the frame in Fig. 16-5. It exhibits a group of constant-volume surfaces f₁(A), subject to the three failure
probabilities f₂(A), f₃(A), and f₄(A). These surfaces display the interaction among the four objectives,
namely, V, PfCOL, PfDFM, and PfYLD. A point on these isovolume surfaces is defined by its coordinates
in the three-dimensional unreliability decision space. It can be seen in Fig. 16-6 that the collapse
probability is not interactive in most cases. Its effect increases when this probability is very low (close
to 10⁻⁹) and the other two failure probabilities are relatively high (say, ≥10⁻³). This can be observed
in the lower right-hand part of the decision space shown in Fig. 16-6. It is therefore possible to consider
only the other two failure probabilities in certain regions for final decision making. This observation
also indicates that PfCOL may not be the dominant factor to be taken into account in design. Considering
only collapse occurrence in the optimum design process may implicitly sacrifice reliability with respect
to excessive deformation or to other limit states of concern.
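The designs displayed in such a decision support space are the nondominated (Pareto) solutions, which can be extracted with a simple filter. The candidate designs below are hypothetical, scored on objectives (V, PfCOL, PfDFM, PfYLD), all to be minimized:

```python
# Sketch: extracting the Pareto-optimal (nondominated) designs from
# candidates scored on several objectives, all minimized.

def dominates(a, b):
    """True if design a is at least as good as b in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Keep only designs not dominated by any other candidate."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other is not d)]

# Hypothetical candidates: (volume, Pf_COL, Pf_DFM, Pf_YLD)
candidates = [
    (23.0, 1e-6, 1e-3, 1e-2),
    (24.5, 1e-7, 5e-4, 5e-3),
    (26.0, 1e-8, 1e-4, 1e-3),
    (27.2, 1e-8, 5e-4, 5e-3),  # dominated by the 26.0 design
]
front = pareto_front(candidates)
print(front)
```

Only the nondominated set needs to be presented to the decision maker; every dominated design can be improved in at least one objective at no cost in the others.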
x = A
Considering the symmetric vertical (vehicle) load S as shown in Fig. 16-7, a symmetric truss is required.
To illustrate the basic philosophy in a simpler way, the components are grouped into four deterministic
[Figure 16-6 residue: isovolume surfaces for V = 23.0 to 27.6 dm³ in the space of PfCOL, PfDFM, and PfYLD.]
Figure 16-6. Decision support space for four-objective RBSO: Three-dimensional unreliability space with isovolume
surfaces. (Source: Fu, G., and D. M. Frangopol [1990b]. Reliability-based vector optimization of structural
systems. Journal of Structural Engineering, ASCE 116(8):2141-2161. Reprinted with permission from the American
Society of Civil Engineers.)
Figure 16-7. Truss bridge: Geometry and loading. (Source: Frangopol and Fu [1989]).
design variables (i.e., cross-sectional areas) as follows: vertical members (A₁), lower chord members
(A₂), diagonal members (A₃), and upper chord members (A₄). They are also indicated in Fig. 16-7. On
the basis of this classification, the total material volume of the truss bridge system V (in cm³) is computed
as follows:
Table 16-4 lists the random variables (i.e., load S, and resistances R₁, R₂, R₃, R₄ of the four types of
components) considered in this problem and their parameters, where σ̄ᵢ is the mean value of the yield
stress limit of component i. The superscripts + and − are used to indicate the tension and compression
capacities, respectively. The latter are reduced considering buckling effects implicitly. All the random
variables are assumed to be normally distributed and independent of each other. The failure mode
expressions for system failure probabilities regarding intact and damaged states are given in Frangopol
and Fu (1989). The damaged states here refer to those states with one component lost (there are 16 such
situations). The failure modes are found by the incremental loading method (Moses, 1982) and the
failure probabilities are evaluated by Ditlevsen's (1979) upper bound.
The solution to this problem was obtained by following the multicriteria solving strategy described
in Section 7.1, as follows.
Step 1: Ranges on upper bounds for the failure probabilities to be considered are chosen as follows:
in which the superscripts l and u indicate lower and upper bounds, respectively, on the upper limit of the failure
probabilities.
Step 2: The following two biobjective optimization problems are solved:
Random variable   Mean     Coefficient of variation (%)
R₁                σ̄₁A₁     11
R₂                σ̄₂A₂     11
R₃                σ̄₃A₃     11
R₄                σ̄₄A₄     11
S                 320 kN   30
These problems were solved by using the ε-constraint method (Duckstein, 1984).
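The ε-constraint idea reduces a biobjective problem to a family of scalar problems of the form "min V subject to Pf ≤ ε", one per prescribed ε. A hedged sketch follows, using a hypothetical single-member model rather than the truss-bridge computation:

```python
# Sketch of the epsilon-constraint approach: each prescribed eps yields
# one Pareto point of the biobjective problem min (V, Pf).
import math
from statistics import NormalDist

ndist = NormalDist()

def pf(area: float) -> float:
    """Failure probability of g = sigma_y*area - S, with normal sigma_y
    and S (hypothetical parameters)."""
    mu_g = 21.0 * area - 100.0
    sd_g = math.hypot(2.1 * area, 20.0)
    return ndist.cdf(-mu_g / sd_g)

def min_area_for(eps: float, lo: float = 1e-3, hi: float = 100.0) -> float:
    """Smallest area with pf(area) <= eps; pf decreases with area here."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if pf(mid) <= eps:
            hi = mid
        else:
            lo = mid
    return hi

# One Pareto point per prescribed eps: (volume proxy = area, eps)
pareto = [(min_area_for(eps), eps) for eps in (1e-3, 1e-4, 1e-5, 1e-6)]
for a, eps in pareto:
    print(f"eps = {eps:.0e}: area = {a:.3f}, pf = {pf(a):.2e}")
```

Sweeping ε traces out the trade-off curve between material volume and failure probability, which is exactly the kind of decision support space shown in the figures of this section.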
Step 3: The following problem is solved, using different values of c and d to provide the decision support space:
min: V
(i = 1, 2, 3, 4)
[Figure 16-8 residue, panel (b): volume V (in cm³, up to 1000) and damaged-system failure probability PfDMG plotted against the intact-system failure probability in the range 10⁻¹⁰ to 10⁻⁴.]
Figure 16-8. Biobjective RBSO of a truss bridge for volume and failure probability of the intact system.
(a) Optimum areas and volume; (b) optimum objectives and associated failure probability of the damaged bridge.
(Source: Frangopol and Fu [1989]).
[Figure residue, presumably Figure 16-9: optimum areas A₁, A₂ and volume V (in cm³) plotted against an allowable failure probability in the range 10⁻³ to 10⁻⁰·⁵; a second panel shows volume and failure probabilities down to 10⁻¹³.]
Figure 16-10. Three-objective RBSO decision support space of a truss bridge. (Source: Frangopol and Fu [1989]).
[Figure 16-11 residue: horizontal load H and bay lengths L1 and L2.]
Figure 16-11. Two-story, four-bay frame: Geometry and loading. (Source: Frangopol and Iizuka [1991b]).
where x = (A₁, A₂, A₃, A₄)ᵀ, V is the volume of the frame, βYLD = Φ⁻¹(1 − PfYLD), R (a redundancy
factor) = βCOL − βYLD, and βCOL = Φ⁻¹(1 − PfCOL).
The above formulation considers explicitly the redundancy requirement in multicriteria optimization.
A larger redundancy means a higher probability of postyielding behavior. This requirement is usually
imposed by standard specifications for buildings, bridges, and offshore platforms.
The Pareto objectives obtained by using the three-step multicriteria solution strategy previously
described (see Section 7.1) are plotted in Fig. 16-12. They represent isovolume curves in the space of
yielding reliability index and redundancy factor. It is interesting to note that a small increase in volume
causes a significant improvement of redundancy under a constant yielding reliability index. Therefore,
the postyielding reliability of the frame may be considerably improved at only slight extra cost.
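The redundancy factor defined above, R = βCOL − βYLD, follows directly from the two system failure probabilities. A small sketch with hypothetical probabilities:

```python
# Sketch: redundancy factor R = beta_COL - beta_YLD, with
# beta = Phi^-1(1 - Pf); the probabilities are hypothetical.
from statistics import NormalDist

inv = NormalDist().inv_cdf  # standard normal inverse CDF

def redundancy_factor(pf_col: float, pf_yld: float) -> float:
    beta_col = inv(1.0 - pf_col)
    beta_yld = inv(1.0 - pf_yld)
    return beta_col - beta_yld

# A frame that yields relatively early but collapses rarely is redundant:
r = redundancy_factor(pf_col=1e-7, pf_yld=1e-3)
print(f"R = {r:.2f}")
```

A large R indicates substantial reserve between first yielding and collapse, which is the postyielding behavior the formulation rewards.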
9. CONCLUDING REMARKS
On the basis of the information presented in this chapter, the following concluding remarks can be
drawn.
1. Structural design based on both reliability and optimization represents the ultimate goal in design under
uncertainty.
[Figure 16-12 residue: isovolume curves plotted over yielding reliability index and redundancy factor axes, both ranging from 0.0 to 5.0, with decision space bounds indicated.]
2. Although structural design based on both reliability and optimization has not yet achieved the acceptance
level enjoyed by reliability-based design, a firm knowledge and experience base exists for the
further development and implementation of RBSO.
3. With the progress in computational design optimization methods, computational stochastic mechanics, and
reliability-based analysis and design of elements and structural systems, RBSO is now practically possible
for several types of applications.
4. Multicriteria, multilimit state RBSO for simple structures is practically feasible in terms of computational
effort. This also indicates that by increasing the efficiency of computer codes and using parallel processing,
multicriteria RBSO of large, complex systems will be possible.
5. Consequences of failure should be an explicit factor in the development of RBSO. It is not easy to evaluate
in monetary terms all these consequences, especially those related to loss of human life and occurrence of
injuries. Research in this area is needed.
6. Sensitivity information plays an important role in the development of RBSO. This information for both
objectives and constraints should be produced by RBSO computer codes. Research in this area is also
needed.
7. Structural systems should be optimized over their expected lifetime. Therefore, damage-tolerant RBSO
should be used in conjunction with time-dependent reliability-based design. Also, reliability-based opti-
mization of structural codes and specifications should be performed in connection with lifetime optimization
of structural systems. Again, more research in these areas is needed.
8. Multicriteria RBSO provides an excellent support tool for the decision maker. Robust decision-making
techniques in the face of conflicting objectives involving uncertainty have to be developed.
9. Finally, it should be emphasized that with the availability of easy-to-use and efficient RBSO computer codes
for sizing optimization of elements and structural systems, the next logical step will be the increase in both
the acceptance level and the implementation level of structural design based on reliability and optimization.
Further research is needed on RBSO in connection with shape, topology, and total optimization.²
REFERENCES
AASHTO (American Association for State Highway and Transportation Officials) (1983). Standard Specifications
for Highway Bridges, 13th edition. Washington, D.C.: American Association for State Highway and Trans-
portation Officials.
ANG, A. H.-S. (1989). Foreword. In: New Directions in Structural System Reliability. D. M. Frangopol, Ed. Boulder,
Colorado: University of Colorado, p. xi.
ANG, A. H.-S., and C. A. CORNELL (1974). Reliability bases of structural safety and design. Journal of the Struc-
tural Division, ASCE 100(9):1755-1769.
ANG, A. H.-S., and W. H. TANG (1984). Probability Concepts in Engineering Planning and Design, Vol. II. New
York: John Wiley & Sons.
ARORA, J. S. (1989a). Introduction to Optimum Design. New York: McGraw-Hill.
ARORA, J. S. (1989b). Computational design optimization: A review and future directions. In: New Directions in
Structural System Reliability. D. M. Frangopol, Ed. Boulder, Colorado: University of Colorado, pp. 29-44.
ARORA, J. S., D. F. HASKELL, and A. K. GOVILL (1980). Optimal design of large structures for damage tolerance.
AIAA Journal 18(5):563-570.
ASHLEY, H. (1982). On making things the best: Aeronautical uses of optimization. Journal of Aircraft 19:5-28.
ATREK, E., R. H. GALLAGHER, K. M. RAGSDELL, and O. C. ZIENKIEWICZ, Eds. (1984). New Directions in Optimum
Structural Design. New York: John Wiley & Sons.
²Some discussion on RBSO may also be found in Chapters 8 and 25 of this book.
AUGUSTI, G., A. BARATTA, and F. CASCIATI (1984). Probabilistic Methods in Structural Engineering. London:
Chapman and Hall.
BJERAGER, P. (1989). On computation methods for structural reliability analysis. In: New Directions in Structural
System Reliability. D. M. Frangopol, Ed. Boulder, Colorado: University of Colorado, pp. 52-67.
BJERAGER, P., and S. KRENK (1989). Parametric sensitivity in first order reliability theory. Journal of Engineering
Mechanics, ASCE 115(7):1577-1582.
BORKOWSKI, A., and S. JENDO (1990). Structural Optimization: Mathematical Programming, Vol. 2. M. Save and
W. Prager, Eds. New York: Plenum.
BOURGUND, U. (1987). Reliability-based optimization of structural systems. In: Stochastic Structural Mechanics,
Lecture Notes in Engineering, Vol. 31. Y. K. Lin and G. I. Schueller, Eds. Berlin: Springer-Verlag, pp. 52-
65.
CARMICHAEL, D. G. (1981). Probabilistic optimal design of framed structures. Computer Aided Design 13:261-
264.
CASCIATI, F., and L. FARAVELLI (1985). Structural reliability and structural design optimization. In: Proceedings
of the 4th International Conference on Structural Safety and Reliability (Kobe, Japan), Vol. 3. I. Konishi,
A. H.-S. Ang, and M. Shinozuka, Eds. Kyoto, Japan: Shinko Printing Company, pp. 261-264.
COHN, M. Z. (1972). Analysis and Design of Inelastic Structures: Problems, Vol. 2. Waterloo, Ontario, Canada:
Solid Mechanics Division, University of Waterloo.
CORNELL, C. A. (1967). Bounds on the reliability of structural systems. Journal of the Structural Division, ASCE
93(1):171-200.
CORNELL, C. A. (1969a). A probability-based structural code. Journal of the American Concrete Institute 66(12):
974-985.
CORNELL, C. A. (1969b). Structural safety specifications based on second moment reliability analysis. In: Final
Report of the Symposium on Concepts of Safety and Methods of Design. London, England: International
Association for Bridge and Structural Engineering, pp. 235-245.
DER KIUREGHIAN, A., and P. THOFT-CHRISTENSEN, Eds. (1991). Reliability and Optimization of Structural Systems
'90. (Proceedings of the 3rd IFIP WG 7.5 Conference on Reliability and Optimization of Structural Systems).
Lecture Notes in Engineering, Vol. 61. Berlin: Springer-Verlag.
DHILLON, B. S., and C.-H. KUO (1991). Optimum design of composite hybrid plate girders. Journal of Structural
Engineering, ASCE 117(7):2088-2098.
DITLEVSEN, O. (1973). Structural Reliability Analysis and the Invariance Problem. Copenhagen, Denmark: De-
partment of Civil Engineering, Danish Engineering Academy.
DITLEVSEN, O. (1979). Narrow reliability bounds for structural systems. Journal of Structural Mechanics 7(4):453-
472.
DITLEVSEN, O. (1981). Uncertainty Modeling with Applications to Civil Engineering Systems. New York: McGraw-
Hill.
DITLEVSEN, O., and P. BJERAGER (1986). Methods of structural systems reliability. Structural Safety 3:195-229.
DUCKSTEIN, L. (1984). Multiobjective optimization in structural design: The model choice problem. In: New Di-
rections in Optimum Structural Design. E. Atrek, et al., Eds. Chichester, England: John Wiley & Sons, pp.
459-481.
ENEVOLDSEN, I. (1991). Reliability-Based Structural Optimization. Ph.D. Thesis. Aalborg, Denmark: Department
of Building Technology and Structural Engineering, Aalborg University.
ENEVOLDSEN, I., and J. D. SØRENSEN (1990). Reliability-Based Optimization of Series Systems of Parallel Systems.
Structural Reliability Theory Paper No. 82. Aalborg, Denmark: Aalborg University.
ENEVOLDSEN, I., J. D. SØRENSEN, and P. THOFT-CHRISTENSEN (1989). Shape optimization of mono-tower offshore
platform. In: Proceedings of the International Conference on Computer Aided Design of Structures: Appli-
cations. C. A. Brebbia and S. Hernandez, Eds. Southampton, England: Computational Mechanics
Publications, pp. 297-308.
382 Reliability-Based Optimum Structural Design
ENEVOLDSEN, I., J. D. SØRENSEN, and G. SIGURDSSON (1990). Reliability-Based Shape Optimization Using Sto-
chastic Finite Element Methods. Structural Reliability Theory Paper No. 73. Aalborg, Denmark: Institute of
Building Technology and Structural Engineering, Aalborg University.
ESCHENAUER, H. A., J. KOSKI, and A. OSYCZKA, Eds. (1990). Multicriteria Design Optimization: Procedures and
Applications. Berlin: Springer-Verlag.
FARKAS, J. (1984). Optimum Design of Metal Structures. Chichester, England: Ellis Horwood.
FENG, Y. S., and F. MOSES (1986a). A method of structural optimization based on structural system reliability.
Journal of Structural Mechanics 14(4):437-453.
FENG, Y. S., and F. MOSES (1986b). Optimum design, redundancy and reliability of structural systems. Computers
and Structures 24(2):239-251.
FERRY-BORGES, J. (1954). O Dimensionamento de Estruturas. Lisbon, Portugal: Ministry of Public Works, National
Laboratory of Civil Engineering.
FORSSELL, C. (1924). Ekonomi och byggnadsväsen (economy and construction). Sunt Förnuft 4:74-77 (in Swedish).
(Translated to English in excerpts in Lind, N. C. [1970]. Structural Reliability and Codified Design. Waterloo,
Ontario, Canada: Solid Mechanics Division, University of Waterloo.)
FRANGOPOL, D. M. (1984a). A reliability-based optimization technique for automatic plastic design. Computer
Methods in Applied Mechanics and Engineering 44:105-117.
FRANGOPOL, D. M. (1984b). Interactive reliability-based structural optimization. Computers and Structures 19(4):
559-563.
FRANGOPOL, D. M. (1985a). Sensitivity of reliability-based optimum design. Journal of Structural Engineering,
ASCE 111(8):1703-1721.
FRANGOPOL, D. M. (1985b). Multicriteria reliability-based optimum design. Structural Safety 3(1):23-28.
FRANGOPOL, D. M. (1985c). Structural optimization using reliability concepts. Journal of Structural Engineering,
ASCE 111(11):2288-2301.
FRANGOPOL, D. M. (1985d). Towards reliability-based computer aided optimization of reinforced concrete struc-
tures. Engineering Optimization 8(4):301-313.
FRANGOPOL, D. M. (1986a). Computer-automated design of structural systems under reliability-based performance
constraints. Engineering Computations 3(2):109-115.
FRANGOPOL, D. M. (1986b). Structural optimization under conditions of uncertainty, with reference to serviceability
and ultimate limit states. In: Recent Developments in Structural Optimization. F. Y. Cheng, Ed. New York:
American Society of Civil Engineers, pp. 54-71.
FRANGOPOL, D. M. (1986c). Computer-automated sensitivity analysis in reliability-based plastic design. Computers
and Structures 22(1):63-75.
FRANGOPOL, D. M. (1987). Unified approach to reliability-based structural optimization. In: Dynamics of Structures.
J. M. Roesset, Ed. New York: American Society of Civil Engineers, pp. 156-167.
FRANGOPOL, D. M., Ed. (1989). New Directions in Structural System Reliability. Boulder, Colorado: University of
Colorado.
FRANGOPOL, D. M. (1991). Reliability-based optimization research at the University of Colorado: A brief retro-
spective. In: Progress in Structural Engineering. D. E. Grierson, A. Franchi, and P. Riva, Eds. Dordrecht,
The Netherlands: Kluwer Academic, pp. 481-491.
FRANGOPOL, D. M. (1993). How to include reliability constraints in structural optimization. In: Structural Engi-
neering in Natural Hazards Mitigation, Vol. 2. A. H.-S. Ang and R. Villaverde, Eds. New York: American
Society of Civil Engineers, pp. 1632-1637.
FRANGOPOL, D. M., and R. B. COROTIS, Eds. (1990). System Reliability in Structural Analysis, Design and Opti-
mization [Special Issue of Structural Safety (Journal) 7(2-4)].
FRANGOPOL, D. M., and G. Fu (1989). Optimization of structural systems under reserve and residual reliability
requirements. In: Reliability and Optimization of Structural Systems '88 (Lecture Notes in Engineering, Vol.
48). P. Thoft-Christensen, Ed. Berlin, Germany: Springer-Verlag, pp. 135-145.
Reliability-Based Optimum Structural Design 383
FRANGOPOL, D. M., and G. Fu (1990). Limit states reliability interaction in optimum design of structural systems.
In: Structural Safety and Reliability, Vol. III. A. H.-S. Ang, M. Shinozuka, and G. I. Schueller, Eds. New
York: American Society of Civil Engineers, pp. 1879-1886.
FRANGOPOL, D. M., and M. IIZUKA (1991a). Multiobjective decision support spaces for optimum design of non-
deterministic structural systems. In: Probabilistic Safety Assessment and Management, Vol. 2. G. Apostolakis,
Ed. New York: Elsevier, pp. 977-982.
FRANGOPOL, D. M., and M. IIZUKA (1991b). Pareto optimum solutions for nondeterministic systems. In: Proceed-
ings of the 6th International Conference on Applications of Statistics and Probability in Civil Engineering
(ICASP6), Vol. 1. L. Esteva and S. E. Ruiz, Eds. pp. 216-223, Mexico City, Mexico.
FRANGOPOL, D. M., and M. IIZUKA (1992a). Structural system design under uncertainty via Pareto optimization.
In: Probabilistic Mechanics and Structural and Geotechnical Reliability. Y. K. Lin, Ed. New York: American
Society of Civil Engineers, pp. 551-554.
FRANGOPOL, D. M., and M. IIZUKA (1992b). Probability-based structural system design using multicriteria optim-
ization. In: Proceedings of the 4th AIAA/USAF/NASA/OAI Symposium on Multidisciplinary Analysis and
Optimization. AIAA-92-4788-CP Paper, Part 2, pp. 794-798, Cleveland, Ohio.
FRANGOPOL, D. M., and M. KLISINSKI (1992). Design for safety, serviceability and damage tolerability. In: De-
signing Concrete Structures for Serviceability and Safety. SP 133-12. E. G. Nawy and A. Scanlon, Eds.
Detroit, Michigan: American Concrete Institute, pp. 225-254.
FRANGOPOL, D. M., and F. MOSES (1994). Reliability-based structural optimization. In: Advances in Design Op-
timization. H. Adeli, Ed. London: Chapman and Hall (in press), pp. 492-570.
FRANGOPOL, D. M., and R. NAKIB (1990). Examples of system optimization and reliability in bridge design. In:
Structural Safety and Reliability, Vol. II. A. H.-S. Ang, M. Shinozuka, and G. I. Schueller, Eds. New York:
American Society of Civil Engineers, pp. 871-878.
FRANGOPOL, D. M., and J. RONDAL (1976). Considerations on optimum combination of safety and economy. In:
Final Report of the 10th Congress of the International Association for Bridge and Structural Engineering.
Zurich, Switzerland: International Association for Bridge and Structural Engineering, pp. 45-48.
FRANGOPOL, D. M., M. KLISINSKI, and M. IIZUKA (1991). Computational experience with damage-tolerant opti-
mization of structural systems. In: Proceedings of the 1st International Conference on Computational Sto-
chastic Mechanics. P. D. Spanos and C. A. Brebbia, Eds. Southampton, England: Computational Mechanics
Publications / London: Elsevier Applied Science, pp. 199-210.
FREUDENTHAL, A. M. (1956). Safety and the probability of structural failure. Transactions, ASCE 121:1337-1375.
FREUDENTHAL, A. M., J. M. GARRELTS, and M. SHINOZUKA (1966). The analysis of structural safety. Journal of
the Structural Division, ASCE 92(1):267-325.
Fu, G., and D. M. FRANGOPOL (1988). Reliability-Based Multiobjective Structural Optimization. Phase 2: Appli-
cations to Frame Systems. Structural Research Series, No. 88-01. Boulder, Colorado: Department of Civil
Engineering, University of Colorado.
Fu, G., and D. M. FRANGOPOL (1990a). Balancing weight, system reliability and redundancy in a multiobjective
optimization framework. Structural Safety 7(2-4):165-175.
Fu, G., and D. M. FRANGOPOL (1990b). Reliability-based vector optimization of structural systems. Journal of
Structural Engineering, ASCE 116(8):2141-2161.
FURUTA, H. (1980). Fundamental Study on Geometrical Configuration and Reliability of Framed Structures Used
for Bridges. Ph.D. Thesis. Kyoto, Japan: Department of Civil Engineering, Kyoto University.
GELLATLY, R. A., and R. H. GALLAGHER (1966). A procedure for automated minimum weight structural design.
Part I: Theoretical bases; Part II: Applications. Aeronautics Quarterly 17(3):216-230 and 17(4):332-342.
GILL, P. E., W. MURRAY, and M. H. WRIGHT (1981). Practical Optimization. New York: Academic Press.
GRIERSON, D. E. (1983). The intelligent use of structural analysis. Perspectives in Computing 3(4):32-39.
GRIERSON, D. E., and C. E. CAMERON (1984). Computer-Automated Synthesis of Building Frameworks. Paper No.
189. Waterloo, Ontario, Canada: Solid Mechanics Division, University of Waterloo.
384 Reliability-Based Optimum Structural Design
GRIMMELT, M., and G. I. SCHUELLER (1982). Benchmark study on methods to determine collapse failure prob-
abilities of redundant structures. Structural Safety 1(2):93-106.
HAFTKA, R. T., and M. P. KAMAT (1985). Elements of Structural Optimization. Amsterdam: Martinus Nijhoff.
HASOFER, A. M., and N. C. LIND (1974). Exact and invariant second moment code format. Journal of the Engi-
neering Mechanics Division, ASCE 100(1):111-121.
HAUG, E. J., and J. S. ARORA (1979). Applied Optimal Design: Mechanical and Structural Systems. New York:
Wiley-Interscience.
HENDAWI, S., and D. M. FRANGOPOL (1993). Reliability-based optimization of composite-hybrid plate girders. In:
Proceedings of the 6th International Conference on Structural Safety and Reliability. ICOSSAR '93,
Innsbruck, Austria. G. I. Schueller, M. Shinozuka, J. T. P. Yao, and A. A. Balkema, Eds. (in press).
HILTON, H. H., and M. FEIGEN (1960). Minimum weight analysis based on structural reliability. Journal of the
Aerospace Sciences 27:641-653.
IIZUKA, M. (1991). Time Invariant and Time Variant Reliability Analysis and Optimization of Structural Systems.
Ph.D. Thesis. Boulder, Colorado: Department of Civil Engineering, University of Colorado.
ISHIKAWA, N., and M. IIZUKA (1987). Optimum reliability-based design of large framed structures. Engineering
Optimization 10(4):245-261.
JOHNSON, A. I. (1953). Strength, Safety and Economical Dimensions of Structures. Stockholm, Sweden: Division
of Building Statistics and Structural Engineering, Royal Institute of Technology.
KALABA, R. E. (1962). Design of minimum weight structures given reliability and cost. Journal of the Aerospace
Sciences 29:355-356.
KIM, S. H., and Y. K. WEN (1987). Reliability-Based Structural Optimization under Stochastic Time Varying Loads.
Civil Engineering Studies, Structural Research Series No. 533. Urbana, Illinois: University of Illinois.
KIM, S. H., and Y. K. WEN (1990). Optimization of structures under stochastic loads. Structural Safety 7(2-4):
177-190.
KIRSCH, U. (1981). Optimum Structural Design. New York: McGraw-Hill.
KOSKI, J. (1984). Multicriterion optimization in structural design. In: New Directions in Optimum Structural Design.
E. Atrek, R. H. Gallagher, K. M. Ragsdell, and O. C. Zienkiewicz, Eds. Chichester, England: John Wiley
& Sons, pp. 483-503.
LEE, Y.-H., S. HENDAWI, and D. M. FRANGOPOL (1993). RELTRAN: A Structural Reliability Analysis Program:
Version 2.0. Report No. CU/SR-93/6. Boulder, Colorado: Department of Civil Engineering, University of
Colorado.
LEV, O. E., Ed. (1981). Structural Optimization: Recent Developments and Applications. New York: American
Society of Civil Engineers.
LEVY, R., and O. LEV (1987). Recent developments in structural optimization. Journal of Structural Engineering,
ASCE 113(9}:1939-1962.
LIND, N. C. (1971). Consistent partial safety factors. Journal of the Structural Division, ASCE 97(6}:1651-1670.
LIU, Y., and F. MOSES (1991). Bridge design with reserve and residual reliability constraints. Structural Safety
11(1}:29-42.
MADSEN, H. 0., S. KRENK, and N. C. LIND (1986). Methods of Structural Safety. Englewood Cliffs, New Jersey:
Prentice-Hall.
MAHAoEVAN, S., and A. HALDAR (1991). Reliability-based optimization using SFEM. In: Reliability and Opti-
mization of Structural Systems '90 (Lecture Notes in Engineering, Vol. 61), A. Der Kiureghian and P.
Thoft-Christensen, Eds. Berlin: Springer-Verlag, pp. 241-250.
MAu, S. T., and R. G. SEXSMIlH (1972). Minimum expected cost optimization. Journal of the Structural Division,
ASCE 98(9}:2043-2058.
MELCHERS, R. E. (1987). Structural Reliability Analysis and Prediction. Chichester, England: Ellis Horwood.
MOSES, F. (1969). Approaches to structural reliability and optimization. In: An Introduction to Structural Optimi-
Reliability-Based Optimum Structural Design 385
zation. M. Z. Cohn, Ed. Waterloo, Ontario, Canada: Solid Mechanics Division, University of Waterloo, pp.
81-120.
MOSES, F. (1973). Design for reliability-concepts and applications. In: Optimum Structural Design. R. H. Gal-
lagher and o. C. Zienkiewicz, Eds. London: John Wiley & Sons, pp. 241-265.
MOSES, F. (1974). Reliability of structural systems. Journal of the Structural Division, ASCE 100(9):1813-1820.
MOSES, F. (1977). Structural system reliability and optimization. Computers and Structures 7:283-290.
MOSES, F. (1979). Sensitivity studies in structural reliability. In: Structural Reliability and Codified Design. Wa-
terloo, Ontario, Canada: Solid Mechanics Division, University of Waterloo, pp. 1-17.
MOSES, F. (1982). System reliability developments in structural engineering. Structural Safety 1(1):3-13.
MOSES, F., and D. E. KINSER (1967). Optimum structural design with failure probability constraints. AlAA Journal
5(6): 1152-1158.
MOSES, F., and J. D. STEVENSON (1970). Reliability-based structural design. Journal of the Structural Division,
ASCE 96(2):221-244.
MOTA SOARES, C. A, Ed. (1987). Computer-aided optimal design: Structural and mechanical systems. In: Pro-
ceedings of the NATO Advanced Study Institute. Series F, Vol. 27. New York: Springer-Verlag.
MUROTSU, Y., and S. SHAO (1990). Optimum shape design of truss structures based on reliability. Structural
Optimization 2(2):65-76.
MUROTSU, Y., M. MUG, and S. SHAO (1992). Optimal configuration for fiber reinforced composites under uncer-
tainties of material properties and loadings. In: Probabilistic Mechanics and Structural and Geotechnical
Reliability. Y. K. Lin, Ed. New York: American Society of Civil Engineers, pp. 547-550.
NAKIB, R., and D. M. FRANGOPOL (1990a). RSBA and RSBA-OPT: 1\vo computer programs for structural system
reliability analysis and optimization. Computers and Structures 36(1):13-27.
NAKIB, R, and D. M. FRANGOPOL (1990b). Reliability-based structural optimization using interactive graphics.
Computers and Structures 37(1):27-34.
OSYCZKA, A (1984). Multicriterion Optimization in Engineering. Chichester, England: Ellis Horwood.
PAEZ, A, and E. TORROJA (1959). La Determination del Coefficiente de Seguridad en las Distintas Obras. Madrid,
Spain: Instituto Technic de la Construcion y del Cemento.
PARIMI, S. R, and M. Z. COHN (1978). Optimum solutions in probabilistic structural design. Journal of Applied
Mechanics 2(1):47-92.
RACKWITZ, R, and R CUNTZE (1987). Formulations of reliability-oriented optimization. Engineering Optimization
11(1,2):69-76.
RACKWITZ, R, and P. THOFf-CmuSTENSEN, Eds. (1992). Reliability and optimization of structural systems '9l.
In: Proceedings of the 4th IFIP WG 7.5 Conference on Reliability and Optimization of Structural Systems.
Lecture Notes in Engineering, Vol. 76. Berlin: Springer-Verlag.
RAo, S. S. (1984). Optimization: Theory and Applications, 2nd ed. New York: John Wiley & Sons.
REKLAlTIs, G. v., A RAVINDRAN, and K. M. RAGSDELL (1983). Engineering Optimization. New York: Wiley-
Interscience.
ROJIANI, K. B., and G. L. BAILEY (1984). Reliability-based optimum design of steel structures. In: New Directions
in Optimum Structural Design. E. Atrek, et al., Eds. Chichester, England: John Wiley & Sons, pp. 332-457.
ROSENBLUETH, E. (1986). Optimum reliabilities and optimum design. Structural Safety 3(1):69-83.
ROSENBLUETH, E., and L. ESTEVA (1972). Reliability basis for some Mexican codes. In: Probabilistic Design of
Reinforced Concrete Buildings. ACI Publication SP-31. Detroit, Michigan: American Concrete Institute, pp.
1-41.
ROSENBLUETH, E., and E. MENDOZA (1971). Reliability optimization in isostatic structures. Journal of the Engi-
neering Mechanics Division, ASCE 97(6):1625-1640.
SAVE, M., and W. PRAGER (1985). Structural Optimization-Optimality Criteria, Vol. 1. New York: Plenum.
SCHMIT, L. A (1960). Structural design by systematic synthesis. In: Proceedings of the 2nd ASCE Conference on
Electronic Computation. pp. 105-122, Pittsburgh, Pennsylvania.
386 Reliability-Based Optimum Structural Design
SCHMIT, L. A (1984). Structural optimization-some key ideas and insights. In: New Directions in Optimum
Structural Design. E. Atrek, et al., Eds. Chichester, England: John Wiley & Sons, pp. 1-45.
SCHMIT, L. A, and K. J. CHENG (1982). Optimum design sensitivity based on approximation concepts and dual
methods. In: Proceedings of the 23rd AIAA/ASME/ASCE/AHS Structures, Structural Dynamics and Materials
Conference. AIAA Paper No. 82-0713-CP, New Orleans, Louisiana.
SHAO, S. (1991). Reliability-Based Shape Optimization of Structural and Material Systems. Ph.D. Thesis. Osaka,
Japan: Division of Engineering, University of Osaka Prefecture.
SHINOZUKA, M. (1983). Basic analysis of structural safety. Journal of Structural Engineering, ASCE 109(3):721-
740.
SOBIESZCZANSKI-SOBIESKI, l, J. F. BARTHELEMY, and K. M. RILEY (1982). Sensitivity of optimum solutions to
problem parameters, AIAA Journal 20(9):1291-1299.
SOLTANI, M., and R. B. COROTIS (1988). Failure cost design of structural systems. Structural Safety 5:238-252.
S~RENSEN, J. D. (1986). Reliability-Based Optimization of Structural Elements. Structural Reliability Theory Paper
No. 18. Aalborg, Denmark: Institute of Building Technology and Structural Engineering, Aalborg University.
S~RENSEN, J. D. (1987). Reliability-Based Optimization of Structural Systems. Structural Reliability Theory Paper
No. 32. Aalborg, Denmark: Institute of Building Technology and Structural Engineering, Aalborg University.
S~RENSEN, J. D. (1988). Optimal Design with Reliability Constraints. Structural Reliability Theory Paper No. 45.
Aalborg, Denmark: Institute of Building Technology and Structural Engineering, Aalborg University.
S~RENSEN, J. D., and I. ENEvoLDsEN (1989). Sensitivity Analysis in Reliability-Based Shape Optimization. Struc-
tural Reliability Theory Paper No. 69. Aalborg, Denmark: Institute of Building Technology and Structural
Engineering, Aalborg University.
S~RENSEN, J. D., and P. THOFf-CHRISTENSEN (1987). Integrated Reliability-Based Optimal Design of Structures.
Structural Reliability Theory Paper No. 29. Aalborg, Denmark: Institute of Building Technology and Struc-
tural Engineering, Aalborg University.
SPILLERS, W. R. (1975). Iterative Structural Design. Amsterdam: North-Holland.
SURAHMAN, A, and K. B. ROJIANI (1983). Reliability-based optimum design of concrete frames. Journal of Struc-
tural Engineering, ASCE 109(3):741-757.
SWITZKY, H. (1964). Minimum weight design with structural reliability. In: Proceedings of the 5th AIAA Annual
Structures and Materials Conference, Palm Springs, California, pp. 316-322.
TAO, Z., J. H. ELLIS, and R. B. COROTIS (1992). Markov decision processes in structural optimization. In: Prob-
abilistic Mechanics and Structural and Geotechnical Reliability. Y. K. Lin, Ed. New York: American Society
of Civil Engineers, pp. 539-542.
TEMPLEMAN, A B. (1983). Optimization methods in structural design practice. Journal of Structural Engineering,
ASCE 109(12):2420-2433.
THoFf-CHRISTENSEN, P., Ed. (1987a). Reliability and optimization of structural systems. In: Proceedings of the 1st
IFIP W7.5 Working Conference on Reliability and Optimization of Structural Systems. Lecture Notes in
Engineering, Vol. 33. Berlin: Springer-Verlag.
THOFf-CHR!STENSEN, P. (1987b). Application of Optimization Methods in Structural Systems Reliability Theory.
Structural Reliability Theory Paper No. 33. Aalborg, Denmark: Institute of Building Technology and Struc-
tural Engineering, Aalborg University.
THoFf-CHRISTENSEN, P., Ed. (1989). Reliability and optimization of structural systems '88. In: Proceedings of the
2nd IFIP WG 7.5 Conference on Reliability and Optimization of Structural Systems. Lecture Notes in En-
gineering, Vol. 48. Berlin: Springer-Verlag.
THoFf-CHRISTENSEN, P. (1991). On reliability-based structural optimization. In: Reliability and Optimization of
Structural Systems '90 (Lecture Notes in Engineering, Vol. 61), A Der Kiureghian and P. Thoft-Christensen,
Eds. Berlin: Springer-Verlag, pp. 387-402.
THoFf-CHRISTENSEN, P. (1992). Risk-based structural optimization. In: Probabilistic Mechanics and Structural and
Geotechnical Reliability. Y. K. Lin, Ed. New York: American Society of Civil Engineers, pp. 535-538.
Reliability-Based Optimum Structural Design 387
THOFf-CmuSTENSEN, P., and M. J. BAKER (1982). Structural Reliability Theory and Its Applications. Berlin:
Springer-Verlag.
THOFf-CmuSTENSEN, P., and Y. MUROTSU (1986). Applications of Structural Systems Reliability Theory. Berlin:
Springer-Verlag.
THOFf-CmuSTENSEN, P., and J. D. S0RENSEN (1987). Optimal strategy for inspection and repair of structural
systems. Civil Engineering Systems 4:94-100.
TuRKSTRA, C. 1. (1970). Theory of Structural Design Decisions. SM Study No.2. N. C. Lind, Ed. Waterloo,
Ontario, Canada: Solid Mechanics Division, University of Waterloo.
VANDERPlAATS, G. N. (1982). Structural optimization: Past, present, and future. AIAA 10urnaI20:992-1000.
VANDERPlAATS, G. N. (1984a). Numerical Optimization Techniques for Engineering Design. New York: McGraw-
Hill.
VANDERPlAATS, G. N. (1984b). Efficient calculation of optimum design sensitivity. In: Proceedings of the 25th
AIAA/ASME/ASCE/AHS Structures, Structural Dynamics and Materials Conference, Palm Springs, Cali-
fornia. AIAA Paper No. 84-0855-CP, 1. pp. 34-40.
VANDERPlAATS, G. N. (1986). ADS-a Fortran Program for Automated Design Synthesis: Version 1:10. Santa
Barbara, California: Engineering Design Optimization.
VANMARCKE, E. (1971). Matrix formulation of reliability analysis and reliability-based design. Computers and
Structures 3:757-770.
VANMARCKE, E. (1984). Random Fields: Analysis and Synthesis. Cambridge, Massachusetts: MIT Press.
YAO, J. T. P. (1985). Safety and Reliability of Existing Structures. Boston: Pitman.
17

RISK-BASED INSPECTION AND MAINTENANCE
1. INTRODUCTION
Effective inspection or maintenance programs can play a significant role in minimizing equipment and
structural failures. All aspects of inspection/maintenance, that is, scope, method, timing, and acceptance
criteria, can significantly affect the likelihood of component failure. Most of the present-day in-service
inspection and maintenance requirements are based on prior experience and engineering judgment. At
best, some include an implicit consideration of risk (probability of failure times consequence).
Probabilistic structural mechanics has been used as a tool for assessing the reliability of structures
and components in many industries. Structural reliability analysis techniques have also been used to
develop in-service inspection criteria for structural components. A number of such applications are
described in some of the other chapters in this book and these applications are summarized here in
Section 3. These methods determine the inspection criteria on the basis of the reliability requirement
of individual components. Engineering systems and industrial plants consist of a number of components.
Therefore, in recent years, component in-service inspection criteria have been developed on the basis
of system or plant risk. Probabilistic structural mechanics (PSM) and plant-level probabilistic risk assessment (PRA) techniques are used in combination to establish cost-effective inspection criteria.¹
Probabilistic risk assessment is an evolving technique for quantifying the risk of adverse effects such
as accidents in nuclear power plants. Probabilistic risk assessment is the process of scientifically eval-
uating the probability and impact of an adverse effect. This impact may be in the form of hazardous
material dispersion, shock wave propagation, health effects, and/or environmental damage. The proba-
bility of an adverse effect is generally determined using "logic" trees (e.g., fault trees) and branching
decision networks (e.g., event trees). Probabilistic risk assessment methodology has been applied primarily to facility-wide, or macroscopic, risk assessments, as compared to the assessment of individual components of a system, or microscopic assessments. Probabilistic risk assessment
technology, which has been used extensively in the nuclear industry following the Reactor Safety Study
¹Probabilistic structural mechanics techniques are discussed in Chapters 2 to 8 of this handbook and PRA techniques are discussed in Chapter 9.
(U.S. Nuclear Regulatory Commission [NRC], 1975), has been applied successfully in several other
industries also (American Society of Mechanical Engineers, 1984).
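The "logic tree" arithmetic mentioned above can be sketched in a few lines. The gate structure, event names, and probabilities below are invented for illustration and assume independent basic events; they are not drawn from any actual PRA.

```python
# Hypothetical fault-tree sketch: propagate basic-event probabilities through
# AND/OR gates to a top event. Gate structure, event names, and numbers are
# invented for illustration; basic events are assumed independent.

def gate_and(*probs):
    # AND gate: the output fails only if every input fails.
    p = 1.0
    for q in probs:
        p *= q
    return p

def gate_or(*probs):
    # OR gate: the output fails if any input fails (complement of "none fail").
    p_none = 1.0
    for q in probs:
        p_none *= 1.0 - q
    return 1.0 - p_none

# Basic-event failure probabilities (hypothetical)
p_pump_fails = 1e-3
p_valve_sticks = 5e-4
p_power_lost = 2e-4
p_backup_fails = 1e-2

# Top event: cooling is lost if (pump fails OR valve sticks)
# AND (power is lost OR backup fails) -- a purely illustrative structure.
p_top = gate_and(gate_or(p_pump_fails, p_valve_sticks),
                 gate_or(p_power_lost, p_backup_fails))
print(f"top-event probability = {p_top:.2e}")
```

Event trees are handled analogously, with branch probabilities multiplied along each path from the initiating event to an end state.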
Efforts are well underway in North America, Europe, and Japan to develop and introduce such risk-
based methods or related approaches as a technical basis for evaluating and establishing in-service
inspection programs for structural components in nuclear power plants and other facilities. In the United
States, a multidisciplinary ASME Research Task Force on Risk-Based Inspection Guidelines has been
addressing the general question of how to incorporate risk considerations formally into plans and re-
quirements for the in-service inspection of components and structural systems since late 1988. The
Pacific Northwest Laboratory (PNL) has also been developing risk-based inspection strategies for the
NRC since 1987, and has worked closely with the ASME Research Task Force. Atomic Energy of Canada Limited, the Japan Power Engineering and Inspection Corporation, the Swedish Nuclear Power Directorate, and the U.K. Nuclear Submarine Program are also engaged in developing such procedures.
These projects, the methodologies, and applications are described in Section 4. These methods can also
be adapted for use in maintenance programs. A project on risk-based maintenance of nuclear plants is
also in progress at the Pacific Northwest Laboratory. A brief summary of initial efforts by the PNL to
do this work is also provided in Section 4.
2.1. Notations
2.2. Abbreviations
Probabilistic structural mechanics methods have been used in a number of industries to develop in-
service inspection criteria for a variety of structures, so that specified levels of component (structural)
reliability are maintained throughout the service life. Such applications are discussed in detail in a number
of chapters in this handbook and so they are not described in this chapter. Instead, a brief summary of
these applications is provided in this section. (Unless otherwise stated, it is assumed in the following
summary that degradation [e.g., cracks] detected during inspection is repaired immediately.)
Chapter 6 describes a case study of developing a cost-effective inspection strategy for butt welds in
a container ship. Weld failure probabilities corresponding to different inspection schedules are computed.
Three inspection schedules are considered: every 0.1 years; at 0.0, 0.5, 1.0, 1.5, and 2.0 years; at 0.0,
0.5, 1.0, 1.5, 2.0, 5.0, 8.0, 14.0, and 17 years. The results show that the useful life of the welds is
greatly increased by frequent inspections, especially early in the life of the welds.
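The qualitative effect reported above, that frequent early inspections extend useful life, can be reproduced with a toy Monte Carlo model. The crack-growth law, detection probability, and schedules below are invented placeholders, not the Chapter 6 model.

```python
# Toy Monte Carlo sketch of how an inspection schedule affects weld failure
# probability. Crack-growth law, probability of detection, and all numbers
# are hypothetical stand-ins, not those of the Chapter 6 case study.
import random

def failure_prob(inspection_times, life=20.0, dt=0.1, trials=5000, seed=1):
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        a = rng.uniform(0.1, 0.5)          # initial crack size, mm (hypothetical)
        a_crit = 5.0                       # critical crack size, mm (hypothetical)
        t, failed = 0.0, False
        insp = sorted(inspection_times)
        while t < life:
            t += dt
            a *= 1.0 + rng.uniform(0.0, 0.04)   # stochastic growth per step
            if a >= a_crit:
                failed = True
                break
            if insp and t >= insp[0]:
                insp.pop(0)
                if rng.random() < 0.9:     # probability of detection (hypothetical)
                    a = rng.uniform(0.1, 0.5)   # detected crack is repaired
        failures += failed
    return failures / trials

frequent = [round(0.5 * k, 1) for k in range(1, 40)]   # every 6 months
sparse = [5.0, 10.0, 15.0]
print(f"frequent: {failure_prob(frequent):.3f}  "
      f"sparse: {failure_prob(sparse):.3f}  none: {failure_prob([]):.3f}")
```

Under these assumed parameters the frequent schedule keeps the computed failure probability far below that of the sparse or no-inspection cases, mirroring the trend reported for the welds.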
Chapter 18 discusses the effect of inspection on the reliability of the hull plates of a ship. Failure
probabilities as a function of time (years in service) are computed and plotted for 1-year inspection
intervals and 2-year inspection intervals.
Chapter 22 gives an example in which the required inspection intervals to keep the failure probability
of an ammonia vessel below 2 X 10-5 are determined using probabilistic fracture mechanics techniques.
The example also illustrates the influence of probability of crack detection of the nondestructive ex-
amination techniques on the required inspection interval. Although acoustic emission with external
ultrasonic inspection requires inspections at 5, 9, 13, 17, ... years, magnetic particle inspection, which
has a higher crack detection probability, requires inspections at 5, 12, 19, ... years only.
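The link between detection probability and allowable inspection interval can be illustrated with a back-of-the-envelope calculation: if a growing crack offers n inspection opportunities before reaching critical size, the chance it escapes all of them is (1 − POD)^n, so a higher POD meets a given target with fewer, more widely spaced inspections. All numbers below are hypothetical, not the Chapter 22 values.

```python
# Back-of-the-envelope look at why a higher probability of detection (POD)
# permits longer inspection intervals. All numbers are hypothetical and are
# not the Chapter 22 values.

def miss_all(pod, n_inspections):
    # Probability a growing crack is missed at every one of n inspections,
    # assuming independent outcomes with a constant POD.
    return (1.0 - pod) ** n_inspections

growth_window = 12.0   # years for a detectable crack to grow critical (assumed)
target = 2e-3          # tolerated chance of an undetected critical crack (assumed)

def max_interval(pod):
    # Smallest number of inspections inside the growth window such that the
    # probability of missing all of them meets the target.
    n = 1
    while miss_all(pod, n) > target:
        n += 1
    return growth_window / n

for pod in (0.6, 0.9, 0.99):
    print(f"POD={pod}: interval <= {max_interval(pod):.1f} years")
```

With these assumed values, raising the POD from 0.6 to 0.99 stretches the allowable interval from under 2 years to 6 years, the same qualitative effect as the acoustic emission versus magnetic particle comparison above.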
Chapter 23 discusses methods of utilizing results of inspection at half-way points in the life of aircraft
to update the initial estimates of crack propagation and expected life. A method of computing the
reliability of a fleet of aircraft on the basis of individual aircraft inspection data is also described.
Chapter 25 presents a procedure for optimizing inspection strategies for offshore oil platforms. Costs
of inspection, repair, and failure are considered in the optimization. Inspection intervals and inspection
quality (probability of crack detection) are optimized subject to constraints that inspection quality and
intervals be within some specified bounds and that the structural failure probability not exceed a
specified level.
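A minimal sketch of such a constrained optimization can be written as a grid search over interval and inspection quality; the cost and reliability models below are invented placeholders, not those of Chapter 25.

```python
# Sketch of a constrained inspection optimization: choose an inspection
# interval and quality (POD) minimizing total expected cost, subject to bound
# constraints and a failure-probability limit. The models and numbers below
# are invented placeholders, not the Chapter 25 formulation.
import math

def lifetime_failure_prob(interval, pod, life=30.0):
    # Placeholder model: more frequent, higher-quality inspections lower Pf.
    n = life / interval
    return 1e-2 * math.exp(-0.5 * n * pod)

def expected_cost(interval, pod, life=30.0):
    n = life / interval
    c_inspection = n * (10.0 + 100.0 * pod ** 2)   # better POD costs more ($K)
    c_failure = 5e4 * lifetime_failure_prob(interval, pod)
    return c_inspection + c_failure

best = None
for i in range(1, 11):               # interval bound: 1 to 10 years
    for q in range(50, 100, 5):      # quality bound: POD 0.50 to 0.95
        interval, pod = float(i), q / 100.0
        if lifetime_failure_prob(interval, pod) > 1e-3:   # reliability constraint
            continue
        cost = expected_cost(interval, pod)
        if best is None or cost < best[0]:
            best = (cost, interval, pod)

cost, interval, pod = best
print(f"best: interval={interval} yr, POD={pod}, expected cost={cost:.1f}")
```

The grid search is deliberately crude; the cited work uses formal optimization algorithms, but the structure of the problem (objective, bounds, reliability constraint) is the same.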
Chapter 26 provides an application to bridges. An inspection schedule required to keep the failure
probability of the Yellow Mill Pond Bridge in Connecticut below 0.023 is determined using probabilistic
fracture mechanics techniques. Without any inspection, this failure probability would have been ex-
ceeded after 16 years in service. It is predicted that the failure probability could be maintained below
0.023 by inspecting the bridge after 16, 24, 33, 41, 48, 56, 63, 70, and 76 years of service.
In the aforementioned applications, the component in-service inspection schedules are established on
the basis of reliability requirements of individual components. Because industrial plants usually contain
a large number of components, in recent years, component in-service inspection schedules are being
determined on the basis of plant-level risk considerations in risk-based inspection research and tech-
nology development programs. Components are first prioritized on the basis of their contribution to
plant risk and then optimal or cost-effective inspection schedules and procedures are established for
each component or group of components on the basis of plant risk considerations. This is the subject
of the next section.
• The use of a multidisciplinary, top-down approach that starts at the system level before focusing the inspection
at the component level
• The use of a "living" process that is flexible, strives for completeness, and can be easily implemented
• The use of quantitative risk measures
• The use of effective and efficient analytical methods that provide results that are readily reviewable and that
are familiar to those involved in in-service inspection technology
Figure 17-1 outlines the overall risk-based inspection process based on the features defined above.
The process is composed of the following four parts.
[Figure 17-1 is a flowchart with feedback loops linking four parts: system definition (define system boundary and success criteria; assemble information); qualitative risk assessment (define failure modes and causes; identify consequences; rank subsystems and components/elements); quantitative risk analysis by failure modes, effects, and criticality analysis (redefine failure modes, causes, and consequences; assess failure probabilities and consequences; risk evaluation; risk-based ranking); and inspection program development.]

Figure 17-1. Risk-based inspection process. (Source: ASME [1991]. Reprinted with permission from the American Society of Mechanical Engineers.)
• Definition of the systems and components to be considered for inspection, including assembly of the information needed for the risk-based approach
• Prioritization of systems and components for inspection, using a qualitative risk assessment based on expert judgment and experience
• Application of quantitative risk analysis methods, primarily using an enhanced failure modes, effects, and
criticality analysis (FMECA) and treating uncertainties, as necessary, to focus the inspection efforts on systems
and components associated with the highest calculated safety, economic, or environmental risk
• Development of the inspection program for the components, using decision risk analysis methods to include
economic considerations, beginning with an initial inspection strategy and ending with an update of that
strategy, based on the findings from the inspection that is performed
Several feedback loops are shown in Fig. 17-1 to represent a living process for the definition of the
system, the ranking of components, and the inspection strategy for each component. A key objective is
to develop a risk-based inspection process that is first established and then kept up to date by incor-
porating new information from each subsequent inspection.
4.1.1.1. SYSTEM DEFINITION. A key step in defining a system for inspection, as shown in the first
box of Fig. 17-1, is the assembly of information that is needed for the risk-based approach. In particular,
the interviewing of key personnel, who are knowledgeable of degradation mechanisms or errors that
may not be documented, is vital to the process.
4.1.1.2. INSPECTION PRIORITIZATION. The qualitative risk assessment, as included in the second
box of Fig. 17-1, utilizes expert judgment and experience in prioritizing systems and components for
inspection. A key element of this assessment is to identify potential failure modes and causes, including
design, operational, and maintenance errors and potential degradation mechanisms.
Figure 17-2 shows an example of a qualitative risk assessment matrix. In this approach, the likeli-
hoods of failure and severity of the consequences (injuries/deaths, economic loss, environmental dam-
age, etc.) are categorized into low, medium, and high, and the combinations of failure likelihoods and
consequences that are of various levels of concern are identified. Obviously, the components that have
the highest likelihood of failure and the highest consequences are of highest concern, and should be
concentrated on in an inspection program. Another representation of the same concept is shown in Fig.
17-3, in which each box represents a given component, and the box is used to show the range
of estimated consequence and failure probability. Once numbers are placed on the axes, the risk as-
sessment becomes quantitative, with uncertainty being represented by the size of the boxes. Then Fig.
17-3 represents the quantitative risk assessment of the lower two boxes of Fig. 17-1.
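The matrix logic can be stated compactly in code; the particular mapping of likelihood/consequence combinations to levels of concern below is an illustrative convention, not one prescribed by ASME.

```python
# Minimal sketch of the qualitative risk matrix: categorize likelihood and
# consequence as low/medium/high and map each combination to a level of
# concern. The mapping chosen here is an illustrative convention only.

LEVELS = {"low": 0, "medium": 1, "high": 2}

def concern(likelihood, consequence):
    score = LEVELS[likelihood] + LEVELS[consequence]
    if score >= 3:           # e.g., high likelihood with medium/high consequence
        return "highest concern: concentrate inspection here"
    if score == 2:
        return "intermediate concern"
    return "low concern"

print(concern("high", "high"))    # components to concentrate on
print(concern("low", "medium"))
```

Assigning numbers to the category boundaries turns this qualitative screen into the quantitative assessment described in the text.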
The 45° lines shown in Fig. 17-3 are lines of constant risk, where risk is the product of the probability of failure and the consequence.
[Figure 17-2 is a 3 × 3 qualitative risk assessment matrix with likelihood of failure (low, medium, high) on the vertical axis and severity of consequences (low, medium, high) on the horizontal axis; shaded cells mark the combinations that identify situations of highest concern.]

[Figure 17-3 plots consequences against probability of failure, with each component shown as a box.]

Figure 17-3. Risk ranking based on lines of constant risk. A - high risk; B - intermediate risk; C - low risk. (Source: ASME [1991]. Reprinted with permission from the American Society of Mechanical Engineers.)
Estimates of failure probabilities and consequences are obtained from prior PRAs (if any), reliability/availability analyses (if any), and
expert opinion. In this way, the key information is integrated to provide the safety, economic, or en-
vironmental risk associated with the systems, subsystems, and components under consideration.
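The risk-based ranking step of the FMECA then reduces to combining each component's estimated failure probability with its consequence and sorting; the components and numbers below are hypothetical.

```python
# Sketch of the risk-based ranking step of the FMECA: multiply each
# component's estimated failure probability by its consequence to get risk,
# then sort. Component names and numbers are hypothetical.

components = {
    # name: (annual failure probability, consequence in $)
    "feedwater pipe weld":   (1e-4, 5e7),
    "support bracket":       (1e-3, 1e5),
    "pressure vessel shell": (1e-6, 1e9),
    "heat exchanger tube":   (5e-4, 3e6),
}

ranking = sorted(
    ((p * c, name) for name, (p, c) in components.items()),
    reverse=True,
)
for risk, name in ranking:
    print(f"{name:22s} risk = {risk:10.1f} $/yr")
```

Note that a low-probability component (the vessel shell) can still rank above a high-probability one (the bracket) once consequences are folded in, which is the point of ranking by risk rather than by failure probability alone.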
Prior PRA results can be helpful in the process, and this is one of the reasons why much progress
has been made in applying these methodologies to commercial nuclear power reactors (which have
prior PRAs), as discussed in ASME (1992a) and Balkey and Simonen (1991). Although traditional plant
PRAs provide the needed information on consequences of structural failures, they do not provide the
comparable types of information needed to estimate probabilities of structural failures. Probabilistic risk
assessments consider structural failures to be only small contributors to core damage consequences, and
therefore give only brief treatment of this initiator. Estimating failure probabilities for structural com-
ponents is perhaps the most difficult part of the FMECA, and is probably the greatest source of uncer-
tainty in the resulting inspection priorities.
4.1.1.3. INSPECTION PROGRAM DEVELOPMENT. Once the FMECA is completed, and the compo-
nents are ranked or categorized, the next step is to develop an inspection program for each group of
components. This constitutes the bottom box in Fig. 17-1. This process is schematically shown in Fig.
17-5. It can also be used to establish an inspection program for an individual component or a system,
as necessary. The recommended process is divided into three basic steps.
1. Choose candidate inspection strategies that define the frequency, method, and sampling procedure for
inspection: The method of inspection includes the procedure, equipment, and level of personnel qualification
to perform the inspection. The inspection strategy may also take advantage of monitoring systems and
maintenance testing programs. Critical uncertainties associated with this step are the potential for degra-
dation to exist in the component, the potential for inspection damage (which also includes the potential for
danger to the inspector), and the reliability of the inspection method, including the potential for false calls.
2. Choose an inspection strategy and perform inspection: From the candidate inspection strategies defined in the above step, the effect of each of these strategies on the failure probability of the component is estimated. The key uncertainties to be considered in this estimate are the inspection reliability, the chance that certain degradation mechanisms are occurring, the potential for certain levels of loads to occur, and the potential failure mode of the component. Structural reliability analysis techniques can be used to evaluate the impact of the candidate inspection strategies on the failure probabilities and the sensitivity to uncertainties. Inspection costs and costs related to failure are also estimated for each strategy. An inspection strategy is chosen on the basis of these results, and the inspection is performed.

[Figure 17-4 shows failure probability and consequence estimates being combined into risk, drawing on inputs such as operating experience data bases and potential degradation mechanisms.]

Figure 17-4. Integration of technical information into an FMECA for risk-based inspection. (Source: ASME [1991]. Reprinted with permission from the American Society of Mechanical Engineers.)

[The entries below are recovered from a partially extracted table summarizing sources of information on component degradation:]

ASME Task Group on Fatigue: Special ASME, Section XI Task Group report; reviews fatigue of nuclear power plant components and makes recommendations to ASME, Section XI. Provides a comprehensive review of operating experience and describes occurrences of cracking (ASME, 1990).

NRC Pipe Crack Study Group: Formed by the NRC to evaluate the causes of unexpected cracking of reactor piping systems. Identifies potential solutions for eliminating or mitigating reactor piping system cracking (Frank et al., 1980; NRC, 1979).

EPRI plant life extension study: EPRI-sponsored study on material degradation and environmental effects on components for plant life extension. Contains information relating to fabrication processes that contribute to degradation and identifies flaws in LWR components (Copeland et al., 1987).

EPRI erosion/corrosion software: Computer software developed by EPRI to predict piping locations subject to erosion/corrosion; widely used by utilities (Chexal and Horowitz, 1989).

NPRDS - Nuclear Plant Reliability Data System; LER - licensee event report; NPAR - nuclear plant aging research; INPO - Institute of Nuclear Power Operations; NUMARC - Nuclear Management and Resource Council; EPRI - Electric Power Research Institute.
3. Choose appropriate action and update state of knowledge: Following the performance of the inspection,
another critical decision is faced. That is, should the component be repaired or replaced if significant findings
occur, or should nothing be done except to redefine the inspection program (going back to part 1 of the
overall process shown in Fig. 17-1)? If a repair or replacement is required, another decision that is faced
is whether to take the action now or later. This depends on whether this action will indeed keep the component in a success (normal) state for the intended period of operation, or whether the potential exists for new damage to be introduced. Structural reliability analysis can be used once again to determine the effects of inspection findings and potential corrective actions on the failure probabilities. In any case, all of the results related to the inspection should be used to update the FMECA information on a periodic basis to rerank the components on the basis of risk and to redefine the inspection program, starting with part 1 of the overall procedure, providing a "living process" as long as the component is in service.

[Figure 17-5 is a decision flowchart; one recoverable branch label reads "implement action now."]

Figure 17-5. Inspection program development. (Source: ASME [1991]. Reprinted with permission from the American Society of Mechanical Engineers.)
Decision risk analysis logic trees are used to perform the three steps of the inspection program
development process. An example is provided in the next section to help clarify the process of selecting
the best inspection strategy (fourth part of the process shown in Fig. 17-1).
4.1.1.4. TUTORIAL EXAMPLE FOR SELECTION OF AN INSPECTION STRATEGY. Consider a simple
example in which three candidate inspection strategies are being considered for detecting repairable
cracking degradation in a section of high-pressure piping. The candidate strategies are "no inspection,"
"current inspection," and "new inspection." A strategy of "no inspection" is evaluated so that a
potential relaxation from the current method is also considered. The new method has a higher detection
probability than the current method, but also a higher implementation cost. No inspection obviously
has a low detection probability and no implementation cost.
Figure 17-6 depicts a decision tree that illustrates the sequence of decisions and uncertainties involved
in the choice between the three alternatives. Starting from the left end of the tree and following any
particular path through the tree leads to a single value of the decision criterion (total cost). The prob-
abilities attached to the branches at each chance node represent the likelihood of following that path.
By starting at the top (left end) of the tree and following a process of taking expected values at chance
nodes and optimizing (i.e., choosing the highest expected value) at decision nodes, the tree is usually
"averaged out and folded back" to yield an expected total cost for each alternative. For the sake of
example, the numerical calculations are shown to the right of the tree along with the path scenario
probabilities, which the user can also compare against an acceptable failure probability level.
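The "averaging out and folding back" procedure can be sketched as follows; the tree structure, probabilities, and costs are invented placeholders, not the Figure 17-6 values (and since the criterion here is cost rather than value, the decision nodes minimize).

```python
# Sketch of "averaging out and folding back" a decision tree. Node structure,
# probabilities, and costs (in $K) are hypothetical illustrations, not the
# values of the Figure 17-6 example.

def expected_cost(node):
    """Fold back a tree: average over chance nodes, minimize over decision nodes."""
    kind = node["kind"]
    if kind == "terminal":
        return node["cost"]
    if kind == "chance":
        # Expected value: probability-weighted sum over branches.
        return sum(p * expected_cost(child) for p, child in node["branches"])
    if kind == "decision":
        # With total cost as the criterion, pick the lowest-expected-cost branch.
        return min(expected_cost(child) for child in node["options"])
    raise ValueError(f"unknown node kind: {kind}")

tree = {
    "kind": "decision",
    "options": [
        {   # "inspect": pay 50, usually detect and repair; else risk rupture
            "kind": "chance",
            "branches": [
                (0.9, {"kind": "terminal", "cost": 50 + 100}),        # detect, repair
                (0.1, {"kind": "chance", "branches": [
                    (0.2, {"kind": "terminal", "cost": 50 + 20000}),  # rupture
                    (0.8, {"kind": "terminal", "cost": 50}),          # survives
                ]}),
            ],
        },
        {   # "no inspection": no up-front cost, higher rupture probability
            "kind": "chance",
            "branches": [
                (0.15, {"kind": "terminal", "cost": 20000}),          # rupture
                (0.85, {"kind": "terminal", "cost": 0}),
            ],
        },
    ],
}

print(f"expected cost of best alternative: ${expected_cost(tree):.0f}K")
```

Evaluating `expected_cost` on each alternative separately reproduces the per-strategy comparison described in the text; the top-level decision node simply selects the cheapest.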
In this case, the new method is seen to have the lowest expected cost ($376K) versus the current
method ($532K). The strategy of "no inspection" yields the highest expected cost ($766K), and is
dropped from further consideration. This strategy also yields the highest failure probabilities. Exami-
nation of the tree reveals that the probability of a "rupture before end of life" is high enough that the
new method avoids sufficient "consequential costs" to offset its higher "inspection cost." Sensitivity
analysis could reveal the uncertainties that are critical in affecting the choice of inspection method. This
analysis could also provide estimates of the dollar amount that should be invested in information-
gathering activities directed toward resolving or reducing the critical uncertainties. Finally, the decision
maker's "risk aversion" (e.g., to the possibility of following the path to the $20 million consequential
cost) could be formally incorporated into the final decision.
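The fold-back arithmetic can be sketched as follows. The scenario probabilities and total costs (in $K) are those shown on the "no inspection" branch of Fig. 17-6; the function names are illustrative, not from the source.

```python
# Fold back a one-stage decision tree: take the expected value at each
# chance node, then choose the alternative with the lowest expected cost.

def expected_cost(scenarios):
    """Expected value at a chance node: sum of probability-weighted costs."""
    return sum(p * cost for p, cost, _fails in scenarios)

def failure_probability(scenarios):
    """Sum of probabilities of scenarios leading to failure."""
    return sum(p for p, _cost, fails in scenarios if fails)

def fold_back(alternatives):
    """Choose the alternative with the lowest expected total cost."""
    return min(alternatives, key=lambda name: expected_cost(alternatives[name]))

# "No inspection" branch of Fig. 17-6: (probability, total cost in $K, failure?)
no_inspection = [
    (0.0185,  3250, True),   # failure scenario, moderate consequential cost
    (0.0630,  5250, True),   # failure scenario
    (0.0185, 20250, True),   # failure scenario with the $20M consequential cost
    (0.9000,     0, False),  # no failure, no cost
]

print(expected_cost(no_inspection))       # ~766 ($K), matching the tree
print(failure_probability(no_inspection)) # ~0.10
```

Running the same calculation down every branch, and then minimizing at the decision node, reproduces the choice of the new inspection method described above.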
The failure probabilities, even for the new inspection method, may be considered to be unacceptable
by the user. Additional strategies, possibly considering more frequent inspections or including moni-
toring systems, may be developed in an attempt to yield acceptable failure probabilities. If acceptable
failure probabilities cannot be achieved by any inspection strategy, the user now faces repair or replace
decisions before carrying the inspection process any further. For many applications, structural reliability
analysis is needed to evaluate the failure probabilities, particularly when these values cannot be rea-
sonably obtained from expert opinion.
In summary, prudent management of inspection programs requires that the technical information
from structural reliability and other engineering analyses be integrated with financial, regulatory, and
other information into a comprehensive framework for evaluating alternatives. Decision analysis can
provide that framework.
400 Risk-Based Inspection and Maintenance
4.1.2. Pacific Northwest Laboratory project. The methodology developed by the Pacific North-
west Laboratory (PNL) for the NRC is focused on nuclear power plant applications (Vo et al., 1989a,
1990, 1992, 1993). The method prioritizes nuclear plant systems and pressure
boundary components (pressure vessels, piping, and associated welds) according to their contribution
to core damage frequency (CDF). (The ASME Research Task Force approach, on the other hand, not
only prioritizes components for inspection but also determines the required inspection frequency and
inspection method.)
Let us discuss the method as it is applied for prioritizing pipe welds in nuclear plants. The priori-
tization procedure consists of two steps: (1) prioritization of nuclear plant systems and (2) prioritization
of components (pipe welds) within important systems.
The systems are prioritized according to the risk (core damage) contribution of the piping welds in
the system. Then the components (welds) in each of the important systems are ranked according to
[Figure 17-6 appears here; the branch-by-branch numerical values are not reproduced.]
KEY
□ = decision node
○ = chance node
Boxed cost is expected cost of alternative = sum of probability-weighted scenario costs.
Circled number is failure probability = sum of probabilities for scenarios leading to failure.
Figure 17-6. Example decision tree for choosing an inspection strategy. (Source: ASME [1991]. Reprinted with
permission from the American Society of Mechanical Engineers.)
their contribution to core damage. Existing PRA results are used as much as possible in order to limit
additional analyses.
There are many importance measures that could be used to rank systems (see Chapter 9). The Fussell-
Vesely importance measure is a good candidate. However, most nuclear plant PRAs do not include pipe
failures in their fault trees and event trees because of their low probabilities as compared to other
equipment failure probabilities. Generally, the Fussell-Vesely measure cannot be obtained from existing
PRA results without additional analyses.
The Birnbaum importance measure is another possible candidate. This measure does not reflect the
failure probabilities of systems or components; a high-consequence, but low failure probability system
such as the reactor pressure vessel system has a higher Birnbaum importance measure than systems
that have much higher failure probabilities. The PNL developed a new importance measure, called the
weld inspection importance measure or inspection importance measure. This measure for system i is
equal to the product of the pipe break (weld failure) probability of system i (i.e., probability of a pipe
break within the system) and the Birnbaum importance measure of system i. Systems are ranked ac-
cording to this measure.
The weld inspection importance measure can be interpreted as the core damage probability due to
system failures resulting from pipe breaks. This measure can be applied to prioritize not only welds but
also pipe cracks and wall thinning in the base metal of piping. The PNL study focused on weld failures
because welds are more susceptible to failure.
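The ranking step can be sketched as follows; the system names, pipe break probabilities, and Birnbaum values below are invented for illustration, not taken from the PNL reports.

```python
# Rank systems by the PNL weld inspection importance measure:
#   I_i = (pipe break probability of system i) x (Birnbaum measure of system i)
# where the Birnbaum measure is P(core damage | system i failure).

systems = {
    # name: (pipe break probability per year, Birnbaum measure) -- invented values
    "reactor coolant":        (1.0e-4, 1.0e-2),
    "low pressure injection": (5.0e-4, 1.0e-3),
    "emergency feedwater":    (2.0e-3, 1.0e-4),
}

def inspection_importance(p_break, birnbaum):
    return p_break * birnbaum

ranking = sorted(systems, key=lambda s: inspection_importance(*systems[s]),
                 reverse=True)
print(ranking)  # ['reactor coolant', 'low pressure injection', 'emergency feedwater']
```

Note that a system with a modest break probability can still rank first if its Birnbaum measure (consequence) is high, which is exactly the behavior the measure is designed to capture.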
Next, pipe welds in the more important systems are prioritized. The probability of core damage
resulting from failures of weld j is given by the following equation:

P(core damage due to weld j) = P(failure of weld j)
    × P(system failure | failure of weld j) × P(core damage | system failure)     (17-2)
The welds within each system are ranked according to this core damage probability. Usually this
core damage probability calculation can be made using existing PRA results, available pipe failure data
(e.g., Wright et al., 1987), and expert opinion.
This method has been demonstrated by applying it to system and weld prioritization in the Oconee-3
nuclear plant (Vo et al., 1989a). The system ranking procedure has been applied to eight nuclear plants
(Vo et al., 1990). Some numerical results from the Oconee-3 application are presented in Section 4.3.2.
4.1.3. Swedish program. An approach, which can be classified as a qualitative risk assessment
technique, has been implemented in Sweden for nuclear plant inspections (Nilsson et al., 1988). Pri-
orities for inspection are based on an assigned scale of a "consequence index" and a "fracture index,"
as shown in Fig. 17-7. The Swedish approach is similar in concept to the ASME Boiler and Pressure
Vessel Code (BPVC), Section XI (ASME, 1992b). Rather than applying a generic ranking of inspection
priorities for all plants, as is done by ASME BPVC Section XI, the personnel at each plant must identify
the high consequence and high probability of failure components at their facility that warrant the highest
priority for inspection. This requires that qualitative, detailed knowledge of plant systems, operating
practices, and component degradation be factored into the inspection planning process.
The Swedish Nuclear Power Inspectorate has issued regulations based on this qualitative risk ranking
approach to the planning of the inspections of pressurized components. This action has resulted in a
greater concentration of inspection resources on the highest risk systems and components. As part of
the Swedish approach, the inspection priorities for each plant are revised continuously to address the
significance of new safety-related information. In initial applications of the new methodology to Swedish
plants, the evaluations have supported a reduction of in-service inspection for pressurized water reactor
plants, whereas a need for increased inspection has been shown for many boiling water reactor plants.
To aid in decisions about allocation of in-service inspection resources, Dillstrom et al. (1992) have
developed a simple probabilistic fracture mechanics model to study boiling water reactor vessels. It has
been used to investigate whether different regions of a specific reactor vessel exhibit significant differ-
ences in fracture probability, thereby affecting the inspection priority.
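The qualitative assignment of Fig. 17-7 can be encoded as a simple lookup. The matrix and sample sizes below are those of the figure; the function name is illustrative.

```python
# Inspection category from the Fig. 17-7 matrix: rows are the fracture
# index (I, II, III), columns are the consequence index (1-4).

CATEGORY = {
    "I":   {1: "A", 2: "A", 3: "B", 4: "C"},
    "II":  {1: "A", 2: "B", 3: "C", 4: "C"},
    "III": {1: "B", 2: "C", 3: "C", 4: "C"},
}

# Required sample size per category (inspections on a 6-year interval);
# category C calls for little or no inspection.
SAMPLE = {"A": 0.75, "B": 0.10, "C": 0.0}

def inspection_plan(fracture_index, consequence_index):
    cat = CATEGORY[fracture_index][consequence_index]
    return cat, SAMPLE[cat]

print(inspection_plan("I", 1))    # ('A', 0.75)
print(inspection_plan("III", 4))  # ('C', 0.0)
```

The plant-specific judgment described in the text enters through the choice of indices for each component, not through the lookup itself.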
4.1.4. Japanese program. The use of formalized risk assessment processes has not been adopted
to date in Japan because of societal reluctance. Japanese society wants "zero risk" as a goal.
Given this public belief, the Japan Power Engineering and Inspection Corporation (JAPEIC) has
organized a research project on inspection and maintenance technology assessment (IMA) (Iida et al.,
1990), composed of many elements that relate to risk-based inspection.
Work began in 1989 to develop an evaluation method usable for comprehensive and quantitative
judgment of inspection and maintenance techniques in power generation facilities. The key elements in
assessing new inspection and maintenance technology are as follows (Iida et al., 1990).
1. Assessment of safety
a. Facilities (accidents, etc.)
b. Surrounding inhabitants (fire, etc.)
c. Human factor (misoperation, etc.)
                      Consequence Index
                      1     2     3     4
Fracture      I       A     A     B     C
Index         II      A     B     C     C
              III     B     C     C     C
Notes:
Inspection categories - The A, B, and C categories define the required inspection sample size,
with a 75% sample for category A, 10% sample for category B, and little or no inspection for
category C components. All inspections are to be performed on a 6-year interval.
Consequence Index - The consequence index is assigned making use of insights provided by
plant-specific PRA results. Category 1 corresponds to the most safety-significant components,
whereas the least significant components are of category 4.
Fracture Index - The fracture index corresponds to the likelihood of failure for the component.
Most components will be assigned to category II. Factors that can elevate a component to category
I are the presence of erosion/corrosion, thermal fatigue, high fatigue usage factors, and service
failures experienced by similar components. Only components with very low design stresses and
with insignificant degradation mechanisms are assigned to category III.
Figure 17-7. Swedish approach for establishing inspection intervals. (Source: Adapted from Nilsson et al., 1988.)
2. Assessment of economy
a. Cost (inexpensive, etc.)
b. Personnel (operator, etc.)
3. Assessment of reliability
a. Operation (handling, etc.)
b. Accuracy (measuring accuracy, etc.)
c. Function (damage, etc.)
d. Interference (space, electricity, etc.)
e. Experience (thermal power plant, etc.)
4. Others
a. Reduction of exposure dose (time, etc.)
b. Engineering test (logical analysis, etc.)
c. Organization (responsible person, etc.)
d. Legal proceedings (enterprise activity of electric power company, etc.)
e. Comparison with convention techniques
A newly developed system should be evaluated objectively with unbiased judgment, and the evaluation
should be made in a rational manner by a method taking into account not only the performance and
functions of the new system but also various other merits of the new technology.
The interpretive structural modeling (ISM) methods, described by Sage (1977), have been found to
best satisfy the requirements, particularly in the following respects.
The ISM method has been applied to the automatic ultrasonic inspection system for a piping elbow.
Weight assessments and a pairwise comparison, relative to current technology, have been completed by
representatives from regulatory, electric power company, and plant supplier groups according to the
following structure.
• Compliance of regulation
• Safety
• Operation
• Application
• Precision
• Radiation exposure
• Inspection time
• Cost
• Experience of use
• Ability to automate
The method results in evaluation of the new technology with respect to each of these items by each
group. The importance weighting for each item may vary from group to group. However, the results
can be integrated to obtain an overall ranking.
This approach is still under study, and the JAPEIC plans more research on this area in the future.
4.1.5. Canadian program. Inspection requirements for Canada's CANDU nuclear power plant
components are defined by the Canadian Standards Association (1984), using a qualitative risk-based
approach similar to that used in Sweden, which is discussed in Section 4.1.3.
A study of interest has been performed by Platten (1984) for Canadian reactors that incorporates
both safety and economic risk factors in defining inspection priorities. A method is presented for de-
termining the optimal number of piping welds to be inspected that results in a minimum overall radiation
risk. The risk has two parts: (1) that incurred by inspection personnel, and (2) the expected risk to the
public from piping system failure.
The probability of system failure is quantified in terms of the number of welds inspected. Radiation
exposures are evaluated by a method based on radiation detriment optimization measures. Knowing the
radiation release to the public due to failure, it is possible to quantify the overall radiation risk in terms
of the number of welds inspected.
Optimization methods, using cost-benefit techniques, allow determination of the optimal number of
welds to inspect to minimize the overall radiation risk. Economic factors include the following.
The following four variables are the key parameters of the optimization problem.
The optimal number of welds to inspect (Iopt) is a function of N and the product P(GD) × CR. The
general trend of the relationship between Iopt, N, and P(GD) × CR is shown graphically in Fig. 17-8.
In applying the methodology to a primary heat transport piping system in a specific CANDU nuclear
station with N = 290, the value of P(GD) × CR = 0.02 × 56 = 1.1. Platten's (1984) results show that,
for this case, no periodic inspection of welds would be required, based on radiation risk (i.e., Iopt = 0).
However, when economic costs for failure are folded into the above CA term and inspection costs are
integrated into the CI term, P(GD) × CR = 0.02 × 4900 = 98 and inspection of many welds can be
justified. In this case, Platten's (1984) results show that inspection of about 40 welds may be justified, which
is close to the level required by the Canadian Standard. (The National Standard of Canada CAN3-
N285.4-M83 requires 43 welds to be inspected, which is 15% of the total.) Thus, economic factors may
have the potential to provide a stronger incentive for in-service inspection than just considerations of
safety.
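Platten's (1984) dose model is not reproduced in this chapter; the sketch below only illustrates the underlying cost-benefit trade-off. The diminishing-returns form of the risk reduction, the dose-per-inspection value, and the exponent k are all assumptions, chosen so that the sketch reproduces the qualitative behavior reported above (Iopt = 0 when P(GD) × CR = 1.1, and a few dozen welds when it is 98).

```python
# Illustrative trade-off behind Fig. 17-8: inspecting more welds adds
# inspector dose but reduces the expected public dose from failure.
# The functional form and parameters below are assumptions for
# illustration only, not Platten's model.

def total_radiation_risk(i, n, p_gd_times_cr, dose_per_inspection=0.75, k=3.0):
    inspection_dose = dose_per_inspection * i               # dose incurred by inspectors
    residual_risk = p_gd_times_cr * (1.0 - i / n) ** k      # assumed diminishing returns
    return inspection_dose + residual_risk

def optimal_welds(n, p_gd_times_cr):
    """Grid search for Iopt over the number of welds to inspect, 0..n."""
    return min(range(n + 1), key=lambda i: total_radiation_risk(i, n, p_gd_times_cr))

# Low P(GD) x CR: the inspection dose dominates and no inspection is optimal.
print(optimal_welds(290, 1.1))   # 0
# Higher P(GD) x CR pushes the optimum to a substantial number of welds.
print(optimal_welds(290, 98.0))  # a few dozen
```

The bang-bang versus interior-optimum behavior depends entirely on the assumed shape of the risk-reduction curve, which is why Fig. 17-8 is labeled "not to scale."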
4.1.6. United Kingdom developments. Another nuclear-related activity is in the United Kingdom
Nuclear Submarine Program. Chapman (1983, 1989) describes a probability-based approach for both
optimizing and measuring the gain in confidence from in-service inspection of vessel and piping com-
ponents. This approach uses an expert system together with mathematical modeling to form an initial
best estimate of the state-of-life defect distribution for welds. The through-life history of the defects is
then calculated to arrive at an end-of-life failure probability. Inspection programs are then hypothesized
through the life of the components and their effect on the failure probability calculated. Clearly, the
results, and hence conclusions, about the optimum inspection depend on the initial assumptions and
judgments. To overcome this drawback, once an inspection program is set out and results from the
inspections become available, Bayesian logic is applied to update initial defect distribution and through-
life prediction.
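Chapman's expert-system and fracture mechanics models are not reproduced here; the sketch below shows only the Bayesian updating step, under the assumption of a conjugate Beta prior on the per-weld defect rate (an illustrative choice, not necessarily Chapman's formulation).

```python
# Bayesian update of an initial defect-rate estimate as inspection
# results arrive. A Beta(a, b) prior on the probability that a weld
# contains a significant defect is an assumed, illustrative choice.

def update_defect_rate(a, b, welds_inspected, defects_found):
    """Beta-binomial conjugate update: posterior is Beta(a', b')."""
    return a + defects_found, b + (welds_inspected - defects_found)

def mean(a, b):
    return a / (a + b)

# Initial best estimate from modeling and expert judgment: ~2% of welds defective.
a, b = 2.0, 98.0
print(mean(a, b))            # 0.02

# First in-service inspection: 50 welds examined, none found defective.
a, b = update_defect_rate(a, b, 50, 0)
print(round(mean(a, b), 4))  # 0.0133 -- the estimate drops as clean results arrive
```

The same posterior then feeds the through-life prediction, so each inspection tightens the end-of-life failure probability estimate rather than leaving it anchored to the initial assumptions.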
[Figure 17-8 appears here: the optimal number of welds to inspect (Iopt) plotted against the total
number of welds (N) for several values of P(GD) × CR.]
Figure 17-8. The general trend of the relationship between Iopt, N, and P(GD) × CR (not to scale).
4.3. Applications
4.3.1. Application of the ASME research task force approach. The ASME Research Task Force
approach discussed in Section 4.1.1 provides a framework for allocating inspection resources in a cost-
effective manner and helps to focus inspections where they are most needed. This general methodology
is further refined and expanded for application to light water reactor (LWR) nuclear power plant com-
ponents. The general process, which was shown previously in Fig. 17-1, now includes five parts.
For the first two parts of the process, systems are already well defined for LWR nuclear power plants,
and qualitative risk measures have been implicitly incorporated into current inspection programs. The
last three parts of the process, which outline the use of quantitative risk analysis methods to formulate
nuclear component inspection programs, are the primary focus of the latest research work, which is
described in detail in ASME (1992a). Some key aspects are as follows.
1. The use of information from PRAs, which have now been conducted for many nuclear power plants, to
quantify risks associated with pressure boundary component failures
2. A procedure for specifying target component failure probabilities from these quantitative risk estimates
3. The method for determining the characteristics that an inspection program must possess in order to meet
target failure probabilities, while considering cost-benefit factors
Risk-based ranking of components for inspection: The selection and risk prioritization of com-
ponents for inspection is performed by combining information from PRAs with probabilities of pressure
boundary and structural failures, using a modified FMECA procedure.
Probabilistic risk assessment is a systematic method that identifies and delineates the combination
of events that, if they occur, will lead to a severe accident and/or other undesired events. The method
is used to estimate the frequency of occurrence for each combination and estimate the consequences.
The key advantages of using PRA information are that its results can be used to allocate resources to
develop an inspection program. The American Nuclear Society (ANS) and Institute of Electrical and
Electronics Engineers (IEEE) (1983) provides a comprehensive technique for developing PRAs for
nuclear power plants.
The NRC (1988) has prepared a plan for completing severe accident issues, which requires each
licensee of a nuclear power plant under construction or in commercial operation to perform an Individual
Plant Examination (IPE). The purpose of the IPE is to identify previously unrecognized severe accident
vulnerabilities and to provide a systematic and structured approach to analyzing various plant modifications
that may be required. When the IPE results become available, these can be used to support
prioritization activities. However, for those facilities without the developed PRAs or IPEs, other tech-
niques such as FMECA or fault tree analysis (FTA), may be used in combination with relevant plant
information, service experience, etc., to obtain the desired results.
Nuclear power plant PRAs usually do not include most structural failures (including pressure bound-
ary component failures such as piping and associated weld failures) in the event trees or fault trees
because these failures are of very low probability compared to other equipment failures, and their
contribution to the CDF is small. However, structural component prioritization for inspection requires
their contributions to the CDF. Consequently, some "reanalysis of PRA" is required to compute these
contributions.
In the modified FMECA, the risk associated with the failure of each component is calculated as the
product of the probability of failure times the consequence as determined from the PRA reanalysis. This
product is the additional CDF resulting from the possibility of failure of the component in question.
This is a fairly extensive analysis, requiring both the estimation of failure probabilities and additional
analysis of system fault trees from PRAs.
Because structural failures are rare events, historical data provide only a limited basis for estimating
these failure probabilities. Although structural reliability analysis techniques can be used to estimate
component failure probabilities, it would be very expensive to perform such analyses for all the com-
ponents of interest. Consequently, an expert elicitation process similar to that developed for the NRC
in NUREG-1150 (NRC, 1989) is recommended for estimating the rupture probabilities.
The final result of the FMECA procedure is a table or plot of components in the important systems
in the plant that shows the order of their estimated contributions to risk as measured by the CDF (e.g.,
see Fig. 17-9). The "cumulative risk" due to components is shown in Fig. 17-9; the cumulative risk
corresponding to the nth component is the sum of the contributions of components 1 to n. The ranking
can also be performed using economic loss as the measure if the user wants to extend the evaluation
beyond safety concerns.
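The ranking arithmetic of the modified FMECA can be sketched as follows; the component names and all numerical values are invented for illustration.

```python
# Modified FMECA ranking: risk of each component = failure probability
# x consequence (the conditional CDF contribution from the PRA reanalysis).

components = {
    # name: (failure probability per year, conditional CDF contribution) -- invented
    "RPV beltline weld": (1.0e-5, 1.0e-1),
    "AFW supply line":   (1.0e-4, 1.0e-3),
    "LPI suction line":  (5.0e-5, 1.0e-3),
}

risks = {name: p * c for name, (p, c) in components.items()}
ranked = sorted(risks, key=risks.get, reverse=True)

def cumulative_risk(n):
    """Cumulative risk of the top-n components, as plotted in Fig. 17-9."""
    return sum(risks[name] for name in ranked[:n])

print(ranked[0])                     # highest contributor
print(cumulative_risk(len(ranked)))  # total risk from all listed components
```

Plotting cumulative_risk(n) against n reproduces the characteristic flattening curve of Fig. 17-9: a few components dominate, and the remainder contribute little.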
Target failure probability selection: A philosophy and approach for selecting target values of risk
[Figure 17-9 appears here: cumulative core damage frequency contribution plotted against component
identification number (1-37); the legend identifying the individual RPV, RCS, AFW, and LPI
components is not reproduced.]
Figure 17-9. Cumulative risk contributions of Surry 1 components, showing decreasing contributions of lower
ranked components. (Source: ASME [1993]. Reprinted with permission from the American Society of Mechanical
Engineers.)
and failure probability for individual components has been recommended. The philosophy is that in-
spections should ensure that the core damage risk (or CDF) resulting from structural failures is main-
tained at less than a small fraction of the total core damage risk estimated in the PRA from all internal
events. This risk due to structural failures is referred to as the "target risk," and 5% of the total PRA-
estimated risk resulting from internal events has been recommended as an appropriate target value. It
is further recommended that this target risk be apportioned among the risk-important components in
proportion to the risk associated with the failure of each component (unless other apportionment is
necessary because unachievably low target failure probabilities result from this apportionment). Using
the conditional probability of core damage given component failure, the target failure probability can
then be calculated for each component.
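The apportionment described above can be sketched as follows; all numerical values are invented for illustration.

```python
# Target failure probabilities: take 5% of the total internal-events CDF
# as the target risk, apportion it among components in proportion to
# their risk contributions, then divide each component's share by
# P(core damage | component failure).

TARGET_FRACTION = 0.05

def target_failure_probabilities(total_cdf, component_risks, p_cd_given_failure):
    target_risk = TARGET_FRACTION * total_cdf
    total_component_risk = sum(component_risks.values())
    targets = {}
    for name, risk in component_risks.items():
        apportioned = target_risk * risk / total_component_risk
        targets[name] = apportioned / p_cd_given_failure[name]
    return targets

total_cdf = 4.0e-5                                  # total internal-events CDF (invented)
risks = {"weld A": 3.0e-7, "weld B": 1.0e-7}        # risk contributions (invented)
p_cd = {"weld A": 1.0e-2, "weld B": 1.0e-3}         # P(core damage | failure) (invented)
print(target_failure_probabilities(total_cdf, risks, p_cd))
```

If any resulting target is unachievably low, the text's caveat applies: the apportionment itself is revised rather than the target risk.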
Inspection program development: The final part of the recommended process is to develop the
inspection program and to perform the actual inspections. Alternative strategies to current inspection
practices are discussed and include changes in frequency, sample size, and methods of examination.
The evaluation of candidate inspection strategies requires that several issues be addressed, including
the following.
In making a final choice for the inspection program, the analyst must ensure that the selected strategy
meets the following requirements.
Decision/risk analysis techniques are used to help with the choice in this multiattribute situation.
Detailed logic trees are used to assist in choosing candidate inspection strategies for further evaluation,
using historical data, expert elicitation, or structural reliability analysis to determine if they meet Goal
1 above, and for further refinement.
If it can be shown that, during the operating life of a component, the expected failure probability of
the component does not exceed its target failure probability (with an acceptable degree of confidence),
a sampling inspection plan or no inspection at all may be an acceptable strategy. Such a demonstration
could be provided by an analysis based on historical data, expert elicitation, or structural reliability
analysis that takes into account specific initial and operational thermal, mechanical, and environmental
loading conditions the component would be expected to encounter during its operating period. The
failure probability is calculated in structural reliability analysis evaluations as a function of time, starting
with initial information on flaw distributions and material property variations, inspection reliability, and
using stress information to evaluate the rate of degradation (e.g., crack growth).
A more enhanced inspection strategy would be required, however, if structural reliability evaluations
indicate that the failure probability exceeds the target value before the end of the operating period of
the component. In that case, additional refinement to the inspection strategy could be postulated and
evaluated through additional reliability analyses. The results of specific inspections at specified time
intervals are included in the analysis by using probability of detection information to update damage
distributions (e.g., crack size), assuming that detected degradation is repaired or the component is
replaced. The result of these calculations would again be failure probability as a function of time, but
in this case the inspection data would "reset" the failure probability to a new value after each inspection
to reflect the confidence gain in the state of knowledge of the "true" failure rate and the effect of
repair or replacement, if any.
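A minimal Monte Carlo sketch of this time-dependent calculation follows. The initial flaw distribution, growth rates, critical depth, and probability-of-detection curve are all assumed, illustrative choices, not values from the source; the point is only the mechanism by which inspections "reset" the failure probability.

```python
# Monte Carlo sketch of failure probability vs. time with inspections:
# sampled cracks grow each year; at inspection years a crack may be
# detected (with a size-dependent POD) and the component repaired,
# which resets that sample to a new small flaw.
import math
import random

def probability_of_detection(a, a50=2.0):
    """Assumed POD curve: deeper cracks are easier to detect."""
    return 1.0 - math.exp(-a / a50)

def failure_probability(years, inspections, n_samples=20000, seed=1):
    rng = random.Random(seed)
    a_crit = 10.0                                  # assumed critical crack depth (mm)
    failures = 0
    for _ in range(n_samples):
        a = rng.lognormvariate(-1.0, 0.5)          # assumed initial flaw depth (mm)
        growth = rng.uniform(0.1, 0.5)             # assumed growth rate (mm/year)
        for year in range(1, years + 1):
            a += growth
            if a >= a_crit:
                failures += 1
                break
            if year in inspections and rng.random() < probability_of_detection(a):
                a = rng.lognormvariate(-1.0, 0.5)  # detected flaw repaired/replaced
    return failures / n_samples

p_no_insp = failure_probability(40, inspections=set())
p_insp = failure_probability(40, inspections={10, 20, 30})
print(p_no_insp, p_insp)  # inspections lower the end-of-life failure probability
```

In a real structural reliability evaluation the POD-driven reset would update the crack-size distribution analytically rather than by rejection in a simulation, but the effect on the failure-probability curve is the same.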
By performing such analyses, in which different inspection frequencies and detection capabilities are
assumed, several potentially satisfactory inspection strategies can be developed. Alternatively, it may
be determined that advanced inspection techniques would have to be developed to achieve target failure
probabilities for certain components.
Once successful strategies are established, the decision analysis techniques are used again to integrate
the structural reliability analysis results with related PRA consequence data and economic factors for
selecting the best inspection strategy. The cost factors include the following.
The above recommended process should include sensitivity studies. It is also recommended that the
process be applied to assist in decisions faced following inspection (e.g., repair or replace the
component, revise the inspection strategy, or continue with the same strategy). A method is
suggested in ASME (1992a) for updating the overall risk-based inspection process, based on findings
or actions that are taken as a result of in-service inspections and other sources of new information on
component degradation, making the inspection program a living process.
Status and plans for further development: The risk-based method has been applied to individual
plants. The longer term objective of future efforts is to make comprehensive recommendations to ASME
Boiler and Pressure Vessel Code (BPVC) Section XI (ASME, 1992b), which specifies in-service in-
spection criteria. This effort requires development of a firm technical basis for such recommendations,
which does not presently exist. Achievement of that objective requires several years of work in applying
and refining the methodology outlined above.
Not only do risk prioritizations need to be conducted at the component level, but results need to be
aggregated across many plants at a level of detail that identifies generic trends (assuming that they do,
indeed, exist) across the industry. Consequently, much of the work done to date has involved the pilot
testing of the methodology and initial applications of it to identify the existence of generic trends.
A pilot test of the methodology has been performed that demonstrates the ability to calculate a
quantitative value of risk due to potential failure of any plant component addressed in the PRA. This
allows the ranking of individual components in various systems on the basis of risk. In addition, the
pilot test demonstrates that the methodology is workable, requires reasonable resource commitments,
and that the results are reasonable and in agreement with common-sense qualitative assessments.
Following the pilot test, analysis efforts have been directed toward determining to what extent results
can be generalized across plants. These analyses address system-level risk importance (as opposed to
component level). The system-level information thus developed is needed for subsequent component-
level efforts that are now being performed. As shown in Fig. 17-10, the results of analyses of six
pressurized water reactor (PWR) plants indicate that it may be reasonable to group systems into cate-
gories of high- to low-risk importance.
At the present time, component-level risk prioritization is being carried out for components at the
Surry-1 plant, as previously shown in Fig. 17-9. Risks associated with the failure of individual com-
ponents have been calculated for components in four systems. Future work will complete the calculation
of individual component failure risks for the Surry-1 plant. Similar analyses will be done for other
pressurized water reactor and boiling water reactor plants, and generic trends in component importance
will be developed.
Future work will also demonstrate the development of inspection strategies for selected components
that will maintain risks due to component failure below target risk values in a cost-effective manner.
4.3.2. Application of the Pacific Northwest Laboratory Approach. Application of the PNL ap-
proach to piping system welds in the Oconee-3 nuclear power plant is described in this section (Vo et
al., 1989a).
First, the Birnbaum importance measure (IB) for each system in the plant is computed. The Birnbaum
measure for system i is equal to the derivative of the CDF with respect to the system i failure probability.
This can be interpreted as the conditional probability of core damage, given system i failure. This value
was computed using existing PRA results. All accident sequences involving failures of the equipment
and components in the system were itemized. Then the failure probabilities of all equipment and com-
ponents in the system were set to unity and the resulting core damage probability was calculated for
each accident sequence. The sum of the probabilities over all the accident sequences is the required
conditional probability of core damage, given system i failure. Birnbaum importance measures were
calculated for 10 systems that are of interest to ASME code-type in-service inspection. These values
are listed in Table 17-2.
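The computation described above can be sketched as follows; the accident sequences and probabilities are invented for illustration.

```python
# Birnbaum measure of a system, computed as in the Oconee-3 study: set
# the failure probabilities of the system's equipment to unity in every
# accident sequence involving them, evaluate each sequence, and sum the
# resulting core damage probabilities.

# Each accident sequence is a list of (event, probability) factors -- invented.
sequences = [
    [("initiating event", 1.0e-3), ("EFW fails", 1.0e-2), ("HPI fails", 1.0e-3)],
    [("initiating event", 1.0e-3), ("EFW fails", 1.0e-2), ("feed-and-bleed fails", 1.0e-2)],
]

def birnbaum(sequences, system):
    """P(core damage | system failure): evaluate all sequences with the
    system's failure probabilities set to unity, and sum."""
    total = 0.0
    for seq in sequences:
        prob = 1.0
        for event, p in seq:
            prob *= 1.0 if event.startswith(system) else p
        total += prob
    return total

print(birnbaum(sequences, "EFW"))  # sums the two EFW sequences
```

Because only existing PRA sequence probabilities are manipulated, this calculation needs no new fault tree work, which is the point of reusing PRA results.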
Piping system failure probability (pipe break probability) is the sum of the failure probabilities of
all welds in the system (pipe breaks due to cracking in pipe base metal were ignored because of their
relatively much lower probabilities). These probabilities were estimated on the basis of generic data
(Wright et al., 1987).
Next the weld inspection importance measure (I), as defined in Section 4.1.2, was calculated for
[Figure 17-10 appears here: structural failure probability per year plotted against consequence
(conditional probability of core damage) for systems such as the reactor coolant (RCS), power
conversion (PCS), and steam generator (SG) systems, with data points for Surry-1, Sequoyah-1,
Zion-1, Crystal River-3, Calvert Cliffs-1, and Oconee-3.]
Figure 17-10. Risk-based evaluations for selected systems at six representative PWR plants. (Source: ASME
[1993]. Reprinted with permission from the American Society of Mechanical Engineers.)
Risk-Based Inspection and Maintenance 411
each system, by multiplying the Birnbaum measure by the piping system failure probability. These
values are listed in Table 17-2. The 10 systems were ranked according to this measure.
The second step is to rank the pipe welds in the most important systems. The emergency feedwater
(EFW) system was selected for the illustrative example. Instead of ranking each weld, groups of welds
in a section (segment) of the piping system may be ranked if the failure probabilities of the welds in
each section are approximately the same. In the illustrative analysis, the EFW piping system was divided
into the seven sections listed in Table 17-3. Weld failure probabilities may be computed by structural
reliability analysis. Because such analyses are expensive, in this application failure probabilities of each
group of welds were estimated on the basis of stress levels at these welds, generic data provided by
Wright et al. (1987), and expert opinion. Then the conditional probability of EFW system failure, given
a break in the section, was estimated on the basis of engineering judgment of systems analysts. Next
the conditional probability of core damage, given EFW system failure, was computed from existing
PRA results. The conditional probability of core damage, given a break (weld failure) in the pipe section,
was computed using Eq. (17-2).² This conditional probability of core damage was computed for each
pipe section and tabulated in Table 17-3. The pipe sections were ranked according to this probability.
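A minimal sketch of this section-ranking step, with the operator-recovery term omitted as in the footnoted simplification. All section names and probabilities are illustrative, not the Table 17-3 values.

```python
# Sketch of the pipe-section ranking step described above. With the
# operator-recovery term of Eq. (17-2) omitted (see footnote), each section's
# annual core damage contribution is taken as
#   P(break) * P(system failure | break) * P(core damage | system failure).
# All numbers below are illustrative, not from the Oconee-3 analysis.

p_cd_given_sys = 2.0e-3   # P(core damage | EFW failure), from existing PRA results

sections = {              # section: (annual break probability, P(EFW failure | break))
    "pump suction":   (3.0e-4, 0.9),
    "pump discharge": (1.0e-4, 0.5),
    "steam supply":   (5.0e-5, 0.1),
}

ranked = sorted(
    ((name, brk * p_sys * p_cd_given_sys) for name, (brk, p_sys) in sections.items()),
    key=lambda item: item[1],
    reverse=True,
)
for name, p_cd in ranked:
    print(f"{name}: {p_cd:.2e}")
```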
4.3.3. Applications to fossil-fuel-fired power plants. In comparing nuclear and fossil fuel power
plants, some of the similarities and differences that become apparent are as follows.
1. Fossil and nuclear power plants have many similar systems, fluids, and components, especially balance-of-
plant (BOP) systems.
2. Fossil power plants tend to be handling a milder process (lower pressure and temperature).
3. Nuclear plants tend to expose their BOP components to a milder environment (humidity, temperature,
cleanliness, etc.).
²Note that the "probability that operator fails to recover, given system failure," as given in Eq. (17-2), was omitted in this
example calculation.
4. Nuclear plants focus on preventing core damage and reducing the number of plant transients. Fossil fuel plants
focus on plant availability, personnel safety, environment, and reduction of operation and maintenance costs.
5. Nuclear plants have PRAs. These studies provide a starting point and information for developing a risk-based inspection and maintenance program. A fossil fuel plant program must start from raw data.
6. Nuclear and fossil fuel plants both contain systems that have redundancy. However, nuclear plants tend to
have more redundancy.
7. Nuclear plants tend to have fewer failures in their databases compared to fossil fuel units. Maintenance
tends to intervene more often at nuclear stations than at fossil fuel units, resulting in replacement of
components before catastrophic failures occur.
8. The downtime costs for a nuclear plant are usually much more expensive than for a fossil fuel unit.
9. The operation and maintenance of nuclear plants are more proceduralized than at fossil fuel plants.
These similarities and differences must be considered when adapting the risk-based techniques from
nuclear plants to fossil fuel units.
Methods and examples on how to develop a risk-based inspection program for fossil fuel-fired power
plant components are discussed in a forthcoming ASME report (ASME, 1993), and are briefly described
below.
The overall methodology is the same as that described in Section 4.1.1. Component reliability data
are obtained from generic data available from the electric power and insurance industries. Although
most nuclear power plants have conducted PRAs and thus have system event trees and fault trees, such
information is not available for most fossil power plants. Therefore event trees and fault trees may have
to be developed when necessary. The aforementioned ASME report (ASME, 1993) uses event trees and
fault trees developed by the Niagara Mohawk Power Corporation and Factory Mutual Research
Corporation.
The Niagara Mohawk Power Corporation used PRA techniques to support a control upgrade project
in a 35-year-old fossil fuel unit as part of their life assessment program in the mid-1980s. Component
ranking according to their contributions to plant unavailability is shown in Fig. 17-11. On the basis of
this and similar results, the company focused its resources on areas where they are needed the most.
The generally decreasing trend in the unavailability contributions of the higher ranked components, as
seen in Fig. 17-11, is attributed to this strategy. At the time the study was initiated, overall plant
availability stood at 70.4%. By 1989, after these methods had been applied, plant availability rose to
84.5%, as reported by the Utility Data Institute (1992). The methods are currently being expanded to
address several other Niagara Mohawk fossil fuel plants.
A multiple-plant economic optimization methodology has been developed by David Mauney at Carolina Power and Light Company. After components are selected for inspection and inspection strategies
are evaluated for each component, a model is developed to evaluate the economic constraints that a
utility faces in balancing resources over many components across several fossil units. Decision/risk
analysis tools are applied to the model in evaluating both inspection and replacement decisions across
the several units.
5. CONCLUDING REMARKS
Probabilistic structural mechanics has been used as an aid in determining in-service inspection frequencies of structural components in many industries. These approaches consider the reliability requirements
of individual components. In recent years, risk-based methods have been developed that consider the
system- or plant-level risk in developing inspection strategies for the components. Components are
ranked or grouped according to their contribution to plant risk. A cost-effective in-service inspection
[Figure 17-11 is a horizontal bar chart of plant unavailability contribution (%) by individual component, for the years 1986 through 1990, with the turbine as the highest-ranked contributor.]
Figure 17-11. Component contribution to plant unavailability. (Source: ASME [1993]. Reprinted with permission
from the American Society of Mechanical Engineers and Niagara Mohawk Power Corporation; this figure was
originally prepared by the Niagara Mohawk Power Corporation.)
frequency and method are then determined for each component or each group of components. Thus
resources are allocated to where they are needed the most.
A Research Task Force of the American Society of Mechanical Engineers has completed the devel-
opment of a risk-based approach that could be applied in the nuclear power, fossil fuel power, chemical,
petroleum, and aerospace industries. The method has been demonstrated for nuclear power plants; some
preliminary applications are also reported for fossil fuel-fired power plants. The Pacific Northwest
Laboratory (PNL) has also developed a risk-based approach for nuclear plants and demonstrated it in
several applications. Atomic Energy of Canada Limited, the United Kingdom Nuclear Submarine Pro-
gram, the Swedish Nuclear Power Directorate, and Japan's JAPEIC all have their own risk-based in-
spection programs under development. The PNL is also developing a risk-based maintenance program
for nuclear power plant components.
REFERENCES
ANS (American Nuclear Society) and Institute of Electrical and Electronics Engineers (IEEE) (1983). A Guide to
the Performance of Probabilistic Risk Assessment for Nuclear Power Plants. NUREG/CR-2300. Washington,
D.C.: U.S. Nuclear Regulatory Commission.
ASME (American Society of Mechanical Engineers) (1990). Metal Fatigue in Operating Nuclear Power Plants-
a review of Design and Monitoring Requirements, Field Failure Experience, and Recommendations to ASME,
Section XI Actions. New York: American Society of Mechanical Engineers.
ASME (American Society of Mechanical Engineers) (1991). Risk-Based Inspection Guidelines-Development of
Guidelines, Vol. 1: General Document. New York: The American Society of Mechanical Engineers.
ASME (American Society of Mechanical Engineers) (1992a). Risk-Based Inspection Guidelines-Development of
Guidelines, Vol. 2, Part 1: Light Water Reactor Nuclear Power Plant Components. New York: The American
Society of Mechanical Engineers.
ASME (American Society of Mechanical Engineers) (1992b). Section XI, Rules for Inservice Inspection of Nuclear
Power Plant Components, ASME Boiler and Pressure Vessel Code. New York: American Society of Me-
chanical Engineers.
ASME (American Society of Mechanical Engineers) (1993). Risk-Based Inspection Guidelines-Development of
Guidelines, Vol. 3: Fossil Fuel-Fired Electric Power Generating Station Application. New York: The Amer-
ican Society of Mechanical Engineers.
BALKEY, K. R., and D. O. HARRIS (1991). Risk-based inspection guidelines: The general process from ASME
research efforts. In: Fatigue, Fracture, and Risk-1991. New York: American Society of Mechanical Engi-
neers, pp. 33-38.
BALKEY, K. R., and F. A. SIMONEN (1991). Risk-based inspection guidelines for nuclear power plant components.
In: Transactions of the 11th International Conference on Structural Mechanics in Reactor Technology. Berlin,
Germany: International Association for Structural Mechanics in Reactor Technology.
Canadian Standards Association (1984). Periodic Inspection of CANDU Nuclear Power Plant Components. National
Standard of Canada CAN3-N285.4-M83. Toronto, Ontario, Canada: Canadian Standards Association.
CHAPMAN, O. J. V. (1983). A statistical approach to the analysis of ISI data using the Bayes method. In: Trans-
actions of the 7th SMiRT Conference. Berlin, Germany: International Association for Structural Mechanics
in Reactor Technology.
CHAPMAN, O. J. V. (1989). Probabilistic-based ISI and life extension. In: Transactions of the 10th SMiRT Confer-
ence. Berlin, Germany: International Association for Structural Mechanics in Reactor Technology.
CHEXAL, V. K., and J. S. HOROWITZ (1989). Flow-assisted corrosion in carbon steel piping. In: Proceedings of the
4th Symposium on Environmental Degradation of Materials in Nuclear Power Plant Systems.
COPELAND, J. F., et al. (1987). Component Life Estimation: LWR Structural Materials Degradation Mechanisms.
Palo Alto, California: Electric Power Research Institute.
DILLSTROM, P., F. NILSSON, B. BRICKSTAD, and M. BERMAN (1992). Application of probabilistic fracture mechanics
to allocation of NDT for nuclear pressure vessels: A comparison between initiation and fracture probabilities.
In: Fatigue, Fracture, and Risk. New York: American Society of Mechanical Engineers, pp. 127-132.
FRANK, L., W. S. HAZELTON, R. A. HERMANN, V. S. NOONAN, and A. TABOADA (1980). Pipe Crack Experience
in Light Water Reactors. NUREG-0679. Washington, D.C.: U.S. Nuclear Regulatory Commission.
IIDA, K., A. KUROKAWA, and Y. YAMADA (1990). Present situation of automatic inspection techniques in Japan.
In: Proceedings of the 10th ASM International NDE Conference. Materials Park, Ohio: ASM International.
LERCARI, F. A. (1989). Guidance for the Preparation of a Risk Management and Prevention Program. Sacramento,
California: California Office of Emergency Services.
NILSSON, F., L. SKANBERG, and P. BYSTEDT (1988). New Swedish regulations for safety of pressurized components.
Paper presented at the 9th International Conference on Nondestructive Evaluation in the Nuclear Industry,
Tokyo, Japan.
NRC (Nuclear Regulatory Commission) (1975). Reactor Safety Study: An Assessment of Accident Risks in U.S.
Commercial Nuclear Power Plants. Report No. WASH-1400 (NUREG-75/014). Washington, D.C.: U.S.
Nuclear Regulatory Commission.
NRC (Nuclear Regulatory Commission) (1979). Investigation and Evaluation of Stress Corrosion Cracking in
Piping in Light Water Reactors. Washington, D.C.: U.S. Nuclear Regulatory Commission.
NRC (Nuclear Regulatory Commission) (1988). Individual Plant Examination for Severe Accident Vulnerabilities.
Generic Letter 88-20. Washington, D.C.: U.S. Nuclear Regulatory Commission.
NRC (Nuclear Regulatory Commission) (1989). Severe Accident Risks: An Assessment for Five U.S. Nuclear Power
Plants, Vol. 1. NUREG-1150. Final Summary Report. Washington, D.C.: U.S. Nuclear Regulatory
Commission.
PLATTEN, J. L. (1984). Periodic (inservice) inspection of nuclear station piping welds, for the minimum overall
radiation risk. In: Proceedings of the 5th International Meeting on Thermal Nuclear Reactor Safety, Vol. 1.
Karlsruhe, Germany: Nuclear Research Center, pp. 617-625.
SAGE, A. P. (1977). Methodology for Large-Scale Systems. New York: McGraw-Hill.
SHAH, V. N., and P. E. McDONALD (1989). Residual Life Assessment of Major Light Water Reactor Components-
Overview. NUREG/CR-4731. Washington, D.C.: U.S. Nuclear Regulatory Commission.
U.S. Environmental Protection Agency (1987). Technical Guidance for Hazard Analysis. Washington, D.C.: U.S.
Environmental Protection Agency.
Utility Data Institute (1992). Power Plant Reliability Pioneering Effort by Niagara Mohawk. Washington, D.C.:
Utility Data Institute.
Vo, T. V., B. F. GORE, E. J. ESCHBACH, and F. A. SIMONEN (1989a). Probabilistic risk assessment based guidance
for piping in-service inspection. Nuclear Technology 88:13-20.
Vo, T. V., B. F. GORE, and M. S. HARRIS (1989b). Arkansas Nuclear One Unit 1 inspection. Nuclear Technology
84:14-22.
Vo, T. V., B. F. GORE, F. A. SIMONEN, and S. R. DOCTOR (1990). Development of generic inservice inspection
guidance for pressure boundary systems. Nuclear Technology 92(3):291-299.
Vo, T. V., P. G. HEASLER, S. R. DOCTOR, F. A. SIMONEN, and B. F. GORE (1991). Estimates of component rupture
probabilities: expert judgment elicitation. Nuclear Technology 94(1):259-271.
Vo, T. V., et al. (1992). Risk-based inspection for management of aging degradation. In: Proceedings of the Aging
Research Information Conference. Washington, D.C.: U.S. Nuclear Regulatory Commission.
Vo, T. V., B. F. GORE, F. A. SIMONEN, and S. R. DOCTOR (1993). Development of inservice inspection plans for
nuclear components at the Surry-1 Nuclear Power Station. Nuclear Technology 102(3):403-415.
WHEELER, T. A., S. C. HORA, W. R. CRAMOND, and S. D. UNWIN (1989). Analysis of Core Damage Frequency
from Internal Events: Expert Judgment Elicitation, Vol. 2. NUREG/CR-4550. Washington, D.C.: U.S. Nu-
clear Regulatory Commission.
WRIGHT, R. E., J. A. STEVERSON, and W. F. ZUROFF (1987). Pipe Break Frequency Estimation for Nuclear Power
Plants. NUREG/CR-4407. Washington, D.C.: U.S. Nuclear Regulatory Commission.
18
PROBABILITY-BASED LIFE
PREDICTION
1. INTRODUCTION
The estimation of the life expectancy of a structure is not a simple task. Many factors affect the life of
a structure. These factors include design parameters, design safety factors, design methods, type of
structure, structural details, materials, construction methods and quality, loads, maintenance practices,
inspection methods, and other environmental factors. These factors have different types of uncertainty
associated with them. Generally, these factors have the following types of uncertainty: (1) physical
randomness in magnitude and time of occurrence, (2) statistical uncertainties due to using limited
amount of information in estimating the characteristics of the population, (3) model uncertainties due
to approximations in the prediction models, and (4) vagueness in the definition of various factors and
their effect on life. Therefore, the estimation of life expectancy is a complex process.
Because of the stochastic nature of many of the uncertainties, a probabilistic approach, as opposed
to a deterministic approach, is better suited for life expectancy prediction. Probabilistic methods of life
expectancy prediction are the subject of this chapter. Life expectancy associated with failure modes such as yielding,
plastic deformation, and buckling is estimated using the extreme value modeling approach, and life
expectancy associated with failure modes such as fracture and fatigue is estimated using the cumulative
value modeling approach.
2.1. Notations
E Modulus of elasticity
F Number of failure modes
Fx Cumulative distribution function of X
fX Probability density function of X
Fy Yield stress
Probability-Based Life Prediction 417
2.2. Abbreviations
M = g(X1, X2, ..., Xp) = R - L    (18-1)
in which the Xi (i = 1, ..., p) are the p basic random variables that define the loads, material properties,
and other structural parameters, g(.) is the functional relationship between the basic random variables
and failure (or survival), R is resistance or strength, and L is load effect. The performance function can
be defined such that the limit state, or failure surface, is given by M = O. The failure event is defined
as the space where M < 0, and the survival event is defined as the space where M > 0; that is, the
structure is said to have failed if M < O. Thus, the probability of failure can be evaluated by the following
integral:
Pf = ∫M<0 fX(x1, x2, ..., xp) dx1 dx2 ... dxp    (18-2)

where fX is the joint density function of X1, X2, ..., Xp, and the integration is performed over the region
where M < O. In general, it may not be easy to evaluate this integral analytically. The stress-strength
interaction method, first-order/second-order reliability methods, or Monte Carlo simulation (with or
without variance reduction techniques) may have to be used. The methods are discussed in Chapters 2,
3, and 4, respectively.
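As a minimal illustration of the Monte Carlo option, the sketch below estimates the integral of Eq. (18-2) by sampling an assumed lognormal resistance and normal load effect and counting outcomes with M = R - L < 0. The distributions and parameters are chosen for illustration only.

```python
# Minimal Monte Carlo sketch of Eq. (18-2): estimate Pf = P(M < 0) with
# M = R - L, for an assumed lognormal resistance R and normal load effect L.
# Distribution choices and parameter values are illustrative only.
import random

random.seed(1)
N = 200_000
failures = 0
for _ in range(N):
    r = random.lognormvariate(mu=4.0, sigma=0.1)   # resistance, median exp(4)
    l = random.gauss(mu=30.0, sigma=8.0)           # load effect
    if r - l < 0:                                  # failure event M < 0
        failures += 1
pf = failures / N
print(f"estimated Pf = {pf:.4f}")
```

Variance reduction techniques (e.g., importance sampling) become necessary when the target probability is so small that direct sampling of this kind would require an impractical number of trials.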
The strength (or resistance) R of a structural component and the load effect L are generally functions
of time. Therefore, the probability of failure is also a function of time. The time effect can be incor-
porated in the reliability assessment by considering the time dependence of one or both of the strength
and load effects.
number of load cycles to failure can be related to the number of years of structural service. Therefore,
the cumulative value loading and resulting probability of failure are functions of the number of load
applications, which means that they are functions of time. The second and third methods are discussed
in Sections 5 and 6, respectively. The strength or resistance of a structure is also a function of time.
Generally, all structural members become weaker in the course of time, due to material corrosion and
deterioration. However, there are structural members that become stronger in the course of time, for
example, concrete structural members.
On the basis of the time-dependent load effect L(t) and strength R(t), the probability of failure can
be computed for a specified failure mode using one of the applicable methods described in Chapters 2
to 7 of this book. The resulting probability of failure Pf(t) is a function of time. Mathematically, the
probability of failure can vary from zero to one during the life of a structure. Realistically, the probability
of failure varies from an initial (design) probability of failure based on design values to a final prob-
ability of failure at the end of useful structural life. The resulting variation of the probability of failure
with time can be viewed as the cumulative distribution function of the structural life SL according to
the specified failure mode. The curve satisfies all the conditions of a cumulative probability distribution
function. This relationship can be expressed as follows:
FSL(t) = Pf(t)    (18-3)
where FSL(t) is the cumulative distribution function of structural life. The probability density function
of structural life fSL(t) can be determined by taking the first derivative of the cumulative distribution
function with respect to time t. The probability density function of structural life can be viewed, on the
basis of the basic definition of density functions, as the unconditional probability of failure per unit
time, or the unconditional instantaneous probability of failure, or the unconditional failure rate. In
contrast, a conditional probability of failure per unit time can be defined. If conditioning is performed
on the event "structural survival in the time period (0, t)," the resulting conditional probability of
failure is called the hazard function h(t). The hazard function h(t) is a measure of risk (or the probability
of failure per unit time) given that the structure did not fail in the time interval (0, t). Therefore,
the hazard function plot with time shows the change in risk level as the structural component becomes
older. Mathematically, the hazard function can be related to fSL(t) and FSL(t) as follows (Thoft-Christensen and Murotsu, 1985):

h(t) = fSL(t) / [1 - FSL(t)]    (18-4)
For small values of FSL(t), the hazard function is approximately equal to the density function of structural
life.
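The relationships in Eqs. (18-3) and (18-4) can be sketched numerically; the Weibull-shaped Pf(t) below is an assumed stand-in for a computed failure probability curve.

```python
# Numerical sketch of Eqs. (18-3) and (18-4): treat a computed failure
# probability curve Pf(t) as the life CDF F_SL(t), differentiate it for the
# density f_SL(t), and form the hazard h(t) = f_SL(t) / (1 - F_SL(t)).
# The Weibull-shaped Pf(t) is an assumed stand-in for analysis results.
import math

def F_SL(t, scale=60.0, shape=2.5):
    """Assumed Pf(t) curve (Weibull form), playing the role of Eq. (18-3)."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def f_SL(t, dt=1e-4):
    """Density of structural life by central difference of the CDF."""
    return (F_SL(t + dt) - F_SL(t - dt)) / (2.0 * dt)

def hazard(t):
    """Hazard function of Eq. (18-4)."""
    return f_SL(t) / (1.0 - F_SL(t))

for t in (5.0, 20.0, 40.0):
    print(f"t={t:4.0f}  F_SL={F_SL(t):.4f}  h={hazard(t):.5f}")
```

For small t the printed hazard and density nearly coincide, illustrating the approximation noted above for small FSL(t).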
On the basis of the above distributions of structural life, the mean value and variance of structural
life can be determined. Confidence levels on the structural life can also be determined. The calculations
need to be performed for each possible failure mode. Then the results need to be combined over all
the failure modes in the final assessment of structural life of the system (see Chapter 8 of this book).
In some applications, life expectancy is based on the failure of a critical component of the structure.
In other applications, the failure of the complete structural system consisting of a number of components
is considered. In the latter case, failure probabilities of the individual components are first computed
using methods described in Chapters 2 to 7 and in the foregoing discussion of this chapter, and then
the structural system failure probability is computed using methods discussed in Chapter 8 or other
such references on structural system reliability.
where Pf,W is the probability of failure of the structural component within the warranty period W, and
Pf,I is the probability of failure of the structure within any inspection period I. Equation (18-5) is based
on the assumptions that the probability of failure within any inspection period (not including the war-
ranty period) is the same, the events of failure within inspection intervals are statistically independent,
and all the inspection intervals, except the warranty inspection, have the same duration. These assump-
tions are considered herein for illustration purposes. Variations to these assumptions can be utilized and
the mathematical model can be modified accordingly.
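The assumptions stated above admit a simple product-form survival model. Because Eq. (18-5) itself is not reproduced in this excerpt, the sketch below is one consistent reading of those assumptions (independent, identically probable failures in equal inspection intervals), with illustrative numbers.

```python
# One consistent reading of the warranty/inspection-interval model described
# above (the exact form of Eq. (18-5) is not reproduced here). With
# independent failures, the probability of surviving the warranty period W
# plus m completed inspection intervals of length I is
#   (1 - Pf_W) * (1 - Pf_I)**m.
# All numbers are illustrative.

def life_cdf(t, W, I, pf_W, pf_I):
    """Probability of failure by time t (t >= W), evaluated at interval ends."""
    m = int((t - W) // I)     # completed inspection intervals after warranty
    return 1.0 - (1.0 - pf_W) * (1.0 - pf_I) ** m

# Illustrative numbers: 1-year warranty, 2-year inspection intervals.
print(life_cdf(t=11.0, W=1.0, I=2.0, pf_W=0.01, pf_I=0.03))
```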
Relating to theoretical determination of structural reliability to actual practice requires the owner of a
structure to make some difficult decisions. The most important of these is translating the economic
definitions of the end of the useful life of a structure into a structural definition of end of life. Cata-
strophic loss of a structure represents an economic as well as structural (and possibly human) disaster,
and dealing with the avoidance of catastrophic failures has dominated the investigations of engineers
using reliability methods. However, how does one apply reliability methods to decisions involving
service life extension? Certainly, if enough money is applied to repair and maintain a structure, the life
of the structure can be extended almost indefinitely. However, owners of commercial structures tend to
define the useful life of a structure as that period until the cost of repairing and maintaining a structure
to preserve some level of reliability exceeds a limiting amount. The limiting amount can be continually
changing, based on the owner's other commitments and the profitability of the operation in which the
structure is engaged. This is the economic definition of structural life.
Translating the economic definition into a structural one requires careful consideration and a certain
amount of decision making on the part of the owner. What is the target level of reliability? On the basis
of past experience with repair costs, how much damage can be accumulated before ceiling costs to repair
are exceeded? For the most likely failure modes of the structure, how much does it cost, both in lost
service time and physical repair, to correct the damage? Once questions such as these are answered, the
reliability engineer can begin to assess the likelihood of that level of damage being exceeded in a specified
duration of time or the likely duration of time required to reach that level of damage.
The life expectancy of a structural system depends largely on the structural definition of the end of
life for the system. In some cases, failure of a single component is considered as the end of life of the
system. In other cases, system redundancy is taken into account. For example, the structural definition
of the end of life of a redundant system can be defined as the failure according to a specific failure
mode of at least n components of the system out of N components that define the critical region of
failure of the system. This definition is based on a local treatment of the critical region rather than a
global consideration of the system as a whole. Global definition may also be considered in end-of-life
consideration. Methods of analyzing a system as a whole are considered in Chapter 8 of this book.
Mathematically, the probability of failure of at least n out of N components, Pf(n|N), can be computed in
the form of a lower limit and an upper limit. These limits correspond to a correlation coefficient ρ of
zero and one, respectively, among the N components. The lower limit can be computed on the basis of
the binomial probability distribution as follows:

Pf(n|N) = Σ (i=n to N) [N! / (i!(N - i)!)] (Pfp)^i (1 - Pfp)^(N-i)    (18-6)

where Pfp is the failure probability of an individual component according to the specified failure mode.
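The binomial lower limit of Eq. (18-6) can be evaluated directly; the panel count and component failure probability below are illustrative.

```python
# The binomial lower bound of Eq. (18-6) (independent components, rho = 0),
# evaluated directly; Pfp is the failure probability of a single component.
from math import comb

def pf_at_least(n, N, pfp):
    """Probability that at least n of N independent components fail."""
    return sum(comb(N, i) * pfp**i * (1.0 - pfp) ** (N - i) for i in range(n, N + 1))

# Example: at least 5 of 20 plate panels failing when each panel fails with
# probability 0.05 (illustrative numbers).
print(f"{pf_at_least(5, 20, 0.05):.3e}")
```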
Some basic concepts and equations of extreme value statistics are discussed first. Then the extreme
value modeling approach is described through an example problem.
Mk = max(X1, X2, ..., Xk)    (18-7)
The exact cumulative and density probability distribution functions of the maximum value are given
by, respectively (Ang and Tang, 1984):

FMk(x) = [FX(x)]^k    (18-8)

fMk(x) = k [FX(x)]^(k-1) fX(x)    (18-9)
It can be shown that for relatively large values of k, the extreme distribution approaches an asymptotic
form that is not dependent on the exact form of the parent distribution; however, it depends on the tail
characteristics of the parent distribution in the direction of the extreme. The central portion of the parent
distribution has little influence on the asymptotic form of the extreme distribution. These facts are of
practical interest and importance.
For parent probability distributions with exponential tails, the extreme distribution approaches an extreme value distribution of double-exponential form as k → ∞. For example, a normal or lognormal
probability distribution approaches a type I extreme value distribution as k → ∞ (Ang and Tang, 1984).
In this case, the difference between an exact distribution for Mk and the type I extreme value distribution
is relatively small. The difference diminishes as k → ∞. Practically, the difference is negligible for k
larger than approximately 25.
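The convergence claim can be checked numerically for a standard normal parent by comparing the exact distribution of the maximum, [FX(x)]^k, with a type I (Gumbel) fit using the standard asymptotic parameters for the normal case given by Ang and Tang (1984).

```python
# Numerical check of the convergence claim above: for a standard normal
# parent, compare the exact distribution of the maximum, [Phi(x)]^k, with the
# type I (Gumbel) approximation using the standard asymptotic parameters
#   u_k = s - (ln ln k + ln 4*pi) / (2 s),   alpha_k = s,   s = sqrt(2 ln k).
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def exact_max_cdf(x, k):
    """Exact CDF of the largest of k iid standard normal variables."""
    return phi(x) ** k

def gumbel_max_cdf(x, k):
    """Type I asymptotic approximation for the same maximum."""
    s = math.sqrt(2.0 * math.log(k))
    u = s - (math.log(math.log(k)) + math.log(4.0 * math.pi)) / (2.0 * s)
    return math.exp(-math.exp(-s * (x - u)))

k = 25
for x in (1.5, 2.0, 2.5, 3.0):
    print(f"x={x}: exact={exact_max_cdf(x, k):.4f}  type I={gumbel_max_cdf(x, k):.4f}")
```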
For the purpose of the structural life expectancy assessment, the mathematical model for the extreme
distribution needs to be a function of k in order to relate the outcome of the analysis of extreme statistics
to time. Extreme value distributions, like the type I largest extreme value distribution, are used in this
chapter to model extreme load effects. Because the mathematical model is not sensitive to the type of
the parent distribution, as long as it is within the same general class, the mathematical model used in
this chapter is based on a normal parent distribution.
For a normal parent probability distribution of the random variables (X1, X2, ..., Xk) with a mean
value μ and standard deviation σ, the cumulative distribution function of the largest value Mk of k
identically distributed and independent random variables (X1, X2, ..., Xk) is given by (Ang and Tang,
1984)
FMk(x) = exp{-exp[-αk(x - uk)]}    (18-10)

fMk(x) = αk exp{-αk(x - uk) - exp[-αk(x - uk)]}    (18-11)

where uk and αk are, respectively, the characteristic largest value and an inverse measure of dispersion of Mk, given for a normal parent distribution by

uk = μ + σ{√(2 ln k) - [ln(ln k) + ln(4π)] / [2√(2 ln k)]}

αk = √(2 ln k) / σ

The mean value and standard deviation of Mk can be determined approximately using the central and
dispersion characteristics of the type I extreme value distribution. They are given, respectively, by the
following:

E(Mk) = uk + γ/αk

σ(Mk) = π / (αk √6)

The constants π and γ have the values of 3.141593 and 0.577216, respectively.
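A sketch of these approximate central and dispersion characteristics for a normal parent, checked against a small simulation; the parameter values are illustrative.

```python
# Sketch of the approximate mean and standard deviation of the largest of k
# normal variables, using the type I (Gumbel) characteristics referred to
# above, checked against a small simulation. Parameter values are illustrative.
import math
import random
import statistics

def mk_moments(mu, sigma, k):
    """Approximate (mean, std dev) of the max of k iid N(mu, sigma) variables."""
    s = math.sqrt(2.0 * math.log(k))
    u = mu + sigma * (s - (math.log(math.log(k)) + math.log(4.0 * math.pi)) / (2.0 * s))
    a = s / sigma                    # inverse dispersion measure alpha_k
    gamma = 0.577216                 # Euler's constant
    return u + gamma / a, math.pi / (a * math.sqrt(6.0))

random.seed(7)
k, mu, sigma = 50, 10.0, 2.0
sim = [max(random.gauss(mu, sigma) for _ in range(k)) for _ in range(20_000)]
mean_approx, sd_approx = mk_moments(mu, sigma, k)
print(f"approx mean {mean_approx:.3f} vs simulated {statistics.mean(sim):.3f}")
print(f"approx s.d. {sd_approx:.3f} vs simulated {statistics.stdev(sim):.3f}")
```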
than five plate panels in a specified critical region at the end of any inspection period. It is assumed
that plate panels are to be replaced when the ratio of plastic deformation to plate thickness is greater
than or equal to 2.0. The limit of five panels is usually based on the resources allocated for repair and
steel replacement for the vessels in their lifetime maintenance cycle. When more than five panels need
to be replaced, the allocated costs are exceeded and the boat meets the economic definition of end of
structural life. The inspection schedule of the boat includes the warranty inspection at the end of the
first year followed by regular inspections every I years, where I can be either 1 or 2 years.
In this case, the performance function takes the following general form:

M = R - LSW - LD    (18-15)

where R is the plate resistance, LSW is the still water load, and LD is the dynamic load. Each of the terms in the above equation is expressed in units of pressure. The still water load is the
hydrostatic pressure at the depth of the critical region. It can be determined on the basis of the design
draft. The dynamic load is the extreme dynamic pressure, based on the results from full-scale experi-
ments conducted on one of the vessels. The resistance term is an empirical expression developed by
Hughes (1981) on the basis of elastoplastic plate response, and is given as
(18-16)
where Fy is the yield stress of the material, E is the modulus of elasticity of the material, Qy is the
initial yield load calculated from elastic theory, ΔQ0 accounts for the curved transition portion of the
load-deflection curve, ΔQ1 accounts for the subsequent straight portion of the load-deflection curve, Rw
is the ratio of the deflection wp at a given loading to the deflection at the completion of the edge hinge
formation wpo, and T(Rw) is given by
for Rw ≤ 1
(18-17)
for Rw > 1
In Eqs. (18-16) and (18-17), only Fy and E are treated as random variables. The rest of the variables
can be computed on the basis of plate geometry, boundary conditions, and Fy and E. A more detailed
discussion of Eqs. (18-16) and (18-17) and the development of the limit state equation for plastic plate
deformation are provided by Ayyub and White (1988) and Ayyub et al. (1989).
In this example, only loads and load effects in head seas are considered. No other heading is considered
because stress records reported by Purcell et al. (1988) indicate that other headings result in much smaller
stresses than the head-seas condition. Eight combinations of vessel speed and sea state for the head-seas
condition are considered herein. These combinations are summarized in Table 18-1. For the eight
combinations, strain measurements at locations of interest were performed by Purcell et al. (1988). The
combination of high sea state and high speeds was not tested. The percentages shown in Table 18-1
for each combination represent the percentage usage of the vessel in the corresponding speed/sea-state
combination. These percentages are based on a survey conducted by the same researchers. The total of
the percentages in Table 18-1 is about 20%, which is the expected percentage usage in head seas.
The performance function as given by Eq. (18-15) includes two components of pressure, that is, still
water and dynamic pressure. The still water pressure can be modeled using random variables. It need
not be modeled using the statistics of extreme value because it is a static load that is not time dependent.
It can therefore be defined by a single random variable, unlike time-dependent dynamic loads, which
need to be modeled using random processes and statistics of extremes. Because the still water pressure
was not measured, the mean value of the still water pressure was determined on the basis of hydrostatic
analysis, using the draft of the vessel, and was found to be 2.667 psi (Purcell et al., 1988). The
424 Probability-Based Life Prediction
Table 18-1. Vessel Speed/Sea-State Combinations in Head Seas and Their Percentage Usage

Sea state (wave height)   Low speed (12 knots)   Medium speed (24 knots)   High speed (29 knots)
Low (3 ft)                Case 1: 4.0%           Case 2: 1.7%              Case 3: 1.0%
Medium (8 ft)             Case 4: 4.7%           Case 5: 1.3%              Case 6: 0.7%
High (10 ft)              Case 7: 5.3%           Case 8: 1.0%              Not considered
coefficient of variation and distribution type of still water pressure are assumed to be 0.20 and normal,
respectively.
The strains due to the dynamic pressure were measured, and the computed stresses should be modeled
using the statistics of extremes. In order to use the statistics of extreme, the parent distribution for the
measured stress needs to be defined. The parent distribution of the stress is defined as the probability
distribution of a random variable that is defined as the maximum stress due to dynamic pressure in a
30-sec interval for all the cases in Table 18-1, except case 8. For case 8, the interval is taken as 10
sec. On the basis of this definition, the statistical characteristics of the parent distribution of stress for
the eight cases were determined using the data reported by Purcell et al. (1988). The selection of the
30- and 10-sec intervals was based on limitations of the buffer memory of the data acquisition system
used to measure the strains. The mean values and coefficients of variation (COV) for cases 6 and 8
were based on 10 and 23 maximum values taken from 10 and 23 records of stress time history, re-
spectively. Each record represents the stress time history for 10- and 30-sec intervals of measurement,
respectively. For the other cases, one record per case was reported; therefore the maximum value in
each record was considered as the mean value of the maximum stress for the corresponding case and
the COV was assumed to be the same as the COV for case 8, that is, 0.0993. Then plate theory and
finite element analysis were used to determine the mean value of the maximum dynamic pressure,
mean(Pmax), that causes the measured stresses. The results are summarized in Table 18-2. It is reasonable
to assume that the COV of the maximum dynamic pressure, COV(Pmax), is the same as the COV of
the maximum measured stress. The mean value and COV of the extreme pressure were then determined
using Eqs. (18-13) and (18-14) for a vessel usage period of 15 years at a rate of 3000 hr/yr and according
to the percentage use presented in Table 18-1. The results are shown in Table 18-2. The selection of
the usage period of 15 years and the 3000 hr of operation per year was for the purpose of illustration.

Table 18-2. Statistical Characteristics of Pressure for Example Problem (15 Years of Usage)
It is also assumed that the extreme pressure follows a type I largest extreme value probability
distribution.
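The conversion from the parent distribution to the statistics of the extreme pressure can be sketched with the Type I (Gumbel) asymptote for the largest of n values. The sketch below is a minimal illustration only: it assumes a normal parent (whose exponential-like upper tail admits the Type I asymptote), and the parent parameters and number of load peaks are hypothetical placeholders, not the values of the example.

```python
import math
from statistics import NormalDist

def type1_extreme_stats(mu, sigma, n):
    """Approximate mean and COV of the largest of n values drawn from a
    normal parent distribution, via the Type I (Gumbel) asymptote."""
    parent = NormalDist(mu, sigma)
    # Characteristic largest value: F(u_n) = 1 - 1/n
    u_n = parent.inv_cdf(1.0 - 1.0 / n)
    # Inverse measure of dispersion: alpha_n = n * f(u_n)
    alpha_n = n * parent.pdf(u_n)
    mean_ext = u_n + 0.5772156649 / alpha_n            # Euler-Mascheroni constant
    std_ext = math.pi / (math.sqrt(6.0) * alpha_n)
    return mean_ext, std_ext / mean_ext

# Hypothetical parent (unit mean, COV 0.10) and a hypothetical number of
# 10-sec peak intervals accumulated over the usage period.
mean_ext, cov_ext = type1_extreme_stats(1.0, 0.1, 162_000)
```

Note how the extreme statistics behave as the text describes: the mean of the extreme grows well above the parent mean while its COV shrinks far below the parent COV, which is why the extreme pressures in Table 18-2 differ markedly from the parent pressures.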
It is evident from Table 18-2 that case 8 is the most critical sea-state/boat-speed combination. There-
fore, for this case the statistics of the extreme pressures were determined with usage periods of 0.2,
0.5, 1, 2, 5, 10, 15, 50, and 100 years, using Eqs. (18-12), (18-13), and (18-14). These equations are
based on the assumption of an exponential tail for the parent distribution for the extreme value analysis.
In this example, the parent distribution is normal. The results of these evaluations are shown in Table
18-3.
The statistical characteristics of the strength of the material used for the vessel and the dimensions
of a plate within the critical region were determined by Ayyub and White (1987). The mean values and
COVs of the yield stress and modulus of elasticity of the material were estimated to be 47.8 ksi and 29,774
ksi, and 0.13 and 0.038, respectively. The mean values and COVs of the thickness and the overall
dimensions of the plate were estimated to be 0.161 in. and 11.75 in. × 23.5 in., and 0.01, 0.05, and
0.05, respectively.
The failure probabilities of a plate according to the limit state of Eq. (18-15) can be determined by
Monte Carlo simulation with variance reduction techniques. The conditional expectation with antithetic
variates variance reduction technique was used in the analysis (see Chapter 4 of this book for a de-
scription of this method). The average probability of failure of a plate (Pfp), coefficients of variation of
the estimate of the probability of failure COV(Pfp), and the numbers of simulation cycles for different
usage periods of the boat are shown in Table 18-4.
The end of the structural life of a vessel depends on many factors that include the rate of usage of
the vessel, loading condition and distribution, strength characteristics, the inspection and maintenance
strategies, and the definition of end of structural life. The critical region for the vessel was defined as
the region that consists of a total of 28 plates. These plates were assumed to experience the same
loading and to have approximately the same strength characteristics; therefore they have approximately
the same probability of failure. Because the end of life is defined as failure of more than 5 plates (of
the 28 plates), the vessel (or structural system) can be considered to fail if 6 plates or more of the 28
fail.
Let us first consider a warranty period of 1 year and an inspection interval of 1 year (i.e., W = 1
and I = 1 in Eq. [18-5]). For a period of 1 year, the plate failure probability (Pfp) is 0.06765 (from Table
18-4). Because end of life is defined as failure of at least 6 of 28 plates, we consider the n-out-of-N
system with n = 6 and N = 28. The failure probability of this system depends on the statistical correlation
between the plate failures. An upper bound failure probability is obtained when the correlation coeffi-
cient is unity and a lower bound is obtained when the correlation coefficient is zero. In our example,
the correlation between the plate failures is assumed to be small, and so the lower bound is closer to
the actual (unknown) value. The lower bound failure probability is given by Eq. (18-6). The equation
results in the probability of failure of at least 6 of 28 plates (Pf6/28,I) at the end of 1 year of service as
0.00989. Similar calculations for an inspection interval of 2 years (I = 2) with Pfp = 0.09403 (Table 18-
4) yield Pf6/28,I = 0.042719. Because the warranty period is 1 year (W = 1), Pf6/28,W = 0.009895.
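The n-out-of-N computation above can be reproduced with a short script. This is a sketch assuming the lower bound of Eq. (18-6) is the binomial probability of at least n independent plate failures; the input probabilities are those quoted in the text.

```python
from math import comb

def pf_at_least(n, N, p):
    """Probability that at least n of N plates fail, assuming
    statistically independent plate failures (lower-bound case)."""
    return sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range(n, N + 1))

pf_1yr = pf_at_least(6, 28, 0.06765)   # ~0.0099  (text: 0.00989)
pf_2yr = pf_at_least(6, 28, 0.09403)   # ~0.0427  (text: 0.042719)
```

The upper bound (perfectly correlated plates) would instead equal the single-plate probability Pfp itself, which brackets the system failure probability from above.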
Failure probabilities at different durations of service T (years of usage) were computed using Eq.
(18-5) for both I = 1 and 2. Failure probabilities at T = 1, 3, 5, 11, 15, 21, 31, and 51 years were
computed and plotted in Fig. 18-1. This graph of failure probability versus time is also the cumulative
distribution function of structural life. It is evident from Fig. 18-1 that by reducing the inspection
interval, expected structural life can be increased. This is because at the end of each inspection interval,
any reported deformation damage is to be fixed before sending the vessel out for the next usage period.
It may also be interesting to look at the hazard function (Eq. [18-4]) as applied to the probabilities of
failure from Fig. 18-1. The hazard function was computed. It decreases slightly with time. This is to
[Figure 18-1. Probability of failure (0 to 0.7) versus years of usage (0 to 60), plotted for inspection intervals I = 1 and I = 2.]
be expected because of the manner in which failure was defined. Because any damage short of the
defined level of "failure" must be repaired at the end of each inspection interval, the hazard from year
to year remains fairly constant. What is
interesting to note is that by decreasing the inspection interval the hazard is also reduced. However, it
should be noted that these results are highly dependent on the underlying assumptions in this example.
As noted in Section 3.2, the cumulative value modeling approach is suitable for failure modes such as
fracture and fatigue under cyclic loads. Probabilistic fracture mechanics and probabilistic fatigue analysis
techniques (discussed in Chapters 6 and 7 of this book, respectively) are used to compute structural
failure probabilities as a function of time. Examples of life prediction using these methods are provided
in Chapters 5, 6, and 22 of this book. Other examples in this area include the work of Ayyub et al.
(1989, 1990), Tallin et al. (1988) and Yazdani and Albrecht (1987, 1990).
7. CONCLUSIONS
The factors that affect the life of a structure include design parameters, design safety factors, design
methods, structure type, structural details, materials, construction methods and quality, loads, mainte-
nance practices, inspection methods, and other environmental factors. All possible failure modes of a
structure need to be identified. Then the most critical failure modes can be selected. Structural life
should be determined on the basis of these failure modes.
In this chapter, methods of structural life assessment are discussed. The methods are based on prob-
abilistic reliability concepts, statistics of extremes, and cumulative damage. The methods provide the
probability of failure of a structural system according to the identified failure modes as a function of time.
This function can be interpreted as the cumulative probability distribution function (CDF) of structural
life. The effect of inspection strategies on structural life is discussed. Two methods of life prediction,
namely, the extreme value modeling method and the cumulative value modeling method, are discussed.
An example illustrating the use of the extreme value modeling methodology is presented. The cumu-
lative value modeling methodology is described and references are made to other chapters in the book
where this method is used.
REFERENCES
ANG, A. H.-S., and W. H. TANG (1984). Probability Concepts in Engineering Planning and Design, Vol. 2:
Decision, Risk and Reliability. New York: John Wiley & Sons.
AYYUB, B. M., and G. I. WHITE (1987). Reliability Analysis of the Island-Class Patrol Boat. Report. Avery Point,
Connecticut: U.S. Coast Guard R&D Center.
AYYUB, B. M., and G. I. WHITE (1988). Life Expectancy Assessment and Durability of the Island-Class Patrol
Boat Hull Structure. Report. Avery Point, Connecticut: U.S. Coast Guard R&D Center.
AYYUB, B. M., G. I. WHITE, and E. S. PURCELL (1989). Estimation of structural service life of ships. Naval
Engineers Journal 101(3):156-166.
AYYUB, B. M., G. I. WHITE, T. F. BELL-WRIGHT, and E. S. PURCELL (1990). Reliability-Based Comparative Life
Expectancy Assessment of Patrol Boat Hull Structures. Report No. CG-D-02-90. Washington, D.C.: U.S.
Department of Transportation.
DE KRAKER, A., J. W. TICHLER, and A. C. W. M. VROUWENVELDER (1982). Safety, Reliability and Service Life of
Structures. Delft, the Netherlands: Department of Civil Engineering, Delft University of Technology.
HUGHES, O. F. (1981). Design of laterally loaded plating: uniform pressure loads. SNAME Journal of Ship Re-
search 25(2):77-89.
PURCELL, E. S., S. J. ALLEN, and R. J. WALKER (1988). Structural analysis of the U.S. Coast Guard island-class
patrol boat. SNAME Transactions 96(7):1-23.
SCHUMACHER, M. (1979). Seawater Corrosion Handbook. New Jersey: Noyes Data Corporation.
TALLIN, A., F. KIRKEMO, and H. MADSEN (1988). Probabilistic fatigue crack growth analysis with reliability up-
dating. In: Proceedings of the 5th ASCE Specialty Conference on Probabilistic Methods in Civil Engineering.
New York: American Society of Civil Engineers.
THOFT-CHRISTENSEN, P., and Y. MUROTSU (1985). Applications of Structural Systems Reliability Theory. New
York: Springer-Verlag.
YAZDANI, N., and P. ALBRECHT (1987). Risk analysis of fatigue failure of highway bridges. ASCE Journal of
Structural Engineering 113(3):483-500.
YAZDANI, N., and P. ALBRECHT (1990). Probabilistic fracture mechanics applications to bridges. Engineering Frac-
ture Mechanics 77(5):769-785.
19
SEISMIC RISK ASSESSMENT
M. K. RAVINDRA
1. INTRODUCTION
Large earthquakes could potentially cause extensive property damage, loss of life, and environmental
damage as a result of damage to critical facilities. Assessment of seismic risk to critical facilities such
as nuclear power plants and chemical plants is not only important from the standpoint of a large capital
investment but also because of potential damage to environment and public in terms of radiological or
toxic chemical releases. Seismic risk assessment is also important for an insurance portfolio of properties
and for lifelines such as transportation, water distribution, and electric transmission systems.
Damage from earthquakes may result directly from ground shaking or from fires following earthquakes.
Such damage could affect the safety of critical systems and/or lead to business interruption. The
concern for seismic safety extends from a single facility to multiple facilities in a region requiring
emergency planning. For nuclear power plants, seismic safety is ensured by following the Nuclear
Regulatory Commission requirements. For chemical facilities, the requirements under the Risk Man-
agement and Prevention Program of California (Lercari, 1989) ensure that the public in the vicinity of
chemical plants is not exposed to undue risks. The design of ordinary buildings is governed by building
codes such as the Uniform Building Code (International Conference of Building Officials, 1991), which
provides for an acceptable level of seismic safety for the building occupants. The requirements of the
state insurance commission limit the seismic exposure of insurance companies so that policy holders
are protected in a large earthquake.
In seismic risk assessment, most elements of probabilistic structural mechanics, probabilistic mod-
eling of earthquakes, and the system response to earthquakes are incorporated to derive outputs and
insights that aid in making decisions on seismic upgrades or insurance.
The organization of this chapter is as follows: Section 3 describes the methodology of seismic risk
assessment, focusing on the different elements of the assessment and on the different goals of risk
assessment (e.g., minimizing financial loss vs. toxic release). Section 4 describes the application of
seismic risk assessment to nuclear power plants and energy facilities. Section 5 discusses seismic risk
assessment methods for chemical facilities. Section 6 describes the methods used in analyzing the
seismic risk to insurance portfolios. Section 7 lists the major conclusions of the chapter.
429
430 Seismic Risk Assessment
2.1. Notations
2.2. Abbreviations
3. METHODOLOGY

A seismic risk assessment seeks to answer the following questions:
• How often do large earthquakes occur in the vicinity of the facility(ies) of interest?
• What is the response (i.e., failure or damage) of the facility to these earthquakes?
• What is the impact of the adverse response of the facility?
The key elements of a seismic risk assessment can be identified as (1) seismic hazard analysis, (2)
seismic fragility/vulnerability analysis, (3) system response/accident analysis, and (4) evaluation of con-
sequences. The outputs of a seismic risk assessment include the following.
• Frequencies of occurrence of accidents with different consequences (e.g., core damage, early fatalities, large
toxic chemical release, potential health effects, and property damage)
• Identification of dominant seismic risk contributors: If the seismic risk is not acceptable, these elements (i.e.,
components in a facility, different facilities in a region or portfolio) may be upgraded to reduce the risk or
aid in insurance decisions
In the following, each of the key elements of a seismic risk assessment is briefly described.
1. Identification of the potential sources of earthquakes (e.g., faults or seismotectonic provinces) in the region
around the site
2. Evaluation of the earthquake history of the region to assess the frequencies of occurrence of earthquakes
of different magnitudes or epicentral intensities
3. Development of attenuation relationships to estimate the intensity of earthquake-induced ground motion
(e.g., peak ground acceleration) at the site
4. Integration of the above information to estimate the frequency of exceedance for selected ground motion
parameters
The hazard estimate depends on uncertain estimates of attenuation, upper bound magnitudes, and
the geometry of the postulated sources. Such uncertainties are included in the hazard analysis by as-
signing probabilities to alternative hypotheses about these parameters. A probability distribution for the
frequency of occurrence is thereby developed. The annual frequencies for exceeding specified values
of the ground motion parameter are displayed as a family of curves with different probabilities (Fig.
19-1) assigned to different experts and hypotheses.
Two basic types of analytical models are available for probabilistic hazard analysis, depending
on whether faults or seismotectonic provinces are used for modeling the sources of future earthquakes
in the region. Many such methods have been developed over the last 20 years (Cornell, 1968; Der
Kiureghian and Ang, 1977; Mortgat and Shah, 1979; Bender, 1984).
Analysis based on seismotectonic provinces models the sources of future earthquakes as point sources
that occur randomly over time and space in a given province. The occurrence rate over a source zone
(province) is considered constant and independent of occurrence in other zones. Such models are more
appropriate where fault sources are not clearly identifiable (e.g., eastern United States). The fault models
consider the location, length, and movement of faults and are relevant for certain regions, such as
California. However, the basic procedure is similar for both the models. The different steps of the
analysis procedure are as follows.
Figure 19-1. Seismic hazard curves for a nuclear plant site. [The figure shows a family of 10 curves, each with an assigned probability: annual frequency of exceedance (10^-2 to 10^-6) versus acceleration (10 to 1000 cm/sec^2).]
A typical attenuation relationship has the form

Amax = k1 exp(k2 m) R^(-k3)                                                       (19-1)

where Amax is the peak ground acceleration, m is the earthquake magnitude, R is the distance measure,
and k1, k2, and k3 are empirical constants. A variety of attenuation relationships have been proposed
(McGuire, 1978; Donovan and Bernstein, 1978; Schnabel and Seed, 1973). Because different ground
motion experts prescribe different relationships for attenuation, the resulting uncertainty is reflected by
weights assigned to different relationships. The resulting peak acceleration is assumed to be the median
value and the actual values are considered to be lognormally distributed.
STEP 4. DEVELOPMENT OF HAZARD CURVES. The information obtained from the above steps is
consolidated to estimate the frequency of exceedance for different values of the ground motion parameter
by using the total probability theorem.
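The total-probability-theorem integration in this step can be sketched as a discrete sum over magnitude/distance source cells. Everything numeric below is a hypothetical placeholder (source rates, the attenuation constants, and the lognormal scatter), chosen only to make the sketch self-contained.

```python
import math

def annual_exceedance(a, cells, attenuation, sigma_ln=0.5):
    """Annual frequency that peak ground acceleration at the site exceeds
    level a: sum over discretized (annual rate, magnitude, distance) source
    cells of rate * P[PGA > a | m, R], with lognormal scatter about the
    median attenuation (total probability theorem)."""
    nu = 0.0
    for rate, m, R in cells:
        z = (math.log(a) - math.log(attenuation(m, R))) / sigma_ln
        nu += rate * 0.5 * math.erfc(z / math.sqrt(2.0))  # lognormal exceedance
    return nu

def atten(m, R):
    # Hypothetical attenuation in the multiplicative form k1*exp(k2*m)*R^(-k3)
    return 0.001 * math.exp(1.2 * m) / R**1.3

# Two hypothetical source cells: (annual rate, magnitude, distance)
cells = [(0.01, 6.0, 20.0), (0.001, 7.0, 30.0)]
hazard_low = annual_exceedance(0.02, cells, atten)    # frequent, low shaking
hazard_high = annual_exceedance(0.10, cells, atten)   # rarer, strong shaking
```

Evaluating the sum over a grid of acceleration levels traces out one hazard curve; repeating it for each expert's zonation and attenuation hypotheses yields the family of weighted curves in Fig. 19-1.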
The hazard estimate depends on uncertain estimates for attenuation, upper bound magnitudes, and
the geometry of the postulated sources. Distinction is generally made between uncertainty due to random
variation inherent in the occurrence of earthquakes and the propagation of the ground motion to the
site, and the modeling uncertainties due to diversity of expert opinions about zonations and attenuations.
This latter feature results in a family of hazard curves, each with different probabilities, reflecting the
subjective degree of belief in each relationship.
There may be wide differences in the set of final curves, depending on the methodology selected for
hazard analysis (e.g., 10 different curves are shown in Fig. 19-1). Among others, Bernreuter et al.
(1985), the Electric Power Research Institute (EPRI) (1989), the Yankee Atomic Electric Company
(1983), and Algermissen et al. (1982) have proposed parametric methods such as that discussed above,
whereas Veneziano et al. (1984) have presented a nonparametric historical methodology. These results
were critically compared in a study by Bernreuter et al. (1989). It was noted that the median hazard
curves based on a relatively large sample of expert judgment are in reasonable agreement for these
different methods. For an excellent discussion of the use of seismic hazard analysis in nuclear power
plant siting and licensing, the reader is referred to the book by Reiter (1991).
3.1.2. Multiple sites. The above description of the seismic hazard analysis applies to a single site
(e.g., nuclear power plant, chemical plant, dam, or tall building). By modeling a region as consisting
of a number of sites, this approach has been used to develop regional seismic hazard maps. For example,
the National Earthquake Hazard Reduction Program (NEHRP) has developed seismic hazard maps of
spectral acceleration (at periods of 0.1, 0.3, and 1 sec) for 10% probability of exceedance in 50, 100,
and 250 years (NEHRP, 1991); these maps have been adopted as part of the seismic design criteria for
buildings.
In many practical applications, the probability of a single earthquake affecting a number of sites
(e.g., nodes of a lifeline, portfolio of buildings, and essential or critical facilities in a county) is of
interest. For example, a business may have its warehouses sited at two different geographic locations.
The owner would like to know the probability of a single earthquake affecting both the warehouses
and disrupting the business. If this probability is unacceptably large, the owner may decide to strengthen
the buildings or to relocate to minimize the seismic risk. The seismic hazard model described above
could be utilized to calculate this probability and the level of upgrading (McGuire, 1976).
are discussed in Kennedy et al. (1988a,b), Kipp et al. (1988), Ravindra (1988), and Ravindra et al.
(1987).
Seismic fragilities are needed to estimate the frequency of occurrence of initiating events and to
quantify the fault trees for obtaining the seismically induced accident sequence frequencies. Seismic
fragility is described by means of a family of fragility curves reflecting the uncertainty in the parameter
values and in the models. A subjective probability is assigned to each curve, representing the degree
of belief in the set of parameter values and the model that yielded that curve. It is customary to show
the median fragility curve, the 95% confidence fragility curve, and the 5% confidence curve (Fig.
19-2). Using the double lognormal model, the fragility family is concisely described by means of three
parameters: Am, the median capacity of the component; the logarithmic standard deviation βR, reflecting
the randomness in the capacity; and the logarithmic standard deviation βU, reflecting the uncertainty in
the median capacity. These parameters are evaluated for each component for all critical failure modes
using the design information, earthquake experience database, and qualification and fragility test data.
In many applications, it is sufficient to use the mean fragility curve, whose parameters are Am and βC,
where βC is the composite variability given by (βR² + βU²)^(1/2).
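As a numeric illustration of the composite variability, the short script below uses the parameters of the component in Fig. 19-2 (Am = 0.87g, βR = 0.25, βU = 0.35) and the standard closed-form mean fragility curve of the double lognormal model.

```python
from math import log, sqrt
from statistics import NormalDist

Am, beta_R, beta_U = 0.87, 0.25, 0.35      # component parameters from Fig. 19-2
beta_C = sqrt(beta_R**2 + beta_U**2)       # composite variability, ~0.43

def mean_fragility(a):
    """Mean conditional probability of failure at peak ground acceleration
    a (in g) for the double lognormal model: Phi(ln(a/Am)/beta_C)."""
    return NormalDist().cdf(log(a / Am) / beta_C)
```

At a = Am the mean fragility is exactly 0.5, and the single parameter pair (Am, βC) reproduces the mean of the entire fragility family.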
In Sections 4 and 5 we describe the development of seismic fragilities for structures and equipment
in nuclear power plants and chemical facilities.
3.3.2. Vulnerability assessment. Seismic vulnerability is defined as the ratio of damage to the
replacement value of the structure or equipment as a function of earthquake ground motion. This ratio
is a random variable and as such is described by a probability density function. Typically, the mean
damage ratio is used as a function of the modified Mercalli intensity (MMI) at the site. This methodology
was first proposed by Whitman et al. (1975). It has been adopted in the Applied Technology Council
document ATC-13 (Applied Technology Council, 1985). In ATC-13, damage probability matrices for
different structures are given. The damage probability matrix gives the probability that a certain level
Figure 19-2. Median, 5% nonexceedance, and 95% nonexceedance fragility curves for a component. [The figure plots conditional probability of failure (0 to 1.0) versus peak ground acceleration for a component with Am = 0.87 g, βR = 0.25, and βU = 0.35; the mean fragility curve is also shown.]
of damage occurs for a given MM intensity. The damage probability matrices were developed by
soliciting expert opinion.
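A damage probability matrix can be used to compute a mean damage ratio at each intensity, in the spirit of ATC-13. The sketch below is purely illustrative: the damage states, central damage ratios, and probabilities are hypothetical numbers, not values taken from ATC-13.

```python
# Central damage ratio (damage cost / replacement value) for each of
# seven hypothetical damage states, from "none" to "collapse".
central_damage_ratio = [0.0, 0.005, 0.05, 0.20, 0.45, 0.80, 1.00]

# Hypothetical damage probability matrix: P[damage state | MMI].
# Each column (one MMI level) sums to 1.
dpm = {
    "VI":   [0.50, 0.30, 0.15, 0.04, 0.01, 0.00, 0.00],
    "VII":  [0.25, 0.30, 0.25, 0.12, 0.06, 0.02, 0.00],
    "VIII": [0.10, 0.20, 0.30, 0.20, 0.12, 0.06, 0.02],
    "IX":   [0.03, 0.10, 0.22, 0.25, 0.20, 0.12, 0.08],
}

def mean_damage_ratio(mmi):
    """Expected damage ratio at the given MMI: sum over damage states of
    P[state | MMI] times the state's central damage ratio."""
    return sum(p * d for p, d in zip(dpm[mmi], central_damage_ratio))
```

Multiplying the mean damage ratio by the replacement value and by the annual frequency of each intensity level gives the expected annual loss used in vulnerability studies.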
1. On the basis of the preliminary systems analysis and on previous seismic PRAs, select a set of structures
and equipment (about 300 items) for fragility evaluation.
2. Collect plant design and seismic qualification information.
3. Develop probabilistic floor and structural response by analysis or by appropriate extrapolation of the design
information.
4. Perform plant walkdown to search for seismic vulnerabilities, to assist in screening out high-capacity com-
ponents, and to collect additional data on components needing detailed fragility analysis. Procedures for
seismic walkdown are given in the EPRI seismic margin assessment methodology report (EPRI, 1991).
Typically, about 100 components are selected for detailed fragility analysis.
5. For each component, identify the critical failure modes. Past seismic PRAs could be used as a guide in
this identification. It is important to relate the failure mode of the component to the performance of the
system. The median capacity of the component in each failure mode is estimated using the data sources
discussed below. The randomness and uncertainty variabilities are also estimated using the same data
sources.
4.3.1. Fragility model. The entire fragility family for an element (structure or equipment) cor-
responding to a particular failure mode can be expressed in terms of the best estimate of the median
ground acceleration capacity Am and two random variables. Thus, the ground acceleration capacity A
is given by

A = Am εR εU                                                       (19-2)
[Figure: overview of the seismic risk assessment procedure. Seismic hazard curves (frequency versus seismic motion parameter) and component fragility evaluation feed the event trees, fault trees, and containment analysis; systems analysis yields release frequencies by release category, which are combined with atmospheric dispersion, weather data, population, evacuation, health effects, and property damage models in the consequence and risk analysis.]
in which εR and εU are random variables with unit medians, representing, respectively, the inherent
randomness about the median and the uncertainty in the median value. In this model, called the double
lognormal model, we assume that both εR and εU are lognormally distributed with logarithmic standard
deviations βR and βU, respectively. The formulation for fragility given by Eq. (19-2) and the assumption
of lognormal distribution allow easy development of the family of fragility curves that appropriately
represent fragility uncertainty. For the quantification of fault trees in the plant system and accident
sequence analyses, the uncertainty in fragility needs to be expressed as a range of conditional failure
probabilities for a given ground acceleration. This is achieved as explained below. With perfect knowl-
edge (i.e., accounting only for the random variability βR), the conditional probability of failure f' for
a given peak ground acceleration level a is given by

f' = Φ[ln(a/Am)/βR]                                                       (19-3)
where Φ(·) is the standard Gaussian cumulative distribution function. The relationship between f' and
a is the median fragility curve plotted in Fig. 19-2 for a component with a median ground acceleration
capacity Am = 0.87g and βR = 0.25. For the median conditional probability of failure range of 5 to 95%,
the ground acceleration capacity would range from 0.58g to 1.31g.
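The 0.58g to 1.31g range quoted above follows from the lognormal model, with the 5% and 95% capacity points at Am·exp(±1.645βR); a quick check:

```python
from math import exp
from statistics import NormalDist

Am, beta_R = 0.87, 0.25              # median capacity (g) and randomness
z95 = NormalDist().inv_cdf(0.95)     # 1.645

low = Am * exp(-z95 * beta_R)        # 5% capacity point, ~0.58 g
high = Am * exp(+z95 * beta_R)       # 95% capacity point, ~1.31 g
```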
When the modeling uncertainty βU is included, the fragility at each peak ground acceleration value
a becomes a random variable (uncertain). At each acceleration value, the fragility f can be represented
by a subjective probability density function. The fragility value f' for a subjective probability Q (also
known as "confidence") is given by

f' = Φ[(ln(a/Am) + βU Φ⁻¹(Q))/βR]                                                       (19-4)
where Q = P[f < f' | a], that is, the subjective probability (confidence) that the conditional probability
of failure f is less than f' for a peak ground acceleration a, and Φ⁻¹ is the inverse of the standard
Gaussian cumulative distribution function.
For example, the conditional probability of failure f' at acceleration 0.6g that has a 95% nonex-
ceedance subjective probability (confidence) is obtained from Eq. (19-4) as 0.79. The 5 and 95%
probability (confidence) interval on the failure probability at 0.6g is 0 to 0.79. Subsequent computations
are made easier by discretizing the random variable probability of failure f (fragility, which varies from
0 to 1) into different intervals and deriving a subjective probability qi for each interval (Fig. 19-4). Note
that the sum of qi associated with all the intervals is unity. The process develops a family of fragility
curves, each with an associated subjective probability qi.
The median ground acceleration capacity Am and its variability estimates βR and βU are evaluated
by taking into account the safety margins inherent in capacity predictions, response analysis, and equip-
ment qualification, as explained below.
4.3.2. Failure modes. The first step in generating fragility curves such as those in Fig. 19-2 is
to develop a clear definition of what constitutes failure for each of the critical components in the plant.
This definition of failure must be agreeable to both the structural analyst generating the fragility curves
and the systems analyst, who must judge the consequences of component failure. Several modes of
failure (each with a different consequence) may have to be considered and fragility curves may have
to be generated for each of these modes. For example, a motor-actuated valve may fail in any of the
following ways (Kennedy et al., 1980).
1. Failure of power or controls to the valve (generally related to the seismic capacity of the cable trays, control
room, and emergency power): Because they are not related to the specific item of equipment (i.e., motor-
actuated valve) and are common to all active equipment, such failure modes are most easily handled as
failures of separate systems linked in series with the equipment
2. Failure of the motor
3. Binding of the valve due to distortion and, thus, failure to operate
4. Rupture of the pressure boundary
It may be possible to identify the failure mode most likely to be caused by the seismic event by
reviewing the equipment design and considering only that mode. Otherwise, fragility curves are devel-
oped on the premise that the component could fail in any one of the potential failure modes.
Identification of the credible modes of failure is largely based on the experience and judgment of
the analyst. Review of plant design criteria, calculated stress levels in relation to the allowable limits,
qualification test results, seismic fragility evaluation studies done on other plants, and reported failures
(in past earthquakes, in licensee event reports, and fragility tests) are useful in this task.
Structures are considered to have failed functionally when they cannot perform their designated
functions. In general, structures have failed functionally when inelastic deformations under seismic load
are estimated to be sufficient to potentially interfere with the operability of safety-related equipment
attached to the structure, or fractured sufficiently so that equipment attachments fail. These failure modes
represent a conservative lower bound of seismic capacity because a larger margin of safety against
collapse exists for nuclear structures. Also, a structural failure has been generally assumed to result in
a common cause failure of multiple safety systems, if these are housed in the same structure. For
example, the service water pumps in the Zion nuclear power plant were assumed to fail if the crib
house pump enclosure roof collapses (Commonwealth Edison Company, 1981). Structures that are
susceptible to sliding are considered to have failed when sufficient sliding deformation has occurred to
fail buried or interconnecting piping or electrical duct banks.
For piping, failure of the support system and plastic collapse of the pressure boundary are considered
dominant failure modes.

[Figure: fragility curve — conditional probability of failure (vertical axis, 0 to 1.0) versus ground acceleration.]

Failure modes of equipment examined may include structural failure modes
(e.g., bending, buckling of supports, anchor bolt pullout), functional failures (binding of valve, excessive
deflection), and relay trip or chatter.
Consideration should also be given to the potential for soil failure modes (e.g., liquefaction, toe
bearing pressure failure, base slab uplift, and slope failures). For buried equipment (i.e., piping and
tanks), failure due to lateral soil pressures may be an important mode. Seismically induced failures of
structures or equipment under impact of another structure or equipment (e.g., a crane) may also be a
consideration. Seismically induced failures of dams, if present, resulting in either flooding or loss of
cooling source, should also be investigated.
4.3.3. Estimation of fragility parameters. In estimating fragility parameters, it is convenient to
work in terms of an intermediate random variable called the factor of safety. The factor of safety F on
ground acceleration capacity above the safe shutdown earthquake (SSE) level specified for design, ASSE,
is defined as follows:

A = F · ASSE

F = (actual seismic capacity of element) / (actual response due to SSE)
F is further factored into a product of component factors, as developed below.
Note that F can also be defined with reference to a different earthquake, such as the review earthquake
level in a seismic margin study.
The median factor of safety Fm can be directly related to the median ground acceleration capacity
Am as
Am = Fm · ASSE    (19-6)
The logarithmic standard deviations of F, representing inherent randomness and uncertainty, are
identical to those for the ground acceleration capacity A.
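The family of fragility curves implied by these parameters can be sketched numerically. The following is a minimal sketch, assuming the double-lognormal fragility model standard in seismic PRA practice (the form of the chapter's Eq. (19-4), which is not reproduced in this section); the parameter values are hypothetical.

```python
from statistics import NormalDist
import math

N = NormalDist()

def fragility(a, A_m, beta_R, beta_U, Q=0.5):
    """Conditional probability of failure at ground acceleration a (g),
    at non-exceedance confidence level Q. The double-lognormal form is
    assumed from standard seismic PRA practice, not quoted from the text."""
    return N.cdf((math.log(a / A_m) + beta_U * N.inv_cdf(Q)) / beta_R)

# Hypothetical parameters: median capacity 1.0 g, beta_R = 0.3, beta_U = 0.4.
p_median = fragility(1.0, 1.0, 0.3, 0.4, Q=0.5)   # 0.5 at a = A_m by construction
```

At a = Am and Q = 0.5 the conditional failure probability is exactly 0.5, a convenient sanity check; raising Q shifts the curve toward higher failure probabilities, tracing out the family of fragility curves.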
4.3.3.1. FRAGILITY OF STRUCTURES. For structures, the factor of safety can be modeled as the
product of three random variables:
F = Fs · Fμ · FRS    (19-7)
The strength factor Fs represents the ratio of ultimate strength (or strength at loss of function) to the
stress calculated for ASSE. In calculating the value of Fs, the nonseismic portion of the total load acting
on the structure is subtracted from the strength as follows:

Fs = (S − PN) / (PT − PN)    (19-8)
where S is the strength of the structural element for the specific failure mode, PN is the normal operating
load (i.e., dead load, operating temperature load, etc.), and PT is the total load on the structure (i.e.,
sum of the seismic load for ASSE and the normal operating load). For higher earthquake levels, other
transients (e.g., safety relief valve discharge and turbine trip) may have a high probability of occurring
simultaneously with the earthquake; the definition of PN in such cases should be extended to include
the loads from these transients.
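As a numeric illustration of the strength factor, the following sketch uses the working form Fs = (S − PN)/(PT − PN) that is standard in seismic fragility practice; the load values are hypothetical.

```python
def strength_factor(S, P_N, P_T):
    """Strength factor F_s: seismic strength margin over seismic demand at
    the SSE level. Working form (standard in fragility practice):
        F_s = (S - P_N) / (P_T - P_N)
    S   = element strength for the specific failure mode
    P_N = normal operating (nonseismic) load
    P_T = total load (seismic load at A_SSE plus normal operating load)."""
    return (S - P_N) / (P_T - P_N)

# Hypothetical loads in consistent units (e.g., MN):
F_s = strength_factor(12.0, 2.0, 6.0)   # (12 - 2) / (6 - 2) = 2.5
```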
The inelastic energy absorption factor (ductility) Fμ accounts for the fact that an earthquake repre-
sents a limited energy source and many structures or equipment items are capable of absorbing sub-
stantial amounts of energy beyond yield without loss of function. A suggested method to determine the
deamplification effect resulting from inelastic energy dissipation involves the use of ductility-modified
response spectra (Newmark, 1977). The deamplification factor is primarily a function of the ductility
ratio μ, defined as the ratio of maximum displacement to displacement at yield. More recent analyses
(Riddell and Newmark, 1979) have shown the deamplification factor to be a function of system damping.
One might estimate a median value of μ for low-rise concrete shear walls (typical of auxiliary building
walls) of 4.0. The corresponding median Fμ value would be 2.4. The variabilities in the inelastic energy
absorption factor Fμ are estimated as βR = 0.21 and βU = 0.21, taking into account the uncertainty
in the predicted relationship between Fμ, μ, and system damping.
The structure response factor FRS recognizes that in the design analysis structural response was
computed using specific (often conservative) deterministic response parameters (material properties and
loads) for the structure. Because many of these parameters are random (often with wide variability) the
actual response may differ substantially from the design analysis-calculated response for a given peak
ground acceleration.
The structure response factor FRS is modeled as a product of factors influencing the response
variability:
FRS = FSA · Fφ · Fδ · FM · FMC    (19-9)

where

FSA = spectral shape factor representing variability in ground motion and associated ground response spectra
Fφ = direction factor representing the variability in the two earthquake direction response spectral values about
the mean value
Fδ = damping factor representing variability in response due to the difference between actual damping and design
damping
FM = modeling factor accounting for uncertainty in response due to modeling assumptions
FMC = mode combination factor accounting for variability in response due to the method used in combining
modal responses
Fm = Fm1 · Fm2 · · · Fmn    (19-10)

and

βF = (β1² + β2² + · · · + βn²)^1/2    (19-11)
The logarithmic standard deviation βF is further divided into random variability βR and uncertainty
βU. To obtain βR, use in Eq. (19-11) the βR values of the variables affecting the factor of safety. Similarly,
to obtain βU, use the βU values of those variables. To obtain the median ground acceleration capacity
Am, the median factor of safety Fm is multiplied by the safe shutdown earthquake peak ground
acceleration.
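The lognormal combination rules described here can be sketched as follows; the individual factor medians and β values are hypothetical, and the product and square-root-sum-of-squares forms are the standard lognormal identities behind Eqs. (19-10) and (19-11).

```python
import math

# Hypothetical median factors of safety and their variabilities:
# name -> (median, beta_R, beta_U). Under the lognormal model, medians
# multiply and logarithmic standard deviations combine as the square
# root of the sum of squares.
factors = {
    "strength":  (2.0, 0.10, 0.20),
    "ductility": (2.4, 0.21, 0.21),
    "response":  (1.5, 0.15, 0.25),
}

F_m = math.prod(m for m, _, _ in factors.values())
b_R = math.sqrt(sum(bR ** 2 for _, bR, _ in factors.values()))
b_U = math.sqrt(sum(bU ** 2 for _, _, bU in factors.values()))

A_SSE = 0.17              # design SSE peak ground acceleration (g), hypothetical
A_m = F_m * A_SSE         # median ground acceleration capacity, Eq. (19-6)
```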
For each variable affecting the factor of safety, the random (βR) and uncertainty (βU) variabilities
must be separately estimated. The differentiation is somewhat judgmental, but it can be based on general
guidelines. Essentially, βR represents variability due to the randomness of the earthquake characteristics
for the same acceleration and to the structural response parameters that relate to these characteristics.
The dispersion represented by βU is due to factors such as the following.
• Our lack of understanding of structural material properties such as strength, inelastic energy absorption, and
damping
• Errors in calculated response due to use of approximate modeling of the structure and inaccuracies in mass
and stiffness representations
• Use of engineering judgment in lieu of complete plant-specific data on fragility levels, equipment capacities,
and responses
Table 19-1 gives a range of median, βR, and βU values of different variables affecting the fragilities
of nuclear components.
4.3.3.2. FRAGILITY OF EQUIPMENT. For equipment and other components, the factor of safety is
composed of a capacity factor Fc, a structure response factor FRS, and an equipment response (relative
to structure) factor FRE:

F = Fc · FRE · FRS    (19-12)

Table 19-1 Estimated Median Factors of Safety and Logarithmic Standard Deviation Associated with Safe
Shutdown Earthquake

The capacity factor Fc for the equipment is the ratio of the acceleration level at which the equipment
ceases to perform its intended function to the seismic design level. This acceleration level could cor-
respond to, for example, a breaker tripping in a switchgear, excessive deflection of the control rod drive
tubes, or failure of a steam generator support. The capacity factor for the equipment may be calculated
as the product of Fs and Fμ. The strength factor Fs is calculated using Eq. (19-8). The strength S of
equipment is a function of the failure mode. Equipment failures can be classified into three categories.
Elastic functional failures involve the loss of intended function while the component is stressed
below its yield point. Examples of this type of failure include the following.
The load level at which functional failure occurs is considered the strength of the component.
Brittle failure modes are those that have little or no system inelastic energy absorption capability.
Examples include the following.
Each of these failure modes has the ability to absorb some inelastic energy on the component level,
but the plastic zone is localized and the system ductility for an anchor bolt or a support weld is small.
The strength of the component failing in a brittle mode is therefore calculated using the ultimate strength
of the material.
Ductile failure modes are those in which the structural system can absorb a significant amount of
energy through inelastic deformation. Examples include the following.
The strength of the component failing in a ductile mode is calculated using the yield strength of the
material for tensile loading. For flexural loading, the strength is defined as the limit load or load to
develop a plastic hinge.
The inelastic energy absorption factor Fμ for an item of equipment is a function of the ductility ratio μ.
The median value of Fμ is considered close to 1.0 for brittle and functional failure modes. For ductile
failure modes of equipment that respond in the amplified acceleration region of the design spectrum,
Fμ = (2μ − 1)^1/2 · ε    (19-13)
where ε is a random variable reflecting the error in Eq. (19-13); it has a median value of 1.0 and a
logarithmic standard deviation βU ranging from 0.02 to 0.10 (increasing with the ductility ratio). For
rigid equipment, Fμ is given by

(19-14)

Again, ε is a random variable of median equal to 1.0 and logarithmic standard deviation ranging
from 0.02 to 0.10.
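For the amplified-acceleration region, the ductility-modified deamplification can be sketched with the standard Newmark form Fμ = (2μ − 1)^1/2, assumed here as the working version of Eq. (19-13); the ductility value is hypothetical.

```python
import math

def f_mu_amplified(mu):
    """Median inelastic energy absorption factor for equipment responding
    in the amplified acceleration region (standard Newmark ductility-
    modified form; assumed working version of Eq. (19-13))."""
    return math.sqrt(2.0 * mu - 1.0)

# Hypothetical ductility ratio for a ductile failure mode:
mu = 3.0
F_mu = f_mu_amplified(mu)   # sqrt(5), about 2.24
```

Note that at μ = 1 (no inelastic action) the factor reduces to 1.0, consistent with its definition.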
The median and logarithmic standard deviation of ductility ratios for different equipment are cal-
culated considering recommendations of Newmark (1977). This reference gives a range of ductility
ratios to be used for design. The upper end of this range might be considered to represent approximately
the median value, whereas the lower end of the range might be estimated at about two logarithmic
standard deviations below the median.
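The interpretation of a published design ductility range can be sketched as follows: the upper end is read as the median and the lower end as roughly two logarithmic standard deviations below it, so β = ln(μhi/μlo)/2. The range values are hypothetical.

```python
import math

def mu_median_and_beta(mu_lo, mu_hi):
    """Interpret a published design ductility range [mu_lo, mu_hi]:
    upper end ~ median, lower end ~ median * exp(-2 * beta)."""
    mu_m = mu_hi
    beta = math.log(mu_hi / mu_lo) / 2.0
    return mu_m, beta

# Hypothetical design range of ductility ratios:
mu_m, beta = mu_median_and_beta(2.0, 4.0)   # median 4.0, beta = ln(2)/2
```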
The equipment response factor FRE is the ratio of the equipment response calculated in the design to
the realistic equipment response, both calculated for the design floor spectra; it is thus the factor of
safety inherent in the computation of equipment response. It depends
on the response characteristics of the equipment and is influenced by some of the variables listed for
Eq. (19-9). These variables differ according to the seismic qualification procedure. For equipment qual-
ified by dynamic analysis, the important variables that influence response and variability are as follows.
For rigid equipment qualified by static analysis, no variable other than the qualification method is
significant. The equipment response factor is the ratio of the specified static coefficient to
the zero period acceleration of the floor level where the equipment is mounted. If the equipment is
flexible and was designed via the static coefficient method, the dynamic characteristics of the equipment
must be considered. This requires estimating the fundamental frequency and damping, if the equipment
responds predominantly in one mode. The equipment response factor is the ratio of the static coefficient
to the spectral acceleration at the equipment fundamental frequency.
Where testing is conducted for seismic qualification, the response factor must take into account the
following.
The overall equipment response factor is the product of these factors of safety corresponding to each
of the variables identified above. The median and logarithmic standard deviations for randomness and
uncertainty are estimated following Eqs. (19-10) and (19-11).
The structural response factor FRS is based on the response characteristics of the structure at the
location of component (equipment) support. The variables pertinent to the structural response analyses
used to generate floor spectra for equipment design are the only variables of interest to equipment
fragility. Time history analyses using the same structural models used to conduct structural response
analysis for structural design are typically used to generate floor spectra. The applicable variables are
as follows.
For equipment with a seismic capacity level that has been reached while the structure is still within
the elastic range, the structural response factors should be calculated using damping values correspond-
ing to less than yield conditions (e.g., about 5% median damping for reinforced concrete). The com-
bination of earthquake components is not included in the structural response because the variable is to
be addressed for specific equipment orientation in the treatment of equipment response.
Median Fm and variability βR and βU estimates are made for each of the parameters affecting capacity
and response factors. These median and variability estimates are then combined using the properties of
the lognormal distribution in accordance with Eqs. (19-10) and (19-11) to obtain the overall median factor
of safety Fm and the variability βR and βU estimates required to define the fragility curves for equipment.
4.3.4. Data sources. For structures such as the concrete shear walls, prestressed concrete con-
tainment, steel frames, masonry walls, field-erected tanks, and buried structures, the fragility parameters
are generally estimated using plant-specific information. For major passive equipment (e.g., reactor
pressure vessel, steam generator, reactor coolant pump, recirculation pump, major vessels, heat exchang-
ers, and major piping), it is preferable to develop plant-specific fragilities using original design analyses.
Because of the large quantities of other types of passive equipment (e.g., piping and supports, cable
trays and supports, heating, ventilating and air conditioning [HVAC] ducting and supports, conduits,
and miscellaneous vessels and heat exchangers), it is generally necessary to use generic fragilities for
them. For active equipment (e.g., mechanical and electrical pumps and switchgear), a combination of
generic and plant-specific information is needed to develop fragilities.
Several sources of information are utilized in developing plant-specific and generic fragilities for
equipment. These sources include the following.
Sources 1 to 5 are plant specific; sources 6 to 8 are generic data collected for similar types of
equipment.
Generic information consists of earthquake experience data (EQE, 1982, 1991), fragility test data
(Bandyopadhyay and Hofmayer, 1986; Bandyopadhyay et al., 1987, 1990; Holman and Chou, 1986a,b,
1990), qualification tests of similar components in other plants (ANCO, 1991), and expert opinion
(Cover et al., 1985). Fragility parameter values derived for several components in the past seismic PRAs
have been compiled in Campbell et al. (1988).
Table 19-2 shows the fragility parameter values Am, βR, and βU for some typical components (struc-
tures and equipment) in a nuclear power plant.
4.3.5. Seismic safety margins research program method. The fragility analysis of equipment and
structures discussed in the preceding sections was developed in the course of the seismic risk assess-
ments of individual plants funded by utilities. In a research program called the Seismic Safety Margins
Research Program (SSMRP) funded by the U.S. Nuclear Regulatory Commission at the Lawrence
Livermore National Laboratory, a comprehensive methodology for seismic risk analysis was developed
(Bohn et al., 1983). It differs from the method followed in the utility studies in the development of
probabilistic responses of components (structures and equipment) and description of seismic fragilities
in terms of local response parameter (e.g., spectral acceleration at the support level, displacement, and
moment) instead of ground motion parameters such as peak ground acceleration. The dependence
between component failures is explicitly accounted for by calculating the correlation between component
responses using the SMACS and SEISIM computer codes. The methodology for seismic risk quanti-
tation is also more rigorous, as explained in Section 4.5. Because the SSMRP method is significantly
more expensive than the utility method in terms of man hours and computer time, it is seldom used in
individual plant analyses. Some simplifications of the SSMRP method have been proposed and applied
in NUREG-1l50 studies (Bohn and Lambright, 1990) and shutdown decay heat removal risk studies
(Cramond et al., 1987).
4.3.6. Discussion. Fragility analyses have been performed for a number of nuclear power plants
in the United States, England, Sweden, Finland, Switzerland, Japan, Korea, and the Republic of China.
The techniques have matured over the last 10 years. They have also been subjected to critical review
by peer review committees and regulatory agencies. Actual fragility test data are becoming available
to validate the expert opinion.
The level of accuracy needed in the fragility analysis is related to the uncertainties in the seismic
hazard and to the ability to define the failure modes of components. Past seismic PRAs have shown
that large uncertainties exist in the seismic hazard estimates and dominate the overall uncertainty in the
accident sequence frequencies. Therefore, it does not serve to demand high accuracy (i.e., low uncer-
tainty) in the fragility estimates. In addition, the level at which failure occurs (i.e., equipment operability
is impaired) is only crudely related to "excessive" drift, cracking, and deformation.
Seismic risk analysis is used more for the insights than for the "bottom line" numbers. Using the
seismic PRA, different sequences, systems, and components could be ranked for their contribution to
seismic risk. The robustness of this ranking is generally examined by sensitivity studies.
4.4. Systems Analysis¹
Nuclear power plants have many safety systems to bring the plant into a safe shutdown condition,
thereby preventing core damage and mitigating any accident. There are many front-line systems per-
forming these functions; these front-line systems derive support (i.e., water, instrument air, steam, and
control power) from supporting systems. Analysis of nuclear power plants composed of multiple trains
of redundant safety systems is accomplished using event trees and fault trees (NRC, 1983). This analysis
was developed initially for the initiating events induced by operator errors and random failures, the so-
called "internal events."
Systems analysis for seismic events follows the approach taken for the internal events analysis.
However, the major differences between seismic and internal events are in the following.
1. The identification of initiating events
2. The increased likelihood of multiple failures of safety systems requiring a more detailed event tree
development
3. The more pronounced dependencies between component failures as a result of correlation between com-
ponent responses and between capacities
4.4.1. Initiating events. In addition to the standard list of initiating events postulated in the internal
event PRA, such as large loss-of-coolant-accidents (LOCAs), small LOCAs, and loss-of-offsite power
transient, possible building failures and multiple piping failures are considered in a seismic PRA.
4.4.2. Event trees. For each initiating event, an event tree is constructed. Again, the analyst must
be aware of the increased likelihood of multiple failures of safety systems under earthquake conditions.
The systems that are essentially guaranteed to be available for mitigating accidents initiated by internal
¹Methods of system reliability and risk analysis including initiating events, event tree analysis, fault tree analysis, cut sets,
risk quantitation, and ranking of contributors are discussed in some detail in Chapter 9 of this handbook.
events may fail under earthquake conditions. For example, in a risk study for a PWR, the analyst may
judge that, because three of five containment fan coolers will provide sufficient cooling for the con-
tainment in the event of core damage, the fan cooler system is always available for mitigation. But a
large earthquake may damage all five coolers, and this possibility must be reflected in the seismic event
trees.
4.4.3. Fault trees. The major difference between earthquakes and internal events lies in the quan-
titation of the fault trees. Also, because the fragility of components increases with the earthquake level,
pruning of the fault trees (fault tree reduction) should be done with more care than for the internal
events. The probability of failure estimated for each component in a seismic fault tree is composed of
both the seismic fragility and the random (nonseismic) unavailability of the component. Each accident
sequence is expressed by means of a union of minimal cut sets or by Boolean expressions. Different
accident sequences are grouped by plant damage states or release categories. Calculation of the accident
sequence probability conditional on the earthquake occurrence is done by considering the fragilities,
nonseismic unavailabilities, and the Boolean expressions. In addition, the median ground acceleration
capacity Am and variabilities βR and βU for each accident sequence and core damage may be computed
through the fault tree analysis.
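A minimal sketch of seismic fault tree quantitation along these lines: each component's conditional failure probability combines a lognormal seismic fragility with its random (nonseismic) unavailability, and the accident sequence probability is approximated by the first-order (rare-event) sum over minimal cut sets. All component parameters and cut sets below are hypothetical.

```python
from statistics import NormalDist
import math

Phi = NormalDist().cdf

def p_fail(a, A_m, beta, p_random=0.0):
    """Conditional failure probability at peak ground acceleration a (g):
    lognormal seismic fragility combined with the component's random
    (nonseismic) unavailability."""
    p_seis = Phi(math.log(a / A_m) / beta)
    return 1.0 - (1.0 - p_seis) * (1.0 - p_random)

# Hypothetical components: name -> (median capacity A_m in g, beta, random unavailability)
comps = {"A": (0.9, 0.4, 1e-3), "B": (1.1, 0.5, 2e-3), "C": (1.5, 0.4, 1e-4)}
a = 0.5  # ground acceleration level under review (g), hypothetical
p = {name: p_fail(a, Am, b, pr) for name, (Am, b, pr) in comps.items()}

# Hypothetical minimal cut sets for one accident sequence: {A and B} or {C}.
cut_sets = [("A", "B"), ("C",)]
# First-order (rare-event) approximation to the union of the cut sets:
p_seq = sum(math.prod(p[c] for c in cs) for cs in cut_sets)
```

Repeating this over a grid of acceleration values, weighted by the hazard curve, gives the sequence frequency; the approximation above neglects the correlation between component responses that the SSMRP method treats explicitly.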
lines compiled from past PRAs may be of value for future seismic PRAs. The following components
are found to be dominant risk contributors.
• Diesel generator (DG) system components such as diesel day tank, fuel oil tank, starting air receiver, and
DG control panel
• Electrical components such as 4-kV switchgear, batteries and racks, and motor control centers
• Heat exchangers such as residual heat removal (RHR) system and component cooling water (CCW) system
• Reactor internals and control rod drive (CRD) system
• Standby liquid control system tank
• Nuclear steam supply system (NSSS) supports
• Shear walls
4.7.2. ABWR seismic probabilistic risk assessment. General Electric Company (San Jose, Cali-
fornia) has conducted a seismic PRA on the advanced boiling water reactor (ABWR) and submitted it
for design certification. The PRA uses a bounding seismic hazard curve (judged to bound a large number
of sites in the United States), specific fragilities for NSSS, and generic fragilities for structures and
equipment. The objective of this PRA is to show that the design meets advanced light water reactor
goals on core damage and release frequencies and to identify any vulnerabilities in the design that
should be fixed before completion of design and plant siting.
Brookhaven National Laboratory (Upton, New York), under the sponsorship of the NRC, performed
a review of the seismic PRA. It was judged that the seismic fragilities assigned on a generic basis are
achievable considering the evolutionary design of the ABWR and the seismic design basis. However,
the review developed specific interfacing requirements to ensure that the as-built design meets the
assumptions of the seismic PRA. The review (Ravindra and Nafday, 1991) focused on the seismic
hazard, systems analysis, fragilities, and seismic risk quantitation. Sensitivity studies were performed
to identify the importance of components and systems.
1. The perception of seismic hazard in the plant vicinity has changed since the design of the plant.
2. The seismic design criteria have been revised substantially.
3. The plant has experienced an earthquake exceeding the operating basis earthquake and a safety assessment
of the plant is planned.
In the above situations, a seismic margin study may need to be performed to estimate the realistic
seismic capacity of the plant that can be stated with a high degree of confidence. Although a properly
conducted seismic probabilistic risk assessment (PRA) would provide answers regarding the seismic
capacities of components, systems, and the plant, the large uncertainties in the seismic hazard curves
and the large number of systems and components to be considered in a PRA would limit the attention
paid to the more critical components and systems in the plant. The seismic margin review, on the other
hand, would focus on a few components and systems in the plant whose failure could lead to severe
core damage or prevent safe shutdown of the plant. The output of a seismic margin review would be
an estimate of the plant seismic capacity expressed with a high degree of confidence and an identification
of plant seismic vulnerabilities.
The procedures for performing the assessment of existing seismic margins in nuclear power plants
in the United States have been developed somewhat independently in two research programs, one funded
by the NRC at the Lawrence Livermore National Laboratory (Livermore, California) and the other by
the Electric Power Research Institute (Palo Alto, California). In these procedures, the seismic margin
for a component/system or plant is expressed in terms of a high confidence of low probability of failure
(HCLPF) capacity. The HCLPF capacity is a conservative representation of capacity and corresponds
to the earthquake level at which it is extremely unlikely that loss of shutdown capability or core damage
will occur. From a mathematical perspective it may be defined as the mean peak ground acceleration
value for which there is a 5% probability of failure at 95% confidence. This acceleration value may be
obtained from Eq. (19-4) by setting Q = 0.95 and f′ = 0.05 as follows:

HCLPF = Am exp[−1.645(βR + βU)]
The HCLPF capacity is calculated for the components, systems, and the plant. Past seismic PRAs
have shown that the median capacities of these items exceed the HCLPF capacities by a wide margin
(factor of 2 to 4). Hence, the use of HCLPF capacity as an indicator of the seismic margin is deemed
appropriate.
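The HCLPF capacity and its relation to the median capacity can be sketched as follows, using the standard form HCLPF = Am·exp[−1.645(βR + βU)], where 1.645 is the 95th-percentile standard normal deviate; the fragility parameters are hypothetical.

```python
import math

def hclpf(A_m, beta_R, beta_U):
    """HCLPF capacity: the acceleration at which there is 95% confidence
    of less than 5% probability of failure; 1.645 is the 95th-percentile
    standard normal deviate."""
    return A_m * math.exp(-1.645 * (beta_R + beta_U))

# Hypothetical fragility parameters:
A_m, b_R, b_U = 1.2, 0.25, 0.35
c = hclpf(A_m, b_R, b_U)
ratio = A_m / c   # exp(1.645 * 0.6), about 2.7
```

With these values the median-to-HCLPF ratio is about 2.7, inside the factor-of-2-to-4 range noted in past seismic PRAs.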
The seismic margin review is performed using a series of screening procedures. A review earthquake
level that is typically much larger than the plant safe shutdown earthquake is selected. In the NRC
methodology, the first level of screening is done by focusing only on the plant system functions of
reactor subcriticality and early core cooling. On the basis of a review of all published seismic PRAs,
it was concluded that the seismic margin of a plant may be assessed by studying these system functions
only. In the EPRI methodology, the most viable success path for safe shutdown of the plant following
an earthquake is identified; the systems and components that are needed for this path are included in
the margin evaluation.
The second level of screening is done by sorting the components in the above systems into two
classes: those whose HCLPF capacities are generically higher than the review earthquake level and
those whose HCLPF capacities cannot be assumed to be higher than the review earthquake level without
further examination. The generic capacities of components have been established by reviewing the
fragility estimates in qualification analysis and test data.
The screening and margin review are aided by performing at least two plant walkdowns. The first
plant walkdown is aimed at confirming that no weaknesses exist in plant structures and equipment that
would make their HCLPF capacities lower than the generic values assumed in the second screening. It
is also intended to confirm the accuracy of system descriptions found in plant design documents and
to identify any system interactions, system dependencies, and plant unique features. The first plant
walkdown also provides an opportunity for gathering information on certain potentially weak compo-
nents for further HCLPF capacity calculations. The second plant walkdown is meant for collection of
specific data (e.g., size and other physical characteristics) on components requiring detailed analysis.
The components that require further margin evaluation, called the "screened-in" components, are
identified during the plant review and the two plant walkdowns. The next step in the NRC methodology
is to develop event trees and fault trees and derive the Boolean expression for the seismic-induced core
damage accident sequences. The final step is to evaluate the seismic margins for components and the
plant. This is accomplished by estimating the HCLPF capacities of components; the HCLPF capacity
of the plant is estimated by quantifying the core damage Boolean expression, using the component
fragilities. In the EPRI methodology, the HCLPF capacities of the components in the chosen success
path are estimated and the plant is considered to have a seismic capacity equal to the lowest of these
component capacities.
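The EPRI success-path bookkeeping reduces to a minimum over component capacities; a sketch with hypothetical component names and HCLPF values:

```python
# EPRI methodology: the plant HCLPF capacity equals the smallest
# component HCLPF capacity on the chosen safe-shutdown success path.
# Component names and capacities (in g) are hypothetical.
path = {
    "diesel day tank": 0.35,
    "4-kV switchgear": 0.28,
    "RHR heat exchanger": 0.42,
}
plant_hclpf = min(path.values())      # governing capacity on the path
weakest = min(path, key=path.get)     # candidate seismic vulnerability
```

The component realizing the minimum is also the natural candidate for upgrade if the plant capacity falls short of the review earthquake level.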
In addition to providing an estimate of the seismic capacity of the plant, a seismic margin review
should help in the identification of seismic vulnerabilities of the plant. These may include marginal
anchorage of equipment, unreinforced or lightly reinforced masonry walls, and inadequately braced
batteries and control room ceiling.
The NRC methodology as described above has been applied on a trial basis to estimate the seismic
margin of the Maine Yankee Atomic Power Station (Wiscasset, Maine). Maine Yankee is an 825-MW
pressurized water reactor (PWR) supplied by Combustion Engineering and has been in operation since
1972. The plant safety-related structures and equipment have been designed for an SSE of 0.10g an-
chored to a Housner spectrum. The plant structures and equipment have been subsequently reviewed
for an earthquake peak ground acceleration of 0.18g and certain modifications have been implemented.
The Maine Yankee utility and the U.S. NRC jointly agreed to evaluate the plant for a review earthquake
level of 0.30g anchored to a NUREG/CR-0098 median spectrum. On the basis of the plant review,
walkdown, and systems and fragility screening, two accident sequences were evaluated: no LOCA and
small LOCA (Ravindra et al., 1987). It was concluded that the plant has an HCLPF capacity against
core damage of at least 0.21g anchored to the NUREG/CR-0098 response spectrum. On the basis of
the results and insights of this study, the utility carried out certain modifications to the plant (e.g.,
strengthening of anchorage of diesel day tank, transformer, and containment spray fans, replacement of
station batteries, and bracing of block walls).
performance of a detailed walkdown to verify as-built condition and assess relative seismic vulnerabil-
ities. For critical items of equipment with ARM inventory, seismic capacities are calculated using the
RMPP guidelines cited above and probabilistic procedures. If earthquake ground motions should occur
exceeding these capacities, the equipment is assumed to fail. Depending on the failure mode of the
particular equipment, the consequential release of ARM is estimated, using engineering judgment. In
the following, the seismic assessment approach is described through a case study (Ravindra and Tong,
1991; Ravindra, 1992).
Figure 19-5. Major earthquake faults in Southern California region.
and local sea floor physiography, it was also concluded that the southern California coast is immune to
tsunami waves generated from distant earthquakes (e.g., Chile).
The local soil profile consists of silty sand and sand. These soils are firm to very firm throughout the
site; soil settlement as a result of strong seismic shaking was considered to be unlikely.
[Figure: seismic hazard curve — annual frequency of exceedance (logarithmic scale, 10⁻¹ down to 10⁻⁶) versus peak ground acceleration (0.00 to 1.00 g).]
5.4.1. Component evaluation. The steps involved in component evaluation for seismic adequacy are described below.
In the case study mentioned above, the equipment items that contain AHM were identified in the
systems analysis. A review of the design codes used and the design drawings was conducted to obtain
relevant information for calculating the seismic capacities. Detailed walkdowns of components and
systems in the facility were performed to obtain additional data and to identify any potential seismic
weaknesses. The components examined included vessels, reactors, heat exchangers, compressors,
pumps, piping, etc. The focus of the walkdown was on the anchorage of the equipment, lateral seismic
supports, and potential effects of failure of non-AHM components on the AHM components. The
experience data on the performance of industrial facilities during major earthquakes and the insights
gained in the seismic risk studies of critical facilities were used in this review and walkdown. The
experience database consists of information on the performance of structures and equipment in industrial
facilities, chemical plants, oil refineries, and power plants subjected to major earthquakes throughout
the world. Post-earthquake, on-site investigations have been conducted at the following sites (the number
in parentheses indicates the year of the earthquake): Chile (1985), Mexico (1985), Whittier, California
(1987), and Loma Prieta, California (1989) (EQE, 1986a,b, 1988, 1990).
Some components (e.g., compressors and pumps) could be assigned high seismic capacities on the
basis of the experience database. For the remaining components, seismic capacity evaluation was conducted. The relevant failure modes were identified and the capacity in the critical failure mode was
calculated. The median capacity Am and a logarithmic standard deviation reflecting total variability in
the capacity, β, were estimated. Figure 19-7 shows the failure probability of a component obtained
[Figure 19-7 (graph): conditional probability of failure versus peak ground acceleration (0.0 to 2.0g) for a component with Am = 1.249 and β = 0.45.]
                                   Median
                                   capacity         Failure        Consequence
Item    Description                (g)       β      mode           of failure                           Remarks
G-201   Recycle gas heater         0.40      0.35   Support legs   Tube rupture and pipe failure        Vertical heater vessel
A-231   Absorber column            1.24      0.45   Anchor bolts   Severe failure of connected piping   Tall columns
HX2D1   Heat exchanger             0.32      0.38   Bolts          Rupture of connecting pipe           Stacked heat exchanger
D-202   Low-pressure flash drum    0.72      0.38   Anchor bolts   Leakage at connected piping          Horizontal vessel
P-234   Pump                       0.80      0.40   Anchorage      Severe failure of connected piping   Ground-mounted equipment
C-256   Compressor                 0.80      0.40   Anchorage      Severe failure of connected piping   Ground-mounted equipment
using Am and β as a function of the peak ground acceleration (called a fragility curve). For each
component, the consequences of failure in terms of the release size were estimated by examining the
attached piping, nozzles, and the connection details. Table 19-3 gives a condensed list of the component
name, description, critical failure mode, estimated seismic capacities, and a description of the conse-
quence of failure in terms of release. This table shows that there are some components with relatively
low seismic capacities, because of marginal anchorage, that could be upgraded with minimal cost. Any
decision on upgrading must come only after the importance of the upgrading is evaluated in the context
of the overall risk mitigation plan.
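The lognormal fragility model described above can be sketched in a few lines of code. This is an illustrative sketch, not from the handbook: it assumes the standard formulation in which the conditional probability of failure at peak ground acceleration a is Φ(ln(a/Am)/β), with the A-231 values taken from Table 19-3.

```python
from math import erf, log, sqrt

def fragility(a, a_m, beta):
    """Conditional probability of failure at peak ground acceleration a (g),
    for a lognormal fragility with median capacity a_m (g) and logarithmic
    standard deviation beta: P(f | a) = Phi(ln(a / a_m) / beta)."""
    z = log(a / a_m) / beta
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Absorber column A-231 (Table 19-3): Am = 1.24g, beta = 0.45.
print(fragility(1.24, 1.24, 0.45))  # 0.5 at the median capacity
print(fragility(0.30, 1.24, 0.45))  # well below the median capacity
```

At a = Am the failure probability is 0.5 by construction; β controls how steeply the curve rises around the median.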
5.4.2. Sequence assessment (system analysis). An earthquake could cause damage to one or more
components in the facility. Each combination of component damage, herein called a sequence or cutset,
may have different consequences in terms of different AHM releases, including their possible reactions
and synergistic effects. The consequence evaluation is typically done using a dispersion analysis considering the types and rates of AHMs released.
By combining the seismic fragility of components with the seismic hazard for the site, we could
obtain the annual frequency of occurrence of different sequences. Table 19-4 shows representative
sequences and their frequencies. Of the sequences composed of single component failures, the heat
exchanger HX2D1 failure is seen to dominate because its seismic capacity is relatively low. It is also
observed that HX2D1 failure dominates the frequency of sequences consisting of two component failures. However, any decision to upgrade HX2D1 should consider the consequence of its failure by itself
or in conjunction with other component failures. This upgrading decision is also made by comparing
the seismic sequence frequencies with those of internal and other external events. (Internal events
include random failures of equipment and structures and operator errors. External events, other than
earthquakes, include hurricanes, tornadoes, flood, and external explosions.)
The above discussion focuses on the seismic risk from a single chemical facility. But a large earth-
quake could affect a number of such facilities in a region, posing a severe challenge to the emergency
response systems.
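The combination of component fragility with the site hazard described above can be sketched numerically. The hazard curve below is hypothetical (a simple power law), not the one from the case study; only the component capacities are taken from Table 19-3.

```python
import math

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def annual_failure_frequency(hazard, a_m, beta, a_grid):
    """Combine a seismic hazard curve with a lognormal fragility.
    hazard(a) is the annual frequency of exceeding peak ground
    acceleration a; a_m and beta define the component fragility."""
    freq = 0.0
    for lo, hi in zip(a_grid[:-1], a_grid[1:]):
        # Annual frequency of the ground motion falling in [lo, hi) ...
        occurrence = hazard(lo) - hazard(hi)
        # ... weighted by the failure probability at the interval midpoint.
        a_mid = 0.5 * (lo + hi)
        freq += occurrence * normal_cdf(math.log(a_mid / a_m) / beta)
    return freq

# Illustrative hazard curve: 1e-3/yr exceedance at 0.1g, falling as a^-2.
hazard = lambda a: 1e-3 * (a / 0.1) ** -2.0
grid = [0.05 * i for i in range(1, 41)]  # 0.05g to 2.0g

# Heat exchanger HX2D1 (Am = 0.32g) versus pump P-234 (Am = 0.80g).
f_hx = annual_failure_frequency(hazard, 0.32, 0.38, grid)
f_pump = annual_failure_frequency(hazard, 0.80, 0.40, grid)
```

As expected from the capacities in Table 19-3, the low-capacity heat exchanger dominates the single-failure sequence frequency.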
5.5. Conclusions
From a number of seismic assessments conducted as part of RMPPs of oil refineries and chemical
plants, the following conclusions are drawn.
1. A detailed walkdown of the facility is an essential element of the seismic assessment. The walkdowns
identify potential seismic vulnerabilities that cannot be seen on the design documents. For older facilities,
design information in terms of drawings and calculations is generally not available, making the data collected in the walkdown invaluable for estimating the seismic capacities. Walkdowns also identify the
"housekeeping" issues, for example, nuts on anchor bolts not replaced after maintenance, corroded anchor
bolts, and cracked concrete pedestals exposing the bolts. Some of these items could be fixed with minimal
cost.
2. The acceptable level of safety depends on the consequence of AHM release. If a quantitative analysis is
performed for seismic events along with internal and other external events, the relative contribution of the
seismic events to the overall AHM risk could be assessed.
6. PORTFOLIO ANALYSIS
An earthquake could affect a number of properties or facilities in a region. An insurer may be covering
the earthquake risk of such properties. A company could own manufacturing or storage facilities at
different locations within a region potentially affected by an earthquake. The concern for concurrent
damage to such facilities is addressed through a portfolio analysis. A portfolio is a set of properties or
facilities in a geographic region. The probability of damage to the portfolio as a result of earthquakes
is evaluated considering the seismic sources within the region and the characteristics of the portfolio.
The seismic hazard analysis is similar to that described above for single facilities: the frequencies of occurrence of earthquakes of different sizes (e.g., magnitudes) are calculated, and the effect of these earthquakes in terms of ground shaking and the resulting damage is considered in the portfolio vulnerability analysis.
Computer programs are used to perform the probable maximum loss (PML) calculations; these programs are described in EQE (1992) and Dong et al. (1987).
The annualized loss is calculated by taking into account the frequencies of occurrence of earthquakes
and the probabilities of damage to different properties. Descriptions such as mean annualized loss and
losses with different mean return periods could be estimated.
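The annualized-loss calculation can be sketched as follows. The event set and loss values are hypothetical, and the scenario-sum formulation is a simplification of what the portfolio programs cited above actually do.

```python
def mean_annualized_loss(scenarios):
    """Mean annualized loss as the sum, over earthquake scenarios, of the
    annual occurrence rate times the expected portfolio loss given that
    scenario.  scenarios: list of (annual_rate, expected_loss) pairs."""
    return sum(rate * loss for rate, loss in scenarios)

# Hypothetical three-scenario event set (rates per year, losses in $M).
scenarios = [
    (1e-2, 0.5),    # frequent, small earthquake
    (1e-3, 20.0),   # moderate earthquake
    (1e-4, 150.0),  # rare, large earthquake
]
print(mean_annualized_loss(scenarios))  # about 0.04 $M per year
```

Losses with a given mean return period would be read off the full loss-exceedance curve rather than this single summary number.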
7. CONCLUSIONS
This chapter has described the methods of seismic risk assessment used in different areas: nuclear power
plants, chemical plants, and portfolio analysis.
[Figure (graph): percent damage (0 to 40+) versus Modified Mercalli intensity (VI to XII).]
Systematic Approach
Seismic risk assessment is based on a systematic approach to all aspects of the risk: hazard, system
performance, and fragility/vulnerability evaluation. By studying each of these aspects separately and
together, one obtains a clear understanding of the importance of each in the overall risk framework.
The resources spent in improving our knowledge in one area (e.g., hazard or fragility) should be
commensurate with the overall risk reduction achieved.
Treatment of Uncertainty
There are major uncertainties in the seismic hazard, systems analysis, and fragility analysis. Any risk
assessment should acknowledge these uncertainties and treat them consistently. The risk assessment
applications in different areas discussed in this chapter approach this treatment of uncertainty differently;
in nuclear seismic PRAs, a full treatment of uncertainty has been practiced. In the portfolio analysis,
the use of PML permits only a partial treatment of uncertainty. Any identification of dominant risk
contributors and potential targets for seismic upgrading must consider the impact of such uncertainties.
REFERENCES
BANDYOPADHYAY, K. K., and C. H. HOFMAYER (1986). Seismic Fragility of Nuclear Power Plant Components
(Phase I), Vol. 1. NUREG/CR-4659. Washington, D.C.: Nuclear Regulatory Commission.
BANDYOPADHYAY, K. K., C. H. HOFMAYER, M. K. KASSIR, and S. E. PEPPER (1987). Seismic Fragility of Nuclear
Power Plant Components (Phase II): Motor Control Center, Switchboard, Panelboard and Power Supply,
Vol. 2. NUREG/CR-4659. Washington, D.C.: Nuclear Regulatory Commission.
BANDYOPADHYAY, K. K., C. H. HOFMAYER, M. K. KASSIR, and S. E. PEPPER (1990). Seismic Fragility of Nuclear
Power Plant Components (Phase II): Switchgear, I&C Panels (NSSS) and Relays, Vol. 3. NUREG/CR-4659.
Washington, D.C.: Nuclear Regulatory Commission.
BENDER, B. (1984). Seismic hazard estimation using a finite fault rupture model. Bulletin of the Seismological
Society of America 74(5):1899-1923.
BERNREUTER, D. L., J. B. SAVY, R. W. MENSING, H. C. CHEN, and B. C. DAVIS (1985). Seismic Hazard Characterization of the Eastern United States, Vols. 1 and 2. Report No. UCID 20421. Livermore, California:
Lawrence Livermore National Laboratory.
BERNREUTER, D. L., J. B. SAVY, R. W. MENSING, J. C. CHEN, and B. C. DAVIS (1989). Seismic Hazard Charac-
terization of 69 Nuclear Plant Sites East of the Rocky Mountains. NUREG/CR-5250. Washington, D.C.:
Nuclear Regulatory Commission.
BOHN, M. P., and J. A. LAMBRIGHT (1990). Procedures for External Event Core Damage Frequency Analyses for
NUREG-1150. NUREG/CR-4840. Washington, D.C.: Nuclear Regulatory Commission.
BOHN, M. P., et al. (1983). Application of the SSMRP Methodology to the Seismic Risk at the Zion Nuclear Power
Plant. NUREG/CR-3428. Washington, D.C.: Nuclear Regulatory Commission.
BUDNITZ, R. J., P. J. AMICO, C. A. CORNELL, W. J. HALL, R. P. KENNEDY, J. W. REED, and M. SHINOZUKA (1985). An
Approach to the Quantification of Seismic Margins in Nuclear Power Plants. NUREG/CR-4334 (UCID-
20444). Washington, D.C.: Nuclear Regulatory Commission.
California Department of Conservation (1988). Planning Scenario for a Major Earthquake on the Newport-
Inglewood Fault Zone. Special Publication No. 99. Sacramento, California: Division of Mines and Geology.
California Fire Chiefs Association (1990). Proposed Seismic Assessment Guidelines for RMPP Studies. Los
Angeles, California: California Fire Chiefs Association.
CAMPBELL, R. D., M. K. RAVINDRA, and R. C. MURRAY (1988). Compilation of Fragility Information from
Available Probabilistic Risk Assessments. UCID-20571 Rev. 1. Livermore, California: Lawrence Livermore
National Laboratory.
CASCIATI, F., and L. FARAVELLI (1991). Fragility Analysis of Complex Structural Systems. New York: John Wiley
& Sons.
CHEN, J. T., et al. (1991). Procedural and Submittal Guidance for the Individual Plant Examination of External
Events (IPEEE) for Severe Accident Vulnerabilities. NUREG-1407. Washington, D.C.: Nuclear Regulatory
Commission.
Commonwealth Edison Company (1981). Zion Probabilistic Safety Study. Chicago, Illinois: Commonwealth Edison
Company.
CORNELL, C. A. (1968). Engineering seismic risk analysis. Bulletin of the Seismological Society of America 58:
1583-1606.
COVER, L. E., D. A. WESLEY, and R. D. CAMPBELL (1985). Handbook of Nuclear Power Plant Seismic Fragilities.
NUREG/CR-3558. Washington, D.C.: Nuclear Regulatory Commission.
CRAMOND, W. R., et al. (1987). Shutdown Decay Heat Removal Analysis of a Babcock and Wilcox Pressurized
Water Reactor. NUREG/CR-4713. Washington, D.C.: Nuclear Regulatory Commission.
DER KIUREGHIAN, A., and A. H.-S. ANG (1977). A fault rupture model for seismic risk analysis. Bulletin of the
Seismological Society of America 67:1173-1194.
DONG, W., F. WONG, and H. C. SHAH (1987). Expert systems for earthquake insurance and the real estate investment
industry. In: Proceedings of International Symposium on AI, Expert Systems and Languages in Modeling and
Simulation, Barcelona, Spain, June 1987.
DONOVAN, N. C., and A. E. BERNSTEIN (1978). Uncertainties in seismic risk procedures. Journal of Geotechnical
Engineering Division, ASCE 104:869-887.
Electric Power Research Institute (1989). Probabilistic Seismic Hazard Evaluations at Nuclear Power Plant Sites
in the Central and Eastern United States: Resolution of the Charleston Earthquake Issue. EPRI NP-6395-D.
Palo Alto, California: Electric Power Research Institute.
ENGELBREKTSON, A. (1989). Characterization of seismic ground motions for probabilistic safety analyses of nuclear
facilities in Sweden. In: Transactions of the 10th Structural Mechanics in Reactor Technology (SMiRT)
Conference, Vol. K1. Amsterdam, the Netherlands: North-Holland Physics Publishing, pp. 37-42.
EPRI (1987). Generic Seismic Ruggedness of Power Plant Equipment. NP-5223. Palo Alto, California: Electric
Power Research Institute.
EPRI (1991). A Methodology for Assessment of Nuclear Power Plant Seismic Margin. EPRI NP-6041 SL, Revision
1. Palo Alto, California: Electric Power Research Institute.
EQE (1982). Program for the Development of an Alternative Approach to Seismic Equipment Qualification, Volume
I: Pilot Program Report: Volume II: Pilot Program Report Appendices. San Francisco, California: EQE, Inc.
EQE (1986a). Performance of Power and Industrial Facilities in the Epicentral Area of the 1985 Mexico Earth-
quake. San Francisco, California: EQE, Inc.
EQE (1986b). Summary of the March 3, 1986 Chile Earthquake. San Francisco, California: EQE, Inc.
EQE (1988). Summary of the October 1, 1987 Whittier, California Earthquake. San Francisco, California: EQE,
Inc.
EQE (1990). The October 17, 1989 Loma Prieta Earthquake. San Francisco, California: EQE, Inc.
EQE (1991). Summary of the Seismic Adequacy of Twenty Classes of Equipment Required for Safe Shutdown of
Nuclear Plants. San Francisco, California: EQE, Inc.
EQE (1992). EQEHAZARD-A Computer Program for Portfolio Analysis: User's Manual Version 3.1. San Fran-
cisco, California: EQE, Inc.
GEORGE, L. L., S. B. GUARRO, P. G. PRASSINOS, and J. E. WELLS (1985). SEISIM-Systematic Evaluation of
Important Safety Improvement Measures. Users Manual. UCID-20496. Livermore, California: Lawrence
Livermore National Laboratory.
HART, E. H. (1988). Fault-Rupture Hazard Zones in California-Alquist-Priolo Special Studies Zones Act of 1972
with Index to Special Studies Zones Maps. Sacramento, California: Department of Conservation, Division
of Mines and Geology.
HOLMAN, G. S., and C. K. CHOU (1986a). Component Fragility Research Program: Phase I Component Prioriti-
zation. NUREG/CR-4899. Washington, D.C.: Nuclear Regulatory Commission.
HOLMAN, G. S., and C. K. CHOU (1986b). Component Fragility Research Program: Phase I Demonstration Tests.
NUREG/CR-4900. Washington, D.C.: Nuclear Regulatory Commission.
HOLMAN, G. S., and C. K. CHOU (1990). Using component test data to develop failure probabilities and improve
seismic performance. In: Proceedings of 3rd Symposium on Current Issues Related to Nuclear Power Plant
Structures, Equipment, and Piping. Raleigh, North Carolina: North Carolina State University.
International Conference of Building Officials (1991). Uniform Building Code. Whittier, California: International
Conference of Building Officials.
KAMEDA, H. (1991). Fundamentals of PRA for seismic hazard and ground motion definitions. In: Proceedings of
the 11th SMiRT Post-Seminar on Seismic PRA of Nuclear Power Plant Structures. Tokyo, Japan: Atomic
Energy Society of Japan.
KAPLAN, S., and J. C. LIN (1987). An improved condensation procedure in discrete probability distribution cal-
culations. Risk Analysis 7(1).
KARIMI, R. (1983). SEISMIC-A Computer Program for Seismic Risk Evaluation. Report NUS-4064. Gaithersburg,
Maryland: NUS Corporation.
KENNEDY, R. P., and M. K. RAVINDRA (1984). Seismic fragilities for nuclear power plant risk studies. Nuclear
Engineering and Design 79(1):47-68.
KENNEDY, R. P., C. A. CORNELL, R. D. CAMPBELL, S. KAPLAN, and H. F. PERLA (1980). Probabilistic seismic
safety study of an existing nuclear power plant. Nuclear Engineering and Design 59(2):315-338.
KENNEDY, R. P., B. E. SARKAR, and L. S. CLUFF (1988a). On some aspects of seismic fragility evaluation for
Diablo Canyon seismic PRA. In: Proceedings of 2nd Symposium on Current Issues Related to Nuclear
Power Plant Structures, Equipment, and Piping with Emphasis on Resolution of Seismic Issues in Low-
Seismicity Regions. EPRI NP-6437-D. Palo Alto, California: Electric Power Research Institute, pp. 3-27 to
3-54.
KENNEDY, R. P., D. A. WESLEY, and W. H. TONG (1988b). Probabilistic Evaluation of the Diablo Canyon Turbine
Building Seismic Capacity Using Nonlinear Time-History Analysis. NTS Engineering Report No. 1643-01.
San Francisco, California: Pacific Gas & Electric Company.
KIPP, T. R., D. A. WESLEY, and D. K. NAKAKI (1988). Seismic Fragilities of Civil Structures and Equipment
Components at the Diablo Canyon Power Plant. NTS Engineering Report No. 1643-02. San Francisco,
California: Pacific Gas & Electric Company.
LERCARI, F. A. (1989). Guidance for the Preparation of a Risk Management and Prevention Program. Sacramento,
California: California Office of Emergency Services.
MCGUIRE, R. K. (1976). Fortran Computer Program for Seismic Risk Analysis. Open File Report 76-67. Golden,
Colorado: U.S. Geological Survey.
MCGUIRE, R. K. (1978). Seismic ground motion parameter relations. Journal of the Geotechnical Engineering
Division, ASCE 104:481-490.
MORTGAT, C. P., and H. C. SHAH (1979). A Bayesian model for seismic hazard mapping. Bulletin of the Seismological Society of America 69(4):1237-1251.
NEHRP (National Earthquake Hazard Reduction Program) (1991). Recommended Provision for the Development
of Seismic Regulations for New Buildings. Earthquake Hazards Reduction Series-17. Washington, D.C.:
Federal Emergency Management Agency.
NEWMARK, N. M. (1977). Inelastic design of nuclear reactor structures and its applications on design of critical
equipment. In: Transactions of the Structural Mechanics in Reactor Technology (SMiRT) Conference, Vol.
K. Amsterdam, the Netherlands: North-Holland Physics Publishing.
NRC (1983). PRA Procedures Guide, Vol. 2. NUREG/CR-2300. Washington, D.C.: Nuclear Regulatory
Commission.
NRC (1985). Probabilistic Safety Analysis Procedures Guide. NUREG/CR-2815. Washington, D.C.: Nuclear Reg-
ulatory Commission.
Pacific Gas and Electric Company (1988). Diablo Canyon Units 1 and 2 Long Term Seismic Program, Final
Report, Docket No. 50-275. San Francisco, California: Pacific Gas and Electric Company.
RAVINDRA, M. K. (1988). Seismic probabilistic risk assessment and its impact on margin studies. Nuclear Engineering and Design 107:51-59.
RAVINDRA, M. K. (1992). Seismic assessment of chemical facilities under California Risk Management and Prevention Program. In: Proceedings of the International Conference on Hazard Identification and Risk Analysis,
Human Factors and Human Reliability in Process Safety. New York: Center for Chemical Process Safety
of the American Institute of Chemical Engineers.
RAVINDRA, M. K., and J. J. JOHNSON (1991). Seismically induced common cause failures in PSA of nuclear power
plants. In: Transactions of the 11th Structural Mechanics in Reactor Technology (SMiRT) Conference, Vol.
M. Tokyo, Japan: Atomic Energy Society of Japan.
RAVINDRA, M. K., and R. P. KENNEDY (1983). Lessons learned from seismic PRA studies. In: Proceedings of the
7th Structural Mechanics in Reactor Technology (SMiRT) Conference. Amsterdam, the Netherlands: North-
Holland Physics Publishing.
RAVINDRA, M. K., and A. M. NAFDAY (1991). A Review of Seismic PRA of ABWR Standard Plant. Costa Mesa,
California: EQE Engineering.
RAVINDRA, M. K., and W. H. TONG (1991). Seismic risk analysis of conventional and chemical facilities. In:
Proceedings of the International Conference on Probabilistic Safety Assessment and Management. G. Apos-
tolakis (Ed.). Beverly Hills, California: pp. 881-885.
RAVINDRA, M. K., G. S. HARDY, P. S. HASHIMOTO, and M. J. GRIFFIN (1987). Seismic Margin Review of the
Maine Yankee Atomic Power Station, Vol. 3. NUREG/CR-4826. Washington, D.C.: Nuclear Regulatory
Commission.
RAVINDRA, M. K., W. H. TONG, M. J. GRIFFIN, and G. S. JOHNSON (1991). Seismic assessment under RMPP:
Recent applications. In: Proceedings of HAZMACON 91, T. Bursztynsky (Ed.). Oakland, California: Association of Bay Area Governments, pp. 114-121.
REED, J. W., M. W. MCCANN, J. IIHARA, and H. HADIDI-TAMJED (1985). Analytical techniques for performing
probabilistic seismic risk assessment of nuclear power plants. In: Proceedings of the International Conference
on Structural Safety and Reliability (ICOSSAR '85), Vol. 3. New York: International Association for Structural Safety and Reliability, pp. 253-261.
REITER, L. (1991). Earthquake Hazard Analysis: Issues and Insights. New York: Columbia University Press.
RIDDELL, R., and N. M. NEWMARK (1979). Statistical Analysis of the Response of Nonlinear Systems Subjected to
Earthquakes. Report UILU 79-2016. Urbana, Illinois: University of Illinois.
SCHNABEL, P. B., and H. B. SEED (1973). Accelerations in rock for earthquakes in the Western United States.
Bulletin of the Seismological Society of America 63:501-516.
TUNG, A. T. Y., and A. S. KIREMIDJIAN (1992). Application of system reliability theory in the seismic analysis of
structures. Earthquake Spectra 8(3):471-494.
VENEZIANO, D., C. A. CORNELL, and T. O'HARA (1984). Historical Method of Seismic Hazard Analyses. EPRI
NP-3438. Palo Alto, California: Electric Power Research Institute.
WHITMAN, R. V., J. BRIGGS, H. BRENNAN, C. A. CORNELL, R. DE NEUFVILLE, and E. VANMARCKE (1975). Seismic
design decision analysis. Journal of the Structural Division, ASCE 101:1067-1084.
Yankee Atomic Electric Company (1983). Maine Yankee Seismic Hazard Analysis. Report YAEC-1356. Framing-
ham, Massachusetts: Yankee Atomic Electric Company.
20
EXTREME-WIND RISK
ASSESSMENT
1. INTRODUCTION
Extreme winds constitute an important loading condition to all types of above-ground structures and
facilities. Extreme winds produce loads on structures as a result of the induced pressures, wind-propelled
missiles, atmospheric pressure change, and storm surge effects. Site-specific wind hazard risks vary
significantly across the United States, reflecting fundamental differences in topography and local cli-
matology. Extreme wind hazards are generally categorized according to the causative meteorological
system as extratropical cyclones, tropical cyclones (hurricanes), tornadoes, thunderstorms, and down-
bursts. In addition to these storm systems, topographical features may also result in localized, special
wind regions.
Probabilistic methods and statistical analysis are fundamental to any evaluation of structural per-
formance to extreme wind effects. First, both the wind climate and the turbulent structure of wind are
random processes and can be described only in statistical terms. Second, there are significant uncer-
tainties that influence our ability to accurately model the wind loading, including micrometeorology,
changes in exposure, internal pressure leakage, and aerodynamic considerations. Third, the structural
response models for wind effects are subject to prediction errors due to simplified and idealized rep-
resentations, damping uncertainties, and inelastic response.
The steps to perform structural reliability analysis and facility risk assessment for wind effects are
given in Fig. 20-1. The first step involves the development of the wind climatology at the site. One of
the main products of this step is the development of a windspeed versus frequency of exceedance curve
for each type of wind hazard. The methods that are used to predict the frequency and severity of
extreme winds depend on the specific wind-producing meteorological phenomena. Regions that expe-
rience several types of strong winds are referred to as mixed-wind climates and, consequently, an
analysis of extreme winds must take into account each potential storm type. In mixed-wind climates,
the most accurate wind loading frequency estimates are obtained from separate analysis of each phe-
nomenon. The results of the separate analyses are combined to form the site wind hazard curve, which
is used directly in the risk analysis or is transformed into load effects for structural reliability analysis.
The uncertainties in predicted windspeed increase substantially for annual probabilities of exceedance
465
Extreme-Wind Risk Assessment
less than about 0.005. Section 3 summarizes databases, methodologies, and computer codes used to
develop wind hazard curves.
Wind loading effects include, principally, the aerodynamic forces produced by the dynamic pressure
component of the wind flow and impact forces produced by objects picked up and accelerated by the
wind field. A third load effect is the atmospheric pressure change (APC) associated with the low central
pressure region within tornadoes. However, this effect is of practical interest only for enclosed, airtight
structures that will also not vent due to the breaching produced by wind pressures and/or missile impact
forces. Offshore facilities are subject to combined wind and wave loads, and coastal facilities are subject
to storm surge. The wind loading effects considered herein include dynamic pressure, wind-borne missiles, and APC. These effects are considered in the design criteria for critical land-based facilities, such
as nuclear power plants.
The structural response analysis to wind loads can be either dynamic analysis or static analysis. For
wind-sensitive structures, such as lattice structures, high-rise buildings, bridges, and chimneys, the dy-
namic response analysis is critical to the overall structural response, and the load and response analyses
are closely coupled. Wind tunnel measurements are often used to develop the aerodynamic loading and,
occasionally, the aeroelastic response of the structure. For nondynamically sensitive structures, such as
reinforced concrete structures and low-rise buildings, the wind pressure loads are often treated statically,
on the basis of code design procedures. Wind-borne missile loads and response are dynamic phenomena,
with the response analysis based on analytic predictions of missile impact velocities coupled with
semiempirical penetration formulas or dynamic response analysis, depending on the missile type. Section
[Figure 20-1 (flowchart; recoverable boxes): 1. Extreme wind hazards: extratropical cyclones, thunderstorms, hurricanes, tornadoes. 2. Load/response models: micrometeorology; wind loading effects (pressure, missiles, atmospheric pressure change); structural response model (dynamic or static).]
Figure 20-1. Structural reliability analysis and facility risk assessment for extreme wind effects.
4 discusses wind loads and response analysis for several classes of structures, with emphasis on dynamic
pressure effects.
Step 3 in Fig. 20-1 produces the structural reliability information. Because wind effects cannot be
estimated or predicted with certainty, the proper measure of performance is probability of failure or
unserviceability. A prediction of the probability of unacceptable performance must consider each of the
stated wind effects, where appropriate, and each potential failure mode of the structure. Structural
reliability analysis is discussed in Section 5.
In some cases, a probabilistic risk analysis of the facility is performed for wind hazards, as depicted
in step 4 in Fig. 20-1. For facilities such as industrial plants, the system probability of failure is based
on the contribution of each structure or equipment component. The results of the structural reliability
analysis feed into the facility systems failure logic model, generally expressed in Boolean algebra and/or
fault trees. The effects of wind-induced failures are propagated through the plant systems to determine
the frequency of each plant damage state, using standard probabilistic risk assessment (PRA) methods.
The contribution of the wind hazard to critical plant damage states can be evaluated by comparison to
other external event hazards, such as earthquakes or floods. Section 6 summarizes extreme wind PRA
analysis methodology. Conclusions are presented in Section 7.
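As an illustrative sketch of how structural reliability results feed a systems logic model, consider a hypothetical two-cutset Boolean model with assumed component failure probabilities, evaluated with the rare-event approximation (the notation P(D), P(di) follows Section 2.1):

```python
def cutset_union_probability(cutsets, p_component):
    """System damage probability P(D) for a Boolean failure model given as
    a list of minimal cutsets (each a list of component names), using the
    rare-event approximation: P(D) is approximately the sum, over cutsets,
    of the product of component failure probabilities P(d_i)."""
    total = 0.0
    for cutset in cutsets:
        prob = 1.0
        for component in cutset:
            prob *= p_component[component]
        total += prob
    return total

# Hypothetical model: damage if the tank fails, or if both redundant
# fan trains fail.
p = {"tank": 1e-3, "fan_a": 5e-2, "fan_b": 5e-2}
cutsets = [["tank"], ["fan_a", "fan_b"]]
print(cutset_union_probability(cutsets, p))  # about 3.5e-3
```

In a full PRA this evaluation would be repeated at each wind hazard level, with P(di) read from the component fragility curves.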
2.1. Notations
A Area
C1 Constant
Cd Drag coefficient
CL Lift coefficient
C1 Moment coefficient
Cm Moment coefficient
Cp Pressure coefficient, also moment coefficient
D Characteristic dimension (width)
d Diameter
f Frequency (Hz)
f(z, t) Time-dependent force per unit height
G Gust factor
g Peak factor; also, acceleration due to gravity
H Height of structure
Iu Intensity of longitudinal turbulence
k von Kármán constant; also, Weibull distribution parameter
L Length scale
P(D) Probability of system damage or failure
P(di ) Probability of ith component damage or failure
p Pressure
R Return period
Rmax Radius to maximum winds
S Tornado subregion
S(f) Spectral density function
St Strouhal number
t Time
U Mode of the Fisher-Tippett extreme value distribution
u* Friction velocity
V Random variable representing windspeed
Vd Design windspeed
VE Windspeeds produced by extratropical storms
Vfm Fastest-mile windspeed
VT Windspeeds produced by thunderstorms
Vt Tornado windspeed
V Mean windspeed
V Median windspeed
v Specific value of the random variable V
zd Displacement height
zg Gradient height
z0 Surface roughness
1/α Dispersion of the Fisher-Tippett extreme value distribution
β Power-law exponent
βv Lognormal standard deviation
Φ Cumulative distribution function of the standard normal variate
λ Mean occurrence rate
μ Mean value
θ Wind direction
ρ Density of air
σ Standard deviation
τ0 Surface shear stress
ν Cycling rate
2.2. Abbreviations
3.1. Overview
In structural reliability and facility risk assessments, wind hazard analysis consists of the development
of probabilistic statements on the long-term windspeeds. These statements can be represented by prob-
ability distribution functions or expressed as discrete probabilities for specified windspeeds. The results
are generally given as windspeed exceedance probabilities, P(V > v), for a specified time period, usually
1 year. A plot of windspeed versus annual frequency of exceedance (called the wind hazard curve) is
developed, as illustrated in Fig. 20-2. Uncertainties in the mean hazard function or curve are generally
ignored in structural reliability analyses. However, in PRAs, the uncertainties are often represented
through a family of curves, accounting for uncertainties and modeling errors. (This concept is discussed
in more detail in Chapter 19, in reference to seismic hazard curves.) It is important to note that, for
wind-sensitive structures, the wind hazard analysis requires the development of joint probability func-
tions of windspeed and direction.
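One common way to construct such exceedance probabilities for extratropical wind climates is to fit a Fisher-Tippett Type I (Gumbel) distribution to annual-maximum windspeeds. The sketch below uses a method-of-moments fit on hypothetical data; it is an illustration, not a procedure prescribed by this chapter.

```python
import math

def gumbel_fit(annual_maxima):
    """Method-of-moments fit of a Fisher-Tippett Type I distribution:
    P(V <= v) = exp(-exp(-alpha * (v - u))), with mode u and
    dispersion 1/alpha.  Returns (u, alpha)."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((v - mean) ** 2 for v in annual_maxima) / (n - 1)
    alpha = math.pi / math.sqrt(6.0 * var)
    u = mean - 0.5772 / alpha  # 0.5772 = Euler-Mascheroni constant
    return u, alpha

def annual_exceedance(v, u, alpha):
    """Annual probability that the maximum windspeed exceeds v."""
    return 1.0 - math.exp(-math.exp(-alpha * (v - u)))

# Hypothetical annual-maximum fastest-mile windspeeds (mph).
data = [62, 55, 71, 58, 66, 60, 75, 57, 63, 69]
u, alpha = gumbel_fit(data)
p80 = annual_exceedance(80.0, u, alpha)  # exceedance probability at 80 mph
```

Plotting annual_exceedance over a range of windspeeds yields the wind hazard curve of Fig. 20-2 for this single phenomenon.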
The development of the wind hazard function begins with the analysis of available wind data,
generally in the form of windspeed measurements at a fixed location. Windspeed data from anemometer
measurements are used directly to develop hazard curves for extratropical cyclones and thunderstorm
winds. Hurricane databases include storm position, time, size, and central pressure. Anemometer wind-
speed data are used to validate the windfield models used in hurricane simulation models. For tornadoes,
indirect databases that include damage path parameters provide the main source of data for hazard
analysis.
It is important that the data used for a given location be reliable and constitute a micrometeorologically homogeneous set (Simiu and Scanlan, 1986). Reliability refers to instrumentation calibration and unobstructed flow in the near vicinity of the instrument. Micrometeorological homogeneity includes consideration of averaging time, height above ground, and roughness of surrounding terrain. All windspeed
470 Extreme-Wind Risk Assessment
versus probability of exceedance curves correspond to a specific averaging time, height above ground,
and terrain roughness.
Indirect wind data are used in the development of site-specific tornado hazard curves. Databases that
include tornado occurrences, path dimensions, and damage intensity are coupled with relations of dam-
age to windspeed to develop tornado hazard curves. Because of the limitations of these indirect data-
bases, tornado hazard curves are subject to considerable uncertainty, particularly for gust windspeeds
(2- to 3-sec averages) greater than about 125 mph.
Hazard curves are needed for critical facilities whose failure has significant economic or public health
consequences (used in the facility PRA). For typical buildings, the design windspeeds (Vd) are generally
based on either 50- or 100-year mean return period winds. The relationship between exceedance probability during the design life, the mean return period, and the design life of a facility is given by

P(V > Vd) = 1 - (1 - 1/R)^y

where R is the return period and y is the design life (in years). Hence, the probability (frequency) of
exceedance equals the inverse of the mean return period for wind events with V > Vd. For rare events, the probability of a wind event with V > Vd occurring within its return period is ≈ 0.63. Therefore,
for a building with a design life of 50 years, designed for a 50-year return period wind, the probability
of the building experiencing windspeeds greater than the 50-year wind is 0.63.
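The return-period arithmetic above can be checked with a few lines of Python (a sketch; the function name is ours, not the chapter's):

```python
def exceedance_prob(return_period_years, design_life_years):
    """Probability of at least one exceedance of the R-year wind during a
    design life of y years: P = 1 - (1 - 1/R)**y."""
    return 1.0 - (1.0 - 1.0 / return_period_years) ** design_life_years

# Annual exceedance probability equals the inverse of the return period.
annual = exceedance_prob(50, 1)    # 1/50 = 0.02
# A 50-year wind has a ~0.63 chance of being exceeded in a 50-year life.
life = exceedance_prob(50, 50)
```

For large R this tends to 1 - exp(-y/R), so designing for the R-year wind over an R-year life always carries roughly a 63% exceedance probability.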
[Figure 20-2. Wind hazard curves: annual exceedance probability versus windspeed for tornadoes, extratropical cyclones, and combined winds (10-m, 2-sec gust, open terrain).]
Windspeeds corresponding to 50-year return period winds for the United States are published in
ASCE-7-88 (American Society of Civil Engineers [ASCE], 1990). There are many limitations to the
ASCE-7-88 windspeeds and they cannot, in general, be extrapolated to represent a wind hazard curve
for a particular site. The following sections review wind characteristics and briefly summarize the
methods used to develop site-specific wind hazard functions.
Using the power law representation, the mean windspeed V(z) at height z is

V(z) = Vref (z/zref)^β    (z < zg) (20-2)

where Vref is a reference velocity, zref is a reference height, β is the power law exponent, and zg is the gradient height above which the windspeed is invariant with height. Both β and zg are functions of terrain roughness.
Using the logarithmic representation, V(z) is

V(z) = 2.5 u* ln[(z - zd)/zo] (20-3)

where zd is the displacement height (below which Eq. [20-3] is invalid), zo is the surface roughness length, and u* is the friction velocity, given as u* = (τo/ρ)^1/2, in which τo is the surface shear stress and ρ is the density of air. The logarithmic law is an "exact" solution assuming
that the terrain is flat with constant surface roughness, and the atmosphere is neutrally stable (generation
of turbulence by mechanical mixing only, with no thermal mixing) and is valid only in the constant
stress region of the boundary layer (approximately the lower 50 m). Because the logarithmic law is
exact under ideal conditions, its use in wind engineering is favored by meteorologists and many re-
searchers (Simiu, 1976; Cook, 1982; Tielman, 1982) and provides a means for predicting intensities of
turbulence consistent with the surface roughness (Deaves and Harris, 1978; Engineering Sciences Data
Unit [ESDU] 1974, 1985, 1986). Despite the fact that the logarithmic law is more widely accepted than
the empirically-based power law, the power law is used to define the variation in mean windspeed with
height in North American building codes.
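The two profile laws can be compared numerically; the sketch below assumes illustrative open-terrain values (β = 1/7, zo = 0.03 m), which are not taken from this chapter:

```python
import math

def power_law(v_ref, z, z_ref=10.0, beta=1.0 / 7.0):
    """Power law profile: V(z) = Vref * (z/zref)**beta (cf. Eq. 20-2).
    beta = 1/7 is a commonly quoted open-terrain value (an assumption here)."""
    return v_ref * (z / z_ref) ** beta

def log_law(u_star, z, z0, zd=0.0):
    """Logarithmic profile: V(z) = 2.5 * u* * ln((z - zd)/z0) (cf. Eq. 20-3),
    valid roughly in the lowest 50 m (the constant-stress layer)."""
    return 2.5 * u_star * math.log((z - zd) / z0)

# Example: mean speed at 30 m given 25 m/s at the standard 10-m height.
v30 = power_law(25.0, 30.0)
```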
Figure 20-3 presents empirical relationships for the power law exponent β, the gradient height zg
and zo, as a function of the terrain category. Figure 20-3 also shows the variation in mean windspeed
with height for four terrain categories. A key point evident in Fig. 20-3 is that the windspeed at gradient
height takes on the same value for all terrain types. This provides the means for estimating the wind-
speed in one terrain given the windspeed in another, and is inherently employed in all wind tunnel tests
and building codes whether they use a logarithmic law to define the velocity profile (Standards Asso-
ciation of Australia [SAA], 1989) or a power law.
The foregoing description of the atmospheric boundary layer profile provides a reasonable represen-
tation of the mean velocity profile for large-scale storms, but is not valid in the case of thunderstorm
winds or tornadic winds.
3.2.2. Spectrum of atmospheric turbulence. Key to either analytical or wind tunnel techniques
for estimating the wind-induced response of a structure is the correct representation of the distribution
of turbulent energy in the wind with frequency. A number of analytical models describing the spectrum
of atmospheric turbulence based on measurement and asymptotic similarity theory have been developed
and used for wind engineering purposes (Davenport, 1961a; Harris, 1971; Simiu, 1974; Kaimal et al.,
1972; Deaves and Harris, 1978; ESDU, 1974, 1985, 1986).
All forms of the velocity spectra, S(f), satisfy the Kolmogorov criterion that

f S(f)/u*^2 = 0.26 [fz/V(z)]^(-2/3)

for fz/V(z) > 1, where f is the frequency in hertz, V(z) is the mean windspeed at height z in meters,
and z is within the constant stress layer. All forms of the velocity spectra are characterized by a length
scale L, describing the overall size of the gusts. The various analytical representations of the spectrum
of horizontal gustiness differ in their treatment of the low-frequency portion of the spectrum or in their
Figure 20-3. The effect of surface roughness on the variation of the mean windspeed with height. (Source:
Davenport [1987]).
representation of the length scale. The form of the spectrum of horizontal gustiness proposed by Davenport (1961a) is

f Su(f)/u*^2 = 0.67 [fL/V(10)]^2 / {1 + [fL/V(10)]^2}^(4/3) (20-5)

where the length scale L is assigned a constant value of 1200 meters and Su is the spectrum of the horizontal gust windspeed. The form of the spectrum of horizontal gustiness given by Kaimal et al. (1972) is

f Su(f)/u*^2 = 105 [fz/V(z)] / {1 + 33[fz/V(z)]}^(5/3) (20-6)
In the ESDU (1985, 1986) representation of the spectrum of atmospheric turbulence, the length scale
L is a function of height, surface roughness, and wind velocity, with L becoming larger as windspeed
and surface roughness increase. Expressions for the spectra describing the fluctuation in the lateral and
vertical directions are available in the literature (Kaimal et al., 1972; ESDU, 1985, 1986) and may be
important when estimating the vertical response of bridges or lateral buffeting of tall structures. The
uncertainties in the velocity spectra are significant, particularly in the low-frequency region and at large
heights; furthermore, the low-frequency portion of the spectrum can be influenced by thermal effects
and complex terrain features such as hills and mountain ranges (Tielman, 1982). For additional infor-
mation on atmospheric turbulence see Panofsky and Dutton (1984).
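As a sketch, the Davenport and Kaimal spectra can be evaluated directly from the forms quoted in Eqs. (20-5) and (20-6); the function and variable names are ours:

```python
def davenport_su(f, u_star, v10, L=1200.0):
    """Davenport (1961a) spectrum of horizontal gustiness, Eq. (20-5):
    f*Su/u*^2 = 0.67*x**2 / (1 + x**2)**(4/3), with x = f*L/V(10)."""
    x = f * L / v10
    return u_star ** 2 * 0.67 * x ** 2 / (1.0 + x ** 2) ** (4.0 / 3.0) / f

def kaimal_su(f, u_star, vz, z):
    """Kaimal et al. (1972) spectrum, Eq. (20-6):
    f*Su/u*^2 = 105*n / (1 + 33*n)**(5/3), with n = f*z/V(z)."""
    n = f * z / vz
    return u_star ** 2 * 105.0 * n / (1.0 + 33.0 * n) ** (5.0 / 3.0) / f
```

Both forms decay at high frequency as required by the Kolmogorov criterion, but differ in the low-frequency region, which is where the text notes the largest uncertainties.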
The windspeed exceedance frequencies are predicted following the method used in Simiu et al. (1979). This method forms the basis for the design windspeeds given in ASCE-7-88 (for areas not influenced by hurricanes). The annual extremes were fit to a Fisher-Tippett type I extreme value distribution, defined as

P(V ≤ v) = exp{-exp[-a(v - U)]} (20-7)

where V is the windspeed, U is the mode of the distribution, and 1/a is the dispersion of the distribution.
The parameters U and a are determined using standard linear regression, the method of moments, or
the method of maximum likelihood. Simiu and Scanlan (1986) provide solutions for the method of
moments. Figure 20-4 illustrates the windspeed frequency curves obtained from an analysis of 30 years
of nonhurricane daily peak gust data from Grand Rapids, Michigan. The plot labeled "annual
extremes-standard analysis" corresponds to an extreme value fit of the Grand Rapids data, using Eq.
(20-7).
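A minimal method-of-moments fit of the Fisher-Tippett type I (Gumbel) distribution, in the spirit of the Simiu and Scanlan solutions mentioned above, can be sketched as follows (the helper names and the five-point sample are illustrative, not the Grand Rapids data):

```python
import math

EULER_GAMMA = 0.5772  # Euler-Mascheroni constant

def gumbel_moments_fit(annual_maxima):
    """Method-of-moments estimates: dispersion 1/a = sqrt(6)*sigma/pi,
    mode U = mean - 0.5772*(1/a)."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    disp = math.sqrt(6.0 * var) / math.pi
    return mean - EULER_GAMMA * disp, disp  # (mode U, dispersion 1/a)

def return_period_wind(mode, disp, R):
    """Windspeed whose annual non-exceedance probability is 1 - 1/R:
    invert exp(-exp(-(v - U)/disp)) = 1 - 1/R for v."""
    return mode - disp * math.log(-math.log(1.0 - 1.0 / R))

mode, disp = gumbel_moments_fit([60.0, 65.0, 70.0, 75.0, 80.0])
v50 = return_period_wind(mode, disp, 50.0)  # 50-year wind for this sample
```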
3.3.3. Upcrossing analysis: hourly data. The second main source of extratropical cyclone wind
data recorded at the airport stations in the United States is the hourly data. These hourly data consist
of 1-min averages of windspeed and direction, usually taken once every hour of every day, although at some locations windspeed measurements are made once every 3 hr. Assuming that the windspeeds
recorded on an hourly basis are all members of the same population (e.g., extratropical storm systems),
then these hourly windspeed and directional data can be used to define the statistics of this parent
population. The extremes can then be estimated from this parent population by the upcrossing technique (Rice, 1945). The upcrossing method treats the windspeed as a continuous random process. If λ(v) · t is the mean number of times the windspeed v is crossed with a positive slope during the time period
t, then for large values of v the crossings become independent and can be treated as a Poisson process.
[Figure 20-4. Windspeed frequency curves for Grand Rapids, Michigan.]
The expected number of crossings (or exceedances) of a velocity v per unit time can be determined as described by Rice (1945), and is defined as

λ(v) = ∫ v̇ p(v, v̇) dv̇    (integral from 0 to ∞)

where p(v, v̇) is the joint probability density of the windspeed and the time derivative of the windspeed. If it is assumed that the wind is a stationary process, then v and v̇ are uncorrelated and the joint probability density function p(v, v̇) can be written as p(v) · p(v̇). With this simplification,

λ(v) = C1 ν σv p(v) (20-10)

where

ν = [∫ f^2 S(f) df / ∫ S(f) df]^(1/2) (20-11)

In Eqs. (20-10) and (20-11), σv is the standard deviation of the velocity v, S(f) is the long-term spectrum of the wind velocity, and ν is the mean windspeed cycling rate.
The constant C1 for a normally distributed V is equal to (2π)^1/2 = 2.51. Gomes and Vickery (1977) reported a value of 2.26 obtained from direct measurements. The cycling rate ν is on the order of 800
to 1000 cycles/year. The parent probability density function of the velocity V is usually well defined
by a Weibull distribution, given as
P(V ≤ v) = 1 - exp[-(v/c)^k] (20-12)

where c and k are the scale and shape parameters. The windspeed averaged over a short time t can be related to the mean hourly windspeed through a peak factor,

V(z, t) = V̄(z) + g(t) σv(z) (20-13)

where V(z, t) is the windspeed averaged over some time t at height z, V̄(z) is the mean hourly windspeed at height z, σv(z) is the standard deviation of the windspeed at height z, and g(t) is a peak factor that is a function of the averaging time. The averaging time t associated with a fastest-mile windspeed gust is equal to 3600/V(z), and is typically on the order of a 40- to 120-sec average. A simple relationship
for σv(z) is

σv(z) ≈ 2.5 u* (20-14)
and the resulting gust factor [G = V(z)/V̄(z)], given in Simiu and Scanlan (1986), takes on a value
between 1.2 and 1.3 for the averaging times of typical interest. The errors associated with the use of
Eq. (20-13) are on the order of ±10%, assuming that wind is stationary for a period of about 1 hr.
The results of an upcrossing analysis for Grand Rapids are given as the "hourly data" plotted in
Fig. 20-4, where it is evident that the predicted fastest-mile windspeeds derived using the hourly wind-
speed data are notably lower than those derived from the peak gust data (noted as "standard analysis"
in Fig. 20-4).
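The upcrossing estimate can be sketched in a few lines, assuming (as the text does) a Weibull parent distribution, C1 = 2.51 for Gaussian-like conditions, a cycling rate of about 900 cycles/year, the simplified stationary-process crossing rate λ(v) = C1 ν σv p(v), and Poisson behavior of rare crossings; the Weibull parameters used below are purely illustrative:

```python
import math

def weibull_pdf(v, c, k):
    """Parent (hourly) windspeed density: Weibull with scale c, shape k."""
    return (k / c) * (v / c) ** (k - 1.0) * math.exp(-((v / c) ** k))

def upcrossing_rate(v, c, k, cycling_rate=900.0, c1=2.51):
    """Mean annual upcrossing rate of level v:
    lambda(v) = C1 * nu * sigma_v * p(v), with sigma_v the Weibull
    standard deviation obtained from gamma-function moments."""
    mean = c * math.gamma(1.0 + 1.0 / k)
    second = c ** 2 * math.gamma(1.0 + 2.0 / k)
    sigma = math.sqrt(second - mean ** 2)
    return c1 * cycling_rate * sigma * weibull_pdf(v, c, k)

def annual_exceedance(v, c, k):
    """For rare levels, crossings are ~Poisson: P = 1 - exp(-lambda(v))."""
    return 1.0 - math.exp(-upcrossing_rate(v, c, k))
```

The estimate should only be trusted in the upper tail, where crossings of v are rare and effectively independent.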
3.4. Thunderstorms

The extreme windspeeds produced by thunderstorms can be expressed as

Pt(VT ≤ v) = exp{-λt[1 - P'(VT ≤ v)]} (20-15)

where t is the time period (years), λ is the average annual number of thunderstorms, and VT is the thunderstorm windspeed. Pt=1 refers to the annual probability and P' refers to the probability given the occurrence of a thunderstorm. Then from Eq. (20-7) and Eq. (20-15) we obtain

Ut=1 = U + (1/a) ln λ (20-17)

For a directional analysis the same relations apply with direction-dependent parameters, where U(θ), 1/a(θ), and λ(θ) are the mode, dispersion, and mean number of thunderstorms per year as a function of wind direction θ.
The results of an analysis of annual thunderday maximum windspeeds for Grand Rapids are given
in Fig. 20-4. The combined results use the annual nonthunderstorm extremes and the annual extremes
produced by thunderstorm winds. This approach treats thunderstorm and nonthunderstorm extratropical winds as separate populations and combines the results, assuming independence,

P(V ≤ v) = P(VT ≤ v) P(VE ≤ v)

where P(VT ≤ v) and P(VE ≤ v) are the cumulative Fisher-Tippett type I distributions for the thunderstorm and nonthunderstorm extratropical winds, respectively. The combined results therefore produce
estimates of windspeeds for each exceedance probability that can be directly compared to the traditional,
mixed-population results. The extreme windspeeds estimated using the hourly data of mixed population
are approximately equal to the nonthunderstorm extreme windspeeds, which are much lower than those
associated with thunderstorms. In general the prediction of extreme windspeeds using the hourly data
and the upcrossing technique will not include the influence of thunderstorms and, consequently, in
regions where thunderstorms are prevalent will produce unconservative estimates of the extreme
windspeeds.
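Treating the two populations as independent, the combined exceedance probability is one minus the product of the two Fisher-Tippett cumulative distributions; a sketch with made-up parameters:

```python
import math

def gumbel_cdf(v, mode, disp):
    """Fisher-Tippett type I CDF with mode U and dispersion 1/a."""
    return math.exp(-math.exp(-(v - mode) / disp))

def combined_exceedance(v, thunder, extratropical):
    """P(V > v) = 1 - P(VT <= v) * P(VE <= v); each argument is a
    (mode, dispersion) pair for one wind population."""
    return 1.0 - gumbel_cdf(v, *thunder) * gumbel_cdf(v, *extratropical)

# Hypothetical parameters (mph); not from the Grand Rapids analysis.
p80 = combined_exceedance(80.0, (60.0, 8.0), (65.0, 6.0))
```

The combined exceedance probability always exceeds that of either population alone, but is never larger than their sum.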
3.5. Hurricanes
Hurricanes are tropical cyclones whose maximum sustained surface level (1-min average) winds
exceed a threshold windspeed of 74 mph. Because of the relatively infrequent occurrence of hurricanes
at any particular location, numerical simulation (Monte Carlo) techniques are used to quantify hurricane
hazard risks.
The use of mathematical simulation methods to estimate hurricane windspeeds was first implemented
by Russell (1968, 1971). Other studies that have used this approach include Russell and Schueller
(1974), Tryggvason et al. (1976), Batts et al. (1980), Georgiou et al. (1983), Twisdale and Dunn (1983a), Georgiou (1985), Neumann (1991), and Twisdale and Vickery (1992). Although the simulation methods
used by these investigators are generally similar, there are significant differences in the detailed statistical
models, hurricane subregion identification, and hurricane windfield models. In addition, the hurricane
track databases used in these studies have evolved from using actual tracking charts in the 1970s to
computerized databases beginning in the early 1980s. The HURDAT database (Jarvinen et al., 1984) is
currently used for hurricane risk assessment in the United States. Validation of the windfield models
and the predicted windspeed frequencies remains a fundamental research issue.
The recommended 50-year return period windspeeds for the Gulf and Atlantic coasts given in ASCE-
7-88 (formerly American National Standards Institute [ANSI] A58.1-1982) are derived from the Monte
Carlo simulations described in Batts et al. (1980). Although the study by Batts et al. was clearly a
milestone toward developing improved predictions of hurricane wind frequencies for engineering design,
the model, data, and computational procedure used have some significant limitations, which are dis-
cussed in Twisdale and Vickery (1992). Although ASCE-7-88 is based on Batts et al. (1980), hurricane windspeed frequency analyses by Twisdale and Dunn (1983a) and Georgiou (1985) demonstrate some notable differences from the results presented in Batts et al. For example, comparison with the Georgiou
model shows significant discrepancies in the prediction of the 100-year return period windspeeds along
the Atlantic coast, north of Florida, where the Georgiou model apparently yields predicted 100-year
return period mean hourly windspeeds 10 to 15% lower than the Batts et al. model. The site-specific hurricane risk analysis performed by Twisdale et al. (1981) also produced lower windspeeds than the Batts et al. model for New York City, as did the Tryggvason model (1979). The directional windspeed data provided by Batts et al. for coastal sites have also been seriously questioned, but continue to be
used as a part of wind tunnel studies to develop directional wind loading information for the design of
high-rise structures in the United States.
The results of Twisdale and Vickery (1992) and Vickery and Twisdale (1993) provide some useful
direct comparisons to ASCE-7-88 and the results of Batts et al. (1980), with emphasis on the importance
of the windfield model. Hurricane simulations made by using a modified version of the Shapiro windfield
model (1983), employing the numerical solution of the equations of motion, and gust factor curves
developed by Krayer and Marshall (1992) for hurricane winds, were compared to results obtained with
the windfield model employed by Batts et al. (1980). Identical probability distributions for the location-dependent model components (approach angle, translational velocity, distance of closest approach, central
pressure difference, and occurrence rate) were used in these comparisons to ascertain the impact of the
windfield model alone on the predicted windspeeds.
Windspeeds predicted using both the Shapiro-based windfield model and the Batts et al. windfield
model were evaluated through comparisons to windspeeds measured in Hurricanes Frederic (1979),
Alicia (1983), Elena (1985), and Hugo (1989). On average, at coastal stations the windfield model used
by Batts et al. (1980) underestimates the windspeeds near the region of maximum winds in a hurricane,
but overestimates windspeeds away from the radius to maximum winds (Rmax). The Shapiro-based model
provides better estimates for the maximum windspeeds in a hurricane, neither consistently overestimating nor underestimating the measured peak winds, and more accurately models the windspeeds away
from the region of maximum winds. At inland stations (50 km or more from the coast) the windfield
model used by Batts et al. consistently overestimated the measured fastest-mile windspeeds. A summary
of the differences between the simulated and measured windspeeds is given in Table 20-1. The results
given in Table 20-1 clearly indicate that the windfield model used by Batts et al. underestimates the
peak windspeeds and overestimates the lower windspeeds.
Results for New York City are given in Fig. 20-5, where the predicted fastest-mile windspeeds
derived using the windfield model used by Batts and the Shapiro-based windfield model are plotted
versus return period. Figure 20-5 clearly shows the significant difference in the directionality of the
predicted windspeeds, which can have a significant impact on the predicted loads and responses of
structures that have been the subject of site-specific wind tunnel tests. It is also noteworthy that the rate
of increase in the predicted windspeeds with return period obtained using the Shapiro-based windfield
model is more rapid than that obtained using the Batts et al. windfield model. These results are partic-
ularly important to the design of critical structures, such as nuclear power plants, for which return
periods of 1000 years or longer are used to define loads.
Table 20-1 Summary of Percentage Differences between Measured and Simulated Hurricane Windspeeds as a
Function of Windspeed Range
*1 - Ingalls Shipyard; 2 - Dauphin Island; 3 - USCGC Buttonwood; 4 - Exxon Baytown; 5 - Mobile WSO (Frederic); 6 -
WSO Galveston; 7 - Pensacola NAS (Frederic); 8 - WSO Alvin; 9 - Dow Chemical Plant "A"; 10 - Houston Intercontinental
Airport; 11 - Pensacola Regional Airport; 12 - Data Buoy 42007; 13 - Pensacola NAS (Elena); 14 - Mobile WSO (Elena);
15 - Pensacola Airport (Elena).
3.6. Tornadoes
A tornado is a violently rotating column of air whose circulation reaches the ground, and is often
observable as a condensation funnel attached to the cloud base or as a rotating dust cloud rising from
the ground. Horizontal windspeeds in the most intense 2% of all tornadoes may exceed 200 mph
(Davies-Jones, 1986), although over 50% of tornadoes have peak winds less than about 100 to 110
mph. The peak winds occur over an area generally less than 20% of the tornado damage path. The
probability of a tornado strike on any part of the United States is less than 10^-3/year. Hence, tornado winds generally do not influence the wind hazard curve for point targets and small buildings for annual exceedance probabilities greater than 10^-3 (i.e., mean return period less than 1000 years). There are
large uncertainties associated with tornado hazard analysis, particularly for gust windspeeds greater than
about 125 mph. Key aspects of tornado hazard analysis include databases, windspeed transformations
from observed damage, and the integration of these elements into a hazard model. Tornado windfields
are discussed in Section 4.2.2.
2'60 r - - - - - - , . . - - - - - - - , - - - - - - - - ,
5'·0
a.
.......................................................... .
.................. ------ .... ----!, .. -.................... .
o
~120
~ :
0 100 ............. __ ._-_ .... __ ....... :- ....... .
i 80 ································f·················
~
"j 60 ...•...... _.........•.... ······1··
:;;
~ 40 ...................... ./-- ...... ,•..........
«
... 20
Figure 20-5. Predicted fastest-mile windspeeds produced by hurricanes for New York City.
3.6.1. Tornado databases. There are two tornado databases in the United States: The DAPPLE
(damage area per path length) database at the University of Chicago (Fujita, 1978) and the NSSFC
database (Kelly et al., 1985). These data sets include information on the macrotornado variables (e.g., intensity, path length, and path width, using the Fujita-Pearson [FPP] classification system [Fujita, 1971a; Fujita and Pearson, 1973]), as well as path direction, geographic location variables, and
time of occurrence. Both data sets contain similar information from basically the same NWS data
acquisition network. However, there are differences in the record formats and tornado classification.
For example, the NSSFC record contains reported but unrated storms, whereas all storms are rated in
the DAPPLE data. The DAPPLE data set contains information for tornadoes from 1916 through 1985,
whereas the NSSFC record covers 1950 through the present. Both databases are subject to significant
classification errors, although many tornado risk assessments have not formally recognized these
limitations.
An effort to reconcile differences between these two independent databases for violent tornadoes (F4
and F5 intensity) was undertaken by Grazulis (1984). Some significant biases were eliminated, partic-
ularly on a state-by-state basis. Grazulis (1990, 1991) addressed further classification differences and
documented significant tornadoes from 1880 through 1989. Further engineering analysis of these tornado
databases is needed, accounting for the random and systematic errors inherent in the classification
system.
3.6.2. Tornado F-scale windspeeds. A critical element in tornado hazard analysis is the assign-
ment of windspeeds to the tornado intensity classifications Fl through F5. Because Fujita's original
windspeeds (1971) were not based on measurements or structural response to tornado winds, the wind-
speeds have been subject to wide debate and controversy. Most engineering assessments (Mehta, 1976)
result in windspeeds that are often much less than the Fujita windspeeds. Twisdale (1978) used previous
engineering and photogrammetric analyses to update Fujita's windspeeds in a Bayesian analysis. The
resulting windspeed intervals for each F scale are given in Table 20-2, along with the original Fujita
windspeeds. Fujita assumed that his original windspeeds were the fastest quarter mile. To date, the
transformation from F-scale classification to windspeeds remains perhaps the largest source of uncer-
tainty in tornado hazard analysis. The NEXRAD Doppler radar may eventually provide improved meas-
urements of windspeeds and their spatial distribution in tornadoes. Many researchers believe that tornado
windspeeds for the most violent storms fall in the range of 225-275 mph (Golden, 1976).
Tornado hazard analyses based on the F or F' windspeeds must be assumed to be representative of
damage-producing gusts, averaged over 2 to 3 sec. Fujita (1978) gives the relation between fastest-mile windspeeds (Vfm) and tornado windspeeds (Vt) as
Table 20-2. Original and Updated Tornado F-Scale Windspeeds

          Original                Updated
          Fujita (1971)           Twisdale (1978)
F scale   windspeeds (mph)        windspeeds (mph)
0         40-72                   40-73
1         72-112                  73-103
2         112-157                 103-135
3         157-206                 135-168
4         206-260                 168-209
5         260-318                 209-277
The differences between 3-sec gust windspeeds and fastest quarter-mile windspeeds are insignificant
for windspeeds between 100 and 300 mph (Marshall et al., 1983).
3.6.3. Tornado hazard analysis. Tornado hazard analyses have been performed by a number of
investigators for a variety of purposes. Broad regionalizations of tornado windspeed risk have been
attempted and many conservative windspeed maps have been produced, notably WASH-1300 (Markee
and Beckerley, 1974), ANSI/ANS-2.3-1983 (ANSI/ANS, 1983), and NUREG/CR-4461 (Ramsdell and
Andrews, 1986). Because of unrealistic conservatisms in tornado windspeeds and damage path areas,
these tornado windspeed maps are not generally accepted as being true indicators of tornado risk.
Site-specific tornado hazard analyses have also been performed since the early 1970s. These analyses
often produce significantly lower tornado hazards than the broad regionalizations cited above. The
models that have been used to perform these analyses have used different approaches and levels of
sophistication to reflect the deficiencies in the tornado database. The initial tornado risk models, which
estimated tornado strike probabilities for point targets (Thom, 1963), have been improved through the
incorporation of stochastic models (Wen and Chu, 1973), intensity-path area relationships (Abbey and
Fujita, 1975; Garson et al., 1975a; McDonald et al., 1975), and the effect of structure size (Garson et
al., 1975b; Wen and Ang, 1975). Twisdale and Dunn (1983b) employed a Monte Carlo simulation that accounts for a stochastic occurrence model, tornado-target interaction geometry, and probabilistic models of tornado windfields. For multiple-vortex tornadoes, Twisdale and Dunn (1983c) provide conditional probabilities of secondary vortex strike given the tornado point hazard probabilities. A series of site-specific tornado hazard analyses for Department of Energy (DOE) sites was performed by McDonald
and Fujita (Coats and Murray, 1985).
The hazard analysis methodologies listed above are typically based on the mean tornado occurrence
rate (per year) and the mean probability of windspeed exceedance. A detailed review and analysis of
stochastic models for tornado hazard analysis are given by Twisdale and Dunn (1983b). For a Poisson tornado arrival process, the well-known approximation is used,

PT(V > v) = 1 - exp[-λP(V > v)T] ≈ λP(V > v)T (20-21)
where PT(V > v) is the probability that V exceeds v during time period T (in years), P(V > v) is the probability that V exceeds v given a tornado occurrence, and λ is the mean annual occurrence rate. Equation (20-21) is accurate to within 0.5% for λP(V > v)T ≤ 0.01. Wen and Chu (1973) derived an expression for
PT(V> v) using the Polya distribution (for clustered tornado arrivals). Similar expressions for Weibull
arrivals, Bayesian-Poisson, and Bayesian-Weibull processes are summarized in Table 20-3. The Bayesian formulations allow for uncertainties in the occurrence rate λ to be modeled as a gamma distribution with parameters n = number of reported tornadoes in time period t0 and K = subjective parameter that reflects the uncertainties in the local or regional tornado data record.
A fundamental expression in tornado hazard analysis is to compute P(V > v) by

P(V > v) = μA(V > v)/S (20-22)

where μA(V > v) is the mean tornado area over which V > v and S is the tornado subregion area used to determine the occurrence rate, λ. Equation (20-22) is based on uniformly random tornado occurrence over S. To take into account the area of the facility in the tornado hazard analysis, additional area terms
enter Eq. (20-22) (Garson et al., 1975a,b; Wen and Ang, 1975; Twisdale and Dunn, 1983b). For lifeline
systems (transmission lines) and very large facilities, the geometry of the system or facility should be
taken into account in tornado hazard analysis (Twisdale and Dunn, 1983b).
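For a point target, Eqs. (20-21) and (20-22) combine into a few lines; the numbers below are hypothetical, not from any cited study:

```python
import math

def tornado_point_hazard(mean_area, subregion_area, lam, T=1.0):
    """Point-target tornado hazard: per-occurrence strike probability
    P(V > v) = mu_A / S (Eq. 20-22), combined with Poisson arrivals,
    PT = 1 - exp(-lambda * P * T), which is ~lambda*P*T for rare events
    (Eq. 20-21)."""
    p_strike = mean_area / subregion_area
    return 1.0 - math.exp(-lam * p_strike * T)

# Hypothetical: 1 sq-mi mean damage area over which V > v, a 10,000 sq-mi
# subregion, and 20 tornadoes per year.
p_annual = tornado_point_hazard(1.0, 1.0e4, 20.0)  # about 2e-3 per year
```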
A set of characteristic tornado hazard curves, taken from three separate analyses of the Savannah
River site (Fujita, 1980; McDonald, 1982; Twisdale and Hardy, 1985), are given in Fig. 20-6. The
differences in these curves tend to illustrate some of the systematic uncertainties affecting tornado hazard
analysis. These curves are for point targets and neglect the size of the structure or facility.
3.7. Downbursts
Downbursts, introduced by Fujita (1971b, 1985) as distinct meteorological phenomena, are strong
downdrafts that produce an outburst of winds near the ground. By the mid-1980s, the meteorological
community had accepted the downburst as distinct from the well-known large-scale downdrafts asso-
ciated with gust fronts and cold air mass fronts (National Research Council, 1983; Kessler, 1985). A
macroburst is a large downburst with damaging winds extending more than 4 km in horizontal dimension, whereas a microburst is a small downburst with damaging winds extending less than 4 km. Downbursts propagate
outward very slowly and the wind directionality is more uniform in azimuth than thunderstorm or
extratropical cyclone winds.
Notable differences in microburst winds versus thunderstorm gust front winds are discussed by
Bedard and LeFebvre (1986) for events studied at Denver Airport. The vertical profiles of downbursts
are thought to be significantly distinct from other storms. In a full-penetration microburst, the horizon-
tally divergent winds will result in a thin layer of high windspeeds near the ground (generally with the
maximum winds within 300 to 500 ft of the ground).
Table 20-3. Expressions for PT(V > v) under alternative tornado arrival processes (after Twisdale and Dunn, 1983b). For Poisson arrivals, PT(N) = [(λT)^N/N!] exp(-λT), giving PT(V > v) = 1 - exp[-λP(V > v)T] ≈ λP(V > v)T; for Polya (clustered) arrivals, PT(V > v) = 1 - [1 + βλP(V > v)T]^(-1/β) ≈ λP(V > v)T. Analogous expressions hold for Weibull arrivals and for the Bayesian-Poisson and Bayesian-Weibull formulations, in which the uncertainty in λ is modeled as a gamma distribution with mean (n + K)/t0 and variance (n + K)/t0^2.
The scale of effect of downbursts is thought to be at least one order of magnitude less than thun-
derstorm gust fronts. Because of the relatively small size and frequency of occurrence, downbursts are
not well represented in NWS wind records. Databases of downbursts are limited to several airport sites
investigated by the Federal Aviation Administration (FAA). These include Oklahoma, Dulles Airport,
Chicago, Denver, Boulder, Memphis, Huntsville, Kansas City, and Orlando. Fujita (1985) has developed
microburst hazard curves for the Chicago and Denver areas, based on these measurements. These results
are preliminary, but indicate that downbursts probably do not contribute significantly to wind risks to
structures. Downbursts are, however, a significant risk to aircraft.
P(V > v) = 1 − ∏(i = 1 to n_s) [1 − P(Vi > v)]    (20-23)
(Figure: comparison of tornado hazard curves from Twisdale [1985], Fujita [1980], and McDonald [1982] for a point target at 33 ft AGL, damaging-gust averaging time; annual exceedance probabilities from about 10^-3 to 10^-9 over windspeeds of roughly 60 to 300 mph.)
where i denotes storm type (extratropical cyclone, thunderstorm, hurricane, and tornado). For small
exceedance probabilities, Eq. (20-23) can be accurately approximated by
P(V > v) = P(V1 > v) + P(V2 > v) + ... + P(Vn > v)    (20-24)
The windspeeds associated with these curves must all correspond to the same anemometer height
(generally 10 meters) and windspeed averaging time (gust or fastest mile).
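Equations (20-23) and (20-24) can be sketched directly; the per-storm-type probabilities below are illustrative placeholders, not values from the chapter:

```python
def combined_exceedance(probs):
    """Eq. (20-23): P(V > v) = 1 - prod_i [1 - P(V_i > v)], combining
    independent storm types i (extratropical, thunderstorm, hurricane, tornado)."""
    prod = 1.0
    for p in probs:
        prod *= 1.0 - p
    return 1.0 - prod

def combined_approx(probs):
    """Eq. (20-24): simple sum, accurate for small exceedance probabilities."""
    return sum(probs)

# Illustrative annual exceedance probabilities for one windspeed v
p_types = [2e-3, 1e-3, 5e-4, 1e-5]
print(combined_exceedance(p_types), combined_approx(p_types))
```

The sum always slightly overstates the exact combined probability, so the approximation is conservative.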
As illustrated in Fig. 20-1, extreme wind loading effects include pressure, missiles, and atmospheric
pressure change. Wind pressures are a principal loading mechanism for all windstorm types, whereas
wind-borne missiles are generally considered only in tornado-resistant design with windspeeds greater
than about 150 mph. Atmospheric pressure change loads are limited to tornadoes. The following sub-
sections discuss these principal loading effects with emphasis on probabilistic methods and reliability-
based analysis of structural performance.
1. Wind loads are due to the action of wind only and can be determined assuming the structure behaves as a
rigid body. Structures that fall into this class are typically low-rise (one to five stories) industrial buildings
and residential structures.
2. Wind loads are associated with a combination of the direct action of the wind on the structure and the
dynamic response of the structure itself. These dynamically sensitive structures can be further divided into
categories: structures where forces induced by the motion of the structure can be ignored (most tall build-
ings), and structures where the motion of the structure strongly influences the wind loads. The latter category
includes long-span bridges, tall slender buildings, chimneys, and offshore oil platforms.
The analysis methods and test data for wind pressure loads are based on extratropical cyclone wind
characteristics, as described in Section 3.2. The gust characteristics of tornadoes, hurricanes, and thun-
derstorms are not the same as for extratropical cyclones. However, because of limited data for nonex-
tratropical cyclone winds, wind pressure loads are based on the characteristics of extratropical cyclone
gusts. In analyzing the structural response, the mean velocity profile for the appropriate terrain roughness
should be used for each storm type considered. Wind directionality should be considered in structural
reliability and risk analyses for extratropical cyclones and hurricanes. For tornado wind pressure effects
on large facilities, the effects of size of the tornado core (radius to maximum winds) should also be
considered in a risk analysis. For single buildings, the tornado wind pressure effects can be computed
in the structural reliability analysis, assuming the winds act over the entire structure.
4.1.1. Nondynamically sensitive structures. Low-rise buildings are the most prevalent category
of buildings in North America, and during any significant extreme wind event they contribute the most
to the total damage. For the designer of a low-rise building, building codes provide the only viable,
economical means to estimate wind loads. In the United States, most building code wind pressures are
given by an expression of the form
p = qGCp    (20-25)
Extreme-Wind Risk Assessment 485
where q is a reference dynamic pressure, q = 1/2 ρv², ρ is the air density, G is a gust factor, and Cp is a pressure coefficient. The gust factor G and the pressure coefficient Cp are often combined into the term GCp. The combined gust factor-pressure coefficient terms given in building codes are derived from
wind tunnel test results.
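The code pressure expression p = qGCp with q = ½ρv² is trivial to evaluate; the windspeed, air density, and combined GCp below are placeholder values for illustration, not from any particular code:

```python
def design_pressure(v, gcp, rho=1.225):
    """p = q * GCp, with reference dynamic pressure q = 0.5 * rho * v**2
    (v in m/s, rho in kg/m^3, result in Pa)."""
    q = 0.5 * rho * v**2
    return q * gcp

# Illustrative: 40 m/s windspeed, combined GCp of -1.5 (roof-corner suction)
print(design_pressure(40.0, -1.5))  # negative pressure acts outward (suction)
```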
The external wind loads acting on a low-rise building are composed of a mean component and a
dynamic or fluctuating component, where the mean component is primarily a function of the mean
windspeed and the shape of the structure. The dynamic portion of the load is produced by a complex
interaction of the turbulence in the approaching wind and fluctuations in the pressures produced by
separation and reattachment of the flow caused by the structure itself. Figure 20-7, for example, shows
comparisons of full-scale and wind tunnel-derived pressure spectra for fluctuating pressures acting on
the windward wall and at the leading edge of the roof of a low-rise building. Figure 20-7 clearly
indicates the difference in the characteristics of the pressures. The frequency content of the pressures
on the windward wall mirrors that of the approaching wind, whereas along the leading edge of the roof
much of the energy in the pressures is produced by the high-frequency, small-scale fluctuations caused
by the flow separation. Figure 20-7 also shows the ability of wind tunnel experiments to reproduce the
full-scale characteristics of the pressures.
In general, the high wind loads acting in regions of flow separation are weakly correlated with loads
acting at other locations on the structure and act on relatively small areas. These local pressures dominate
Figure 20-7. Full-scale and wind tunnel pressure spectra of selected locations on the Aylesbury Experimental
House. (a) Windward wall; (b) leading edge of roof. (Source: P. J. Vickery, Surry, and Davenport [1985].)
the loads acting on cladding, but contribute little to the overall structural loads. To account for the lack
of correlation of the wind loads acting on the structure and to account for their effects on the main
wind force-resisting system, building codes present two sets of wind loads-one for the design of small
components and cladding and one for the design of main structural components.
The main wind force-resisting system loads given in a number of North American building codes
are derived from a parametric study of wind loads on low buildings performed by Davenport et al.
(1977, 1978), in which wind loads acting on gabled structures having various height-to-width ratios and
a range of roof slopes were tested in turbulent boundary layer flow conditions. These wind tunnel tests
were performed using isolated buildings for a range of wind directions in both open country and
suburban type boundary layer flow fields.
Similar tests were performed to measure the local peak wind loads acting on various portions of the
exterior of the structure suitable for the design of the cladding. Until recently, the wind loads given in
building codes were for flat and gable roof buildings only. Wind tunnel tests performed by Meecham
(1988) showed that the external peak pressure coefficients acting on a hip roof building are significantly
different from those acting on a gable roof, and some building codes now include a separate set of
loading coefficients for hip roof buildings.
The true wind load acting on a low-rise structure is defined by the characteristics of the wind itself
(windspeed, direction, intensity of turbulence, size and organization of gusts), the immediate surround-
ings (nearby buildings, vegetation, hills, etc.), the geometry of the building (height, width, roof slope,
roof configuration, parapets, gutters, etc.), and building leakage. Clearly no single building code can
incorporate all of these effects and, consequently, there is significant uncertainty associated with the
code-supplied wind loads. The variability in the wind loads is shown in Fig. 20-8 (Ho et al., 1989).
This figure compares the range of peak negative pressures at a roof corner of a low-rise building tested in a number of random cities (i.e., the same low-rise building was tested in a number of different suburban environments) to the peak corner pressures measured on the same building tested as an isolated
building. Figure 20-8 clearly shows the variability expected in the peak external pressures for buildings
in a real environment.
The effective wind loads on low buildings can be strongly affected by the influence of internal
pressures. In buildings with large openings, the internal pressures represent a large portion of the net
loading, and are highly dynamic yet almost fully correlated over the internal volume of the structure
(Stathopoulos, 1979). In the case of buildings with no dominant opening, the internal pressure takes on
a value approaching the average of the external pressures. The internal pressure problem is further
complicated when considering potential openings that may occur during an extreme wind event (because
of missile damage or the sudden failure of a window or door). In the case of a sudden breakage, the
internal pressure takes on a value equal to the external pressure at the breakage location plus a short-
duration resonant amplification that may place heavy loads across walls not considered in the design
process (Vickery et al., 1984).
4.1.2. Wind-sensitive structures. Structural response to winds has two components: along-wind
response in the direction of wind flow, and across-wind response in the direction perpendicular to wind
flow.
4.1.2.1. ALONG-WIND RESPONSE. Because of the random nature of wind-induced pressure time
histories, frequency domain, random vibration analysis is the common method of structural response
analysis (Davenport, 1962a, 1967; Simiu, 1976, 1980). The power spectral densities and cross-spectral
densities necessary to define wind pressures are either obtained from wind tunnel tests or from published
data (Davenport, 1961a, 1962a, 1977; Simiu, 1974).
Building codes have utilized the random vibration procedure to develop wind load criteria. The
random vibration methodology combined with the spectrum of atmospheric turbulence developed by
Davenport (1961a) forms the basis for the overall windloads given in the National Building Code of
Canada (NBCC). The structural loads given in ASCE-7-88 are based on a modification of the above
procedure described in Simiu (1976, 1980) and use the spectrum of atmospheric turbulence described
in Simiu (1974).
Figure 20-9 (Loh and Isyumov, 1985) shows comparisons of the results of 24 wind tunnel tests on
Figure 20-8. Local pressure loads on a roof corner for a low-rise building in a random city. (Source: Ho, Surry, and Davenport [1989].)
Figure 20-9. Comparison of wind tunnel and code-predicted peak base bending moments for 24 buildings. (Source:
Loh and Isyumov [1985]).
tall buildings, compared to base moment estimates obtained from ASCE-7-88 (ANSI A58.1-1982) and
the NBCC. The results clearly indicate the uncertainties that can be expected if code loads are used to
estimate wind-induced loads. In the case of the NBCC, the mean ratio of wind tunnel base moments
to code-predicted base moments is 0.81, with a coefficient of variation of 0.22. Using the ASCE-7-88
methodology, the mean ratio of wind tunnel-predicted base moments to code-predicted base moments
is 0.87, with a coefficient of variation of 0.32. Using the ASCE-7-88 provisions, in 25% of the cases
the code-estimated loads are less than those obtained from wind tunnel tests.
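As a rough consistency check (our distributional assumption, not the source's: treating the wind-tunnel-to-code moment ratio as normally distributed with the quoted mean and coefficient of variation), the implied probability that code loads underestimate the wind tunnel loads is close to the reported 25%:

```python
from statistics import NormalDist

def prob_code_underestimates(mean_ratio, cov):
    """P(ratio > 1) for a normally distributed wind-tunnel-to-code moment
    ratio with the given mean and coefficient of variation."""
    sd = mean_ratio * cov
    return 1.0 - NormalDist(mean_ratio, sd).cdf(1.0)

print(prob_code_underestimates(0.87, 0.32))  # ASCE-7-88 statistics: ~0.32
print(prob_code_underestimates(0.81, 0.22))  # NBCC statistics: ~0.14
```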
4.1.2.2. ACROSS-WIND RESPONSE. The across-wind response (response in the direction perpendic-
ular to that of the mean wind) can generally be divided into three mechanisms, namely, wake-induced
forces (e.g., vortex shedding), lateral buffeting, and motion-induced forces, or a combination of these
forces. The most common form of significant cross-wind excitation for modern tall buildings is wake
excitation produced by vortex shedding.
Estimates of the across-wind forces due to vortex shedding can be obtained analytically for simple
terrain by using published wind tunnel test results (Steckley, 1989). In general, for complex structural
shapes in complex environments, one must resort to boundary layer wind tunnel tests to obtain estimates
of the cross-wind forces.
Lateral buffeting forces are produced by changes in the wind direction produced by the lateral
fluctuations in the windspeed. The ability of the incident lateral turbulence to produce significant con-
tributions to the cross-wind response depends on the ability of the structural shape to generate lift as a
function of windspeed and angle of attack. In general this occurs only for shapes that have high lift-curve slopes (∂CL/∂α), where CL is the lift coefficient and α is the angle of attack. Such structural shapes
include airfoils, bridge decks, and flat plates.
In addition to the above-described across-wind forces, the process is complicated by displacement-
dependent excitations, commonly recognized as galloping, flutter, and lockin, all of which are a function
of atmospheric turbulence and wake excitation. For a detailed description of flutter, galloping, or lockin,
see Scanlan and Tomko (1971), Vickery (1971, 1982), Kwok and Melbourne (1980, 1981), Novak and
Davenport (1970), and Parkinson and Brooks (1961).
4.1.2.3. WIND TUNNEL TESTING. The most reliable method for estimating wind loads and re-
sponses of structures is through the use of wind tunnel testing techniques. The earth's boundary layer
is modeled to simulate conditions typical of extratropical storm systems, using the methodology de-
scribed in Cermak (1975) and Surry (1982). No known attempts to model (using wind tunnels) the
response of structures associated with thunderstorm or tornado winds have been made. Wind tunnel
testing is used to estimate wind-induced cladding loads on structures, the overall dynamic response of
tall buildings, the response of bridges, offshore structures, and other less common or unique structures.
The vast majority of modern wind tunnel tests are performed to obtain loads and responses for high-
rise buildings located in complex urban environments.
Wind loads and responses of tall buildings are derived from wind tunnel tests through the use of
aeroelastic tests (Isyumov, 1982), the force balance technique (Tschanz, 1982; Tschanz and Davenport,
1983; Vickery et al., 1985a) or on-line pressure integration methods (Steckley et al., 1991).
The results of a force balance test consist of a single set of mean base bending moment coefficients
and base moment spectra for a range of wind directions (typically 36 directions). The results from the
wind tunnel tests are then combined with the joint probability distribution of windspeed and direction
to provide predictions of structural response versus return period for the tested structure. Details of the
methodology are given in Davenport (1983).
4.1.2.4. OFFSHORE PLATFORMS AND BRIDGES. Although the general concepts of response analysis
are the same for any type of structure, the preceding discussions are primarily related to tall buildings.
Among offshore oil platforms, wind response is relatively small in rigid jacket type platforms but
can be significant in compliant platforms (tension leg platforms [TLPs], guyed towers, and Roseau-type
towers). Wind response analysis of TLPs is discussed by Kareem (1984, 1985), Simiu and Leigh (1984),
Li and Kareem (1990), Vickery (1990), and Kareem and Lee (1993). Wind response of guyed platforms
is investigated by Vickery and Pike (1985), and Roseau-type platforms by Ng and Vickery (1989).
Davenport and Hambly (1984) studied jack-up platforms.
Wind-induced bridge loads are produced by buffeting (both vertical and horizontal), vortex shedding,
flutter, galloping, and torsional divergence. The buffeting response of bridges was first described by
Davenport (1962b). Examples of the methodology used to evaluate the potential of flutter are given in
Scanlan and Sabzevari (1969), Scanlan and Tomko (1971), and Scanlan (1981). An overview of the
response of bridges to wind loads is given in Simiu and Scanlan (1986). With new bridges becoming
longer, more streamlined, and more lightweight, it is increasingly more common to perform wind tunnel
tests to determine the response of the bridge to wind loads. Wind tunnel tests for a single bridge may
involve the use of one or more section model tests (Wardlaw, 1978), taut strip model tests (Davenport
and Tanaka, 1974), or full-bridge aeroelastic models (Davenport et al., 1969).
missile, overall dynamic response analysis may also be required. The following sections discuss the key
elements of wind-borne missile analysis, with emphasis on probabilistic methods.
4.2.2. Tornado windfields. Tornado windfields are three dimensional, with tangential, radial, ver-
tical, and translational velocity components. The radial inflow and vertical velocity components are
significant inside the tornado core. Several theoretical models of tornado windfields have been developed
(Lewellen, 1976; Davies-Jones, 1986). However, simpler engineering models are generally used in
tornado wind, missile, and APC loading analyses. Those that have been most frequently used in the
nuclear power plant facility risk assessments include Rotz et al. (1974), McDonald et al. (1975), Simiu
and Cordes (1976), Fujita (1978), and a model with probabilistic parameters developed by Twisdale et
al. (1978, 1981).
A comparison of the velocity profiles for the radial, tangential, and vertical windfield components
for two of these models is given in Fig. 20-10. This comparison illustrates some of the basic features
of tornado windfields as well as differences from one model to another. The distinctive feature of the
Fujita model is the existence of an inner core (with no radial or vertical wind component) that extends
to a radius of 300 ft for the design basis tornado. Both models exhibit Rankine vortex flow for the
tangential velocity field. In these and other models, the vertical profiles of the total windspeed are steep,
generally reaching more than 90% of the peak winds within 50 ft above ground level. Just above ground
level, the horizontal winds are generally assumed to be from 65 to 80% of the horizontal winds at
33 ft.
A sensitivity analysis of tornado windfield model parameters in terms of tornado missile transport
sensitivity was performed by Twisdale et al. (1981), with the following basic conclusions.
1. Low values of translational velocity result in more missile injections and higher missile velocities for a
given maximum tornado windspeed.
2. For a given tornado intensity, an increase in the radial inflow component relative to the tangential component
increases the number of missiles injected and leads to higher average values of maximum missile velocities,
ranges, and altitudes.
3. Missiles injected and transported by large-core tornadoes generally attain higher maximum velocities but
lower peak altitudes than those predicted with smaller cores. The absolute number of missiles produced is
proportional to the radius of the core.
4. The slope of the core does not have an appreciable effect on missile transport, even for missiles injected
at high elevations.
The study showed that missile transport predictions are not heavily dependent on the particular
tornado windfield model, provided the translational velocity, radial inflow parameters, and core radius
are the same.
4.2.3. Trajectory models. A set of initial conditions is required for the trajectory analysis, includ-
ing the missile mass, geometry, initial velocity, and missile inertial orientation. Given these initial
conditions, the transport methodology consists of aerodynamic models of the missile shapes, the gov-
erning dynamic and kinematic relations, and the solution scheme for the equations of motion. Integration
of these equations yields the motion time history of the missile, which provides the means to predict
the free-flight motion, maximum velocity, and impact conditions.
Table 20-4 summarizes several basic transport models that are available for missile hazard analysis.
Trajectory models are most commonly distinguished by the type of motion they describe. Generally,
two degrees of freedom (2-D) refers to motion of a point in a plane; three degrees of freedom (3-D),
to motion of a point in space; and six degrees of freedom (6-D), to translational and rotational motion
of a rigid body in space. Another distinguishing feature is the number of aerodynamic force components,
including moments, that are considered.
The simplest model in Table 20-4 is the 2-D model for a particle mass subjected to gravity and
aerodynamic drag forces. The resulting coupled ordinary differential equations are integrated numeri-
cally to predict debris motion time history. This trajectory model can be used only for straight winds.
The 3-D model in Table 20-4 predicts the general motion of a particle mass in space. Its force parameter
is also the drag coefficient, which is often specified to account for random tumbling of the object. The
6-D models in Table 20-4 simulate the aerodynamics of rigid bodies that cannot be adequately treated
by the simpler 2-D and 3-D models. The random orientation 6-D model (RO 6-D) considers drag, lift,
and side forces and simulates missile tumbling by periodic reorientation (Twisdale et al., 1979). Its
Figure 20-10. Tornado windfield profile comparison for the Fujita DBT-77 (1978) and Twisdale et al. (1981) models. (a) Radial wind at z = 33 ft; (b) radial wind at r = 515 ft; (c) tangential wind at z = 33 ft; (d) tangential wind at r = 515 ft; (e) vertical wind at z = 33 ft; (f) vertical wind at r = 515 ft. (Source: Twisdale et al. [1981].)
                                                     Models
Features                              2-D            3-D            RO 6-D                    6-D
Parameters                            g              g              g, v                      g
Aerodynamic forces                    CD             CD             CD, CL, Cs                CD, CL, Cs, Cl, Cm, Cn
Equations of motion                   Two coupled    Three coupled  Three coupled ODEs;       Six coupled
                                      ODEs           ODEs           three force equations     ODEs
Simulation efficiency                 High           High           Moderate                  Low
Impact speed prediction               *              *              *                         Exact
Impact position prediction            #              #              *                         Exact
Impact dispersion                     #              #              *                         Exact
Impact orientation prediction         No             No             *                         Exact
Impact obliquity prediction           *              *              *                         Exact
Impact angular velocity prediction    No             No             No                        Exact

Note: ODE, ordinary differential equation.
prediction capabilities are enhanced over the particle models with only a modest decrease in simulation
efficiency. Conventional 6-D models (Etkin, 1972; Redmann et al., 1978) track missile translation and
rotation, using a system of six coupled, ordinary, nonlinear differential equations. Such models require
estimation of aerodynamic force and moment coefficients over all body orientations.
Trajectory models that have been used in wind-borne missile analysis include deterministic 3-D
particle models (Simiu and Cordes, 1976; McDonald, 1981), RO 6-D models (Twisdale et al., 1978, 1981), and deterministic 6-D models (Redmann et al., 1976). The limitations of the 3-D models for predicting the motion of missiles with high aspect ratios (beams, pipes, rods, poles, etc.) are discussed by Twisdale et al. (1979). Wind tunnel testing results of missile types used in nuclear power plant safety studies are given by Redmann et al. (1976, 1978).
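The 2-D particle model in Table 20-4 amounts to integrating two coupled ODEs (gravity plus drag toward the wind vector). A minimal sketch; the lumped drag parameter, windspeed, and explicit Euler integrator are illustrative simplifications, not values from the studies cited:

```python
import math

def particle_trajectory_2d(wind, c, z0=10.0, dt=0.01, t_max=30.0):
    """2-D particle model: horizontal straight wind, gravity, and aerodynamic
    drag proportional to the square of the air-relative speed.
    c = rho * Cd * A / (2 * m) lumps the drag parameters (units 1/m).
    Returns (horizontal range, impact speed) at ground contact."""
    g = 9.81
    x, z, vx, vz, t = 0.0, z0, 0.0, 0.0, 0.0
    while z > 0.0 and t < t_max:
        rx, rz = wind - vx, -vz          # air velocity relative to the missile
        rmag = math.hypot(rx, rz)
        vx += c * rmag * rx * dt         # drag accelerates toward the wind velocity
        vz += (c * rmag * rz - g) * dt   # drag plus gravity
        x += vx * dt
        z += vz * dt
        t += dt
    return x, math.hypot(vx, vz)

rng, speed = particle_trajectory_2d(wind=60.0, c=0.01)  # ~135-mph straight wind
print(rng, speed)
```

The particle cannot exceed the windspeed horizontally and its vertical speed is bounded by the drag-limited terminal velocity, which is why 6-D models are needed when lift, side forces, and tumbling matter.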
4.2.4. Characteristic tornado missile velocity statistics. Using the RO 6-D trajectory model and
a probabilistic tornado windfield model, Twisdale et al. (1981) performed a series of calculations to
develop statistics for maximum missile velocities for seven standard missiles (USNRC, 1981). The
missiles were positioned in the region of maximum winds (see Fig. 20-11) and were released to the
moving windfield at the time of maximum aerodynamic force. The maximum velocity statistics were
based on 1000 trajectory analyses for each missile, injection height, and windspeed combination. Figure
20-12 shows the mean value of maximum missile velocity versus the maximum tornado windfield
velocity. The ninetieth and ninety-ninth percentiles are shown for the 12-in. pipe missile. The results
also show that the maximum missile speed is strongly dependent on missile type and injection height.
Tables of the detailed statistics are given in Twisdale et al. (1981).
As a percentage of total tornado velocity, design velocities for wooden missiles are generally about
75% of the horizontal wind velocity. For steel pipe missiles, the maximum missile velocity is about 40
to 60% of the horizontal wind velocity. For automobile missiles the maximum missile velocity is about
18 to 20% of the horizontal wind velocity.
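These percentages give a quick screening rule; the fraction table below simply restates them (upper ends of the quoted ranges):

```python
# Maximum missile velocity as a fraction of the horizontal windspeed,
# restating the percentages quoted above (upper ends of the ranges)
MISSILE_FRACTION = {"wood": 0.75, "steel_pipe": 0.60, "automobile": 0.20}

def design_missile_velocity(wind_mph, missile):
    """Screening-level design missile velocity (mph) for a horizontal windspeed."""
    return MISSILE_FRACTION[missile] * wind_mph

for name in MISSILE_FRACTION:
    print(name, design_missile_velocity(200.0, name))
```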
(Figure 20-11: optimal release of a missile in the region of maximum winds of a translating tornado windfield, showing the maximum velocities and the trajectory.)

4.2.5. Missile impact effects. Missile impact effects include local response (penetration, perforation, and spall) and overall response (such as dynamic shear effects at the edge supports of the
impacted wall). Local response effects are estimated by semiempirical formulas that take into account
the missile type and target materials. Overall response is analyzed through dynamic response analysis
considering deformation of the missile and the impact force time history. The velocity and orientation
of the missile are important input parameters to determine missile impact effects.

(Figure 20-12: mean maximum missile velocity versus maximum tornado windspeed for the standard missiles [including the 1-in. rod, utility pole, 3-in. pipe, and 12-in. pipe] at injection heights of 10 to 33 ft, over windspeeds of roughly 100 to 300 mph.)

In deterministic analyses, the missile impact is assumed to have a velocity vector normal to the target surface and the missile
axis is colinear with the velocity vector. In probabilistic analyses, the velocity vector and missile obli-
quity are treated as random variables.
Although the literature on impact mechanics is large, there have been relatively few tests focused
on wind-borne debris impact (blunt shapes and low velocities). Penetration tests of utility poles, wood
beams, steel pipes, steel rods, and other missiles have been conducted (Stephenson, 1977; Berriaud et
al., 1978; Vassallo, 1975; Rotz, 1975; McDonald, 1989). The following sections present penetration
equations for concrete and steel structures for wind-borne missile impact. Methods to treat overall
dynamic response are given in ASCE Manual No. 58 (Stevenson, 1980) and an ASCE Conference
Proceedings on Nuclear Power (ASCE, 1980).
4.2.5.1. PENETRATION OF REINFORCED CONCRETE. The modified National Defense Research Com-
mittee (NDRC) equations are used for penetration, perforation, and spall predictions in reinforced con-
crete by steel missiles (Kennedy, 1975; Sliter, 1980). The following formulation is taken from Twisdale et al. (1981) and includes an analysis of prediction error using the tornado missile steel pipe and rod
penetration database.
For penetration into infinite concrete, compute
(20-26)
where K is the concrete penetrability factor, N is the missile shape factor, W is the missile weight (lb),
d is the effective missile diameter (in.), and Vi is the effective impact velocity (ft/sec). The penetration
depth Z (in.) is calculated using the following equations:
(20-29)
where f'c is the design concrete compressive stress and the missile shape factor N is 0.72 for flat-shaped
missiles. The outside diameter of the rod or pipe missile should be used in these penetration equations.
The prediction error of these equations is defined as

ε = Zm/Zp    (20-30)

where Zm is the measured penetration depth and Zp is the predicted penetration depth given by the equations above. The statistics of ε for the tornado missile database are με ≈ 1.0 and σε = 0.23 for pipe missiles and με ≈ 1.0 and σε = 0.1 for solid steel rod missiles, where με and σε are the mean and standard deviation, respectively. For rod penetrations into concrete with thickness-to-rod-diameter ratios less than 3, the NDRC equations should not be used.
These statistics can be used to develop appropriate factors of safety for design because the prediction error is normally distributed. The reliability (Ps) is the probability of the event that the actual penetration depth is less than a design factor ψ times the predicted penetration depth:

Ps = P(Zm < ψZp)    (20-31)

Ps = Φ[(ψ − με)/σε]    (20-32)

For normally distributed prediction error with με ≈ 1.0, the design factor ψ is given by

ψ = 1 + σε Φ^-1(Ps)    (20-33)

where Φ is the cumulative distribution function of a standard normal variate. For example, the design factor for Ps = 0.90 for pipe missile penetration depth into concrete (σε = 0.23) is ψ = 1.29. Therefore the predicted penetration depth, using Eqs. (20-26) through (20-29), should be multiplied by 1.29 to achieve 90% reliability that the actual penetration depth will not exceed ψZp. These equations coupled with the ε statistics can be used to develop reliability-based designs for wind-borne missile effects.
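The design-factor relation of Eq. (20-33) is straightforward to evaluate with the standard normal inverse CDF; a minimal sketch:

```python
from statistics import NormalDist

def design_factor(p_s, sigma_eps, mu_eps=1.0):
    """Eq. (20-33): psi = mu_eps + sigma_eps * Phi^-1(p_s), the factor on the
    predicted penetration depth that yields reliability p_s for normally
    distributed prediction error."""
    return mu_eps + sigma_eps * NormalDist().inv_cdf(p_s)

# Worked example from the text: pipe missiles into concrete, sigma = 0.23
print(round(design_factor(0.90, 0.23), 2))  # 1.29
```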
The scabbing thickness for pipe missiles is given by
where de = 2(As/π)^1/2 is the effective diameter. These equations are unbiased (με ≈ 1.0) with σε = 0.11.
For steel rods, the equations are
where d = de for pipes. There are not sufficient data to characterize concrete perforation prediction
error. A probabilistic analysis of concrete penetration effects is presented by Chang (1981).
4.2.5.2. PERFORATION OF STEEL PLATES. The Ballistic Research Laboratory (BRL) formula (Russell, 1962; Gwaltney, 1968) is generally used for perforation of steel plates by steel tornado missiles.
The perforation thickness (e) is given by
(20-40)
where the notation and units are the same as in Section 4.2.5.1. No analysis of prediction error has
been made. Ng et al. (1990) recommend that the calculated perforation thickness be factored by 1.25
for design.
4.3. Atmospheric pressure change (APC) loads. APC loads are of significance only for tornadoes, given the combination of relatively high translational storm speed (generally greater than 30 mph) and maximum pressure drop in the center of a rapidly rotating vortex. For
a perfectly sealed structure, the APC produces outward-acting pressures across all surfaces of the struc-
ture. The estimation of APC loads requires a model of the tornado windfield and knowledge of the rate
at which the structure may vent. Because most buildings are not perfectly sealed, the actual pressures
resulting from APC may be much less and are often negligible for structures with typical venting
features.
4.3.1. Sealed buildings. The cyclostrophic equation (Simiu and Scanlan, 1986) is used to develop the APC distribution:

dpa/dr = ρV²a/r    (20-41)

where dpa/dr is the atmospheric pressure gradient at radius r from the center of the tornado vortex, ρ is the air density, and Va is the tangential windspeed. The pressure drop pa is obtained by integrating
Eq. (20-41) from infinity to r. The maximum value of Pa occurs at r = 0, whereas the maximum
windspeed occurs at r = R m"" the radius of maximum winds (generally 150 to 500 ft), for most severe
tornadoes. At Rm.., the APC is approximately one-half its maximum value.
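To illustrate, the cyclostrophic balance can be integrated numerically for an assumed Rankine-vortex windspeed profile. The density, maximum windspeed, and vortex radius below are illustrative assumptions, not values prescribed by the text; the sketch merely checks that the APC at the radius of maximum winds is about one-half the central value.

```python
import numpy as np

RHO = 1.2      # air density, kg/m^3 (assumed)
V_MAX = 89.0   # maximum tangential windspeed, m/s (~200 mph, illustrative)
R_MAX = 150.0  # radius of maximum winds, m (illustrative)

def v_tangential(r):
    """Rankine-vortex tangential windspeed profile (a common assumption)."""
    return np.where(r <= R_MAX, V_MAX * r / R_MAX, V_MAX * R_MAX / r)

def pressure_drop(r, r_inf=1.0e6, n=200_000):
    """APC at radius r: integrate dp_a/dr' = rho*V_a(r')^2/r' from r out to 'infinity'."""
    rr = np.geomspace(max(r, 1e-6), r_inf, n)
    return np.trapz(RHO * v_tangential(rr) ** 2 / rr, rr)

p_center = pressure_drop(0.0)     # maximum APC; analytically rho * V_MAX**2 for this profile
p_at_rmax = pressure_drop(R_MAX)  # APC at the radius of maximum winds
print(p_center, p_at_rmax / p_center)  # ratio comes out near 0.5
```

The half-value property at R_max follows directly from the r⁻³ decay of the integrand outside the core for this assumed profile.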
A commonly used expression for the maximum value of p_a is p_a,max = ρV_max². Although a limiting value of APC is taken to be about 0.2 atm (about 3 psi) (Minor et al., 1977), most tornadoes will produce a maximum APC of less than 0.5 psi. A 200-mph tornado produces a maximum value of p_a of about 0.85 psi (ANSI/ANS, 1983). In a facility risk assessment, exceedance probabilities for APC (graph of probability of exceedance versus p_a) should be developed from the tornado hazard curve and the tornado windfield model, using the relationships given above.
The rate at which the pressure change (dp_a/dt) occurs depends on the translational speed (V_tr) of the tornado and can be estimated by

dp_a/dt = V_tr (dp_a/dr)    (20-42)

Equation (20-42) can be used to develop the exceedance probabilities for dp_a/dt from the tornado wind hazard curve and windfield model parameters. Maximum deterministic design basis values are given in ANSI/ANS (1983), USNRC (1974), and Kennedy et al. (1989).
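As a sketch of how a tornado wind hazard curve can be mapped into APC exceedance probabilities, the snippet below applies p_a,max = ρV². The hazard points are hypothetical placeholders, and using the full windspeed as the tangential component is a simplifying assumption, not a site-study result.

```python
import numpy as np

RHO = 1.2                                       # air density, kg/m^3 (assumed)
PSI_PER_PA = 1.0 / 6894.76                      # pascals to psi

# Illustrative tornado hazard curve: annual probability of exceeding windspeed v
hazard_v = np.array([40.0, 60.0, 80.0, 100.0])  # windspeed, m/s
hazard_q = np.array([1e-4, 1e-5, 1e-6, 1e-7])   # annual exceedance probability

# Each hazard level maps to a maximum APC via p_a,max = rho * V^2
apc_psi = RHO * hazard_v ** 2 * PSI_PER_PA
for p, q in zip(apc_psi, hazard_q):
    print(f"P[APC > {p:.2f} psi] ~ {q:.0e} per year")
```

The same mapping, with Eq. (20-42), would convert the hazard curve into exceedance probabilities for dp_a/dt given an assumed translational speed.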
4.3.2. Vented buildings. The APC loadings described in Section 4.3.1. are valid only for sealed
structures that have been designed to maintain a pressure differential under severe wind loadings, such
as nuclear power plant containment structures. Venting generally will occur for most other structures
as a result of breaching of the building envelope by wind or missile effects or because of the inherent
ventilation and leakage paths of the building. In addition, the slower the translational speed of the
tornado, the more time available for internal and external pressures to equalize. Further, if the tornado
core does not totally engulf the building, the APC loadings will apply only to the affected building
surfaces.
There have been only limited analyses of venting due to tornado and the amount of venting needed
such that APC loadings do not materialize. A preliminary analysis of the mechanical ventilation system
in nuclear fuel cycle facilities is reported by Gregory et al. (1976). Minor et al. (1977) estimated that
1 ft² of venting per 1000 ft³ of interior volume was adequate to vent buildings effectively from
severe tornado APC loads. Most commercial structures have this amount of venting through the heating,
ventilating and air conditioning (HVAC) systems, exhaust fans, doors, and cladding leakage. Kennedy
et al. (1989) adopted this criterion for an interim guideline for DOE facilities. No probabilistic analyses
have been used to assess uncertainties or to quantify APC loadings for vented structures.
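The venting rule of thumb above reduces to simple arithmetic; a minimal helper (the function name is ours, not from the source) is:

```python
def required_vent_area_ft2(interior_volume_ft3):
    """Minor et al. (1977) rule of thumb: about 1 ft^2 of vent area
    per 1000 ft^3 of interior building volume."""
    return interior_volume_ft3 / 1000.0

print(required_vent_area_ft2(250_000.0))  # a 250,000 ft^3 building -> 250 ft^2
```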
Extreme-Wind Risk Assessment 497
The structural reliability analysis estimates structural failure probability due to wind effects. Reliability
assessment methods discussed in Chapters 2 to 8 of this book are used. Failure probabilities are com-
puted for specified levels of windspeed (e.g., 100-mph wind, 50-year return period wind, etc.). Relia-
bility formulations and analyses have been performed by Davenport (1983), Wen (1983), Simiu and
Shaver (1979), and Marshall et al. (1983). In addition, research has been conducted to develop wind
loading factors for use with conventional building design (Ellingwood et al., 1980; Ravindra and Galambos, 1976).
Failure probabilities are computed for each of the wind effects (pressure, missile impact, and APC).
These failure probabilities are combined by considering the three failure modes to be connected in series
(see Chapter 8 of this book for failure probabilities of series systems). Missile impact is considered
only for critical facilities such as nuclear plants and APC is considered only for sealed structures.
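A minimal sketch of this series combination, assuming the three failure modes can be treated as independent (the mode probabilities are illustrative; correlated modes would require the system methods of Chapter 8):

```python
def series_failure_prob(p_modes):
    """Failure probability of a series system of independent modes:
    the structure fails if any one mode fails."""
    p_survive = 1.0
    for p in p_modes:
        p_survive *= 1.0 - p
    return 1.0 - p_survive

# Illustrative mode probabilities: wind pressure, missile impact, APC
p_total = series_failure_prob([1e-3, 2e-4, 5e-5])
print(p_total)  # slightly below the simple sum of the three probabilities
```

For small probabilities the result is close to the sum of the individual mode probabilities, which is the usual first-order bound.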
Facility or system reliability and risk analysis consists of the following steps.
6.2.1. Wind pressure. Fragility curves for wind pressure can be developed by performing detailed
reliability analysis for different windspeeds or by a simplified method. Detailed reliability analysis using
simulation techniques has been used for a few nuclear power plants. Because the detailed reliability
analysis approach is expensive, a simplified approach is used. This method is similar to the lognormal
format approach used in seismic risk assessment (see Chapter 19 of this book).
The simplified method uses a probabilistic safety factor approach to estimate the structural fragility
from the windspeed design criteria. Lognormal models are used. Although the mean fragility curve may
not be exactly lognormal, the central region is often well represented by the lognormal model. The
lognormal distribution for component fragility is defined by two parameters: the median windspeed V and the logarithmic standard deviation β. The equation for the failure probability P_f(v), corresponding
to windspeed v, is given by

P_f(v) = Φ[ln(v/V)/β]    (20-43)

where Φ is the standardized normal cumulative distribution function. The median capacity V is estimated
using the design windspeed Vd adjusted to reflect the conservatisms in the design procedure. These
adjustments generally include considerations of the importance factor in ASCE-7-88, ratio of nominal
to actual yield strength, load combination factors, and design safety factors. The median capacity is
thus estimated by
V = V_d ε_1 ε_2 ··· ε_n    (20-44)

where V_d is the design windspeed and ε_i are median values of nondimensional adjustment factors. The lognormal standard deviation β is estimated by combining the lognormal standard deviations of the individual factors ε_i as follows:
β = (β_1² + β_2² + ··· + β_n²)^(1/2)    (20-45)
The total probability of failure (considering all possible speeds) is obtained by convolving the fragility curve with the hazard curve:

P_f = ∫₀^∞ |dH(v)/dv| P_f(v) dv    (20-46)
where H(v) is the mean hazard function (similar to Fig. 20-2) and Pf(v) is the mean fragility function.
This convolution integral may be computed by numerical integration (see Chapter 9 of this book) or
simulation (Twisdale, 1988).
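The simplified lognormal procedure of Eqs. (20-43) to (20-46) can be sketched numerically as follows. The design windspeed, adjustment factors, variabilities, and the power-law hazard curve are all hypothetical placeholders, not values from any plant study.

```python
import numpy as np
from math import erf, log, sqrt

V_D = 90.0                       # design windspeed, assumed
EPS = [1.1, 1.15, 1.05]          # median adjustment factors, Eq. (20-44), assumed
BETAS = [0.10, 0.15, 0.12]       # factor variabilities, Eq. (20-45), assumed

v_median = V_D * np.prod(EPS)            # Eq. (20-44): median capacity
beta = sqrt(sum(b * b for b in BETAS))   # Eq. (20-45): combined variability

def fragility(v):
    """Eq. (20-43): P_f(v) = Phi(ln(v / v_median) / beta)."""
    return 0.5 * (1.0 + erf(log(v / v_median) / beta / sqrt(2.0)))

def hazard(v):
    """Assumed power-law mean hazard curve H(v): annual P(windspeed > v)."""
    return 1.0e-4 * (v / 100.0) ** -6

# Eq. (20-46): P_f = integral of |dH/dv| * P_f(v) dv, by numerical quadrature
v = np.linspace(50.0, 400.0, 20000)
dH = -np.gradient(hazard(v), v)          # |dH/dv|, since H decreases with v
p_f = np.trapz(dH * np.array([fragility(x) for x in v]), v)
print(f"annual failure probability ~ {p_f:.2e}")
```

In practice the mean hazard curve comes from the site-specific analysis of Section 3, not a closed-form power law; the power law is used here only to make the convolution concrete.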
The simplified method was used by Ng et al. (1990) to establish preliminary wind design criteria
for the DOE New Production Program. Ravindra and Nafday (1990) review the simplified methods and
summarize results of nuclear power plant wind PRAs.
Figure 20-13. Example tornado missile damage probabilities versus barrier thickness (concrete thickness, 0 to 20 in.).
analysis, and consequence analysis, as described in Sections 6.1 to 6.3. A flow chart of the methodology
is shown in Fig. 20-15. (Note the similarity to the seismic PRA methodology flow chart in Chapter 19
of this book.)
A summary of results of extreme wind PRAs is given by Ravindra and Nafday (1990). The results
of a tornado PRA using the simulation method are described by Sues et al. (1993). Twisdale (1988)
discusses some of the key issues in nuclear plant risk assessment. In general, only some nuclear power
plant PRAs have treated wind effects in much detail, whereas most others have extrapolated results
from one or two studies. At old facilities, there are many problems with masonry block walls and steel
building frames exposing piping, tanks, and equipment. Both missile penetrations and wind pressures
contribute to the damage risk of structures. External piping and tanks are vulnerable to rod, pipe, and
structural element type missile penetrations. The new plants with reinforced concrete structures are
vulnerable primarily at vent areas, roll-up steel doors, diesel generator exhausts, etc. Plant stacks are
vulnerable to high wind conditions and potential collapse onto safety-related buildings and components.
Turbine building cladding is generally a primary source of elevated missiles at most plants.
7. SUMMARY
Extreme wind hazards include extratropical cyclones, tropical cyclones (hurricanes), thunderstorms,
tornadoes, downbursts, and special surface winds produced by topographical features. Most regions in
the United States experience several of these wind hazards and, consequently, an analysis of extreme
winds must take into account each storm type.

[Flow chart (figure): site characterization and site-specific frequency; structure/component identification and characterization; failure-mode models (dynamic pressure, APC) and aggregate failure modes for each component; system failure logic (Booleans); simulation methodology (variance reduction); postprocessing; probabilistic risk assessment.]

Extreme value analysis of annual extremes, with generally 20 or more years of records, is used for extratropical cyclones and thunderstorms. Simulation
methods are used for hurricanes and there is about a 100-year database for the Gulf and Atlantic coasts
of the United States. Research indicates that over a large portion of the central and southeastern United
States thunderstorm winds strongly influence the extreme wind hazard. A stochastic model that considers
all thunderstorms at a particular site produces the most robust hazard predictions. For tornadoes, because
direct windspeed measurements are not available, simulation methods based on 40 or more years of
damage area statistics are generally used. No climatological database exists for downbursts and prelim-
inary hazard analyses at several sites indicate that they are probably not significant for structures. In
addition, some of the damage induced by downbursts is undoubtedly included in the tornado damage
database.
The wind hazard for a particular site describes the frequency of occurrence in terms of basic wind-
speed for each type of extreme wind. Associated with each storm type are fundamental characteristics
of the wind that govern the subsequent analysis of wind loading and response phenomena. These
characteristics are the variation of the mean windspeed with height and a description of the gustiness
of the wind, including information on the frequency content and characteristic size and correlation
lengths of the gusts. These characteristics generally depend on the basic wind type (i.e., extratropical cyclone, thunderstorm, tornado, hurricane). Most data and analyses are for the large-scale extratropical cyclones, which are important for 50- to 100-year return wind loads in many noncoastal areas of
the United States. Windspeeds are specified at the standard reference height of 10 meters above ground,
Figure 20-15. Extreme wind probabilistic risk assessment. (Source: Hickman et al. [1983].) [Flow chart: windspeed hazard; component fragility evaluation; systems analysis (event trees, fault trees); release frequency; containment analysis; consequence analysis (weather data, atmospheric dispersion, population, evacuation, health effects, property damage); risk.]
in open terrain (e.g., an airport). A power law representation is used to define the variation of mean
windspeed with height in North American building codes.
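A sketch of this power-law representation follows; the exponent value is an assumption (codes tabulate it by exposure category), and the function name is ours.

```python
def mean_windspeed(z_m, v10, alpha=1.0 / 7.0):
    """Power-law mean windspeed profile: V(z) = V10 * (z / 10)**alpha,
    with z in meters and V10 the speed at the 10-m reference height.
    alpha ~ 1/7 is a typical open-terrain exponent (an assumption here)."""
    return v10 * (z_m / 10.0) ** alpha

print(mean_windspeed(100.0, 40.0))  # ~55.6 m/s at 100 m, given 40 m/s at 10 m
```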
For most regions in the United States, extratropical cyclones, hurricanes, thunderstorms, or topographic winds control the combined wind hazard for annual probabilities of exceedance greater than 10⁻³. East of the Rocky Mountains, tornadoes will generally dominate the wind hazard for annual frequencies less than 10⁻³ to 10⁻⁴. The uncertainties in predicted windspeed increase substantially for annual probabilities less than about 5 × 10⁻³.
Three types of wind effects should be considered, namely, wind pressure, wind-generated missile
impact, and atmospheric pressure change (APC). Missile impact is important only for tornadoes and is
considered usually in the design or risk assessment of critical facilities only. Atmospheric pressure
change is significant only for airtight structures. Structural response is computed by static or dynamic
analysis, depending on the type of loading and structure. Structural reliability with respect to wind
loadings can be computed by reliability analysis techniques discussed in Chapters 2 to 8 of this book.
Critical facilities such as nuclear power plants consider extreme-wind risks in the PRA. Wind hazard
analysis, structural reliability analysis, and system reliability analysis techniques are combined to assess
probability of facility damage (e.g., nuclear plant core damage) due to wind effects.
REFERENCES
ABBEY, R. F., and T. T. FUJITA (1975). Use of tornado path lengths and gradations of damage to assess tornado
intensity probabilities. In: Proceedings of the 9th Conference on Severe Local Storms. Boston, Massachusetts:
American Meteorological Society, pp. 286-293.
ANSI/ANS (American National Standards Institute/American Nuclear Society) (1983). Standard for Estimating
Tornado and Extreme Wind Characteristics at Nuclear Power Sites. ANSI/ANS-2.3-1983. La Grange Park,
Illinois: American Nuclear Society.
ASCE (American Society of Civil Engineers) (1980). Civil Engineering and Nuclear Power, Vols. IV and V. New
York: American Society of Civil Engineers.
ASCE (American Society of Civil Engineers) (1990). Minimum Design Loads for Buildings and Other Structures.
ASCE-7-88. New York: American Society of Civil Engineers.
BATTS, M. E., M. R. CORDES, C. R. RUSSELL, J. R. SHAVER, and E. SIMIU (1980). Hurricane Windspeeds in the
United States. National Bureau of Standards, Report Number BSS-124. Washington, D.C.: U.S. Department
of Commerce.
BEASON, W. L., and J. R. MORGAN (1984). Glass failure prediction model. Journal of Structural Engineering,
ASCE, 110(2):197-212.
BEDARD, A. J., and T. J. LEFEBVRE (1986). Surface Measurements of Gust Fronts and Microbursts during the
JAWS Project. Boulder, Colorado: National Oceanic and Atmospheric Administration (NOAA).
BERRIAUD, C., A. SOKOLOVSKY, J. DULAC, R. GUERAUD, and R. LABROT (1978). Local behavior
of reinforced concrete walls under missile impact. Nuclear Engineering and Design, 45:457-469.
BROWNING, K. A. (1964). Airflow and precipitation trajectories within severe local storms which travel to the right
of the winds. Journal of the Atmospheric Sciences, 21:634-639.
CERMAK, J. E. (1975). Applications of fluid mechanics to wind engineering-A Freeman Scholar Lecture. Journal
of Fluids Engineering, 97:9-37.
CHANG, W. S. (1981). Impact of solid missiles on concrete barriers. Journal of the Structural Division, ASCE,
107(2):257-271.
CHANGERY, M. J. (1978). National Wind Data Index Final Report. HCO-Tl041-01. Washington, D.C.: U.S.
Department of Commerce.
COATS, D. W., and R. C. MURRAY (1985). Natural Phenomena Hazards Modeling Project: Extreme Wind/Tornado
Hazard Models for Department of Energy Sites. UCRL-53526, Rev. 1. Livermore, California: Lawrence
Livermore National Laboratory.
COOK, N. J. (1982). Simulation techniques for short test-section wind tunnels: Roughness, barrier and mixing-
device methods. In: Wind Tunnel Modeling for Civil Engineering Applications: Proceedings of the Inter-
national Workshop on Wind Tunnel Modeling Criteria and Techniques in Civil Engineering Applications.
T. A. Reinhold, Ed. Cambridge, England: Cambridge University Press.
DAVENPORT, A. G. (1961a). The spectrum of horizontal gustiness near the ground in high winds. Journal of the
Royal Meteorological Society, 87:194-211.
DAVENPORT, A. G. (1961b). The application of statistical concepts to the wind loading of structures. Proceedings
of the Institution of Civil Engineers, 19:449-472.
DAVENPORT, A. G. (1962a). The response of slender line-like structures to gusty wind. Proceedings of the Institution
of Civil Engineers, 23:449-472.
DAVENPORT, A. G. (1962b). Buffeting of a suspension bridge by storm winds. Journal of the Structural Division,
ASCE,88(3):223-269.
DAVENPORT, A. G. (1967). Gust loading factors. Journal of the Structural Division, ASCE, 93(3):11-34.
DAVENPORT, A. G. (1977). The prediction of risk under wind loading. In: Proceedings of the 2nd International
Conference on Structural Safety and Reliability, Munich, Germany, pp. 511-538.
DAVENPORT, A. G. (1983). The relationship of reliability to wind loading. Journal of Wind Engineering and
Industrial Aerodynamics, 13:3-27.
DAVENPORT, A. G. (1987). Proposed new international (ISO) wind load standard. In: WERC/NSF Mid-Term Sym-
posium on High Winds and Building Codes/Standards. Washington, D.C.: National Science Foundation.
DAVENPORT, A. G., and E. C. HAMBLY (1984). Turbulent wind loading of a jack-up platform. In: Proceedings of
the Offshore Technology Conference. Dallas, Texas: Offshore Technology Conference Publication.
DAVENPORT, A. G., N. ISYUMOV, D. J. FADER, and C. F. P. BOWEN (1969). The Study of Wind Action on a Suspension Bridge-The Narrows Bridge, Halifax. BLWT Report 3-69. London, Ontario, Canada: University of Western Ontario.
DAVENPORT, A. G., D. SURRY, and T. STATHOPOULOS (1977). Wind Loads on Low-Rise Buildings: Final Report of Phases I and II, Parts 1 and 2. BLWT-SS8-1977. London, Ontario, Canada: The University of Western Ontario.
DAVENPORT, A. G., D. SURRY, and T. STATHOPOULOS (1978). Wind Loads on Low-Rise Buildings: Final Report of Phase III, Parts 1 and 2. BLWT-SS4-1978. London, Ontario, Canada: The University of Western Ontario.
DAVIES-JONES, R. P. (1986). Tornado Dynamics. In: Thunderstorm Morphology and Dynamics. Norman, Oklahoma:
University of Oklahoma Press.
DEAVES, D. M., and R. I. HARRIS (1978). A Mathematical Model of the Structure of Strong Winds. Report 76.
London, England: Construction Industry Research and Information Association.
ELLINGWOOD, B., T. V. GALAMBOS, J. G. MACGREGOR, and C. A. CORNELL (1980). Development of a Probability-
Based Load Criterion for American National Standard A58. Publication 577. Washington, D.C.: U.S.
National Bureau of Standards.
ESDU (Engineering Sciences Data Unit) (1974). Characteristics of Atmospheric Turbulence Near the Ground.
I. Definitions and General Information. Data Item Number 74030. London, England: Engineering Sciences
Data Unit.
ESDU (Engineering Sciences Data Unit) (1985). Characteristics of Atmospheric Turbulence Near the Ground.
II. Single Point Data for Strong Winds (Neutral Atmosphere). Data Item Number 85020. London, England:
Engineering Sciences Data Unit.
ESDU (Engineering Sciences Data Unit) (1986). Characteristics of Atmospheric Turbulence Near the Ground.
III. Variations in Space and Time for Strong Winds (Neutral Atmosphere). Data Item Number 86010. London,
England: Engineering Sciences Data Unit.
ETKIN, B. (1972). Dynamics of Atmospheric Flight. New York: John Wiley & Sons.
FUJITA, T. T. (1971a). Proposed Characterization of Tornadoes and Hurricanes by Area and Intensity. SMRP Research Paper Number 91. Chicago, Illinois: University of Chicago.
FUJITA, T. T. (1971b). Spearhead Echo and Downburst Near the Approach End of a John F. Kennedy Airport Runway, New York City. SMRP Research Paper 137. Chicago, Illinois: University of Chicago.
FUJITA, T. T. (1978). Workbook of Tornadoes and High Winds. SMRP Research Paper 165. Chicago, Illinois: University of Chicago.
FUJITA, T. T. (1980). Tornado and High-Wind Hazards at Savannah River Plant, South Carolina. Livermore, California: Lawrence Livermore National Laboratory.
FUJITA, T. T. (1985). The Downburst: Microburst and Macroburst. Chicago, Illinois: Department of the Geophysical Sciences, University of Chicago.
FUJITA, T. T., and A. D. PEARSON (1973). Results of FPP classification of 1971 and 1972 tornadoes. In: Proceedings of the Eighth Conference on Severe Local Storms. Boston, Massachusetts: American Meteorological Society.
GARSON, R. C., J. M. CATALAN, and C. A. CORNELL (1975a). Tornado risk evaluation using windspeed profiles. Journal of the Structural Division, ASCE, 101(5):1167-1171.
GARSON, R. C., J. M. CATALAN, and C. A. CORNELL (1975b). Tornado design winds based on risk. Journal of the Structural Division, ASCE 101(9):1883-1897.
GEORGIOU, P. N. (1985). Design Windspeeds in Tropical Cyclone-Prone Regions. Ph.D. Thesis. London, Ontario, Canada: University of Western Ontario.
GEORGIOU, P. N., A. G. DAVENPORT, and B. J. VICKERY (1983). Design wind speeds in regions dominated by tropical cyclones. In: Proceedings of the 6th International Conference on Wind Engineering. Gold Coast, Australia: Commonwealth Scientific and Industrial Research Organization, pp. 139-152.
GOLDEN, J. H. (1975). An assessment of windspeed in tornadoes. In: Proceedings of the Symposium on Tornadoes. Lubbock, Texas: Texas Tech University, pp. 5-42.
GOMES, L., and B. J. VICKERY (1977). On prediction of extreme winds from the parent distribution. Journal of Industrial Aerodynamics, 2(1):21-36.
GOMES, L., and B. J. VICKERY (1978). Extreme wind speeds in mixed wind climates. Journal of Industrial Aerodynamics, 2(4):331-344.
GOODMAN, J., and J. E. KOCH (1982). The probability of a tornado missile hitting a target. Nuclear Engineering and Design, 75:125-155.
GRAZULIS, T. P. (1984). Violent Tornado Climatography, 1880-1982. NUREG/CR-3670. Washington, D.C.: U.S.
Nuclear Regulatory Commission.
GRAZULIS, T. P. (1990). Significant Tornadoes, 1880-1989, Vol. II: A Chronology of Events. St. Johnsbury, Vermont: Environmental Films.
GRAZULIS, T. P. (1991). Significant Tornadoes, 1880-1989, Vol. I: Discussion and Analysis. St. Johnsbury, Vermont: Environmental Films.
GREGORY, W. S., et al. (1976). Effect of tornadoes on mechanical systems. In: Proceedings of the Symposium on
Tornadoes. Lubbock, Texas: Texas Tech University.
GWALTNEY, R. C. (1968). Missile Generation and Protection in Light-Water-Cooled Power Reactor Plants. ORNL-
NSIC-22. Oak Ridge, Tennessee: Oak Ridge National Laboratory.
HARRIS, R. I. (1971). The nature of wind. In: The Modern Design of Wind-Sensitive Structures. London, England:
Construction Industry Research and Information Association.
HICKMAN, J. W., et al. (1983). PRA Procedures Guide. NUREG/CR-2300. Washington, D.C.: U.S. Nuclear Regulatory Commission.
Ho, T. C. E., D. SURRY, and A. G. DAVENPORT (1989). The variability of low building wind loads due to surrounding obstructions. In: Proceedings of the 6th U.S. National Conference on Wind Engineering. Houston, Texas: University of Houston, pp. B1-11-B1-20.
ISYUMOV, N. (1982). The aeroelastic modeling of tall buildings. In: Wind Tunnel Modeling for Civil Engineering
Applications: Proceedings of the International Workshop on Wind Tunnel Modeling Criteria and Techniques
in Civil Engineering Applications. T. A. Reinhold, Ed. Cambridge, England: Cambridge University Press,
pp. 373-407.
JARVINEN, B. R., C. J. NEUMANN, and M. A. S. DAVIS (1984). A Tropical Cyclone Data Tape for the North Atlantic Basin, 1886-1983: Contents, Limitations, and Uses. NOAA Technical Memorandum NWS NHC 22. Miami,
Florida: National Weather Service, National Hurricane Center.
JOHNSON, B., et al. (1985). Tornado Hazard to Production Reactors at Savannah River Plant. San Diego, California:
Science Applications International.
KAIMAL, J. C., et al. (1972). Spectral characteristics of surface-layer turbulence. Quarterly Journal of the Royal Meteorological Society, 98:563-589.
KAREEM, A. (1984). Nonlinear wind velocity term and response of compliant offshore structures. Journal of Engineering Mechanics, ASCE, 110(10):1573-1578.
KAREEM, A. (1985). Wind-induced response analysis of tension leg platforms. Journal of Structural Engineering, ASCE 111(1):37-55.
KAREEM, A., and Y. LI (1993). Wind-excited surge response of tension-leg platform: Frequency-domain approach. Journal of Engineering Mechanics, ASCE, 119(1):161-183.
KELLY, D. L., J. T. SCHAEFER, and C. A. DOSWELL (1985). Climatology of nontornadic severe thunderstorm events in the United States. Monthly Weather Review, 113:1997-2014.
KENNEDY, R. P. (1975). A review of procedures for the analysis and design of concrete structures to resist missile impact effects. In: Proceedings of the Structural Reactor Safeguards and Containment Structures Conference, Berlin, Germany.
KENNEDY, R. P., et al. (1989). Design and Evaluation Guidelines for Department of Energy Facilities Subjected to Natural Phenomena Hazards. Washington, D.C.: Department of Energy.
KESSLER, E. (1985). Wind shear and aviation safety. Nature (London) 315.
KRAYER, W. R., and R. D. MARSHALL (1992). Gust factors applied to hurricane winds. Bulletin of the American Meteorological Society, 73(5):613-717.
KWOK, L. C. S., and W. H. MELBOURNE (1980). Freestream turbulence effects on galloping. Proc. Paper 15356. Journal of the Engineering Mechanics Division, ASCE, 96(2):273-288.
KWOK, L. C. S., and W. H. MELBOURNE (1981). Wind-induced lock-in excitation of tall structures. Journal of the Structural Division, ASCE, 107(1):57-72.
LEWELLEN, W. S. (1976). Theoretical models of the Tornado Vortex. In: Proceedings of the Symposium on Tor-
nadoes. Lubbock, Texas: Texas Technological University, pp. 107-143.
LI, Y., and A. KAREEM (1990). Stochastic response of tension leg platforms to wind and wave fields. Journal of Wind Engineering and Industrial Aerodynamics 36:905-914.
LOH, P., and N. ISYUMOV (1985). Overall wind loads on tall buildings and comparisons with code values. In: Proceedings of the 5th U.S. National Conference on Wind Engineering. Lubbock, Texas: Texas Technological University, pp. 5A-5-5A-58.
MARKEE, E. H., and J. G. BECKERLEY (1974). Technical Basis for Interim Regional Tornado Criteria. WASH-1300. Washington, D.C.: U.S. Atomic Energy Commission.
MARSHALL, T. P., J. R. McDONALD, and K. C. MEHTA (1983). Utilization of Load and Resistance Statistics in a Windspeed Assessment. Lubbock, Texas: Texas Technological University.
McDONALD, J. R. (1981). Incredible tornado-generated missiles. In: Proceedings of the 4th U.S. National Conference on Wind Engineering. Seattle, Washington: University of Washington, pp. 29-36.
McDONALD, J. R. (1982). Assessment of Tornado and Straight Wind Risks at the Savannah River Plant Site, Aiken, South Carolina. Livermore, California: Lawrence Livermore National Laboratory.
McDONALD, J. R. (1989). Impact resistance of common building materials to tornado missiles. In: Proceedings of the 6th U.S. National Conference on Wind Engineering. Houston, Texas, pp. A539-A546.
McDONALD, J. R., K. C. MEHTA, J. E. MINOR, and L. BEASON (1975). Development of a Windspeed Risk Model for the Argonne National Laboratory Site. Lubbock, Texas: Texas Technological University.
MEECHAM, D. (1988). Wind Action on Hip and Gable Roofs. M.E.Sc. Thesis. London, Ontario, Canada: University
of Western Ontario.
MEHTA, K. C. (1976). Windspeed estimates: Engineering analysis. In: Proceedings of the Symposium on Tornadoes, Lubbock, Texas, pp. 89-103.
MINOR, J. E. (1981). Window glass design practices: A review. Journal of the Structural Division, ASCE 107(1): 1-12.
MINOR, J. E., J. R. McDONALD, and K. C. MEHTA (1977). The Tornado: An Engineering Oriented Perspective. NOAA Technical Memorandum, ERL NSSL-82. Norman, Oklahoma: National Severe Storms Laboratory.
National Research Council (1983). Low Altitude Wind Shear and Its Hazard to Aviation. National Academy Press.
NEUMANN, C. J. (1991). The National Hurricane Center Risk Analysis Program (HURISK). NOAA Technical
Memorandum NWS NHC 38. Miami, Florida: National Weather Service, National Hurricane Center.
NG, J., and B. J. VICKERY (1989). A model study of the response of a compliant tower to wind and wave loads.
In: Proceedings of the Offshore Technology Conference. Dallas, Texas: Offshore Technology Conference
Publication.
NG, D. S., et al. (1990). Lawrence Livermore National Laboratory New Production Reactors Project: Preliminary
Title I Wind/Tornado Design Criteria for New Production Reactors. Livermore, California: Lawrence
Livermore National Laboratory.
NOVAK, M., and A. G. DAVENPORT (1970). Aeroelastic instability of prisms in turbulent flow. Journal of the Engineering Mechanics Division, ASCE, 96(2).
PANOFSKY, H. A., and J. A. DUTTON (1984). Atmospheric Turbulence. New York: John Wiley & Sons.
PARKINSON, G. V., and N. P. H. BROOKS (1961). On the aeroelastic instability of bluff cylinders. Journal of Applied Mechanics, 28:252-258.
RAMSDELL, J. V., and G. L. ANDREWS (1986). Tornado Climatology of the Contiguous United States. NUREG/CR-4461. Washington, D.C.: U.S. Nuclear Regulatory Commission.
RAVINDRA, M. K., and T. V. GALAMBOS (1976). Load Factors for Wind and Snow Loads for Use in Load and Resistance Factor Design Criteria. Research Report No. 34. St. Louis, Missouri: Washington University.
RAVINDRA, M. K., and A. M. NAFDAY (1990). State-of-the-Art and Current Research Activities in Extreme Winds Relating to Design and Evaluation of Nuclear Power Plants. NUREG/CR-5497. Washington, D.C.: U.S. Nuclear Regulatory Commission.
REDMAN, G. H., et al. (1976). Wind Field and Trajectory Models for Tornado-Propelled Objects. EPRI-308. Palo
Alto, California: Electric Power Research Institute.
REDMAN, G. H., et al. (1978). Wind Field and Trajectory Models for Tornado-Propelled Objects. EPRI-NP748.
Palo Alto, California: Electric Power Research Institute.
REED, D. A., and E. SIMIU (1984). Wind loading and strength of cladding glass. Journal of Structural Engineering, ASCE, 110(4):715-729.
REED, J. W., and W. L. FERRELL (1987). Extreme wind analysis for the Turkey Point Nuclear Plant. In: Appendix G of Shutdown Decay Heat Removal Analysis of a Westinghouse 3-Loop PWR. NUREG/CR-4762. Washington, D.C.: U.S. Nuclear Regulatory Commission.
RICE, S. O. (1945). Mathematical analysis of random noise. Bell System Technical Journal, 18:19.
ROTZ, J. V. (1975). Results of impact tests on reinforced concrete panels. In: Proceedings of the 2nd ASCE Specialty
Conference on Structural Design of Nuclear Power Plant Facilities. New York: American Society of Civil
Engineers.
ROTZ, J. V., et al. (1974). Tornado and Extreme Wind Design Criteria for Nuclear Power Plants. BC-TOP-3A. San Francisco, California: Bechtel Power Corporation.
RUSSELL, L. R. (1962). Reactor Safeguards. New York: Pergamon Press.
RUSSELL, L. R. (1968). Probability Distribution for Texas Gulf Coast Hurricane Effects of Engineering Interest.
Ph.D. Thesis. Palo Alto, California: Stanford University.
RUSSELL, L. R. (1971). Probability distributions for hurricane effects. Journal of the Waterways, Harbours and Coastal Engineering Division, ASCE 97(WW1):139-154.
RUSSELL, L. R., and G. F. SCHUELLER (1974). Probabilistic models for Texas gulf coast hurricane occurrences. Journal of Petroleum Technology, 279-288.
SAA (Standards Association of Australia) (1989). Australian Standard: Minimum Design Loads on Structures (Known as the SAA Loading Code). Part 2. Wind Loads. North Sydney, Australia: Standards Association of Australia.
SCANLAN, R. H. (1981). State-of-the-Art Methods for Calculating Flutter, Vortex-Induced, and Buffeting Response of Bridge Structures. Federal Highway Administration Report FHWA/RD-80/050. Springfield, Virginia: National Technical Information Service.
SCANLAN, R. H., and A. SABZEVARI (1969). Experimental aerodynamic coefficients in the analytical study of suspension bridge flutter. Journal of Mechanical Engineering Sciences 11(3):234-242.
SCANLAN, R. H., and J. J. TOMKO (1971). Aerofoil and bridge deck flutter derivatives. Journal of the Engineering Mechanics Division, ASCE, 97(6):1717-1737.
SHAPIRO, L. J. (1983). The asymmetric boundary layer flow under a translating hurricane. Journal of the Atmospheric Sciences, 40(8):1984-1988.
SIMIU, E. (1974). Wind spectra and dynamic alongwind response. Journal of the Structural Division, ASCE 100(9):
1897-1910.
SIMIU, E. (1976). Equivalent static wind loads for tall building design. Journal of the Structural Division, ASCE
102(4):719-737.
SIMIU, E. (1980). Revised procedure for estimating along-wind response. Journal of the Structural Division, ASCE
106(1):1-10.
SIMIU, E., and M. CORDES (1976). Tornado-Borne Missile Speeds. NBSIR 76-1050. Washington, D.C.: National
Bureau of Standards.
SIMIU, E., and S. D. LEIGH (1984). Turbulent wind and tension leg platform surge. Journal of Structural Engi-
neering, ASCE 110(4):785-802.
SIMIU, E., and R. H. SCANLAN (1986). Wind Effects on Structures: An Introduction to Wind Engineering. New
York: John Wiley & Sons.
SIMIU, E., and J. R. SHAVER (1979). Wind loading and reliability-based design. In: Proceedings, 5th International
Conference on Wind Engineering. Fort Collins, Colorado: Colorado State University.
SIMIU, E., M. J. CHANGERY, and J. FILLIBEN (1979). Extreme Wind Speeds at 129 Stations in the Contiguous
United States. NBS Building Science Series 118. Washington, D.C.: United States Department of Commerce, National Bureau of
Standards.
SLITER, G. E. (1980). Assessment of empirical concrete impact formulas. Journal of the Structural Division, ASCE,
106(5):1023-1045.
STATHOPOULOS, T. (1979). Turbulent Wind Action on Low Rise Buildings. Ph.D. Thesis. London, Ontario, Canada:
University of Western Ontario.
STECKLEY, A. (1989). Motion-Induced Wind Forces on Chimneys and Tall Buildings. Ph.D. Thesis. London,
Ontario, Canada: University of Western Ontario.
STECKLEY, A., et al. (1991). The synchronous pressure acquisition network (SPAN). In: Structures Congress '91
Compact Papers, 9th Structures Congress, Indianapolis, Indiana, pp. 556-559.
STEPHENSON, A. E. (1977). Full-Scale Tornado-Missile Impact Tests. EPRI NP-440. Palo Alto, California: Electric
Power Research Institute.
STEVENSON, J. D., Ed. (1980). Structural Analysis and Design of Nuclear Plant Facilities. ASCE Manual No. 58.
New York: American Society of Civil Engineers.
SUES, R. R., et al. (1993). Integrating internal events in an external event probabilistic risk assessment: Tornado
PRA case study. Reliability Engineering and System Safety 40:173-186.
SURRY, D. (1982). Consequences of distortions in the flow including mismatching scales and intensities of turbulence.
In: Wind Tunnel Modeling for Civil Engineering Applications: Proceedings of the International
Workshop on Wind Tunnel Modeling Criteria and Techniques in Civil Engineering Applications. T. A. Reinhold,
Ed. Cambridge, England: Cambridge University Press, pp. 137-185.
THOM, H. C. S. (1963). Tornado probability. Monthly Weather Review. 91:730-736.
TIELEMAN, H. W. (1982). Simulation criteria based on meteorological or theoretical considerations. In: Wind Tunnel
Modeling for Civil Engineering Applications: Proceedings of the International Workshop on Wind Tunnel
Modeling Criteria and Techniques in Civil Engineering Applications. T. A. Reinhold, Ed. Cambridge, England:
Cambridge University Press, pp. 296-312.
TRYGGVASON, B. V. (1979). Defining the wind climate in regions affected by hurricanes. Preprints of the Fourth
US National Conference on Wind Engineering. Seattle, Washington: University of Washington.
TRYGGVASON, B. V., D. SURRY, and A. G. DAVENPORT (1976). Predicting wind-induced response in hurricane zones.
Journal of the Structural Division, ASCE, 102(12):2333-2350.
TSCHANZ, T. (1982). Measurement of total dynamic loads using elastic models with high natural frequencies. In:
Wind Tunnel Modeling for Civil Engineering Applications: Proceedings of the International Workshop on
Wind Tunnel Modeling Criteria and Techniques in Civil Engineering Applications. T. A. Reinhold, Ed.
Cambridge, England: Cambridge University Press.
TSCHANZ, T., and A. G. DAVENPORT (1983). The base balance technique for the determination of dynamic wind
loads. Journal of Wind Engineering and Industrial Aerodynamics, 13:429-439.
TWISDALE, L. A. (1978). Tornado data characterization and windspeed risk. Journal of the Structural Division,
ASCE, 104(10):1611-1630.
TWISDALE, L. A. (1988). Probability of facility damage from extreme wind effects. Journal of Structural Engi-
neering, ASCE, 114(10):2190-2209.
TWISDALE, L. A., and W. L. DUNN (1983a). Extreme Wind Risk Analysis of the Indian Point Nuclear Generation
Station. Final Report 44T-2491. Addendum to Report 44T-2171. Research Triangle Park, North Carolina:
Research Triangle Institute.
TWISDALE, L. A., and W. L. DUNN (1983b). Probabilistic analysis of tornado wind risks. Journal of Structural
Engineering, ASCE, 109(2):468-488.
TWISDALE, L. A., and W. L. DUNN (1983c). Wind loading risks from multivortex tornadoes. Journal of Structural
Engineering, ASCE, 109(8):2016-2022.
TWISDALE, L. A., and M. B. HARDY (1985). Tornado Windspeed Frequency Analysis of the Savannah River Plant.
Aiken, South Carolina: E. I. DuPont de Nemours and Company.
TWISDALE, L. A., and P. J. VICKERY (1992). Research on thunderstorm wind design parameters. Journal of Wind
Engineering and Industrial Aerodynamics, 41-44:545-556.
TWISDALE, L. A., and P. J. VICKERY (1993). Analysis of thunderstorm occurrences and windspeed statistics. In:
Proceedings of the 7th U.S. National Conference on Wind Engineering. Los Angeles, California: University
of California.
TWISDALE, L. A., et al. (1978). Tornado Missile Risk Analysis. EPRI NP-769, Vols. I and II. Palo Alto, California:
Electric Power Research Institute.
TWISDALE, L. A., W. L. DUNN, and T. L. DAVIS (1979). Tornado missile transport analysis. Nuclear Engineering
and Design, 51:295-308.
TWISDALE, L. A., et al. (1981). Tornado Missile Simulation and Design Methodology. EPRI NP-2005, Vols. 1 and
2. Palo Alto, California: Electric Power Research Institute.
USNRC (U.S. Nuclear Regulatory Commission) (1974). Design Basis Tornado for Nuclear Power Plant Structures.
Regulatory Guide 1.76. Washington, D.C.: U.S. Nuclear Regulatory Commission.
USNRC (U.S. Nuclear Regulatory Commission) (1981). Missiles Generated by Natural Phenomena. Standard
Review Plan 3.5.1.4. NUREG-0800. Washington, D.C.: U.S. Nuclear Regulatory Commission.
VASSALLO, F. A. (1975). Missile Impact Testing of Reinforced Concrete Panels. Report Number HC-5609-D-1
(prepared for Bechtel Power Corporation). Buffalo, New York: Calspan Corporation.
VICKERY, B. J. (1971). Wind induced vibrations of towers, stacks and masts. In: Proceedings of the 3rd Interna-
tional Conference on Wind Effects on Buildings and Structures. Paper IV-2. Tokyo: Saikon Company.
VICKERY, B. J. (1982). The aeroelastic modeling of chimneys and towers. In: Wind Tunnel Modeling for Civil
Engineering Applications. T. A. Reinhold, Ed. Cambridge, England: Cambridge University Press, pp.
408-428.
VICKERY, B. J., and P. J. PIKE (1985). An investigation of dynamic wind loads on offshore platforms. In: Pro-
ceedings of the Offshore Technology Conference. Dallas, Texas: Offshore Technology Conference Publication,
pp.527-541.
VICKERY, B. J., A. G. DAVENPORT, and D. SURRY (1984). Internal pressures in low-rise buildings. In: Proceedings
of the 4th Canadian Workshop on Wind Engineering. Ottawa, Ontario, Canada: National Research Council
of Canada, pp. 43-64.
VICKERY, P. J. (1990). Wind and wave loads on a tension leg platform: Theory and experiment. Journal of Wind
Engineering and Industrial Aerodynamics 36:905-914.
VICKERY, P. J., and L. A. TwISDALE (1993). Prediction of hurricane windspeeds in the U.S. In: Proceedings of the
7th U.S. National Conference on Wind Engineering. Los Angeles, California: University of California.
VICKERY, P. J., A. STECKLEY, N. ISYUMOV, and B. J. VICKERY (1985a). The effect of mode shape on the wind-
induced response of tall buildings. In: Proceedings of the 5th U.S. National Conference on Wind Engineering.
Lubbock, Texas: Texas Technological University, pp. 1B-41 to 1B-48.
VICKERY, P. J., D. SURRY, and A. G. DAVENPORT (1985b). Aylesbury and ACE: Some interesting findings. In: 6th
Colloquium on Industrial Aerodynamics. Aachen, Germany: Fluid Mechanics Laboratory, Fachhochschule,
pp.1-17.
WARDLAW, R. L. (1978). Sectional versus full model wind tunnel testing of bridge road decks. DME/NAE Quarterly
Bulletin (National Research Council, Ottawa, Ontario, Canada) 1978(4):25-47 [reprint January 1979].
WEN, Y.-K. (1983). Wind direction and structural reliability. Journal of Structural Engineering, ASCE, 109(4):1028-
1041.
WEN, Y.-K., and A. H. S. ANG (1975). Tornado risk and wind loading effect on structures. In: Proceedings 4th
International Conference on Wind Effects on Buildings and Structures. London, England, pp. 63-74.
WEN, Y.-K., and S.-L. CHU (1973). Tornado risks and design wind speed. Journal of the Structural Division, ASCE,
99(12):2409-2421.
21
APPLICATIONS IN NUCLEAR POWER PLANT STRUCTURES
1. INTRODUCTION
The significant use of probability methods for the licensing and design of nuclear power plant facilities
has occurred only in the last 15 to 20 years. Prior to this time, deterministic procedures were primarily
used. In 1975 the U.S. Nuclear Regulatory Commission (USNRC) published a report of a reactor safety
study of U.S. commercial nuclear power plants that employed probabilistic risk assessment procedures
to assess accident risks. It was known as the WASH-1400 report (USNRC, 1975). This study considered
seismic events in only a rudimentary manner.
In the early 1970s, the U.S. Nuclear Regulatory Commission had concerns about some earthquake
related issues because of uncertainties in earthquake response and equipment/structure behavior. They
were as follows.
• The ability of licensed nuclear power plants to withstand earthquakes larger than the specified plant safe
shutdown earthquake (SSE)
• New interpretations of seismological information pertaining to the Charleston Earthquake and the New Bruns-
wick Earthquake and their impact on the existing East Coast nuclear plant seismic licensing basis
• Changes in the design criteria as they relate to "older" licensed nuclear plants
The USNRC established the Systematic Evaluation Program (SEP) to begin to address these con-
cerns, and to assess the safety adequacy of older operating plants that were licensed to older criteria.
Recognizing the importance of the seismic issue and the role that probabilistic methods could play in
their evaluation of the adequacy of existing licensing bases, and realizing that there are many interrelated
factors that must be considered with the seismic event to obtain the probability of radioactive release,
the USNRC funded the Seismic Safety Margin Research Program (SSMRP), and the program was
begun in 1978 (Wells et al., 1984). This program produced several major end products.
Selected nuclear power plants were evaluated following the SSMRP methodology. The associated eval-
uation programs are documented in the literature (e.g., Bohn et al., 1984). The USNRC developed a
seismic safety research program plan to address the outstanding issues. This plan reflected the proba-
bilistic risk assessment (PRA) methods (USNRC, 1985).
On August 8, 1985 a Severe Accident Policy statement was passed by the USNRC commissioners.
It required limited scope PRA evaluations of all commercial nuclear power plants in the United States
for severe accident events. The USNRC was given the responsibility for establishing the methodology.
The seismic event was one of the primary concerns. In fact, the Mechanical-Structural Engineering
Branch of the USNRC allocated 25% of its funding for 1985 to 1987 (4.79 million dollars) to Seismic
Fragility and Seismic Margin programs (LaPay and Bohm, 1986). The USNRC wanted a means of
making regulatory decisions that did not result in unnecessary modifications or plant shutdown. Trial
guidelines for performing seismic margin reviews of nuclear power plants were developed and recommended
to the USNRC (Prassinos et al., 1986). A trial review using these guidelines was performed
for the Maine Yankee Atomic Power Station (Prassinos et al., 1987; Moore et al., 1987; Ravindra et
al., 1987).
On November 23, 1988 the USNRC issued Generic Letter 88-20 to nuclear power plant utilities and
operators, requesting that an individual plant examination (IPE) for internally initiated events be per-
formed (USNRC, 1989). This letter was written as part of the Severe Accident Policy. In 1990 the
USNRC issued Supplement 4 to Generic Letter 88-20 requesting an Individual Plant Examination of
External Events (IPEEE) for plant-specific external event-initiated severe accident vulnerabilities. Note
that "external events" include natural hazards such as earthquakes, tornadoes, and hurricanes. The
USNRC issued a procedural and submittal guidance document (USNRC, 1991) for IPEEE programs.
Probabilistic risk assessment procedures, seismic margin methodology, deterministic methods, and suc-
cess path processes are recognized for evaluation purposes.
With the USNRC's recognition of probability methods, they are increasingly accepted not only for
earthquake-related issues but also for other loading conditions: to address licensing issues, to revise
industry codes and practices related to the nuclear power industry, and to define maintenance and design
upgrade programs. In this chapter, applications of probability methods in nuclear power plant design
and qualification are discussed. In the sections that follow, probabilistic risk
assessment, seismic design, containment reliability, limit state analysis of reinforced concrete structures,
probability-based load combinations, risk-based inspection and maintenance, and pressure vessels and
piping reliability are addressed.
2.1. Notations
E Modulus of elasticity
Load factor associated with earthquake
Load factor i
Load factor associated with pressure loading
Resistance factor associated with reinforced concrete containment limit state
Resistance factor associated with limit state j
fy Yield stress
Resistance factor associated with shear wall limit state
Live load
Li Load i
ln Natural logarithm
m Bending moment
Pa Accident pressure loading
Pr Probability
Nominal structural resistance associated with reinforced concrete containment limit state
Nominal structural resistance associated with limit state j
Nominal structural resistance associated with shear wall limit state
Stress factor
Strength factor
Standard deviation associated with FSi
σ Membrane stress
2.2. Abbreviations

3. PROBABILISTIC RISK ASSESSMENT
The PRA Procedures Guide issued by the USNRC (1983) is one of the primary references used for
defining performance procedures for PRA application to nuclear power plants.
Probabilistic risk assessment uses fault tree analysis and defined initiating events to quantify the
potential risk of specific nuclear power plants. Probabilistic risk assessments estimate the probability of
core damage, probability of radioactive releases, and/or the overall risk (financial, health, and fatalities)
due to a variety of internally initiated events (such as loss of coolant accidents [LOCAs]) and external
events (such as earthquakes).
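To make the fault tree arithmetic concrete, the following sketch combines component probabilities through AND/OR gates into a plant-level frequency estimate; the event names and numbers are illustrative only and are not taken from any actual PRA.

```python
# Hypothetical two-train fault tree: core damage requires the initiating
# event AND failure of both redundant trains, where each train fails if
# any of its independent basic events occurs (an OR gate).

def or_gate(probs):
    # Rare-event approximation: P(A or B) ~ sum of the probabilities
    return min(1.0, sum(probs))

def and_gate(probs):
    # Independent events: probabilities multiply
    p = 1.0
    for q in probs:
        p *= q
    return p

# Hypothetical annual frequency and failure probabilities
initiating_event = 1e-2          # e.g., an initiating event, per year
train_a = or_gate([1e-3, 5e-4])  # pump fails OR valve fails to open
train_b = or_gate([1e-3, 5e-4])  # identical redundant train

core_damage_freq = and_gate([initiating_event, train_a, train_b])
print(core_damage_freq)          # about 2.25e-8 per year
```

A full PRA replaces these few gates with event trees and fault trees containing thousands of basic events, but the quantification at each gate follows this pattern.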
The use of PRA evaluations for nuclear power plants is not just limited to licensing issues, but may
be used to address other issues as well; PRAs are performed for a variety of reasons.
Probabilistic risk assessment methodology has been used, to give just a few examples, to study shutdown
decay heat removal vulnerabilities (Sanders et al., 1987), to develop systems and fragilities screening
guidelines (Budnitz et al., 1985; Amico, 1988), and to identify seismically risk-sensitive equipment
(Azarm, et al., 1983). A review of the probabilistic risk assessment evolution within the nuclear industry
is given in a paper by Apostolakis and Kafka (1992). The methodology that has been developed for
nuclear power plants has also been employed for evaluation of nonnuclear facilities (Cassidy et al.,
1987).
Some of the PRAs require and use probabilistic structural mechanics extensively, whereas it is used
in only a limited way in others; it depends on the scope of the PRA and the importance of structures
in the accident sequences considered (examples are given in Chapter 9 of this handbook). Probabilistic
risk assessment of accidents initiated by earthquakes is one area in which probabilistic structural analysis
plays an important role. Probabilistic structural analysis is also required for hurricane- or tornado-
initiated accidents. Seismic risk assessment and tornado-hurricane risk assessment are discussed in
Chapters 19 and 20, respectively.
4. SEISMIC DESIGN

The high confidence of low probability of failure (HCLPF) capacity of a component or structure can be
expressed in terms of the median ground-acceleration capacity Am and the logarithmic standard deviations
βR (randomness) and βU (uncertainty):

HCLPF = Am exp[-1.65(βR + βU)]     (21-1)

or, using the composite variability βC = (βR² + βU²)^(1/2),

HCLPF = Am exp(-2.33 βC)     (21-2)

The above equations yield essentially the same results if βR and βU are equal. Error in Eq. (21-2)
increases as the difference between βR and βU increases. For further discussion of the methodology used
to calculate HCLPF values, the reader should consult Chapter 19.
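As a numerical check on Eqs. (21-1) and (21-2), the sketch below assumes the standard lognormal fragility form, HCLPF = Am exp[-1.65(βR + βU)] and HCLPF = Am exp(-2.33 βC); the median capacity and β values are hypothetical.

```python
import math

def hclpf_separate(a_m, beta_r, beta_u):
    # Eq. (21-1): separate randomness and uncertainty variabilities
    return a_m * math.exp(-1.65 * (beta_r + beta_u))

def hclpf_composite(a_m, beta_r, beta_u):
    # Eq. (21-2): composite variability beta_c = sqrt(br^2 + bu^2)
    beta_c = math.sqrt(beta_r**2 + beta_u**2)
    return a_m * math.exp(-2.33 * beta_c)

# Hypothetical fragility: median ground-acceleration capacity 0.8g
a_m, beta_r, beta_u = 0.8, 0.30, 0.30
print(hclpf_separate(a_m, beta_r, beta_u))   # ~0.297g
print(hclpf_composite(a_m, beta_r, beta_u))  # ~0.298g, nearly identical

# With unequal beta_r and beta_u the two estimates diverge
print(hclpf_separate(a_m, 0.20, 0.45))
print(hclpf_composite(a_m, 0.20, 0.45))
```

With βR = βU the exponents are -3.30βR and about -3.295βR, which is why the two expressions agree so closely in that case.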
Plant-specific and generic fragility data have been published and can be used to define fragility data
for similar types of equipment and structures (Cover et al., 1985; Gergely, 1986; Bandyopadhyay
et al., 1987, 1990, 1991). Additional references and tables of generic fragility data are given in
Chapter 19.
The HCLPF values can be calculated either by the conservative deterministic failure margin method,
or by using the fragility data and HCLPF Eq. (21-1) or (21-2). The HCLPF values are used to establish
the seismic ruggedness (integrity) of a component, system, or structure. The USNRC (1991) has estab-
lished seismic review levels ("bins") that are used to assess seismic integrity. The USNRC screening
levels are defined for two groups. Plants are assigned to one of the groups on the basis of plant site
seismic characteristics. Plants that do not fall into either group have special
evaluation requirements. The two groups are the 0.3g HCLPF Screening Level and the 0.5g HCLPF
Screening Level. If the HCLPF value falls below the screening level, an evaluation is made to determine
if the item should be upgraded.
Probability of failure obtained from fragility analysis can be used as a measure in performing the
above activities.
In two of the cited references (LaPay et al., 1985; LaPay and Chay, 1988) the probabilistic fragility
analysis methodology was applied to piping systems. It was found, as expected, that supports are the
most critical elements of a piping system. The probability of support failure can be more than 100 times
greater than the probability of piping failure. In the 1988 paper a viable method is given for establishing
a screening criterion based on probabilities for categorizing, prioritizing, and minimizing work scope
for seismic upgrade programs. For the example given in that paper, a piping system does not have to
be reanalyzed if the annual failure probabilities of the piping and supports are less than or equal to the
respective Level A limits computed using the screening criteria.
The limit at which reanalysis is required is defined using the probability of occurrence of the seismic
event (hazard probability). The Level B limits are obtained by dividing the Level A limits by the hazard
probability; the example given in the 1988 paper uses a seismic hazard probability of 4 × 10⁻⁴ event/year.
If the annual failure probability of either the piping or the supports is equal to or above its Level B
limit, reanalysis is required, with modifications performed as soon as possible. If the probability falls
between the Level A and Level B limits, reanalysis and modifications can be postponed.
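The screening arithmetic can be sketched as follows; the Level A limits used here are placeholders (the paper's actual values are not reproduced), and only the hazard probability of 4 × 10⁻⁴ event/year comes from the example in the text.

```python
# Level B limits are obtained by dividing the Level A annual
# failure-probability limits by the seismic hazard probability.

hazard_probability = 4e-4        # events per year, from the cited example

# Placeholder Level A limits (annual failure probability); the actual
# values from LaPay and Chay (1988) are not reproduced here.
level_a = {"piping": 1e-7, "supports": 1e-6}

level_b = {item: p / hazard_probability for item, p in level_a.items()}

def screening_decision(item, annual_failure_prob):
    if annual_failure_prob <= level_a[item]:
        return "no reanalysis required"
    elif annual_failure_prob < level_b[item]:
        return "reanalysis and modifications may be postponed"
    else:
        return "reanalyze and modify as soon as possible"

print(level_b["piping"])                     # 2.5e-4
print(screening_decision("supports", 5e-5))  # falls in the postponement band
```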
An example application of the use of structural reliability methods is the evaluation of the effects
of adding flexibility through snubber elimination on the probability of pipe breaks in a typical nuclear
plant piping system. The results of this probabilistic study (Lu, 1984) of a 10-in. safety injection piping
system with one snubber show that the reliability of a more flexible system (i.e., a system with fewer
snubbers) could be higher than that of the original stiff system if the snubber failure rate is on the order
of 10% or more. For a 0 or 10% rate of snubber failure in either the locked mode, when it is supposed
to be free, or in the free mode, when it is supposed to be locked, the reported break probabilities are
given in Table 21-1. This work is important because it establishes a relationship between snubber failure
rate and piping failure probability. It also shows the possibility of improving piping reliability by
removing snubbers in some situations.
5. CONTAINMENT RELIABILITY
During the incident at the Three Mile Island (TMI) Nuclear Power Plant, high pressures (approximately
28 psi) were recorded within the containment structure. This high pressure, although within design
limits, initiated government programs to obtain a better understanding and quantification of containment
structural strength. Reliability analyses are used to quantify the risk of containment structural failure.
It is noted in USNRC (1987) " ... that early containment failure cannot be ruled out with high confi-
dence for any of the plants." Containments have been evaluated for different loading conditions, with
overpressure and seismic events being the primary loadings investigated in containment reliability
evaluations. In essence, three steps are involved in a reliability analysis of a containment (Greimann and
Fanous, 1985). These steps are as follows.
1. Describe structural parameters and loads in statistical terms: The structural parameters include yield,
ultimate stress, Young's modulus, and structural dimensions. The statistical terms of interest are mean,
standard deviation, and type of distribution. Statistical data associated with concrete and steel have been
published (Greimann and Fanous, 1985; Healey et al., 1980; Hwang et al., 1985a,b,c, 1986). Statistical
data on uncertainties associated with structural dimensions may be found in Greimann and Fanous (1985).
Statistics of loading conditions are also required; see Hwang et al. (1983b) for some typical data.
2. Perform structural analysis: Structural analyses are performed in order to define the load levels at which
failure occurs. Studies have been performed using different failure criteria (Greimann et al., 1982a,b;
Greimann and Fanous, 1985; Kawakami et al., 1984). For example, failure can be defined as the initiation
of leakage due to failure of the shell, penetrations, and/or anchor bolts; buckling of the shell; or gross
deformation of the containment shell, with strain or deformation ductility reaching a specified limit. These
structural analyses are used in the reliability analysis (third step), in conjunction with Monte Carlo simu-
lation or other reliability analysis techniques.
3. Perform probabilistic analysis: Probability methods are used to reflect the statistical characteristics of struc-
tural parameters and loads. Statistical parameters are also used to reflect uncertainties (unknowns), known
errors or limitations, and assumptions in the analyses performed. Examples of areas in which statistical
parameters would be considered are those related to model, analysis method, and boundary conditions.
Examples of the types of methods that may be used are advanced first-order second-moment methods and
Monte Carlo simulation methods. These methods are discussed in the containment reliability reports
(Greimann and Fanous, 1985; Greimann et al., 1982b). These and other methods of reliability analysis
are also described in Chapters 2 through 5 of this handbook.

Table 21-1 Piping Failure Probabilities with and without Snubber Removal

Type of supports removed        0% snubber failure rate        10% snubber failure rate
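The three numbered steps above can be sketched with a toy Monte Carlo simulation; all distributions and numerical values below are illustrative assumptions, not data for any actual containment.

```python
import random

random.seed(1)

# Step 1: statistical description of capacity and load (hypothetical values)
cap_mean, cap_cov = 60.0, 0.12   # shell pressure capacity, psi
dem_mean, dem_cov = 40.0, 0.20   # accident pressure demand, psi

# Steps 2 and 3: the structural "analysis" here is reduced to the limit
# state g = capacity - demand; Monte Carlo sampling estimates P(g < 0).
n = 200_000
failures = 0
for _ in range(n):
    capacity = random.gauss(cap_mean, cap_cov * cap_mean)
    demand = random.gauss(dem_mean, dem_cov * dem_mean)
    if capacity < demand:
        failures += 1

print(failures / n)  # estimated conditional failure probability, about 0.03
```

An advanced first-order second-moment calculation on the same limit state gives a safety index of (60 - 40)/(7.2² + 8.0²)^(1/2) ≈ 1.86, consistent with the simulated probability.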
Some of the early containment reliability work is discussed by Greimann and Fanous (1985). It was
noted that only a limited number of analyses had been performed. These analyses are summarized as
follows.
• A prestressed concrete containment (Zion) reliability analysis was performed by Sargent and Lundy Engineers
(1980). The only random parameter considered in this evaluation is material strength.
• Gou and Love (1983) performed the reliability analysis of a standard Mark III steel containment, in which
the variability of the leakage pressure was calculated from the coefficient of variations of steel yield and
ultimate strengths.
• Two approaches to define flexural stress limit state surfaces are described by Chang et al. (1983) for reinforced
concrete containments; these limit states can be used in reliability analyses.
• Fardis and Nacar (1984) describe a best estimate analysis that defines the ultimate capacity of a reinforced
concrete containment. The dominant random variables in this study are the reinforcing bar strengths and the
mechanical splices.
In 1986 Pepper et al. prepared a report on containment reliability assessment, using the latest
developments associated with concrete containments. The reliability analysis method used a tangential
shear limit state for reinforced concrete containments and a flexure limit state that included strain limits
on the tensile reinforcements. The material strength variations are included in the analysis using the
Latin hypercube sampling technique. Results from containment reliability analyses are reported consid-
ering dead and seismic loading. One of the containments evaluated was located at Indian Point Unit 3.
It was found that the containment structure could experience an earthquake four times the design basis
earthquake (DBE = 0.15g) with an annual probability of 1.9 × 10⁻⁷ for tangential shear failure,
and of 1.3 × 10⁻⁵ for exceeding the flexure limit state. The results reported by Kawakami et al. (1984)
from a reliability assessment of the Indian Point Unit 3 reinforced concrete containment structure give
lower failure probabilities (flexure limit state). The Brookhaven National Laboratory reliability analysis
method (Hwang et al., 1983a) was applied. Dead load (D), accidental pressure (P), and seismic loading
(E) were considered. Results were given for two different failure modes (limit states):
• Structural failure based on the onset of yielding in tension or compression of the reinforcement, and/or on
the attainment of crushing strength by the extreme fiber of the cross-section; it is assumed that the stress-
strain relationship is linear
• Failure based on reinforced concrete ultimate strength theory: The extreme fiber of the cross-section has a
maximum compressive strain equal to 0.003, with yielding of the reinforcement (rebars) permitted. A nonlinear
stress distribution is allowed
The results are presented in Table 21-2 for the different load combinations. The results are presented
in terms of unconditional limit state probabilities representative of the total containment life. As seen
from Table 21-2, the failure state defined by reinforced concrete ultimate strength theory results in
failure probabilities two orders of magnitude lower than those for the limit state defined by the onset of
rebar yielding or concrete crushing. Accident pressure is the controlling load.
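Latin hypercube sampling, as used in the Pepper et al. study, stratifies each random variable into equiprobable intervals and draws one sample per interval; the minimal sketch below assumes a normal material strength with a hypothetical mean and coefficient of variation.

```python
import random
from statistics import NormalDist, mean

random.seed(2)

def latin_hypercube_normal(mu, sigma, n):
    # Divide the distribution into n equiprobable strata, draw one sample
    # from each stratum, then shuffle so stratum order carries no meaning.
    nd = NormalDist(mu, sigma)
    samples = [nd.inv_cdf((i + random.random()) / n) for i in range(n)]
    random.shuffle(samples)
    return samples

# Assumed rebar yield strength: mean 66 ksi, 5% coefficient of variation
strengths = latin_hypercube_normal(66.0, 0.05 * 66.0, 20)
print(round(mean(strengths), 1))  # stratification keeps the mean near 66
```

Because every stratum is sampled exactly once, the tails of the distribution are covered with far fewer samples than plain Monte Carlo sampling would require.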
Results from steel containment pressure reliability analyses have been published. In Table 21-3 is
provided a summary of containment mean pressure (static) capacities for different plants. Also provided
are the coefficients of variation associated with the pressure capacity. The results are given for general
information only and care should be taken when interpreting their meaning. The results reported by
Greimann et al. (1982b) are considered preliminary and do not represent dynamic resistance. The
pressure loading was considered to be uniform and static. The mean pressures represent the mean
resistance for the shell only. Failure modes associated with the penetration, anchorages, and other types
were not analyzed. Gou and Love (1983) investigated the integrity of the steel containment with failure
defined by buckling, ultimate tensile strength, or the development of a crack. It was found that plastic
yielding would occur before buckling. The critical region of the Mark III containment was determined
to be the dome/knuckle area. Greimann et al. (1982a) performed a best estimate and uncertainty as-
sessment using second-moment reliability methods. Loading was assumed to be applied as a uniform
static internal pressure. Gross deformation of the containment shell defined failure.
Greimann et al. (1982a) studied the overall reliability of the containment structure system that
included the stiffened shell, penetration, and anchor bolts. Two ice condenser containment vessels were
studied (Sequoyah and McGuire Nuclear Power Plants). These containments had design pressures be-
tween 12 and 15 psi. The Sequoyah containment vessel is not as thick as the McGuire vessel (9/16 and
1/2 in. versus 11/16 in.). Safety indices (defined in the second-moment method as the ratio of the mean
to the standard deviation of the failure function; see Chapter 3 for a more detailed
discussion of safety indices) and failure probabilities were obtained for a pressure of 28 psi, which is
equivalent to the maximum pressure experienced at TMI. These results are summarized in Table 21-4
for information.
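Under the second-moment formulation, a safety index β corresponds to a failure probability Φ(-β), where Φ is the standard normal distribution function; the quick check below illustrates this general relation (the bounds in Table 21-4 reflect additional analysis detail and are not reproduced exactly).

```python
from statistics import NormalDist

def safety_index(mean_margin, std_margin):
    # Second-moment safety index: mean of the failure (margin) function
    # divided by its standard deviation
    return mean_margin / std_margin

def failure_probability(beta):
    # First-order mapping from safety index to failure probability
    return NormalDist().cdf(-beta)

# Safety indices in the range reported for the two containments
for beta in (4.5, 5.4, 6.1):
    print(beta, failure_probability(beta))
```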
Table 21-3 Steel Containment Mean Failure Pressure and Coefficient of Variation

6. LIMIT STATE ANALYSIS OF REINFORCED CONCRETE STRUCTURES

In the deterministic design and evaluation of concrete structures, allowable values are defined that cannot
be exceeded for the design loading conditions to which the structures are subjected. Using these allow-
able values, the safety factors, the actual material properties, and the structure load resistance charac-
teristics, the fragility limits can be defined that represent the boundary beyond which unacceptable
structural behavior will occur. These boundaries define limit states. Brookhaven National Laboratory
has done an extensive amount of work in developing probability-based limit state analysis for appli-
cations to concrete structures in nuclear plants. As examples, this method has been applied to the safety
evaluation of reinforced concrete containment and shear wall structures (e.g., Hwang et ai., 1986; Wang
et ai., 1986).
Limit states can be defined by analysis formulations or from test results. Generally, analysis methods
are employed that are based on limits defined from tests. For most nuclear power plant structures more
than one limit state is defined. The limit states are dependent on the construction, as well as on the
loading to which the structure is subjected. Examples of failure modes that can be used to define limit
states are flexure, shear, buckling, and any other limiting behavior state. In general, for the concrete
structures the limit states are defined by flexure and shear. A typical flexure limit state surface for a
containment is shown in Fig. 21-1 (Hwang et al., 1984). It is defined in terms of a membrane stress
(σ) and a bending moment (m), both taken with respect to the cross-section center. Figure 21-1 is discussed
in the cited reference:
In this figure, point "a" is determined from a stress state of uniform compression and point "e" from uniform
tension. Points "c" and "c'" are the so-called "balanced point" at which a concrete compression strain of
0.003 and a steel tension strain of fy/Es are reached simultaneously. Furthermore, lines abc and ab'c' [in Fig.
(21-1)] represent compression failure and lines cde and c'd'e represent tension failure.
Other typical limit stress state surfaces are given in the literature for different conditions and struc-
tures (Hwang et al., 1987). They basically have the shape of a polygon.
The method of analysis used with the defined limit states depends on the characteristics of the loading conditions and on the complexity of the structures. The methods can be static methods, dynamic methods, or even nonlinear techniques (Takada and Shinozuka, 1989). Simple beam models are sometimes used, as well as dynamic "stick models"; at other times complex finite element models are employed.

Safety indices and failure probabilities for containment components of the Sequoyah and McGuire plants:

                         Sequoyah                             McGuire
Description              Safety index  Failure probability   Safety index  Failure probability
Stiffened shell
  Upper bound            4.5           5.0 × 10⁻⁶            6.1           5.6 × 10⁻⁹
  Lower bound            4.4           3.7 × 10⁻⁶            5.7           5.1 × 10⁻¹⁰
Penetrations
  Upper bound            5.1           6.1 × 10⁻⁷            6.1           6.9 × 10⁻⁸
  Lower bound            4.9           1.3 × 10⁻⁷            5.2           7.0 × 10⁻¹⁰
Anchor bolts             5.4           4.0 × 10⁻⁸            10.1          2.3 × 10⁻²⁴

Figure 21-1. Typical flexure limit state surface for a containment structure. (Source: Hwang et al. [1984].)
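The safety indices and failure probabilities paired in the table above are related, to first order, by Pf = Φ(-β), where Φ is the standard normal distribution function. A minimal sketch of this conversion, using only the standard library:

```python
import math

def failure_probability(beta):
    """First-order relation Pf = Phi(-beta), via the complementary error function."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

def safety_index(pf):
    """Invert Pf = Phi(-beta) by bisection (avoids any SciPy dependency)."""
    lo, hi = 0.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if failure_probability(mid) > pf:   # Pf decreases as beta grows
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"{failure_probability(4.5):.1e}")   # about 3.4e-06
print(f"{safety_index(5.0e-6):.2f}")       # about 4.42
```

The small discrepancies with the tabulated pairs (e.g., β = 4.5 listed alongside Pf = 5.0 × 10⁻⁶) reflect rounding and method differences in the original studies.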
Given in Tables 21-5 and 21-6 are some typical statistics of loadings and material properties used
in the evaluation of concrete structures. Information is obtained from different sources and is so noted.
Different methods such as Monte Carlo simulation, direct integration, and numerical integration are
used in the reliability analysis. Failure probabilities associated with limit states are used as a measure
of safety. For general information, typical failure probabilities associated with reinforced concrete con-
tainments are summarized for different limit states in Table 21-7. The probabilities are given in terms
of conditional limit state probabilities (on the condition that a certain loading combination has occurred).
The probabilities given do not include the probabilities of loading occurrence. Differences in the limit
state probabilities are apparent, due to different limit state assumptions. The lower probabilities are due
Table 21-5. Typical statistics of loadings used in the evaluation of concrete structures

Loading                      Probability distribution                       Coefficient of variation   Ref.ᵃ
Dead load                    Deterministic                                  0.0                        1-5
                             Normal about nominal value, time invariant     0.07                       6
Live load                    Deterministic                                  0.0                        1,5
                             Gamma, with mean equal to 0.36 of the
                             nominal design value                           0.54                       6
Prestressing                 Normal                                         0.04                       3
Accident pressure (LOCA)     Gaussian                                       0.12                       3
                             Gaussian                                       0.2                        1,3
Accident pressure
(hydrogen explosion)         Gaussian                                       0.2                        1
Earthquake                   Gaussian process                               —                          1-6

ᵃReferences: 1 - Shinozuka et al. (1984); 2 - Hwang et al. (1985a); 3 - Hwang et al. (1986); 4 - Pepper et al. (1986); 5 - Hwang et al. (1984); 6 - Wang et al. (1986).
Applications in Nuclear Power Plant Structures 521
Table 21-6. Typical statistics of material properties used in the evaluation of concrete structures

Material property            Probability distribution   Coefficient of variation   Ref.ᵃ
Concrete
  Compressive strength       Normal                     0.1 to 0.2                 1
                             Normal                     0.14                       2,3,6
                             Normal                     0.11 to 0.13               4
                             Normal                     0.14 to 0.20               5
  Modulus of elasticity      Normal                     —                          1
                             Deterministic              —                          2,4
Reinforcing bars
  Yield strength             Beta                       0.09 to 0.11               1
                             Lognormal                  0.11                       2,3,6
                             Lognormal                  0.04 to 0.07               4
  Ultimate strength          Beta                       0.09 to 0.11               1
  Modulus of elasticity      Normal                     0.03                       1
                             Deterministic              —                          2,4

ᵃReferences: 1 - Healey et al. (1980); 2 - Wang et al. (1986); 3 - Hwang et al. (1986); 4 - Pepper et al. (1986); 5 - Ellingwood and Hwang (1985); 6 - Hwang et al. (1985a).
to limit states that approximate ultimate strength behavior, whereas the higher probabilities represent
limit states defined by the onset of yielding. These probabilities are given to provide orders of magnitude
as well as relative relationships between different loading conditions. The work reported by Pepper et
al. (1986) found that the ultimate flexure limit state is controlling; the tangential shear limit state has
limit state probabilities several orders of magnitude lower. It is noted that the information provided in
Table 21-7 should be used with caution when applied to a specific case. The results reported are related
to typical containment structures as well as plant-specific containments and the specific source should
be consulted to determine if the results can be applied to the specific case in question.
Table 21-7. Conditional limit state probabilities for reinforced concrete containments

Loading                      Conditional limit state probability     Limit stateᵃ   Reference
Dead, live, hydrogen burn    1.72 × 10⁻¹                             1              Shinozuka et al. (1984)
Dead, LOCA                   4.49 × 10⁻⁵                                            Hwang et al. (1986)
Dead, live, earthquake       6.55 × 10⁻⁴ to 1.20 × 10⁻³              2              Shinozuka et al. (1984)
or dead, earthquake          1.3 × 10⁻⁶                              3              Hwang et al. (1984)
                             3.16 × 10⁻⁸                             3 and 4        Hwang et al. (1986)
                             2.29 × 10⁻⁸ and 3.75 × 10⁻⁸             3              Pepper et al. (1986)
                             7.33 × 10⁻¹⁰ and 5.22 × 10⁻¹⁴           4              Pepper et al. (1986)
Dead, live,                  5.09 × 10⁻⁴ to 1.14 × 10⁻³              2              Shinozuka et al. (1984)
earthquake, LOCA             2.26 × 10⁻⁴                             3 and 4        Hwang et al. (1986)
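As a rough illustration of the Monte Carlo approach mentioned above, the sketch below estimates a conditional limit state probability P[R < S] for a single limit state. The distributions, means, and coefficients of variation are assumed for illustration only (loosely echoing the styles of Tables 21-5 and 21-6); they are not the models of the cited analyses.

```python
import math, random

random.seed(12345)  # fixed seed so the sketch is repeatable

def lognormal(mean, cov):
    """Sample a lognormal variable with a given mean and coefficient of variation."""
    sigma_ln = math.sqrt(math.log(1.0 + cov ** 2))
    mu_ln = math.log(mean) - 0.5 * sigma_ln ** 2
    return math.exp(random.gauss(mu_ln, sigma_ln))

def mc_limit_state_probability(n_trials=100_000):
    """Crude Monte Carlo estimate of the conditional probability P[R < S]
    for a single limit state, given that the loading has occurred."""
    failures = 0
    for _ in range(n_trials):
        resistance = lognormal(mean=100.0, cov=0.14)    # e.g., flexural capacity
        load = random.gauss(50.0, 50.0 * 0.20)          # e.g., accident pressure effect
        if resistance < load:
            failures += 1
    return failures / n_trials

print(mc_limit_state_probability())
```

With these assumed numbers the probability is of order 10⁻³, so even 10⁵ trials see only a few dozen failures; for the much smaller probabilities in Table 21-7, variance reduction or first-order reliability methods are typically preferred over crude sampling.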
Limit state probabilities have been determined for other concrete structures as well. For shear walls,
unconditional probabilities associated with 40-year nuclear plant life have been calculated (Hwang et
al., 1986) for flexure (6.06 X 10- 11) and shear (9.86 X 10- 10). Limit state probabilities have been used
to provide recommended load factors for use in future code revisions, and for load combination criteria
for Korean nuclear concrete containment structures (Cho and Han, 1989); see also Section 7.3.
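The step from a conditional limit state probability to an unconditional lifetime value can be sketched as follows, assuming load occurrences form a Poisson process; the occurrence rate and conditional probability used here are illustrative only, not values from the cited studies.

```python
import math

def unconditional_probability(p_cond, rate_per_year, years=40.0):
    """Unconditional limit state probability over a service life, assuming
    qualifying load occurrences follow a Poisson process with the given
    annual rate, and each occurrence independently causes failure with
    probability p_cond."""
    return 1.0 - math.exp(-rate_per_year * years * p_cond)

# Illustrative numbers: conditional limit state probability of 1e-6 and
# one qualifying event per 100 years, over a 40-year plant life.
p = unconditional_probability(p_cond=1.0e-6, rate_per_year=0.01)
print(f"{p:.2e}")  # about 4.00e-07, essentially lambda * T * p_cond
```

For probabilities this small the exact expression and the linearized product λ·T·p_cond are indistinguishable.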
Studies have been made on the effect of soil-structure interaction on limit state probabilities. The
following conclusions are from work performed by Pires et al. (1985).
• There is a reduction in limit state probabilities with the consideration of soil-structure interaction using mean
value soil and structural material properties.
• Consideration of structural material property variations (uncertainty) in the soil-structure interaction provides
limit state probabilities similar to those derived without consideration of the soil-structure interaction effect.
• Large uncertainties in soil properties can have a large effect on limit state probabilities.
1984). As part of this study the combination of SSE and LOCA was investigated (Lu et al., 1981;
Harris et al., 1982).
A major LOCA load contributor is the double-ended guillotine break (DEGB, total circumferential severance) of a primary coolant pipe. Studies have concentrated on the DEGB of the primary coolant, main steam, and feedwater lines at the location of the steam generator. Both the direct DEGB due to crack growth and the indirect DEGB due to seismic failure of component supports were studied. Summaries of results from different studies, which considered various aspects, are given in Tables 21-9 and 21-10.
Table 21-9 gives representative DEGB and crack probabilities of reactor coolant loop piping. These results by Woo et al. (1984) for Westinghouse pressurized water reactor (PWR) plants east of the Rocky Mountains show that the probability of a direct DEGB is small compared to that of a pipe leak. They also
Table 21-10. Indirect Double-Ended Guillotine Break Probabilities per Plant Year
determined that earthquakes contribute very little to the probabilities of a direct DEGB for these plants
studied. Studies of west coast Westinghouse PWR plants are reported by Chinn et al. (1985). The best-estimate leak and DEGB probabilities (10⁻⁸ per plant year for leak, 10⁻¹¹ per plant year for DEGB) are similar to those determined for east coast Westinghouse PWR plants (see Table 21-9). Holman and Chou (1985) have similarly found that the probability of a direct, as well as an indirect, DEGB occurring in Westinghouse PWR reactor coolant loop piping is very low, and that the DEGB should therefore be eliminated from the design basis events for these types of plants. The USNRC (1984) reports that the probability of a direct DEGB in
a Combustion Engineering PWR plant is equally low. Further, it is reported in this reference that the
probability of a seismic event causing a direct DEGB in Combustion Engineering reactor coolant loop
piping is negligible. It has also been found that the probability of an indirectly induced DEGB occurring in a Babcock and Wilcox reactor coolant piping system as a result of earthquakes is very small (Ravindra et al., 1985c).
Table 21-10 summarizes typical 90% confidence and median probabilities of indirect DEGB due to
seismically induced structural failure. Results given in the table reflect BWR plants as well as Babcock
and Wilcox (B&W), Westinghouse, and Combustion Engineering (CE) PWR plants. The cited references
should be consulted to determine applicability to specific cases.
In these studies it has been found that, in the design and evaluation of reactor coolant loop piping, the DEGB is not credible for certain plants (e.g., Westinghouse pressurized water reactors). For these cases, the USNRC has approved combining the SSE and LOCA peak loads by the square-root-sum-of-squares method in the evaluation of the primary coolant loop piping. In the future it is anticipated that SSE and LOCA loads will not be combined at all, because of the very low probability of their simultaneous occurrence.
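The square-root-sum-of-squares combination referred to above can be sketched in a few lines; the peak load values here are hypothetical, chosen only to contrast SRSS with the more conservative absolute-sum combination.

```python
import math

def srss(*peaks):
    """Square-root-sum-of-squares combination of peak responses."""
    return math.sqrt(sum(p * p for p in peaks))

# Hypothetical peak pipe loads from SSE and LOCA (consistent units):
sse_peak, loca_peak = 300.0, 400.0

print(srss(sse_peak, loca_peak))   # 500.0
print(sse_peak + loca_peak)        # 700.0, the absolute-sum combination
```

SRSS is appropriate when the two peak responses are very unlikely to occur at the same instant, which is the premise behind the approval described above.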
Normal loads
  Dead weight
  Live load
  Operational loads, including thermal effects
  Snow, rain, ice
Extreme loads
  Seismic
  Wind from tornado or hurricane
  Missiles from tornado
  Pressure due to accident
  Emergency and faulted condition loadings
  Airplane crash
A large amount of structural behavior data is available for the normal loadings; this is not true for structural response to extreme loadings, because of the scarcity of actual data and the lack of public domain documentation. As the above listing shows, many of the loads to be considered in the design of nuclear power plants are extreme loadings. Therefore, additional conservatism is introduced into the design criteria to address the uncertainties. However, it has been recognized that better load combination criteria can be obtained using probabilistic methods (Hwang et al., 1983a; Ellingwood, 1983; Ravindra et al., 1985d).
Probabilistic methods could provide approximately uniform reliabilities for different types of struc-
tures and loading conditions. Probabilistic methods could also be used to eliminate load combinations
that are unrealistic or of extremely low probability.
It is pointed out by Ellingwood (1983) that working stress design and limit state design are not philosophically consistent. Working stress methods are usually not relevant for determining structural response near safety-related limit states, because of nonlinear behavior. Further, multilevel stress checks are not a desirable way to control deformations, and they deny the designer a full appreciation of structural behavior. Nonuniform limit states can lead to overdesign, as well as to unsafe conditions when safety factors are applied to small differences between two large loads.
Work is proceeding to achieve better design criteria that address the shortcomings discussed above. In defining proposed loading criteria, emphasis is placed on maintaining the present form, which is familiar to designers and is based on the deterministic methodology used in various codes of the American Concrete Institute (ACI), American Institute of Steel Construction (AISC), and American National Standards Institute (ANSI). While keeping the present form of the loading criteria, the load factors in those criteria are defined from probabilistic methods.
A procedure for establishing load combination criteria, using limit state design with a consistent measure of reliability, is given by Hwang et al. (1985a). This procedure is representative of the general approach being used to recommend new criteria, and is summarized as follows.
The load factors are determined by mathematical optimization procedures (e.g., maximum descent
method) and an established objective (merit) function that measures the difference between target and
computed limit state probabilities.
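The calibration step can be caricatured in a few lines: choose the load factor that minimizes an objective measuring the (log) difference between the target and computed limit state probabilities. The reliability model below, a lognormal-format safety index with an assumed resistance factor, bias, and coefficients of variation, is a deliberate simplification of such a procedure, not the formulation or the optimization method of the cited report.

```python
import math

def pf_from_beta(beta):
    """First-order relation between safety index and failure probability."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

def computed_pf(load_factor, phi=0.9, bias_r=1.05, cov_r=0.14, cov_s=0.20):
    """Approximate limit state probability for a member proportioned so that
    phi * Rn = F * Ln.  Uses the classical lognormal safety-index formula
    beta = ln(mean_R / mean_S) / sqrt(cov_R^2 + cov_S^2); the resistance
    factor, bias, and COVs are assumed values."""
    mean_ratio = bias_r * load_factor / phi   # mean R / mean S, taking Ln as mean S
    beta = math.log(mean_ratio) / math.sqrt(cov_r ** 2 + cov_s ** 2)
    return pf_from_beta(beta)

def calibrate_load_factor(target_pf):
    """Grid search for the load factor whose computed limit state probability
    is closest, in log terms, to the target: a crude stand-in for the
    optimization step of the procedure."""
    grid = [1.0 + 0.01 * k for k in range(150)]
    objective = lambda f: (math.log(computed_pf(f)) - math.log(target_pf)) ** 2
    return min(grid, key=objective)

print(calibrate_load_factor(1.0e-4))
```

In the actual procedure the objective covers several loading combinations and limit states simultaneously, and a formal optimization method replaces the grid search.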
Generally a load combination format that follows load and resistance factor design (LRFD) is employed. It is as follows:

    Σi Fi Li ≤ φj Rj

where Fi is load factor i, Li is load i, φj is the resistance factor associated with limit state j, and Rj is the nominal structural resistance associated with limit state j.
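A checker for an LRFD-format inequality of this kind might look like the following sketch; the loads, factors, and resistance are illustrative values, not the calibrated factors of Table 21-11.

```python
def lrfd_check(loads, load_factors, phi, nominal_resistance):
    """Evaluate the LRFD-format inequality  sum_i F_i * L_i <= phi_j * R_j.
    Returns the factored demand, the factored capacity, and pass/fail."""
    demand = sum(f * l for f, l in zip(load_factors, loads))
    capacity = phi * nominal_resistance
    return demand, capacity, demand <= capacity

# Illustrative check of one section for dead, live, and accident pressure
# effects; all numbers here are hypothetical.
demand, capacity, ok = lrfd_check(
    loads=[120.0, 40.0, 60.0],       # D, L, Pa load effects (consistent units)
    load_factors=[1.0, 1.0, 1.2],
    phi=0.9,
    nominal_resistance=300.0,
)
print(demand, capacity, ok)  # 232.0 270.0 True
```

One such inequality is checked per limit state j, each with its own resistance factor and nominal resistance.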
Because the solution of the proposed optimization problem is based on objective functions that
measure probability, it is necessary to introduce sources of uncertainty. Examples in which variability
(uncertainty) may be introduced are material strengths, loadings, geometry, support characteristics, in-
elastic effects, and construction tolerances. Some of the loads are sometimes considered deterministic
(e.g., dead load and live load) depending on how well they are known. This is done to simplify the
optimization procedure without having an impact on the accuracy of the results. Areas of uncertainty
are discussed by Hwang et al. (1985a, 1985b) and Rodabaugh (1984). In the report by Hwang et al.
(1985a), containment geometry is considered deterministic, concrete strength is considered to have a
normal distribution, and the steel reinforcement strength is defined to have a lognormal distribution.
Rodabaugh (1984) identifies sources of uncertainty associated with piping systems on the basis of their
significance (small, medium, large, and uncertain). Healey et al. (1980) discuss uncertainties in concrete
and steel properties, dimensions of concrete members, dynamic characteristics, and structural modeling.
In Hwang et al. (1985a), ultimate strength theory is used to define limit states for reinforced concrete
containments. The limit state is defined as a function of membrane stress and bending moments. A
maximum concrete compressive strain at the extreme fiber of a cross-section equal to 0.003, with rebar
yielding permitted, defines the limit state. In Hwang et al. (1985b), flexure and shear stress conditions are used to define limit states for the evaluation of shear wall criteria. Flexure follows the ACI ultimate strength methodology, whereas shear behavior is based on experimental results. The dead and live load factors are preset to simplify the optimization.
The probabilistic nature of the loading is also represented. In Hwang et al. (1985a) accident pressure
is considered as a rectangular pulse following Poisson's law, the earthquake is defined by seismic hazard
curves with the ground acceleration "idealized as a segment of a zero-mean stationary Gaussian process,
described in the frequency domain by a Kanai-Tajimi power spectral density," and the dead load is
treated as deterministic.
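The quoted ground motion idealization can be sketched with a spectral-representation simulation: samples of a zero-mean stationary Gaussian process whose one-sided power spectral density has the Kanai-Tajimi form. All numerical parameters below (filter frequency, damping, intensity, frequency range) are assumptions chosen for illustration, not site-specific values.

```python
import math, random

def kanai_tajimi_psd(omega, omega_g=15.0, zeta_g=0.6, s0=0.03):
    """One-sided Kanai-Tajimi power spectral density.  The filter frequency
    omega_g, damping zeta_g, and intensity s0 are illustrative assumptions."""
    r = (omega / omega_g) ** 2
    return s0 * (1.0 + 4.0 * zeta_g ** 2 * r) / ((1.0 - r) ** 2 + 4.0 * zeta_g ** 2 * r)

def simulate(times, n_freq=400, omega_max=60.0, seed=7):
    """Spectral-representation sample of the zero-mean stationary Gaussian
    process: a(t) = sum_k sqrt(2 S(w_k) dw) cos(w_k t + phi_k), with the
    phases phi_k independent and uniform on [0, 2*pi)."""
    rng = random.Random(seed)
    d_omega = omega_max / n_freq
    omegas = [(k + 0.5) * d_omega for k in range(n_freq)]
    amps = [math.sqrt(2.0 * kanai_tajimi_psd(w) * d_omega) for w in omegas]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_freq)]
    return [sum(a * math.cos(w * t + p) for a, w, p in zip(amps, omegas, phases))
            for t in times]

# One two-second sample of the idealized ground acceleration:
accel = simulate([0.01 * i for i in range(200)])
print(len(accel))  # 200
```

In practice such samples would be modulated by an envelope to represent the finite strong-motion duration, and many samples would be generated for the reliability computation.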
For general reference, Table 21-11 gives some results as defined by Hwang et al. (1985a,b). The loading combinations considered dead load (D), live load (L), earthquake loading (E), accident pressure (Pa), load factors Fi, resistance factors φj, and nominal structural resistance Rj.
The proposed criteria as given in Table 21-11 have been investigated by comparing designs by
existing codes and by the proposed code. The findings from some of these studies are summarized
below.
Table 21-11. Target limit state probabilities and resulting load factors

                                              Target limit state
Case                                          probability           Load factor   Reference
Shear wall
  Example A (Eq. [21-4])                      1.0 × 10⁻⁶            1.4           Hwang et al. (1985b)
  Example B (Eq. [21-5])                      1.0 × 10⁻⁵            1.2           Hwang et al. (1985b)
Concrete containment (Eqs. [21-6] to [21-8])
  Accidental pressure (Pa)                    1.0 × 10⁻⁴            1.2           Hwang et al. (1985a)
  Seismicᵃ (E)                                1.0 × 10⁻⁶            1.7           Hwang et al. (1985a)

ᵃThe maximum seismic event at the site is assumed to be two times the SSE.
The proposed criteria by Hwang et al. (1985b) for shear walls are more stringent than the criteria given in the ACI Code ACI 349 or the USNRC Standard Review Plan (SRP) 3.8.4. Two example shear walls were evaluated for comparison with the ACI and SRP design requirements. It was found that the wall thickness and reinforcement ratio for flexure were larger than required by the existing ACI and SRP design criteria, whereas the reinforcement ratio required for shear loading was similar.
The proposed criteria for concrete containment structures (Hwang et al., 1985a) were compared to
the ASME code criteria. It was found that the proposed criteria for concrete containments resulted in
less reinforcement for one case. Further, the load factor for accidental pressure (1.2) is smaller than
those used in the current ASME design criteria (1.5 and 1.25).
Load factors can be generated for other loadings and structures. Hwang et al. (1985a) discuss pre-
stress loadings, operating and design basis accident temperatures, equipment loads, operating live loads,
impact loading, tornado loads, wind, and snow loads.
New reliability measures, based on safety domain concepts in which reliability is expressed in both probability and occurrence-number domains, are being employed to study load combination methods (Katukura et al., 1991). Using these concepts, load combination methods can be compared, and recommendations made to improve the existing criteria.
In recent years there has been considerable activity in developing and using risk-based inspection and
maintenance procedures. This subject is discussed in Chapter 17 of this handbook.
Nuclear power plants contain numerous pressure vessels and piping systems. Some probabilistic structural mechanics applications in containment reliability and piping reliability are presented in the preceding sections of this chapter. More applications, especially those relating to probabilistic fracture mechanics, are discussed in Chapter 22 of this handbook.
The importance of probability-based methods is recognized in the nuclear industry. Currently, these
methods are being used to address seismic issues, improve maintenance programs, evaluate structural
reliability, modify existing industry codes, and define plant upgrade programs. It is anticipated that they will be used to a greater extent in the future, to address plant aging, design deficiencies, and licensing issues affecting plant operation.
REFERENCES
AMICO, P. J. (1988). An Approach to the Quantification of Seismic Margins in Nuclear Power Plants: The Impor-
tance of BWR Plant Systems and Functions to Seismic Margins. Report No. NUREG/CR-5076 (UCRL-
15985 RD, RM). Washington, D.C.: U.S. Nuclear Regulatory Commission.
ANG, A. H.-S., and W. H. TANG (1975). Probability Concepts in Engineering Planning and Design, Vol. I: Basic
Principles. New York: John Wiley & Sons.
APOSTOLAKIS, G., and P. KAFKA (1992). Advances in probabilistic safety assessment. Nuclear Engineering and
Design 134(1):141-148.
AZARM, M., J. BOCCIO, and P. FARAHZAD (1983). Identification of Seismically Risk Sensitive Systems and Com-
ponents in Nuclear Power Plants, Feasibility Study. Report No. NUREG/CR-3357 (BNL-NUREG-51683).
Washington, D.C.: U.S. Nuclear Regulatory Commission.
BANDYOPADHYAY, K. K., C. H. HOFMAYER, M. K. KASSIR, and S. E. PEPPER (1987). Seismic Fragility of Nuclear
Power Plant Components (Phase II), Motor Control Center, Switchboard, Panelboard and Power Supply,
Vol. 2. Report No. NUREG/CR-4659 (BNL-NUREG-52007). Washington, D.C.: U.S. Nuclear Regulatory
Commission.
BANDYOPADHYAY, K. K., C. H. HOFMAYER, M. K. KASSIR, and S. E. PEPPER (1990). Seismic Fragility of Nuclear
Power Plant Components (Phase II), Switchgear, I&C Panels (NSSS) and Relays, Vol. 3. Report No. NUREG/
CR-4659 (BNL-NUREG-52007). Washington, D.C.: U.S. Nuclear Regulatory Commission.
BANDYOPADHYAY, K. K., C. H. HOFMAYER, M. K. KASSIR, and S. SHTEYNAGART (1991). Seismic Fragility of
Nuclear Power Plant Components (Phase II), A Fragility Handbook of Eighteen Components, Vol. 4. Report No.
NUREG/CR-4659 (BNL-NUREG-52007). Washington, D.C.: U.S. Nuclear Regulatory Commission.
BENJAMIN, J. R, and C. A. CORNELL (1970). Probability, Statistics, and Decision for Civil Engineers. New York:
McGraw-Hill.
BOGARD, W. T., and T. C. ESSELMAN (1978). Combination of Safe Shutdown Earthquake and Loss-of-Coolant
Accident Responses for Faulted Condition Evaluation of Nuclear Power Plants. Report No. WCAP-9279.
Pittsburgh, Pennsylvania: Westinghouse Electric Corporation.
BOHN, M. P., L. C. SHIEH, J. E. WELLS, L. C. COVER, D. L. BERNREUTER, and J. C. CHEN (1984). Application of
the SSMRP Methodology to the Seismic Risk at the Zion Nuclear Power Plant. Report No. NUREG/CR-
3428 (UCRL-55483 RD & RM). Washington, D.C.: U.S. Nuclear Regulatory Commission.
BUDNITZ, R. J., P. J. AMICO, C. A. CORNELL, W. J. HALL, R. P. KENNEDY, J. W. REED, and M. SHINOZUKA (1985).
An Approach to the Quantification of Seismic Margins in Nuclear Power Plants. Report No. NUREG/CR-
4334 (UCID-20444). Washington, D.C.: U.S. Nuclear Regulatory Commission.
CASSIDY, B. G., W. S. LAPAY, and D. F. PADDLEFORD (1987). Probabilistic seismic risk to non-nuclear facilities.
In: Transactions of topical papers presented in Houston, Texas. McLean, Virginia: Society for Risk Analysis,
pp. 195-203.
CHANG, M., P. BROWN, H. HWANG, and T. TAKO (1983). Structural modeling and limit state identification for
reliability analysis of RC containment structure. In: Transactions of the 7th International Conference on
Structural Mechanics in Reactor Technology, Vol. M. Amsterdam: North-Holland Physics Publishing.
CHINN, D. J., G. S. HOLMAN, T. Y. LO, and R. W. MENSING (1985). Probability of Pipe Failure in the Reactor
Coolant Loops of Westinghouse PWR Plants. Report No. NUREG/CR-3660 (UCID-19988), Vol. 4, Pipe Failure
Induced by Crack Growth in West Coast Plants. Washington, D.C.: U.S. Nuclear Regulatory Commission.
CHO, H. N., and B. K. HAN (1989). A practical reliability-based design code calibration for containment structures.
In: Transactions of the 10th International Conference on Structural Mechanics in Reactor Technology, Vol.
M, Structural Reliability. A. H. Hadjian (Ed.). Los Angeles: American Association for Structural Mechanics
in Reactor Technology, pp. 85-90.
COVER, L. E., M. P. BOHN, R. D. CAMPBELL, and D. A. WESLEY (1985). Handbook of Nuclear Power Plant Seismic
Fragilities. Report No. NUREG/CR-3559 (UCRL-53455 RD & RM). Washington, D.C.: U.S. Nuclear
Regulatory Commission.
ELLINGWOOD, B. (1983). Probability Based Safety Checking of Nuclear Plant Structures. Report No. NUREG/CR-
3628 (BNL-NUREG-51737). Washington, D.C.: U.S. Nuclear Regulatory Commission.
ELLINGWOOD, B., and H. HWANG (1985). Probabilistic descriptions of resistance of safety-related structures in
nuclear plants. Nuclear Engineering and Design 88:169-178.
EPRI (Electric Power Research Institute) (1988). A Methodology for Assessment of Nuclear Power Plant Seismic
Margin. Report No. NP-6041. Palo Alto, California: Electric Power Research Institute.
FARDIS, M. N., and A. NACAR (1984). Static ultimate capacity of R/C containment. Journal of Structural Engi-
neering, ASCE 110 (ST5):961-977.
GERGELY, P. (1986). Seismic fragility of reinforced concrete structures in nuclear facilities. Nuclear Engineering
and Design 94:9-24.
GORMAN, M. R., L. A. BERGMAN, and J. D. STEVENSON (1980). Probability of failure of piping designed to seis-
mically induced upset, emergency and faulted condition (service conditions B, C and D) ASME code limits.
Nuclear Engineering and Design 57:215-220.
GOU, P. F., and J. E. LOVE (1983). Determination of pressure carrying capability of the containment structural
system for the Mark III standard plant. In: Transactions of the 7th International Conference on Structural
Mechanics in Reactor Technology, Vol. J. Amsterdam: North-Holland Physics Publishing, pp. 81-88.
GREIMANN, L., and F. FANOUS (1985). Reliability of containments under overpressure. In: Pressure Vessel and
Piping Technology-A Decade of Progress. C. Sundararajan (Ed.). New York: American Society of Me-
chanical Engineers, pp. 821-834.
GREIMANN, L., F. FANOUS, A. SABRI, D. KETELAAR, A. WOLDE-TINSAE, and D. BLUHM (1982a). Reliability
Analysis of Containment Strength, Sequoyah and McGuire Ice Condenser Containments. Report No.
NUREG/CR-1891 (IS-4753). Washington, D.C.: U.S. Nuclear Regulatory Commission.
GREIMANN, L. G., F. FANOUS, A. WOLDE-TINSAE, D. KETELAAR, T. LIN, and D. BLUHM (1982b). Reliability Analysis
of Steel-Containment Strength. Report No. NUREG/CR-2442. Washington, D.C.: U.S. Nuclear Regulatory
Commission.
GUZY, D. J., and J. E. RICHARDSON (1988). Seismic margin issues. Nuclear Engineering and Design 107:77-81.
HARDY, G. S., R. D. CAMPBELL, and M. K. RAVINDRA (1986). Probability of Failure in BWR Reactor Coolant
Piping. Report No. NUREG/CR-4792 (UCID-20914). Vol. 4, Guillotine Break Indirectly Induced by Earth-
quakes. Washington, D.C.: U.S. Nuclear Regulatory Commission.
HARRIS, D.O., E. Y. LIM, and D. D. DEDHIA (1981). Probability of Pipe Fracture in the Primary Coolant Loop
of a PWR Plant. Report No. NUREG/CR-2189 (UCID-18967). Vol. 5, Probabilistic Fracture Mechanics
Analysis, Load Combination Program, Project I Final Report. Washington, D.C.: U.S. Nuclear Regulatory
Commission.
HARRIS, D.O., E. Y. LIM, D. D. DEDHIA, H. H. WOO, and C. K. CHOU (1982). Fracture Mechanics Models De-
veloped for Piping Reliability Assessment in Light Water Reactors. Report No. NUREG/CR-2301 (UCRL-
15490). Washington, D.C.: U.S. Nuclear Regulatory Commission.
HEALEY, J. J., S. T. WU, and M. MURGA (1980). Structural Building Response Review. Report No. NUREG/CR-
1423, Vol. 1. Washington, D.C.: U.S. Nuclear Regulatory Commission.
HOLMAN, G. S., and C. K. CHOU (1985). Probability of Pipe Failure in the Reactor Coolant Loops of Westinghouse
PWR Plants. Report No. NUREG/CR-3660 (UCID-19988), Vol. 1, Summary report. Washington, D.C.: U.S.
Nuclear Regulatory Commission.
HOLMAN, G. S., T. Lo, and C. K. CHOU (1985). Probability of Pipe Failure in the Reactor Coolant Loops of
Combustion Engineering PWR Plants. Report No. NUREG/CR-3663 (UCRL-53500), Vol. 1, Summary Re-
port. Washington, D.C.: U.S. Nuclear Regulatory Commission.
HWANG, H., P. C. WANG, M. SHOOMAN, and M. REICH (1983a). A Consensus Estimation Study of Nuclear Power
Plant Structural Loads. Report No. NUREG/CR-51678. Washington, D.C.: U.S. Nuclear Regulatory
Commission.
HWANG, H., P. C. WANG, and M. REICH (1983b). Probabilistic Models for Operational and Accidental Loads on
Seismic Category I Structures. Report No. NUREG/CR-3342. Washington, D.C.: U.S. Nuclear Regulatory
Commission.
HWANG, H., M. REICH, and M. SHINOZUKA (1984). Structural reliability analysis and seismic risk assessment. In:
Seismic Events Probabilistic Risk Assessments. P.-Y. Chen and C. I. Grimes (Eds.). New York: American
Society of Mechanical Engineers, pp. 39-44.
HWANG, H., S. KAGAMI, M. REICH, B. ELLINGWOOD, M. SHINOZUKA, and C. S. KAO (1985a). Probability Based
Load Combination Criteria for Design of Concrete Containment Structures. Report No. NUREG/CR-3876
(BNL-NUREG-51795). Washington, D.C.: U.S. Nuclear Regulatory Commission.
HWANG, H., K. NAKAI, M. REICH, B. ELLINGWOOD, and M. SHINOZUKA (1985b). Probability Based Load Com-
bination Criteria for Design of Shear Wall Structure. Report No. NUREG/CR-4328 (BNL-NUREG-51905
AN, RD). Washington, D.C.: U.S. Nuclear Regulatory Commission.
HWANG, H., S. KAGAMI, M. REICH, B. ELLINGWOOD, and M. SHINOZUKA (1985c). Probability-based load com-
binations for the design of concrete containments. Nuclear Engineering and Design 86:327-339.
HWANG, H., M. REICH, B. ELLINGWOOD, and M. SHINOZUKA (1986). Reliability Assessment and Probability Based
Design of Reinforced Concrete Containments and Shear Walls. Report No. NUREG/CR-3957 (BNL-
NUREG-51956 AN, RD). Washington, D.C.: U.S. Nuclear Regulatory Commission.
HWANG, H., S. E. PEPPER, and N. C. CHOKSHI (1987). Fragility assessment of containment tangential shear failure.
In: Transactions of the 9th International Conference on Structural Mechanics in Reactor Technology, Vol.
M. Rotterdam, the Netherlands: A. A. Balkema, pp. 237-242.
KATUKURA, H., H. MORISHITA, M. MIZUTANI, S. OGAWA, and T. TAKADA (1991). A study on the applicability of
load combination methods. In: Transactions of the 11th International Conference on Structural Mechanics
in Reactor Technology, Vol. M. Tokyo: Atomic Energy Society of Japan, pp. 187-192.
KAWAKAMI, J., H. HWANG, M. T. CHANG, and M. REICH (1984). Reliability Assessment of Indian Point Unit 3
Containment Structure. Report No. NUREG/CR-3641 (BNL-NUREG-51740). Washington, D.C.: U.S. Nu-
clear Regulatory Commission.
KENNEDY, R. P., R. D. CAMPBELL, and R. P. KASSAWARA (1988). A seismic margin assessment procedure. Nuclear
Engineering and Design 107:61-75.
KENNEDY, R. P., R. C. MURRAY, M. K. RAVINDRA, J. W. REED, and J. D. STEVENSON (1989). Assessment of Seismic
Margin Calculation Method. Report No. NUREG/CR-5270 (UCID-21572). Washington, D.C.: U.S. Nuclear
Regulatory Commission.
KOLONAY, J. F., and H. T. MAGUIRE, JR. (1991). The Westinghouse approach to reliability-centered maintenance.
In: Proceedings of 1991 Nuclear Power Plant & Facility Maintenance Topical Meeting (Salt Lake City,
Utah), Vol. 2, April 7-11. La Grange Park, Illinois: American Nuclear Society, pp. 72-79.
LAPAY, W., and G. BOHM (1986). Seismic requalification advancements for nuclear power plants. In: Proceedings
of the American Power Conference. Chicago, Illinois: Illinois Institute of Technology.
LAPAY, W. S., and S. C. CHAY (1988). Application of fragility analysis methods in a seismic upgrade program. In:
Seismic Engineering-1988. T. H. Liu, L. H. Geraets, Y. K. Tang, and S. Mirga (Eds.). New York: The
American Society of Mechanical Engineers.
LAPAY, W. S., B. A. BISHOP, and S. C. CHAY (1985). Reserve strength as a measure of fragility. In: Proceedings
of the Workshop on Seismic and Dynamic Fragility of Nuclear Power Plant Components. C. H. Hofmayer
and K. K. Bandyopadhyay (Eds.). Report No. NUREG/CP-0070 (BNL-NUREG-51924). Washington, D.C.:
U.S. Nuclear Regulatory Commission.
Lo, T. Y., R. W. MENSING, H. H. WOO, and G. S. HOLMAN (1984a). Probability of Pipe Failure in the Reactor
Coolant Loops of Combustion Engineering PWR Plants. Report No. NUREG/CR-3663 (UCRL-53500). Vol.
2, Pipe Failure Induced by Crack Growth. Washington, D.C.: U.S. Nuclear Regulatory Commission.
LO, T., H. H. WOO, G. S. HOLMAN, and C. K. CHOU (1984b). Failure probability of PWR reactor coolant loop
piping. In: Seismic Events Probabilistic Risk Assessments. P.-Y. Chen and C. I. Grimes (Eds.). New York:
American Society of Mechanical Engineers, pp. 11-25.
Lu, S. C. (1984). Failure reliability analysis for stiff versus flexible piping. In: Probabilistic Structural Analysis.
New York: American Society of Mechanical Engineers, pp. 101-108.
Lu, S., R. D. STREIT, and C. K. CHOU (1981). Probability of Pipe Fracture in the Primary Coolant Loop of a PWR
Plant. Report No. NUREG/CR-2189 (UCID-18967). Vol. 1, Summary Load Combination Program Project
I Final Report. Washington, D.C.: U.S. Nuclear Regulatory Commission.
MATTU, R. K. (1980). Methodology for Combining Dynamic Responses. Report No. NUREG-0484, Rev. 1. Wash-
ington, D.C.: U.S. Nuclear Regulatory Commission.
MENSING, R., and L. GEORGE (1981). Probability of Pipe Fracture in the Primary Coolant Loop of a PWR Plant.
Report No. NUREG/CR-2189 (UCID-18967). Vol. 7, System Failure Probability Analysis, Load Combination
Program Project I Final Report. Washington, D.C.: U.S. Nuclear Regulatory Commission.
MOORE, D. L., et al. (1987). Seismic Margin Review of the Maine Yankee Atomic Power Station. Report No.
NUREG/CR-4826 (UCID-20948). Washington, D.C.: U.S. Nuclear Regulatory Commission.
PEPPER, S., H. HWANG, and 1. PIRES (1986). Reliability Assessment of Containment Tangential Shear Failure.
Report No. NUREG/CR-4366 (BNL-NUREG-51913). Washington, D.C.: U.S. Nuclear Regulatory
Commission.
PIRES, J., H. HWANG, and M. REICH (1985). Reliability Evaluation of Containments Soil-Structure Interaction.
Report No. NUREG/CR-4329 (BNL-NUREG-51906). Washington, D.C.: U.S. Nuclear Regulatory
Commission.
PRASSINOS, P. G., M. K. RAVINDRA, and J. B. SAVY (1986). Recommendations to the Nuclear Regulatory Commis-
sion on Trial Guidelines for Seismic Margin Reviews of Nuclear Power Plants. Draft Report for comments.
Report No. NUREG/CR-4482 (UCID-20579). Washington, D.C.: U.S. Nuclear Regulatory Commission.
PRASSINOS, P. G., R. C. MURRAY, and G. E. CUMMINGS (1987). Seismic Margin Review of the Maine Yankee Atomic
Power Station, Summary Report. Report No. NUREG/CR-4826 (UCID-20948), Vol. 1. Washington, D.C.:
U.S. Nuclear Regulatory Commission.
RAVINDRA, M. K. (1988). Seismic probabilistic risk assessment and its impact on margin studies. Nuclear Engi-
neering and Design 107:51-59.
RAVINDRA, M. K., R. D. CAMPBELL, R. P. KENNEDY, and H. BANON (1984). Assessment of seismic-induced pipe
break probability in PWR reactor coolant loop. In: Seismic Events Probabilistic Risk Assessments. P.-Y. Chen
and C. I. Grimes (Eds.). New York: American Society of Mechanical Engineers, pp. 1-10.
RAVINDRA, M. K., R. D. CAMPBELL, R. P. KENNEDY, and H. BANON (1985a). Probability of Pipe Failure in the
Reactor Coolant Loops of Combustion Engineering PWR Plants. Report No. NUREG/CR-3663, Vol. 3.
Washington, D.C.: U.S. Nuclear Regulatory Commission.
RAVINDRA, M. K., R. D. CAMPBELL, R. P. KENNEDY, and H. BANON (1985b). Probability of Pipe Failure in the
Reactor Coolant Loop of Westinghouse PWR Plants. Report No. NUREG/CR-3660 (UCID-19988). Vol. 3.
Washington, D.C.: U.S. Nuclear Regulatory Commission.
RAVINDRA, M. K., R. D. CAMPBELL, R. R. KIPP, and R. H. SUES (1985c). Probability of Pipe Failure in the Reactor
Coolant Loops of Babcock and Wilcox PWR Plants. Report No. NUREG/CR-4290 (UCRL-53644). Vol. 2.
Washington, D.C.: U.S. Nuclear Regulatory Commission.
RAVINDRA, M. K., C. K. CHOU, T. Y. LO, and M. W. SCHWARTZ (1985d). Probability-based load combinations. In:
Pressure Vessel and Piping Technology, 1985: A Decade of Progress. C. (Raj) Sundararajan (Ed.). New York:
American Society of Mechanical Engineers, pp. 821-834.
RAVINDRA, M. K., G. S. HARDY, P. S. HASHIMOTO, and M. J. GRIFFIN (1987). Seismic Margin Review of the Maine
Yankee Atomic Power Station. Report No. NUREG/CR-4826 (UCID-20948), Vol. 3. Washington, D.C.: U.S.
Nuclear Regulatory Commission.
REICH, M., P. C. WANG, J. CURRERI, S. HOU, and H. GORADIA (1980). Review of Methods and Criteria for Dynamic
Combination in Piping Systems. Report No. NUREG/CR-1330. Washington, D.C.: U.S. Nuclear Regulatory
Commission.
REICH, M., H. HWANG, M. SHINOZUKA, B. ELLINGWOOD, and P. C. WANG (1982). Probability based load combi-
nations for design of category I structures. In: Proceedings of the 10th Water Reactor Safety Research
Information Meeting. Report No. NUREG/CP-0041, Vol. 5. Washington, D.C.: U.S. Nuclear Regulatory
Commission, pp. 107-108.
RODABAUGH, E. C. (1984). Sources of Uncertainty in the Calculation of Loads on Supports of Piping Systems.
Report No. NUREG/CR-3599 (ORNL/Sub/82-22252/2). Washington, D.C.: U.S. Nuclear Regulatory
Commission.
SANCAKTAR, S., and D. R. SHARP (1989). Use of probabilistic risk assessment and economic risk at the plant design
stage: An application. Nuclear Technology 84:315-318.
SANCAKTAR, S., and T. VAN DE VENNE (1990). Probabilistic risk assessment insights from new Westinghouse
pressurized water reactor design studies in 1982-1987. Nuclear Technology 91:112-117.
SANDERS, G. A., D. M. ERICSON, JR., and W. R. CRAMOND (1987). Shutdown Decay Heat Removal Analysis of a
Westinghouse 3-Loop Pressurized Water Reactor, Case Study. Report No. NUREG/CR-4762 (SAND86-
2377). Washington, D.C.: U.S. Nuclear Regulatory Commission.
SARGENT and LUNDY ENGINEERS (1980). Zion probabilistic safety study, Appendix 4.4.1: Primary Containment
Ultimate Capacity of Zion Nuclear Power Plant for Internal Pressure Load. Chicago, Illinois: Sargent and
Lundy Engineers for Commonwealth Edison Company.
SCHUELLER, G. I., and A. H.-S. ANG (1992). Advances in structural reliability. Nuclear Engineering and Design
134(1):121-140.
SCHWARTZ, M. W., M. K. RAVINDRA, C. A. CORNELL, and C. K. CHOU (1981). Load Combination Methodology
Development. Load Combination Program Project II Final Report. Report No. NUREG/CR-2087. Washing-
ton, D.C.: U.S. Nuclear Regulatory Commission.
SHINOZUKA, M., B. R. ELLINGWOOD, P. C. WANG, C. MEYER, Y. K. WEN, S. KAO, M. L. SHOOMAN, and A. F.
PHILIPPACOPOULOS (1981). Probability Based Load Criteria for the Design of Nuclear Structures: A Critical
Review of the State-of-the-Art. Report No. NUREG/CR-1979 (BNL-NUREG-51356 RD). Washington, D.C.:
U.S. Nuclear Regulatory Commission.
SHINOZUKA, M., H. HWANG, and M. REICH (1984). Reliability assessment of reinforced concrete containment
structures. Nuclear Engineering and Design 80:247-267.
SINGH, A. K., S. W. TAGART, and C. V. SUBRAMANIAN (1977). Technical Bases for the Use of the Square Root of
the Sum of Squares (SRSS) Method for Combining Dynamic Loads for Mark II Plants. Report No. NEDO-
24010. San Jose, California: General Electric.
STREIT, R. D. (1981a). Probability of Pipe Fracture in the Primary Coolant Loop of a PWR Plant. Report No.
NUREG/CR-2189. Vol. 6, Failure Mode Analysis Load Combination Program Project I Final Report. Wash-
ington, D.C.: U.S. Nuclear Regulatory Commission.
STREIT, R. D. (1981b). Probability of Pipe Fracture in the Primary Coolant Loop of a PWR Plant. Report No.
NUREG/CR-2189. Vol. 8, Pipe Fracture Indirectly Induced by an Earthquake, Load Combination Program,
Project I Final Report. Washington, D.C.: U.S. Nuclear Regulatory Commission.
TAKADA, T., and M. SHINOZUKA (1989). Reliability analysis of nonlinear MDOF dynamic systems. In: Transactions
of the 10th International Conference on Structural Mechanics in Reactor Technology, Vol. M. A. H. Hadjian
(Ed.). Los Angeles, California: American Association for Structural Mechanics in Reactor Technology, pp.
7-12.
USNRC (U.S. NUCLEAR REGULATORY COMMISSION) (1975). Reactor Safety Study: An Assessment of Accident
Risks in U.S. Commercial Nuclear Power Plants. Report No. WASH-1400 (NUREG-75/014). Washington,
D.C.: Nuclear Regulatory Commission.
USNRC (U.S. NUCLEAR REGULATORY COMMISSION) (1976). Combining Modal Responses and Spatial Components
in Seismic Response Analysis. Regulatory Guide 1.92, Revision 1. Washington, D.C.: U.S. Nuclear Regu-
latory Commission.
USNRC (U.S. NUCLEAR REGULATORY COMMISSION) (1983). PRA Procedures Guide, Vols. 1 and 2. Report No.
NUREG/CR-2300. Washington, D.C.: U.S. Nuclear Regulatory Commission.
USNRC (U.S. NUCLEAR REGULATORY COMMISSION) (1984). Report of the U.S. Nuclear Regulatory Commission Piping
Review Committee. Report No. NUREG-1061. Vol. 3, Evaluation of Potential for Pipe Breaks. Washington,
D.C.: U.S. Nuclear Regulatory Commission.
USNRC (U.S. NUCLEAR REGULATORY COMMISSION) (1985). Seismic Safety Research Program Plan. Report No.
NUREG-1147. Washington, D.C.: U.S. Nuclear Regulatory Commission.
USNRC (U.S. NUCLEAR REGULATORY COMMISSION) (1987). Reactor Risk Reference Document. Draft
Report No. NUREG-1150. Washington, D.C.: U.S. Nuclear Regulatory Commission.
USNRC (U.S. NUCLEAR REGULATORY COMMISSION) (1989). Individual Plant Examination: Submittal Guidance. Report
No. NUREG-1335. Washington, D.C.: U.S. Nuclear Regulatory Commission.
USNRC (U.S. NUCLEAR REGULATORY COMMISSION) (1991). Procedural and Submittal Guidance for Individual Plant
Examination of External Event (IPEEE) for Severe Accident Vulnerabilities. Report No. NUREG-1407.
Washington, D.C.: U.S. Nuclear Regulatory Commission.
WANG, P. C., J. CURRERI, M. SHOOMAN, Y. K. WANG, A. J. PHILIPPACOPOULOS, M. REICH, and M. SUBUDHI (1982).
Evaluation of Concurrent Peak Responses. Report No. NUREG/CR-2685. Washington, D.C.: U.S. Nuclear
Regulatory Commission.
WANG, P. C., H. HWANG, J. PIRES, K. NAKAI, and M. REICH (1986). Reliability Analysis of Shear Wall Structures.
Report No. NUREG/CR-4293 (BNL-NUREG-51900 AN-RD). Washington, D.C.: U.S. Nuclear Regulatory
Commission.
WELLS, J. E., L. L. GEORGE, and G. E. CUMMINGS (1984). Seismic Safety Margins Research Program. Phase I
Final Report-Systems Analysis (Project VII). Report No. NUREG/CR-2015, Vol. 8 (UCRL-53021, Vol. 8).
Washington, D.C.: U.S. Nuclear Regulatory Commission.
Woo, H. H., R. W. MENSING, and B. J. BENDA (1984). Probability of Pipe Failure in the Reactor Coolant Loops
of Westinghouse PWR Plants. Report No. NUREG/CR-3660, Vol. 2. Washington, D.C.: U.S. Nuclear Reg-
ulatory Commission.
22
APPLICATIONS IN PRESSURE
VESSELS AND PIPING
1. INTRODUCTION
There can be numerous uncertainties involved in assessing vessel or piping performance. Moreover,
sufficient data may not be available to address all of these uncertainties well enough for meaningful
predictions of performance to be made. Some of the key uncertainties in this category include the
following.
• Definition of design-limiting failure modes, such as loss of function or loss of structural integrity
• Design versus fabrication differences and variations in material properties
• Determination of the various degradation mechanisms that could be present
• Variations in environmental conditions, and in normal and transient loadings
• Availability of inspection and maintenance program data
• Accuracy of the inspection methods and interpretation of the data
• Accuracy of the methods and models to predict performance
There are two general methods of addressing these uncertainties when predicting the performance
of vessels and piping: deterministic and probabilistic. In the deterministic method, conservative data
and conservative assumptions in mechanistic degradation models and algorithms are employed to predict
a single, typically very conservative, performance attribute. Sometimes this result could be unacceptable
relative to design or performance improvement goals. In this case, or when there are insufficient data
to make even a meaningful deterministic prediction, the probabilistic methods provide an attractive
alternative or supplement to the more conventional deterministic methods. By considering the range
and effects of key uncertainties, a more realistic assessment of vessel or piping performance can be
made. Moreover, the effects of individual uncertainties can be quantified and used to identify which
mitigative actions or additional information would be most beneficial in reducing the probability of
unacceptable component performance.
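The contrast between the two methods can be seen in a minimal sketch with entirely hypothetical load and capacity statistics: stacking conservative bounds yields a single negative margin (an "unacceptable" result), while sampling the same quantities shows that the actual interference probability is small.

```python
import random

random.seed(0)

# Hypothetical load (applied stress) and capacity (strength) statistics, MPa:
load_mean, load_sd = 200.0, 20.0
cap_mean, cap_sd = 300.0, 25.0

# Deterministic method: stack conservative bounds and report one margin.
det_margin = (cap_mean - 3 * cap_sd) - (load_mean + 3 * load_sd)
print("deterministic margin (MPa):", det_margin)    # -35.0, i.e., "unacceptable"

# Probabilistic method: sample both quantities and estimate P(load > capacity).
n = 100_000
fails = sum(random.gauss(load_mean, load_sd) > random.gauss(cap_mean, cap_sd)
            for _ in range(n))
print("estimated failure probability:", fails / n)  # on the order of 1e-3
```

The deterministic answer and the probabilistic answer are not contradictory; the former is a bound, the latter a realistic estimate whose uncertainty contributions can be examined individually.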
At the plant level, probabilistic risk assessment (PRA) can be used to quantify and manage the risk
of adverse effects of pressure vessel or piping failures either as isolated events or in combination with
other component or subsystem failures. Probabilistic risk assessment is the scientific process of evalu-
ating the likelihood of adverse effects, such as injury, environmental damage, or financial loss at the
system or plant level (see Chapter 9 of this handbook). It is used primarily for loss prevention and to
identify where design or procedural changes are required to reduce the risk of unacceptable conse-
quences to a tolerable level. Probabilistic risk assessment can also be used to reduce the uncertainty in
defining which components and failure modes are of most concern relative to their potentially adverse
consequences. However, once the critical component failure modes have been identified, PRA cannot
address which alternatives would be most effective in reducing the subject failure probability.
To address the above need for realistic component failure probability values for input to a PRA
evaluation, the probability of each component structural failure mode is calculated using probabilistic
structural mechanics methods. This is especially important when the historical database for failures is
small or when the estimated range of failure probabilities is highly dependent on the assumed uncer-
tainties. Chapters 2 through 20 of this handbook describe the various methods and techniques that can
be used to evaluate structural failure probabilities of pressure vessels and piping. Several examples of
the use of probabilistic structural analysis to assess pressure vessel and piping reliabilities are discussed
in this chapter. Much of the early research, development, and applications were in the nuclear power
industry. For convenience of discussion, the applications are separated as pressure vessel and piping
applications and as nuclear and nonnuclear applications in the following sections.
2.1. Notations
a Crack length
b Material constant
Ct Creep crack driving force
q Material constant
RTNDT Reference nil-ductility transition temperature
t Time
2.2. Abbreviations
BWR Boiling water reactor
ISI In-service inspection
LOCA Loss of coolant accident
NDE Nondestructive examination
NRC U.S. Nuclear Regulatory Commission
PFM Probabilistic fracture mechanics
PRA Probabilistic risk assessment
PSM Probabilistic structural mechanics
PTS Pressurized thermal shock
PWR Pressurized water reactor
RPV Reactor pressure vessel
Pressurized thermal shock (PTS) events in a pressurized water reactor (PWR) are a class of short-lived,
time-varying events (transients) that result in a rapid and severe cooldown coincident with high or
increasing pressure in the reactor pressure vessel (RPV). A concern arises if the PTS transient produces
additional stresses in the beltline region of the RPV, where there is a reduced fracture resistance due to
neutron-induced irradiation embrittlement. If flaws are postulated to exist near the inner wall surface of
the vessel beltline region, where PTS-induced stresses are highest, a PTS event may produce propagation
of these flaws and potentially jeopardize the pressure boundary integrity of the reactor vessel. Figure
22-1 shows typical RPV beltline locations that are of concern with respect to the existence of potential
surface flaws and their propagation during postulated PTS transient loading.
As a result of developments in the early 1980s, reactor vessel integrity for PTS can be evaluated
using probabilistic structural mechanics (PSM) methods in combination with traditional deterministic
methods (Balkey and Furchi, 1984; Balkey et al., 1986). Pressurized thermal shock is of concern
[Figure 22-1 sketch: intermediate shell plate, reactor core, and the longitudinal and circumferential welds of the beltline region.]
Figure 22-1. Critical locations of the reactor vessel beltline region. (Source: Balkey and Furchi [1984]. Reprinted
with permission from the American Society of Mechanical Engineers.)
primarily in the RPV beltline region because this portion of the vessel can be subjected to both signif-
icant neutron irradiation and the sudden cool-down temperatures coincident with high-pressure loadings,
which produce high tensile stresses on the inside surface. In general, the low-alloy ferritic materials
used as pressure vessel steels show an increase in hardness and tensile properties and a decrease in
ductility and fracture toughness with significant amounts of neutron irradiation. This embrittlement is
characterized by the reference nil-ductility transition temperature (RTNDT), defined as the temperature
at which the material undergoes a transition from ductile to brittle behavior. During irradiation the
RTNDT increases from its initial value, which is determined at the time of vessel fabrication by a
destructive specimen testing procedure. The value of this RTNDT shift is characterized by a trend
curve, derived from toughness measurements of irradiated materials, that is a function of neutron
fluence and the chemical composition of key residual elements. In the vessel, the shift in RTNDT
will vary in the longitudinal and circumferential directions and through the vessel wall because of the
respective variations in vessel neutron fluence.
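As an illustration, one widely used trend-curve form is that of U.S. NRC Regulatory Guide 1.99, Revision 2, in which the shift is a chemistry factor (tabulated against copper and nickel content) times a fluence function. The chemistry factor below is a placeholder, not a value from the Guide's tables.

```python
import math

def rtndt_shift(cf_degF: float, fluence_1e19: float) -> float:
    """Shift in RT_NDT (deg F), Reg. Guide 1.99 Rev. 2 trend-curve form:
    dRT = CF * f^(0.28 - 0.10*log10 f), with f in units of 1e19 n/cm^2."""
    f = fluence_1e19
    return cf_degF * f ** (0.28 - 0.10 * math.log10(f))

# Placeholder chemistry factor for a high-copper weld (illustrative only):
cf = 150.0
for f in (0.5, 1.0, 3.0, 6.0):
    print(f, round(rtndt_shift(cf, f), 1))
```

Because the fluence exponent is less than one, the shift grows quickly at low fluence and more slowly thereafter, which is why flux reduction late in life buys less margin than early flux reduction.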
The value of RTNDT at a given time and given location in the vessel is used in fracture mechanics
calculations to determine whether an assumed flaw would propagate or arrest during a PTS event. In
the fracture mechanics calculations, flaws are conservatively postulated to exist at the inner surface in
the vessel beltline region. This is conservative because surface flaws are more limiting than internal
(buried) flaws and the beltline inner surface sees both the highest level of neutron irradiation and
the highest thermal stresses. Moreover, the flaws are conservatively oriented in the direction giving
the highest stresses due to pressure and bending loads.
To evaluate reactor vessel integrity during a postulated severe PTS transient and to demonstrate the
capability of continued operation, an RPV risk assessment is performed using PSM methods. Figure
22-2 shows schematically the various steps and interactions in the vessel PTS risk assessment process
(Balkey et at., 1986). The first step in the assessment is to use event tree analysis to identify the events
[Figure 22-2 flow chart: obtain plant data, construct event trees, and develop a thermal-hydraulic model of p, T vs. t; estimate the probability of events (PRA event sequence analysis) and the conditional probability of vessel failure P(F|E) (probabilistic fracture mechanics analysis); combine the two to estimate the frequency of vessel failure.]
Figure 22-2. Flow chart for evaluation of vessel failure risk due to pressurized thermal shock. (Source: Balkey et
al. [1986]. Reprinted with permission from the American Society of Mechanical Engineers.)
that could lead to a severe pressurized thermal shock of the RPV beltline and calculate their associated
frequencies. Event tree analysis is a system reliability analysis technique, which is usually performed
by systems engineers. (A discussion of event tree analysis and other system reliability analysis tech-
niques may be found in Chapter 9 of this handbook.) The next step is to determine the changes in
temperature and pressure with time during the PTS transient associated with the different event se-
quences identified in the previous step. The time histories for temperature and pressure are conservatively
characterized by a final temperature, an exponential decay time constant reflecting the rate of
cooldown, and a characteristic (maximum) pressure. (This transient thermal-hydraulic analysis is usually
performed by fluid systems engineers.) The cooldown reduces the fracture toughness in the embrittled
beltline region and, with the pressure, produces tensile stresses in that area of the vessel. If a crack exists
near the inner surface, a possibility exists that the crack could propagate through the wall, which is the
vessel failure mode of concern. The third step is to calculate the conditional probability of vessel failure
associated with each event sequence, given that the event sequence and resulting PTS transient occur.
This conditional failure probability is calculated as a function of inner surface RTNDT for a family of
PTS transients, using probabilistic fracture mechanics (PFM) analyses with Monte Carlo simulation
techniques (U.S. Nuclear Regulatory Commission [NRC], 1982; Balkey and Furchi, 1984; Turner et
al., 1984). The vessel properties treated as random variables include initial crack depth, initial RTNDT,
copper content, fluence, and the critical stress intensity values for flaw initiation and arrest. For these
calculations, the failure criterion is through-wall crack propagation for nonarresting flaws.
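The transient characterization (step two) and the PFM Monte Carlo loop (step three) can be sketched together as follows. Every distribution and constant here is an illustrative assumption rather than plant data; the initiation-toughness expression only borrows the familiar ASME transition-curve shape, and the arrest/reinitiation logic of the real analyses is omitted.

```python
import math
import random

random.seed(1)

def transient_temperature(t_min, T0=550.0, Tf=150.0, tau=10.0):
    """Idealized PTS cooldown: exponential decay from initial temperature T0
    to final temperature Tf with time constant tau (deg F, minutes; all
    values hypothetical)."""
    return Tf + (T0 - Tf) * math.exp(-t_min / tau)

def conditional_failure_prob(n=100_000, t_eval=30.0):
    """Sketch of a PFM Monte Carlo loop for one PTS transient, evaluated at a
    single time point; returns P(vessel fails | transient occurs)."""
    stress = 45.0                               # ksi, peak inner-surface tensile stress (assumed)
    temp = transient_temperature(t_eval)        # metal temperature at evaluation time
    fails = 0
    for _ in range(n):
        a = random.expovariate(1.0 / 0.08)      # initial crack depth, in. (assumed mean 0.08)
        rtndt = random.gauss(270.0, 30.0)       # irradiated surface RT_NDT, deg F (assumed)
        k_applied = stress * math.sqrt(math.pi * a)
        # Initiation toughness: ASME-type transition-curve shape, capped at an
        # assumed upper-shelf value of 200 ksi*sqrt(in):
        k_ic = min(33.2 + 20.734 * math.exp(0.02 * (temp - rtndt)), 200.0)
        if k_applied > k_ic:                    # initiation; arrest check omitted in this sketch
            fails += 1
    return fails / n

print(conditional_failure_prob())
```

In the real analyses this loop is repeated over the full transient time history and over the family of significant transients, and the failure criterion is through-wall propagation of nonarresting flaws rather than initiation alone.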
The final step in the PTS risk assessment is to calculate the total yearly probability of vessel failure
by multiplying the annual frequency (probability per year) of each significant PTS transient by the
conditional probability of vessel failure, given that the transient occurs. Figure 22-3 shows a plot of
total PTS failure frequency for a typical Westinghouse PWR vessel as a function of vessel RTNDT and
[Figure 22-3: total failure frequency (log scale) vs. surface RTNDT, 260 to 360°F, with curves for individual PTS transients such as excessive feedwater.]
Figure 22-3. Total frequency of significant flaw extension for various PTS transients in the Westinghouse Owner's
Group (WOG) vessel studies. (Source: Turner, Balkey and Phillips [1984]. Reprinted with permission from the
American Society of Mechanical Engineers.)
the types of PTS transients that are significant risk contributors. If needed, these results can be used to
define what mitigative actions are required to reduce overall PTS risk. Changes to operating procedures
and system modifications to reduce the frequency and severity of the dominant PTS events; fuel
management and vessel shielding to reduce neutron flux and lower the vessel RTNDT; and thermal
annealing to restore the fracture toughness can all be evaluated using this probabilistic methodology to
determine their effect on overall PTS risk.
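The final step's arithmetic is simply a frequency-weighted sum over the significant transients; the event frequencies and conditional probabilities below are hypothetical placeholders.

```python
# Hypothetical annual frequencies (per reactor-year) and conditional vessel
# failure probabilities for a few PTS event sequences:
transients = {
    "small steam-line break": (1.0e-3, 5.0e-5),
    "excessive feedwater":    (1.0e-2, 1.0e-6),
    "stuck-open valve":       (1.0e-4, 2.0e-4),
}

# Total failure frequency = sum over transients of frequency * P(fail | transient).
total = sum(freq * p_fail for freq, p_fail in transients.values())
print(f"total vessel failure frequency: {total:.2e} per reactor-year")  # 8.00e-08
```

The same decomposition shows where mitigation pays off: lowering a transient's frequency (procedures, system changes) and lowering its conditional failure probability (flux reduction, annealing) enter the product symmetrically.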
When combined with cost-benefit considerations, decisions can be made as to which measures are
most effective in maintaining the risk of reactor vessel failure due to PTS within acceptable levels
(NRC, 1987). A specific example of this type of application of PSM methods to nuclear reactor vessel
integrity involves a study (Moylan et al., 1987) to identify appropriate means to obtain 20 additional
years of operation beyond the planned 40 years. In this study, flux reduction by alternative fuel man-
agement schemes and reactor-internals modifications or replacement is first evaluated relative to its
effectiveness in meeting neutron embrittlement limitations. For the vessels studied, the primary concerns
are about exceeding the screening criteria on RTNDT for PTS (NRC, 1982) and the drop of the upper
shelf Charpy toughness below the 50 ft·lb limit, both during the additional 20 years of operation. A
scoping cost-benefit risk study, employing the previously described structural risk techniques, is used
to compare the effectiveness of the various flux reduction options with other options for maintaining
the desired level of vessel integrity during the extended operating period. The other options in this
study are reduction in the frequency and/or severity of the controlling PTS transients and a flux mon-
itoring program to reduce the uncertainty in accumulated vessel fluence.
A convenient graphical way of showing the effects of the proposed changes on vessel failure risk
that was used in a previous plant-specific study (Turner et al., 1984) is also used in this study of
extended operating potential. Results of the risk analysis are plotted in Fig. 22-4. Here the total risk is
represented by a plot of the frequency of occurrence of the limiting PTS transient with the level of
Figure 22-4. Effects of plant-specific measures on risk of vessel failure. (Source: Moylan et al. [1987]. Reprinted
with permission from the American Society of Mechanical Engineers.)
embrittlement when significant flaw extension would be predicted. The most convenient indication of
plant-specific embrittlement is RTNDT, as used in Fig. 22-4. In Fig. 22-4 the location of the X represents
the lowest value of RTNDT for which vessel failure is predicted to occur during the specific PTS event
being evaluated. The effectiveness of the proposed improvements is measured by the frequency margin
relative to (distance away from) the area of concern in Fig. 22-4. This area is the upper left-hand region,
which is above the appropriate safety goal and below the embrittlement corresponding to 20 additional
years of operation. If needed, this risk-based approach can be used to help identify and evaluate im-
provements in plant systems, instrumentation, materials, procedures, and training programs that reduce
the likelihood and consequences of PTS events. When evaluated with the flux reduction options and
implementation costs, decisions can be made as to the most cost-beneficial measures for maximizing
the time of reactor vessel operation.
Another unique feature of this plant-specific scoping risk study is the assessment of the potential for
low upper shelf toughness and its effects on vessel failure risk during postulated PTS events. The results
of this study as well as additional parametric sensitivity studies are reported separately (Bamford et al.,
1988). Both deterministic and probabilistic fracture mechanics analyses are performed to determine the
parameters that are most effective in maintaining the desired level of reactor pressure vessel integrity.
Upper shelf toughness values in the range from 75 to 200 ksi·in.^0.5 are evaluated for both longitudinal
and circumferential flaws with mean surface RTNDT values of 190 and 250°F.
In the probabilistic evaluation of upper shelf toughness, initiation of a semielliptical surface flaw is
considered along with evaluation of the potential for crack arrest and subsequent reinitiation. Because
of this more detailed evaluation, updated correlations of more recent data are used relative to those
used in the original PTS evaluations (NRC, 1982). A revised flaw size distribution, through-wall fluence
attenuation based on a displacement per atom damage function, and a failure criterion of crack extension
to 75% instead of 100% of the vessel wall thickness are used. The probability of vessel failure, given
that the PTS event occurs, is shown in Fig. 22-5 primarily as a function of upper shelf toughness. The
[Figure 22-5: conditional vessel failure probability (log scale, 10⁻¹² to 10⁻³) vs. upper shelf toughness, with solid curves for longitudinal flaws and dashed curves for circumferential flaws at surface RTNDT values of 190°F and 250°F; circled points mark the NRC result for a longitudinal flaw.]
Figure 22-5. Effect of upper shelf toughness on vessel failure due to PTS. (Source: Bamford et al. [1988]. Reprinted
with permission from the American Society of Mechanical Engineers.)
probabilities for both longitudinal and circumferential flaws are shown. As a point of reference, previous
results (NRC, 1982) for a longitudinal flaw are also shown in Fig. 22-5. Results of this study also
indicate that the statistical characterization of the upper shelf toughness and the probabilistic character-
ization of the initial flaw size distribution have the greatest effect on the calculated PTS failure prob-
abilities of the vessel. Therefore, great care must be exercised in specifying these reactor vessel char-
acteristics as accurately as possible.
The VISA (vessel integrity simulation analysis) computer code was originally developed to provide a
more precise quantification of the probability of vessel failure for the NRC staff evaluation of pressurized
thermal shock (PTS) in reactor pressure vessels (NRC, 1982). Johnson and others (1986) describe
several new features incorporated into the second generation of this code, VISA-II. This code first
performs deterministic calculations of heat transfer, stress, and fracture mechanics analyses for a vessel
subjected to a user-specified temperature and pressure transient. Probabilistic Monte Carlo simulation
is then used to compare sampled values of irradiated vessel toughness relative to stored stress intensity
factors at a sampled initial flaw depth. This is done to see if the flaw will grow, arrest, and possibly
reinitiate and thus to estimate the probability of crack growth through the wall (failure) for a large
number of simulated vessels. The new features include the effects of vessel cladding on the heat-transfer,
stress, and fracture mechanics solutions, the probabilistic distribution of flaw length as well as flaw
depth, and the optional statistical correlation of the arrest toughness with the initiation toughness when
checking for crack arrest and reinitiation. The flaw simulation algorithm considers the entire vessel
instead of just one flaw in one weld, and several alternative correlations for predicting the shift in RTNDT
with fluence are also available. In addition to the evaluation of PTS-type transients, this new version
of the VISA code has also been used to estimate vessel failure probabilities during heat-up and cool-
down operations at limiting pressures as well as during limiting hydrostatic tests of boiling water reactor
vessels.
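The two-stage structure of VISA-II (a deterministic stress-intensity solution, then Monte Carlo sampling against it) can be sketched as follows. The stored stress-intensity values, toughness statistics, and correlation model are placeholders, and the time dimension of the real code is collapsed away.

```python
import random

random.seed(2)

# Stress-intensity factors at a/t = 0.05, 0.25, 0.50, 0.75, precomputed
# deterministically for one transient (placeholder values, ksi*sqrt(in)):
K_STORED = [40.0, 70.0, 90.0, 95.0]

def one_trial():
    """One simulated vessel: sample correlated initiation/arrest toughnesses,
    check initiation at the initial depth, then check arrest at each deeper
    station.  Failure = crack reaches 75% of the wall without arresting."""
    k_ic = random.gauss(60.0, 10.0)             # initiation toughness (assumed)
    k_ia = k_ic + abs(random.gauss(20.0, 5.0))  # arrest toughness, correlated with k_ic
    if K_STORED[0] <= k_ic:
        return False                            # flaw does not initiate
    for k in K_STORED[1:]:
        if k < k_ia:
            return False                        # crack arrests before 75% of wall
    return True

n = 50_000
p_fail = sum(one_trial() for _ in range(n)) / n
print("conditional failure probability:", p_fail)
```

The correlated draw of `k_ia` from `k_ic` mirrors the optional statistical correlation of arrest toughness with initiation toughness described above; sampling them independently tends to change the predicted arrest behavior noticeably.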
Using these and other new capabilities, sensitivity studies of eight different types of parameters have
been performed. The results of the sensitivity studies are summarized in Table 22-1. This table gives
the effects of the different parametric changes from the base case on the conditional probability of
vessel failure at a mean surface fluence of 6 × 10¹⁹ neutrons/cm². This artificially high surface fluence
value is used in the sensitivity studies to ensure that relatively high values of failure probability (greater
than 1 in 10,000) are used to compare results; meaningful comparisons may not be possible with very
low probabilities. For the postulated PTS transient (1978 Rancho Seco accident) used in the studies,
the base case conditional failure probability is approximately 0.01, given that the initial flaw exists and
that the postulated PTS transient occurs. As can be seen in Table 22-1, randomly buried flaws and in-
service inspection have the greatest potential for significantly reducing the predicted vessel failure
probability. Probabilities calculated for buried flaws randomly positioned through the thickness of the
vessel wall are a factor of 100 less than the base case probabilities for all flaws located at the inside
surface. Likewise, calculated failure probabilities can be reduced by factors from 10 to 100 if credit is
taken for an effective (90 to 99% detection reliability) in-service inspection and subsequent repair of
the initial flaw before it has a chance to grow in response to the postulated PTS transient.
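The quoted reduction factors follow directly from a screening model in which a flaw detected by in-service inspection is repaired before the transient, so only undetected flaws contribute; a sketch using the base-case conditional probability of approximately 0.01 from the text.

```python
# Screening model: failure probability with ISI credit = (base probability) * (1 - POD),
# where POD is the probability of detecting (and repairing) the initial flaw.
p_no_isi = 1.0e-2                         # base-case conditional failure probability
for pod in (0.90, 0.99):                  # the 90 to 99% detection reliabilities quoted above
    p_with_isi = p_no_isi * (1.0 - pod)
    print(f"POD = {pod:.0%}: {p_with_isi:.1e} (reduction factor {p_no_isi / p_with_isi:.0f})")
```

This reproduces the factor-of-10 to factor-of-100 reductions cited in the text, and makes clear that the benefit saturates at the nondetection probability: no inspection program can do better than 1 - POD.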
The significance of the assumed flaw size distribution and flaw density is verified in a sensitivity
study of vessel beltline failure due to PTS (Rosinski et al., 1990). As shown in Table 22-2 for a
simulated small-break loss of coolant accident (LOCA) transient with loss of natural circulation, the
vessel failure probability for only one initial flaw varies by more than three orders of magnitude for
the six different distributions studied. The Marshall distribution predicts the largest number of flaw
initiations and failures whereas the optimistic Dufresne and Lucia distribution predicts the least number.
For the OCTAVIA and Marshall distributions, three-quarters of the initiated flaws lead to failure whereas
only one-half of the initiated flaws from the Dufresne and Lucia distribution lead to failure.
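The sensitivity to the assumed distribution is easy to see for exponential-type depth densities (the Marshall distribution's depth marginal is commonly modeled this way); the mean depths below are placeholders chosen only to show how strongly the tail probability beyond a critical depth depends on the assumed distribution.

```python
import math

# Exponential flaw-depth density p(a) = (1/mu) * exp(-a/mu), so the chance that
# a flaw exceeds a critical depth is P(a > a_crit) = exp(-a_crit/mu).
a_crit = 0.75      # critical depth for initiation, in. (assumed)
for name, mu in [("pessimistic", 0.25), ("Marshall-like", 0.15), ("optimistic", 0.05)]:
    print(f"{name:13s} P(a > a_crit) = {math.exp(-a_crit / mu):.1e}")
```

Modest changes in the assumed mean depth move the tail probability by several orders of magnitude, which is why the six distributions in Table 22-2 spread the predicted failure probability so widely.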
The development of a less uncertain (more accurate) flaw distribution would require vessel inspection
"Conditional vessel failure probability at a mean inside surface ftuence of 6 X 1019 neutrons/cm2 •
Source: Adapted from Johnson et at (1986). Permission granted by the American Society of Mechanical
Engineers.
data on the reliability of flaw detection and the accuracy of locating and sizing the flaw. In this as-
sessment of current vessel nondestructive examination (NDE) capability, the size of the smallest reliably
detectable flaw and the system flaw sizing accuracy are evaluated against two criteria. These numerical
values are (1) the smallest flaw size in the sample reportedly used to develop the Marshall distribution
and (2) the critical crack size used in the probabilistic study of Table 22-2. On the basis of this eval-
uation, it is concluded that it appears feasible to use current state-of-the-art NDE techniques with results
from vessel in-service inspection (ISI) to develop "more representative" flaw size distributions and in
turn more realistic predictions of reactor vessel failure probability. Note that the study in Table 22-1
predicts a probability reduction factor of 20 for reliable vessel ISI.
In addition to PTS transients in the vessel beltline region, the effects of a number of other transients
Table 22-2 Effect of Flaw Size Distribution on Conditional Vessel Failure Probabilities for a Small-Break Loss
of Coolant Accident Transient.ᵃ
ᵃNumber of initiations and failures and the conditional vessel failure probability calculated using VISA-II (Johnson et al., 1986)
simulations; 100,000 trials are used in each simulation.
Source: Adapted from Rosinski et al. (1990). Permission granted by Sandia National Laboratory.
Applications in Pressure Vessels and Piping 543
on the failure probabilities in six other regions of a Swedish boiling water reactor (BWR) vessel are
calculated using PSM methods (Dillstrom et al., 1992). In the analysis, the transient loading is deter-
ministic while the fracture properties (initiation and arrest toughnesses) and initial (preservice) flaw size
are random variables. The effects of stress corrosion cracking, fatigue crack growth, preservice (initial)
flaw size distribution (OCTAVIA), and in-service defects are studied. As shown in Table 22-3, the type
of transient, toughness distribution (lognormal and Weibull), and failure mode (initiation, leakage, or
fracture) all strongly influence the calculated conditional failure probability in the core (beltline) region
of the vessel. The Weibull toughness distribution is shown to be the most conservative whereas the
lognormal distribution is least conservative. In this region, the cold overpressurization transient is the
worst transient. However, for both preservice and in-service defects (50% probability of one crack after
20 years), the total failure probability, considering the probability of crack existence and transient
occurrence, in the less embrittled regions of the vessel is highest for the reactor isolation transient
because of its higher primary stresses. Table 22-4 shows that the highest fracture probabilities are in
the core region and inside the feedwater nozzle. Although the predicted probabilities vary significantly
with these parameters and with degree of embrittlement (RTNDT), the relative ranking of the vessel
regions for ISI priority is fairly insensitive to the assumed conditions used in the analysis. However,
the choice of failure modes is crucial for ranking the order of priority of ISI. For example, ranking the
vessel regions according to crack initiation gives markedly different results than ranking according to
fracture (unstable crack growth without any arrest).
A number of other applications of PSM techniques to the reliability of nuclear reactor pressure vessels
are described in Section 6.1 of an American Society of Mechanical Engineers (ASME) survey paper
(Sundararajan, 1986) and in Section 4.2 of a trend review paper (Vitale, 1989). Two recent examples
of probabilistic fracture mechanics (PFM) evaluations of nuclear reactor vessel reliability for postulated
pressurized thermal shock (PTS) events have been published in the open literature. Dickson and
Simonen (1992) discuss how results of PFM analyses can be compared with acceptable failure probabil-
ities for PTS to estimate the residual capability of the vessel. Moreover, the potential benefits of plant-
specific mitigating actions are demonstrated. Examples of reducing either the PTS transient frequency
or the transient severity are provided and their effectiveness is discussed. Cheverton and Selby (1992)
Table 22-3 Conditional Vessel Failure Probabilities in the Core Region for Two Types of
Toughness Distribution.a
Table 22-4 Maximum Total Fracture Probabilities for Each Region of the Reactor Vessel.a
aTotal vessel fracture probability for preservice (initial) and in-service flaws and the worst toughness distribution
(Weibull).
Source: Adapted from Dillstrom et al. (1992). Permission granted by the American Society of Mechanical
Engineers.
provide a summary of the integrated probabilistic approach for the evaluation of the PTS issue as it
was applied to three plant-specific vessel analyses. The integrated approach includes the postulation of
PTS transients, frequency estimation, systems thermal-hydraulic analyses of pressure, fluid-film heat
transfer, and temperature as well as PFM analysis. A review of this work (Cheverton and Selby, 1992)
indicates that a number of areas exist where the PTS methodology can and should be updated. Areas
of particular concern are lower probability of flow stagnation, effects of flow and thermal plumes on
circumferential flaw stresses, the effects of shallow flaws, effects of plane-strain conditions and cladding
on the vessel fracture toughness, evidence of mixed-mode crack propagation due to ductile tearing,
incorporation of the latest radiation damage trend curves, and data extending the arrest toughness curve
beyond its previous limits. It is not clear whether consideration of the combined effects of all these
areas will increase or decrease the calculated vessel failure probabilities for PTS.
Stress corrosion cracking (SCC) in carbon steel vessels used for ammonia storage can be a concern for
the chemical process industries. Because of the potentially catastrophic consequences of an ammonia
release, a rational basis for evaluating the safety of ammonia vessels is needed. The evaluation must
consider the current condition of the vessel due to its design and fabrication, its operating history, and
the results from inspections. Any corrective actions required to keep the vessel at an allowable safety
level must also be identified and evaluated. A key uncertainty in the evaluation is the estimate of the
probable size of the largest crack that was not found by inspection, because this crack will remain in
the vessel following the inspection. Other parameters, such as material properties, residual stresses, and
the effects of vessel repair procedures also contain significant uncertainties that need to be considered.
Vessels without postweld heat treatment appear to be more prone to cracking than vessels with it,
especially if high-strength materials are involved. Other features that cause locally high stresses, such
as weld imperfections, also seem to promote SCC in the vessels.
Using this information with existing models for crack growth and a probabilistic approach to account
for uncertainties, the analytical scheme (Angelsen and Saugerud, 1991) shown in Fig. 22-6 can be used
for evaluation of vessel operation. In this scheme, the damage development models from a deterministic
computer program are linked to a general probabilistic program for the vessel analyses. The probabilistic
analysis of an example vessel uses a first-order reliability method (FORM) with inspection updating to
evaluate the uncertainties in input parameters and model constants. Key variables, such as pressure,
base metal and weld yield strength, toughness, and stress corrosion and fatigue crack growth rates, are
represented by normal, lognormal, Weibull, and beta distributions that best fit the available data or basic
knowledge of the damage process. The scatter and uncertainty in the key parameters are represented
by coefficients of variation in the range from 0.05 to 0.44. Transverse cracking of a circumferential
weld is the limiting case for the example application and a leak is the most probable failure mode.
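A minimal FORM sketch in the spirit of this analysis can be written for a two-variable strength-minus-load limit state g = R - S, with the resistance lognormal and the load normal. The means and coefficients of variation below are placeholders chosen inside the 0.05 to 0.44 COV range quoted in the text, not data from the ammonia vessel study.

```python
import numpy as np
from math import erfc, sqrt, log

# Placeholder statistics (illustrative, not the vessel study's data).
mu_R, cov_R = 100.0, 0.15           # resistance (lognormal)
mu_S, cov_S = 60.0, 0.25            # load effect (normal)

s_ln = sqrt(log(1.0 + cov_R**2))    # exact lognormal sigma from the COV
m_ln = log(mu_R) - 0.5 * s_ln**2    # lognormal mu matching the mean
sig_S = mu_S * cov_S

def g(u):
    """Limit state mapped to standard normal space; failure when g(u) < 0."""
    return np.exp(m_ln + s_ln * u[0]) - (mu_S + sig_S * u[1])

def grad_g(u):
    return np.array([s_ln * np.exp(m_ln + s_ln * u[0]), -sig_S])

# Hasofer-Lind/Rackwitz-Fiessler iteration to the design point.
u = np.zeros(2)
for _ in range(50):
    gr = grad_g(u)
    u = gr * (gr @ u - g(u)) / (gr @ gr)

beta = np.linalg.norm(u)            # reliability index
pf = 0.5 * erfc(beta / sqrt(2.0))   # Pf = Phi(-beta)
print(f"beta = {beta:.3f}, Pf = {pf:.3e}")
```

The same loop extends to more variables (pressure, yield strengths, toughness, growth rates) once each is transformed to standard normal space, which is how a FORM analysis handles the mixed normal, lognormal, Weibull, and beta inputs described above.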
The results of this evaluation are the predicted probabilities of failure of longitudinal and transverse
cracks as a function of operating time, as shown in Fig. 22-7. As can be seen, transverse cracks have
an order of magnitude higher failure probability than longitudinal cracks. This same probabilistic ap-
proach also allows inspection results to be used to update the knowledge about the actual condition of
the vessel and to ensure continued safe operation. For example, if a probability of failure of 2 × 10⁻⁵
is set as the limit, then Fig. 22-7 shows that only 5 years of operation would be allowable for the
limiting transverse crack and no inspection. If a 5-year inspection is performed and no crack is found,
then the safe operational interval (time to the next inspection) can be calculated. By using the maximum
size of a transverse crack, based on the detection probability for acoustic emission with external ultrasonic
Figure 22-6. Method for ammonia vessel probabilistic analysis. (Source: Angelsen and Saugerud [1991]. Reprinted
with permission from the American Society of Mechanical Engineers.)
inspection, then the time for the crack to grow and reach the limiting probability is calculated to be 9
years. The safe operating interval to the next inspection is then 4 years. Because of the higher probability
of detection for magnetic particle inspection, a smaller crack size would be used at 5 years and a longer
time of 7 years would be required for the crack to grow to the size for the limiting failure probability.
If the limiting probability of failure is reduced to 3 × 10⁻⁶, then the safe operating interval to the next
inspection is approximately one-third the previous values for the same inspection method and maximum
undetectable flaw size. If a crack is detected, its maximum possible size is calculated considering the
uncertainty in its sizing, which depends on the inspection method. It is then evaluated to determine: 1)
if and when the probability limit will be exceeded, 2) if a more accurate inspection is required, or 3)
if and when the crack should be repaired.
Another area of application is in aerospace pressure vessels; a high level of reliability is required
for these vessels also because of the severe consequences of failure. However, they are also subject to
severe weight restrictions. Probabilistic structural mechanics methods, such as probabilistic fracture
mechanics, are ideally suited to addressing conflicting constraints on minimum weight and a given
reliability goal. An example of such an application (Harris, 1992) concerns a group of cyclically pres-
surized cylindrical pressure vessels that are to be designed to satisfy a given reliability goal of 4.17 X
10- 3 in 40,000 cycles. The weight of the vessel, which is to be minimized, is controlled by the thickness
of the wall with the inside diameter fixed. To support the vessel internals, an internal ring is required
to be welded to the vessel. This weld is the primary area of concern because this is where the stresses
due to constraints imposed by the ring are highest and where defects are most likely to be present. The
defects are conservatively assumed to be semielliptical interior surface cracks of circumferential ori-
entation that give the highest range of stress intensity factors for crack growth.
In the probabilistic analysis, the initial crack depth is lognormally distributed with a median crack
depth of 2 mils for the 20 to 40-mil thickness range of interest in this example. The fatigue crack
Figure 22-7. Ammonia vessel failure probability with operating time: probability of failure (10⁻⁷ to 10⁻², log scale) versus time (0 to 20 years). (Source: Angelsen and Saugerud [1991]. Reprinted with permission from the American Society of Mechanical Engineers.)
growth rate of nickel alloy 718 in hydrogen is a lognormally distributed random variable set to give
an adequate characterization of the scatter in the intermediate crack growth rate regime. Monte Carlo
simulation is used to calculate the probability of failure, which is the existence of a through-wall crack
(leak). The example results calculated for the candidate vessel thicknesses are given in Fig. 22-8. From
this information, the minimum thickness that will satisfy the target reliability and provide the minimum
weight can be determined. As can be seen by this example application to an aerospace pressure vessel,
PSM methods can be used in the initial design as well as in the more typical application to evaluation
of operating concerns.
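The design calculation above can be sketched as a Monte Carlo loop over candidate wall thicknesses. Only the 2-mil median initial crack depth and the 40,000-cycle life come from the text; the Paris-law constants, stress range, geometry factor, and scatter assumed below are illustrative placeholders, not the Harris (1992) inputs.

```python
import numpy as np

# Illustrative Paris-law parameters (assumed, in consistent mil-based units).
M = 3.0          # exponent in da/dN = C * (dK)**M
DSIGMA = 10.0    # cyclic stress range
Y = 1.12         # surface-crack geometry factor, held constant for simplicity

def leak_probability(thickness_mils, n_cycles=40_000, n_trials=200_000):
    """Fraction of sampled vessels whose crack grows through-wall (leaks)."""
    rng = np.random.default_rng(42)   # fixed seed: same flaw sample per thickness
    a0 = rng.lognormal(np.log(2.0), 0.5, n_trials)   # initial depth, 2-mil median
    c = rng.lognormal(np.log(1e-9), 0.4, n_trials)   # growth coefficient scatter
    # Closed-form integration of da/dN = C*(Y*dS*sqrt(pi*a))**M for M > 2:
    # a**(1 - M/2) changes linearly with cycle count.
    slope = c * (Y * DSIGMA * np.sqrt(np.pi)) ** M * (1.0 - M / 2.0)
    base = a0 ** (1.0 - M / 2.0) + slope * n_cycles
    a_end = np.full(n_trials, np.inf)  # base <= 0: unbounded growth, certain leak
    ok = base > 0.0
    a_end[ok] = base[ok] ** (1.0 / (1.0 - M / 2.0))
    return float(np.mean(a_end >= thickness_mils))

for t in (20.0, 30.0, 40.0):          # candidate thicknesses, mils
    print(f"{t:.0f} mils: P(leak) = {leak_probability(t):.2e}")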
In another example, the fracture failure probability of a welded spherical tank with time is calculated
using an Edgeworth's series approximation (Tianjie, 1989). The approximated results are shown to
compare favorably to those from a Monte Carlo simulation. In the cited example, the initial size of the
surface crack and its probability of detection are exponential distributions, the mean and cyclic stress
ranges are normal, the fatigue crack growth coefficient is lognormal, and the stress intensity coefficients
and the critical crack tip opening displacement are Weibull distributions. The ease of calculating the
change in failure probability with time for a nonstandard detection probability distribution is also dem-
onstrated for this method of approximation.
Finally, PSM methods are used to address the potential for SCC attack and failure of a large pressure
vessel in a Chinese ammonia plant (Wang and Dai, 1990). Because of a high calculated risk of failure,
the reliability of a metal spray protective coating of the internal surface is studied. Statistically derived
distributions for the coating corrosion rate (exponential) and coating thickness (lognormal) are used in
the evaluation. The beneficial effects of nondestructive examinations (NDE) and on-line monitoring are
shown to be significant in this example vessel application.
Figure 22-8. Aerospace vessel failure probability for different wall thicknesses: probability of failure (log scale) versus wall thickness (0 to 40 mils). (Source: Harris [1992]. Reprinted with permission from the American Society of Mechanical Engineers.)
A potential safety issue in nuclear plant piping involves the performance of the emergency core cooling
system (ECCS) during a loss of coolant accident (LOCA) postulated to occur during shutdown opera-
tions. Because the alignment of ECCS equipment during normal power operation (mode 1) is changed
during shutdown operations, quicker action of the operators could be required to mitigate the conse-
quences of the LOCA. The shutdown modes of concern are mode 3 (hot standby), which is a subcritical
condition in which the reactor coolant system (RCS) temperature is greater than 350°F, and mode 4
(hot shutdown), which is also a subcritical condition in which the RCS temperature is between 200 and
350°F.
To partially address this potential issue, a probabilistic risk assessment approach is used to compare
the core damage frequencies of a large LOCA in modes 3 and 4 to those of a large LOCA in mode 1
(Gresham et al., 1989). In this approach, a large LOCA includes breaks in the RCS piping larger than
6 in. in diameter. The core damage frequency in each mode is composed of three components: (1) the
probability that a large pipe failure will occur at the applicable conditions, (2) the time spent in the
operating mode, and (3) the probability of core damage given that the large LOCA has occurred. The
rate of large pipe failure (a double-ended guillotine pipe break) for modes 1, 3, and 4 is
predicted using PSM methods.
Because the greatest potential for failure would likely be in the pressurizer surge line (high American
Society of Mechanical Engineers [ASME] Boiler and Pressure Vessel Code fatigue usage factor), this
line is used as the design-limiting piping system for PSM evaluation. To bound all possible pressurizer
surge line weld stresses, the maximum stress components are set equal to their corresponding ASME
Code limits. Moreover, the safe shutdown earthquake (SSE) is selected as the bounding loading con-
dition because it has relatively high stresses to cause failure and a fairly high number of cycles for
additional fatigue crack growth during the SSE.
For the calculation of the probabilities of break as a function of operating time for both normal
and shutdown conditions, the input and models of the PSM methodology are similar to those used
for previous evaluations of pressurized water reactor (PWR) primary loop piping (Lo et al., 1984).
First, the bounding geometry and pressures and maximum ASME Code limit loads are calculated.
Next, the pressure- and thermal expansion-induced stresses and limiting flow stress for the pipe break
criteria are calculated. The values of these parameters change because of the reduced temperature and
pressure for the shutdown conditions relative to the normal operating condition. Finally, the para-
meters defining the nondetection probability and frequency of in-service inspection (ISI), which are
the same for both the normal and shutdown analyses, are selected. For this analysis, a conservative
crack detection accuracy provides a high degree of confidence that a detected crack would be big
enough to require repair or replacement (per ASME Code Section XI), which is an implicit assumption
in the probabilistic analyses. The probabilities of pipe break are estimated by the PRAISE computer
code (Harris et al., 1986) developed for the Nuclear Regulatory Commission to calculate realistic
pipe break probabilities.
To account for the fact that all the stresses in the pressurizer surge line would not be expected to be
at their ASME Code limit and to allow for weld-to-weld and plant-to-plant variations, the nonpressure
stresses are assumed to vary uniformly between 0.5 and 1.0 of their maximum values. To account for
the probability of the initial crack being present and the design-limiting transient occurring, the same
postprocessing that was used in previous PWR primary loop evaluation (Lo et al., 1984) is also used
for this study. Specifically, the probability of crack existence is a Poisson distribution with a propor-
tionality factor of 0.0001/in.³ of weld volume and the probability of the design-limiting earthquake
occurring is also a Poisson probability distribution with an expected frequency of 0.0001 earthquakes
per year (this value is based on the median generic seismic hazard curve for plants east of the Rocky
Mountains). For a higher earthquake frequency, the reduction in break probability for shutdown con-
ditions, relative to normal operating conditions, is even greater.
Figure 22-9 graphically shows the normalized break probability ratios as a function of operating time
and shutdown mode. The ratio of total integrated break probability at normal mode 1 conditions to that
at shutdown conditions is a minimum of 18.1 for mode 3 and 28.7 for mode 4. When the shorter time
in the shutdown modes is taken into consideration, then the probability of pipe break occurring during
shutdown conditions is even further reduced.
The time interval (per year) that plants spend in modes 3 and 4 is determined from a survey of the
utilities supporting the studies. Probabilistic risk assessment is then utilized to incorporate the pipe
failure rates into an overall risk comparison. The core damage frequencies for mode 3 and 4 large-
break LOCAs are calculated, taking into account the reduced availability of safety systems and reliance
on operator action. Even for these additional considerations, the total risk of core damage resulting from
a large-break LOCA is less in modes 3 and 4 than in mode 1. This lower risk posed by a large LOCA
in these shutdown modes indicates that additional design or operational changes are not needed to
resolve this concern.
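The three-factor core damage frequency comparison can be illustrated with a short calculation. Only the break-probability reduction factors of 18.1 (mode 3) and 28.7 (mode 4) come from the study; the base break frequency, time fractions, and conditional core damage probabilities below are invented placeholders.

```python
# Three-factor core damage frequency for a large LOCA in each mode:
# (break frequency) x (fraction of year in mode) x (P(core damage | LOCA)).
# Only the 18.1 and 28.7 ratios are from the text; all else is hypothetical.
modes = {
    "mode 1": (1.0e-8, 0.90, 0.01),
    "mode 3": (1.0e-8 / 18.1, 0.03, 0.05),
    "mode 4": (1.0e-8 / 28.7, 0.02, 0.10),
}
cdf = {m: freq * frac * p_cd for m, (freq, frac, p_cd) in modes.items()}
for mode, value in cdf.items():
    print(f"{mode}: core damage frequency ~ {value:.1e} per year")
```

Even with larger conditional core damage probabilities assigned to the shutdown modes, the reduced break frequency and short residence time leave their core damage frequencies well below that of mode 1, which is the study's conclusion.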
Another example PSM application is for evaluation of stress corrosion cracking (SCC), which has
been observed in recirculation piping of a boiling water reactor (BWR). The cause of the SCC is a
complex interaction of several variables related to stress levels (including residual stress), thermal history
of the material (sensitization), and adverse environment water chemistry. Various remedial measures to
decrease the frequency of cracking in BWR recirculation piping have been suggested. These measures
include changing material, reducing residual stresses, or altering coolant chemistry. To evaluate the
combined effects of the controlling variables, the proposed remedial measures, and the considerable
scatter in both field and laboratory observations, the probabilistic models for stress corrosion crack
initiation and growth in the PRAISE code (Harris et al., 1986) are used.
Figure 22-10 shows some typical results from this type of PSM evaluation. In this case, the results
Figure 22-9. Comparison of pipe break probabilities for various modes of operation: total break probability ratio versus time at full power (0 to 50 years), with curves for normal operating, mode 3, and mode 4 conditions. (Source: Gresham et al. [1989]. Copyright 1989 by the American Nuclear Society, La Grange Park, Illinois. Reprinted with permission.)
are the predicted effects of water chemistry change on piping reliability (Harris and Balkey, 1993).
Cumulative failure probability as a function of time is provided for a girth weld in a 4-in. diameter line
(wall thickness of 0.34-in.) of 304 stainless steel subjected to random residual stresses for small lines.
The evaluation compares the effects of nominal oxygen level conditions of 0.2 ppm (parts per million)
for steady operating conditions and 8 ppm for plant start-up with those for other proposed changes to
the coolant oxygen content after 20 years of operation. As can be seen, changes during plant start-up
have a minimal effect, whereas changes during steady state have a large effect, with an increasing
benefit with decreasing oxygen level.
The PSM methods used to generate the results of Fig. 22-10 can also be used to evaluate the effects
of in-service inspection and leak detection and are readily adaptable to the analysis of the effects of
age-dependent degradation of material properties. Results from such analyses would be most useful for
decisions regarding extended operation of BWR recirculation piping and other BWR piping systems
subject to SCC.
A number of other example applications of PSM methods to nuclear plant piping are described in
Section 6.2 of an ASME survey paper (Sundararajan, 1986) and in the "piping" and "inspection"
sections of a magazine article published the same year (Balkey et al., 1986). Since that time a number
of additional works on the subject have been published. Some representative examples include the
following.
• The PRAISE code cited previously is compared with and verified by the PARIS probabilistic code (Bruckner-
Foit et al., 1989) developed independently in Germany for nuclear plant piping.
• An interactive personal computer code implementing the PRAISE algorithms is also used to perform parametric studies to optimize the safety of the Oak Ridge National Laboratory advanced neutron source reactor (Fullwood and Hall, 1990).
Figure 22-10. Effect of lowering oxygen content at 20 years on the failure probability of BWR recirculation piping: cumulative failure probability (0.00 to 0.20) versus time (0 to 40 years). (Source: Harris and Balkey [1993]. Reprinted with permission from the American Society of Mechanical Engineers.)
• The accuracy and efficiency of several different methods of structural reliability analysis are compared for
Japanese nuclear plant piping subject to fatigue crack growth, including enhancement due to corrosion effects
(Schueller et al., 1991).
• The probabilities of leak and break are calculated for a carbon steel piping weld in a typical BWR main
steam piping line (Fujioka and Kashima, 1992). Sensitivity studies are also performed as a function of time
for 12 parameters used to specify the initial crack size, fatigue crack growth, and failure criterion.
• A selected critical weld in the auxiliary feedwater system of a typical nuclear plant is evaluated to determine
its failure probability with time due to thermal cycling. The plant probabilistic risk assessment is also modified
to demonstrate the effects of this passive component weld failure on the overall core damage risk (Phillips
et al., 1992).
Because of high-temperature piping failures at some fossil fuel-fired electric generating stations, a
number of utilities initiated piping integrity evaluation programs. These programs for longer-term con-
tinued operation require the use of long-term evaluations, including accelerated creep tests, creep crack
growth analyses, time-dependent fracture mechanics analysis, and time-based PSM analysis. An example
method (Rao et al., 1987a) for high-temperature piping PSM evaluation is shown in Fig. 22-11.
In the material testing to assess creep effects, including accelerated isothermal and isostress creep
rupture tests, the specimen creep displacements are continuously measured until rupture occurs. The
strain versus time curve and steady state creep rate are developed for each test by fitting a least-squares
line through the data. Creep crack growth tests are conducted using compact-type specimens machined
from seam welds to obtain crack length and load-line deflection as a function of time. The data are
processed numerically to obtain the growth and deflection rates as a function of time and creep crack
driving force (Ct). Tests of creep behavior of the types of defects found in fossil plant piping suggest
that the crack growth follows a relationship of the form da/dt = b(Ct)^q, where da/dt is the crack growth
rate and b and q are material constants.
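Once b and q are fitted, the growth law can be integrated numerically over the service period. The sketch below uses simple forward-Euler time stepping; the constants b and q, the initial crack length, and the linear Ct(a) model are illustrative placeholders, not the fitted compact-specimen data.

```python
# Forward-Euler integration of the creep crack growth law da/dt = b * Ct(a)**q.
def grow_crack(a0, b, q, ct_of_a, years, dt=0.01):
    """Crack length after `years` of service, stepped at `dt` years."""
    a, t = a0, 0.0
    while t < years:
        a += b * ct_of_a(a) ** q * dt   # da = b * Ct**q * dt
        t += dt
    return a

def ct(a):
    # Hypothetical driving force rising linearly with crack length.
    return 5.0 * a

# Placeholder material constants and a 0.1-in. initial part-through crack.
a_final = grow_crack(a0=0.1, b=2.0e-3, q=0.8, ct_of_a=ct, years=10.0)
print(f"crack length after 10 years: {a_final:.3f} in.")
```

Repeating this integration with sampled values of b, q, and the initial crack size is what turns the deterministic growth calculation into the probabilistic evaluation described next.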
Using these aged weld metal creep properties, expected crack growth is first calculated under normal
plant service and accelerated conditions for service times of up to 10 years. The analysis considers both
part-through cracks as well as through-wall cracks as initial conditions. The results of the deterministic
fracture mechanics analysis indicate that the part-wall cracks have the potential to become through-wall
cracks and leak, but that through-wall axial cracks will remain stable and not grow axially.
To address the sensitivity of some of the parameters and uncertainties in the fracture mechanics
evaluation, probabilistic evaluations are performed to quantify their effects. One of the areas of interest
is the criteria and parameters used to calculate the onset of ductile tearing. Given crack initiation, crack
extension by ductile tearing is especially important as it could lead to catastrophic failure. The statistical
variations of the material toughness, the crack depth, and stress intensity (crack driving force) are also
assessed. Probability of crack initiation as a function of time is shown in Fig. 22-12.
The PSM methods and tools in this example application are being used to provide additional technical
detail to the evaluation of fossil fuel plant piping integrity. More importantly, results like those of Fig.
22-12 are being used to provide additional bases for decisions regularly being required of fossil fuel
plant utilities. Probabilistic structural mechanics results can be applied to decisions regarding repair,
replacement, monitoring, or extended operation of piping.
In another example application, the failure probability of tubes in the catalytic steam reformers of
an actual process plant is calculated using probabilistic structural mechanics (PSM) methods (Angelsen
et al., 1992). The tubes, made of a creep-resistant HP-45 niobium-modified alloy, are subjected to high
thermal stresses that are cycled during various start-stop procedures. The failure mechanism is creep
damage resulting in stress rupture cracking at fairly low strains (1 to 2%). The levels of tube creep
damage are categorized as A for little or no damage, B for intermediate damage, and C for unacceptably
severe damage.
The reformer tube calculations involve both deterministic and probabilistic methods, as shown in
Fig. 22-13. A strain-based model linked to a nonlinear finite element program is used to predict the
thermal stresses, creep damage, and mean time to failure. The probabilistic analysis then calculates the
effects of data scatter and other uncertainties, such as those on mean temperature, pressure, and the
temperature difference across the tube wall. In addition, the creep damage model parameters and as-
sociated material properties are treated as random variables with appropriate statistical distributions.
Finally, the results from an in-service inspection are used to correct the rate of damage and improve
the prediction of failure probability as a function of operating time.
In this application, a unique approach is used to generate quantitative distributions of damage level
Figure 22-11. Methodology for evaluation of high-temperature fossil plant piping: Step 1, creep deformation behavior; Step 2, cracked component, Ct = f(σ∞, A, n, a, t); Step 3, critical crack length (ac); Step 4, crack size (a) versus service time, with probability of failure and inspection interval. (Source: Rao et al. [1987a]. Reprinted with permission from the American Society of Mechanical Engineers.)
Figure 22-12. Failure probability of high-temperature fossil plant piping: probability of crack initiation versus time, with a curve for twice the normal operating stress level. (Source: Rao et al. [1987a]. Reprinted with permission from the American Society of Mechanical Engineers.)
for A-, B-, and C-type tubes from the qualitative ratings recorded during previous in-service inspections.
A level of 0 is used for no damage, and 1 is used for failure. The resulting probability of failure
prediction as a function of operating time and inspection is shown in Fig. 22-14. As can be seen, this
type of information can be used directly to determine when reinspection would be required for a given
level of tube reliability. In Fig. 22-14, this reliability requirement is specified as the goal probability of
Figure 22-13. Methods for analysis of steam reformer tubes: a deterministic creep model gives calculated damage, inspection findings give measured damage, and both feed the probabilistic analysis. (Source: Angelsen et al. [1992]. Reprinted with permission from the American Society of Mechanical Engineers.)
failure (POF) at 10 years and is shown by the horizontal line labeled "POF = year 10." The probabilistic
model also predicts the number of tubes with unacceptable (C-level) damage as a function of time.
When compared with the actual number of C-tubes observed in previous inspections, the agreement
with the predictions is excellent and verifies the accuracy of the probabilistic creep damage model. In
this example application, the results calculated using PSM methods are used to make decisions regarding
continued reliable operation and planning of the reformer tube inspection and replacement activities.
These same probabilistic analysis methods can also be used to identify which uncertainties contribute
most to the failure probability. This information can then be used to prioritize actions that can be taken
to improve the overall failure probability prediction process.
Probabilistic structural mechanics methods similar to those used for high-temperature creep of fossil
fuel plant piping are also being applied to probabilistic analysis of ligament cracks in a boiler superheater
outlet header (Rao et ai., 1987b). Here cracks found during an in-service inspection are evaluated for
expected creep crack extension and the probability of header fracture with operating time. This type of
evaluation is used to provide a more quantitative and cost-effective basis for decisions as to when the
headers should be inspected, repaired, or replaced. For example, some of the results indicate 1 year of
additional operation is acceptable for the size of cracks found but longer-term operation requires ad-
ditional inspections and subsequent evaluation to maintain the required level of reliability.
In another example application, the failure probability of high-temperature (973 K) piping in pure
bending is calculated directly by Monte Carlo simulation and approximated using a first-order reliability
method (FORM) (Riesch-Oppermann and Bruckner-Foit, 1991). In this case, the difference between the
two PSM methods is less than 20%, which is much less than the uncertainty in the input parameters
used to calculate the change in failure probability with operating time. The FORM method is also shown
to be useful in identifying the most important parameters in this type of probability calculation.
Finally, a probabilistic approach to fracture mechanics analysis of both axial and circumferential
Figure 22-14. Effect of inspection updating in years 9 and 10 on the probability of failure (POF) of steam reformer
tubes; curves show the POF over 20 years of operation with no inspection, with inspection in year 9, and with
inspection in year 10. (Source: Angelsen et al. [1992]. Reprinted with permission from the American Society of
Mechanical Engineers.)
Applications in Pressure Vessels and Piping 555
welds in a high-pressure water pipeline has been recently described (Wannenburg et al., 1992). In this
study, the statistical distributions of defect size and occurrence probability and material properties are
used to calculate the failure probability, considering the actual results of nondestructive examination.
By considering the expected cost of piping failure and the sensitivity of the calculated failure risk,
several fracture control options are analyzed to determine which option is the most cost-effective one.
7. CONCLUDING REMARKS
As shown by the example vessel and piping applications in the previous sections, the primary objective
of PSM methods is to address and quantify the effects of uncertainties and to provide additional risk-
based information for effective decision making. Technically defensible and realistic estimates of failure
probabilities are vital for making cost-effective decisions regarding doing nothing, repairing, replacing,
inspecting, or implementing other mitigative options for the components of concern. An additional
perspective on the benefits of these methods is that typically the decision maker has little knowledge
of the degree of risk inherent in a deterministic analysis, whereas the risk is explicitly given by the
results of the probabilistic structural mechanics analysis.
Another benefit of PSM methods over the more conventional deterministic methods for pressure
vessel and piping performance analysis is that the PSM methods systematically tie together all aspects
of an evaluation, including an assessment of the effects of uncertainties. The engineering, safety, and
economic insights gained from the logic and thought processes involved in the PSM methodology have
proved to be invaluable in the development of solutions to some complex issues involving pressure
vessels and piping. Many of these solutions are already being implemented in plant operating and
licensing requirements in areas of concern, such as components subject to unacceptable levels of aging
degradation. This is happening because the technical viability of the solution can be quantitatively
demonstrated and, in most cases, the solution also results in reduced overall costs.
REFERENCES
ANGELSEN, S. O., and O. T. SANGERUD (1991). A probabilistic approach to ammonia pressure vessel integrity
analysis. In: Fatigue, Fracture and Risk 1991. New York: American Society of Mechanical Engineers, pp.
59-66.
ANGELSEN, S. O., J. D. WILLIAMS, and D. G. DAMIN (1992). A probabilistic remaining lifetime analysis of catalytic
reformer tubes: Methods and case study. In: Fatigue, Fracture and Risk 1992. New York: American Society
of Mechanical Engineers, pp. 119-126.
BALKEY, K. R., and E. L. FURCHI (1984). Probabilistic fracture mechanics sensitivity study for plant specific
evaluations of reactor vessel pressurized thermal shock. In: Advances in Probabilistic Fracture Mechanics.
New York: American Society of Mechanical Engineers, pp. 71-86.
BALKEY, K. R., T. A. MEYER, and F. J. WITT (1986). Probabilistic structural mechanics: chances are .... Me-
chanical Engineering 108:56-63.
BAMFORD, W. H., C. C. HEINECKE, and K. R. BALKEY (1988). Effects of low upper shelf fracture toughness on
reactor vessel integrity during pressurized thermal shock events. In: Life Extension and Assessment: Nuclear
and Fossil Power Plant Components. New York: American Society of Mechanical Engineers, pp. 43-50.
BRUCKNER-FOIT, A., TH. SCHMIDT, and J. THEODOROPOULOS (1989). A comparison of the PRAISE code and the
PARIS code for the evaluation of the failure probability of crack-containing components. Nuclear Engi-
neering and Design 110:395-411.
CHEVERTON, R. D., and D. L. SELBY (1992). A probabilistic approach to the evaluation of the PTS issue. Journal
of Pressure Vessel Technology 114:396-404.
DICKSON, T. L., and F. A. SIMONEN (1992). The application of probabilistic fracture analysis to residual life
evaluation of embrittled reactor vessels. In: Reliability Engineering-1992. New York: American Society of
Mechanical Engineers.
DILLSTROM, P., F. NILSSON, B. BRICKSTAD, and M. BERGMAN (1992). Application of probabilistic fracture me-
chanics to allocation of NDT for nuclear pressure vessels: A comparison between initiation and fracture
probabilities. In: Fatigue, Fracture and Risk 1992. New York: American Society of Mechanical Engineers,
pp. 127-132.
FUJIOKA, T., and K. KASHIMA (1992). A sensitivity study in probabilistic fracture mechanics analysis of light water
reactor carbon steel pipe. International Journal of Pressure Vessels and Piping 52:403-416.
FULLWOOD, R. R., and R. E. HALL (1990). PRAISDPD: An aging pipe reliability analysis PC code. Reliability
Engineering and System Safety 30:427-446.
GRESHAM, J. A., B. G. CASSIDY, B. A. BISHOP, and B. S. MONTY (1989). Core damage risk associated with loss
of coolant accidents during shutdown operations. In: Proceedings of the International Topical Meeting on
Probability, Reliability and Safety Assessment PSA '89. La Grange Park, Illinois: American Nuclear Society,
pp. 787-795.
HARRIS, D. O. (1992). Probabilistic fracture mechanics with application to inspection planning and design. In:
Reliability Engineering-1992. New York: American Society of Mechanical Engineers.
HARRIS, D.O., and K. R. BALKEY (1993). Probabilistic considerations in the life extension and aging of pressure
vessels and piping. In: Technology for the '90s. New York: American Society of Mechanical Engineers, pp.
245-269.
HARRIS, D.O., D. D. DEDHIA, and E. D. EASON (1986). Probabilistic Analysis of Initiation and Early Growth of
Stress Corrosion Cracks in BWR Piping. Paper 86-PVP-11. New York: American Society of Mechanical
Engineers.
JOHNSON, K. I., F. A. SIMONEN, A. M. LIEBETRAU, and E. P. SIMONEN (1986). New Techniques for Modeling the
Reliability of Reactor Pressure Vessels. Paper 86-PVP-10. New York: American Society of Mechanical
Engineers.
LO, T., H. H. WOO, G. S. HOLMAN, and C. K. CHOU (1984). Failure probability of PWR reactor coolant loop
piping. In: Seismic Events Probabilistic Risk Assessments. New York: American Society of Mechanical
Engineers, pp. 11-25.
MOYLAN, M. F., K. R. BALKEY, C. B. BOND, and V. A. PERONE (1987). Reactor Vessel Life Extension. Paper 87-
PVP-15. New York: American Society of Mechanical Engineers.
NRC (Nuclear Regulatory Commission) (1982). NRC Staff Evaluation of Pressurized Thermal Shock. Policy Issue
SECY-82-465. Washington, D.C.: Nuclear Regulatory Commission.
NRC (Nuclear Regulatory Commission) (1987). Format and Content of Plant-Specific Pressurized Thermal Shock
Safety Analysis Reports for Pressurized Water Reactors. Regulatory Guide 1.154. Washington, D.C.: Nuclear
Regulatory Commission.
PHILLIPS, J. H., T. W. BOLANDER, M. L. MAGLEBY, and V. A. GEIDL (1992). Investigation of the risk significance
of passive components using PRA techniques. In: Fatigue, Fracture and Risk 1992. New York: American
Society of Mechanical Engineers, pp. 91-100.
RAO, G. V., T. A. MEYER, and D. J. COLBURN (1987a). Methodologies to Address Integrity Concerns Resulting
from High Energy Piping Evaluations. Paper 87-PVP-15. New York: American Society of Mechanical
Engineers.
RAO, G. V., F. J. WITT, and T. A. MEYER (1987b). Integrity and Remaining Life Assessment of Boiler Headers
Containing Ligament Cracks. Paper 87-PVP-14. New York: American Society of Mechanical Engineers.
RIESCH-OPPERMANN, H., and A. BRUCKNER-FOIT (1991). Probabilistic fracture mechanics applied to high temper-
ature reliability. Nuclear Engineering and Design 128:193-200.
ROSINSKI, S. T., E. L. KENNEDY, I. R. FOULDS, and K. M. KINSMAN (1990). PWR vessel flaw distribution devel-
opment: an overview of feasibility. In: Damage Assessment, Reliability, and Life Prediction of Power Plant
Components. New York: American Society of Mechanical Engineers, pp. 73-78.
SCHUELLER, G. I., A. TSURUI, and J. NIENSTEDT (1991). On the failure probability of pipings. Nuclear Engineering
and Design 128:201-206.
SUNDARARAJAN, C. (1986). Probabilistic assessment of pressure vessel and piping reliability. Journal of Pressure
Vessel Technology 108:1-13.
TIANJIE, C. (1989). Application of Edgeworth's series to the assessment of the fracture failure probability of a
spherical tank. International Journal of Pressure Vessels and Piping 36:359-366.
TURNER, R. L., K. R. BALKEY, and J. H. PHILLIPS (1984). A plant specific risk scoping study of reactor vessel
pressurized thermal shock. In: Advances in Probabilistic Fracture Mechanics. New York: American Society
of Mechanical Engineers, pp. 87-104.
VITALE, E. (1989). Trends in the evaluation of the structural integrity of RPVs. Nuclear Engineering and Design
116:73-100.
WANG, M. O. and S. H. DAI (1990). A study of a method for evaluating reliability gain and wane for pressure
vessels. In: Damage Assessment, Reliability, and Life Prediction of Power Plant Components. New York:
American Society of Mechanical Engineers, pp. 79-84.
WANNENBURG, J., G. C. KLINTWORTH, and A. D. ROTH (1992). The use of probability theory in fracture me-
chanics-a case study. International Journal of Pressure Vessels and Piping 50:255-272.
23
APPLICATIONS IN AIRCRAFT
STRUCTURES
1. INTRODUCTION
Statistical methods can be used in the design, certification, and maintenance of aircraft structures. These
activities are currently conducted according to deterministic specifications provided by the governing
agencies. In the case of military aircraft this is the U.S. Air Force, Navy, or Army. For commercial
aircraft the Federal Aviation Administration is the governing authority.
Aircraft structures have traditionally used a fail-safe or damage-tolerant design approach that uses
specific factors of safety, conservative loads, and material allowables. These durability and damage-
tolerant design approaches are detailed in a number of military standards and specifications (U.S. Air
Force [USAF], 1974, 1975a,b,c; Brussat et al., 1987; Gallagher et al., 1984).
The need for more efficient, higher performance structures and the desire to have better analytical
tools for predicting structural performance are encouraging more applications of probabilistic methods
to these design schemes. Palmberg et al. (1987) give an overview of the U.S. Air Force damage-tolerant
design approach and discuss the inherent variabilities in loads, initial quality, the crack growth process,
inspection results, and material behavior. Conservative loads, material allowables, and a 1.5 safety factor
were typically used to allow for the variability. Although this approach has served the industry fairly
well in the past, new analytical techniques based on probabilistic methods can provide more nearly optimal
results.
Improved methods of tracking loads during flight have provided a better understanding of the vari-
ability in loads. Flight load spectra can now be generated with considerable accuracy, even including
very damaging effects such as buffet loads (Perez et al., 1990). Material properties are often not available
with a sufficient number of test repetitions to provide statistical relevance, and remain one of the
challenges to greater application of statistical methods in aircraft design. Some characterization of 7050
aluminum initial fatigue quality (Burns et al., 1991) shows that overall material quality has improved
over the years as better quality control and manufacturing processes have been developed. This work
is beneficial and underscores the need to better characterize currently used material systems and fastener
configurations. Work by Roth (1990) in characterizing the material for engine rotor disks indicates that
care must be taken when preparing samples and evaluating test data to ensure that the true service
behavior of the material is determined.
Hooke (1987) gives a thorough review of a number of models that have been proposed for crack
growth, primarily from fastener holes. Extensive comparisons with test data have been made. One noted
concern when selecting such a model is the accuracy of the lower tail of the propagation life distribution
when dealing with the damage tolerance requirement, which considers the propagation of larger cracks.
Durability requirements focus on small crack sizes, and in that case prediction of the entire crack
population becomes important.
Durability analysis has typically relied on a damage accumulation model, in particular Miner's rule,
in addition to an equivalent strain model to predict crack initiation. The difficulty here is the desire to
predict microscopic events from a limited number of macroscopic parameters. The equivalent strain
equation has been the focus for much research effort in an attempt to find some combination of engi-
neering variables that will give accurate crack initiation lives. Some of these relationships work rea-
sonably well for specific situations, but none of them gives consistent performance.
Provan (1981) developed a theoretical model of fatigue damage using a linear pure-birth Markov
stochastic process for mode I fatigue crack propagation. Although successful in predicting the scatter
seen in fatigue data, it does not fully answer the question of predicting the microscopic behavior from
engineering variables.
Another approach to durability analysis is that of an equivalent initial flaw size (EIFS). This method
projects a distribution of inherent material flaws from test data, which is then used to determine the
distribution of times to the development of a specified critical flaw size (Manning et al., 1987). This
method treats the crack initiation phase as fundamentally the same as the crack growth phase, except
that crack sizes are much smaller.
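A minimal sketch of the EIFS idea follows; all distribution and growth parameters here are hypothetical, not values from Manning et al. (1987). An assumed lognormal initial flaw population is grown with a simple proportional growth law, and the times for each flaw to reach a critical size form the durability life distribution:

```python
import math
import random

# Hedged EIFS sketch: sample an ASSUMED lognormal distribution of
# equivalent initial flaw sizes, grow each flaw with a simple
# da/dt = Q * a model (so a(t) = a0 * exp(Q * t)), and collect the
# time for each flaw to reach a specified critical size.
random.seed(2)
MU_LN_A0, SD_LN_A0 = math.log(0.001), 0.5  # ln of initial size (in.), assumed
Q = 5e-4                                   # growth rate per flight hour, assumed
A_CRIT = 0.05                              # critical crack size (in.), assumed

def time_to_critical(a0):
    # a(t) = a0 * exp(Q t)  =>  t = ln(a_crit / a0) / Q
    return math.log(A_CRIT / a0) / Q

times = sorted(time_to_critical(random.lognormvariate(MU_LN_A0, SD_LN_A0))
               for _ in range(10_000))
median = times[len(times) // 2]
first_pct = times[len(times) // 100]  # early tail drives durability decisions
print(f"median life: {median:.0f} h, 1st percentile: {first_pct:.0f} h")
```

The point of the construction is visible in the output: the short-life tail of the distribution, driven by the largest initial flaws, is what governs durability, exactly as the text notes for the lower tail of the life distribution.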
General methodologies for incorporating statistical methods into the design and analysis process for
general aerospace structures have been presented by Walker (1989) and a discussion of an approach for
treatment of aircraft engines has been presented by Roth (1991). These are useful in establishing the
methodology and approach needed to implement probabilistic approaches to design, certification, and
maintenance. Although many of the methods and procedures of probabilistic structural mechanics dis-
cussed in Chapters 2 to 18 of this handbook are applicable to aircraft structures, the uniqueness of the
load spectrum and usage differences must be considered.
This chapter focuses on static analysis of military aircraft for design and the use of inspection data
for fleet management. Examples are given to show representative calculations and typical results. The
procedures discussed here may also be adapted for commercial aircraft.
The design applications are discussed in Section 3 to illustrate the use of the stress-strength inter-
ference method and show how this compares to a typical margin of safety approach. The sources of
variability in the problem are discussed including loads, geometry, material properties, and internal
loads prediction.
The certification of structure through full-scale testing is addressed in Section 4, particularly as it
relates to composite structure. The use of field inspection data and crack growth behavior to develop
statistical approaches to fleet management is discussed in Section 5. The emphasis of the chapter is on
current applications, and present research efforts that may result in future applications are not discussed.
2.1. Notations
2.2. Abbreviations
3. DESIGN APPLICATIONS
In a typical aircraft analysis, the margin of safety must be computed for all locations where the com-
bination of loading, materials, and structural design features produces either a strength, durability (crack
initiation), or damage tolerance (crack growth) critical condition. Residual strength considerations
(strength after a specified level of damage) are also of interest, particularly in the commercial aircraft
industry.
These detailed calculations are conducted using internal loads from finite element models. These
models use deterministic external load distributions that represent the various critical flight conditions
in the mission profile. The sources of variability in this process are several. The external loads for a
particular flight condition have a degree of scatter as evidenced in typical flight test data. The material
properties also have a degree of scatter. Dimensional tolerances on part drawings lead to some variability
in the size of the parts. The finite element methods used to predict the internal loads have some degree
of uncertainty associated with their results. All of these are discussed individually in the following
sections and we show how the reliability, rather than a factor of safety, can be calculated.
Figure 23-1. Wing root bending moment versus normal acceleration (abscissa: NZCOR).
For the standard normal variate shown in Fig. 23-2, the 90% confidence range lies between the values
−z_α and +z_α that bound 90% of the area under the curve. This corresponds to a value of α equal to 0.05.
The value of z_α is found from tables of the standard normal variate to be 1.645.
Any normally distributed random variable x can be expressed in a normalized form z by use of the
transformation

z = (x − μ)/σ    (23-1)

where μ and σ are the mean and standard deviation of x, respectively. The upper limit of the confidence
band is then
The mean value of the wing root bending moment at the 9.0g normal acceleration level is found from
Fig. 23-1 to be 6.421 × 10⁶ in.-lb. The upper confidence limit is 6.547 × 10⁶ in.-lb. The standard
deviation found from these values and Eq. (23-3) is 76,596 in.-lb. Therefore the coefficient of variation
in this case is found to be 0.012.
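The arithmetic above can be verified directly: the standard deviation is backed out of the upper confidence limit through the normalizing transformation of Eq. (23-1), with z_0.05 = 1.645.

```python
# Reproducing the worked numbers in the text: sigma is recovered from
# the 95% upper confidence limit via z = (x - mu) / sigma.
mean_bm = 6.421e6        # mean wing root bending moment at 9.0g, in.-lb
upper_limit = 6.547e6    # upper confidence limit, in.-lb
z = 1.645                # standard normal variate at the 95% level

sigma = (upper_limit - mean_bm) / z
cov = sigma / mean_bm
print(f"sigma = {sigma:,.0f} in.-lb, COV = {cov:.3f}")
# -> sigma = 76,596 in.-lb, COV = 0.012
```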
where A and B represent the A- and B-basis values of the property, respectively.
[Figure 23-2. Standard normal probability density f(z), with the 90% confidence range between −z_α and +z_α.]
In some cases the parts of interest are manufactured from forgings. Forging properties are listed in
MIL-HDBK-5F as S-basis properties. S-basis properties are minimum allowable values that carry no
statistical significance. This results from the fact that forging materials undergo secondary forge
operations that affect the properties, and these secondary operations can be conducted by any number
of companies. The properties will therefore vary somewhat, depending on which manufacturer performs
the operation and the type of forging process used.
In the aerospace industry the material supplier must provide certification sheets giving the properties
of the material being delivered. These certification values can be used to construct the statistical
characteristics of the actual parts of interest.
(23-6)
Table 23-1. Factor used to estimate the standard deviation from the drawing tolerance range

    Number of parts manufactured      Factor
                 5                       2
                10                       3
                25                       4
               100                       5
               700                       6
Load: The mean value of the fastener load P = 5874 lb. (ultimate) or 3916 lb. (limit). Using a
coefficient of variation of 0.04 for the load, the standard deviation of the load is found as Sp = 157 lb.
Material properties: The part is manufactured from 7050-T7452 aluminum, which is a forging
material. The properties are representative of the completed parts and must be found from a statistical
analysis of material property certification sheets, because the only available data in MIL-HDBK-5E are
S-basis properties (minimum expected values).
The part experiences an elevated temperature condition, which leads to a reduction factor on the
ultimate stress. Also, the bearing ultimate value is approximately 1.8 times the tensile ultimate value
for this material. The mean value of the tensile ultimate strength is Ftu = 76,070 psi; the temperature
correction = 0.97. The standard deviation Stu = 1490 psi. The mean value of the ultimate bearing strength
is Fbru = 1.8(Ftu)(0.97) = 137,093 psi; the standard deviation Sbru = 1.8(Stu) = 2682 psi.
Geometry: The standard deviations of the dimensional values are approximated using the range
given by the tolerances on the drawing. In this case we will assume that we plan to manufacture 500
of these aircraft parts, thus the factor 5 will be used to estimate the standard deviation (see Table
23-1).
Figure 23-3. Aircraft wing structure, showing closure rib (top figure - closure rib in position at A; bottom figure
- planform of closure rib).
Thickness T = 0.200 in., with the tolerance given on the drawing.
where γ_LP is the coefficient of variation due to load prediction (as discussed in Section 3.4); a value of
0.05 is used in this example as typical of finite element load prediction methods (Whitehead, 1986).
where Fbru is the mean value of the ultimate bearing stress and Φ(·) is the cumulative probability
distribution of a standard normal variate. Substitution of the numerical values yields
For very detailed calculations, as in this example, the stress-strength interference method is widely
used. More advanced techniques such as first-order and second-order reliability methods, Monte Carlo
simulation, and probabilistic finite element methods may also be used. These methods are described in
Chapters 3 through 5 of this handbook. Alford et al. (1991) used finite element analysis in conducting
a risk assessment for the C-141 aircraft. Burnside and Cruse (1989) used probabilistic finite element
analysis for the reliability assessment of aerospace system components.
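The interference calculation for the bearing check can be sketched as follows. The strength statistics are taken from the example above, but the pin diameter and the combined stress coefficient of variation (the load COV of 0.04 and the load-prediction COV of 0.05 combined in quadrature) are assumptions introduced here solely to complete the illustration, since the detailed stress calculation is not reproduced in the text.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Strength statistics from the example; geometry partly ASSUMED.
P = 5874.0             # ultimate fastener load, lb
D, t = 0.25, 0.200     # pin diameter (hypothetical) and thickness, in.
F_bru, s_bru = 137_093.0, 2682.0   # bearing strength mean / std dev, psi

stress = P / (D * t)                   # mean bearing stress, psi
cov_stress = math.hypot(0.04, 0.05)    # assumed combined stress COV
s_stress = cov_stress * stress

# Stress-strength interference for two normal variates:
# R = Phi((mu_strength - mu_stress) / sqrt(s_strength^2 + s_stress^2))
beta = (F_bru - stress) / math.hypot(s_bru, s_stress)
R = phi(beta)
print(f"bearing stress = {stress:,.0f} psi, beta = {beta:.2f}, R = {R:.4f}")
```

The output is a reliability rather than a margin of safety, which is the contrast the section draws with the conventional deterministic check.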
The reliability of a complete structure in service is a function of the individual reliabilities of its
component parts. This function is complex and generally not known, as there are varying degrees of
[Figure 23-4. Interference of the expected service load distribution with the static test load (probability density versus load, in percent DLL).]
correlation between the individual parts, particularly due to the loading. Alford et al. (1991) constructed
a fault tree for a specific joint in a C-141 aircraft that reflected their view of how the various failure
mechanisms could come together to produce complete joint failure. To do this for an entire wing
structure would be a formidable task.
Static testing of a complete test article can be used to generate a reliability for such a complex
structure by computing the interference of the expected service load distribution and the static test load
distribution. This is one of several possible approaches outlined by Rapoff et al. (1989) and Whitehead
et al. (1986). In this case, the static test load distribution is just a single value, equal to 150% design
limit load (DLL). This is illustrated in Fig. 23-4.
The reliability can be computed by finding the area under the standard normal curve from −∞ to an
upper limit defined by Rapoff et al. (1989):
(23-10)
where μ_F is the strength demonstrated in the static test, μ_S is the mean peak load expected (100% DLL),
γ_F is the coefficient of variation in strength of the full-scale article, γ_S is the coefficient of variation in
structural response, γ_P is the coefficient of variation in expected peak load, and z_0.05 is the value of the
standard normal variate at the 95% confidence level (= 1.645).
For a typical static test μ_F = 150% DLL, and the following parameters are estimated
a = 5.078
R = 0.999999
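The final number follows directly from the interference bound: with the upper limit a = 5.078 from Eq. (23-10), the reliability is the standard normal area from −∞ to a. A quick double-precision check (the quoted R = 0.999999 is this result truncated to six decimal places):

```python
import math

# Reliability as the standard normal CDF evaluated at the upper
# limit a from Eq. (23-10); erfc gives the tail accurately.
a = 5.078
pf = 0.5 * math.erfc(a / math.sqrt(2.0))  # failure probability, Phi(-a)
R = 1.0 - pf
print(f"R = {R:.7f}, pf = {pf:.2e}")
```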
Shrinking military budgets and tight economic times in general are causing the military and commercial
operators to extend the life of existing airframes. Some fleets (such as the C-141 transport) are operating
well beyond their original design lifetimes as life extension programs squeeze out all possible use from
these aircraft. Statistical methods can play a significant role in the life extension process. Work by
Berens (1988), Berens and Burns (1990), Christian et al. (1986), Saff et al. (1987), and Smith et al.
(1990) is of interest in regard to the effects of inspections and long-term usage on structural integrity.
Incorporation of data from inspections, associated with force management activities, allows the in-
service behavior of the structure to be characterized.
Because aircraft usage varies, accurate tracking of flight loads can be used to generate individualized
inspection programs that minimize the risk of structural failure and ensure the most efficient allocation
of support resources. Inspections and maintenance can be performed for cause rather than spending
time and money on inspections that find nothing.
[Figure: critical crack length (in.) versus flight hours N for wing-root and wing-fold locations; baseline usage, 7.6g symmetrical pullup, 100% CLBM at Mach 1.0.]
When an aircraft is flown more severely than the design baseline, it will have a lifetime shorter
than planned. Similarly, when an aircraft is flown less severely than the design baseline, it will have a
lifetime longer than planned. These differences are accounted for through a usage factor. The usage
factor is the ratio of the design lifetime to the projected lifetime at the severity of the actual service
flights. This is determined through a crack growth analysis for the particular design detail. This process
is illustrated in Fig. 23-6.
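The usage-factor bookkeeping described above amounts to a simple ratio; the numbers below are hypothetical:

```python
# Usage factor = design lifetime / lifetime projected for the actual
# usage severity (from a crack growth analysis of the design detail).
# All values are hypothetical, for illustration only.
design_life = 8000.0      # flight hours at baseline severity
projected_life = 6400.0   # flight hours at the actual (more severe) usage

usage_factor = design_life / projected_life   # > 1: flown more severely
actual_hours = 1200.0
equivalent_baseline_hours = usage_factor * actual_hours
print(usage_factor, equivalent_baseline_hours)
# -> 1.25 1500.0
```

Each actual flight hour thus consumes 1.25 baseline hours of the design lifetime for this (assumed) severity ratio.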
[Figure 23-6. Determination of the usage factor: predicted crack growth to the critical crack size gives the predicted life distribution; parts beyond the critical size are failed parts.]
[Figure: Weibull probability plot, percent probability of failure versus flight hours.]
The crack life projection process used in this example is deterministic, using a crack growth curve
that represents the mean behavior. A full statistical approach could be accomplished by including the
characterization of the material behavior and performing a Monte Carlo simulation to determine the
failure distribution.
A Weibull distribution can be fit to the completed data. A typical case is shown in Fig. 23-8. In
many cases decisions will be made from only a small amount of data. It is therefore important that the
confidence bands associated with the failure data be determined as well.
f(x) = [α/(β − x₀)] [(x − x₀)/(β − x₀)]^(α−1) exp{−[(x − x₀)/(β − x₀)]^α}    (23-12)
Although it appears cumbersome, after some algebra, the reliability can be computed numerically using
a Laguerre-Gauss integration scheme. The final form of the reliability is (Christian et al., 1986)
(23-13)
[Figure: probability distribution of life (hours), showing the minimum-life parameter x₀.]
where, in this case, z is a dummy variable of integration. The Laguerre-Gauss integration formula
becomes
R = exp(−x₀ρ) Σ_{i=1}^{m} w_i f(z_i)    (23-14)
The z_i and w_i values are the coordinates and associated weight functions for the degree of integration
desired. The coordinate z_i is the ith zero of L_m(z), the mth Laguerre polynomial. The expression for the
weights is (Hildebrand, 1974)

w_i = z_i / [(m + 1)² L_{m+1}(z_i)²]    (23-15)
These are tabulated in the reference cited, as well as in other numerical methods texts, for a variety of
different numbers of coordinates. The number required depends on the accuracy desired and the function
being integrated. The work done by the author and his colleagues in analyzing F-15 aircraft inspection
data utilized a 15-point integration with success.
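A 15-point Laguerre-Gauss rule of the kind mentioned can be generated with numpy rather than taken from tables; the sketch below checks the nodes and weights against two integrals of the weight function e^(−z) that are known in closed form:

```python
import numpy as np

# 15-point Gauss-Laguerre quadrature:
#   integral_0^inf e^{-z} g(z) dz  ~=  sum_i w_i g(z_i)
z, w = np.polynomial.laguerre.laggauss(15)

# Sanity checks: g = 1 integrates to 1, and g = z**2 integrates to
# Gamma(3) = 2; the 15-point rule is exact for both.
total = w.sum()
second_moment = (w * z**2).sum()
print(total, second_moment)
```

In the reliability calculation of Eq. (23-14), the z_i and w_i produced here would play the role of the tabulated coordinates and weights.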
5.4. Example
A cracking problem in the vertical tail of a fighter aircraft structure resulting from buffet loads
prompted an inspection to determine the extent of the problem. The data accumulated from this process
produced 12 failures (cracks of critical or greater size) and flight hour information for 902 aircraft. This
information was used to construct the failure distribution and the flight hours distribution. The cumu-
lative probability of failure was then calculated, using Eqs. (23-11) to (23-14), which gives the prob-
ability of failure as a function of flight hours for the fleet. This assumes that the failure distribution
would remain the same. If inspections yielded new failures, these could be included to modify the
failure distribution. The probability of failure would then need to be updated to reflect this improved
failure information. The probability distribution for the aircraft lives is shown in Fig. 23-10. The cu-
mulative probability of failure is shown in Fig. 23-11.
Conclusions are often based on a minimum of data. In this case, only 12 failures were used to
construct the failure distribution. This implies that the confidence is somewhat low as to the exact shape
[Figure 23-10. Probability distribution of aircraft lives versus flight hours.]
[Figure 23-11. Cumulative probability of failure versus life (flight hours).]
[Figure 23-12. Cumulative probability of failure versus life, showing the mean curve and 90% confidence bounds.]
of this distribution. Confidence bounds are essential to quantify the uncertainty associated with the
distribution. The 90% confidence bounds for the cumulative probability of failure are shown in Fig.
23-12.
6. CONCLUDING REMARKS
Current design criteria for aircraft structures are based on deterministic methods. There is a move,
however, toward probabilistic approaches. Uncertainties in loads, geometry, material properties, and
analysis procedures can be incorporated explicitly in a probabilistic analysis. Use of the stress-strength
interference approach for this purpose is discussed with an illustration. Use of the stress-strength in-
terference method in conjunction with certification test results is also discussed with an example. Life
prediction and tracking of individual aircraft and failure probability assessment of a fleet of aircraft are
also discussed and illustrated.
REFERENCES
ALFORD, R. E., J. B. COCHRAN, and R. P. BELL (1991). C-141 WS-405 risk assessment. In: Proceedings of the
Aircraft Structural Integrity Program Conference. T. D. Cooper, J. W. Lincoln, and R. M. Bader, Eds. Wright
Patterson Air Force Base, Ohio: U.S. Air Force, pp. 35-70.
BERENS, A. (1988). Structural risk analysis in aging aircraft fleets. In: Proceedings of the Aircraft Structural
Integrity Program Conference. T. D. Cooper, J. W. Lincoln, and R. M. Bader, Eds. Wright Patterson Air
Force Base, Ohio: U.S. Air Force, pp. 365-397.
BERENS, A., and J. BURNS (1990). The application of risk analysis to an aging aircraft fleet. In: Proceedings of
the Aircraft Structural Integrity Program Conference. T. D. Cooper, J. W. Lincoln, and R. M. Bader, Eds.
Wright Patterson Air Force Base, Ohio: U.S. Air Force, pp. 155-183.
BRUSSAT, T. R., J. C. EKVALL, and J. A. JAUREGUI (1987). Documentation of the Navy Aircraft Structural Integrity
Program (NASIP). Report No. NADC 87089-60. Warminster, Pennsylvania: Naval Air Development Center.
BURNS, J. G., et al. (1991). Probabilistic durability evaluation of ALCOA 7050 aluminum. In: Proceedings of the
Aircraft Structural Integrity Program Conference. T. D. Cooper, J. W. Lincoln, and R. M. Bader, Eds. Wright
Patterson Air Force Base, Ohio: U.S. Air Force, pp. 305-322.
BURNSIDE, H., and T. CRUSE (1989). Probabilistic structural analysis methods (PSAM) for aerospace system com-
ponents. In: Proceedings of the Aircraft Structural Integrity Program Conference. T. D. Cooper and 1. W.
Lincoln, Eds. Wright Patterson Air Force Base, Ohio: U.S. Air Force, pp. 263-282.
CHRISTIAN, T. F., H. G. SMITH, and C. R. SAFF (1986). Structural risk assessment using damage tolerance analysis
and flight usage data. In: Proceedings of the ASME Winter Annual Meeting. New York: American Society
of Mechanical Engineers.
GALLAGHER, J. P., F. J. GIESSLER, A. P. BERENS, and R. M. ENGLE, JR. (1984). USAF Damage Tolerant Design
Handbook: Guidelines for the Analysis and Design of Damage Tolerant Aircraft Structures. Report No.
AFWAL-TR-82-3073. Wright Patterson Air Force Base, Ohio: U.S. Air Force.
HAUGEN, E. B. (1980). Probabilistic Mechanical Design. New York: John Wiley & Sons.
HILDEBRAND, F. B. (1974). Introduction to Numerical Analysis, 2nd ed. New York: McGraw-Hill.
HOOKE, F. H. (1987). Aircraft structural reliability and risk analysis. In: Probabilistic Fracture Mechanics and
Reliability. J. W. Provan, Ed. The Hague, The Netherlands: Martinus Nijhoff, pp. 131-170.
MANNING, S. D., J. N. YANG, and J. L. RUDD (1987). Durability of aircraft structures. In: Probabilistic Fracture
Mechanics and Reliability. J. W. Provan, Ed. The Hague, The Netherlands: Martinus Nijhoff, pp. 213-267.
PEREZ, R., S. HARBISON, H. G. SMITH, JR., and C. R. SAFF (1990). Development of Techniques for Incorporating
Buffet Loads in Fatigue Design Spectra. Report No. NADC-90071-60. Warminster, Pennsylvania: Naval Air
Development Center.
PALMBERG, B., A. F. BLOM, and S. EGGWERTZ (1987). Probabilistic damage tolerance analysis of aircraft structures.
In: Probabilistic Fracture Mechanics and Reliability. J. W. Provan, Ed. The Hague, The Netherlands: Martinus
Nijhoff, pp. 47-130.
PROVAN, J. W. (1981). The micromechanics approach to the fatigue failures of polycrystalline metals. In: Cavities
and Cracks in Creep and Fatigue. J. Gittus, Ed. New York: Elsevier Applied Science, pp. 197-242.
RAPOFF, A. J., H. D. DILL, and K. B. SANGER (1989). An Improved Certification Methodology for Composite
Structures-Draft Copy. Final Report, Contract Number N62269-87-C-0258. Warminster, Pennsylvania:
Naval Air Development Center.
ROTH, P. G. (1990). Competition in probabilistic life analysis. In: Proceedings of the Aircraft Structural Integrity
Program Conference. T. D. Cooper, J. W. Lincoln, and R. M. Bader, Eds. Wright Patterson Air Force Base,
Ohio: U.S. Air Force, pp. 735-753.
ROTH, P. G. (1991). Probabilistic design. In: Proceedings of the Aircraft Structural Integrity Program Conference.
T. D. Cooper, J. W. Lincoln, and R. M. Bader, Eds. Wright Patterson Air Force Base, Ohio: U.S. Air Force,
pp. 463-483.
SAFF, C. R., H. G. SMITH, and T. F. CHRISTIAN (1987). Applications of Damage Tolerance Analysis to Structural
Risk Assessment. Paper presented at the 28th AIAA/ASME/ASCE/AHS Structures, Structural Dynamics and
Materials Conference, Monterey, California, 6-8 April 1987.
SMITH, H. G., C. R. SAFF, and T. F. CHRISTIAN (1990). Structural Risk Assessment and Aircraft Fleet Maintenance.
Paper presented at the 31st AIAA/ASME/ASCE/AHS Structures, Structural Dynamics, and Materials Con-
ference, Long Beach, California, 2-4 April 1990.
USAF (U.S. Air Force) (1974). Airplane Damage Tolerance Requirements. MIL-A-83444. Wright Patterson Air
Force Base, Ohio: U.S. Air Force.
USAF (U.S. Air Force) (1975a). Aircraft Structural Integrity Program, Airplane Requirements. MIL-STD-1530A.
Wright Patterson Air Force Base, Ohio: U.S. Air Force.
USAF (U.S. Air Force) (1975b). Airplane Strength and Rigidity Reliability Requirements, Repeated Loads and
Fatigue. MIL-A-8866B. Wright Patterson Air Force Base, Ohio: U.S. Air Force.
USAF (U.S. Air Force) (1975c). Airplane Strength and Rigidity Ground Tests. MIL-A-8867B. Wright Patterson
Air Force Base, Ohio: U.S. Air Force.
USAF (U.S. Air Force) (1992). Military Standardization Handbook: Metallic Materials and Elements for Aerospace
Vehicles. MIL-HDBK-5F. Wright Patterson Air Force Base, Ohio: U.S. Air Force.
WALKER, K. (1989). Structural fatigue risk assessments: elements and example. In: Proceedings of the Aircraft
Structural Integrity Program Conference. T. D. Cooper, J. W. Lincoln, and R. M. Bader, Eds. Wright Pat-
terson Air Force Base, Ohio: U.S. Air Force, pp. 236-262.
WHITEHEAD, R. S., et al. (1986). Certification Testing Methodology for Composite Structure. Report No. NADC-
87042-60. Warminster, Pennsylvania: Naval Air Development Center.
24
APPLICATIONS IN SHIP
STRUCTURES
1. INTRODUCTION
The structural weight of naval surface combatants constitutes approximately 35% of the total lightship
displacement, making hull structures the heaviest of all ship subsystems. Any improvements in vessel
capability through growth of the mission-related payload will necessitate an equivalent reduction in
weight in some other subsystem. Because of the proportionally low cost of the hull structure subsystem,
as shown in Fig. 24-1, improvements can be made here without drastically increasing total vessel cost.
The analogous problem in the civilian shipping world is just as important. Any reduction in structural
weight is a savings in the vessel initial cost and will represent additional cargo-carrying capacity for
the same vessel displacement. Alternatively, the same amount of cargo could be carried at a lighter
displacement, thereby reducing fuel consumption. All of these mean an increased profitability for the
vessel.
How can structural weight be reduced? The use of new materials and technologies can provide a
means of making the vessel lighter. But what about vessel strength? Current design criteria will become
invalid or unrealistic as new materials and technologies are introduced. This is because the design
criteria are typically codified in the form of simple equations or charts that are meant for a particular
application or material. They usually contain some empirically derived factors of safety that may not
be evident to the user. The use of these criteria can possibly lead to overdesigning the structure or, even
worse, underdesigning due to improper estimation of safety.
Consequently, improved design criteria and analysis methods need to be developed. These methods
should be capable of handling the new technologies and materials as well as existing ones. Because the
loading of a ship structure is mainly the sea, a truly random system, the most appropriate new method
should be one that takes into account the randomness of both loading and structural properties to
estimate the risk of unacceptable response. Many reliability or risk assessment methods have been
proposed for use with ship structures (see Mansour, 1972a,b; Faulkner and Sadden, 1975; Stiansen et
al., 1979; Faulkner, 1981; Mansour et al., 1984; White and Ayyub, 1985). Uncertainties are modeled
in terms of the mean, the variance, and the probability density and distribution functions of the structural
strength and loadings. The limitations and assumptions involved in each method are a result of how
that method uses part or all of the statistical information available.
In this chapter a discussion of sea loads on ships is given first. Then a brief review of the literature
on ships and three examples of how structural reliability methods can be applied to ship structural
design are given. Finally, applications of probabilistic methods in in-service inspection planning and
life prediction are noted.
2.1. Notations
[Figure 24-1 appears here: a bar chart comparing the full-load displacement breakdown with the acquisition cost breakdown for a notional surface combatant.]
Figure 24-1. Weight and cost breakdown for a notional surface combatant.
Desired reliability level (probability) that life n will meet or exceed design fatigue life Nd
Msw Hull girder bending moment present in still water
Mu Ultimate bending moment
Mw Wave-induced hull girder bending moment
Mo Maximum bending moment in a simply supported beam under a uniform lateral load
m Negative slope of mean S-N regression line
N Total number of cycles, either simulation cycles or fatigue cycles; also, number of cycles to failure
Nd Number of fatigue cycles in the design life
Nf Number of simulation cycles where g(·) < 0
Pf Probability of failure
R Resistance or strength in the performance function
RI Reliability factor
S Stress range
SN Mean value of the constant amplitude stress range at the design life
SR Constant amplitude stress range at N cycles to failure
Sui Maximum stress range in a random loading expected only once in the vessel lifetime
Src Equivalent stress range
t Plate thickness
V Ship speed in feet per second
Vc Coefficient of variation due to uncertainty in mean intercept of the regression line; includes effects
of fabrication, workmanship, and uncertainty in slope
Coefficient of variation due to errors in fatigue model and use of Miner's rule
Coefficient of variation due to scatter of fatigue test data about mean S-N line
Total coefficient of variation of resistance in terms of cycles to failure
Coefficient of variation due to uncertainty in equivalent stress range; includes effects of error in
stress analysis
YI Distance from the centroidal axis of the cross-section to the midthickness of the stiffener flange
Distance from the centroidal axis of the transformed cross-section to the midthickness of the plating
Z Section modulus of the vessel
Plate slenderness ratio
Safety index
Initial eccentricity of the beam-column, typically taken as a/750
Initial deformation of the plating
Central deflection of a simply supported beam under a uniform lateral load
Average strain
εult Ultimate level of strain in a plate
η Geometric eccentricity parameter
Γ(·) Gamma function
Fatigue life factor
Shape parameter
Column slenderness parameter
Bending stress ratio
ν Poisson's ratio
νp Poisson's ratio of plate material
νs Poisson's ratio of stiffener material
2.2. Abbreviations
One source is model test data and the other is seakeeping computer programs. Each has its own limitations.
Model testing is expensive and difficult to do correctly. The model must properly scale all of the
dynamic properties of the ship and its materials. The data collection needs to be done so as to reduce
scale effects. Seakeeping computer programs suffer from the many modeling assumptions that need to
be made in order to obtain a solution. The programs work well for small sea states and basic hull forms,
but begin to have problems as the motions become large. Bhattacharyya (1978) provides an excellent
overview of the ship motions problem, with a more recent discussion given by Dalzell (1989).
3.1.1. Static loads. The static loads are those loads whose natural periods are orders of magnitude
larger than the natural period of vibration of the vessel or its structure. Such loads as the weight of the
structure, cargo loads, dead loads, and still-water buoyancy are considered to be static loads. The load
cycle for these loads can be as long as the voyage. The loads resulting from the day-night thermal
cycles, although of a much shorter load cycle than the other loads, are also considered to be static.
Specialty loads such as dry-docking and grounding are also usually treated as static loads.
The calculation of the static loads is a relatively simple process for any given loading condition.
However, it is much more difficult to determine a model for the lifetime distribution of the static loading.
This is because the static loading is a function of the weight distribution of the vessel cargo, the density
of the water in which the vessel is floating, the operating profile of the vessel, etc. The statistical nature
of this type of loading has been investigated by Guedes-Soares and Moan (1982, 1985, 1988).
The static loads can induce both local and global load effects. For example, the integration of the
difference between the longitudinal weight distribution and the longitudinal buoyancy distribution results
in a longitudinal bending moment. The bending moment is considered a global load. On the other hand,
the dead load due to the weight of a piece of equipment on a platform deck produces a local load
effect. Other static loads that can induce local load effects include the local pressure due to hydrostatics,
the block loads due to dry-docking, and hydrostatic loads from cargo acting on tank bulkheads.
3.1.2. Slowly varying loads. Slowly varying loads are those loads for which even the shortest
load periods are much longer than the natural period of vibration of the vessel. The dominant load in
this category is the wave-induced dynamic pressure distribution on the hull. The pressure distribution
is strongly influenced by ship speed, wave encounters, and the resulting ship motions.
The slowly varying loads have both a local and a global load effect. The magnitude of the wave-
induced hull girder bending moment is often as large as or larger than that coming from the still-water
condition. As a result, any analysis of the strength of the ship hull girder needs to consider the extreme
values of the wave-induced loading.
The local effect of the slowly varying loads can be seen in variation of the wave profile on the hull
of the ship. As the shape of the immersed hull changes, the pressure distribution on the hull changes.
The pressure distribution forms the normal loading to the plate-stiffener panels that make up the hull
structure.
A considerable amount of research has been directed toward developing a statistical description of
the ocean environment in terms of wave loading on the hull of a ship. Works by Ochi (1978, 1979,
1981), Mansour and Faulkner (1972), Kaplan and Raff (1972), Mansour (1987), and Chakrabarti (1991)
present a broad perspective of the effort undertaken in this area.
3.1.3. Rapidly varying loads. Loads that have a period on the order of the natural period of
vibration of the structure are considered to be rapidly varying loads. Propeller-induced pressure pulses,
forced mechanical vibrations, and shock waves from gun blasts or underwater explosions are just a few
of the rapidly varying loads that must be addressed. But the loads in this category of most interest are
the loads that result from the impact of the ship hull with the sea. These loads can take the form of
wave slap on vertical structural members or the slam and resulting vibrations from the bottom of the
bow impacting the water surface as it tries to reenter the water after an extreme pitch event. The usual
two-node vibration of the hull that results from the bottom or bow flare slamming is known as whipping.
Whipping stresses have been shown to be an important factor when investigating the lifetime stress
spectrum of the ship's hull girder in bending (Ochi and Motter, 1973). Measurements have shown that
in some cases there is a substantial whipping stress component in the transverse bending of the hull
(Nikolaidis et al., 1992).
Again, this type of loading has both a local and a global load effect. The local load effect can be
considered as an extreme loading of the plating panel, between stiffeners. The possibility of plate rupture
or large plastic deformations needs to be considered. Purcell et al. (1988) recorded extreme plating
pressures during slamming events and discussed the probability of plate panel failure. The effect of
slamming on the hull plating was also discussed by Wiernicki (1986). The global loads are the whipping-
induced longitudinal bending stresses. These stresses need to be combined with the bending stresses
from the still-water and wave loads to determine the complete bending stress spectrum.
The effort to understand these relatively high-frequency loading effects has been underway for a
number of years. Among the literature the reader should investigate for further details are Ochi and
Motter (1973), Fero and Mansour (1985), Mansour and Lozow (1982), and Nikolaidis et al. (1992).
It has been reported that the large amount of fatigue cracking seen on the Trans-Alaska Pipeline tankers is related
to pulsating hydrodynamic pressures on the hull, and not directly related to hull girder bending.
Another difficulty is that currently there are no readily available response amplitude operators (RAOs) for determining the effects
of transverse hull girder bending on the response. Recent sea trial data show that in some seaways there
can be significant stress effects on structural components from transverse bending and whipping of the
hull. It is easy to see that these stresses can strongly influence the fatigue life of a component, depending
on the location of the component. The stresses arising from transverse bending could probably be
accounted for by using a combined stress formulation for fatigue, if some means of determining these
stresses for a given wave loading is developed.
Because it is more commonly used and more widely accepted, a spectral approach for modeling
wave-induced loads is discussed here. All of the spectral models start with the description of the loading
as a power spectral density (PSD) of stress at some location within the ship. Then, using moments of the PSD, one can determine
the root mean square (RMS) of the process and such other parameters as the average zero-crossing
period, and the bandwidth (or broadness) parameter (Bhattacharyya, 1978). If the process is assumed
to be Gaussian, ergodic, and narrow banded, it can be shown that the peaks will follow a Rayleigh
distribution. The distribution for a given PSD can be completely defined by the RMS and the second
and fourth moments of the process.
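The moment computations just described are easy to sketch. The following fragment is illustrative only (the function names and the narrow-banded test spectrum are our own, not from the chapter):

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal integration (avoids NumPy-version-specific helpers)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def spectral_stats(omega, s):
    """Spectral moments of a one-sided response PSD and the derived
    quantities used in the text.  omega in rad/sec, ascending."""
    m0 = trapezoid(s, omega)                 # area = variance of the process
    m2 = trapezoid(omega**2 * s, omega)
    m4 = trapezoid(omega**4 * s, omega)
    rms = np.sqrt(m0)
    tz = 2.0 * np.pi * np.sqrt(m0 / m2)      # average zero-crossing period
    eps = np.sqrt(1.0 - m2**2 / (m0 * m4))   # bandwidth: 0 narrow, 1 broad
    return rms, tz, eps

# Narrow-banded example: energy concentrated near 0.8 rad/sec
w = np.linspace(0.1, 3.0, 2000)
psd = np.exp(-0.5 * ((w - 0.8) / 0.05) ** 2)
rms, tz, eps = spectral_stats(w, psd)
```

For a Gaussian, ergodic process with eps near zero, the peaks then follow a Rayleigh distribution whose parameter is this RMS.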
3.2.1. Operational profile. In order to begin a probability-based design or assessment procedure,
a consistent means of generating an operational profile needs to be defined. Sikora et al. (1983) gave
a method for estimating the lifetime loads based on a simplified operational profile. That general ap-
proach is presented here.
The operational profile is provided as a matrix of conditions with enough possible combinations that
a reasonable approximation to a realistic loading profile is realized. As with any approximation, there
is a trade-off between increasing the number of operation cells (matrix size) to improve accuracy and
limiting the number of cells to make the procedure a reasonable design tool. When one considers that
for each ship heading and speed combination there can be up to 16 possible wave heights and for each
wave height there are 11 members of the sea spectrum family, the total number of computations required
increases rather quickly.
A sample operational profile matrix is shown in Table 24-1, using a combination of three speeds and
three sea states. Each of the nine possible speed-sea state combinations also has four possible headings.
The headings are head seas, bow seas (45°), quartering seas (135°), and following seas. Beam seas
are not included because they do not excite the hull girder vertical bending mode of structural response.
The probabilities of occurrence for each cell represent the estimated percentage of the total time spent
in a sea of a given significant wave height at each of the heading and speed combinations. The sum of
the probabilities in each column must be less than or equal to one. The less than portion comes from
the fact that at the lower sea states some amount of time may be spent in beam seas. To determine the
amount of time spent in a particular cell the total time at sea must be multiplied by the probability of
occurrence from Table 24-1 and the probability of that significant wave height occurring.
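The bookkeeping in that last sentence can be sketched directly. All of the numbers below are hypothetical stand-ins, not values from Table 24-1:

```python
def hours_in_cell(total_sea_hours, p_cell_given_hs, p_hs):
    """Time spent in one heading/speed cell: total at-sea time multiplied by
    the cell probability (given the significant wave height) and by the
    probability of that significant wave height occurring."""
    return total_sea_hours * p_cell_given_hs * p_hs

# One column of a notional operational profile (one significant wave height);
# the fractions sum to less than 1 because some time is spent in beam seas.
column = {("15 kt", "head"): 0.30, ("15 kt", "bow"): 0.25,
          ("25 kt", "head"): 0.20, ("25 kt", "quartering"): 0.15}

p_hs = 0.12                      # probability of this wave height occurring
hours = {cell: hours_in_cell(50_000.0, frac, p_hs)
         for cell, frac in column.items()}
```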
3.2.2. Wave spectrum. The loading description procedure being discussed depends on having a
statistical description of the loading from the sea. Generally that starts with a PSD of the energy in a
given seaway. Because wave energy and wave height are directly related, the ordinate of the energy
spectrum is usually expressed in terms of wave height (height² · sec). A number of formulations for
describing the energy in a seaway have been proposed over the years. They usually consist of one or
more parameters to attempt to better describe the different conditions experienced. The significant wave
height Hs is the most commonly used parameter; the modal wave frequency ωm is also often used. Ochi
(1978) proposed a six-parameter family of spectra. To the two common parameters he added a shape
parameter, λ, to allow for control of the sharpness of the spectrum peak. In addition Ochi added a
second set of the same parameters to account for the presence of a plateau or second peak at a higher
frequency. The six-parameter formulation can then be given as

S(ω) = (1/4) Σ_{j=1,2} {[(4λ_j + 1)ω_mj^4/4]^λ_j / Γ(λ_j)} (H_sj^2 / ω^(4λ_j+1)) exp[−((4λ_j + 1)/4)(ω_mj/ω)^4]    (24-1)

where Γ(·) is the gamma function and j = 1, 2 stands for the lower and higher frequency components,
respectively. The shape parameter forces the spectrum to become sharper with increasing λ.
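The six-parameter formulation is straightforward to implement. The following is a sketch under our own naming (`ochi_spectrum` and its arguments are not the chapter's notation); a convenient check is that each component integrates to Hs²/16, so the total area is (Hs1² + Hs2²)/16.

```python
import numpy as np
from math import gamma

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def ochi_spectrum(omega, hs, wm, lam):
    """Six-parameter wave spectrum.  hs, wm, lam are length-2 sequences
    giving (Hs_j, omega_mj, lambda_j) for the lower- (j = 1) and
    higher-frequency (j = 2) components."""
    omega = np.asarray(omega, dtype=float)
    s = np.zeros_like(omega)
    for hsj, wmj, lj in zip(hs, wm, lam):
        a = (4.0 * lj + 1.0) / 4.0 * wmj ** 4          # recurring group
        s += (a ** lj / gamma(lj)) * hsj ** 2 / omega ** (4.0 * lj + 1.0) \
            * np.exp(-a / omega ** 4)
    return 0.25 * s

w = np.linspace(0.05, 8.0, 40_000)
spec = ochi_spectrum(w, hs=(3.0, 2.0), wm=(0.4, 0.9), lam=(2.5, 1.5))
m0 = trapezoid(spec, w)       # approaches (3**2 + 2**2)/16
```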
Table 24-2 provides values of the six parameters for a family of 11 wave spectra based on Ochi's
statistical analysis of data from the North Atlantic. The various spectra are intended to account for the
seaway from first development to fully risen. When using all 11 spectra to describe a seaway, each
would be weighted on the basis of its probability of occurrence. The most probable spectrum is weighted
at 0.5 with the other 10 spectra each having a weight of 0.05. Figure 24-2 shows the family of six-parameter
wave spectra, with the most probable spectrum labeled.

Table 24-2. Six-Parameter Family of Spectra: Values of the Six Parameters as a Function of Significant
Wave Height (Hs in meters)

Other spectra used in ship design include the Pierson-Moskowitz spectra
(Pierson et al., 1955) and the International Towing Tank Conference spectra (ITTC, 1972).
Another important factor to be considered when developing a loading model for the seaway is the
probability that a given significant wave height will be encountered. On the basis of his study of North
Atlantic weather and wave data, Ochi (1978) proposed the probabilities shown in Table 24-3. This
particular set of data is widely accepted, but not the only frequency of occurrence data. Other data have
been collected by the U.S. Navy and the U.S. Hydrographic Office.
3.2.3. Encounter spectrum. The PSD developed so far is based on the frequency of the waves
occurring in the seaway. A ship moving through the seaway will experience a different frequency of
encounter with the waves, depending on its heading and speed. Consequently the PSD needs to be
defined in terms of encounter frequency instead of wave frequency. The relationship between encounter
frequency and wave frequency is given as

ω_e = ω + (ω²V/g) cos θ    (24-2)

where ω_e is the encounter frequency, ω is the wave frequency, g is the gravitational acceleration (ft/sec²),
V is the ship speed (ft/sec), and θ is the ship heading relative to the wave direction (head seas = 0°).
Because the area under the PSD curve represents the energy in the system, the area under the PSD
curve needs to be the same whether plotted against wave or encounter frequency. This requires a
transformation of the ordinates of the PSD curve to maintain the relationship.
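This ordinate rescaling is simply division by the Jacobian of the wave-to-encounter mapping. A sketch under the head-seas-equals-0° convention (the function name is ours); note that in following seas the mapping can fold back on itself, so the simple division below is only valid while the Jacobian stays away from zero:

```python
import numpy as np

G_FT = 32.2  # gravitational acceleration, ft/sec^2

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def to_encounter(omega, s, v, heading_deg):
    """Re-express a wave-frequency PSD in encounter frequency.  Ordinates
    are divided by |d(omega_e)/d(omega)| so the area under the curve (the
    energy) is unchanged.  v in ft/sec; head seas = 0 degrees."""
    c = np.cos(np.radians(heading_deg))
    omega_e = omega + omega ** 2 * v * c / G_FT
    jacobian = np.abs(1.0 + 2.0 * omega * v * c / G_FT)
    return omega_e, s / jacobian

w = np.linspace(0.1, 3.0, 4000)
psd = np.exp(-0.5 * ((w - 0.8) / 0.1) ** 2)       # notional wave PSD
we, se = to_encounter(w, psd, v=25.0, heading_deg=0.0)
area_wave, area_enc = trapezoid(psd, w), trapezoid(se, we)   # nearly equal
```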
3.2.4. Response amplitude operators. The PSDs built to this point are based on the wave height
and represent the input needed to determine ship response. The output response spectrum is the product
of the wave spectrum and the response amplitude operator (RAO) associated with a particular ship
heading and speed. The units of the RAO are (response per unit wave height)² plotted as a function of
wave frequency. As mentioned earlier, the various RAOs can be found experimentally either from sea
trials or model testing, analytically by using seakeeping computer codes, or empirically from regression-
type analysis of experimental data. The fundamental concept on which the RAOs rely is that the
relationship between the input wave height and the output response is linear.
Bhattacharyya (1978) provides general curves for the bending moment response at different headings
and speeds. His data were based on the Series 60 family of hulls. Another general bending moment
RAO curve was developed by Sikora et al. (1983) from a series of model and full-scale tests for a
limited number of ships. It is intended for use when other data are not available. The axes were
nondimensionalized in order to allow comparison of different hull forms. Fifteen classes of destroyer-
type naval ships, one large naval ship, and two commercial ship types were used to generate the data
from which the RAO curve was derived. Because this is a limited data set one needs to ensure that the
hull parameters of the ship being investigated fall into the range of the ships tested. To be able to
reasonably use this RAO curve, the block coefficient should be in the range from 0.44 to 0.62 and the
waterplane coefficients should be between 0.72 and 0.84.
3.2.5. Response spectrum. One response spectrum is generated for each heading, speed, and wave
height combination included in the operational profile. The response spectrum, which is the product of
the wave spectrum and the RAO, should be expressed in terms of encounter frequency. The units of
the spectrum ordinate will be in terms of the vertical bending moment at midships (ft² · LT² · sec). To
be more useful, the units are usually converted to stress by dividing by the square of the design required
section modulus (in² · ft) at midships. The section modulus is calculated by taking the second moment
of area (moment of inertia) about the neutral axis of the midship section and dividing it by the distance
from the neutral axis to the strength deck. The stress response at any location other than the strength
deck can be determined by using stress transfer factors to adjust the stress at the strength deck. Stress
concentrations due to component geometry could be handled in a similar manner (Hughes, 1988).
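Because the system is linear, PSD ordinates scale with the square of any factor applied to the process. A minimal sketch, with hypothetical factor values:

```python
import numpy as np

def moment_to_stress_psd(moment_psd, z_design, transfer=1.0, scf=1.0):
    """Stress PSD at a location from a midship vertical-bending-moment PSD:
    divide by the design section modulus squared, then apply the squared
    stress transfer factor (for locations away from the strength deck) and
    the squared stress concentration factor (for detail geometry)."""
    k = (transfer * scf) ** 2
    return np.asarray(moment_psd, dtype=float) * k / z_design ** 2

# Hypothetical ordinates, section modulus, and factors
stress_psd = moment_to_stress_psd([4.0e8, 9.0e8], z_design=5_000.0,
                                  transfer=0.8, scf=1.5)
```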
By determining various moments of the response spectrum about the ordinate axis it is possible to
determine the RMS response, the average frequency of zero upcrossings, and a broadness parameter.
In addition it is possible to estimate the number of load cycles experienced in a particular response
spectrum by dividing the time spent in this condition by the average loading frequency. Assuming that
the response PSD is narrow banded, it is now possible to determine the statistics of extreme responses.
That is the type of information required to perform a probabilistic structural analysis.
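For one cell of the operational profile, these steps can be collected into a short routine. A sketch (the names and numbers are ours; the expected-maximum expression is the standard asymptotic result for the largest of n Rayleigh peaks):

```python
import numpy as np

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def cell_response_stats(omega_e, s, hours):
    """RMS response, average zero-upcrossing period, number of load cycles,
    and expected extreme peak for a narrow-banded Gaussian response held
    for `hours` in this heading/speed/wave-height cell."""
    m0 = trapezoid(s, omega_e)
    m2 = trapezoid(omega_e ** 2 * s, omega_e)
    rms = np.sqrt(m0)
    tz = 2.0 * np.pi * np.sqrt(m0 / m2)      # sec per zero upcrossing
    n = hours * 3600.0 / tz                  # load cycles in this cell
    r = np.sqrt(2.0 * np.log(n))
    expected_max = rms * (r + 0.5772 / r)    # expected largest of n peaks
    return rms, tz, n, expected_max

w = np.linspace(0.2, 4.0, 4000)
psd = np.exp(-0.5 * ((w - 1.0) / 0.08) ** 2)   # notional stress PSD
rms, tz, n, ex = cell_response_stats(w, psd, hours=100.0)
```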
No widely accepted means of combining all of the loads into a probabilistic treatment suitable for design
currently exists. It does appear that time-domain approaches may provide the best means of solving this problem.
The loading on ship structures from the seaway is anything but simple. When one considers that not
only can the sea take on many shapes and sizes at different locations throughout the world, but that
the ship will continually move from one area to another throughout its life, it is obvious that the only
way to deal with the situation is through the use of statistical processes and methods. When the failure
mode being investigated is fatigue, the inherent variability (uncertainty) in the failure mode almost
mandates the use of some form of statistical procedure.
Many researchers, engineers, and design authorities in the marine industry reached this conclusion
a number of years ago. As a result there have been many investigations undertaken and a number of
procedures developed for including reliability methods in the design of ship structures. Mansour
(1972a,b, 1974) was one of the early leaders in applying reliability methods to ship structures. Mansour
(1972a,b) and Mansour and Faulkner (1972) looked principally at using reliability methods to evaluate
the probability of failure due to hull girder bending (example 1 in Section 4.1). Faulkner and Sadden
(1975) and later Faulkner (1981) looked at semiprobabilistic approaches for including reliability methods
in design. Mansour et al. (1984) and White and Ayyub (1985) compared different probabilistic ap-
proaches as to their applicability in ship design. Sikora et al. (1983) and Munse et al. (1983) applied
probabilistic methods to the problem of the design of ship structural details against fatigue (example 3
in Section 4.3). Madsen (1986) described a consistent approach for evaluating the reliability of ship
structures, using advanced moment methods in the context of a set of interacting computer programs.
White and Ayyub (1987a,b) looked at using simulation to provide design-oriented partial safety factors
for both primary hull girder analysis and fatigue. More recently, Nikolaidis et al. (1992) looked at the
rather complicated panel and hull girder ultimate strength failure modes and applied both simulation
and advanced moment methods to determine levels of reliability (example 2 in Section 4.2). Reports
related to the application of probabilistic methods in ship structures are frequently published by the
Ship Structure Committee (Mansour, 1990; Nikolaidis and Kaplan, 1991).
To demonstrate how probabilistic methods can be used in the structural analysis and design of ship
structures three failure modes are discussed in the following sections. The limit states are chosen to
give the reader an opportunity to see the very simple (hull girder), the very complex (stiffened panel),
and the potentially most useful (fatigue design).
4.1.1. Limit state analysis. The classic treatment of this problem considers the hull to be a box
beam experiencing pure bending. The problem is then just a simple beam in bending and can be written
as
Mu = Z σy    (24-3)
where Mu is the ultimate bending moment, Z is the section modulus of the vessel at the section of
interest, and σy is the tensile yield stress of the vessel material.
The limit state for hull girder bending can be expressed by
G=R-L (24-4)
where G is the performance function, R is the resistance or strength, and L is the load. When G ≤ 0
the structure fails; when G > 0 the structure survives.
When expressed in terms of Eq. (24-3) a nonlinear performance function results. This form separates
the load into the wave and still-water bending moments, Mw and Msw, respectively, and G is expressed
in units of bending moments:

G = σy Z − Msw − Mw    (24-5)

Here the product σyZ represents the resistance or strength of the system. The still-water bending
moment Msw could be considered as one load effect and the wave-induced bending moment Mw as
another load effect. Each will have its own distribution type, mean value, and coefficient of variation.
Each will represent the distribution of the loading in the lifetime of the vessel. Although this is not a
very sophisticated manner of combining load effects, it is useful to show the flexibility of simulation
methods.
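As a preview of the simulation used below, the limit state of Eq. (24-5) can be exercised with a direct Monte Carlo sketch. All numbers are hypothetical stand-ins, not the Table 24-5 values; the distribution families follow the text (normal yield stress, lognormal section modulus, the two bending moments as separate load effects), while the extreme-value form chosen for Mw is our own assumption.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000

# Hypothetical statistics -- illustrative stand-ins, NOT Table 24-5 values.
cov_z = 0.04
sz = np.sqrt(np.log(1.0 + cov_z ** 2))                      # lognormal sigma
z = rng.lognormal(np.log(5_700.0) - 0.5 * sz ** 2, sz, N)   # in^2*ft
sigma_y = rng.normal(15.4, 15.4 * 0.07, N)                  # LT/in^2
m_sw = rng.normal(20_000.0, 20_000.0 * 0.20, N)             # ft*LT
gum_scale = 50_000.0 * 0.15 * np.sqrt(6.0) / np.pi          # Gumbel for Mw
m_w = rng.gumbel(50_000.0 - 0.5772 * gum_scale, gum_scale, N)

g = sigma_y * z - m_sw - m_w    # Eq. (24-5): the structure fails when g <= 0
pf = np.mean(g <= 0.0)          # direct-simulation estimate of Pf
beta = g.mean() / g.std()       # crude mean-value safety-index estimate
```

With 200,000 samples the estimate of Pf here stabilizes to about two significant figures; for the much smaller failure probabilities of real designs, the variance reduction techniques discussed below become attractive.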
4.1.2. Hull girder reliability assessment. To demonstrate the use of probabilistic methods in per-
forming a reliability analysis an example problem of a naval frigate is used. This frigate has been used
several times as an example of reliability analysis methods (Mansour and Faulkner, 1972; White and
Ayyub, 1985). The principal characteristics of the frigate are given in Table 24-4 and the midship section
is shown in Fig. 24-4. Equation (24-5) is used as the limit state. The basic variables for the limit state
are shown in Table 24-5, along with their respective statistical properties.
The strength uncertainties were evaluated in several of the references (Mansour, 1972a,b; Mansour and
Faulkner, 1972). For this investigation, we separate those uncertainties associated with the material
properties and those associated with the configurations, structural geometries, and construction. The
former is applied to σy in Eq. (24-5) and the latter to Z in the same equation. The distribution of σy
[Table 24-4 (principal characteristics of the example frigate) and the midship section drawing of Fig. 24-4, with its deck, girder, frame, and longitudinal scantlings, appear here.]
Figure 24-4. Midship section of example frigate. (Source: Mansour, A. E., and D. Faulkner [1972]. On applying
the statistical approach to extreme sea loads and ship hull strength. Transactions of the Royal Institution of Naval
Architects 114. Reprinted with permission from the Royal Institution of Naval Architects.)
588 Applications in Ship Structures
Table 24-5. Probabilistic Characteristics of Basic Variables for the Example Problem
(Columns: basic variable, mean, coefficient of variation, distribution; N/A, not applicable.)
has been shown in the literature (Mansour and Faulkner, 1972; Daidola and Basar, 1980) to be approximated well by a normal distribution. For the section modulus, the central limit theorem suggests
that for multiplicative models such as Z, a lognormal distribution should be adopted.
The statistics and distributions of the load variables Msw and Mw are estimated by the approach used
by Mansour (1972a,b). This technique involves utilizing either strip-theory ship motions computer pro-
grams or extensive model testing to generate bending moment RAOs, then using the principle of su-
perposition (a linear assumption) to obtain a bending moment spectrum and thus an RMS value of
bending moment. This allows the calculation of the expected value of the extreme bending moment for
long periods or for the entire vessel life. The approach is limited by the number of linearizing assump-
tions made and the imperfect knowledge of the actual lifetime voyage pattern of the vessel.
There are many methods available for performing reliability assessments (see Chapters 2 to 5 in this
book). For this example we use Monte Carlo simulation, both direct simulation and simulation using
variance reduction techniques (VRTs). For further discussion of the use of simulation in reliability
assessment the reader is directed to Chapter 4. Specific applications relating to ship structures may be
found in White and Ayyub (1985, 1987a,b) and Mansour (1990).
It was known from earlier results (Mansour and Faulkner, 1972) that the probability of failure is small; consequently, the number of simulation cycles required in direct simulation would be exceptionally large. Therefore no solution was obtained using direct simulation. Simulation with variance reduction techniques (conditional expectation and antithetic variates techniques) was used to compute the failure probability.
In 2000 simulation cycles the probability of failure was found to be Pf = 0.098 × 10⁻⁶ with a
coefficient of variation (COV) of 0.0174. Figures 24-5 and 24-6 show how the method converges on a
solution. It is interesting to note that in as few as 50 simulation cycles the coefficient of variation of
the solution is less than 10% and the solution is correct to 0.1 × 10⁻⁶.
Figure 24-5. Convergence of the estimated probability of failure (×10⁻⁴) with the number of simulation cycles.
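The conditional expectation and antithetic variates techniques used here can be sketched as follows: sample the load variables (and the lognormal section modulus), score the exact conditional failure probability Φ(·) with respect to the normal yield stress instead of a 0/1 indicator, and pair each sample with its antithetic mirror. The distribution parameters are illustrative placeholders, not the frigate data.

```python
import math
import random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def vrt_mc_pf(n_pairs, seed=2):
    """Failure probability of sigma_y * Z < Msw + Mw by conditional
    expectation with antithetic variates (illustrative parameters).

    Conditioning on (Z, Msw, Mw), the conditional failure probability is
    Phi(((Msw + Mw)/Z - mu_y) / s_y) because sigma_y is normal.
    Returns (estimate, coefficient of variation of the estimate).
    """
    rng = random.Random(seed)
    mu_y, s_y = 30.0, 1.5
    scores = []
    for _ in range(n_pairs):
        u = [rng.gauss(0.0, 1.0) for _ in range(3)]
        pair = []
        for sign in (+1.0, -1.0):  # antithetic mirror of the same normals
            z = math.exp(math.log(100.0) + 0.05 * sign * u[0])
            m_sw = 1000.0 + 100.0 * sign * u[1]
            m_w = 1400.0 + 200.0 * sign * u[2]
            pair.append(phi(((m_sw + m_w) / z - mu_y) / s_y))
        scores.append(0.5 * (pair[0] + pair[1]))  # average the pair
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    cov = math.sqrt(var / n) / mean
    return mean, cov
```

Because each cycle contributes a smooth conditional probability rather than a rare 0/1 outcome, the estimator settles far faster than direct simulation, which is the behavior seen in Figs. 24-5 and 24-6.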
in axial loading. Such structural members as keels, bottom girders, longitudinal bulkheads, and deck
girders can act as the side boundaries of the panel. When the panel is located so as to be in a position
to experience large in-plane compression the boundary conditions for the ends are taken as simply
supported. The boundary conditions along the sides can also be considered simply supported.
Three types of loads affect the panel. Negative bending loads are the lateral loads due to uniform
lateral pressure, which causes the plate to be in tension and the stiffener flanges to be in compression.
Positive bending loads are the lateral loads which put the plating in compression and the stiffener flange
in tension. The third load type is uniform in-plane compression. This type of loading arises from the
hull girder bending, and will be considered to be positive when the panel is in compression. These
loads may act individually or in combination with one another.
Figure 24-6. Coefficient of variation of the estimated probability of failure versus number of simulation cycles.
Figure 24-7. Longitudinally stiffened panel.
4.2.1. Limit state analysis of stiffened panels. When the in-plane and lateral loads act in combination, the plate-stiffener becomes a beam-column. Here collapse is still the result of failure of the
flange, as it is in the case of a column, but the effect of bending moment Mo and deflection δ0 caused
by the presence of the lateral load must be considered. For the purpose of this discussion lateral de-
flections and eccentricity in the direction of the stiffeners are considered as positive. Because we can
have positive or negative bending moment due to the lateral load, either flange (the plate or the stiffener
flange) can be the failure flange and the failure can be either tensile or compressive. This would seem
to indicate that there are four possible collapse modes-either flange in tension or compression. In
actuality one of the modes, tensile failure of the plating, never occurs because of the neutral axis of
the combined plate-stiffener being so close to the plating. The other three modes represent possible
collapse mechanisms and are discussed briefly below. Much of the following discussion is after Chapter
14 of Hughes (1988), and the reader is directed there for further details.
4.2.1.1. MODE I: COMPRESSION YIELD OF STIFFENER FLANGE. Under the combination of axial
compression and negative bending the stiffener flange will be the compression flange. Collapse of the
panel occurs as a result of compressive failure of the stiffener flange either by the entire section (plate
and stiffener) reaching a full plastic moment Mp, or by buckling of the stiffeners in compression. When
there is a large amount of axial compressive stress σa, it directly increases the compressive stress in the
stiffener flange caused by the negative bending. This leads to early compressive yielding of
the stiffener and a delay in the plate yielding. The result is that the combined plate-stiffener is unable
to achieve a plastic hinge condition. Rather, the stiffener reaches its limit of stress absorption and
becomes ineffective in carrying the load. The section is effectively reduced to the plating alone, which
collapses shortly thereafter.
Figure 24-8 is an interaction diagram showing the collapse mechanisms for the panel under lateral
and in-plane loads. The vertical axis is the ratio of the bending moment from the lateral load Mo to the full
plastic moment Mp at collapse. The horizontal axis is the ratio of the collapse value of the applied in-plane
stress σa,u to the material yield stress σy. Because the in-plane load is usually much greater than
the lateral load, the analysis usually determines the level of in-plane stress needed for collapse
given a specified level of lateral bending moment. The lateral bending moment Mo and deflection δ0
Figure 24-8. Collapse mechanisms in a stiffened panel under lateral and in-plane loads.
are those for a simply supported beam experiencing a uniform lateral load. The curve from point A to
point B in Fig. 24-8 represents the Mode I failure mechanism. The equations representing the curve are
given as follows (Hughes, 1988):
(24-6)
(24-7)
ρ = (I/A)^(1/2)
where
Mo = maximum bending moment in a simply supported beam under a uniform lateral load
yf = distance from the centroidal axis of the cross-section to the midthickness of the stiffener flange
I = the moment of inertia of the plate-stiffener combination
σF = the failure stress level; for Mode I, σF = min{σys, σa,T}
σys = yield stress of the stiffener
σa,T = elastic tripping stress for the stiffener
However, because the stiffener is in compression, it is possible that tripping or flexural-torsional
buckling could occur. If it does, the ultimate in-plane stress σa,u will likely not reach the value indicated
by curve AB. The equations above could still be used to determine the ultimate applied in-plane stress,
but σys would have to be replaced by σa,T, the elastic tripping stress for the beam-column. The common
way to deal with this is to calculate σa,T and compare it to σys. The smaller of the two is then used in
Eqs. (24-6) and (24-7).
4.2.1.2. MODE II: COMPRESSION FAILURE OF PLATING. The combination of in-plane compression
and positive bending gives rise to the possibility of a Mode II failure mechanism. With small or moderate
lateral loads (Mo/Mp = 0.7 or less) collapse occurs due to compression failure of the plating. If the
plate were to remain perfectly elastic through the range of loading the analysis would be that for a
simple beam-column. However, for most welded plating the compressive collapse is a complex inelastic
process. The curve from points C to D in Fig. 24-8 represents the Mode II failure limit state.
Figure 24-9 is a typical curve representing the relationship between average strain εa and total applied
stress σpa for a welded plate. The curve shows that the relationship between εa and σpa becomes nonlinear
well before collapse. Plate failure is taken as the point on the curve where the plating has lost most of
its stiffness. Usually this is the point where the tangent to the curve has reached some lower limiting
value, and is represented in Fig. 24-9 as the value of the curve at a strain given by εult. Because it is
easier to deal with stress in the analysis procedure, two levels of stress corresponding to a strain of εult
are determined. The first is the actual value of the applied stress at failure, σpu. The second is the level
Figure 24-9. Applied stress versus average strain for welded steel plating (initial slope E; secant slope Es = TE).
of stress that would have been reached at a strain of εult if the plate had remained elastic. This value
of stress is identified as σF. The curve in Fig. 24-9 shows that the average value of stiffness of the
plate is significantly less than the elastic material stiffness. We can account for this by defining a secant
modulus Es = TE, where T is given by (Hughes, 1988)
(24-8)
where χ = 1 + 2.75/β², β is the plate slenderness ratio, given as β = (b/t)(σy/E)^(1/2), t is the plate thickness,
and b is the breadth of the panel.
In order to account for this inelastic effect on the ultimate strength of the stiffened panel, a transformed section for the beam-column is defined. This is accomplished by defining a new effective width
of plating to be used as one flange in the beam-column. The new transformed width is equal to the
original width multiplied by the secant modulus term of Eq. (24-8). Given the transformed plating
width, a whole new set of transformed section properties can be found. These transformed section
properties are used in the determination of the panel ultimate stress.
So far we have not accounted for the presence of transverse in-plane compression σay, or the possibility
of in-plane shearing stress τ. Both of these have an effect on the amount of applied in-plane
stress σa needed to cause collapse of the plating. They can be accounted for by using the following
equation:
σF = T σyp [1 − 3(τ/σyp)²]^(1/2) (1 − σay/σay,u)   (24-9)
where σyp is the yield stress of the plate material, and σay,u is the ultimate transverse stress found from
interaction of in-plane stresses (see Chapter 12 of Hughes, 1988).
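As a numerical sketch, the correction of Eq. (24-9) can be coded directly. The equation here is our reading of a garbled original (the shear term has the von Mises form), the argument names are ours, and T comes from Eq. (24-8), which is treated as an input.

```python
import math

def mode_ii_failure_stress(T, sigma_yp, tau, sigma_ay, sigma_ay_u):
    """Plate failure stress sigma_F reduced for in-plane shear tau and
    transverse compression sigma_ay (reconstructed Eq. 24-9):

        sigma_F = T * sigma_yp
                  * sqrt(1 - 3*(tau/sigma_yp)**2)   # shear knockdown
                  * (1 - sigma_ay/sigma_ay_u)       # transverse knockdown
    """
    shear_factor = math.sqrt(1.0 - 3.0 * (tau / sigma_yp) ** 2)
    transverse_factor = 1.0 - sigma_ay / sigma_ay_u
    return T * sigma_yp * shear_factor * transverse_factor
```

With no shear and no transverse compression the expression reduces to T σyp, the secant-modulus-corrected plate strength.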
The determination of the ultimate applied stress on the transformed section, (σa,tr)ult, follows closely
what was done for Mode I in Eqs. (24-6) and (24-7). However, a few terms have been modified for
the transformed section. The following equations provide the revised form (Hughes, 1988):
(24-10)
λ = (a/πρtr)(σF/E)^(1/2)   (24-11)
η = (δ0 + Δ)yp,tr/ρtr²,   ηp = δ0 yp,tr/ρtr²,   ρtr = (Itr/Atr)^(1/2)
where the terms not previously defined are as follows: Atr is the transformed area of the plate-stiffener
combination (= Tbt + As), Δ is the initial deformation of the plating, yp,tr is the distance from the centroidal
axis of the transformed cross-section to the midthickness of the plating, and Itr is the moment of inertia
of the transformed plate-stiffener combination.
The determination of the ultimate applied stress for the Mode II failure mechanism consists of the
following steps (Hughes, 1988).
2. From Eq. (24-8) calculate T and then use Eq. (24-9) to find σF.
3. For the transformed section with a plate flange width Tb, calculate all of the properties of the transformed
section (Atr, yp,tr, Itr, ρtr). Find the initial deformation of the plating Δ.
4. Using Eqs. (24-11), calculate the parameters λ, η, ηp, and μ.
5. Calculate the ultimate stress of the transformed beam-column, (σa,tr)ult, using Eq. (24-10).
6. Calculate the collapse load σa,u, using the following:
(24-12)
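The transformed-section step above can be sketched for an idealized cross-section built from three rectangles (plate flange, web, stiffener flange); this idealization and the argument names are ours, not Hughes' code.

```python
def transformed_section(b, t, T, hw, tw, bf, tf):
    """Section properties of a plate-stiffener combination whose plate
    flange width b is reduced to the transformed width T*b (Eq. 24-8).

    The section is idealized as three rectangles: plate, web, flange.
    Returns (A_tr, y_p_tr, I_tr, rho_tr), with y_p_tr measured from the
    section centroid to the plate midthickness.
    """
    # (width, height, centroid height above plate midthickness)
    rects = [
        (T * b, t, 0.0),                    # transformed plate flange
        (tw, hw, t / 2.0 + hw / 2.0),       # stiffener web
        (bf, tf, t / 2.0 + hw + tf / 2.0),  # stiffener flange
    ]
    A = sum(w * h for w, h, _ in rects)
    ybar = sum(w * h * y for w, h, y in rects) / A
    # parallel-axis theorem, each rectangle's own I about its centroid
    I = sum(w * h ** 3 / 12.0 + w * h * (y - ybar) ** 2 for w, h, y in rects)
    rho = (I / A) ** 0.5
    return A, ybar, I, rho
```

Setting T = 1 recovers the untransformed section, so the same routine serves both the Mode I analysis and the transformed Mode II analysis.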
4.2.1.3. MODE III: COMBINED FAILURE OF STIFFENER AND PLATING. As the positive bending mo-
ment Mo becomes large the Mode II description of collapse is no longer valid. When Mo is large there
will be a large tensile stress in the flange of the stiffener. This tensile stress is somewhat reduced by
the presence of the in-plane compressive stress, but if the bending moment is large enough there will
be tensile yielding in the stiffener flange as well as the compressive failure of the plating. The point
on the Mode II curve where this combined failure takes place is labeled point D in Fig. 24-8.
When only bending loads are present the collapse usually occurs at the value of the full plastic
moment for the section. This is because the plastic neutral axis of the combined section often lies within
the plating thickness. Point E on Fig. 24-8 indicates the location of full plastic moment with no in-
plane load. The straight line between points D and E represents the failure line for Mode III collapse.
The actual collapse occurs slightly above this line, but the complicated interaction between stiffener
tensile yielding and the compressive collapse of the plating makes this problem too difficult to calculate
exactly with any efficiency.
To solve for the failure load, the equation for the line DE needs to be determined. This is accom-
plished by first determining the initial flange yield point D. The line shown in Fig. 24-8 starting at
point F and running through point D is used for this purpose. It is the line representing the first yielding
of the stiffener flange in tension for a given applied moment Mo. The intersection of this curve with
the Mode II failure curve is found through an iterative solution to define point D. Once point D is
known, an equation for line DE can easily be developed.
Because Mode III collapse is rare in most ship type structures, and because of the scope of the
material in this chapter, a full presentation of the procedure to determine the Mode III collapse stress
is not provided here. Rather, the interested reader is invited to read Chapter 14 of Hughes (1988) for
a full description of the solution procedure.
4.2.2. Reliability assessment of a longitudinally stiffened panel. A sample analysis of the prob-
ability of failure of a notional warship deck structure is provided to demonstrate the limit state conditions
discussed in the previous sections. Table 24-6 provides the statistical information on the material and
geometry variables. The reliability assessment is performed using an advanced second-moment method
and direct Monte Carlo simulation for two loading cases. Each of the loading cases represents a different
set of possible loading assumptions and combinations.
A computer program developed by Nikolaidis et al. (1992) was used to perform the reliability
assessment. The program uses an advanced second-moment reliability method to analyze the perform-
ance functions for the three possible panel failure modes. The interaction diagram shown in Fig. 24-8
is the performance function for the three failure modes. If the stress state falls inside the interaction
diagram, the structure survives. If it falls outside the diagram, the structure fails.
The performance function algorithms were adapted from the FORTRAN code used in MAESTRO,
the ship structural optimization program by Hughes (1988). One of the interesting parts of the work by
Nikolaidis et al. (1992) is that the program developed for the reliability analysis is capable of taking
any performance function and doing a reliability assessment. The performance function code needs only
to have a subroutine written to interface it with the reliability analysis code. This modularization of the
codes allows for rapid incorporation of almost any type of performance function.
The code taken from MAESTRO has an important advantage for use in reliability assessment. Be-
cause the code was written for structural optimization, partial derivatives of the performance function
with respect to each variable can be calculated. These partial derivatives are used in MAESTRO to
determine the direction and magnitude of changes needed in the variables at each iteration during
optimization. The partial derivatives are used by Nikolaidis et al. (1992) to give an indication of which
variables are important in the reliability analysis and where additional effort should be made in reducing
uncertainty.
For all of the loading cases discussed in this section, the mean values of the transverse in-plane
compressive stress σay and the in-plane shearing stress τxy are assumed to be functions of the mean
value of the axial in-plane stress σax. This is a reasonable assumption when one considers that the source
of the stress is the same set of waves and that the reliability analysis is of a localized region of the
ship. In each of the following cases the ratio of the mean value of the longitudinal in-plane stress to the mean
values of the other two stresses will be
(24-13)
In general, each load case can have a different level of correlation among the random variables. To
ensure that the reader understands which random variables are correlated and which are not we provide
the following definition for the types of random variables used in the example cases. The three stresses
in the panel are made up of load effects from each of the three load components. The load components
are still water, wave, and whipping. The stress types are axial, transverse, and shear. The load effects
from each of the load components can be combined as shown in Eq. (24-14) to produce the three
Table 24-6. Geometry and Material Variables for Stiffened Panel of a Notional Warship Deck Structure
Variable   Mean   Coefficient of variation   Distribution
Geometry variables
Panel span   96 in.   N/Aᵃ   Deterministic
Panel width   71 in.   N/A   Deterministic
Plate thickness   0.2188 in.   N/A   Deterministic
Web height   4.72 in.   N/A   Deterministic
Web thickness   0.125 in.   N/A   Deterministic
Flange breadth   4.20 in.   N/A   Deterministic
Flange thickness   0.220 in.   N/A   Deterministic
~ and a   1.0   0.048   Normal
Material variables
σyp and σys   88,000 psi   0.05   Normal
νp and νs   0.3   N/A   Deterministic
Ep and Es   30 × 10⁶ psi   0.05   Normal
ᵃN/A, not applicable.
stresses:
σax = σax,still water + σax,wave + σax,whipping
σay = σay,still water + σay,wave + σay,whipping   (24-14)
τxy = τxy,still water + τxy,wave + τxy,whipping
In Eq. (24-14), σax, σay, and τxy are the combined axial, transverse, and shear stresses. Subscripts
wave, whipping, and still water specify the load component that causes the stress. Equation (24-14)
gives rise to nine random variables. The three stresses from the still-water load are considered to be
deterministic variables.
If there is perfect correlation among the load component random variables, then the load need only
be represented by three random variables, σax, σay, and τxy. If there is perfect correlation among the
stress component random variables, the load can be represented by only the first row in Eq. (24-14).
For the two cases discussed in this example, perfect correlation among the load components is
assumed. Equation (24-13) is still used, and it should not be construed as assuming perfect correlation
among the stress components. All it says is that the mean values of the components are related; the
components are still random variables. For other cases the reader is referred to Nikolaidis et al. (1992).
The hydrostatic lateral load induces a pressure that is also a random variable; it is identified as
the pressure random variable. Case 1 is taken from the part of the load spectrum where the vessel was
not experiencing green seas on its deck; the pressure is therefore zero. In case 2, the pressure is taken from
that part of the load spectrum where green seas on deck are expected. Table 24-7 shows the statistical
properties of the load variables for case 1 and Table 24-8 does the same for case 2.
The normal stresses in the axial and transverse directions and the shear stress make up three of the
load components. The stresses were determined from a finite element analysis of the full ship. The finite
element analysis runs were in effect the stress transfer functions that changed the input bending moment
to output stress at the desired locations. The long-term probability distributions of the wave and whip-
ping bending moments were calculated on the basis of the procedure described by Sikora et al. (1983)
(see Section 3.2.1). The wave bending moment was assumed to follow an exponential distribution and
the whipping bending moment a truncated Weibull distribution. The parameters for the Weibull distri-
bution were determined from available data from at-sea measurements and model tests.
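Sampling the combined axial stress in the spirit of Eq. (24-14) can be sketched as follows: the still-water stress is deterministic, the wave bending moment exponential, and the whipping moment Weibull (its support is already nonnegative), with a constant standing in for the finite element stress transfer function. All numbers and names are illustrative placeholders, not the fitted Sikora et al. (1983) parameters.

```python
import random

def sample_axial_stress(n, c_stress=0.9, sigma_sw=3.6, seed=3):
    """Sample combined axial stress: still water + wave + whipping.

    Illustrative model: wave BM ~ Exponential(mean 6.0), whipping
    BM ~ Weibull(scale 2.0, shape 1.3); c_stress converts bending
    moment to deck stress (a stand-in for the FE transfer function),
    and sigma_sw is the deterministic still-water stress.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        m_wave = rng.expovariate(1.0 / 6.0)     # exponential wave moment
        m_whip = rng.weibullvariate(2.0, 1.3)   # Weibull whipping moment
        samples.append(sigma_sw + c_stress * (m_wave + m_whip))
    return samples
```

Samples generated this way feed either the second-moment method (through their fitted moments) or a direct Monte Carlo run.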
As noted before, the probability of failure was determined by an advanced second-moment method.
For case 1 the probability of failure was determined to be 0.051. This corresponds to a safety index
β* of 1.64. To verify these results, a direct Monte Carlo simulation of the same problem was conducted.
Using 10,000 simulation cycles, a probability of failure of 0.056 was found. The coefficient of variation
of this estimate was 0.04.
The failure mode was determined to be Mode II, and the most important variables in this particular
case were the axial in-plane stress and the yield stress of the plating. Their relative importance can be
determined by the magnitude of the respective directional cosines of the variables at the design point
on the failure surface (limit state surface). Here the directional cosine of the in-plane stress was 0.943,
and the directional cosine of the yield stress was 0.303. Directional cosines of the other variables were
all less than 0.10.
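The directional cosines mentioned above are simply the components of the normalized limit-state gradient at the design point in standard normal space; a minimal sketch (the gradient values below are hypothetical):

```python
import math

def directional_cosines(grad_g):
    """Directional cosines (alpha vector) at the design point: the
    normalized gradient of the limit-state function in standard normal
    space. Their squares sum to one, so the magnitude of each cosine
    ranks the importance of the corresponding random variable.
    """
    norm = math.sqrt(sum(g * g for g in grad_g))
    return [g / norm for g in grad_g]
```

A pair of cosines like the 0.943 and 0.303 quoted above accounts for 0.943² + 0.303² ≈ 0.98 of the total, leaving little importance to the remaining variables.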
A similar analysis was performed for case 2, and the collapse mechanism was again a Mode II
type with a probability of failure of 0.050. This corresponds to a safety index β* of 1.63. The Mode
II failure mechanism was expected because of the presence of the lateral load.
4.2.3. Discussion of the reliability assessment approach. Notice that the probability of failure is
nearly the same for both case 1 and case 2. However, the mean value of the axial in-plane stress for
case 2 is about 18% lower than in case 1. This confirms the expected result that the presence of the
lateral load significantly reduces the in-plane load capacity of the panel. That is, the same probability
of failure is achieved at a much lower level of in-plane stress.
The second-moment method worked well in this application, primarily because the limit state func-
tions are fairly smooth and the transition from one failure mode to the other is well defined. The
computer algorithm actually calculates the failure stress for each of the three failure modes, but performs
the reliability assessment only on the one with the lowest applied axial stress.
The direct Monte Carlo simulation confirmed the results of the second-moment reliability method.
The direct simulation was adequate for use in these cases because the probability of failure is relatively
high and a large number of simulation cycles is not needed to obtain a sufficient number of failures to
develop an estimate.
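This adequacy argument can be quantified with the standard binomial result for a direct Monte Carlo estimator (the formula is not stated in the chapter, but it reproduces the numbers quoted above): COV ≈ sqrt((1 − pf)/(n pf)).

```python
def mc_cov(pf, n):
    """Coefficient of variation of a direct Monte Carlo estimate of a
    failure probability pf from n cycles (binomial sampling result)."""
    return ((1.0 - pf) / (n * pf)) ** 0.5

def cycles_for_cov(pf, target_cov):
    """Simulation cycles needed to reach a target COV at probability pf."""
    return (1.0 - pf) / (pf * target_cov ** 2)
```

For instance, mc_cov(0.056, 10000) ≈ 0.041, matching the case 1 estimate, while cycles_for_cov(1e-7, 0.1) is on the order of 10⁹, which is why direct simulation was abandoned for the hull girder example.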
The main reason for including this example problem in this chapter is that it demonstrates that even
complex limit states can be used in a reliability assessment and design procedure. The limit state in
this example was defined by almost 400 lines of FORTRAN code. The reliability assessment procedure
uses the limit state code at each incremental step during the analysis.
operating on the U.S. west coast (Sipes et al., 1991). The inspections have found a large number of
fatigue cracks in the side shell and stiffeners on these ships. As a result there is considerable effort
underway to identify the cause and possible cures for this type of fatigue cracking.
There are a number of reasons for fatigue cracking of structural details, including poor workmanship
in fabrication, poor welding practices, and poor design. In many cases, poor design represents the root
cause of cracking and failure. The major benefit of a reliability-based fatigue design approach that
utilizes probabilistic analysis is that a designer will be able to design an engineering system that is both
efficient and reliable to the level specified. It can also take into account some variations in the quality
of the workmanship and welding. In this section one approach for reliability-based fatigue design is
discussed (Munse et al., 1983) and evaluated as to its usefulness in the design of ship structures.²
To understand better the discussion of the reliability-based fatigue design method, a few concepts
need to be clarified and some terminology introduced first.
4.3.1. Resistance curves. Many of the most promising methods for reliability-based fatigue design
use the available fatigue information (S-N information) for the strength side of the problem; here S =
stress range, and N = number of cycles to failure. The S-N curves usually represent a log-log linear
least-squares fit of the mean values of the constant amplitude fatigue tests for various stress ranges.
The equation of the S-N curve can be expressed as
N = C S_R^(-m)   (24-15)
where SR is the constant-amplitude stress range at N cycles to failure. The coefficients are m, the negative
slope of the curve, and C, the life intercept of the S-N curve.
The standard deviation of the fatigue life data can easily be found; however, the scatter of the data
about the mean fatigue line is not the only uncertainty involved in the S-N analysis. A measure of the
total uncertainty (coefficient of variation) in fatigue life, VR, is usually developed to include the uncertainty
in fatigue data, errors in the fatigue model, and any uncertainty in the individual stresses and stress
effects. Ang and Munse (1975) suggested that the total COV in terms of fatigue life could be given by
VR = (Vc² + m²Vs²)^(1/2)   (24-16)
where
Vc = COV due to uncertainty in the mean intercept of the regression line; includes effects of fabrication, workmanship, and uncertainty in slope
Vs = COV due to uncertainty in the equivalent stress range; includes effects of error in stress analysis
m = slope of the mean S-N regression line
4.3.2. Palmgren-Miner hypothesis and the equivalent stress range concept. For most real ma-
rine structures the loading does not take the form of a cyclic constant-amplitude applied stress. Rather,
the loading is a random sequence of various amplitudes and frequencies that do not repeat themselves.
This type of loading can best be expressed as a continuously distributed random variable SL. The
statistics of the variable SL are derived from recorded stress histories or estimated from wave records.
The results are usually expressed as a probability density function (PDF) of stress range ps(s) for each
²Other approaches for probabilistic fatigue analysis and design may be found in Chapter 7 of this book.
stress or wave height record. However, in order to use the S-N fatigue data, a relationship between the
characteristic value of the wave-induced random stress and the constant-amplitude stress of the S-N
curve is needed. This is accomplished by using the Palmgren-Miner hypothesis to find an equivalent
constant-amplitude stress for the random load distribution.
The Palmgren-Miner (P-M) cumulative damage hypothesis is based on the concept of strain energy.
It states that failure occurs when the total strain energy due to n cycles of variable amplitude loading
is equal to the total strain energy from N cycles of constant amplitude loading, where N is the number
of cycles to failure at constant-amplitude loading. This can be written in the following form:
D = Σ_{i=1}^{B} ni/Ni   (24-17)
where B is the number of stress range blocks, ni is the number of stress cycles in stress block i, Ni is
the number of cycles to failure at constant stress range i, and D is the damage ratio, usually taken as
one at failure.
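For a blocked loading history, Eq. (24-17) is a one-liner once the S-N curve supplies Ni; here we take Ni = C/si^m, the form implied by the integrand of Eq. (24-18). The block data in the usage note are made up for illustration.

```python
def miner_damage(blocks, C, m):
    """Palmgren-Miner damage ratio D = sum over blocks of n_i / N_i,
    with cycles to failure N_i = C / s_i**m from the log-log linear
    S-N curve. `blocks` is a list of (stress_range, n_cycles) pairs.
    Failure is predicted when D reaches about one.
    """
    return sum(n_i * s_i ** m / C for s_i, n_i in blocks)
```

With C = 10¹² and m = 3, the blocks (100, 5 × 10⁵) and (50, 4 × 10⁶) each contribute 0.5 to D, so that detail is predicted to be at failure.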
For a continuous probability distribution of the random stresses, Eq. (24-17) can be expressed as an
integration over the stress range s such that
D = ∫₀^∞ N ps(s)/(C/s^m) ds = (N/C) ∫₀^∞ s^m ps(s) ds   (24-18)
The term in the integral on the right-hand side of Eq. (24-18) is the mean or "expected value" of
the stress range raised to the mth power, E[s^m]. For a known distribution type this value is relatively
easily found. The expression for the total damage due to the random load can then be written as
D = (N/C) E[s^m]   (24-19)
where
N/C = 1/Se^m
If D is assumed to be equal to one at failure, as is often done, then Eq. (24-19) states that the
expected value of the random stress range raised to the mth power is equal to the constant-amplitude
stress range at N cycles to failure raised to the mth power. Thus an equivalent constant-amplitude stress
range can be found for a random variable-amplitude stress range from
Se = (E[s^m])^(1/m)   (24-20)
For m = 2 this would represent the RMS value of the random load distribution. In the more typical
case for steels, where m ≈ 3, the equivalent constant-amplitude stress range would be the root-mean-cubed
(RMC) value of the random load.
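The equivalent stress range just described is a one-line computation, with the m = 2 (RMS) and m = 3 (RMC) special cases checked on a toy sample:

```python
def equivalent_stress_range(samples, m):
    """Equivalent constant-amplitude stress range
    S_e = (E[s**m])**(1/m), estimated from a sample of stress ranges.
    m = 2 gives the RMS value; m = 3 the root-mean-cubed (RMC) value.
    """
    n = len(samples)
    return (sum(s ** m for s in samples) / n) ** (1.0 / m)
```

Because s^m is convex, Se grows with m, so the RMC equivalent stress of a random history exceeds its RMS value.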
4.3.3. Load or resistance transformation. The tools of probability theory as used in reliability-
based design apply only if the load and resistance terms in the performance function are expressed in
terms of the same basic quantities, that is, either in terms of stress or cycles to failure. In the case of
fatigue, the load curve (derived from the PSD of the seaway) is expressed in terms of stress range and
is distributed along the vertical axis. The strength or resistance is in terms of the number of cycles to
failure and is along the horizontal axis. Obviously, one of the two curves must be transformed. White
600 Applications in Ship Structures
and Ayyub (1987b) discussed a means to do a transformation of the resistance curve, which is the more
common practice. In the approach by Munse et al. (1983), which is discussed in the following sections,
a resistance transformation takes place. It is not done explicitly; rather it occurs as a part of the as-
sumptions made about relating the loading to the resistance.
4.3.4. Munse's method. The method discussed here is the result of a study by Munse et al. (1983)
that was funded by the Ship Structure Committee and is intended for application to ship structures. The
work is interesting not only for the recommended reliability-based fatigue design approach, but also for
the large amount of test data it contains for typical ship structural details.
The design approach is based on calculating a "design" stress range Srd for fatigue. This stress range
is the maximum peak-to-trough stress range expected at the point in question once under the most
severe sea state during the entire life of the structure. The design stress Srd must be less than or equal
to the nominal permissible stress permitted once during the life of the structure by the basic design
rules.
According to the Munse approach, the design stress range Srd is found using the following equation:
Srd = SN Rr ξ   (24-21)
where SN is the mean value of the constant-amplitude stress range at the design life Nd, Rr is a reliability
factor, and ξ is a random load factor.
The mean value of the constant-amplitude stress range SN is found by entering the S-N curve of the
structural detail of interest at the number of cycles Nd expected in the design life. The probabilistic
nature of the design method is introduced by the two factors in Eq. (24-21). In order to expound the
probabilistic basis for these factors, each is briefly discussed.
The reliability factor Rf is meant to account for uncertainties in the fatigue data, workmanship,
fabrication, use of the equivalent stress range concept, errors in the predicted load history, and errors
in the associated stress analysis. The factor comes from the assumption that fatigue life is a random
variable with a Weibull distribution and that the probability of survival through N loading cycles, PL(N),
is given by

    PL(N) = PL(N − 1)[1 − h(N)]                                (24-22)

where h(N) is the hazard function as defined by Ang and Tang (1983). In essence, Eq. (24-22) says
that the probability of survival through N cycles is equal to the probability of survival through N − 1
cycles multiplied by one minus the conditional probability of failure in the cycle from N − 1 to N,
given survival through N − 1 cycles. The advantage of the hazard function is that it is relatively easy to use
to derive a relationship between the design life Nd and the mean value of fatigue life n of a Weibull
distribution. It can be shown that

    n / Nd = Γ(1 + u) / [1 − LN(Nd)]^u = γL                    (24-23)

where Γ(·) is the gamma function, u = VR^1.08, VR is the total COV of resistance from Eq. (24-16), LN(Nd)
is the desired reliability level (the probability that the life n will meet or exceed the design fatigue life Nd),
and γL is a fatigue life factor.
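The fatigue life factor relationship of Eq. (24-23) can be checked numerically. In the sketch below the values of VR, LN(Nd), and Nd are illustrative only (they are not the chapter's example): γL is computed from the formula and compared against the sampled mean of a Weibull life distribution whose shape parameter is 1/u and whose scale is chosen so that the survival probability at Nd equals LN(Nd).

```python
import math
import random

# Illustrative inputs (not taken from the text):
V_R = 0.5      # total COV of resistance
L_N = 0.999    # desired reliability at the design life
N_d = 1.0e6    # design life in cycles (arbitrary for this check)

u = V_R ** 1.08                                     # Weibull-based exponent
gamma_L = math.gamma(1.0 + u) / (1.0 - L_N) ** u    # Eq. (24-23)

# Construct the matching Weibull life distribution, survival exp[-(N/w)^(1/u)],
# with scale w set so that P(life > N_d) = L_N, and estimate its mean.
shape = 1.0 / u
w = N_d / (-math.log(L_N)) ** u
random.seed(1)
mean_life = sum(random.weibullvariate(w, shape) for _ in range(200_000)) / 200_000

print(gamma_L)            # fatigue life factor
print(mean_life / N_d)    # should be close to gamma_L
```

Note that the formula uses 1 − LN while the Weibull construction uses −ln LN; for LN close to 1 the two agree to a small fraction of a percent, which is within the sampling noise of the check.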
It is interesting to note what the fatigue life factor actually represents. Equation (24-23) states that
the mean value of fatigue life to be used in the design process is equal to the desired design life Nd
multiplied by the fatigue life factor γL. The two most important factors in determining γL are the desired
reliability level LN(·) and the COV of the resistance VR. The result of Eq. (24-23) is, in effect, a shifting
of the Weibull distribution of fatigue life to the right such that the area under the PDF curve to the
right of Nd is equal to the desired level of reliability LN. The value of VR controls the shape of the
distribution and therefore influences the magnitude of the shift. Figure 24-10 illustrates how these factors
affect the mean value of fatigue life.
By combining Eq. (24-15) with the value of mean fatigue life from Eq. (24-23), the following
expression results:

    Sre = SN Rf                                                (24-24)

where SN = (C/Nd)^(1/m) and Rf = (1/γL)^(1/m).
It is apparent from Eq. (24-24) that the product of the first two terms of the Munse "design" Eq.
(24-21) is an equivalent constant-amplitude stress range. The effect of the reliability factor is to reduce
the equivalent stress range. The reduced equivalent stress range is equal to the stress range that would
be found by using the mean value of fatigue life n, rather than Nd, for calculating stress range from the S-N
curves. This is shown graphically in Fig. 24-10.
Thus the reliability factor contains a term that has the uncertainty (VR) of all of the factors that make
up the resistance, and a term that allows the designer or code writer to specify the desired level of
reliability [LN(Nd )]. The effect of increasing the uncertainty in resistance is a decrease in the reliability
factor and therefore a decrease in allowable stress. The same effect is achieved by increasing the desired
level of reliability.
The stress range developed so far by the design procedure is an equivalent constant-amplitude stress
range based only on the characteristics of the S-N curve. Equation (24-20) provides a way to relate this
equivalent stress Src to the loading expected. From that equation the equivalent stress is equal to the
mth root of the mth moment of the random applied stress (loading) distribution.

Figure 24-10. Graphical summary of Munse's method. [Log S versus log N: the S-N curve, the Weibull load distribution along the stress axis, the stress ranges Srd and Sre, and the lives Nd and n; not reproduced.]

Because structural elements are designed to extreme loadings, it would be convenient if a design relationship could be
introduced to relate the constant-amplitude equivalent stress range for the loading to the once-in-a-
lifetime extreme stress. Munse does this by introducing the following:

    ξ = Srd / E[S^m]^(1/m)                                     (24-25)

where Srd is the maximum stress range in a random loading expected only once in the lifetime of the
vessel, and ξ is the random load factor.
The random load factor represents the distance, along the vertical axis, between the equivalent stress
range for the loading Sre and the once-in-a-lifetime stress range Srd. The key to finding the distance is
to find the equivalent stress range in terms of the once-in-a-lifetime stress. Munse does this by utilizing
the definition of E[S^m]^(1/m) for the load distribution type, in this case a Weibull distribution:
    E[S^m]^(1/m) = Srd {−ln[Pf(Srd)]}^(−1/k) [Γ(m/k + 1)]^(1/m)        (24-26)

where Pf(Srd) is the probability that the once-in-a-lifetime stress range will be exceeded and k is the
Weibull shape parameter for the load distribution. All of the other terms in the equation are as defined
before. In the design of ship structures the number of load cycles during the life of a ship is generally
considered to be 10^8. Then the once-in-a-lifetime stress range is that stress range that appears once in
10^8 cycles. The probability of exceeding that stress range is thus 1/10^8, or 10^−8. Because the definition
of the random load factor has an Srd term in the numerator and it has the right-hand side of Eq. (24-
26) in the denominator, the Srd terms cancel. The random load factor is simply a function of the number
of load cycles expected in the lifetime, the Weibull shape parameter for the load, and the slope of the
S-N line. Munse et al. (1983) also provide a similar development of the random load factor for distri-
bution types other than the Weibull.
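Because the Srd terms cancel, the Weibull case leaves a closed form for the random load factor in terms of only the lifetime cycle count, the load shape parameter k, and the S-N slope m, as stated above. A short sketch of that closed form follows; the particular k and m values are illustrative, not the chapter's example.

```python
import math

def random_load_factor(k, m, n_cycles=1.0e8):
    """xi = {-ln[Pf(Srd)]}^(1/k) / Gamma(m/k + 1)^(1/m), with Pf = 1/n_cycles."""
    ln_term = math.log(n_cycles)   # equals -ln(1/n_cycles)
    return ln_term ** (1.0 / k) / math.gamma(m / k + 1.0) ** (1.0 / m)

# Sensitivity to the Weibull shape parameter k for an illustrative S-N slope m = 3:
print(random_load_factor(k=1.0, m=3.0))   # exponential long-term loading
print(random_load_factor(k=1.2, m=3.0))
```

Increasing k (a less skewed load spectrum) lowers ξ, because the once-in-a-lifetime stress then lies closer to the equivalent constant-amplitude stress.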
The use of Eq. (24-21) as the design equation is illustrated in Fig. 24-10. The SN term is just the
constant-amplitude stress range at the design life, Nd , from the S-N curve for the detail being investi-
gated. Multiplying that value by the reliability factor Rf in effect changes the design life to n and
produces the equivalent constant-amplitude stress range Sre. However, that value is usually not conven-
ient to use in design. It would be preferable to design a detail to survive the once-in-a-lifetime stress,
then check to see if it is also suitable in fatigue. For the given loading distribution and statistics, the
random load factor ~ can be found, which when multiplied by the equivalent constant-amplitude fatigue
stress range produces the once-in-a-lifetime stress Srd for that loading. The designer must then design
the detail to the smaller of the two controlling stresses; that is, the fatigue design stress Srd or the
nominal permissible stress permitted once in a life by the basic design rules or code.
4.3.5. Probabilistic fatigue design of structural details. In order to understand Munse's method
more clearly the results from the method for some common structural details are examined in this
section. Munse et al. (1983) provided a list of 72 of the most common structural fatigue details. This
list includes the values of m, C, and VN for each detail. Figure 24-11 shows some of the fatigue details
found most often in ships and Table 24-9 gives the S-N data for them.
The problem chosen is that of designing the nontight collar shown in Fig. 24-12. This is a typical
structural detail in tank vessels, and one that has been known to experience problems. It has been given
the reference number 3A11 by Jordan and Knight (1980). For this problem, it is desired that the level
of reliability be 99.9% for a design life Nd of 10^8 cycles. As can be seen from Fig. 24-12, the nontight
collar contains two weld details that need to be examined. The weld detail examined here is detail 25
(as shown in Fig. 24-11).
For this example, the fatigue life data are assumed to be lognormally distributed.

Figure 24-11. Common fatigue details in ship structures. (Source: Munse et al. [1983].) [Sketches of details 10(G), 20(S), 21(S), 25, 28(F), 30A (full penetration), and 33(S); not reproduced.]

The distribution
of the equivalent constant-amplitude stress ranges from the vessel load histories is also assumed to be
lognormal. The coefficient of variation (COV) of the load distribution is assumed to be 0.20. This
value is recommended by Kaplan et al. (1983) for long-term load distributions, and is used here for
the equivalent stress distribution. With the desired level of reliability equal to 99.9%, the probability of
failure is 1 × 10^−3 and β ≈ 3.09.
Table 24-9. S-N Data for Common Fatigue Details in Ship Structures [values of m, C, and VN for each detail; not reproduced]
Figure 24-12. Nontight collar detail 3A11 (weld details #25 and #33; sketch not reproduced).
From Table 24-9, for detail 25, m = 7.090 and log C = 15.79. With this information the following
calculations can be made:
To relate the equivalent stress to a design stress, the random load factor is required. For this example,
the long-term stress distribution is assumed to follow an exponential shape (Weibull with k = 1) and
the random load factor is found from Eqs. (24-25) and (24-26).
The value of 18.42 comes from finding the negative of the natural logarithm of the desired probability of failure at the design stress range, that is, {−ln[Pf(Srd)]}^(1/k). The value of 0.2928 comes from determining the gamma function of the S-N slope plus 1, raised to the negative 1/m power, that is, Γ[(m/k) + 1]^(−1/m). These two factors come from Eqs. (24-25) and (24-26).
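The two factors quoted above, and the mean stress range SN for detail 25, can be reproduced with a short script. This is a sketch only: the S-N relation is taken as N·S^m = C with C read as 10^15.79, and computing the final design stress Srd would additionally require the reliability factor Rf, whose COV input comes from Table 24-9 and is not reproduced here.

```python
import math

m = 7.090       # S-N slope for detail 25 (Table 24-9)
log_C = 15.79   # log10 of the S-N intercept C (Table 24-9)
N_d = 1.0e8     # design life, cycles
k = 1.0         # Weibull shape parameter (exponential long-term loading)

# Mean constant-amplitude stress range at the design life, from N * S^m = C:
S_N = (10.0 ** log_C / N_d) ** (1.0 / m)   # in ksi, about 12.6

# The two factors making up the random load factor xi:
ln_factor = -math.log(1.0 / N_d)                       # ~18.42, as in the text
gamma_factor = math.gamma(m / k + 1.0) ** (-1.0 / m)   # ~0.2928, as in the text
xi = ln_factor ** (1.0 / k) * gamma_factor

print(S_N, ln_factor, gamma_factor, xi)
```

The design stress then follows from Eq. (24-21) as Srd = SN Rf ξ once Rf has been evaluated from the fatigue-data COV.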
From Eq. (24-21) the design stress level can then be found as
Table 24-10 shows the results of similar calculations for the other details in Fig. 24-11. Of the two
fatigue details shown for the nontight collar of Fig. 24-12, detail 25 is the more potentially dangerous.
Detail 33 has an Srd of only 7.332 ksi.
In addition to design applications, probabilistic methods have been used for in-service inspection plan-
ning and life prediction. Chapter 6 of this book provides an example of the applications of probabilistic
fracture mechanics in in-service inspection planning. Effects of various types of nondestructive
examinations and inspection intervals on the failure probability of deck double plate welds of a container
ship are studied to aid in selecting an optimal in-service inspection strategy.
Chapter 18 of this book provides an example of life prediction of naval vessels. Large plastic-
deformation failure mode of hull plates is considered and an extreme-value modeling approach is used
for the life prediction. Decrease in the failure probability, and thus increase in the vessel life, due to
in-service inspections is demonstrated. Results for inspection intervals of 1 and 2 years are given.
Reliability analysis and reliability-based design of ship structures is an area that has come of age. More
and more engineering code development authorities are recognizing the role that reliability methods can
play in the design of structural systems. There are a number of ongoing projects in the merchant and
naval ship industry which are building the foundations for major revisions in the design of ships.
Although it is likely that the appearance of ship design rules may not change significantly, the underlying
analysis undoubtedly will.
This chapter has referred to work performed over the past 20 years in trying to bring reliability-
based design to ship structures. The three example problems cover a range of work of almost 10 years.
They also show the different types of problems that can be solved and different methods that can lead
to solutions. These examples should not be considered an all-inclusive set. Use of probabilistic methods
in in-service inspection planning and life prediction is also discussed briefly.
REFERENCES
ANG, A. H.-S., and W. H. MUNSE (1975). Practical reliability basis for structural fatigue. Meeting Reprint 2492,
ASCE Annual National Structural Engineering Conference. New York: American Society of Civil Engineers.
ANG, A. H.-S., and W. H. TANG (1983). Probability Concepts in Engineering Planning and Design, Vol. II: Decision,
Risk, and Reliability. New York: John Wiley & Sons.
ANTONIOU, A. C. (1977). A survey of cracks in tankers under repair. In: Proceedings of the International Symposium
on Practical Design in Shipbuilding (PRADS), Tokyo, Japan.
BHATTACHARYYA, R. (1978). Dynamics of Marine Vehicles. New York: John Wiley & Sons.
CHAKRABARTI, S. K. (1991). Strategies for Nonlinear Analysis of Marine Structures. Report-347. Washington,
D.C.: Ship Structure Committee.
DAIDOLA, J. C., and N. S. BASAR (1980). Probabilistic Structural Analysis of Ship Hull Longitudinal Strength.
Report-301. Washington, D.C.: Ship Structure Committee.
DALZELL, J. F. (1989). The ship in a seaway. In: Principles of Naval Architecture, E. Lewis, Ed. Vol. III, Chapter
VIII, Section 4. Arlington, Virginia: Society of Naval Architects and Marine Engineers, pp. 84-109.
FAULKNER, D. (1981). Semi-probabilistic approach to the design of marine structures. In: Proceedings of the
Extreme Loads Response Symposium. Arlington, Virginia: Society of Naval Architects and Marine Engineers,
pp. 213-230.
FAULKNER, D., and J. A. SADDEN (1975). Toward a unified approach to ship structural safety. Transactions of the
Royal Institute of Naval Architects 117.
FERRO, G., and A. E. MANSOUR (1985). Probabilistic analysis of the combined slamming and wave induced
responses. Journal of Ship Research 29(3):170-185.
FRANKLIN, P. and O. F. HUGHES (1992). An Approach to Conducting Timely Structural Fatigue Analysis of Large
Tankers. Report No. VPI-AOE-192. Blacksburg, Virginia: Aerospace and Ocean Engineering Department,
Virginia Polytechnic Institute and State University.
GUEDES SOARES, C. (1984). Probabilistic Models for Load Effects in Ship Structures. Report No. UR-84-38.
Trondheim, Norway: Department of Marine Technology, Norwegian Institute of Technology.
GUEDES SOARES, C., and T. MOAN (1982). Statistical analysis of stillwater bending moments and shear forces in
tankers, ore and bulk carriers. Norwegian Maritime Research 10(3):33-41.
GUEDES SOARES, C., and T. MOAN (1985). Uncertainty analysis and code calibration of the primary load effects
in ship structures. In Proceedings of the 4th International Conference on Structural Safety and Reliability
(ICOSSAR '85), Vol. 3. New York: International Association for Structural Safety and Reliability, pp.
501-512.
GUEDES SOARES, C., and T. MOAN (1988). Statistical analysis of still-water load effects in ship structures. Trans-
actions of the Society of Naval Architects and Marine Engineers 96:129-156.
HUGHES, O. F. (1988). Ship Structural Design. Jersey City, New Jersey: Society of Naval Architects and Marine
Engineers.
ITTC (International Towing Tank Conference) (1972). Technical decisions and recommendations of the seakeeping
committee. In: Proceedings of the International Towing Tank Conference, Berlin, Germany.
JORDAN, C. R., and C. S. COCHRAN (1978). In-Service Performance of Structural Details. Report-272. Washington,
D.C.: Ship Structure Committee.
JORDAN, C. R., and L. T. KNIGHT (1980). Further Survey of In-Service Performance of Structural Details. Report-
294. Washington, D.C.: Ship Structure Committee.
KAPLAN, P., and A. L. RAFF (1972). Evaluation and Verification of Computer Calculations of Wave-Induced Ship
Structural Loads. Report-229. Washington, D.C.: Ship Structure Committee.
KAPLAN, P., M. BENATAR, J. BENTSON, and T. A. ACHTARIDES (1983). Analysis and Assessment of Major Uncer-
tainties Associated with Ship Hull Ultimate Failure. Report-322. Washington, D.C.: Ship Structure
Committee.
MADSEN, H. O., S. KRENK, and N. C. LIND (1986). Methods of Structural Safety. Englewood Cliffs, New Jersey:
Prentice-Hall.
MANSOUR, A. E. (1972a). Probabilistic design concepts in ship structural safety and reliability. Transactions of the
Society of Naval Architects and Marine Engineers 80:1-25.
MANSOUR, A. E. (1972b). Methods of computing the probability of failure under extreme values of bending
moment. Journal of Ship Research 16(2):113-123.
MANSOUR, A. E. (1974). Approximate probabilistic method of calculating ship longitudinal strength. Journal of
Ship Research 18(3):203-213.
MANSOUR, A. E. (1981). Combining extreme environmental loads for reliability based design. In: Proceedings of
the Extreme Loads Response Symposium. Arlington, Virginia: Society of Naval Architects and Marine
Engineers, pp. 63-74.
MANSOUR, A. E. (1987). Extreme value distributions of wave loads and their application to marine structures. In:
Proceedings of the Marine Structural Reliability Symposium. Arlington, Virginia: Society of Naval Architects
and Marine Engineers, pp. 159-168.
MANSOUR, A. E. (1990). An Introduction to Structural Reliability Theory. Report-351. Washington, D.C.: Ship
Structure Committee.
MANSOUR, A. E., and D. FAULKNER (1972). On applying the statistical approach to extreme sea loads and ship
hull strength. Transactions of the Royal Institution of Naval Architects 114.
MANSOUR, A. E., and J. Lozow (1982). Stochastic theory of the slamming response of marine vehicles in random
seas. Journal of Ship Research 26(4):276-285.
MANSOUR, A. E., H. Y. JAN, C. I. ZIEGELMAN, Y. N. CHEN, and S. J. HARDING (1984). Implementation of reliability
methods to marine structures. Transactions of the Society of Naval Architects and Marine Engineers 92:1-
26.
MUNSE, W. H., T. W. WILBUR, M. L. TELLALIAN, K. NICOLL, and K. WILSON (1983). Fatigue Characterization
of Fabricated Ship Details for Design. Report-318. Washington, D.C.: Ship Structure Committee.
NIKOLAIDIS, E., and P. KAPLAN (1991). Uncertainties in Stress Analysis on Marine Structures. Report-363. Wash-
ington, D.C.: Ship Structure Committee.
NIKOLAIDIS, E., O. F. HUGHES, B. M. AYYUB, and G. J. WHITE (1992). Assessment of Reliability of Naval Vessels.
Washington, D.C.: Department of the Navy.
OCHI, M. K. (1978). Wave statistics for the design of ships and ocean structures. Transactions of the Society of
Naval Architects and Marine Engineers 86:41-69.
OCHI, M. K. (1979). Extreme values of waves and ship responses subject to the Markov chain condition. Journal
of Ship Research 23(3):188-197.
OCHI, M. K. (1981). Principles of extreme value statistics and their application. In: Proceedings of the Extreme
Loads Response Symposium. Arlington, Virginia: Society of Naval Architects and Marine Engineers, pp.
15-30.
OCHI, M. K., and L. E. MOTTER (1973). Prediction of slamming characteristics and hull responses for ship design.
Transactions of the Society of Naval Architects and Marine Engineers 81:144-176.
PIERSON, W. J., G. NEUMANN, and R. W. JAMES (1955). Practical Methods for Observing and Forecasting Ocean
Waves by Means of Wave Spectra and Statistics. Publication No. 603. U.S. Hydrographic Office.
PURCELL, E. S., S. J. ALLEN, and R. J. WALKER (1988). Structural analysis of the U.S. Coast Guard Island Class
patrol boat. Transactions of the Society of Naval Architects and Marine Engineers 96:1-23.
RAWSON, K. J., and E. C. TUPPER (1983). Basic Ship Theory, 3rd ed., Vol. 1. London and New York: Longman.
SIKORA, J. P., A. DINSENBACHER, and J. BEACH (1983). A method for estimating lifetime loads and fatigue lives
for SWATH and conventional monohull ships. Naval Engineers Journal 95(3):63-85.
SIPES, J. D., J. M. MACDONALD, M. R. BOWEN, H. P. COJEEN, B. SALERNO, J. C. MAXHAM, and J. BAXTER
(1991). Report of the Trans-Alaska Pipeline service tanker structural failure study. In: Proceedings of the
Maintenance, Inspection, and Monitoring Symposium. Arlington, Virginia: Society of Naval Architects and
Marine Engineers.
STIANSEN, S. G., A. E. MANSOUR, et al. (1979). Reliability methods in ship structures. In: Proceedings of the Royal
Institute of Naval Architects Spring Meeting. London: Royal Institute of Naval Architects, pp. 381-406.
WHITE, G. J., and B. M. AYYUB (1985). Reliability methods for ship structures. Naval Engineers Journal 97(4):
86-96.
WHITE, G. J., and B. M. AYYUB (1987a). Reliability-based fatigue design of ship structures. Naval Engineers
Journal 99(3):135-145.
WHITE, G. J., and B. M. AYYUB (1987b). Reliability-based design format for marine structures. Journal of Ship
Research 31(1):60-69.
WIERNICKI, C. J. (1986). Damage to ship plating due to wave impact loads. In: Proceedings of the Society of Naval
Architects and Marine Engineers Spring Meeting and STAR Symposium. Arlington, Virginia: Society of Naval
Architects and Marine Engineers, pp. 151-161.
25
APPLICATIONS IN OFFSHORE STRUCTURES
ROLF SKJONG
1. INTRODUCTION
Structural design for offshore structures is traditionally based on deterministic analysis. However,
factors such as fluctuations of loads, variability of material properties, and uncertainty in analysis models
all contribute to a generally small probability that the structure does not perform as intended. This small
probability is defined as the failure probability. Reliability, defined as the complement of the failure
probability, is a rational and consistent measure of safety.
Reliability methods deal with the uncertain nature of loads and resistance (strength), and lead to
assessment of the reliability. Reliability methods are based on analysis models for the structure in
conjunction with available information about load and resistance and their associated uncertainties. The
analysis models are usually imperfect, and the information about load and resistance is usually limited.
The reliability as assessed by reliability methods is therefore generally not a measure of a physical
property of the structure in its environment of actions, but rather a nominal measure of safety of the
structure, dependent on the model and the amount and quality of information. Correspondingly, the
estimated failure probability will also be dependent on the analysis model and the level of information,
and it can therefore usually not be interpreted as the frequency of occurrence of failure for that particular
type of structure.
Measuring the safety of a structure by its reliability makes the reliability a useful decision variable.
Fulfillment of a requirement to the reliability is then necessary in order to assure a sufficient safety
level in design. Such a requirement can either be derived by a cost minimization in a decision analysis,
or alternatively by requiring that the safety level (as resulting from the design by a reliability analysis)
be the same as the safety level resulting from current deterministic design practice. The latter approach
is based on the hypothesis that current design practice leads to acceptable levels of safety and economy.
In both cases the requirement to the reliability will be a target reliability, which will be dependent on
the analysis model as well as on the distribution assumptions made for representation of the uncertainties
in load and resistance.
The safety of a structure as measured by its reliability is normally a function of time, because of
failure modes such as fatigue and corrosion and as a result of the operation of the structure. Additionally,
in order to keep the structure at an appropriate safety level, some in-service inspection may also be
required. The initial reliability of the structure as it is put into service is computed on the basis of
design information. As in-service inspections (ISIs) are performed periodically during the service life
of the structure, Bayesian updating is used after each inspection to incorporate the lSI findings and
recompute the structural reliability.
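The updating step can be illustrated with a deliberately simplified model; the prior crack-size distribution, the probability-of-detection curve, and the critical size below are all hypothetical, chosen only to show the mechanics of conditioning the failure probability on the inspection outcome "no crack found."

```python
import math
import random

random.seed(2)

A_CRIT = 3.0     # hypothetical critical crack size, mm
POD_SCALE = 2.0  # hypothetical probability-of-detection scale, mm

def sample_crack():
    # Hypothetical prior: lognormal crack size, median 1 mm, log-std 0.5.
    return math.exp(random.gauss(0.0, 0.5))

def p_no_detect(a):
    # POD(a) = 1 - exp(-a / POD_SCALE): larger cracks are easier to find.
    return math.exp(-a / POD_SCALE)

n = 200_000
fail = no_det = fail_and_no_det = 0.0
for _ in range(n):
    a = sample_crack()
    w = p_no_detect(a)
    no_det += w
    if a > A_CRIT:
        fail += 1.0
        fail_and_no_det += w

prior_pf = fail / n                       # before the inspection
posterior_pf = fail_and_no_det / no_det   # given "no crack found"
print(prior_pf, posterior_pf)
```

A clean inspection shifts probability weight toward small cracks, so the updated failure probability drops; in a real application the events come from a fracture mechanics crack-growth model and the conditioning is evaluated with FORM/SORM rather than raw sampling.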
Reliability models are commonly classified into four levels (Madsen et al., 1986a).
Level I: Reliability methods that use only one "characteristic" value for each uncertain parameter, for example,
the allowable stress method. These methods calculate only the factor of safety and cannot compute the failure
probability
Level II: Reliability methods that use two values for each uncertain variable, that is, mean and variance,
supplemented by a measure for correlation between variables, for example, first-order second-moment (FOSM)
methods
Level III: Reliability methods that use a description of the joint distribution of the uncertain parameters, for
example, approximate analytical methods such as first- and second-order reliability methods (FORMs and
SORMs) and "in principle exact" methods such as simulation techniques
Level IV: Reliability methods that compare a structural prospect with reference prospects according to principles
of engineering economic analysis under certainty, considering costs and benefits, of construction, maintenance,
repair, and consequential costs of failure.
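As a minimal illustration of the level II idea, consider a linear margin g = R − S with independent normal resistance R and load effect S (the moment values below are illustrative only): the reliability index follows from the first two moments alone.

```python
import math

# Illustrative first and second moments (not from any code or standard):
mu_R, sigma_R = 300.0, 30.0   # resistance
mu_S, sigma_S = 180.0, 40.0   # load effect

# FOSM (level II) reliability index for the margin g = R - S:
beta = (mu_R - mu_S) / math.hypot(sigma_R, sigma_S)

# For normal R and S this beta maps exactly to a failure probability:
p_f = 0.5 * math.erfc(beta / math.sqrt(2.0))
print(beta, p_f)
```

Here β = 2.4 and Pf ≈ 8.2 × 10^−3. For non-normal variables or nonlinear g, a level II index is only nominal, which is what motivates the level III methods used in the applications discussed below.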
In this chapter, on the use of probabilistic structural mechanics within the offshore industry, only
applications that build integrated reliability models based on level III methods are presented.
2.1. Notations
2.2. Abbreviations
3. DEVELOPMENT PROJECTS
The development of technology and specific applications within the offshore industry has taken place
within a few large and many small Joint Industry Projects (JIPs). Joint Industry Projects, in which a
number of companies join forces to develop technologies within areas of common interests, are common
in the offshore industry. In the United States and the United Kingdom such projects are usually executed
at universities; in Norway they are usually executed within research institutions.
The developments have been focused within areas of particular interest. These include the devel-
opment of methods and tools to compute reliabilities of components and systems, applications using
fatigue analysis and fracture mechanics, soil mechanics and geotechnical applications, stochastic load
and response analysis, load combinations, structural systems reliability analysis, probability-based design
code development, applications in inspection and maintenance, and applications in requalification of
older offshore structures.
One of the earliest of these projects was conducted by the Construction Industry Research and
Information Association (UK) in 1976 (CIRIA, 1976). The report served as a reference document within
the offshore industry for a number of years.
One of the earliest software development projects started in 1977, on the basis of CIRIA (1976).
This led to an advanced level II program, including the first-order reliability method, for calculating
reliabilities (Heldor, 1979). This project was executed at Det Norske Veritas (DNV), sponsored by the
Norwegian Council for Scientific and Industrial Research (NTNF).
One major project that covered all the subjects listed above, and has been of interest to the offshore
industry, started at DNV in 1984 and was completed in 1992. This project, entitled Reliability of Marine
Structures (RMS@DNV), started as an internal DNV project; Saga Petroleum joined the project in
1986, and Statoil and Conoco joined in 1988. In the period 1984 to 1992 this project represented about
130 man-years of concentrated research effort on the development and application of reliability methods
for the offshore industry. The project had considerable success in the areas of methods on probabilistic
fatigue and fracture (including inspection and maintenance planning and load and response analysis)
and software (PROBAN [Tvedt, 1993] and PROFAST [Madsen et al., 1989]). The project has been
followed by a large number of new JIP projects that concentrate on development of methods, software,
and procedures for probabilistic design and code calibration. As the first classification society, DNV
has accepted the use of reliability methods as an alternative design procedure; this is described in DNV
Classification Note No. 30.6 (DNV, 1992).
Probabilistic structural mechanics for offshore applications has been one of the topics of the European
Community (EC) project BRITE (research program covering industrial technologies) and EURAM (re-
search program in advanced materials) from 1986 and of the BRITE/EURAM project from 1989. The
Reliability Methods for Design and Operation of Offshore Structures project at the Dutch Research
Institution (TNO, the Netherlands) had a number of European universities and engineering companies as
partners, and Elf as the offshore operator partner (end user). This project started in late 1986 and was
completed in late 1990. It was a relatively large project (about $2.5 million), and its objective was to
develop methods for modeling member behavior and calculating overall structural systems reliability,
and to apply the methods to inspection and maintenance planning. The structural systems reliability
analysis is based on the virtual distortion methods (Turner et al., 1988); otherwise the developed soft-
ware (RASOS) is based on technology known and implemented in other software from other projects.
A number of follow-up projects are underway within the BRITE/EURAM project.
The first joint industry project in the United States on offshore structures reliability was organized
by Amoco Production Company in 1974 (Moses, 1975; Stahl, 1975). This effort was supported by 23
petroleum companies, steel companies, and consulting firms. It provided the springboard for work
undertaken by the American Petroleum Institute (API, Dallas, Texas) culminating in the development
of the API load and resistance factor design (LRFD) code (API, 1993).
In cooperation with Stanford University (Stanford, California) and others Amoco Production Com-
pany initiated the JIP "Offshore Structural Systems Reliability" with 36 oil companies, engineering
consultants, and classification societies as sponsors. The project was completed in 1988 with a speci-
fication for a failure path structural systems reliability program (Bjerager and Cornell, 1988). This was
followed by RMS@DNV (Karamchandani et al., 1993b).
The project on reliability of marine structures at Stanford University (RMS@Stanford), with C. A.
Cornell as project manager, started in 1988 as an advanced graduate research program in marine struc-
tures reliability. The topics ranged over the spectrum from probability-based design codes through
structural systems reliability to random process analysis of loads and responses. The project is sponsored
on an annual basis, typically by about 12 oil companies and classification societies. It has opened an
extensive exchange of researchers between the United States and Europe. Examples of the earlier work
from Stanford University, preceding this project, are Cornell et al. (1984), Guenard (1984), and Guenard
and Cornell (1986); these formed the basis for RMS@Stanford.
Also at Stanford University, a number of other industry-sponsored research projects have been ex-
ecuted; for example, in the Reliability Analysis of Moored Marine Structures project, general methods
for analysis of combined environmental load effects and for calculation of statistics of nonlinear, non-
Gaussian waves, wave forces, and dynamic structural response were developed. The load combination
schemes are designed to include short- and long-term response to joint stochastic environmental loading.
The dynamic nonlinear analysis procedures are based on Hermite moment models (Winterstein, 1988)
and Markovian diffusion analysis.
The list of smaller JIP projects is extensive. Furthermore, if the projects internal to the oil companies,
classification societies, and engineering companies were included, the list would be extended further. It
is not possible to give an overview of all these projects, both because of their number and because of the
lack of publications from some of the projects. The methods are, furthermore, used commercially on a number
of offshore field development projects. Some of these commercial applications are briefly listed at the
Applications in Offshore Structures 613
end of this chapter. In general, probabilistic methods are used when the offshore industry is faced with
new challenges, such as a new environment (arctic, deep water), new material, or new design concepts.
In such cases existing deterministic design codes may be inadequate.
Modern structural reliability analysis methods and tools include a number of computational methods
for calculating distributions, reliabilities, sensitivities, and outcrossing rates of random processes. Some
of the methods have been developed for specific offshore applications, for example, the fast method for
calculating outcrossing rates of stochastic processes is used to model responses to wave load (Hagen
and Tvedt, 1991). Other reliability analysis techniques have been adapted, modified, or extended for
offshore industry applications. These methods include first- and second-order reliability methods
(FORMs/SORMs), Monte Carlo simulation, simulation with variance reduction (e.g., conditional ex-
pectation simulation, axis-orthogonal simulation, stratified simulation, Latin hypercube simulation), and
mean value FORMs. Many of these methods are described in Chapters 3 and 4 of this handbook.
Readers may also refer to Tvedt (1989, 1990), Madsen et al. (1986a), Bjerager (1989a), and Ditlevsen
and Madsen (1990).
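For a linear limit state in independent normal variables the FORM result is exact, which makes a compact sanity check possible. The sketch below is illustrative only; the resistance and load statistics are invented for the example:

```python
import math

# FORM for a linear limit state g = R - S with independent normal resistance R
# and load S; in this special case the FORM reliability index is exact:
#   beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2),  P_f = Phi(-beta).
mu_R, sigma_R = 20.0, 2.0   # hypothetical resistance statistics
mu_S, sigma_S = 10.0, 3.0   # hypothetical load statistics

beta = (mu_R - mu_S) / math.hypot(sigma_R, sigma_S)
p_f = 0.5 * math.erfc(beta / math.sqrt(2.0))  # standard normal CDF at -beta

print(f"beta = {beta:.3f}, P_f = {p_f:.2e}")
```

For nonlinear limit states the design point must be found by an optimizer, which is where the convergence caveats for discontinuous g(x) mentioned below come in.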
Most of the offshore applications today are covered by the use of FORMs. Other methods are,
however, used to verify the FORM results. In other situations, such as those involving discontinuous
limit states g(x), one may have to use methods that do not require the derivative of g(x) to be continuous
to give convergence of the optimizer that finds the design point (Madsen et al., 1986a; Tvedt, 1989).
In addition to solving reliability problems with a single limit state it is also necessary in many
offshore applications to solve systems of unions of events (e.g., a tether in a tension leg), intersections
of events (e.g., failure if a number of anchor lines fail), and unions of intersections.
One of the most popular applications of probabilistic structural mechanics is for inspection and
maintenance planning. For this purpose it is necessary to calculate conditional probabilities, for example,
probability of fatigue failure conditioned on a crack of a given uncertain size found, or probability of
fatigue failure conditioned on no crack found in a number of inspections with a certain inspection method.
In the first case it is required to calculate

$$P[g(\mathbf{x}) < 0 \mid h(\mathbf{x}, a_D) = 0] = \frac{\dfrac{\partial}{\partial \epsilon} P[g(\mathbf{x}) < 0 \cap h(\mathbf{x}, a_D + \epsilon) < 0]}{\dfrac{\partial}{\partial \epsilon} P[h(\mathbf{x}, a_D + \epsilon) < 0]}\,\Bigg|_{\epsilon=0} \qquad (25\text{-}1)$$
where h(x, a_D + ε) is the event margin corresponding to detection of a crack a_D; a_D may be stochastic
due to measurement uncertainties, and g(x) is the limit state function corresponding to failure. This
means that the probabilities can be calculated as the partial sensitivity factors of a parallel system with
respect to the observed quantity aD, the detected crack size, which can be an uncertain quantity (Tvedt,
1989; Ditlevsen and Madsen, 1990).
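The conditional probability of this type can be checked numerically by approximating the equality event h(x) = 0 with a narrow band |h(x)| < δ. A minimal Monte Carlo sketch, with an invented limit state and event margin chosen so the exact answer is known:

```python
import math, random

random.seed(1)

# Toy example (invented): x = (x1, x2) independent standard normal,
#   failure event:   g(x) = 3 - x1 - x2 < 0
#   detection event: h(x) = 1 - x1 = 0   (equality conditioning event)
# The conditioning h(x) = 0 is approximated by the narrow band |h(x)| < delta.
# Since h = 0 fixes x1 = 1, the exact answer is P[x2 > 2] = Phi(-2).
delta, n = 0.05, 1_000_000
in_band = in_band_and_failed = 0
for _ in range(n):
    x1, x2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    if abs(1.0 - x1) < delta:
        in_band += 1
        if 3.0 - x1 - x2 < 0.0:
            in_band_and_failed += 1

p_cond = in_band_and_failed / in_band
exact = 0.5 * math.erfc(2.0 / math.sqrt(2.0))  # Phi(-2), about 0.0228
print(p_cond, exact)
```

In production software the conditioning is instead evaluated as the ratio of parametric sensitivity factors of a parallel-system probability, computed by FORM/SORM; the banded simulation above is only a brute-force cross-check.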
In the second case it is required to calculate the conditional probability

$$P[g(\mathbf{x}) < 0 \mid h(\mathbf{x}, a_D) > 0] = \frac{P[g(\mathbf{x}) < 0 \cap h(\mathbf{x}, a_D) > 0]}{P[h(\mathbf{x}, a_D) > 0]} \qquad (25\text{-}2)$$

This means that the conditional probability can be calculated by any software that can handle parallel
systems, if due considerations are given to the correlation between the variables in the failure event
formulation g(x) < 0 and the inspection event h(x) < 0.
In offshore applications it is also required that the applied methods and software be able to calculate
outcrossing rates of random processes in an efficient manner (Tvedt, 1990; Madsen, 1990; Rice, 1954;
Der Kiureghian and Liu, 1986; Belyaev, 1968). The cumulative distribution F_maxY(y) of the maximum
value in a given reference period [0, T] of a combination of time-dependent loads can be expressed in
terms of the mean outcrossing rate of a stochastic load vector process from a safe region. Let the
individual loads be given as the components of the load vector process X(t), and let Y(t) = Y(X(t), t)
be in general a nonlinear, explicitly time-dependent combination of the load processes. The distribution
of Y is then approximately given as

$$F_{\max Y}(y) \approx P_0 \exp\!\left(-\int_0^T \nu(t)\,dt\right) \qquad (25\text{-}3)$$

in which P_0 = P(Y(0) < y) and ν(t) is the mean upcrossing rate of the process Y(t) through the level y
at time t, or, equivalently, the mean zero downcrossing rate of the process g(t) = g(X(t), t) = y − Y(t).
For a sum process Y(t) = Σ_i X_i(t), the g function is linear.
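For a stationary zero-mean Gaussian response the ingredients of this outcrossing approximation take a simple closed form: Rice's level-y upcrossing rate is ν(y) = ν₀ exp(−y²/2σ²), with ν₀ the mean zero-upcrossing rate. A sketch with illustrative numbers:

```python
import math

# Extreme-value distribution from the outcrossing-rate approximation: for a
# stationary zero-mean Gaussian response with standard deviation sigma and
# mean zero-upcrossing rate nu0, the level-y upcrossing rate is
#   nu(y) = nu0 * exp(-y**2 / (2*sigma**2))
# and the maximum over [0, T] has F_max(y) ~ exp(-nu(y)*T) for high levels.
sigma, nu0 = 1.0, 0.1       # response std and zero-upcrossing rate [1/s] (illustrative)
T = 3.0 * 3600.0            # 3-hour sea state [s]

def f_max(y):
    return math.exp(-nu0 * math.exp(-y * y / (2.0 * sigma * sigma)) * T)

# characteristic largest value: the level crossed once on average, nu(y)*T = 1
y_char = sigma * math.sqrt(2.0 * math.log(nu0 * T))
print(f"y_char = {y_char:.2f}*sigma, F_max(y_char) = {f_max(y_char):.3f}")
```

At the characteristic level the approximation gives F_max = exp(−1) by construction, which is a convenient self-check.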
The rate ν(t) can be calculated as a parallel system parametric sensitivity factor (Hagen and Tvedt,
1991; Madsen, 1990).
$$\nu(t) = \frac{\partial}{\partial \epsilon} P[g(t) < 0 \cap g(t) + \dot g(t)\epsilon < 0]\Big|_{\epsilon=0} \qquad (25\text{-}4)$$

in which

$$\dot g(t) = \dot g(\mathbf{X}(t), \dot{\mathbf{X}}(t), t) = \nabla g(t)^{\mathsf T} \dot{\mathbf{X}}(t) + \frac{\partial g(t)}{\partial t} \qquad (25\text{-}5)$$
Equation (25-4) can be determined using standard methods from time-independent reliability theory
(FORM, SORM, directional simulation), and ġ can be calculated analytically or numerically. The process
can be a differentiable Gaussian, non-Gaussian, stationary, or nonstationary process. Also, the failure
surface may be explicitly time dependent.
Equations (25-4) and (25-5) require the joint distribution of X(t) and Ẋ(t) at time t. An approximation
for the joint distribution in terms of marginal distributions and the correlation is given by the Nataf
model.
Alternatively, the rate ν(t) can be calculated from Rice's formula (Rice, 1954) as

$$\nu = \int_{\dot y > 0} \dot y\, f_{Y\dot Y}(y, \dot y, t)\, d\dot y \qquad (25\text{-}6)$$

or by the generalized Rice's formula (Belyaev, 1968; Belyaev and Nosko, 1969) as

$$\nu = \int_{\partial g} \int_{\dot x_n > \partial \dot g_n} (\dot x_n - \partial \dot g_n)\, f_{\mathbf{X}\dot X_n}(\mathbf{x}, \dot x_n)\, d\dot x_n\, d(\partial g) \qquad (25\text{-}7)$$

where f_{YẎ}(y, ẏ, t) is the joint probability density function of Y(t) and Ẏ(t), ẋ_n(t) and ∂ġ_n are, respectively,
the velocity components of the process and the failure surface in the direction of the outward
normal n = n(x) at time t, and f_{XẊn}(x, ẋ_n) is the joint probability density of X(t) and Ẋ_n(t).
is used.
The wave spectral density is chosen as a one-dimensional spectrum multiplied by a frequency-
independent energy spreading function to account for the short-crestedness of the sea. The one-dimensional
spectrum is assumed to be a gamma spectrum:
$$S_{\eta\eta}(\omega) = \alpha\, \omega^{-\xi} \exp\!\left(-\beta\, \omega^{-\psi}\right) \qquad (\omega > 0) \qquad (25\text{-}9)$$
The spectral moments of order zero and two are uniquely given by the significant wave height and the
average period as

$$m_0 = \frac{H_s^2}{16} \qquad (25\text{-}10)$$

$$m_2 = \left(\frac{2\pi}{\bar T}\right)^2 m_0 \qquad (25\text{-}11)$$
The values (ξ, ψ) = (5, 4) correspond to the International Ship Structures Congress (ISSC) spectrum
(ISSC, 1964), and the value ξ = 5 is determined from the asymptotic behavior for large ω. The parameter
ψ can be used to randomize the spectral bandwidth. A common bandwidth parameter is δ, which is
defined in terms of the three lowest-order spectral moments. For ξ = 5 the result becomes

$$\delta = \frac{\Gamma(3/\psi)}{\sqrt{\Gamma(2/\psi)\,\Gamma(4/\psi)}} \qquad (25\text{-}12)$$

where Γ(·) denotes the gamma function. For the ISSC spectrum δ = 0.92, and δ increases for increasing
values of the exponent ψ.
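Assuming the bandwidth parameter takes the form δ = Γ(3/ψ)/√(Γ(2/ψ)Γ(4/ψ)) for ξ = 5 (consistent with the quoted ISSC value of 0.92), its dependence on ψ can be tabulated directly:

```python
import math

# Bandwidth parameter of the gamma spectrum for xi = 5 as a function of the
# exponent psi (assumed form, consistent with delta = 0.92 for the ISSC case):
#   delta = Gamma(3/psi) / sqrt(Gamma(2/psi) * Gamma(4/psi))
def bandwidth(psi):
    g = math.gamma
    return g(3.0 / psi) / math.sqrt(g(2.0 / psi) * g(4.0 / psi))

print(round(bandwidth(4.0), 2))         # ISSC spectrum (psi = 4) -> 0.92
print(bandwidth(6.0) > bandwidth(4.0))  # delta increases with psi -> True
```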
The short-crestedness of the sea is obtained by superposing long-crested waves from different direc-
tions rather than from one direction only. The fraction of the total wave energy assigned to waves from
each individual direction is determined from the wave energy spreading function. In practice, a finite
number of elementary wave directions is selected and finite weights are obtained by a discretization of
the wave energy spreading function. When the spreading function is taken as independent of the one-
dimensional wave spectrum, it may be taken as, for 19 - 901< C1T,
1 f(N/2 + I} N
w(6, 60} = 2cv1Tf(N/2+ 1/2} cos [(6 - 6012c] (25-13)
where 9 is the elementary wave direction and 90 is the main wave direction. Values of N = 2 and c =
0.5 are often chosen in deterministic design procedures. For N - 00 or c - 0, the limit of a long-
crested wave is approached. The knowledge about wave energy spreading for sea waves is limited. In
the probabilistic analysis the exponent N and possibly also c are treated as random variables to allow
for the incorporation of uncertainties in these variables (Haver, 1990).
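The discretization described above can be sketched as follows, assuming the cosine-power spreading function w(θ, θ₀) = (1/(2c√π)) Γ(N/2+1)/Γ(N/2+1/2) cosᴺ[(θ−θ₀)/(2c)]; the number of elementary directions is arbitrary, and the discrete weights must recover the total wave energy:

```python
import math

# Midpoint discretization of the cosine-power wave energy spreading function
# over |theta - theta0| < c*pi into a finite set of elementary wave directions.
def direction_weights(n_dirs=9, N=2.0, c=0.5, theta0=0.0):
    coeff = math.gamma(N / 2.0 + 1.0) \
            / (2.0 * c * math.sqrt(math.pi) * math.gamma(N / 2.0 + 0.5))
    d = 2.0 * c * math.pi / n_dirs                     # direction bin width
    thetas = [theta0 - c * math.pi + (i + 0.5) * d for i in range(n_dirs)]
    weights = [coeff * math.cos((t - theta0) / (2.0 * c)) ** N * d for t in thetas]
    return thetas, weights

thetas, weights = direction_weights()
print(sum(weights))  # the discrete weights recover the total wave energy (~1)
```

With N = 2 and c = 0.5 the function reduces to (2/π)cos²(θ − θ₀), and the midpoint weights sum to unity essentially exactly.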
(25-14)
(25-15)
where the asterisk denotes a complex conjugate. The analysis aims at computing correct values for the
response variance and zero crossing frequency, and an iterative procedure may be necessary. Each
response quantity is a random process that in general is nonnormal due to the nonlinearities.
The uncertainty in the global response arises from uncertainty in mass, damping, and stiffness prop-
erties. These properties are modeled as random variables, and the partial derivatives of the response with
respect to these parameters are determined numerically.
$$\sigma = \sum_{i=1}^{m} C_i \frac{F_i}{A_i} \qquad (25\text{-}16)$$
Here F_i is a sectional force or moment, A_i is the corresponding sectional constant (e.g., area or section
modulus), and m is the total number of degrees of freedom in all bracing member ends and in the two
chord ends of the tubular joint. C_i is the geometric stress concentration factor (SCF) for the ith sectional
force. Geometric stress amplification arises from the gross geometrical configuration including the weld.
An additional notch stress amplification arises from the local geometry at the weld toe. The notch stress
concentration varies randomly around the weld and from weld to weld. The notch stress concentration
is not predicted but its scatter is reflected as a part of the scatter in the fatigue strength test results. The
geometric stress amplification is computed by a linear elastic analysis of a detailed finite element model
of the joint (see Fig. 25-1). In some cases semiempirical formulas are applied instead of the detailed
linear elastic analysis and only a subset of the m sectional forces is considered, usually the two bending
moments and the axial force in the brace. Because of such simplifications and because of uncertainty
in the fabrication, the applied stress concentration factors may be uncertain to a large degree and this
is reflected in the uncertainty model.
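The hot spot stress itself is a plain SCF-weighted sum of sectional force contributions, σ = Σᵢ Cᵢ Fᵢ/Aᵢ. A toy example with invented SCFs and sectional properties (an axial force and two bending moments in a brace):

```python
# Hot spot stress as the SCF-weighted sum of sectional force contributions:
#   sigma = sum_i C_i * F_i / A_i.   All values below are invented.
C = [2.5, 3.1, 2.8]            # geometric stress concentration factors (assumed)
F = [800.0e3, 40.0e6, 25.0e6]  # axial force [N], bending moments [N*mm]
A = [25.0e3, 4.0e6, 4.0e6]     # cross-section area [mm^2], section moduli [mm^3]

sigma_hot_spot = sum(c * f / a for c, f, a in zip(C, F, A))
print(f"hot spot stress = {sigma_hot_spot:.1f} N/mm^2")  # -> 128.5 N/mm^2
```

In a probabilistic analysis each Cᵢ would be a random variable reflecting the uncertainty discussed above.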
The spectral density of the hot spot stress follows directly from Eqs. (25-15) and (25-16):
$$W_\sigma(\omega) = W_\eta(\omega) \sum_{i=1}^{m} \sum_{j=1}^{m} \frac{C_i C_j}{A_i A_j}\, H_{F_i}(\omega)\, H^{*}_{F_j}(\omega) \qquad (25\text{-}17)$$
Figure 25-1. Jacket structure with tubular joint and hot spot.
Using this expression the variance and the mean period are computed from the spectral moments of
order zero and two. The hot spot stress process is nonnormal and is, in each sea state, assumed to be
of the form

$$s(t) = C_1 \dot X(t) + C_2 X(t)\,|X(t)| \qquad (25\text{-}18)$$

where X(t) is a zero-mean and stationary Gaussian process, and a dot denotes the time derivative. C₁
and C₂ are determined from the variance and mean period in each sea state. A process of this type was
studied by Pierson and Holmes (1965). The effect of the nonlinearity varies from sea state to sea state,
which is easily understood from the variances of the two terms in the Morison equation. The variances
are proportional to H_s² and H_s⁴, respectively. Usually only a few selected sea states are analyzed and
the effect of the nonlinearity for the remaining sea states is obtained by interpolation or extrapolation.
The effect of the nonlinearity depends on how drag-dominated the structure is. For example, Skjong
and Madsen (1987) analyzed a drag-dominated jacket using two stochastic linearization points.
Attempts have been made to integrate the global response analysis with the local stress analysis,
using the full influence matrix concept (Gibstein et al., 1989). In these analyses the shell element model
of the tubular joints can be incorporated directly into a frame model of the structure.
$$D_i = \frac{1}{N[S_i]} \qquad (25\text{-}19)$$

where N[S_i] is the number of cycles to failure for stress range S_i. The fatigue strength is expressed
through the S-N relation, which gives the number of stress cycles N under constant-amplitude loading
with stress range S necessary to cause failure. This means that when selecting the S-N curve, the implicit
failure definition (usually through-wall cracking) used in the fatigue experiments is implied (Department
of Energy [DOE], 1984). In practice little load-bearing capacity is lost at this stage (Carr et al., 1986;
Karsan and Kumar, 1990; Karamchandani et al., 1993a,b). S-N curves for different types of joints are
given in DNV (1982). For tubular joints the S-N curve is often selected as the T curve from the DOE
(1982). The T curve is shown in Fig. 25-2 and can be written as
(25-21)
The upper part of the T curve (for N < 10⁷ cycles) is the result of a linear regression analysis of test
results of the form shown in Fig. 25-2. The distribution of ln N for a fixed value of S is assumed to be
normal, with a mean value varying linearly with ln S and with a standard deviation σ_lnN
independent of S. The mean value of ln N is estimated from the test results (DOE, 1982) as
The slope m = 3 for N < 10⁷ is justified from fracture mechanics considerations. The change in slope
to m₁ = 5 at N = 10⁷ is not well justified; the slope m₁ is therefore modeled as a random variable in
a probabilistic analysis,

m₁ = 3 + M (25-23)
$$N = K S^{-m} \left(\frac{t}{32}\right)^{-m/4} \qquad (t \geq 32\ \text{mm}) \qquad (25\text{-}24)$$
where t is the wall thickness in millimeters. For t < 32 mm no reduction is carried out. The reduction
factor is subject to uncertainty, and this is modeled in the probabilistic analysis by changing the
exponent m/4 to a random variable with this mean value. The sensitivity of the reliability to the value
of the exponent can then be directly determined.
The statistical analysis in DOE (1982) assumes a complete prior knowledge of the slope m (m = 3)
and no prior knowledge of the standard deviation of In N. In Madsen et al. (1989) and Madsen (1985)
a more refined statistical analysis is presented. On the basis of a Bayesian statistical analysis with
various choices for prior distribution, simple results for the posterior distribution of m, K, and σ_lnN are
derived.
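The regression step behind the mean S-N curve can be sketched with ordinary least squares on (ln S, ln N) pairs. The test data below are synthetic, generated to be consistent with a slope m = 3 (they are not the DOE T-curve data):

```python
import math

# Ordinary least-squares fit of the mean S-N relation ln N = ln K - m * ln S
# from constant-amplitude test results (synthetic (S, N) points).
tests = [(200.0, 2.0e5), (150.0, 4.7e5), (100.0, 1.6e6), (60.0, 7.4e6)]
xs = [math.log(s) for s, _ in tests]
ys = [math.log(n) for _, n in tests]
npts = len(tests)
xbar, ybar = sum(xs) / npts, sum(ys) / npts
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
m_hat = -slope                          # estimated S-N exponent
K_hat = math.exp(ybar - slope * xbar)   # estimated intercept
print(f"m = {m_hat:.2f}, K = {K_hat:.2e}")
```

The Bayesian treatment cited above replaces these point estimates with posterior distributions of m, K, and the scatter σ_lnN.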
Figure 25-2. The T curve: stress range S (N/mm²) versus number of cycles to failure N (log-log scale).
$$D = \sum_i \frac{n(S_i)}{N(S_i)} \qquad (25\text{-}25)$$

where n(S_i) is the number of stress cycles of stress range S_i in the stress history and N(S_i) is the number
of stress cycles of stress range S_i necessary to cause failure. The summation is over all stress ranges.
The damage is computed for each sea state separately and then weighted according to the long-term
sea state probabilities, and accounting for the wave spreading. For a linear S-N relation, the damage
in a time period T is D_i for the ith sea state,

$$D_i = \frac{\nu_{0,i} T}{K}\, E[S^m] \qquad (25\text{-}26)$$

where ν₀,ᵢT is the expected number of stress cycles in the time period T. The stress range distribution
is computed for an ideal narrow-banded response process with the same variance λ₀,ᵢ and same zero-crossing
frequency ν₀,ᵢ. For a normal response process the stress range then follows a Rayleigh distribution,
and Eq. (25-26) yields

$$D_i = \frac{\nu_{0,i} T}{K} \left(2\sqrt{2\lambda_{0,i}}\right)^{m} \Gamma\!\left(1 + \frac{m}{2}\right) \qquad (25\text{-}27)$$
Similar expressions for the nonnormal process described in Pierson and Holmes (1965) are given in
Madsen et al. (1986a). When the S-N relation is piecewise linear the results are also simple to give in
closed form as sums of terms similar to Eq. (25-27), but with each term including the incomplete
gamma function. Slightly more complicated formulas are arrived at when the technique to account for
the non-Gaussian response resulting from the nonlinearity of the Morison equation is used (Skjong and
Madsen, 1987). If the narrow-banded assumption is violated, as in some offshore examples in which
the response is characterized by two distinct responses, one high frequency and one low frequency,
approximate methods that work in many such cases can be found in Jiao and Moan (1990). The
advantage of all these advances in fatigue damage modeling is that the derived analytical formulas
are easily implemented in engineering software (Madsen et al., 1989) that is easy to use for most
engineers.
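A sketch of the narrow-band closed form per sea state and the long-term weighting; with a linear S-N relation N = K S⁻ᵐ and Rayleigh stress ranges, the damage over time T is D = (ν₀T/K)(2√(2λ₀))ᵐ Γ(1 + m/2). All sea-state probabilities, variances, frequencies, and S-N constants below are illustrative, not from any design code:

```python
import math

# Narrow-band closed-form fatigue damage per sea state and its long-term
# weighted sum (Miner damage).  All numbers are invented for illustration.
m, K = 3.0, 1.0e13
T = 365.25 * 24.0 * 3600.0  # one year, in seconds

sea_states = [  # (probability q_i, variance lambda0_i [(N/mm^2)^2], nu0_i [Hz])
    (0.60, 50.0, 0.16),
    (0.30, 200.0, 0.14),
    (0.10, 600.0, 0.12),
]

def damage(q, lam0, nu0):
    return (q * nu0 * T / K
            * (2.0 * math.sqrt(2.0 * lam0)) ** m
            * math.gamma(1.0 + m / 2.0))

D_year = sum(damage(*s) for s in sea_states)
print(f"annual Miner damage D = {D_year:.4f}, "
      f"implied fatigue life = {1.0 / D_year:.1f} years")
```

Failure in the probabilistic formulation corresponds to the accumulated damage reaching the (random) Miner sum at failure Δ, rather than the deterministic value 1.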
The limit state function for the S-N approach is now

$$g(\mathbf{x}) = \Delta - \sum_{i \in (H_S, T_Z)} q_i D_i \qquad (25\text{-}28)$$

where Δ is defined as the Miner sum at failure, and Δ = 1 is used in classic, deterministic design
procedures.
where C and m are material constants and ΔK_Threshold is ΔK at the threshold point (see Fig. 25-3). The
equation was proposed on the basis of experimental results, but is also the result of various mechanical
and energy-based models (Wirsching et al., 1987). Generally the crack growth rate da/dN is expressed
as a function of the stress intensity range ΔK for different material thicknesses, different environments, and for different
stress ratios R = σ_min/σ_max. The general fatigue cracking pattern exhibited by most metallic materials is
shown in Fig. 25-3. The curves may be more complicated for corrosive environments.
Equation (25-29) describes the crack growth for a through-the-thickness crack. For surface cracks a
description of the crack length, depth, and shape is necessary. This is commonly done by assuming an
initial semielliptical crack that maintains its shape during the growth. In this case the crack length 2c
and the depth a suffice to describe the crack. By assuming the Paris relationship to be valid at the
deepest point and at the surface, Equation (25-29) is replaced by a pair of coupled equations (Aamodt,
1984; Raju and Newman, 1981; Shang-Xian, 1985),
where ΔK_a and ΔK_c are stress intensity factors, and (m_a, C_a) and (m_c, C_c) are material parameters for
the two points. The two-dimensional description can easily be incorporated in the probabilistic descrip-
tion. For simplicity, the limit state function below is given in one-dimensional form.
The crack increment in one cycle is generally very small compared to the crack size and Eq.
(25-29) is consequently written as
$$\frac{da}{dN} = C\,(\Delta K)^m, \qquad \Delta K > \Delta K_{\text{Threshold}} \qquad (25\text{-}32)$$
Figure 25-3. Schematic showing the variation of fatigue crack growth rate, da/dN, as a function of stress intensity
factor range ΔK.
where N is the number of stress cycles. The stress intensity range ΔK is computed by linear elastic
fracture mechanics and is expressed as

$$\Delta K = Y(a)\, S\, \sqrt{\pi a} \qquad (25\text{-}33)$$

where S is the far-field or hot-spot stress range and Y(a) is the geometry function. The geometry function
depends on the overall geometry, including the geometry of the crack and the geometry of a possible
weld. To explicitly account for uncertainties in the calculation of ΔK, the geometry function is written
as Y(a) = Y(a, Y), where Y is a vector of random parameters. Inserting Eq. (25-33) into Eq. (25-32)
and separating the variables leads to the equation
$$\int_{a_0}^{a_c} \frac{da}{Y(a, \mathbf{Y})^m\, (\pi a)^{m/2}} = C \sum_{i=1}^{I} S_i^m \qquad (25\text{-}34)$$
where I is the number of stress cycles, a₀ is the initial crack size, and a_c is the critical crack size, or
the through-wall crack size if the failure criterion is to correspond to the most common S-N criterion.
Refinements of this model to account for material inhomogeneities are possible (Irving and McCartney,
1977; Ortiz and Kiremidjian, 1988; Madsen and Skjong, 1987), and data for welded material properties
are available in Lassen (1989). The model (Eq. [25-34]) can be extended to variable-amplitude loading
when the appropriate value for S is inserted for each stress cycle. The extension neglects possible
sequence effects.
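The separated form turns the fatigue life into a one-dimensional integral over crack size, ψ = ∫ da/(Y√(πa))ᵐ, set equal to C·I·E[Sᵐ] for variable-amplitude loading. A numerical sketch with a constant geometry function and invented Paris constants:

```python
import math

# Cycles to failure from the separated crack-growth equation:
#   psi = integral_{a0}^{ac} da / (Y * sqrt(pi*a))**m = C * I * E[S^m]
# Y is taken constant for simplicity; all values are illustrative only.
C, m = 2.0e-13, 3.0      # Paris constants (consistent units: mm, N/mm^2)
Y = 1.1                  # constant geometry function (simplification)
a0, ac = 0.5, 16.0       # initial and critical crack sizes [mm]
ESm = 8000.0             # mth moment of the long-term stress range distribution

def integrand(a):
    return 1.0 / (Y * math.sqrt(math.pi * a)) ** m

# midpoint-rule quadrature of the crack-size integral
n = 100_000
h = (ac - a0) / n
psi = sum(integrand(a0 + (i + 0.5) * h) for i in range(n)) * h

cycles_to_failure = psi / (C * ESm)
print(f"predicted cycles to failure: {cycles_to_failure:.3e}")
```

For constant Y and m ≠ 2 the integral is also available analytically, which makes a convenient cross-check of the quadrature.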
The limit state function for the fracture mechanics formulation when ΔK_Threshold = 0 is

$$g(\mathbf{x}) = \int_{a_0}^{a_c} \frac{da}{Y(a, \mathbf{Y})^m\, (\pi a)^{m/2}} - C \sum_{i=1}^{I} S_i^m \qquad (25\text{-}35)$$

which, for the load model in Eq. (25-27), substituting Σᵢ Sᵢᵐ by ν₀,ᵢT E[Sᵐ], would read

$$g(\mathbf{x}) = \int_{a_0}^{a_c} \frac{da}{Y(a, \mathbf{Y})^m\, (\pi a)^{m/2}} - C\, \nu_{0,i} T\, E[S^m] \qquad (25\text{-}36)$$
The formulas vary slightly with and without threshold, with and without accounting for the non-
Gaussian response due to Morison nonlinearities (Skjong and Madsen, 1987), but the formulas are
analytical and are easily included in computer implementations (Madsen et al., 1989).
Results from the S-N and fracture mechanics approaches are compared in Kirkemo (1988) and the
agreement is satisfactory. The advantage of fracture mechanics over the S-N procedure, in addition to
providing a more detailed description of the physical phenomenon, becomes more apparent when inspection
results are to be considered.
be described by

$$h_F(\mathbf{x}, t_i) = \int_{a_0}^{a_m} \frac{da}{Y(a, \mathbf{Y})^m\, (\pi a)^{m/2}} - C \sum_{j=1}^{I(t_i)} S_j^m \qquad (25\text{-}37)$$

where h_F(x, t_i) is the event margin for the detection at time t_i, with I(t_i) the number of stress cycles up
to t_i. The condition h_F(x, t_i) = 0 is used because
the event (detection) has occurred. The same model as for the failure event is applied, except that the
crack has grown to the detected crack size a_m and the time of inspection is used instead of the service
life. The uncertainty in the detected crack size represents the inaccuracy of the inspection equipment.
Similarly, inspection results of the second type (does not find a crack, "no detection") at time t_i are
modeled as

$$h_N(\mathbf{x}, t_i) = \int_{a_0}^{a_D} \frac{da}{Y(a, \mathbf{Y})^m\, (\pi a)^{m/2}} - C \sum_{j=1}^{I(t_i)} S_j^m \qquad (25\text{-}38)$$

where a_D is the smallest detectable crack size or detection threshold, represented by probability of
detection (POD) curves (see Fig. 25-4). The condition h_N(x, t_i) > 0 appears because the event did not
occur.
Using the models for two-dimensional description of crack growth complicates the modeling slightly,
because the coupled differential equations must be solved (see Eqs. [25-30] and [25-31]). The advantage
of this model is that the inspection events can be directly coupled to detected depth (e.g., eddy current)
or length (e.g., magnetic particle or visual inspections).¹
The updated failure probability is

$$P_{f,\text{up}} = P\Big[g(\mathbf{x}) < 0 \;\Big|\; \bigcap_{i=1}^{r} \{h_N(\mathbf{x}, t_i) > 0\} \cap \bigcap_{j=1}^{s} \{h_F(\mathbf{x}, t_j) = 0\}\Big] \qquad (25\text{-}39)$$

where r inspections of the no-detection type and s inspections of the detection type are envisaged. The
¹More details on detection probabilities of inspection methods can be found in Chapter 11 of this handbook.
Figure 25-4. Distribution of smallest detectable crack size (probability of detection [POD] curve), with 95% confidence band, as a function of defect length in mm (34 defects, 342 observations).
calculation of these updated reliabilities can be performed by Bayes formula. For the no-detection events
the updating is obvious, whereas for the detection events formulas are derived by observing that the
equality constraints can be calculated using the sensitivity parameters for parallel systems as described
in Madsen et al. (1986a, 1987), Tvedt (1989), and Madsen (1985); see Eq. (25-1). In the updating it
is important that the stochastic variables in the failure event and inspection event be the same, except
for the upper integration limit. This is what defines the dependencies in the parallel systems resulting
from the Bayes formula. It is also important that methods capable of calculating the design point of the
intersecting event be used (Tvedt, 1989).
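The updating logic can be illustrated by crude Monte Carlo: simulate crack growth, keep only the samples consistent with the inspection outcome, and re-estimate the failure probability. The growth law, distributions, and POD below are all hypothetical; in practice the conditioning is computed with FORM parallel-system sensitivities rather than by simulation:

```python
import math, random

random.seed(7)

# Toy reliability updating after a "no detection" inspection.  A crack of
# uncertain initial size a0 grows as a(t) = a0 + rate*sqrt(t) (invented model);
# failure if a reaches ac within the service life; the smallest detectable
# crack size aD is exponentially distributed (exponential POD).
T_life, T_insp, ac = 20.0, 10.0, 10.0   # service life [yr], inspection [yr], critical size [mm]
lam_pod = 1.5                           # mean detectable crack size [mm]

def crack_size(a0, rate, t):
    return a0 + rate * math.sqrt(t)     # toy growth model, illustration only

n = 100_000
fail = nodet = fail_and_nodet = 0
for _ in range(n):
    a0 = random.lognormvariate(math.log(0.3), 0.4)    # initial crack [mm]
    rate = random.lognormvariate(math.log(0.9), 0.5)  # growth rate [mm/yr^0.5]
    failed = crack_size(a0, rate, T_life) >= ac
    no_detection = crack_size(a0, rate, T_insp) < random.expovariate(1.0 / lam_pod)
    fail += failed
    nodet += no_detection
    fail_and_nodet += failed and no_detection

print(f"prior P_f   = {fail / n:.4f}")
print(f"updated P_f = {fail_and_nodet / nodet:.4f} (given no detection)")
```

Surviving an inspection without a detected crack screens out the fast-growing realizations, so the updated failure probability drops well below the prior.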
It is assumed that ln A and 1/B are bivariate normally distributed. The five parameters E[ln A], E[1/B],
σ[ln A], σ[1/B], and ρ[ln A, 1/B] are estimated by calculating probabilities corresponding to the limit
state function:

(25-41)

where E[·] is the expectation, σ[·] is the standard deviation, and ρ[·] is the correlation coefficient. In
total, six g_{k,l} functions are calculated to obtain one long-term stress distribution (defined by the five
parameters), leaving a slightly overdetermined system for estimating the five parameters (Skjong and
Madsen, 1987).
The value of the mth moment for a Weibull distribution of the stress ranges S is

$$E[S^m] = A^m\, \Gamma(1 + m/B) \qquad (25\text{-}42)$$

so that the failure event becomes

$$\left[g(\mathbf{x}) = \int_{a_0}^{a_c} \frac{da}{Y(a, \mathbf{Y})^m\, (\pi a)^{m/2}} - C\, \nu T A^m \Gamma(1 + m/B) \leq 0\right] \qquad (25\text{-}43)$$

where ν is the average zero-crossing frequency. Similarly, the inspection events are simplified to
h_F(x) = 0 (25-44)

and

$$h_N(\mathbf{x}) = \int_{a_0}^{a_D} \frac{da}{Y(a, \mathbf{Y})^m\, (\pi a)^{m/2}} - C\, \nu T_{\text{insp},i} A^m \Gamma(1 + m/B) \qquad (25\text{-}45)$$

for "finds" and "no finds," respectively. The quality of the approximation is demonstrated in Kirkemo
(1988).
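The closed forms above hinge on the Weibull moment formula E[Sᵐ] = Aᵐ Γ(1 + m/B), which is easy to verify by sampling (parameters illustrative):

```python
import math, random

random.seed(3)

# Sampling check of the Weibull mth-moment formula E[S^m] = A**m * Gamma(1 + m/B),
# which underlies the closed-form damage and inspection-event expressions.
A, B, m = 10.0, 1.2, 3.0   # illustrative long-term Weibull scale/shape, S-N exponent
exact = A ** m * math.gamma(1.0 + m / B)

n = 300_000
mc = sum(random.weibullvariate(A, B) ** m for _ in range(n)) / n
print(f"exact E[S^m] = {exact:.1f}, Monte Carlo = {mc:.1f}")
```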
where A(z) and Y(a, Y, z) now are functions of the design variable z.
If inspections are performed at times T₁ and T₂ with no repair at time T₁ and repair at T₂, the safety
margin for failure time t > T₂ is

$$M_{01} = \int_{a_0}^{a_c} \frac{da}{Y(a, \mathbf{Y}, z)^m\, (\pi a)^{m/2}} - C\, \nu (t - T_2) A(z)^m \Gamma(1 + m/B) \qquad (25\text{-}47)$$
If it is assumed that all detected cracks will be repaired, the event margin corresponding to the event
that a crack is found and repaired at the first inspection T₁ can be formulated as

$$R = \int_{a_0}^{a_D} \frac{da}{Y(a, \mathbf{Y}, z)^m\, (\pi a)^{m/2}} - C\, \nu T_1 A(z)^m \Gamma(1 + m/B) \qquad (25\text{-}48)$$
(25-49)
Assuming that I inspections are performed at times T_i, i ∈ [1, I], the reliability index β for failure
before t is
and so on, for the various paths in Fig. 25-5, and the sequences of 0 and 1 in the subscript of R represent
sequences of repairs and no-repairs. From this the expected number of repairs can be calculated as
E[R_i] (Madsen, 1988).
In the above discussion, two outcomes (repair and no-repair) at each inspection are considered. It is
Figure 25-5. Repair realization for single elements (0 denotes "no repair"; 1 denotes "repair").
possible to consider two or more alternative repair methods at each inspection. If, for example, two
alternative repair methods are considered, the tree (Fig. 25-5) will have three branches at each node. If
it is assumed that the decision maker acts in an optimum manner during the in-service life, the branching
into three and possibly more branches (Madsen and Sorensen, 1990) can be avoided by dynamic programming
methods.
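The repair/no-repair tree of Fig. 25-5 can be enumerated directly when the branch probabilities are known; a small sketch with invented per-inspection repair probabilities, checking that the branch probabilities sum to one and recovering the expected number of repairs:

```python
from itertools import product

# Enumerate the repair realizations of Fig. 25-5 (0 = no repair, 1 = repair)
# for I inspections, assuming for illustration that the repair events at the
# different inspections are independent with probabilities p[i].
p = [0.10, 0.15, 0.20]  # hypothetical repair probabilities per inspection

branches = {}
for path in product((0, 1), repeat=len(p)):
    prob = 1.0
    for outcome, pi in zip(path, p):
        prob *= pi if outcome else (1.0 - pi)
    branches[path] = prob

total = sum(branches.values())
expected_repairs = sum(sum(path) * prob for path, prob in branches.items())
print(f"total probability = {total:.6f}, E[repairs] = {expected_repairs:.3f}")
```

Under the independence assumption the expected number of repairs reduces to the sum of the p[i], which the enumeration reproduces; in the full formulation the branch probabilities come from the correlated event margins instead.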
The inspection quality is modeled by treating the detectable crack size a_D as a random variable. If
the POD curve is assumed to be exponential,

$$F_{a_D}(a) = 1 - \exp(-a/\lambda), \qquad a \geq 0 \qquad (25\text{-}54)$$

where λ is the mean detectable crack size.
$$\cdots + \sum_{i} C_F(T_i)\, \Delta P_F(T_{i-1}, T_i)\,/\,(1 + r)^{T_i}$$

subject to the reliability constraint β(t) ≥ β_min, the minimum and maximum time between inspections
t_min ≤ t_i ≤ t_max, the limitations on inspection quality q_min ≤ q_i ≤ q_max, and the limitations on the design
variable z_min ≤ z ≤ z_max, where ΔP_F(T_{i−1}, T_i) is the probability of failure between times T_{i−1} and T_i.
Here r is the real corporate rate of return and the cost functions are the initial cost C_I(z) = C_{I0} + C_{I1}(z
− z₀) (a function of the design variables); the inspection cost C_IN(q_i), which could be a function of the
inspection quality q_i; the repair cost C_R; and the cost of failure C_F. C_{I0} is the cost of an initial proposed
design with design parameters z₀, and C_{I1} is the cost of changing the initial design.
The control variables in the optimization formulation are the design variable z, the inspection times
t = (t₁, t₂, ..., t_I) and the inspection qualities q = (q₁, q₂, ..., q_I). This is a continuous optimization
problem (i.e., t, q, and z are continuous variables). The integer programming problem with q_i as a set
of existing inspection qualities (corresponding to a set of inspection methods) would be of more direct
use. It is, however, seen from the available results that the most important implications can be derived
from the continuous optimization problem (Skjong et al., 1989; Dalane et al., 1990; Madsen and Sor-
ensen, 1990).
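A toy, deliberately simplified version of the continuous optimization problem, with a single inspection time and hypothetical cost and probability models, shows the trade-off structure (the reliability constraint of the full formulation is omitted in this sketch):

```python
import math

# Choose a single inspection time t1 in a service life T to minimize expected
# discounted cost.  Failure probability is assumed to accumulate as
# P_f(t) = 1 - exp(-k*t**2), and a (perfect) inspection-plus-repair resets the
# accumulation.  All numbers are hypothetical.
T, k, r = 20.0, 4.0e-4, 0.05
C_IN, C_R, C_F = 1.0, 5.0, 1000.0   # inspection, repair, and failure costs
P_repair = 0.1                      # chance the inspection triggers a repair

def pf(t):
    return 1.0 - math.exp(-k * t * t)

def expected_cost(t1):
    disc = lambda t: (1.0 + r) ** (-t)
    cost = (C_IN + P_repair * C_R) * disc(t1)
    cost += C_F * pf(t1) * disc(t1 / 2.0)            # failure before inspection
    cost += C_F * pf(T - t1) * disc((t1 + T) / 2.0)  # failure after the reset
    return cost

best_t1 = min((0.5 * i for i in range(1, 40)), key=expected_cost)
print(f"optimal single inspection at t1 = {best_t1} years, "
      f"expected cost = {expected_cost(best_t1):.2f}")
```

Even this caricature reproduces the qualitative result of the cited studies: the optimum balances early inspection (reducing late-life failure probability) against discounting and the cost of inspecting too soon.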
6.1. Overview
The offshore industry has for some 20 years been concerned with the problem of quantifying the
reliability of the total structural system, as opposed to the traditional component-based approach (Vugts
and Edwards, 1992). The cases that in general have received the most attention in structural systems
reliability are as follows.
The target reliability index β_Member,target may depend on the importance of the structure and the member,
that is, different safety factors may be applied for different classes of structures (DNV, 1992).
As seen, current probability-based design codes do not attempt to quantify the reliability of a structure
as a system. In order to do so, and to take advantage of this in the optimization of the structural design,
a systems reliability approach must be used.
the problem, this mode is often taken to be representative of the most likely failure mode in practical
applications.
The primary outcome of a stochastic collapse analysis is the probability that the structure fails due
to an extreme storm loading condition. In the analysis, a probabilistic model for the extreme event will
be adopted, representing, for example, the worst storm in a year. The collapse analysis can be used to
determine the importance of single members in the structure and to define the requirement of these
members with respect to fatigue failure (Lotsberg and Kirkemo, 1989).
Chapter 8 of this handbook also discusses some systems reliability applications for offshore struc-
tures; they include incremental loading models, incorporation of a "systems factor" in probability-
based codes to account for redundancy, effects of material behavior on systems reliability, and reliability
optimization of offshore platform geometry.
7. PROBABILITY-BASED REQUALIFICATION
Today there are more than 6000 fixed offshore structures in operation on the continental shelves,
many of which have passed or are getting close to passing the intended design lives. From a traditional
deterministic point of view or from the requirements of the codes this would easily lead to the decom-
missioning of the platforms. Renewed drilling activities to further develop the reserves and new drilling
technology would, on the other hand, in most cases make it economically attractive to continue operation
of the platform. This is the main reason for the offshore industry's interest in requalification of existing
structures by use of probabilistic methods. The methods discussed in Section 5 have the potential of
including new information from the in-service history of the platforms, and, if possible, requalify the
platforms for continued operations. The new information could be from inspection results, load moni-
toring, response monitoring, survival of extreme loads, etc.
There are many variations on how existing platforms are requalified using probabilistic methods.
They depend on present codes, the structure under consideration, the type of new information available,
etc. For a description of such procedures see Diamantidis et al. (1991), Skjong (1987), Ocean Industry
(1991), Lotsberg and Kirkemo (1989), and Larsen et al. (1986). Simpler procedures that are also based
on probabilistic methods, but not on level III formulations, can be found resulting from the PMB
Engineering/Minerals Management Service's (PMB/MMS) Assessment, Inspection, and Maintenance
(AIM) project (Bea et al., 1988), and in applications described in Frieze (1989), Martindale et al. (1989),
and Bea et al. (1992).
The American Society of Civil Engineers (ASCE) Task Committee on Inspection, Maintenance,
and Requalification (a subcommittee of the ASCE Committee on Offshore Structures Reliability) com-
pleted its 3-year effort in 1993 and prepared two papers on the subject (Banon et al., 1994; Banon,
1994). The first paper describes the methods and analysis and inspection necessary for reassessing the
safety of existing platforms. The second paper describes the formal process of reassessment.
The American Petroleum Institute (API) has also started work on developing simple guidelines for
platform reassessment. These guidelines, once developed, will be added to future editions of API code
RP2A-LRFD (API, 1993).
Further research is required to widen the use of these methods. One of the larger activities in this
area started with a 3-year project within the CEC BRITE program "Decision Making Methodology for
Requalification of Aging Structures," with AGIP, COWI, ATKINS, MPA, MIT GmbH, and DNV Research
as partners.
Generally the industry would like to have design codes that are international, simple to use, flexible,
and that give a uniform and acceptable risk when applied. These goals are, however, conflicting. The
price paid by simplicity is in many cases nonuniform safety levels. Because the environmental char-
acteristics and water depths vary on the continental shelf it is also difficult to develop simple design
rules that cover large areas and at the same time give uniform safety levels. The use of probabilistic
methods to calibrate load and resistance factor design (LRFD) format codes is, however, recognized by
the offshore industry and regulatory bodies. The probabilistic methods are seen as a rational tool to
achieve the optimum compromise among these goals.
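The essence of such a calibration can be sketched in a few lines of code. The following Python fragment is purely illustrative (normal load and resistance, invented statistics; it does not reproduce any actual code-calibration procedure): given a target reliability index, it solves for the mean resistance the calibrated rule must deliver, then verifies the resulting failure probability.

```python
from math import sqrt
from statistics import NormalDist

def required_mean_resistance(mu_s, sigma_s, cov_r, beta_target):
    # Solve (mu_r - mu_s) / sqrt((cov_r*mu_r)**2 + sigma_s**2) = beta_target
    # for mu_r; squaring gives a quadratic a*mu_r**2 + b*mu_r + c = 0.
    a = 1.0 - (beta_target * cov_r) ** 2
    b = -2.0 * mu_s
    c = mu_s ** 2 - (beta_target * sigma_s) ** 2
    return (-b + sqrt(b * b - 4 * a * c)) / (2 * a)

mu_s, sigma_s = 100.0, 20.0   # load effect: mean and standard deviation (assumed)
beta_t = 3.5                  # target reliability index (assumed)
cov_r = 0.10                  # coefficient of variation of resistance (assumed)

mu_r = required_mean_resistance(mu_s, sigma_s, cov_r, beta_t)
beta = (mu_r - mu_s) / sqrt((cov_r * mu_r) ** 2 + sigma_s ** 2)  # check
pf = NormalDist().cdf(-beta)  # corresponding notional failure probability
```

Repeating such a calculation over a range of design parameters (e.g., leg spacings) is, in miniature, what a code calibration does when it seeks load and resistance factors that give near-uniform reliability.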
The upgrading of the API RP2A code from the Working Stress Format to the International Organization
for Standardization (ISO)-required Partial Safety Factor Format, or Load and Resistance Factor Design
(LRFD) format, is therefore an important milestone. Work on the code started in the late 1970s. Several API
reports (API, 1993) and some papers (Moses and Larrabee, 1988; Lloyd and Karsan, 1988; Ferguson,
1990) were published. The API issued a draft LRFD code in 1989. The draft was revised and the first
edition was issued in 1993 (API, 1993). RP2A is not the first attempt within the offshore industry to
transfer from Working Stress Format to LRFD. This was done in 1977 when the Norwegian Certifying
Authority DNV introduced a "limit state" code (DNV, 1977; Fjeld, 1977; Abrahamsen, 1976). The
statutory requirements of the Norwegian Petroleum Directorate (NPD, 1985) now incorporate material
and load factors taken from the DNV (Ferguson, 1990).
Today, the probabilistic methods used to calibrate LRFD format codes are well accepted by the offshore
industry, and the Norwegian Certification Authority DNV has had the following policy for some
years: "No major rule development without formal calibration by reliability-based code optimization."
Chapter 15 discusses the general procedure of probability-based design code development. Further
discussions of code development directly related to offshore structures can be found in Turner et al.
Figure 25-6. Required soil resistance capacity (normalized units) versus leg spacing (0.85 to 1.15), for the uncalibrated, calibrated, and optimized design rules.
(1992) and Hauge et al. (1992). The second paper also demonstrated the savings achieved by optimum
code calibration and the further benefits of using probabilistic design.
Figures 25-6 and 25-7 show examples of two simple design rules for a jack-up structure. The purpose
of the first rule is to specify the required axial capacity of the soil to prevent "punching" for a given
spudcan. The purpose of the second rule is to specify the required axial capacity to prevent buckling
in a leg bracing close to the deck. The two rules are similar in format.
Figure 25-7. Required bracing resistance capacity (normalized units) versus leg spacing (0.85 to 1.15), for the uncalibrated, calibrated, and optimized design rules.
The two design rules considered are given in the LRFD format as follows:
(25-56)
where
γ_m1, γ_m2 = Safety factors for the soil resistance
γ_f = Safety factor for environmental load
γ_L = Safety factor for live loads
R_C1 = Characteristic soil strength
R_C2 = Characteristic buckling strength
C_Dc, D_c = Scale of loads on the legs by equivalent drag coefficient and diameter
h_c = The 50-year return period wave height
LL_c = Characteristic live load
LD_c = Characteristic dead load (weight of structure)
Q_1, Q_2 = Constants for each load effect
x = Leg spacing
Level III models were used in these two calibrations. More detailed descriptions can be found in Hauge
et al. (1992), Kjeoy et al. (1989), Bradshaw (1988), and DNV (1984). Figures 25-6 and 25-7 show the
uncalibrated capacity requirements; the calibrated capacities, which give approximately the same level of
reliability irrespective of leg spacing; and the capacities required by optimized codes.
A major project to develop a model code for design of floating platforms is currently in progress,
following the "Do it once, do it right, and do it internationally" motto of the Vienna Agreement between
the ISO and the CEN (Thomas, 1992). To achieve this goal, detailed and advanced level III models are
being developed to calibrate fairly simple design rules. Fairly uniform reliability levels have reportedly
been achieved.
Limit states for buckling failure of orthogonally stiffened cylindrical shells and stiffened flat plates
were considered in the tension leg platform (TLP) hull reliability analysis (Mathisen et al., 1993). The
following loads were included in the analysis.
• Still-water loads acting on the TLP in the upright position, in the absence of environmental actions
• Loads due to mean and low-frequency environmental actions, which induce horizontal offset of the platform,
with associated set-down
• Wave-induced loads in the frequency range of the incoming waves
In the level III analysis, the loads were transformed into local load effects, in the form of stresses,
as required in the buckling capacity formulations. A vector outcrossing formulation (Hagen and Tvedt,
1991) was used to express the probability of failure. An inner layer of FORM calculations was used to
find the marginal outcrossing rate, with respect to the long-term distribution of the environmental ac-
tions. The probability of failure was then computed in an outer layer FORM computation, taking into
account the probability distribution of the time-independent stochastic variables.
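The FORM computations referred to above can be illustrated with a minimal sketch. The following Python fragment applies the standard HL-RF iteration to an invented limit state already expressed in standard normal space; it is not the outcrossing formulation itself, only the basic design-point search used within each layer.

```python
from math import sqrt
from statistics import NormalDist

# Invented limit state g(u) = 3 - u2 - 0.5*u1^2, already in standard
# normal u-space, so no probability transformation is needed here.
def g(u):
    return 3.0 - u[1] - 0.5 * u[0] ** 2

def grad_g(u):
    return [-u[0], -1.0]

u = [0.5, 0.5]                  # non-symmetric start avoids a saddle point
for _ in range(50):             # HL-RF update toward the design point u*
    gr = grad_g(u)
    norm2 = sum(gi * gi for gi in gr)
    factor = (sum(gi * ui for gi, ui in zip(gr, u)) - g(u)) / norm2
    u = [factor * gi for gi in gr]

beta = sqrt(sum(ui * ui for ui in u))   # reliability index = |u*|
pf = NormalDist().cdf(-beta)            # first-order failure probability
```

For this limit state the iteration converges to the design point (2, 1), giving β = √5 ≈ 2.24; a nested ("inner/outer layer") scheme simply wraps such a search inside another.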
The results were used to propose a design rule format, with associated partial safety factors. Although
the level III formulation is rather complicated, the level I design rule format (LRFD format) and the
characteristic quantities involved in the rule are quite straightforward, and fairly similar to those used
in current design practice. The proposed design rule was then used to generate a set of test designs,
and the level III analysis was applied to these designs to check the reliability level obtained with the
proposed design rule. Satisfactory results were obtained, but further iterations on the design rule format
and partial safety factors would be required to complete the calibration of the design rule.
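The check of test designs can be sketched as follows. In this hypothetical Python fragment, a member is sized by an assumed LRFD rule (invented partial factors and distributions), and its failure probability is then estimated by crude Monte Carlo simulation standing in for a full level III analysis.

```python
import random
from math import exp, log, sqrt

random.seed(1)

# Assumed (invented) partial factors and characteristic values
gamma_f, phi = 1.35, 0.85            # load factor, resistance factor
s_char = 100.0                       # characteristic load effect
r_char = gamma_f * s_char / phi      # resistance the level I rule demands

# "Level III" model: lognormal resistance, normal load effect
mu_r, cov_r = 1.10 * r_char, 0.10    # mean resistance above characteristic
zeta = sqrt(log(1.0 + cov_r ** 2))   # lognormal parameters from mean/COV
lam = log(mu_r) - 0.5 * zeta ** 2

n, fails = 200_000, 0
for _ in range(n):
    r = exp(random.gauss(lam, zeta))         # sampled resistance
    s = random.gauss(0.9 * s_char, 15.0)     # sampled load effect
    fails += r <= s
pf = fails / n                               # estimated failure probability
```

If the estimated probability misses the target band, the partial factors are adjusted and the loop over test designs is repeated, which is the iteration the text describes.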
From the review of research projects and research topics it can be seen that the offshore industry
has shown significant and increasing interest in the use of probabilistic structural mechanics methods.
In particular the interest has focused on calibration of technical codes and standards, inspection planning,
and probabilistic design. Probabilistic design methods, used directly to calibrate the load and resistance
factors for one particular structure, in one particular environment, are used in most cases for new
structural concepts, new materials, and new environments; for example, probabilistic methods have been
used extensively for the design of tension leg platforms.
The use of probabilistic methods is diversified and it would be an impossible task to give an extensive
and systematic review of all applications. Many of the applications that saved the offshore industry
hundreds of millions of dollars are never published or are published in a form in which the savings
compared to conventional design are not made explicit. One exception is represented by the Saga
Petroleum Snorre TLP projects, in which probabilistic methods were used in three different areas: project
economy, code calibration of the ultimate capacity formulations of the tension leg platform system, and
criticality of misposition of the foundation templates for the tethers (Bysveen et al., 1990; Lotsberg,
1991).
One application of probabilistic structural mechanics has become a standard service offered by many
engineering consultant companies around the North Sea: reliability-based inspection planning and life
extension. The commercial use of these methods started when the methods were first developed in 1985
(Madsen, 1985), the first documented commercial use being from 1986 (Larsen et al., 1986). We find
that Amoco Production Company, Phillips Petroleum, Elf, and Statoil have all used these methods on
a number of projects. In Phillips Petroleum these methods are an integral part of their reanalysis system.
Similar methods are used by the Danish Underground Consortium (A. P. Moller, Shell, and Texaco) for
the GORM A (Riber, 1990) and TYRA fields (Pedersen et al., 1992). These methods were also used
by AGIP for requalification of four platforms offshore Congo (Vanzini et al., 1989). The Gulf of Mexico
applications are based on simplified methods compared to these level III models (Bea et al., 1988;
Martindale et al., 1989).
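The reliability-updating idea behind such inspection planning can be sketched with a toy simulation. In the following Python fragment the crack growth law, the probability-of-detection (POD) curve, and all statistics are invented; it only illustrates, in the spirit of Madsen (1985), how a no-find inspection result reduces the computed failure probability.

```python
import random
from math import exp

random.seed(2)

def crack_size(a0, c, years):
    return a0 * exp(c * years)        # toy exponential crack growth law

def pod(a):
    return 1.0 - exp(-a / 2.0)        # hypothetical probability-of-detection curve

a_crit, t_insp, t_life = 20.0, 8.0, 20.0   # critical size, inspection time, life
n = 100_000
fail_prior = fail_post = weight = 0.0
for _ in range(n):
    a0 = random.lognormvariate(-1.0, 0.5)   # initial defect size (invented)
    c = random.gauss(0.15, 0.05)            # growth rate (invented)
    failed = crack_size(a0, c, t_life) > a_crit
    fail_prior += failed
    w = 1.0 - pod(crack_size(a0, c, t_insp))  # P(no detection | a0, c)
    weight += w
    fail_post += w * failed
p_prior = fail_prior / n         # failure probability before inspection
p_post = fail_post / weight      # updated probability, given no crack found
```

Conditioning on the no-detection event down-weights the realizations with large cracks, so p_post falls below p_prior; scheduling the next inspection before p_post regrows past a target value is the essence of reliability-based inspection planning.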
REFERENCES
AAMODT, B. (1984). Application of Finite Element Method to Problems in Linear and Nonlinear Fracture Me-
chanics. Doc. Ing. Thesis. Trondheim, Norway: Norwegian Institute of Technology.
ABRAHAMSEN, E. (1976). Safety requirements of offshore engineering. In: Proceedings of BOSS. Trondheim, Nor-
way: Norwegian Institute of Technology, pp. 877-899.
API (American Petroleum Institute) (1993). Recommended Practice for Planning, Designing and Constructing
Fixed Offshore Platforms-Load and Resistance Factor Design (RP2A-LRFD). Dallas, Texas: American
Petroleum Institute.
ARNBJERG-NIELSEN, T. (1991). Rigid-Ideal Plastic Model as a Reliability Analysis Tool for Ductile Structures.
Ph.D. Dissertation. Lyngby, Denmark: Technical University of Denmark.
BACK-GANSMO, O., and O. BAADSHAUG (1984). Structural Systems Reliability of the Argus Island Tower. Technical
Report No. 84-3355. Hovik, Norway: Det Norske Veritas.
BANON, H. (1994). Assessing fitness for purpose of offshore platforms. Part II. Risk Management, maintenance
and repair. Journal of Structural Engineering, ASCE (in press).
BANON, H., R. G. BEA, F. J. BRUEN, C. A. CORNELL, W. F. KRIEGER, and D. A. STEWART (1994). Assessing
fitness for purpose of offshore platforms. Part I. Analytical methods and inspection. Journal of Structural
Engineering, ASCE (in press).
BEA, R. G., F. J. PUSKAR, C. SMITH, and J. SPENCER (1988). Development of AIM (Assessment, Inspection,
Maintenance) programs for fixed and mobile platforms. In: Proceedings of the OTC. Dallas, Texas: Offshore
Technology Conference Publications.
BELYAEV, Y. K. (1968). On the number of exits across the boundary of a region by a vector stochastic process.
Theory of Probability Applications 13:320-324.
BELYAEV, Y. K., and V. P. NOSKO (1969). Characteristics of excursions above a high level for a Gaussian process
and its envelope. Theory of Probability Applications 14:296-309.
BITNER-GREGERSEN, E. M., and S. HAVER (1989). Joint long term description of environmental parameters for
structural response calculation. In: Proceedings of the 2nd International Workshop on Wave Hindcasting and
Forecasting. Ontario, Canada: Environment Canada, Atmospheric Environment Service, pp. 21-32.
BJERAGER, P. (1989a). Probability computation methods in structural and mechanical reliability. In: Computational
Mechanics of Probabilistic and Reliability Analysis. W. K. Liu and T. Belytschko, Eds. Lausanne, Switzer-
land: Elme Press.
BJERAGER, P. (1989b). Plastic systems reliability by LP and FORM. Computers and Structures 31(2):187-196.
BJERAGER, P., and C. A. CORNELL (1988). Specification for a Failure-Path Based Structural Systems Reliability
Program. Palo Alto, California: C. A. Cornell, Inc.
BJERAGER, P., and R. OLESEN (1987). RAPJAC User's Manual. Report No. 87-2014. Hovik, Norway: Det Norske
Veritas.
BRADSHAW, I. J. (1988). Jack-up structural behavior and analysis methods. In: Mobile Offshore Structures. L. F.
Boswell, C. A. D'Mello, and A. J. Edwards, Eds.
BYSVEEN, S., A. G. KJELAAS, J. LEREIM, and T. MARTHINSEN (1990). Experience from applications of probabilistic
methods in offshore field activities. In: Proceedings of OMAE, Vol. II. New York: American Society of
Mechanical Engineers, pp. 142-149.
CARR, P., M. CLAYTON, P. L. BUSBY, and J. DOBSON (1986). A probabilistic strategy for subsea inspection of steel
structures. In: Proceedings of the European Petroleum Conference. London: Society of Petroleum Engineers,
pp. 187-196.
CIRIA (Construction Industry Research and Information Association) (1976). Rationalization of Safety and Serv-
iceability Factors in Structural Codes. Report No. 63. London: Construction Industry Research and Infor-
mation Association.
CORNELL, C. A., R. RACKWITZ, Y. GUENARD, and R. G. BEA (1984). Reliability evaluation of tension leg platforms.
In: Proceedings of the ASCE Specialty Conference on Probabilistic Methods and Structural Reliability. New
York: American Society of Civil Engineers, pp. 159-162.
DALANE, J. I., R. SKJONG, and I. LOTSBERG (1990). Optimal fatigue design of offshore structures. In: Proceedings
of OMAE. New York: American Society of Mechanical Engineers.
DE, R. S., and C. A. CORNELL (1991). Factors in structural system reliability. In: Proceedings of the IFIP Conference
on Reliability and Optimization of Structural Systems (Munich, Germany), Lecture Notes in
Engineering, Vol. 76. Berlin, Germany: Springer-Verlag.
DER KIUREGHIAN, A., and P.-L. LIU (1986). Structural reliability under incomplete probability information. Journal
of Engineering Mechanics, ASCE 112(1):85-104.
DIAMANTIDIS, D., G. RIGHETTI, and F. ZUCCARELLI (1991). Reliability based requalification criteria for existing
jacket platforms. In: Proceedings of OMAE, Vol. II. New York: American Society of Mechanical Engineers,
pp. 213-219.
636 Applications in Offshore Structures
DITLEVSEN, O., and P. BJERAGER (1986). Methods of structural systems reliability. Structural Safety 3:195-229.
DITLEVSEN, O., and H. O. MADSEN (1990). Bærende Konstruktioners Sikkerhed. SBI-Rapport 221. Statens
Byggeforskningsinstitut. Published partly in Danish, to be published in English 1994.
DNV (Det Norske Veritas) (1977). Rules for the Design, Construction, Inspection of Offshore Structures. Technical
Report. Hovik, Norway: Det Norske Veritas.
DNV (Det Norske Veritas) (1982). Rules for the Design, Construction and Inspection of Offshore Structures,
Appendix C Steel Structures. Reprint with corrections. Hovik, Norway: Det Norske Veritas.
DNV (Det Norske Veritas) (1984). Strength Analysis of Main Structures of Self-Elevating Units. Classification
Note No. 31.5. Hovik, Norway: Det Norske Veritas.
DNV (Det Norske Veritas) (1992). Structural Reliability Analysis of Marine Structures. Classification Note No.
30.6. Hovik, Norway: Det Norske Veritas.
DOE (Department of Energy) (1982). New Fatigue Design Guidance for Steel Welded Joints in Offshore Structures.
London: Department of Energy.
DOE (Department of Energy) (1984). Offshore Installation: Guidance on Design and Construction. London: Her
Majesty's Stationery Office.
FERGUSON, M. C. (1990). A comparative study using API RP2A-LRFD. In: Proceedings of the OTC. Dallas, Texas:
Offshore Technology Conference Publications, pp. 341-349.
FJELD, S. (1977). Reliability of offshore structures. In: Proceedings of the OTC. Dallas, Texas: Offshore Technology
Conference Publications, pp. 459-471.
FRIEZE, P. A. (1989). Probability based safety assessment of existing and future offshore structures. In: Proceedings
of OMAE. New York: American Society of Mechanical Engineers, pp. 355-362.
GIBSTEIN, M., M. BAERHEIM, and P. OSEN (1989). Refined fatigue analysis approach and its application to the
Veslefrikk jacket. In: Proceedings of the International Symposium on Tubular Structures. Erkki Niemi, Ed.
Lappeenrana, Finland.
GUENARD, Y. F. (1984). Application of System Reliability Analysis to Offshore Structures. Report No. 71. Stanford,
California: Stanford University.
GUENARD, Y., and C. A. CORNELL (1986). A method for reliability analysis of steel-jacket offshore platforms under
extreme loading conditions. In: Proceedings of the Advances in Reliability Technology Symposium. Bradford:
University of Bradford.
HAGEN, O., and L. TVEDT (1991). Vector process outcrossing as a parallel system sensitivity measure. Journal of
Engineering Mechanics, ASCE 117(10):2201-2220.
HAUGE, L., R. LOSETH, and R. SKJONG (1992). Optimal code calibration and probabilistic design. In: Proceedings
of OMAE. New York: American Society of Mechanical Engineers.
HAVER, S. (1990). On the modeling of short crested sea for structural response calculations. In: Proceedings of the
European Offshore Mechanics Symposium. Trondheim, Norway: International Society of Offshore and Polar
Engineers.
HELDOR, E. (1979). Description of the Computer Program PROBAN for Level 2 Structural Reliability Analysis.
Veritas Report No. 79-0589. Hovik, Norway: Det Norske Veritas.
HELLAN, O. (1990). USFOS-Computer Program for Progressive Collapse Analysis of Steel Offshore Structures.
Report No. STF71-A90001. Trondheim, Norway: SINTEF Structural Engineering.
HOHENBICHLER, M. (1984). Mathematische Grundlagen der Zuverlässigkeitsmethode erster Ordnung und einige
Erweiterungen. Ph.D. Thesis. Munich, Germany: Technical University of Munich.
HOLM, C. A. (1990). Reliability Analysis of Structural Systems Using Nonlinear Finite Element Method. Ph.D.
Dissertation, Trondheim, Norway: Norwegian Institute of Technology.
HORTE, T., and P. BJERAGER (1991). Finite Element Reliability Method (FERM). Research Report No. 90-2062.
Hovik, Norway: Det Norske Veritas.
ISSC (International Ship Structures Congress) (1964). Proceedings of the 2nd International Ship Structures Con-
gress (ISSC), Delft, the Netherlands, July 20-24, 1964.
Applications in Offshore Structures 637
IRVING, P. E., and L. N. McCARTNEY (1977). Prediction of fatigue crack growth rates: Theory, mechanisms and
experimental results. In: Metal Science (Proceedings of the Fatigue 77 Conference). Cambridge, England:
Cambridge University Press.
JIAO, G., and T. MOAN (1990). Probabilistic analysis of fatigue due to Gaussian load processes. Probabilistic
Engineering Mechanics 5(2).
KARAMCHANDANI, A. (1987). Structural System Reliability Methods. Report No. 83. Stanford, California: Stanford
University.
KARAMCHANDANI, A., J. I. DALANE, and P. BJERAGER (1993a). A systems approach to fatigue of structures. Journal
of Engineering Mechanics, ASCE 118(3):684-700.
KARAMCHANDANI, A., J. I. DALANE, and P. BJERAGER (1993b). System reliability of offshore structures including fatigue
and extreme wave loading. Journal of Marine Structures 4:353-379.
KARSAN, D. I., and A. KUMAR (1990). Fatigue failure paths for offshore platform inspection. Journal of Structural
Engineering, ASCE 116(6):1679-1695.
KIRKEMO, F. (1988). Applications of probabilistic fracture mechanics to offshore structures. Applied Mechanics
Reviews 41(2):61-84.
KJEOY, H., N. G. BOE, and T. HYSING (1989). Extreme response analysis of jack-up platforms. In: 2nd International
Conference on the Jack-Up Drilling Platform. Barking, England: Elsevier Applied Science Publishers.
LARSEN, E. N., R. SKJONG, and H. O. MADSEN (1986). Assessment of pipeline reliability under the existence of
scour-induced free spans. In: Proceedings of the OTC. Dallas, Texas: Offshore Technology Conference
Publications, pp. 475-481.
LASSEN, T. (1989). Measurements and fracture mechanics modeling of fatigue crack growth in welded joints. In:
Computers and Experiments in Stress Analysis. Berlin, Germany: Springer-Verlag.
LLOYD, J. R., and W. C. CLAWSON (1983). Reserve and residual strength of pile founded offshore platforms. In:
Proceedings of the International Symposium on the Role of Design, Inspection, and Redundancy in Marine
Structural Reliability. Williamsburg, Virginia: National Academic Press, pp. 157-196.
LLOYD, J. R., and D. I. KARSAN (1988). Development of reliability-based alternative to API-RP2A. In: Proceedings
of the OTC. Dallas, Texas: Offshore Technology Conference Publications.
LOTSBERG, I. (1991). Probabilistic design of the tethers of a tension leg platform. Journal of Offshore Mechanics
and Arctic Engineering 113(2):162-170.
LOTSBERG, I., and F. KIRKEMO (1989). A systematic method for planning in-service inspection of steel offshore
structures. In: Proceedings of OMAE. New York: American Society of Mechanical Engineers.
MADSEN, H. O. (1985). Bayesian fatigue life prediction. In: Probabilistic Methods in the Mechanics of Solids and
Structures. Berlin: Springer-Verlag.
MADSEN, H. O. (1988). PRODIM Theoretical Manual. Research Report No. 88-2029. Hovik, Norway: Det Norske
Veritas.
MADSEN, H. O. (1990). Sensitivity Factors for Parallel Systems. Internal Report. Lyngby, Denmark: Danish En-
gineering Academy.
MADSEN, H. O., and R. SKJONG (1987). Stochastic modeling of fatigue crack growth. In: Proceedings of the ISPRA
Seminar on Structural Reliability.
MADSEN, H. O., and J. D. SORENSEN (1990). Probability-based optimization of fatigue design, inspection and
maintenance. In: Proceedings of the 4th Symposium on Integrity of Offshore Structures. Barking, England:
Elsevier Applied Science Publishers.
MADSEN, H. O., S. KRENK, and N. C. LIND (1986a). Methods of Structural Safety. Englewood Cliffs, New Jersey:
Prentice-Hall.
MADSEN, H. O., R. SKJONG, and MOGTADERI-ZADEH (1986b). Experience on probabilistic fatigue analysis of
offshore structures. In: Proceedings of the OMAE, Vol. 2. New York: American Society of Mechanical
Engineers.
MADSEN, H. O., R. SKJONG, A. G. TALLIN, and F. KIRKEMO (1987). Probabilistic fatigue crack growth analysis of
offshore structures, with reliability updating through inspection. In: Society of Naval Architecture and Marine
Engineers (SNAME) Conference, Arlington, Virginia.
MADSEN, H. O., R. TORHAUG, and R. SKJONG (1989). PROFAST-Theory Manual. Research Report No. 89-2005.
Hovik, Norway: Det Norske Veritas.
MARTINDALE, S. G., W. F. KRIEGER, S. K. PAULSON, S. T. HONG, C. PETRAUSKAS, T.-M. HSU, and J. E. PFEFFER
(1989). Strength/risk assessment and repair optimization for aging, low-consequence, offshore fixed plat-
forms. In: Proceedings of the OTC, Vol. II. Dallas, Texas: Offshore Technology Conference Publications,
pp. 483-502.
MATHISEN, J., R. RASHEDI, K. MORK, B. ZIMMER, and R. SKJONG (1994). Reliability based code for TLP hull
structures. In: Proceedings of the OMAE. Paper No. 1346. New York: American Society of Mechanical
Engineers.
MINER, M. A. (1945). Cumulative damage in fatigue. Journal of Applied Mechanics 12:159-164.
MUROTSU, Y., H. OKADA, and S. MATSUZAKI (1985). Reliability analysis of frame structure under combined load
effects. In: Proceedings of the 4th International Conference on Structural Safety and Reliability (ICOSSAR).
I. Konishi, M. Shinozuka, and A. H. S. Ang, Eds. New York: International Association for Structural Safety
and Reliability, pp. 117-128.
MOSES, F. (1975). Cooperative Study Project on Probabilistic Methods for Offshore Platforms. Technical Report.
Tulsa, Oklahoma: Amoco Production Company.
MOSES, F., and R. D. LARRABEE (1988). Calibration of the draft RP2A-LRFD for fixed platforms. In: Proceedings
of the OTC. Dallas, Texas: Offshore Technology Conference Publications.
NPD (Norwegian Petroleum Directorate) (1985). Regulations for Structural Design of Load Bearing Structures
Intended for Exploitation of Petroleum Resources. Stavanger, Norway: Norwegian Petroleum Directorate.
Ocean Industry (1991). Planning system focuses structural inspection effort. Ocean Industry March: 53-55.
ORTIZ, K., and A. S. KIREMIDJIAN (1988). Stochastic modeling of fatigue crack growth. Engineering Fracture
Mechanics 29(3):317-334.
PARIS, P., and F. ERDOGAN (1963). A critical analysis of crack propagation laws. Journal of Basic Engineering 85:
528-534.
PEDERSEN, C., J. A. NIELSEN, P. RIBER, H. O. MADSEN, and S. KRENK (1992). Reliability based inspection planning
for the TYRA Field. In: Proceedings of OMAE, Vol. II. New York: American Society of Mechanical Engi-
neers, pp. 255-263.
PIERSON, W. J., and P. HOLMES (1965). Irregular wave forces on piles. Journal of the Waterways and Harbors
Division, American Society of Civil Engineers 91:1-10.
RAJU, I. S., and J. C. NEWMAN (1981). An empirical stress-intensity factor equation for surface crack. Engineering
Fracture Mechanics 15:185-192.
RIBER, J. P. (1990). Probabilistic Reliability Based Inspection, B&R/Veritec Seminar Notes. London: Brown &
Root.
RICE, S. O. (1954). Mathematical analysis of random noise. In: Selected Papers on Noise and Stochastic Processes.
N. Wax, Ed. New York: Dover, pp. 180-181.
SHANG-XIAN, W. (1985). Shape change of surface crack during fatigue growth. Engineering Fracture Mechanics
22:897-913.
SIGURDSSON, G., E. H. CRAMER, A. J. HINKLE, and R. SKJONG (1992). Probabilistic methods for durability and
damage tolerance analysis. Paper presented at the USAF Structural Integrity Conference, San Antonio, Texas,
December 1-3, 1992.
SIGURDSSON, G., J. AMDAHL, R. SKJONG, and B. SKALLERUD (1993). Probabilistic collapse analysis of jackets. In:
Proceedings of the 6th International Conference on Structural Safety and Reliability (ICOSSAR). New York:
International Association for Structural Safety and Reliability.
SKJONG, R. (1987). Extended Lifetime of Offshore Structures. Lecture Notes at Norwegian Chartered Engineers
Association Course, December 1-2, 1987.
Applications in Offshore Structures 639
SKJONG, R., and H. O. MADSEN (1987). Practical stochastic analysis of offshore platforms. Ocean Engineering
14(4):313-324.
SKJONG, R., and R. TORHAUG (1991). Rational methods for fatigue design and inspection planning of offshore
structures. Marine Structures 4(4):381-406.
SKJONG, R., I. LOTSBERG, and R. OLESEN (1989). Inspection strategies for offshore structures. In: Proceedings of
ASCE Structures Congress. New York: American Society of Civil Engineers, pp. 412-421.
SORENSEN, J. (1988). PRODIM User's Manual. Research Report No. 88-2030. Hovik, Norway: Det Norske Veritas.
STAHL, B. (1975). Probabilistic methods for offshore platforms. In: Annual Meeting Papers. Dallas: American
Petroleum Institute, pp. JI-30.
STEWART, G., and J. W. VAN DE GRAAF (1990). A methodology for platform collapse analysis based on linear
superposition. In: Proceedings of OTC. Dallas, Texas: Offshore Technology Conference Publications.
STEWART, G., M. EFTHYMIOU, and J. H. VUGTS (1988). Ultimate strength and integrity assessment of fixed offshore
structures. In: Proceedings of BOSS, Vol. 3. Trondheim, Norway: Tapir, pp. 1205-1221.
THOMAS, G. A. N. (1992). The upstream oil and gas industry's initiative in the development of international
standards based on API Standards. In: Proceedings of OTC. Dallas, Texas: Offshore Technology Conference
Publications, pp. 431-439.
TURNER, R. C., J. T. GIERLINSKI, G. M. ZINTILIS, M. J. BAKER, and HOLNICKI-SZULC (1988). The virtual distortion
method applied to the reliability analysis of offshore structures. In: Proceedings of the 2nd IFIP WG 7.5
Conference. P. Thoft-Christensen, Ed. Berlin, Germany: Springer-Verlag.
TURNER, R. C., C. P. ELLINAS, and G. A. N. THOMAS (1992). Towards the worldwide calibration of API RP2A-
LRFD. In: Proceedings of OTC. Dallas, Texas: Offshore Technology Conference Publications, pp. 513-520.
TVEDT, L. (1993). PROBAN, Version 4, Theory Manual. Research Report No. 93-2056. Hovik, Norway: Det Norske
Veritas.
TVEDT, L. (1990). Distribution of quadratic forms in normal space-application to structural reliability. Journal of
Engineering Mechanics, ASCE 116(6):1183-1197.
TVEDT, L., L. HAUGE, and R. SKJONG (1990). PROBAN, Version 4-Optimization, Theoretical Manual. Research
Report No. 90-2049. Hovik, Norway: Det Norske Veritas.
VANZINI, R., P. ROSSETTO, L. CONZ, G. FERRO, and G. RIGHETTI (1989). Requalification of offshore platforms on
the basis of inspection results and probabilistic analyses. In: Proceedings of OTC, Vol. II. Dallas, Texas:
Offshore Technology Conference Publications, pp. 481-492.
VUGTS, J. H., and G. EDWARDS (1992). Offshore structural reliability assessment-from research to reality. In:
Proceedings of BOSS. (in supplement). London: BPP Technical Services.
WINTERSTEIN, S. R. (1988). Nonlinear vibration models for extremes and fatigue. Journal of Engineering Me-
chanics, ASCE 114(10):1772-1790.
WIRSCHING, P. (1983). Probability-Based Fatigue Design for Offshore Structures. Final Project Report, API-PRAC
Project 81-15. Tucson, Arizona: University of Arizona.
WIRSCHING, P. (1984). Fatigue reliability of offshore structures. Journal of Structural Engineering, ASCE 110:
2340-2356.
WIRSCHING, P. H., K. ORTIZ, and Y. N. CHEN (1987). Fracture mechanics fatigue model in reliability format. In:
Proceedings of OMAE, Vol. III. New York: American Society of Mechanical Engineers, pp. 331-337.
26
APPLICATIONS IN BRIDGES
1. INTRODUCTION
The eighth annual report of the Secretary of Transportation to the Congress of the United States on the
highway bridge replacement and rehabilitation program (HBRRP) clearly attests to the need for reviving
the nation's aging transportation system. According to this report, 40% of the 575,607 inventoried
highway bridges in the United States were eligible for HBRRP funding (Federal Highway Administra-
tion [FHWA], 1986). Similar statements have also been made in more recent reports to the Congress.
The condition of the nation's bridges has remained of high priority for the FHWA and the state highway
agencies. The FHWA recommends that the states, in developing bridge projects, consider the rehabili-
tation alternative before deciding to replace a structure. Reliability assessment can be used effectively
for evaluating the condition of existing structures, comparing alternative rehabilitation options, and
designing new structures that are reliability based.
This chapter summarizes developments in the area of bridge reliability. The reliability of the superstructure,
piers, and pier foundations is discussed. Also, developments in reliability-based design codes
for bridge structures are summarized.
2.1. Notations
2.2. Abbreviations
3. RELIABILITY ASSESSMENT
The reliability of a bridge structure can be assessed in a systems framework. The consideration of all
types of uncertainty is essential for obtaining realistic measures of reliability. In this chapter, a simple
definition of a bridge system is adopted. The bridge system is considered to consist of three main
subsystems: the superstructure, pier columns, and pier foundations. Reliability assessment of each of
these components is discussed in Sections 3.1, 3.2, and 3.3. The treatment of the complete bridge as a
system is discussed in Section 3.4.
P_f(t) = Φ(−β(t))   (26-1)
where Φ( ) is the cumulative distribution function of the standard normal variable, and β is the reliability
index (safety index) at time t, given by
β(t) = μ_Z/σ_Z   (26-2)
where μ_Z = μ_Dp − μ_Ds, σ_Z = (σ_Dp² + σ_Ds²)^(1/2), μ is the mean, and σ is the standard deviation.
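As a numerical illustration of Eqs. (26-1) and (26-2), the following Python fragment uses invented depth statistics (not values from the chapter), treating the pier foundation depth Dp and the scour depth Ds as independent normal variables:

```python
from math import sqrt
from statistics import NormalDist

# Illustrative values only, in meters
mu_dp, sigma_dp = 6.0, 0.5    # pier foundation depth Dp: mean, std. dev.
mu_ds, sigma_ds = 4.0, 0.8    # computed scour depth Ds: mean, std. dev.

mu_z = mu_dp - mu_ds                          # mean of the margin Z = Dp - Ds
sigma_z = sqrt(sigma_dp ** 2 + sigma_ds ** 2) # std. dev. of Z
beta = mu_z / sigma_z                         # reliability index, Eq. (26-2)
pf = NormalDist().cdf(-beta)                  # failure probability, Eq. (26-1)
```

With these numbers β ≈ 2.12, corresponding to a scour failure probability of roughly 1.7%; repeating the calculation with the (time-dependent) scour depth statistics at successive times t traces out the curve of Fig. 26-1.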
Failure probabilities due to scour of an example bridge computed using Eq. (26-1) are plotted as a
function of time t in Fig. 26-1. As described in Chapter 18 of this book, the cumulative distribution
function of the life of a structure (in this case the life of the bridge with respect to scour failure) is
identical to the graph of failure probability as a function of time (Fig. 26-1).
Johnson (1992) derived a relationship between the failure probability due to scour and safety factor
for scour defined by
SF = Dp/Ds   (26-3)
where Dp is the pier foundation depth and Ds is the computed scour depth. The safety factor, as a
function of the probability of failure due to scour, is given by
(26-4)
This equation can be used to determine the safety factor required to obtain a desired level of reliability.
Detailed derivation of Eq. (26-4) and the associated assumptions may be found in Johnson (1992).
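The idea behind such a relationship can be sketched numerically. The fragment below is not Johnson's closed-form result; it simply bisects the normal model above for the safety factor, interpreted here as the ratio of mean foundation depth to mean scour depth, that achieves a target reliability index under assumed coefficients of variation:

```python
from math import sqrt

def required_sf(beta_t, cov_dp, cov_ds):
    # With the mean scour depth normalized to 1, beta(sf) is
    # (sf - 1) / sqrt((sf*cov_dp)**2 + cov_ds**2), which increases with
    # sf, so bisection finds the sf reaching the target index.
    lo, hi = 1.0, 10.0
    for _ in range(60):
        sf = 0.5 * (lo + hi)
        beta = (sf - 1.0) / sqrt((sf * cov_dp) ** 2 + cov_ds ** 2)
        if beta > beta_t:
            hi = sf
        else:
            lo = sf
    return 0.5 * (lo + hi)

sf = required_sf(3.0, 0.08, 0.20)   # target beta = 3, assumed COVs
```

For these assumed coefficients of variation the required safety factor is about 1.73; because the scour depth is much more uncertain than the foundation depth, its coefficient of variation dominates the result.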
Probabilistic models of flow velocities and ice floe conditions could be developed from historical data. Methods used in offshore structures reliability
analysis for the development of probabilistic models for flow velocities and ice floe conditions (e.g.,
Vivatrat and Slomski, 1984) may be adapted. A current National Cooperative Highway Research Pro-
gram (NCHRP) study on debris impact forces on piers may provide a database for this load.
Once the loads are known, the reliability of piers for a bridge can be computed by treating them as
compression members with eccentric axial forces. The reliability of such columns can be assessed by
using the techniques and results developed in the building industry (Ellingwood et al., 1980). However,
special attention should be given to the effect of scaling, because bridge piers tend to be larger, both in
cross-section and in amount of reinforcing steel, than columns in buildings.
[Figure 26-1: probability of failure due to scour (logarithmic scale, 10^-7 to 10^-1) plotted against time.]
In the reviewed literature, researchers have concentrated their efforts on studying two failure modes:
ultimate flexure failure and fatigue failure of structural details.
In performing reliability studies, uncertainties in strength measures and loads need to be quantified.
Also, computational methods for assessing structural reliability and developing reliability-based safety
factors need to be defined. The strength measures depend on the material types, construction method,
and failure modes. The loads include the dead load, live load, impact loads, and other environmental
loads, for example, wind, earthquake, snow, ice, temperature, water pressures, and debris.
Moses and Ghosn (1979, 1985) performed a comprehensive study on bridge girder reliability as-
sessment and code calibration. They considered several failure modes, including flexure and fatigue.
The statistical characteristics of the strength measures and loads were investigated. They developed a
live load model for the maximum 50-year bending moment (M) on a bridge,
M = a m W H g i G,   (26-5)
where a is a deterministic parameter that depends on a truck configuration and span length; m is a
random factor that depends on the variability of the load effect of a truck type; W is the ninety-fifth
percentile of weight for a dominating truck type at the site; H is a value related to the probability of
having closely spaced vehicles on a bridge that depends on the likelihoods of multiple presence and
overloads; g is a girder distribution factor; i is a dynamic amplification factor; and G is a future growth
factor. The value of H in Eq. (26-5) was derived on the basis of Monte Carlo simulation; it falls
between 2 and 4% for two-lane bridges. The girder distribution factor depends on the method of analysis,
for example, AASHTO working stress design, or finite element analysis. Moses and Ghosn (1985) used
weigh-in-motion studies to estimate the values of the parameters in Eq. (26-5). Nowak et al. (1987)
investigated the live loads on bridges, and assessed the statistical characteristics of these parameters. In
the development of the Ontario highway bridge code, Grouni and Nowak (1984) assumed the upper
tail of the distribution of the 50-year maximum live load to be exponential. The impact load factors
were investigated by Billing (1984). Researchers reported insufficient data in this area (Nowak et al.,
1987). Probabilistic characteristics of environmental loads developed by Ellingwood et al. (1980) for
building structures can also be used for bridge reliability analysis.
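The text notes that the value of H in Eq. (26-5) was derived by Monte Carlo simulation and falls between 2 and 4% for two-lane bridges. The sketch below shows, under an assumed Poisson arrival model with hypothetical traffic intensity and crossing time (not the actual Moses-Ghosn derivation), how such a multiple-presence fraction could be estimated:

```python
import random

def multiple_presence_fraction(trucks_per_hour, crossing_time_s, n_hours, seed=1):
    """Fraction of trucks that arrive while the previous truck is still on
    the span, assuming Poisson arrivals (an illustrative model only)."""
    rng = random.Random(seed)
    rate_per_s = trucks_per_hour / 3600.0
    t, prev_arrival = 0.0, None
    overlap, total = 0, 0
    while t < n_hours * 3600.0:
        t += rng.expovariate(rate_per_s)   # exponential headway between trucks
        if prev_arrival is not None and t - prev_arrival < crossing_time_s:
            overlap += 1
        prev_arrival = t
        total += 1
    return overlap / total

# With about 30 trucks per hour and a 3-s span-crossing time, the fraction
# falls in the 2-4% range quoted above.
fraction = multiple_presence_fraction(30, 3.0, n_hours=2000)
```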
The dynamic factors for bridges were investigated by Hwang and Nowak (1991). It was concluded
that the dynamic factors decrease as the gross vehicle weight is increased. Also, in general, the dynamic
and static live loads can be considered as uncorrelated. The coefficient of variation of the dynamic
factor ranged from 0.4 to 0.7, as a function of the span length. Also, the dynamic load factor for one
truck is larger than the dynamic factor for two side-by-side trucks.
In the reliability assessment of the superstructure of a bridge, the modeling of the loads requires
adequate knowledge of the uncertainty sources and magnitude, the relationship between the nominal
(or design) loads and the mean (real) loads, the probability distribution types, the variations of loads
with time, and the stochastic load combinations. Moses and Ghosn (1985), Nowak et al. (1987), and
Nowak and Hong (1991) have studied bridge loads and combinations. In these studies, load surveys,
load simulation, and analytical models were used to evaluate bridge loads and their combinations prob-
abilistically. The results of these studies were used in the reliability assessment of bridge girders (Moses
and Ghosn, 1985; Tabsh and Nowak, 1991). These studies concluded that the lane load (AASHTO,
1989) is governed by a single truck for spans up to 120 ft for bending moment and up to 90 ft for
shear effects. Two trucks on a bridge at the same time govern longer spans. For two-lane bridges, the
maximum moment and shear effects were obtained for cases involving side-by-side trucks. Also, Nowak
and Hong (1991) concluded that girder moment distribution factors in the current AASHTO (1989)
guidelines are conservative for girders with large spacing. Tabsh and Nowak (1991) investigated the
flexural reliability of highway bridge girders under the combined effect of dead load, live load, and
impact. Reliability measures were determined as functions of girder spans. The statistical characteristics
of the strength parameters were taken from the available data in the literature (e.g., Ellingwood et al.,
1980). The truck (live) loads were modeled on the basis of a single unit and a semitrailer. The authors
investigated composite steel girders, noncomposite steel girders, reinforced concrete girders, and pre-
stressed concrete girders. In general, prestressed concrete provided the largest values for the safety index
for all spans. The composite steel girders provided the other extreme end, that is, the smallest safety
indices for all spans. The safety indices ranged from 2.5 to 4 for all bridges. The investigated bridges
were designed according to the 1989 bridge specifications (AASHTO, 1989). For noncomposite steel
girders, the reliability indices were determined to be 3 to 3.5; for composite steel they are 2.5 to 3.5;
and for reinforced and prestressed concrete they are 3.5 to 4. Figure 26-2 provides a summary of these
reliability indices (Tabsh and Nowak, 1991).
3.3.2. Reliability of superstructures as a system. The bridge superstructure, consisting of all the
girders, stiffeners, and slabs, may be treated as a structural system and its reliability can be computed
using methods of structural systems reliability analysis (see Chapter 8). The system reliability will
depend on the correlation between girder strengths. Studies by Tabsh and Nowak (1991) show that an
assumed full correlation (correlation coefficient = 1) will provide lower system reliability than an as-
sumed zero correlation. The ratio of system safety index to individual girder safety index could be as
high as 1.6, or even more in some cases. The increased reliability is due to system redundancy.
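The correlation effect reported by Tabsh and Nowak can be reproduced qualitatively with a toy load-sharing model (not their actual bridge model): a system whose capacity is the sum of five girder strengths, with strength statistics chosen arbitrarily for illustration, fails more often when the strengths are fully correlated than when they are independent.

```python
import random

def system_failure_prob(fully_correlated, n_girders=5, n_sims=20000, seed=2):
    """Monte Carlo failure probability of a load-sharing system whose
    capacity is the sum of normally distributed girder strengths
    (mean 1.5, std 0.15, arbitrary units) under a fixed demand of 6.5."""
    rng = random.Random(seed)
    demand, failures = 6.5, 0
    for _ in range(n_sims):
        if fully_correlated:
            capacity = n_girders * rng.gauss(1.5, 0.15)  # one shared realization
        else:
            capacity = sum(rng.gauss(1.5, 0.15) for _ in range(n_girders))
        failures += capacity < demand
    return failures / n_sims

pf_correlated = system_failure_prob(True)
pf_independent = system_failure_prob(False)
# pf_independent falls far below pf_correlated: averaging independent
# strengths narrows the capacity distribution, the redundancy benefit.
```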
Chapter 8 of this book discusses the methods of structural reliability analysis. That chapter also
provides examples of applications in bridges, including the incorporation of a "system factor" in load
and resistance factor design (LRFD) codes to account for system redundancy effects, effects of material
behavior on system reliability, and residual reliability after an accident.
Bennett et al. (1985) studied the effect of redundancy on bridge reliability. Ayyub and Ibrahim (1990)
provided a reliability-based definition of redundancy for truss bridges. Gongkang and Moses (1989)
used reliability methods to define redundancy and extended the investigation to damage tolerability of
bridges. Frangopol and Nakib (1989a) used redundancy measures for the evaluation of bridges. Also,
they investigated the effects of different damage states on both bridge redundancy and reliability.
Optimization techniques have been used in the reliability analysis of bridges. Frangopol and Nakib
[Figure 26-2: safety index (ordinate, approximately 0 to 4.5) versus span (0 to 60 m); curves are shown for prestressed concrete, noncomposite steel, and composite steel girders (Tabsh and Nowak, 1991).]
(1989b) developed a method for system optimization and reliability in bridge inelastic design. In their
study, the optimal design was defined as the bridge of the least weight, subject to a set of constraints
on the design and structural performance. Chapter 16 discusses reliability-based structural optimization
in more detail.
3.3.3. Inspection and life extension. Reliability methods can also be used for inspection planning
and life extension for certain failure modes. Yazdani and Albrecht (1984) used reliability methods
to evaluate the fatigue failure probability as a function of number of load repetitions, which can be
related to bridge life by knowing the forecast of traffic volume and truck size content. Probabilistic
fracture mechanics was used for this purpose. In this study, failure was defined as the propagation of
fatigue crack across the thickness of a flange that has a structural fatigue detail. Monte Carlo simulation
with variance reduction techniques¹ was used for failure probability assessment. They then investigated
the effect of inspection interval, truck weight and truck traffic, and length of service life extension on
the probability of failure for three existing bridges with cover-plated girders. The effect of variable
amplitude loading was considered as outlined by Schilling et al. (1978). Yazdani and Albrecht (1987)
studied the Yellow Mill Pond Bridge, Connecticut. The bridge is on Interstate I-95, and has 14 spans,
with 7 composite rolled girders. The girders have category E' cover plates. The bridge was opened to
traffic in 1958, and in 1970 a crack was discovered during inspection in the eleventh span. The average
daily truck traffic for this bridge is high, about 5660. The mean equivalent stress range was computed
for the failed fatigue detail as 1.2 ksi. The AASHTO (1989) specification sets the allowable fatigue
strength from S-N data at two standard deviations below the mean regression line, which corresponds
to a probability of failure of 0.023. By using probabilistic fracture mechanics, it was determined that
the bridge has a probability of fatigue failure of 0.023 after 16 years of service. Therefore, inspection
should be scheduled before the sixteenth anniversary of the bridge. The researchers suggested that the
bridge should be inspected at years 16, 24, 33, 41, 48, 56, 63, 70, and 76, in order to maintain the
probability of failure at less than 0.0023. Mohammadi and Yazbeck (1989) have also suggested a
probability-based methodology for inspection planning of highway bridges.
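The 0.023 figure quoted above follows directly from the normal-scatter assumption: an allowable strength set two standard deviations below the mean regression line corresponds to Φ(-2).

```python
import math

# Phi(-2): probability of falling more than two standard deviations
# below the mean of a normal distribution.
pf_two_sigma = 0.5 * math.erfc(2.0 / math.sqrt(2.0))
# pf_two_sigma is approximately 0.0228, i.e., the 0.023 quoted above.
```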
¹Monte Carlo simulation and variance reduction techniques are discussed in Chapter 4 of this handbook.
On April 5, 1987, the New York State Thruway Bridge crossing Schoharie Creek collapsed, killing 10
people. The ultimate cause of the failure was scour around the pier foundations; however, following
the erosion of the sediment around the foundation, a series of failures contributed to the actual collapse
of the bridge. The five spans of the bridge were supported by four piers. The total length of the spans
between the abutments was 540 ft. The bridge had a reinforced concrete deck and underlying steel
supporting members that, in turn, were supported by steel bearings on reinforced concrete piers and
end abutments (New York State Thruway Authority, 1987). The sequence of failure events is as follows
(New York State Thruway Authority, 1987): following undermining of pier 3 by scour, the pier moved
west and north. The plinth of pier 3 (a pedestal-like element on top of the footing) was ruptured into
two pieces. The upstream end of the pier dropped into the scour hole. This sequence of events caused
spans 3 and 4 to fall. Analyses indicated that 25 to 30 ft of undermining under the upstream end of
the footing would cause tensile stresses necessary to rupture the plinth. It is possible to analyze such
sequences of failures by event trees and fault trees.
4. RELIABILITY-BASED DESIGN CODES

A general discussion of reliability-based design codes (load and resistance factor design [LRFD]
codes) and their development is given in Chapter 15. Some significant publications relating to reliability-
based bridge design codes are briefly described here.
Nowak and Lind (1979) outlined a methodology for practical bridge calibration. Moses and Ghosn
(1985) and Ghosn and Moses (1986) performed a code calibration for bridges. Kennedy and Baker
(1984) provided resistance factors for steel highway bridges. Also, in a parallel effort, Grouni and
Nowak (1984) performed some calibration on the Ontario highway bridge design code. In an addendum
to the AASHTO specifications (1989), an alternative LRFD code for bridges was provided. Shinozuka
et al. (1989) developed a theoretical basis for obtaining probability-based load combination criteria for
structural design of highway bridges. Shiraki et al. (1989) proposed a procedure for calculating the
optimal load factors for steel rigid-frame piers of bridges, using moment methods.
5. CONCLUDING REMARKS
Developments in the area of bridge reliability are summarized in this chapter. A bridge consists of
the superstructure, pier columns, and pier foundations. The reliability assessment of each of these
components is described. Treatment of the bridge as a system is also presented. Reliability-based design
codes and an application of probabilistic structural analysis in inspection planning and life extension
are also discussed.
REFERENCES
AASHTO (American Association of State Highway and Transportation Officials) (1989). Standard Specifications
for Highway Bridges, 13th ed. Washington, D.C.: The American Association of State Highway and Trans-
portation Officials.
ANG, A. H.-S., and W. H. TANG (1984). Probability Concepts in Engineering Planning and Design, Vol. II: Decision,
Risk, and Reliability. New York: John Wiley & Sons.
AYYUB, B. M., and A. HALDAR (1984). Practical structural reliability techniques. Journal of Structural Engineering,
ASCE 110(8):1707-1724.
AYYUB, B. M., and A. IBRAHIM (1990). Post-tensioned trusses: Redundancy and reliability. Journal of Structural
Engineering, ASCE 116(6):1507-1521.
BENNETT, R. M., A. H.-S. ANG, and D. W. GOODPASTURE (1985). Probabilistic safety assessment of redundant
bridges. In: Proceedings of 4th International Conference of Structural Safety and Reliability. New York:
International Association for Structural Safety and Reliability, pp. 205-211.
BILLING, J. R. (1984). Dynamic loading and testing of bridges in Ontario. Canadian Journal of Civil Engineering
11(4):833-843.
ELLINGWOOD, B., T. V. GALAMBOS, J. G. MACGREGOR, and C. A. CORNELL (1980). Development of a Probability
Based Load Criterion for American National Standard A58. Publication 577. Washington, D.C.: National
Bureau of Standards.
FHWA (Federal Highway Administration). (1986). Highway Bridge Replacement and Rehabilitation Program.
Eighth annual report of the Secretary of Transportation to the Congress of the United States. Washington,
D.C.: Bridge Division, Federal Highway Administration.
FRANGOPOL, D. M., and R. NAKIB. (1989a). Redundancy evaluation of steel girder bridges. In: Proceedings of the
5th International Conference on Structural Safety and Reliability, ICOSSAR '89, Vol. III. New York: Ameri-
can Society of Civil Engineers, pp. 2171-2178.
FRANGOPOL, D. M., and R. NAKIB (1989b). Examples of system optimization and reliability in bridge design. In:
Proceedings of the 5th International Conference on Structural Safety and Reliability, ICOSSAR '89, Vol. II.
New York: American Society of Civil Engineers, pp. 871-878.
GHOSN, M., and F. MOSES (1986). Reliability calibration of bridge design codes. Journal of Structural Engineering,
ASCE 112(4):745-763.
GONGKANG, F., and F. MOSES (1989). Probabilistic concepts of redundancy and damage tolerability for structural
systems. In: Proceedings of the 5th International Conference on Structural Safety and Reliability, ICOSSAR
'89, Vol. II. New York: American Society of Civil Engineers, pp. 967-974.
GROUNI, H. N., and A. S. NOWAK (1984). Calibration of the Ontario highway bridge design code. Canadian
Journal of Civil Engineering 11(4):760-770.
HARRISON, L. F., and J. L. MORRIS (1991). Bridge scour vulnerability assessment. In: Proceedings of the 1991
National Conference on Hydraulic Engineering. New York: American Society of Civil Engineers, pp. 209-
214.
HWANG, E.-S., and A. S. NOWAK (1991). Simulation of dynamic load for bridges. Journal of Structural
Engineering, ASCE 117(5):1413-1434.
JOHNSON, P. A. (1992). Reliability-based pier scour engineering. Journal of Hydraulic Engineering, ASCE 118(10):
1344-1358.
JOHNSON, P. A., and B. M. AYYUB (1992). Assessment of time-variant bridge reliability due to pier scour. Journal
of Hydraulic Engineering, ASCE 118(6):887-903.
JOHNSON, P. A., and R. H. MCCUEN (1991). A temporal, spatial pier scour model. Transportation Research Board
Record 1319:143-149.
KENNEDY, L. D. J., and K. A. BAKER (1984). Resistance factors for steel highway bridges. Canadian Journal of
Civil Engineering 11(2):324-334.
MOHAMMADI, J., and G. J. YAZBECK (1989). Strategies for bridge inspection using probabilistic models. In:
Proceedings of the 5th International Conference on Structural Safety and Reliability, ICOSSAR '89, Vol. III.
New York: American Society of Civil Engineers, pp. 2115-2122.
MOSES, F., and M. GHOSN (1979). A Comprehensive Study of Bridge Loads and Reliability. Report No. FHWA/
OH-85/005. Cleveland, Ohio: Case Western Reserve University.
MOSES, F., and M. GHOSN (1985). A Comprehensive Study of Bridge Loads and Reliability. Report No. FHWA/
OH-85/005. Cleveland, Ohio: Case Western Reserve University.
New York State Thruway Authority (1987). Collapse of the Thruway Bridge at Schoharie Creek. Prepared by Wiss,
Janney, Elstner and Associates, Northbrook, Illinois, and Mueser Rutledge Consulting Engineers, New York.
NOWAK, A. S., and Y.-K. HONG (1991). Bridge live load models. Journal of Structural Engineering, ASCE 117(9):
2757-2767.
NOWAK, A. S., J. CZERNECKI, J.-H. ZHOU, and J. R. KAYSER (1987). Design Loads for Future Bridges. Report
No. FHWNRD-87/069. Ann Arbor, Michigan: University of Michigan.
NOWAK, A. S., and N. C. LIND (1979). Practical bridge code calibration. Journal of the Structural Division, ASCE
105(12):2497-2510.
SCHILLING, C. G., K. H. KLIPPSTEIN, J. M. BARSOM, and G. T. BLAKE (1978). Fatigue of Welded Steel Bridge
Members under Variable-Amplitude Loadings. NCHRP Report 188. Washington, D.C.: Transportation Re-
search Board, National Research Council.
SHINOZUKA, M., H. FURUTA, S. EMI, and M. KUBO (1989). Reliability-based LRFD for bridges: theoretical basis.
In: Proceedings of the 5th International Conference on Structural Safety and Reliability, ICOSSAR '89, Vol.
III. New York: American Society of Civil Engineers, pp. 1981-1986.
SHIRAKI, W., S. MATSUHO, and P. N. TAKAOKA (1989). Probabilistic evaluation of load factors for steel rigid-
frame piers on urban expressway network. In: Proceedings of the 5th International Conference on Structural
Safety and Reliability, ICOSSAR '89, Vol. III. New York: American Society of Civil Engineers, pp. 1987-
1993.
TABSH, S. W., and A. S. NOWAK (1991). Reliability of highway girder bridges. Journal of Structural Engineering,
ASCE 117(8):2372-2387.
VIVATRAT, V., and S. SLOMSKI (1984). Probabilistic selection of ice loads and pressures. Journal of Waterway,
Port, Coastal and Ocean Engineering, ASCE 110(4):375-391.
YAZDANI, N., and P. ALBRECHT (1984). Risk Analysis of Extending Bridge Service Life. Final Report to the
Maryland State Highway Administration. College Park, Maryland: Department of Civil Engineering, Uni-
versity of Maryland.
YAZDANI, N., and P. ALBRECHT (1987). Risk analysis of fatigue failure of highway steel bridges. Journal of
Structural Engineering, ASCE 113(3):483-500.
27
APPLICATIONS IN STEEL
STRUCTURES
PAVEL MAREK
1. INTRODUCTION
Deterministic concepts based on allowable stresses and on a single deterministic safety factor are being
replaced in structural steel design specifications by semiprobabilistic concepts offering a better evalu-
ation of random variables, such as material properties and loads, affecting the reliability of structures.
The reliability assessment procedure, called the limit states method (or load and resistance factor design
[LRFD]¹), is based on the statistical evaluation of material properties, loading effects, and other structural
parameters. Because of the lack of information on the probability distributions of individual random
variables and their interaction, the current applications of reliability methods in specifications are based
on various simplifications and have a deterministic format expressed in terms of partial safety factors
related to the loads (load factors) and the resistance of the structure (resistance factors).
Two basic groups of reliability conditions are considered. The first group is related to limit states of
carrying capacity (ultimate limit states, e.g., strength, stability, fatigue, brittle fracture, and stability of
position), and the second group contains criteria reflecting limiting states of serviceability (e.g.,
deflection).
The principal new ingredient is the use of probabilistic models in the development of partial safety
factors. The models are subject to improvements in correspondence with the advances in reliability
theory and computer technology, new data, test results, and other information. More accurate statistical
characterization of variables involved in the analysis, as well as improvements in the reliability assess-
ment methods, are leading to more uniform reliability of structural components and to more rational
design procedures. Research and development activities relating to the development of the U.S. LRFD
specifications (American Institute of Steel Construction [AISC], 1986) are described in a number of
publications, for example, Galambos and Ravindra (1977, 1978) and Ravindra and Galambos (1978,
1979).
Designers are cautioned to understand the basic concepts and also exercise independent professional
judgment when applying specifications based on limit states methods. This is especially true in the case
Applications in Steel Structures 651
of using and comparing limit states method specifications published in different countries or regions.
Individual specification writing bodies are following the main ideas on reliability assessment; however,
the interpretations of the basic reliability format as well as of individual factors, data, and other infor-
mation differ. The comparison of individual parts, factors, and numerical values in the assessment
process according to any two different limit states design specifications is complicated because of the
different definitions of input values (e.g., definition of specified loadings), evaluation of design values
of material and other characteristics, and arrangement of reliability criteria. Direct comparison (e.g.,
AISC [1986] in the United States, Canadian Standards Association [CSA, 1974] in Canada, and
EUROCODE [1984] in Europe) is possible only by considering the resulting quantities, such as total
weight of a particular steel structure designed according to these three documents.
In the following sections, selected comments illustrate briefly the application of probability and
reliability theory in structural steel design.
2.1. Notations
A Area of cross-section
D Dead load
E Earthquake load
Fy Yield stress (a suffix act indicates actual value)
L Live load
LL Long-duration live load
Lr Roof live load
P Axial force
Pf Probability of failure
Q Structural response (suffix m indicates mean value)
R Structural resistance (the following suffixes are used: act - actual; d - design magnitude; m - mean
value; n - nominal)
S Snow load
SL Short-duration live load
V Coefficient of variation
W Wind load
α Separation coefficient
β Safety index (reliability index)
σ Standard deviation
2.2. Abbreviations
Table 27-1. Load Combinations in Load and Resistance Factor Design Format
1.4D
1.2D + 1.6L + 0.5Lr
1.2D + 1.6L + 0.5S
1.2D + 1.6Lr + 0.5L
1.2D + 1.6Lr + 0.8W
1.2D + 1.6S + 0.5L
1.2D + 1.6S + 0.8W
1.2D + 1.3W + 0.5L + 0.5Lr
1.2D + 1.3W + 0.5L + 0.5S
1.2D + 1.5E + 0.5L
1.2D + 1.5E + 0.2S
0.9D - 1.3W
0.9D - 1.5E
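Scanning combinations like those in Table 27-1 is mechanical; the sketch below encodes the 1986 AISC LRFD combinations and returns the governing (maximum) factored effect. Sign conventions for the counteracting 0.9D cases are simplified here; a real check would track load directions.

```python
def governing_effect(D, L, Lr, S, W, E):
    """Maximum factored load effect over the Table 27-1 combinations
    (all load effects in consistent units)."""
    combos = [
        1.4 * D,
        1.2 * D + 1.6 * L + 0.5 * Lr,
        1.2 * D + 1.6 * L + 0.5 * S,
        1.2 * D + 1.6 * Lr + 0.5 * L,
        1.2 * D + 1.6 * Lr + 0.8 * W,
        1.2 * D + 1.6 * S + 0.5 * L,
        1.2 * D + 1.6 * S + 0.8 * W,
        1.2 * D + 1.3 * W + 0.5 * L + 0.5 * Lr,
        1.2 * D + 1.3 * W + 0.5 * L + 0.5 * S,
        1.2 * D + 1.5 * E + 0.5 * L,
        1.2 * D + 1.5 * E + 0.2 * S,
        0.9 * D - 1.3 * W,
        0.9 * D - 1.5 * E,
    ]
    return max(combos)

# Hypothetical load effects (e.g., kN): 1.2D + 1.6L + 0.5S governs here.
effect = governing_effect(D=100, L=50, Lr=10, S=20, W=30, E=25)
```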
Figure 27-1. Response history: (a) Actual time dependence of response, (b) sorted response history expressed by
marginal curve. (Source: Marek [1990]. Reprinted with permission.)
[Figure 27-2: panels showing marginal curves and corresponding histograms on normalized scales (-1.0 to +1.0) for (a) wind, (b) snow, (d) long-duration load, and an earthquake-type load.]
Figure 27-2. Selected examples of marginal curves and corresponding histograms. (Source: Marek, P., M. Gustar,
and P. J. Tikalski [1993]. Monte Carlo simulation-a tool for better understanding of LRFD. Journal of Structural
Engineering, ASCE 119[ST5]:1586-1599. Reprinted with permission from the American Society of Civil
Engineers.)
[Figure 27-3: a steel structure carrying loads Lr, S, and E.]
Figure 27-3. Loading scheme of a steel structure. (Source: Marek [1990]. Reprinted with permission.)
The table at the left in Fig. 27-4 contains the extreme (maximum) forces P corresponding to individual
loadings; the table on the right in Fig. 27-4 contains maximum and minimum values of force P cor-
responding to the summation of all applied extreme loads and the magnitudes of maximum and mini-
mum responses corresponding to selected levels of probability of exceedance. The positive signs in the
"minimum column" represent tensile forces and the negative signs represent compressive forces.
This Monte Carlo simulation approach allows consideration of different design situations (erection,
reconstruction, fire, etc.) by adjusting the shapes and extreme magnitudes of the marginal curves, and/
or by selecting a suitable probability of exceedance. For example, a nonexceedance probability of 99.9%
may be chosen in the case of carrying capacity limit states, and a 90% nonexceedance probability in
the case of serviceability limit states.
The nominal (specified or service) magnitudes of loads (see Section 3.2) are not introduced at all in
this procedure. Those magnitudes may be considered as a heritage from the era of the allowable stress
design.
This procedure based on Monte Carlo simulation may lead to a better understanding of response
combinations, to material savings, and to improvements to the limit states methods.²
²In addition to the LRFD specifications load combination procedure and the Monte Carlo simulation procedure described in
this chapter, there are also a number of other methods of load combination that can be used in steel structures design. A discussion
of these methods may be found in Wen (1990).
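The simulation procedure described above can be sketched as follows. The marginal distributions are illustrative stand-ins, not those of Marek (1990): dead load nearly constant, long-duration live load uniform, wind treated by magnitude only. The 99.9% nonexceedance value of the combined response is then compared with the naive sum of individual extremes.

```python
import random

def combined_design_response(n_sims=100000, seed=3):
    """99.9%-nonexceedance combined response from assumed marginal
    distributions, versus the sum of the individual extreme values."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_sims):
        dead = rng.gauss(100.0, 5.0)        # nearly constant response
        long_live = rng.uniform(0.0, 40.0)  # long-duration live load
        wind = abs(rng.gauss(0.0, 10.0))    # wind, magnitude only
        samples.append(dead + long_live + wind)
    samples.sort()
    q999 = samples[int(0.999 * n_sims)]
    naive_sum = (100.0 + 3 * 5.0) + 40.0 + 3 * 10.0  # individual extremes
    return q999, naive_sum

q999, naive_sum = combined_design_response()
# q999 falls well below naive_sum: the extremes of the individual loads
# rarely coincide, which is the source of the material savings.
```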
[Figure 27-4: histogram of the combined axial force P; positive values denote tension, negative values compression.]
Figure 27-4. Frequency distribution of simultaneous effects of dead load, long-duration live load, and wind load.
(Source: Marek [1990]. Reprinted with permission.)
4. MATERIAL PROPERTIES
Material properties form a significant part of reliability-controlling factors in structural steel design.
In particular, the yield stress, ultimate stress, fracture toughness, ductility, modulus of elasticity, thermal
coefficient, shear modulus, and properties related to low and high cycle fatigue (see, e.g., Committee
on Fatigue and Fracture, 1982) must be considered. The design values of individual properties are
usually derived from a statistical analysis of a large population of test results, applying a selected
level of probability of exceedance.
Attention has long been paid, in many countries, to the collection and evaluation of data on mechanical
properties of structural steel. Material properties data collected during the development of the AISC LRFD
specifications (AISC, 1986) are described by Galambos and Ravindra (1978). Figures 27-5 and 27-6 are
illustrative of the results from some recent work on steel properties (Mrazik, 1987; Bathon, 1992). Figure
27-5 also shows how different probability distributions can be fit to the same data.
Much more work has to be done in order to assure reliable information considering not only different
steel grades, but also different steel products, effects of temperature, loading rates, and fabrication technology.
5. RESISTANCE
5.1. Resistance in the Frame of Ultimate Limit States
The group of ultimate limit states contains different load carrying capacity criteria related to several
failure modes. A limit state may correspond to failure or intolerable local plastic deformation of a
structure (strength), to second-order effects (stability), to crack initiation and propagation (fatigue), to
toughness of the material (brittle fracture), to overturning (stability of position), etc. Structural resistance
in each of these failure modes depends on different material properties and mathematical models.
[Figure 27-5: Weibull, lognormal, and normal probability density fits to yield stress data for steel grade 11 375 (thickness 14-25 mm, tested 1971-75); sample mean = 277.2 MPa, standard deviation = 21.6 MPa, COV = 0.078; fitted location parameters of 231.4 MPa (Weibull) and 206.8 MPa (lognormal) are indicated.]
Figure 27-5. Yield stress statistical characteristics of one type of Czechoslovak structural steel grades. (Source:
Mrazik [1987]. Reprinted with permission.)
The statistical characteristics, as well as the minimum yield stress at selected levels of probability of exceedance, may be
computed from an adequately large set of experimentally established data. Similarly, the dispersion of
actual cross-sectional areas, Aact (compared to nominal areas Anom), can be analyzed. The resulting design
carrying capacity of a structural member (e.g., resistance of a tension member) depends on the evaluation
of the product Ract at a selected level of probability of exceedance:

Ract = Aact Fy,act   (27-1)
Marek et al. (1990) have developed a procedure based on Monte Carlo simulation for determining
the design strength of built-up members (i.e., members composed of two or more parts), considering
the yield stresses and cross-sectional properties of each part (component) as independent variables. Such
an analysis, considering the same level of probability of exceedance for the built-up member as for the
homogeneous member, may lead to higher design values of carrying capacity than those computed by
the usual approach of multiplying the total cross-sectional area by the yield stress. Figure 27-7 shows
the "bonus" corresponding to a member built up of one to six components of equal cross-sectional
areas; the calculation was performed using the computer program M-Star (1991). The bonus is about
5 to 15% compared to the usual approach. A bonus factor of 1.05 for built-up members has already
been introduced in Czechoslovak specifications (Czechoslovak Institute for Standards [CSN], 1969).
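The built-up member "bonus" can be reproduced with a Monte Carlo sketch in the spirit of Marek et al. (1990), though with illustrative yield-stress statistics rather than their data: the 0.1%-fractile strength of a member of k independent components exceeds that of a single component of the same total area.

```python
import random

def design_strength(k, area_total=1.0, mean_fy=290.0, std_fy=22.0,
                    n_sims=50000, seed=4):
    """0.1%-nonexceedance strength of a member built up from k equal-area
    components with independent normal yield stresses (MPa, illustrative)."""
    rng = random.Random(seed)
    part_area = area_total / k
    samples = sorted(
        sum(part_area * rng.gauss(mean_fy, std_fy) for _ in range(k))
        for _ in range(n_sims)
    )
    return samples[int(0.001 * n_sims)]  # 0.1% fractile of member strength

bonus = design_strength(4) / design_strength(1)
# For four components the bonus is roughly 15%, consistent in magnitude
# with the 5 to 15% range cited above.
```

The effect arises because the yield stresses of the components average out, narrowing the strength distribution of the built-up member.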
6. RELIABILITY
[Figure 27-6: histogram of yield stress observations (abscissa 36 to 74 ksi); variance = 13.25 ksi², COV = 7.77%.]
Figure 27-6. Yield stress statistical characteristics of U.S. structural steel grade A36. (Source: Bathon [1992].
Reprinted with permission.)
As long as the resistance R exceeds the response Q, no limit state exists. However, because Q and R are random variables, there is a small probability Pf that
R may be less than Q (see Fig. 27-8).
If the actual probability distribution shape of either (R - Q) or In(R/Q) is known, and an acceptable
failure probability can be specified, then completely probabilistic design criteria can be established.
Because of the lack of sufficient data on the distributions of individual random variables, the reliability
evaluation in the LRFD format (AISC, 1986) and several other specifications is based mainly on normal
or lognormal distributions for the composite variables Q and R. The reliability is expressed by the safety
index (reliability index) β as defined below.

1. Consider (R - Q). Failure is defined by (R - Q) < 0. In this case the safety index is given by

β = (R_m - Q_m) / (σ_R² + σ_Q²)^(1/2)    (27-2)

where R_m and Q_m are the mean values of R and Q, respectively, and σ_R and σ_Q are the standard deviations of
R and Q, respectively. Also,

(σ_R² + σ_Q²)^(1/2) = α_R σ_R + α_Q σ_Q    (27-3)

where α_R and α_Q are the separation coefficients for R and Q, respectively. Because of the incompleteness
of statistical data, simplified separation coefficients are being introduced:

α_R = α_Q ≈ 0.75    (27-4)
[Plot: design stress (ksi, 30.5 to 60.5) vs. number of components (1 to 6) for steel grades A36, A441(42),
A572(50), and A572(60); the "bonus" band is marked.]
Figure 27-7. Design stress corresponding to the number of partial sections in built-up steel member. Arrows show
the "bonus" representing the material savings. (Source: Marek, Venuti, and Gustar [1990]. Reprinted with
permission.)
2. Alternatively, consider ln(R/Q). Failure is defined by (R/Q) < 1; that is, ln(R/Q) < 0. In this case the safety
index is given by

β = ln(R_m/Q_m) / (V_R² + V_Q²)^(1/2)    (27-5)

(27-6)
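The two safety-index definitions translate directly into code; a small sketch with illustrative numbers (not taken from the chapter):

```python
import math

def beta_normal(r_mean, q_mean, sd_r, sd_q):
    """Safety index of Eq. (27-2): normally distributed R and Q."""
    return (r_mean - q_mean) / math.hypot(sd_r, sd_q)

def beta_lognormal(r_mean, q_mean, v_r, v_q):
    """Safety index of the ln(R/Q) formulation, Eq. (27-5)."""
    return math.log(r_mean / q_mean) / math.hypot(v_r, v_q)

# Hypothetical tension member: mean resistance 60 kips, mean load effect 38 kips.
print(round(beta_normal(60.0, 38.0, 5.0, 4.0), 2))
print(round(beta_lognormal(60.0, 38.0, 5.0 / 60.0, 4.0 / 38.0), 2))
```

For moderate coefficients of variation the two formulations give nearly the same β, which is why the lognormal form could be adopted for code calibration.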
In the development of the LRFD format (Ellingwood and Galambos, 1982; AISC, 1986), the β
values computed using Eq. (27-5) are made to meet certain "target" values (β_T) by selecting the load
and resistance factors suitably. For example, (1) under dead plus live and snow loading, β_T = 3.0 for
members and β_T = 4.5 for connections; (2) under dead plus live plus wind loading, β_T = 2.5 for members;
and (3) under dead plus live plus earthquake loading, β_T = 1.75 for members.
[Panels: (a) frequency distributions of resistance R and load effect Q (kips); (b) frequency distribution of ln(R/Q).]
Figure 27-8. Frequency distributions: (a) resistance and load effects; (b) logarithm of R/Q.
Monte Carlo simulation, in contrast, works directly with frequency distributions of the
individual variables reflecting the actual characteristics of each variable (e.g., time dependency of
loading effects and scatter of mechanical and geometric properties of the structure). The evaluation
of the function ln(R/Q) may be conducted considering the simultaneous effect of all independent
variables. The Monte Carlo method of simulation generates the frequency distributions of R, Q, and ln(R/Q).
As an example, the reliability of a built-up structural steel member subjected to tension was
investigated by Marek et al. (1993), using the computer program M-Star (1991). Variations of yield stress,
cross-sectional area, and the time-dependent load effect histories of several independent loads were
considered. Figure 27-9a shows an example of the computer program output, including the resulting
frequency distribution curve of the function ln(R/Q); Fig. 27-9b shows the individual frequency distribution
curves of resistance R and response Q. The failure probability P_f is given by the area of the probability
distribution of ln(R/Q) to the left of the vertical axis.
This Monte Carlo approach can be used to evaluate the reliability of structural members considering
multiple load effects and scatter of structural resistance in a manner consistent with limit states method
rules. The advantage of this approach is its ability to accommodate the actual frequency distributions
of many independent variables expressed by histograms rather than assuming simplified distributions,
such as Gaussian and lognormal, for the variables. This method demonstrates the dependence of failure
probability on the actual shape of frequency distributions of the independent variables.
The Monte Carlo calculations can be conducted in just tens of seconds on a 486/50-based personal
computer for most problems. Improved capacity and speed of personal computers enable engineers to
use Monte Carlo simulation even for complex problems. More detailed discussions on Monte Carlo
simulation may be found in Sundararajan (1985) and Chapter 4 of this book.
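As a sketch of this approach, the failure probability can be estimated directly as the fraction of simulated ln(R/Q) values falling below zero. The distributions below are illustrative assumptions for a hypothetical tension member, not the data used by Marek et al. (1993):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000

# Hypothetical member: distributions are illustrative assumptions.
fy = rng.normal(46.9, 3.6, N)       # yield stress, ksi
area = rng.normal(10.0, 0.2, N)     # cross-sectional area, in^2
R = fy * area                       # resistance, kips
Q = rng.gumbel(300.0, 25.0, N)      # combined load effect, kips

g = np.log(R / Q)
p_f = float(np.mean(g < 0.0))       # area of the ln(R/Q) curve left of zero
print(f"estimated failure probability: {p_f:.2e}")
```

Any histogram-based distribution can be substituted for the normal and Gumbel draws above, which is exactly the flexibility the text attributes to the method.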
In addition to the LRFD approach and the Monte Carlo simulation method, the stress-strength
interference method, first-order and second-order reliability methods (FORMs and SORMs), and the
[M-Star output for the function ln((1*(0.95 + 0.1*a)*fy1) / ((14/1.2)*(0.9 + 0.3*d) + 9*w + 8*sn + 9*l1 + 7.5*sl)):
Min. = -0.0965, Median = 0.804, Max. = 2.944; Mean = 0.834, Disp. = 0.0931, Dev. = 0.3050;
Var. = 0.3654, Asym. = 0.891, Excess = 2.240; percentiles: 0.01% = -0.001802, 0.10% = 0.0862, 99.99% = 2.588.]
Figure 27-9. Example of reliability evaluation, using Monte Carlo simulation: (a) computer program output; (b)
frequency distribution curves of R and Q. (Source: Marek, P., M. Gustar, and P. J. Tikalsky [1993]. Monte
Carlo simulation-a tool for better understanding of LRFD. Journal of Structural Engineering, ASCE 119[ST5]:
1586-1599. Reprinted with permission from the American Society of Civil Engineers.)
probabilistic finite element method can also be used for the reliability assessment of steel structures. In
fact, Mahadevan and Haldar (1991) have used probabilistic finite element methods to validate the LRFD
approach. Chapters 2, 3, and 5 of this book discuss the stress-strength interference method, FORMs
and SORMs, and the probabilistic finite element method, respectively.
Much of the research and development in earlier years were related to hot-rolled steel structures
(e.g., AISC, 1986). In more recent years significant strides have been made in the application of prob-
abilistic methods to cold-formed steel structures as well. Although the general philosophy and methods
are the same for both types of steel, there are differences in material properties and target safety indices.
The LRFD specifications for cold-formed structural members from carbon and low-alloy steels have
been developed under the sponsorship of the American Iron and Steel Institute (AISI) (Hsiao et al.,
1989). The LRFD specifications for cold-formed stainless steel structures have been developed under
the sponsorship of the American Society of Civil Engineers (ASCE) (Lin et al., 1992; ASCE, 1990).
Material properties data used in the development of the specifications are discussed by Lin et al. (1988).
One significant difference between the LRFD specifications for cold-formed carbon steels and cold-
formed stainless steels is the higher target safety indices (β_T) used for the latter. A target value of 2.5
for members and 3.5 for connections is recommended for cold-formed carbon steels, whereas target
values of 3.0 for members and 4.0 for connections are recommended for stainless steel. The higher
safety indices are used for stainless steel in order to maintain the higher factors of safety used in the
deterministic stainless steel specifications (AISI, 1974).
8. CONCLUDING REMARKS
Use of probabilistic methods for the analysis, design, and reliability assessment of steel structures is
sure to increase in coming years. However, special attention should be paid to realistic models for time-
dependent loading effects, scatter of material and geometric properties, imperfections, and other envi-
ronmental effects.
REFERENCES
AISC (American Institute of Steel Construction) (1986). Manual for Steel Construction, Load and Resistance
Factor Design. Chicago, Illinois: American Institute of Steel Construction.
AISI (American Iron and Steel Institute) (1974). Stainless Steel Cold-Formed Structural Design Manual. Wash-
ington, D.C.: American Iron and Steel Institute.
ASCE (American Society of Civil Engineers) (1990). Specification for the Design of Cold-Framed Stainless Steel
Structural Members. New York: American Society of Civil Engineers.
BATHON, L. A. (1992). Probabilistic Determination of Failure Load Capacity Variations for Lattice Type Structures
Based on Yield Strength Variations including Nonlinear Post-Buckling Member Performance. Ph.D. Thesis.
Portland, Oregon: Portland State University.
Committee on Fatigue and Fracture of the Committee on Structural Safety and Reliability of Structural Division.
(1982). Journal of the Structural Division, ASCE 108(STl):80.
CSA (Canadian Standards Association) (1974). Steel Structures for Buildings-Limit States Design. Standard CSA
S16.1-1974. Rexdale, Ontario, Canada: Canadian Institute of Steel Construction.
CSN (Czechoslovak Institute for Standards) (1969). Standard Design of Steel Structures [in Czech]. UNM Praha,
Czech Republic: Czechoslovak Institute for Standards. (1969 and 1984 editions).
ELLINGWOOD, B., and T. V. GALAMBOS (1982). Probability-based criteria for structural design. Structural Safety
pp.15-26.
EUROCODE (1984). Common Unified Rules for Steel Structures, rev. ed. 1992. Eurocode No.3. Brussels, Belgium:
Commission of the European Communities.
GALAMBOS, T. V., and M. K. RAVINDRA (1977). The basis for load and resistance factor design criteria for steel
building structures. Canadian Journal of Civil Engineering 4: 178-189.
GALAMBOS, T. V., and M. K. RAVINDRA (1978). Properties of steel for use in LRFD. Journal of the Structural
Engineering Division, ASCE 104(ST9):1459-1468.
HSIAO, L. E., W. W. Yu, and T. V. GALAMBOS (1989). Load and Resistance Factor Design of Cold-Formed Steel:
Load and Resistance Factor Design Specification for Cold-Formed Steel Structural Members with Com-
mentary (12th Progress Report). Rolla, Missouri: University of Missouri.
LIN, S.-H., W. W. Yu, and T. V. GALAMBOS (1988). Load and Resistance Factor Design of Cold-Formed Stainless
Steel: Statistical Analysis of Material Properties and Development of the LRFD Provision (5th Progress
Report). Rolla, Missouri: University of Missouri.
LIN, S.-H., W. W. Yu, and T. V. GALAMBOS (1992). ASCE LRFD method for stainless steel structures. Journal of
Structural Engineering, ASCE 118(ST4):1056-1069.
LIND, N. C. (1972). Theory of Codified Structural Design. Waterloo, Ontario, Canada: University of Waterloo.
MAHADEVAN, S., and A. HALDAR (1991). Stochastic finite element method based validation of LRFD. Journal of
the Structural Engineering Division, ASCE 117(ST5): 1393-1412.
MAREK, P., and W. J. VENUTI (1990). On the combination of load effects. Journal of Constructional Steel Research
16:193-203.
MAREK, P., W. J. VENUTI, and M. GUSTAR (1990). Combinations of design tensile yield stresses in built-up sections
[in French]. Construction Metallique 4:17-22.
MAREK, P., M. GUSTAR, and P. J. TIKALSKY (1993). Monte Carlo simulation-a tool for better understanding of
LRFD. Journal of Structural Engineering, ASCE 119(ST5):1586-1599.
MRAZIK, A. (1987). Reliability Theory of Steel Structures [in Slovak]. Bratislava, Slovakia: VEDA.
M-Star (1991). Monte Carlo Simulation Computer Program. Davis, California: HAB International.
RAVINDRA, M. K., and T. V. GALAMBOS (1978). Load and resistance factor design for steel. Journal of the
Structural Engineering Division, ASCE 104(ST9):1337-1353.
RAVINDRA, M. K., and T. V. GALAMBOS (1979). LRFD criteria for connections. Journal of the Structural Engi-
neering Division, ASCE 106(ST9):1427-1441.
RESCOM (1990). Monte Carlo Simulation-Evaluation of Response Combinations (Simultaneous Effect of More
Loadings). Praha, Czech Republic: APRO, Ltd.
SUNDARARAJAN, C. (1985). Probabilistic structural analysis by Monte Carlo simulation. In: Decade of Progress in
Pressure Vessel and Piping Technology. New York: American Society of Mechanical Engineers, pp. 743-
760.
WEN, Y. K. (1990). Structural Load Modeling and Combination for Performance and Safety Evaluation. New
York: Elsevier Science Publishers.
28
APPLICATIONS IN CONCRETE
STRUCTURES
ANDREW SCANLON
1. INTRODUCTION
Probability theory provides a systematic approach to the treatment of uncertainties inherent in the design
and assessment of reinforced concrete structures. Uncertainties that have to be dealt with in design
include random variations in loads, material strengths, and member dimensions, as well as approxi-
mations used in theoretical behavioral models. In the assessment of strength of existing structures,
sampling errors have to be taken into account. This chapter describes several examples of the application
of probabilistic mechanics to concrete structures as presented in the technical literature. The examples
consist of the use of Monte Carlo simulation to obtain probability distributions for flexural strength of
prestressed concrete beams, an illustration of an approach to the derivation of load and resistance factors
for use in design codes, and the application of Bayesian statistics to evaluation of concrete strength in
existing structures. In addition to these specific examples, reference is also made to other applications
reported in the literature.
2.1. Notations
f_r  Modulus of rupture
f_sp  Splitting tensile strength
f_u  Ultimate strength of steel
f_y  Yield strength of steel
L  Live load
M_u  Moment capacity
R  Structural resistance (strength); also, loading rate
r  Ratio of measured to nominal reinforcing bar area
U  Load effect
V  Coefficient of variation
v  Pulse velocity
X̄  Mean value of X
Y  Safety margin
α  Separation function; also, a constant in regression equation
β  Safety index (reliability index); also, a constant in regression equation
γ  Load factor
μ  Mean or expected value
σ  Standard deviation
φ  Resistance factor
ω_p  Reinforcement index
2.2. Abbreviations
For loading rates (R) other than 35 psi/sec, the mean and coefficient of variation are given by
where f_c' is the design compressive strength (psi), V_cyl is the coefficient of variation (COV) of cylinder
strengths, V_in situ is the COV of concrete strength in the structure relative to cylinder strength, V_in test is the
COV representing in-test variations, and R is the rate of loading (psi/sec).
Suggested values for V_in situ, V_in test, and V_rate are

f_r = 8.0 (f_c,struc)^(1/2) psi

V_fr = [(V_c,struc)²/4 + (0.19)²]^(1/2)

where f_c,struc is the mean compressive strength of concrete in the structure, and V_c,struc is the COV of
compressive strength of concrete in the structure.
PDF = 3.7138 [(f_y - 36)/32]^2.2105 [(68 - f_y)/32]^3.8157;    36 ≤ f_y ≤ 68 ksi

PDF = 7.1562 [(f_y - 57)/51]^2.0204 [(108 - f_y)/51]^6.9545;    57 ≤ f_y ≤ 108 ksi

Note: A reduction of 15% in the mean value should be applied for No. 14 and No. 18 bars, but the
standard deviation should be taken as the same value as for smaller bars.

PDF = 2.3960 [(f_u - 55.8)/49.6]^2.2105 [(105.4 - f_u)/49.6]^3.8157;    55.8 ≤ f_u ≤ 105.4 ksi, f_u ≥ f_y

PDF = 4.6169 [(f_u - 88.35)/79.05]^2.0204 [(167.40 - f_u)/79.05]^6.9454;    88.35 ≤ f_u ≤ 167.40 ksi, f_u ≥ f_y

E_s = 29,200 ksi
V_Es = 0.024
r = 0.988
V_r = 2.4%
[Table fragment on dimensional variations:
Rectangular column (width, thickness): 11-30 in., +1/16, 1/4, Normal; 7-16 in., +1/32, 1/8, Normal.
Circular column (diameter): 11-13 in., 0, 3/16, Normal; 11-13 in., 0, 3/32, Normal.]
Other sources of data for the statistical properties of variables affecting concrete structures include
Mirza and MacGregor (1979a, 1979b) and Mirza et al. (1979).
Strength and stiffness of concrete members are functions of material properties and structural dimen-
sions. Because some or all of these parameters are random variables, the strength and stiffness are also
random variables.
Monte Carlo simulation is a useful tool for computing the probability density function for strength
and stiffness of a structural member. Warner and Kabaila (1968) presented results of a study of the
variability of reinforced concrete column strength based on Monte Carlo simulation. Ramsay et al.
(1979) applied the procedure to investigate the variability of short-time deflections of reinforced concrete
beams. This study illustrated that deflection variability is highest when the applied moment in a beam
is close to the cracking moment, because the stiffness is strongly affected by flexural cracking and the
variability of tensile strength of concrete is quite high. MacGregor et al. (1983) used Monte Carlo
simulation to study reinforced concrete and prestressed concrete members. Ellingwood (1977) utilized
the method to investigate the strength of reinforced concrete beam-columns. Mirza and Shrabeck (1991,
1993) used it in their investigations of composite beam-columns. Results of this investigation showed
that the slenderness ratio, structural steel ratio, and the end eccentricity ratio significantly influence the
computed probability density function of the ultimate strength of composite beam-columns, but the
effect of specified concrete strength is significant only when the slenderness ratio is less than 33.
To illustrate the methodology, Kikuchi, Mirza and MacGregor's (1978) investigation of the flexural
strength of prestressed concrete beams is discussed here in some detail.
First, an accurate deterministic prediction relationship was developed for flexural strength on the
basis of nonlinear stress-strain relationships for concrete and prestressing strand. Because the flexural
strength of a prestressed concrete beam is affected by the variability of the concrete, reinforcing steel,
and prestressing steel, prestress force and losses, and structural dimensions, probability distributions
were developed for each of these variables. Random values of each variable were generated according
to its probability distribution. 1 These random values were used in the prediction relationship to compute
the theoretical flexural strength.
The accuracy of the theoretical model was determined by comparing predicted values with test beam
results for 45 beams, as reported in the literature. On the basis of this analysis the coefficient of variation
of the difference between the theoretical results and test results was determined to be 0.035.
A computer program developed for the study calculates the theoretical flexural strength using the
generated random values and determines the ratio of theoretical strength to nominal strength computed
using American Concrete Institute (ACI) Code procedure. By repeating the calculations a large number
of times2 (say, 1000) a population of the ratio is obtained from which statistical properties of the ratio,
including mean, standard deviation, coefficient of variation, coefficient of skewness, coefficient of kurtosis,
minimum and maximum values, and median, are calculated. A general flowchart for the program
is shown in Fig. 28-1.
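The loop the program performs can be sketched in a few lines. The beam values below reuse the chapter's later flexure example, but the coefficients of variation and distribution shapes are illustrative assumptions, not the statistics used by Kikuchi, Mirza, and MacGregor:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1000  # repetitions, as in the study

# Nominal values for a hypothetical beam; COVs below are assumptions.
As_n, fy_n, d_n = 3.24, 60.0, 18.0   # in^2, ksi, in.
fc_n, b_n = 4.0, 12.0                # ksi, in.

def moment(As, fy, d, fc, b):
    """Rectangular-section flexural strength, ACI-style (in.-kips)."""
    a = As * fy / (0.85 * fc * b)    # depth of equivalent stress block
    return As * fy * (d - a / 2.0)

m_nominal = moment(As_n, fy_n, d_n, fc_n, b_n)

# One random realization of each variable per repetition.
As = rng.normal(As_n, 0.015 * As_n, N)
fy = rng.normal(1.09 * fy_n, 0.08 * 1.09 * fy_n, N)  # mean yield above nominal
d = rng.normal(d_n, 0.4, N)
fc = rng.normal(fc_n, 0.15 * fc_n, N)
b = rng.normal(b_n, 0.1, N)

ratio = moment(As, fy, d, fc, b) / m_nominal
print(f"mean ratio {ratio.mean():.3f}, COV {ratio.std() / ratio.mean():.3f}")
```

The array `ratio` is the simulated population of theoretical-to-nominal strength ratios from which the mean, standard deviation, skewness, and so on are computed.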
Results are presented in Figs. 28-2 and 28-3 for pretensioned and posttensioned beams, respectively.
1. Generation of random values of random variables is discussed in detail in Chapter 4 of this book.
2. The required number of repetitions depends on the problem and the desired level of accuracy. A method for determining the
required number of repetitions is discussed in Sundararajan (1985).
Two sets of results are given: one for reinforcement index (ω_p) equal to 0.054 and another for ω_p =
0.295.
An extensive study to develop probability-based load criteria for structural design of buildings was
reported by Ellingwood et al. (1980), Galambos et al. (1982), and Ellingwood et al. (1982). Common
load factors and load combinations were recommended for all construction materials along with guid-
ance for development of resistance factors for the various construction materials. The objectives of the
proposed criteria were to achieve a more uniform level of reliability and to simplify design of structures
constructed of combinations of materials, for example, steel frames and concrete foundations.
[Flowchart: Input 1 (nominal properties of variables) feeds the ACI ultimate strength model, giving the ACI
strength. Input 2 (statistical properties of variables) drives selection of a random value of each variable,
which feeds the theoretical ultimate strength model, giving the theoretical strength; this step is repeated
1000 times. Each cycle forms the strength ratio (theoretical/ACI ultimate strength), and a statistical
analysis of the strength ratios is output.]
Figure 28-1. Flowchart for Monte Carlo simulation. (Source: Kikuchi, Mirza, and MacGregor [1978].)
The recommended format for determining factored loads is expressed by the equation

U = λ_D D_n + λ_Q Q_n + Σ λ_j Q_j    (28-1)

in which U is the total factored load, λ_D D_n is the factored dead load, λ_Q Q_n is the factored principal
variable load, and λ_j Q_j are the factored arbitrary point-in-time loads. Here λ_D, λ_Q, and λ_j are the load
factors. The individual time-varying loads are rotated in Eq. (28-1), each taking the position of the
principal variable load. The suggested format recognizes that the probability of two time-varying loads
achieving their maximum lifetime values at the same instant is extremely low.
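The rotation rule can be sketched as follows; the numerical load factors and loads here are placeholders for illustration, not values recommended by the study:

```python
# Placeholder load factors (illustrative assumptions only).
LAMBDA_D = 1.2          # dead-load factor
LAMBDA_PRINCIPAL = 1.6  # factor on the principal time-varying load
LAMBDA_APT = 0.5        # factor on arbitrary point-in-time loads

def factored_load(dead, varying):
    """Maximum over the Eq. (28-1) combinations: each time-varying load
    takes the principal position once."""
    best = 0.0
    for principal in varying:
        u = LAMBDA_D * dead + LAMBDA_PRINCIPAL * varying[principal]
        u += sum(LAMBDA_APT * q for name, q in varying.items() if name != principal)
        best = max(best, u)
    return best

u = factored_load(100.0, {"live": 40.0, "snow": 25.0, "wind": 30.0})
print(u)
```

With these placeholder numbers the combination with live load in the principal position governs, illustrating how the rotation automatically finds the critical case.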
Development of resistance factors consistent with the proposed load factors is the responsibility of
materials specifications groups such as the American Concrete Institute (ACI) in the case of reinforced
concrete buildings. Ellingwood et al. (1982) present two alternative formats that can be used to establish
resistance criteria. The first approach, which is used in ACI Standard 318 (ACI 318-89) (ACI, 1989)
and which has been used since 1963, considers the factored resistance as the product of a nominal
resistance R_n and resistance factor φ. Uncertainties associated with material properties, dimensions, and
analytical models are lumped into the factor φ, as are the mode and consequence of failure.
An alternative format, common in Europe and Canada, uses partial material factors. In this approach,
each material strength used in calculating the factored resistance is divided by a factor, for example,
f_c = f_c' (specified)/γ_c, in which γ_c is generally greater than unity. The advantages and disadvantages of
[Histogram: relative frequency vs. ratio of theoretical/ACI ultimate moments (0.50 to 1.30); mean = 1.028,
coefficient of variation = 0.079, coefficient of skewness = -0.15, kurtosis = 3.03; ω_p = 0.295, h = 18 in.]
Figure 28-2. Histogram of flexural strengths for pretensioned beams. (Source: Kikuchi, Mirza, and MacGregor
[1978].)
various safety checking formats were examined by Ellingwood (1982), who concluded that the overall
resistance factor is satisfactory when the expression for resistance is in the form of a product of random
variables, but that the partial factor format is much more effective when the resistance is represented
by a sum of random variables.
The historical development of resistance factors in the ACI Code (ACI 318) is summarized by
MacGregor (1976), whereas the development in the United Kingdom and Europe is traced by Cranston
(1992).
MacGregor (1983) presented proposed changes to the load and resistance factors used in ACI Stan-
dard 318-79. These proposed changes, developed as part of the study by Ellingwood et al. (1980), are
based on a first-order probabilistic procedure originally presented by Rackwitz and Fiessler (1976).3
Recommendations for resistance factors for concrete members were also presented by Israel et al. (1986)
and Corotis and Ellingwood (1989).
MacGregor (1976) presented examples for the development of load and resistance factors when the
ratio of resistance to load (R/U) can be assumed to be lognormally distributed. The example by
MacGregor for flexure of a reinforced concrete beam is presented below. Using procedures developed
in earlier studies, the safety requirement can be expressed as

[Histogram: relative frequency vs. ratio of theoretical/ACI ultimate moments (0.50 to 1.30); ω_p = 0.054, h = 16 in.]
Figure 28-3. Histogram of flexural strengths for posttensioned beams. (Source: Kikuchi, Mirza, and MacGregor
[1978].)
R̄ e^(-αβV_R) ≥ Ū e^(αβV_U)    (28-2)

in which R̄ is the mean value of resistance (strength), Ū is the mean value of load effect, V_R is the
coefficient of variation of resistance (strength), V_U is the coefficient of variation of load effect, α is the
separation function, usually assumed to be 0.75, and β is the safety index (reliability index).
The safety index β represents the number of standard deviations by which the mean value of the
safety margin exceeds zero. The safety margin Y can be taken as

Y = ln(R/U)

Because ln(R/U) = 0 when R/U = 1.0, ln(R/U) < 0 represents the situation in which the resistance R
is less than the load effect U.
Because design equations for resistance and load effect are usually expressed in terms of nominal
values (R', U') rather than mean values, it is convenient to rewrite Eq. (28-2) as

R'φ ≥ U'λ    (28-3)

in which

φ = (R̄/R') e^(-αβV_R)

and

λ = (Ū/U') e^(αβV_U)
The use of Eq. (28-2) in deriving load and resistance factors is illustrated through the following
example presented by MacGregor (1976) for the case of flexure of a reinforced concrete beam under
dead plus live load. Data given in Table 28-4 are used in the calculations.
The nominal flexural strength of an underreinforced rectangular beam based on ACI Code provisions
(ACI, 1989) is given by
M_n = A_s f_y [d - (a/2)]

in which a = A_s f_y / (0.85 f_c' b). Substituting the example values,

M_n = (3.24)(60) [18 - (3.24)(60) / ((2)(0.85)(4)(12))] = 253 ft-kips

Assuming safety index β = 3.5 and α = 0.75, the resistance factor φ is computed as

φ = 1.071 e^(-(3.5)(0.75)(0.11)) = 0.802
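That arithmetic, generalized, is a one-liner; the 1.071 mean-to-nominal ratio and V_R = 0.11 are the chapter's example values:

```python
import math

def resistance_factor(mean_to_nominal, alpha, beta, v_r):
    """phi = (Rbar / R') * exp(-alpha * beta * V_R), per Eq. (28-3)."""
    return mean_to_nominal * math.exp(-alpha * beta * v_r)

phi = resistance_factor(1.071, 0.75, 3.5, 0.11)
print(round(phi, 3))  # matches the chapter's 0.802
```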
Turning to the load factor side of the safety equation we note that the load is a combination of two
loads, dead (D) plus live (L), with different statistical properties. Using the first two terms of the series
(28-5)
Writing the terms in brackets in exponential form we obtain separate load factors for D and L:
(28-6)
The coefficient of variation for load involves two terms, one related to variability of the load itself and
the other related to uncertainty in the structural analysis. Thus,
This equation can be compared to the current values given in the ACI Code (ACI 318-89), that is,
U = 1.4D + 1.7L
The current trend among design code writing bodies is to establish a common set of load factors
and then to calibrate the resistance factors (φ) for the various construction materials, such as concrete,
steel, masonry, and timber, to produce a desired safety level as determined by the chosen safety index
β. The final value of φ selected for building code use would be determined by computing φ values
over a wide range of member sizes and reinforcement ratios and then taking a weighted average. The
objective is to produce as uniform a level of safety as possible over the entire range of cases covered
by the code.4
With the recent emphasis on infrastructure rehabilitation and life extension of industrial plants, evalu-
ation of the safety of existing structures is becoming important. In the evaluation of existing structures
4. A general discussion of probability-based design codes (load and resistance factor design codes) may be found in Chapter
15 of this book.
it is often necessary to obtain an estimate of the compressive strength of the in situ concrete. Information
on the in situ strength can be obtained from laboratory tests on cores taken from the structure and from
nondestructive test results such as pulse velocity and rebound hammer number. Bayesian statistics
provides a procedure for combining data from various sources as well as subjective information that
may be available. Using such an approach, Kryviak and Scanlon (1987) presented a procedure for
estimating mean compressive strength of a concrete structure using prior subjective information, core
test data, pulse velocity readings, and rebound number readings. The prior information could be from
test data at the time of construction or just expert opinion.
The probability density function for compressive strength of concrete, assuming a normal distribution,
is given by

f(x) = [1 / (2πσ²)^(1/2)] exp[-(x - μ)² / (2σ²)]    (28-9)

where x is the compressive strength (random variable), σ² is the variance, and μ is the mean or expected
value.
Using Bayes' theorem, the mean strength is assumed to be a random variable with a normal probability
density function. The prior distribution is given by

f(μ) = [1 / (2πσ_pr²)^(1/2)] exp[-(μ - μ_pr)² / (2σ_pr²)]    (28-10)

where μ is the mean compressive strength (random variable), σ_pr² is the prior variance of μ, and μ_pr is
the prior mean value of μ.
Assume n core test results of the existing structure are obtained, giving compressive strength values
x_1, x_2, ..., x_n. These data can be combined with the prior information to obtain an updated posterior
distribution of mean compressive strength (Ang and Tang, 1975) given by

f(μ | x_1, ..., x_n) = Π_{i=1}^{n} f(x_i | μ) f(μ) / ∫ Π_{i=1}^{n} f(x_i | μ) f(μ) dμ    (28-11)

The posterior distribution is Gaussian, with mean and variance given by Kryviak and Scanlon (1987):

μ_po = [(n/σ²) x̄ + (1/σ_pr²) μ_pr] / [(n/σ²) + (1/σ_pr²)]    (28-12)
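Eq. (28-12) is the standard normal-normal precision-weighted update, sketched below with hypothetical core-test numbers:

```python
import statistics

def posterior_mean_variance(prior_mean, prior_var, data, data_var):
    """Normal-normal update of the mean strength per Eq. (28-12);
    the weights are the precisions n/sigma^2 and 1/sigma_pr^2."""
    n = len(data)
    xbar = statistics.fmean(data)
    w_data, w_prior = n / data_var, 1.0 / prior_var
    post_mean = (w_data * xbar + w_prior * prior_mean) / (w_data + w_prior)
    post_var = 1.0 / (w_data + w_prior)
    return post_mean, post_var

# Hypothetical core tests (MPa) combined with a prior mean of 30 MPa:
cores = [27.5, 31.0, 29.2, 28.4, 30.1]
post_mean, post_var = posterior_mean_variance(30.0, 9.0, cores, data_var=16.0)
print(round(post_mean, 2), round(post_var, 2))
```

The posterior mean lands between the prior mean and the sample mean, weighted toward whichever source carries more precision, and the posterior variance is always smaller than either input variance.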
First, a relationship between pulse velocity (v) and the concrete compressive strength (x) should be
established. This is done by taking pulse velocity readings and core samples (for compressive strength)
at m locations on the existing structure. These m pairs of pulse velocity and concrete compressive
strength can be related through a regression relationship such as

E(x | v) = α + βv    (28-13)

where E(x | v) is the expected value of x given v, and α and β are constants determined from the
regression analysis.
The calibration error (error in the regression relationship) is given by the differences x_i - E(x_i | v_i),
where E(x_i | v_i) is calculated using Eq. (28-13) and the ith pulse velocity measurement; x_i is the ith
compressive strength measurement, in which i = 1, 2, ..., m.
Figure 28-4 shows a typical plot of compressive strength vs. pulse velocity. The distribution of pulse
velocities is represented by the sample mean v̄ and sample variance S_v². The variance of the mean
value of compressive strength with respect to the estimate E(x | v) is given by
(28-15)
[Scatter plot: concrete compressive strength (about 30 to 70) vs. pulse velocity, with data points and the
regression line E(x | v).]
The probability density function of the true concrete strength at the location of the ith pulse velocity
measurement is given by
It is now possible to develop a likelihood function that gives the conditional probability density of
obtaining the expected compressive strength [E(x_i | v_i)], assuming that the mean strength distribution is
as given by the prior. Using the theorem of total probability,

Mean = E(x | v)
Variance = (σ_x|v)² + (σ_ε)² + (σ_0)² = (σ_s)²    (28-17)
In addition to the m pairs of pulse velocity and compressive strength measurements, r more measurements
of pulse velocity are taken at r locations. The likelihood function is given by

Π_{i=1}^{r} f[E(x_i | v_i) | μ] = Π_{i=1}^{r} [(2π)^(-1/2) / σ_s] exp{-(1/2) [(E(x_i | v_i) - μ) / σ_s]²}    (28-18)

where E(x_i | v_i) is computed using Eq. (28-13) and the ith pulse velocity measurement, where i = 1, 2,
..., r.
The posterior distribution of the mean concrete strength given by the pulse velocity readings is
Gaussian, with mean and variance given by

μ_po = [(1/σ_pr²) μ_pr + (1/σ_s²) Σ_{i=1}^{r} E(x_i | v_i)] / [(1/σ_pr²) + (r/σ_s²)]    (28-19)

σ_po² = 1 / [(1/σ_pr²) + (r/σ_s²)]    (28-20)
These values can now be used as prior values μ_pr and σ_pr² for combination with core test data,
using Eq. (28-12), to give the final posterior values for mean and variance of the mean concrete
strength.
To combine two sets of indirect data (e.g., pulse velocity and rebound hammer) with direct data
(core tests) a three-step procedure is followed. One set of indirect data is combined with the prior data
by using Eqs. (28-19) and (28-20) to give an initial posterior distribution. This posterior distribution is
then treated as a new prior and combined with the second set of indirect data, again using Eqs. (28-
19) and (28-20), to obtain a second updated posterior. Finally this updated posterior distribution is
combined with the direct data to give the final posterior distribution of mean compressive strength,
using Eq. (28-12).
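The sequential updating scheme described above can be sketched numerically. The function below implements the Gaussian conjugate update of Eqs. (28-19) and (28-20); the strength values and variances are illustrative only, not data from the cited field study.

```python
def bayes_update_mean(mu_pr, var_pr, estimates, var_s):
    """Gaussian conjugate update of the mean concrete strength per
    Eqs. (28-19)/(28-20): combine a prior (mu_pr, var_pr) with r
    indirect strength estimates E(x_i|v_i), each carrying the total
    variance var_s = (sigma_s)**2."""
    r = len(estimates)
    precision = 1.0 / var_pr + r / var_s
    mu_post = (mu_pr / var_pr + sum(estimates) / var_s) / precision
    return mu_post, 1.0 / precision

# Illustrative three-step use (MPa): one indirect data set (e.g., pulse
# velocity) first, then a second (e.g., rebound hammer); each posterior
# becomes the prior of the next step.
mu, var = 25.0, 25.0                                   # assumed prior
mu, var = bayes_update_mean(mu, var, [28.1, 27.4, 29.0, 26.8], 16.0)
mu, var = bayes_update_mean(mu, var, [27.0, 28.5, 27.9], 16.0)
```

Each update necessarily shrinks the variance, so the final combination with direct core test data via Eq. (28-12) starts from a sharper prior than the original one.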
Use of the procedure outlined above was illustrated by Kryviak and Scanlon (1987) using a database
from a field investigation consisting of 21 compressive strength tests on cores, 460 pulse velocity
readings, and 460 rebound hammer readings.
Figure 28-5 presents the variation of the posterior mean value of the mean strength with increasing
number of cores for various values of assumed prior mean strength (only core test data-not pulse
velocity measurements-were used). Figure 28-6 illustrates the effect of varying the number of pulse
velocity readings with two different sets of core test data. Two methods were used to select core test
data. The first method was based on a random selection process from the entire pool of core test data.
The second approach (referred to as quasirandom) involved organizing the data in four groups arranged
according to pulse velocity readings and selecting randomly from each group.
In other applications of the Bayesian approach, Bartlett and Sexsmith (1991) demonstrated its application to material grade identification in existing bridges, and Bazant and Chern (1984) applied the method to the prediction of creep and shrinkage.
Figure 28-5. Variation of mean strength with increasing number of cores. (Source: Kryviak and Scanlon 1987.
Reprinted with permission.)
[Figure graphic not reproduced: expected value of mean strength (MPa) versus the number n of core strength tests, for fspec = 20.7 MPa (specified compressive strength), with priors: diffuse; μpr = 10.3 MPa (0.50 fspec); μpr = 20.7 MPa (1.00 fspec); μpr = 24.3 MPa (1.17 fspec); μpr = 41.2 MPa (2.00 fspec).]
Figure 28-6. Variation of mean strength with increasing number of pulse velocity readings with two different sets
of core test data. (Source: Kryviak and Scanlon 1987. Reprinted with permission.)
7. CONCLUDING REMARKS
Examples of the application of probability-based methods to the design and assessment of concrete
structures have been presented in this chapter. These methods allow uncertainties to be accounted for
in a systematic way. Data on the statistical parameters related to concrete structures have been presented,
and procedures for developing load and resistance factors have been described. Further developments
in design code applications can be expected in the future as more statistical data on loads and resistances
are developed. Wider applications of probability-based methods can also be expected in the future in
the assessment of existing structures as greater attention is paid to infrastructure rehabilitation.
REFERENCES

ACI (American Concrete Institute) (1989). Building Code Requirements for Reinforced Concrete (ACI 318-89). Detroit, Michigan: American Concrete Institute.

ANG, A. H.-S., and W. H. TANG (1975). Probability Concepts in Engineering Planning and Design. New York: John Wiley & Sons.

BARTLETT, F. M., and R. G. SEXSMITH (1991). Bayesian technique for material grade identification in existing bridges. ACI Materials Journal 88(2):164-169.

BAZANT, Z. P., and J.-C. CHERN (1984). Bayesian statistical prediction of concrete creep and shrinkage. ACI Journal 81(3):319-330.

Canadian Standards Association (1984). Design of Concrete Structures for Buildings. A National Standard of Canada (CAN3-A23.3-M84). Toronto, Canada: Canadian Standards Association.

CORNELL, C. A. (1969). A probability based structural code. ACI Journal 66(12):974-985.

COROTIS, R. B., B. ELLINGWOOD, and A. SCANLON (1989). Reliability bases for codes for design of reinforced concrete structures. In: Proceedings of the 5th International Congress on Structural Safety and Reliability, Vol. 3. New York: American Society of Civil Engineers, pp. 2035-3042.

CRANSTON, W. B. (1993). Reflections on Limit States Design. ACI Special Publication SP-133. Detroit, Michigan: American Concrete Institute, pp. 277-298.

ELLINGWOOD, B. (1977). Statistical analysis of reinforced concrete beam-column interaction. Journal of the Structural Division, ASCE 103(7):1377-1388.

ELLINGWOOD, B. (1982). Safety checking formats for limit states design. Journal of the Structural Division, ASCE 108(7):1481-1493.

ELLINGWOOD, B., T. V. GALAMBOS, J. G. MACGREGOR, and C. A. CORNELL (1980). Development of a Probability Based Load Criterion for American National Standard A58. NBS Special Publication No. 577. Washington, D.C.: National Bureau of Standards.

ELLINGWOOD, B., J. G. MACGREGOR, T. V. GALAMBOS, and C. A. CORNELL (1982). Probability based load criteria: Load factors and load combinations. Journal of the Structural Division, ASCE 108(5):978-997.

GALAMBOS, T. V., B. ELLINGWOOD, J. G. MACGREGOR, and C. A. CORNELL (1982). Probability based load criteria: Assessment of current design practice. Journal of the Structural Division, ASCE 108(7):959-977.

ISRAEL, M., B. ELLINGWOOD, and R. B. COROTIS (1986). Reliability-based code formulations for reinforced concrete buildings. Journal of Structural Engineering, ASCE 113(10):2235-2252.

KIKUCHI, D. K., S. A. MIRZA, and J. G. MACGREGOR (1978). Strength Variability of Bonded Prestressed Concrete Beams. Structural Engineering Report No. 68. Edmonton, Alberta, Canada: University of Alberta.

KRYVIAK, G. J., and A. SCANLON (1987). Estimation of concrete strength in existing structures. ACI Journal 84(3):235-245.

LIND, N. C. (1971). Consistent partial safety factors. Journal of the Structural Division, ASCE 97(6):1651-1670.

MACGREGOR, J. G. (1976). Safety and limit states design for reinforced concrete. Canadian Journal of Civil Engineering 3(4):484-513.

MACGREGOR, J. G. (1983). Load and resistance factors for concrete design. ACI Journal 80(4):279-287.

MACGREGOR, J. G., S. A. MIRZA, and B. ELLINGWOOD (1983). Statistical analysis of resistance of reinforced and prestressed concrete members. ACI Journal 80(3):167-176.

MIRZA, S. A., and J. G. MACGREGOR (1976). A Statistical Study of Variables Affecting the Strength of Reinforced Concrete Normal Weight Members. Structural Engineering Report No. 58. Edmonton, Alberta, Canada: University of Alberta.

MIRZA, S. A., and J. G. MACGREGOR (1979a). Variability of mechanical properties of reinforcing bars. Journal of the Structural Division, ASCE 105(4):921-937.

MIRZA, S. A., and J. G. MACGREGOR (1979b). Variations in dimensions of reinforced concrete members. Journal of the Structural Division, ASCE 105(4):751-766.

MIRZA, S. A., and B. W. SKRABEK (1991). Reliability of short composite beam-column strength interaction. Journal of Structural Engineering, ASCE 117(8):2320-2339.

MIRZA, S. A., and B. W. SKRABEK (1992). Statistical analysis of slender composite beam-column strength. Journal of Structural Engineering, ASCE 105(6):1021-1037.

MIRZA, S. A., M. HATZINIKOLAS, and J. G. MACGREGOR (1979). Statistical descriptions of strength of concrete. Journal of the Structural Division, ASCE 105(6):1021-1037.

RACKWITZ, R., and B. FIESSLER (1976). Note on Discrete Safety Checking When Using Nonnormal Stochastic Models for Basic Variables. Load Project Working Session Report. Cambridge, Massachusetts: Massachusetts Institute of Technology.

RAMSEY, R. J., S. A. MIRZA, and J. G. MACGREGOR (1979). Monte Carlo study of short time deflections of reinforced concrete beams. ACI Journal 76(8):897-918.

SUNDARARAJAN, C. (1985). Probabilistic structural analysis by Monte Carlo simulation. In: Pressure Vessel and Piping Technology: A Decade of Progress. New York: American Society of Mechanical Engineers, pp. 743-760.

WARNER, R. F., and A. P. KABAILA (1968). Monte Carlo study of structural safety. Journal of the Structural Division 94(12):2847-2859.
29
APPLICATIONS IN TIMBER
STRUCTURES
WILLIAM M. BULLEIT
1. INTRODUCTION
The use of probabilistic structural mechanics in timber structures was a logical step. The strength
property variation in many materials is significantly lower than the load variability. Wood, on the other
hand, exhibits strength property variability that may be as great or greater than the load variability.
Even though probability-based or reliability-based design was logical from a material and load vari-
ability standpoint, the change to reliability-based design was still slow in coming. By the late 1970s,
the concepts of limit state design and reliability-based design for timber were being suggested by a few
authors (Aplin and Keenan, 1977; Sexsmith and Fox, 1978; Suddarth et al., 1978; Zahn, 1977). Aplin
and Keenan (1977) discussed the application of limit state design concepts to wood engineering. The
reliability aspects were discussed only briefly, whereas areas pertinent to wood, such as probability
distributions for wood properties, size effect, and duration of load, were considered in more depth.
Sexsmith and Fox (1978) examined the limit states design concept further, including discussion of the
safety index and examination of glued-laminated (glulam) beam flexural strength, accounting for size
effect. The work of Suddarth et al. (1978) pertained to the use of reliability as a means of comparison
of the performance of different structural elements. They used the stress-strength interference method
for probability of failure calculations. Their work was primarily related to wood truss behavior.
The paper by Zahn (1977) was an attempt to introduce the wood engineering community to the
concepts of probabilistic design. It is noteworthy because the author covered a wide range of material
in one paper. The stress-strength interference method and first-order, second-moment methods were
explained. Zahn then suggested safety indices (reliability indices) for three limit states of a wood joist
floor system: excessive deflection, rupture of a single member, and collapse of the system. He used
these safety indices in a design example for a joist floor; he considered an ultimate limit state (rupture
of a single joist) and a serviceability limit state (excessive deflection). Composite action and load sharing
were considered in the deflection design.
The appearance in 1980 of National Bureau of Standards (NBS) Special Publication 577 (Ellingwood
et al., 1980) increased the desire to develop reliability-based design specifications. The NBS publication
and the above four papers were a starting point for the use of probabilistic mechanics and reliability-
based design in timber engineering.
Applications in Timber Structures 685
This chapter is separated into four main areas: (1) material properties and material behavior, (2)
probabilistic analysis of single members, (3) probabilistic analysis of connections, and (4) probabilistic
analysis of wood structural systems. Examples of reliability-based design specifications are the American
Society of Civil Engineers specifications (ASCE, 1992) and the Canadian Standards Association spec-
ifications (CSA, 1989).
2.1. Notation
A, B, C   Constants
Dn   Nominal dead load
Fb   Ultimate bending stress
Ft   Ultimate tensile stress
fb   Applied bending stress
ft   Applied tensile stress
Ke, Kk   Constants
Ln   Nominal live load
     Probability transition matrix
     Initial state vector
     kth state vector
R    Average resistance
Rn   Nominal resistance
t    Time
α    Damage parameter
β    Reliability index (safety index)
β0   Target reliability index (target safety index)
λ    Load duration factor
μk, μv   Constants
T    Applied stress
σ    Stress ratio
σ0   Threshold stress ratio
φL   Resistance factor including load duration
φNL  Resistance factor not including load duration
2.2. Abbreviations
value ranges from 0.2 to 0.9. The value of 0.7 has been supported by the data from the in-grade test
program. Considering southern pine, Douglas fir-larch, and hemlock-fir, the average correlation co-
efficient for MOE versus MOR was 0.73 (Green and Kretschmann, 1991).
3.1.1. Moisture effects. Lumber properties are affected by moisture content changes in the wood.
In general, strength properties increase with a decrease in moisture content. The effects of moisture on
the statistics of lumber properties have been studied in some depth for flexural behavior only. The work
was performed for Douglas fir (Aplin et al., 1986; Green et al., 1988) and southern pine (McLain et
al., 1984; Green et al., 1986). Both these references include analytical methods for adjusting the MOR
and MOE of lumber in flexure. Equations to modify the distribution parameters for the MOE and MOR
under variations in moisture content are also included.
3.1.2. Along-the-length correlation. Probabilistic modeling of wood members and wood structures often requires information on along-the-length correlation. Two approaches to modeling this correlation have been developed, and each works better than the assumption of no correlation (Taylor et al., 1992). The first model, developed by Kline et al. (1986), is a Markov model. A second-order model was used to generate flatwise bending MOEs along 30-in. segments of a piece of lumber (Kline et al., 1986). A first-order model was used for tensile strength simulation (Showalter et al., 1987). The second
method uses a multivariate normal distribution to obtain a vector of pseudorandom variates that are
transformed to obtain individual segment, flatwise bending MOEs and segment tensile strengths (Taylor
and Bender, 1991). Taylor and Bender (1991) also considered the cross-correlation between the MOE
and tensile strength.
Lam and Varoglu (1991) performed semivariogram and regression analyses on No.2 spruce-pine-
fir lumber (38 X 89 mm) that indicated a correlation length of 1.83 m for tensile strengths, that is,
tensile strengths within a piece of lumber separated by more than 1.83 m can be considered statistically
independent. Examination of data from Taylor and Bender (1991) indicates a correlation length on the
order of 12 m for tensile strengths of 302-24 and L1 high-quality tension laminations. It makes sense
that the lower grade No. 2 material should have a shorter correlation length, although whether this large
difference is reasonable is not known.
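The multivariate-normal approach can be sketched as follows. The exponential correlation function, the 0.61-m segment length, and the use of the 1.83-m correlation length as its parameter are simplifying assumptions made here for illustration, not the fitted models of the cited studies.

```python
import numpy as np

def correlated_segment_strengths(n_seg, seg_len_m, corr_len_m,
                                 mean_mpa, cov, rng):
    """Lognormal segment tensile strengths whose logs follow a
    multivariate normal with an assumed exponential correlation
    rho(d) = exp(-d / corr_len_m) between segment positions."""
    idx = np.arange(n_seg)
    d = np.abs(np.subtract.outer(idx, idx)) * seg_len_m
    rho = np.exp(-d / corr_len_m)
    sigma_ln = np.sqrt(np.log(1.0 + cov**2))      # lognormal params
    mu_ln = np.log(mean_mpa) - 0.5 * sigma_ln**2  # from mean and COV
    z = rng.multivariate_normal(np.zeros(n_seg), rho)
    return np.exp(mu_ln + sigma_ln * z)

rng = np.random.default_rng(0)
strengths = correlated_segment_strengths(
    n_seg=12, seg_len_m=0.61, corr_len_m=1.83,
    mean_mpa=30.0, cov=0.3, rng=rng)
```

A transformation of this kind makes nearby segments of a simulated piece similar in strength while leaving widely separated segments effectively independent, as the semivariogram results above suggest.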
of laminations, from 0.10 with 4 laminations to 0.04 with 32 laminations. The concept of a transformed
cross-section combined with the statistics for a linear combination of random variables was used in the
calculations. The MOE for the lumber in each lamination was normally distributed. It should be apparent
from this discussion that another potential advantage of glulam is reduced strength property variation.
Statistical data from tests for other types of glulam members, such as tapered beams, double-tapered
pitched and curved beams, or glulam arches, are not, in general, available. The costs of obtaining the
data are too great. For instance, 12 pitched-tapered beams were tested by Fox (1974). The collapse
loads are available but initial failures often occurred from radial tension stresses in the apex region,
with collapse occurring at a higher load. For the remaining beams, failure was initiated by bending
stresses. The beams exhibit multimode failure behavior, which complicates prediction of failure. This,
combined with the expense of beam fabrication, makes testing the large number of beams necessary to
obtain the required data prohibitively expensive.
Models of glulam member behavior that can be used in Monte Carlo simulations are the approach
that is likely to prove most fruitful for obtaining the wide range of information necessary for prob-
abilistic studies of glulam. Development of glulam strength models began in the late 1970s (Bender et
al., 1985; Foschi and Barrett, 1980). The work on glulam models continues, with much of the research
on along-the-length correlation of the MOE and tensile strength (see Section 3.1.2) being directed toward
use in these models. Further work, such as finger joint simulation (Burk and Bender, 1989), has also
been performed. Much of this past work has been used to develop a potentially useful probabilistic
model for glulam beam strength and stiffness (Hernandez et al., 1992). This model, PROLAM, simulates
the assembly of glulam beams and then uses stochastic models for a number of random variables that
affect strength and stiffness. PROLAM uses stochastic models to simulate the length of each piece of
lumber in the beam, the MOE and tensile strength of 610-mm (2-ft) segments, including autocorrelation
and cross-correlation, and the MOE and tensile strength of end joints. PROLAM uses virtual work to
determine the apparent midspan MOE of the glulam beams and includes progressive failures in its
determination of MOR. Simulations of 1000 Douglas fir beams were compared to tests of thirty 16-lamination beams. PROLAM predicted a mean MOR of 39,630 kPa (5748 psi) and a COV of 0.15. This compares fairly well to a mean MOR of 41,680 kPa (6045 psi) and a COV of 0.15 for the test beams.
PROLAM overpredicted the mean MOE by about 15%. The model still requires further development
and further validation.
The influence of lamination MOE variability on stresses in tapered-curved glulam beams has been
examined using the finite element method (Gopu and Mansour, 1989). Only the variation between the
laminations was considered; variation within a given lamination was not included. The radial tension
stresses were unaffected but significant variations in maximum bending stresses were observed.
3.3. Others
Property data for certain products, such as panel-webbed I-sections, parallel strand lumber, and parallel laminated veneer lumber, are limited because of the proprietary nature of the products.
The data for panel products, such as plywood and structural composite panels, are also limited.
Moment capacity of panels has been characterized using a large, heterogeneous data set containing both
plywood and nonveneer panel products (O'Halloran et al., 1988). A lognormal distribution was found
to be adequate with R/Rn = 2.757 and COV = 0.374, where R is the average moment capacity and Rn
is the nominal moment capacity for a particular type of panel.
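Given the quoted lognormal statistics, characteristic values follow directly. The sketch below computes the 5th-percentile moment capacity as a multiple of nominal, assuming only the mean ratio and COV quoted above.

```python
import math

def lognormal_fractile(mean, cov, z_p):
    """Fractile of a lognormal variable from its mean and COV;
    z_p is the standard normal deviate of the desired fractile
    (e.g., -1.645 for the 5th percentile)."""
    s = math.sqrt(math.log(1.0 + cov**2))
    mu = math.log(mean) - 0.5 * s * s
    return math.exp(mu + z_p * s)

# 5th-percentile moment capacity as a multiple of nominal capacity,
# using the panel statistics quoted above (mean R/Rn = 2.757, COV = 0.374)
r05 = lognormal_fractile(2.757, 0.374, -1.645)
```

Even with a COV near 0.37, the large mean-to-nominal ratio keeps the 5th-percentile capacity comfortably above the nominal value.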
A limited amount of data on large sawn timbers can be found in Wood (1950). This information
may not be representative of large timbers cut today, because of changes in the forest resource.
where t is time and α is the damage parameter, varying from 0 (no damage) to 1 (failure). The term σ is the ratio of the applied stress to the failure stress under short-term ramp loading. A, B, C, and σ0 are constants to be determined from test data; σ0 is the threshold stress ratio: if σ < σ0, no damage accumulates. This model was first fit (Barrett and Foschi, 1978) to the clear Douglas fir data of Wood
(1951). Later, the model was fit to data on 38 X 138 mm (nominal 2 X 6 in.) western hemlock lumber
(Foschi and Barrett, 1982).
The second widely used model is the EDRM (exponential damage rate model) (Gerhards and Link, 1983). It is of the following form:

dα/dt = exp(−A + Bσ)    (29-2)
where the terms are as described above. It too has been fit to lumber data (Gerhards and Link, 1983,
1986; Gerhards, 1988). Table 29-1 shows some representative values for the model constants in Eqs.
(29-1) and (29-2).
The use of these types of models with a simulated random load history was first performed by Barrett
and Foschi (1978). They used the BFDM fit to the data from Wood (1951) in combination with snow
load modeled as a triangular pulse process with the possibility of a superimposed rain load pulse. Monte
Carlo simulations were performed with 5000 trials being performed when snow load alone was acting
and 8000 when snow plus rain was examined. They compared load duration effects for two Canadian
cities, Vancouver, British Columbia and Winnipeg, Manitoba. Their primary conclusion was that the
existing approach to including load duration in design may be somewhat flawed because the computed
reliability levels for the two cities were different. This difference implied that the load duration factor
may be location dependent.
Bulleit and Schoch (1986) combined the BFDM fit to western hemlock data (Foschi and Barrett,
1982) with a floor live load model, which included durations of applied loads (Corotis and Tsay, 1983),
in Monte Carlo simulations. For office loads, when 5000 trials were performed, it was shown that the
failure rate of the members was either decreasing or constant, with most being approximately constant.
This was the first indication that member failures might be occurring over one, or possibly a few, load
cycles because a constant failure rate can occur when failure is produced by a single random event.
This behavior was much more apparent in the simulation work of Hendrickson et al. (1987). The
authors considered four damage models, the two discussed above, the implied damage model in the
Madison curve, and the other model suggested by Barrett and Foschi (1978). They used a Bernoulli
pulse model for the snow load history and a live load model based on the work of Chalk and Corotis
(1980). Their simulations, 5000-10,000 trials for each case considered, showed two major results: (1)
The probability of failure was not particularly sensitive to the damage model chosen; and (2) the duration
and magnitude of the 50-year maximum pulse had the most effect on probability of failure. These
results were confirmed in later studies by Rosowsky and Ellingwood (1990). Therefore, although load
duration is important, the damage tends to accumulate over one load pulse for realistic random load
histories.
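The single-pulse behavior can be seen in a minimal Monte Carlo sketch of damage accumulation, assuming the exponential damage rate form dα/dt = exp(−A + Bσ) with the Douglas fir EDRM constants from Table 29-1; the Bernoulli pulse load history itself is illustrative.

```python
import math
import random

def edrm_fails(stress_history, dt_days, A, B):
    """Accumulate damage under the exponential damage rate model,
    d(alpha)/dt = exp(-A + B*sigma(t)), over a piecewise-constant
    stress-ratio history; failure when alpha reaches 1."""
    alpha = 0.0
    for sigma in stress_history:
        alpha += math.exp(-A + B * sigma) * dt_days
        if alpha >= 1.0:
            return True, alpha
    return False, alpha

random.seed(1)
# Illustrative Bernoulli pulse history: rare high stress-ratio pulses
# (0.9) on top of a sustained low stress ratio (0.3), daily steps.
history = [0.9 if random.random() < 0.02 else 0.3 for _ in range(5000)]
failed, damage = edrm_fails(history, dt_days=1.0,
                            A=40.00, B=49.75)   # Table 29-1, EDRM row
```

With these constants the sustained low stress contributes negligible damage (exp(−40 + 49.75 × 0.3) is on the order of 10⁻¹¹ per day), so failure, when it occurs, is driven almost entirely by a single high pulse, consistent with the simulation findings cited above.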
Table 29-1. Representative Constants for Barrett and Foschi Damage Model and Exponential Damage Rate Model

Material                            σ0    A                    B      C      Model  Reference
Small, clear Douglas fir specimens  0.2   4.08 × 10^7 day^−1   34.2   0.019  BFDM   Barrett and Foschi (1978)
Western hemlock (2 × 6, No. 2)      0.5   1.73 × 10^7 day^−1   34     0.036  BFDM   Foschi and Barrett (1982)
Douglas fir select structural             40.00 ln(day)        49.75         EDRM   Gerhards and Link (1983)
Applications in Timber Structures 691
Load duration was incorporated into a load and resistance factor design (LRFD) format using damage models (ASCE, 1988; Foschi et al., 1989). The method consisted of examining resistance factors without including load duration, φNL, and then using damage models to determine the resistance factor including load duration, φL. The load duration factor was then determined by calculating φNL for a specific case, say, glulam in bending, and a specific target reliability index, β0. The corresponding φL for that target reliability index was then picked from a graph similar to that shown in Fig. 29-1. Different graphs are required for different loads. The load duration factor λ was then found from

λ = φL/φNL    (29-3)
This procedure was used for all failure modes in which load duration needed to be included.
The effect of moisture content (MC) on the load duration effect has been examined (Fridley et al.,
1992a). Considering steady state environments, load duration behavior is unaffected by moisture content
as long as the change in the MOR is appropriately modified for the environmental change. The work
of Fridley et al. (1992a) was combined with the EDRM to examine the effect of moisture content on
an LRFD format design code (Rosowsky and Fridley, 1992). They suggested a modification factor of
0.85 when MC > 19% and a factor of 1.0 for MC < 15%. The first factor is similar in magnitude to
the existing value in the NDS, but the second factor is less than the existing value (NFPA, 1991).
Rosowsky and Fridley (1992) also discussed the possibility of including a factor that accounts for
uncertainties in the actual moisture environment a member will encounter over its life. This is an
interesting concept that requires further consideration.
The load duration effect is a result of creep. However, because load duration has been considered
more important than creep behavior, probabilistic studies of creep have, until recently, been nonexistent.
Recently, Fridley et al. (1992b) modeled creep of lumber, using a four-element viscoelastic model. In
this model creep is not only a function of the applied load; the model also considers thermal and
moisture effects, including the interaction between applied stress and changing moisture content. The
strain is modeled by

ε(t) = T/Ke + (T/Kk)[1 − exp(−Kk t/μk)] + T t/μv    (29-4)
Figure 29-1. Resistance factor with the load duration effect included.
where ε(t) is the strain history, T is a constant stress, and Ke, Kk, μk, and μv are model constants. The mean and COV for each of these four constants were determined for six combinations of temperature and relative humidity (RH). The data were obtained from tests on Douglas fir lumber, 38 × 89 mm (nominal 2 × 4 in.). Each constant was distributed lognormally. The mean values of the constants were then related to the moisture content, the temperature, and the mean values of the constants at 22.8°C (73°F) and 50% RH. The COV was found to be essentially invariant to temperature and moisture
changes. This model could prove valuable for examining serviceability reliability for wood members
and wood systems.
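Equation (29-4) can be evaluated directly. The sketch below separates the three terms; the parameter values are purely illustrative, not the fitted lognormal means of the cited study.

```python
import math

def creep_strain(t_days, T, Ke, Kk, mu_k, mu_v):
    """Evaluate the four-element model of Eq. (29-4): instantaneous
    elastic, delayed (Kelvin), and viscous flow terms under constant
    stress T. All parameter values used below are illustrative."""
    elastic = T / Ke
    delayed = (T / Kk) * (1.0 - math.exp(-Kk * t_days / mu_k))
    viscous = T * t_days / mu_v
    return elastic + delayed + viscous

e0 = creep_strain(0.0, 10.0, Ke=1.0e4, Kk=5.0e4, mu_k=1.0e6, mu_v=1.0e9)
e1 = creep_strain(365.0, 10.0, Ke=1.0e4, Kk=5.0e4, mu_k=1.0e6, mu_v=1.0e9)
```

In a reliability setting the four constants would be sampled from their fitted lognormal distributions rather than fixed, giving a distribution of strain histories rather than a single curve.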
Reliability analyses have been performed on layered columns and spaced columns made by joining
two or more wood members with mechanical connectors such as nails or bolts (Malhotra and Sukumar,
1991). Monte Carlo simulations, 2000 for each column configuration, were used to develop the buckling
strength distributions for the columns. The reliability index was then determined assuming load and
resistance are each lognormally distributed. The authors also developed a FORM approach for calcu-
lation of 13. Both methods gave similar results. The authors discussed application of their results to
code design criteria.
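The lognormal-lognormal reliability index used in such analyses has a standard closed form; the sketch below evaluates it for illustrative column statistics, not the data of the cited study.

```python
import math

def beta_lognormal(mean_r, cov_r, mean_q, cov_q):
    """Reliability index when resistance R and load Q are both
    lognormal: beta = (lam_R - lam_Q) / sqrt(zeta_R^2 + zeta_Q^2),
    with lam and zeta the lognormal parameters of each variable."""
    zr2 = math.log(1.0 + cov_r**2)
    zq2 = math.log(1.0 + cov_q**2)
    lam_r = math.log(mean_r) - 0.5 * zr2
    lam_q = math.log(mean_q) - 0.5 * zq2
    return (lam_r - lam_q) / math.sqrt(zr2 + zq2)

# Illustrative buckling-strength and load statistics (kN)
beta = beta_lognormal(mean_r=60.0, cov_r=0.25, mean_q=20.0, cov_q=0.30)
```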
Column design applications of a FORM, including buckling, can also be found in Foschi et al. (1989).
The Canadian effort (Foschi et al., 1989) produced a document that covers a wide range of topics
relating to single wood bending members. Basic FORM analyses were performed for the bending
strength and shear strength of single lumber members, built-up lumber beams, and glulam beams.
Incorporation of size effect and load duration is included and discussed as appropriate. Basic service-
ability (deflection limit) reliability is also examined. All the analyses were performed to allow calibration
to an LRFD format.
g = 1 − ft/Ft − fb/Fb    (29-5)

where ft and fb are the applied tensile stress and bending stress, respectively, and Ft and Fb are the ultimate tensile stress and ultimate bending stress, respectively. Note: Fb = MOR.
The question of correlation must be dealt with in this analysis. ft and fb may be correlated, and estimation of that correlation depends on the load case. The question of correlation between Ft and Fb has been addressed. Because Ft and Fb cannot both be determined in one piece of lumber, they are each related to a property that can be evaluated nondestructively, such as the bending MOE, and then the statistical relationship between Ft and Fb is determined.
The above limit state function and method for relating Ft and Fb were used by Suddarth et al. (1978) in their analysis of lumber in combined bending and tension. Tichy (1983), using the procedure to relate Ft and Fb, found an estimated correlation coefficient of 0.31 between Ft and Fb. The difficulty was that
[Figure 29-2 graphic: reliability index (0 to 3.0) versus Ln/Dn (0 to 8) for Fir (North); Spruce-Pine-Fir and Hem-Fir; Southern Pine; Douglas fir-Larch; and Douglas fir-Larch (North).]
Figure 29-2. Variation of the reliability index as a function of species (prior to the in-grade testing program)
(Source: Adapted from Bulleit [1985]).
he found that the standard error of the estimate was almost three times the value of the estimate. Thus,
any reliability analyses that require the correlation between Ft and Fb should include a range of possible values.
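The sensitivity to the assumed Ft-Fb correlation can be explored directly by Monte Carlo. The sketch below sweeps the correlation coefficient for an assumed linear-interaction limit state, with illustrative lognormal strengths and fixed applied stresses; none of these numbers come from the cited studies.

```python
import numpy as np

def pf_combined(rho, n=200_000, seed=0):
    """Monte Carlo failure probability for an assumed linear
    interaction limit state g = 1 - ft/Ft - fb/Fb, with Ft and Fb
    jointly lognormal at correlation rho. Distributions and applied
    stresses are illustrative only."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    Ft = 20.0 * np.exp(0.25 * z[:, 0])   # MPa, median 20
    Fb = 35.0 * np.exp(0.25 * z[:, 1])   # MPa, median 35
    ft, fb = 6.0, 10.0                   # applied stresses, MPa
    g = 1.0 - ft / Ft - fb / Fb
    return float(np.mean(g < 0.0))

pf_independent = pf_combined(rho=0.0)
pf_correlated = pf_combined(rho=0.9)
```

Sweeping rho over its plausible range turns the uncertain correlation into a band of failure probabilities rather than a single point estimate, which is exactly the practice recommended above.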
4.4.2. Compression plus bending. Reliability analyses for combined compression plus bending
require that correlation be included and require a rational approach to dealing with the load interactions.
For the U.S. effort toward a prestandard document (ASCE, 1988) an interaction equation suggested by
Zahn (1986) was used. Correlation was dealt with by letting all possible correlations have a correlation
coefficient of 1.0. This has been shown to be conservative for typical beam-column interaction equa-
tions (Zahn, 1990). The reliability analyses were then performed using a FORM.
The Canadian effort also used a FORM for the analyses. The bending-compression interaction was
included by using a moment capacity/axial load capacity approach similar to that used in reinforced
concrete (Foschi et al., 1989). They considered two eccentricities with either a uniformly loaded beam or
a single concentrated load. The correlation between the MOE and ultimate compression stress was
assumed to be 0.6. The derivatives required for the FORM were determined by numerical differentiation.
[Figure 29-3 graphic: reliability index (0 to 3.0) versus Ln/Dn (0 to 8) for Select Structural, No. 1, No. 2, and No. 3 grades.]
Figure 29-3. Variation of the reliability index as a function of grade for hemlock-fir 2 X 8s (prior to the in-grade
testing program).
Probabilistic analyses of connections of wood members are limited. This is an area where further
research is required.
In the development of the Canadian limit states code, the only connections considered were glulam rivet connections in tension (Foschi et al., 1989). These are connections consisting of specially designed
nails that are used with steel plates to hold glulam members together. The new design code has implied β values from 3.0 to 3.3 for those connections. This seems potentially low, because connections should generally exhibit implied reliability levels greater than those of the members they connect.
The prestandard report discusses reliability analyses of bolted connections (ASCE, 1988). Two sample analyses were performed, using yield theory (Soltis and Wilkinson, 1987) and using data from bolted connection tests (Soltis et al., 1986). Using a FORM, yield theory gave β equal to about 4, and β equaled about 3 for the tested connections. This is not surprising, because failure in the tested connections was defined as a proportional limit, and the yield theory was more indicative of ultimate capacity. Using all U.S. data available, Zahn (1992) examined the ultimate capacity reliability of bolted wood connections, and found β equal to 5.0 for bolted connections made of softwoods and β equal to about 6.0 for hardwoods.
simulated in each group. The floors were loaded with a uniformly distributed ramp load. When a joist
failed, that is, its maximum stress exceeded its MOR, then its stiffness was reduced to 10% of its
original value and the loading continued. Ultimate capacity was defined as the inability to carry in-
creasing load. Three important results from this study are as follows: (1) The variability of system
ultimate load is much lower than the variability in the lumber MOR, for example, a COV on MOR of
0.41 versus a COV of 0.11 for system ultimate load; (2) the system ultimate capacity ranged from just over the load that caused the first member to fail to more than twice that load; and (3) ultimate capacity of the system generally occurs when two adjacent joists rupture.
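The system effect described above can be reproduced qualitatively with a deliberately simplified load-sharing model. The sketch below is a toy stand-in for the cited finite-element simulations, with all modeling choices (stiffness-proportional load sharing, lognormal capacities, ramp loading, the two-adjacent-rupture criterion applied to a bare capacity model) assumed for illustration.

```python
import numpy as np

def floor_ultimate_load(n_joists, rng, member_cov=0.4):
    """Toy load-sharing model of a joist floor: total load is shared
    in proportion to joist stiffness; a joist whose share exceeds its
    (lognormal, unit-mean) capacity ruptures and retains 10% of its
    stiffness; system failure is two adjacent ruptured joists."""
    s = np.sqrt(np.log(1.0 + member_cov**2))
    cap = np.exp(-0.5 * s * s + s * rng.standard_normal(n_joists))
    k = np.ones(n_joists)
    failed = np.zeros(n_joists, dtype=bool)
    for w in np.linspace(0.1, 5.0, 500):        # ramp the total load
        share = w * k / k.sum() * n_joists      # stiffness-weighted
        failed |= share > cap
        k[failed] = 0.1
        if np.any(failed[:-1] & failed[1:]):    # two adjacent down
            return w
    return 5.0

rng = np.random.default_rng(42)
ult = np.array([floor_ultimate_load(10, rng) for _ in range(300)])
system_cov = ult.std() / ult.mean()
```

Even this crude model shows the variance reduction: the system COV comes out well below the 0.4 member COV, because ultimate capacity is governed by low order statistics of the member capacities rather than by any single member.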
Tests on wood-stud walls in bending also showed that rupture of two adjacent members prevents
the wall from carrying additional lateral load (Polensek, 1976). This failure criterion combined with a
finite element analysis for wood-stud walls (FINWALL) was used to simulate the behavior of walls
under lateral load (Polensek and Gromala, 1984). The simulations used in-grade test-based load-
deflection curves for the studs, and thus correlation between MOE and MOR was automatically included.
FINWALL includes two-way and partial composite action. Simulations were performed on wood-stud
walls with various types of sheathing and either Douglas fir or southern pine studs. A three-parameter
Weibull distribution was fit to the results for ultimate load, and also for deflection under a lateral load
of 30 lb/ft². The distribution parameters are presented in the paper. Further work by Polensek and Kazic
(1991) considered the reliability for wall studs and their coverings under combined bending and com-
pression. They are modeled as I-beams with interlayer slip between the stud and its coverings. The
stud/covering system exhibits four stages prior to reaching ultimate capacity. The probability of entering
each of these stages under 50-year extreme wind load, considering North Head, Washington, and Key
West, Florida, was examined using a FORM. The reliability index for collapse ranged from about 3 to
4. The probability of entering the first stage, a proportional limit in the tension joints between the wall
stud and the gypsum wall board on the inside of the wall, was 1.0 for combined axial and wind loads.
Foschi (1984) included system behavior in an analysis of the reliability with respect to first member
failure for wood joist floors. The system analysis was performed using a program developed by Foschi
(1982) called FAP (floor analysis program). This reliability analysis also included load duration. The
BFDM fit to hemlock-fir, 38 × 140 mm (nominal 2 × 6 in.) lumber was used. Monte Carlo simulations
for a 30-year life were performed for snow load histories from Vancouver, British Columbia and Quebec
City, Quebec. Five thousand trials were performed for span lengths ranging from 3.66 to 4.57 m (12
to 15 ft). The first member failure reliability index ranged from 4.4 to 3.5 at Vancouver and from 2.8
to 1.4 for Quebec City. Foschi also determined conditional probabilities of failure at 1, 15, and 30
years, that is, the probability that the member will fail in the next year given that it has survived to the
ith year, where i = 1, 15, or 30 years. These results showed that a constant failure rate was typical as
long as β < 2. This supports the observations made by Bulleit and Schoch (1986).
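Conditional failure probabilities of this kind can be estimated directly from a sample of simulated lifetimes, as in the sketch below. Exponential lifetimes are used purely to illustrate the constant-failure-rate case (their hazard is constant by construction), and the 0.02/year rate is an arbitrary assumption.

```python
import random

def conditional_failure_prob(lifetimes, year):
    """Estimate P(failure during the next year | survival to `year`) from a
    sample of lifetimes expressed in years."""
    at_risk = [t for t in lifetimes if t > year]
    if not at_risk:
        return float("nan")
    return sum(1 for t in at_risk if t <= year + 1) / len(at_risk)

rng = random.Random(0)
lifetimes = [rng.expovariate(0.02) for _ in range(200000)]
for y in (1, 15, 30):
    # each estimate is close to 1 - exp(-0.02), about 0.0198
    print(y, round(conditional_failure_prob(lifetimes, y), 4))
```

The empirical analogue of this calculation, applied to simulated wood members with damage accumulation, is what showed the near-constant failure rate reported above for β < 2.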
The program FAP was combined with a FORM algorithm to develop a system factor for design by
Canadian standards (Folz and Foschi, 1989; Foschi et al., 1989). A system factor is a multiplicative
factor that increases the allowable load on a single member when it is used in a system. Again, in this
work, only first member failure was considered. Load duration was not included. The FORM algorithm
used numerical differentiation and included the effects of multiple correlation, using the method of Der
Kiureghian and Liu (1986). Ditlevsen bounds on the probability of failure were then found for a series
system, because first member failure was the limit state. System factors found by this method ranged
from 1.4 to 1.7.
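The Ditlevsen bounds used in that work can be sketched for a generic series system as follows; the component probabilities p and joint probabilities pij here are assumed for illustration (in practice the joint terms come from the correlated bivariate normal integral), not taken from the cited study.

```python
def ditlevsen_bounds(p, pij):
    """Ditlevsen (narrow) bounds on the failure probability of a series
    system.  p[i] are component failure probabilities and pij[i][j] the joint
    probabilities P(failure i and failure j); components are assumed already
    ordered, typically by decreasing p[i]."""
    n = len(p)
    lower = upper = p[0]
    for i in range(1, n):
        lower += max(0.0, p[i] - sum(pij[i][j] for j in range(i)))
        upper += p[i] - max(pij[i][j] for j in range(i))
    return lower, upper

# Hypothetical four-member series system with modest positive correlation:
# each joint probability is simply assumed as 20% of the smaller component one.
p = [1e-3, 8e-4, 5e-4, 5e-4]
pij = [[0.2 * min(p[i], p[j]) if i != j else 0.0 for j in range(4)]
       for i in range(4)]
lo, hi = ditlevsen_bounds(p, pij)
print(lo, hi)  # bounds bracket the series-system failure probability
```

Ordering components by decreasing failure probability generally tightens the bounds; with zero joint probabilities the upper bound reduces to the simple sum of the p[i].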
Rosowsky and Ellingwood (1991) used simulations to examine the system factor. They assumed a
range of load redistribution schemes: (1) load shed to adjacent members when a member fails, (2) load
shed to all members, with the load being inversely proportional to their distance from the failed member,
and (3) load shed equally to all unfailed members. They did not include any partial composite action.
Load duration was included using the EDRM (Gerhards and Link, 1986) and the BFDM (Foschi and
698 Applications in Timber Structures
Barrett, 1982) with a stochastic floor live load model. The system failure criterion was failure of any
two joists for systems with fewer than eight members and failure of two adjacent joists for systems
with eight or more members. They determined that a system factor in the neighborhood of 1.3 was
reasonable for an LRFD specification. Two of their results are both interesting and somewhat contro-
versial. The first is that the system factor was not sensitive to assumptions about load redistribution.
This implies that for ultimate capacity reliability of wood systems the accuracy of the analysis of the
system is not as critical as it at first appears. The second result is that load duration affects the system
factor. The magnitude of the system factor and load duration are coupled. If one does not include load
duration effects, then the analysis tends to overestimate the magnitude of the system factor. Both of
these results need to be confirmed by further research.
The author (Bulleit, 1986, 1987, 1990) has been developing techniques for determining the ultimate
capacity reliability of parallel member wood structural systems. The first attempt used a simple approach,
which assumed each member failure was an independent event (Bulleit, 1986). System ultimate capacity
was defined as either failure of two adjacent members or failure of a set number of members. This
approach was only of limited value because of the significant limitations associated with the assumption
of independent failure events.
In more recent work, the failure of a sheathed lumber system was modeled as a Markov chain
(Bulleit, 1987, 1990). This model was initially limited such that only one member was allowed to fail
under a given load cycle, where a load cycle is an increment of the load history, such as 1 year for a
snow load history. System failure was assumed to occur with failure of two adjacent members. In the
Markov formulation, each damage state refers to a member failure and the probability of entering a
new damage state is a function only of the present damage state. Bulleit (1987) assumed that under
any single load cycle, the system could shift only from having i failed members to having i + 1 failed
members. The probability transition matrix [P] contains the terms P_{i,j} defining the probability that the
system enters the jth state during the next load cycle given that it is in the ith state in the present cycle.
Initial work (Bulleit, 1987) assumed that the Markov process was stationary, which means that the
probability transition matrix [P] does not change in time. When formulated for the two-adjacent member
system strength limit state definition and with shifts limited to one member failure per cycle, the P_{i,i+1}
terms reflect the probability of a member failing that is not adjacent to a previously failed member, and
the P_{i,n} terms are the probabilities that a member fails that is adjacent to a failed member, an event
indicating system failure. The probability of such a system being in each state after k cycles is defined
by the kth state vector, {P_k}, where, for stationary processes,

{P_k} = {P_0}[P]^k                                                                    (29-6)
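Propagating the state vector of Eq. (29-6) amounts to repeated multiplication by the transition matrix. A minimal sketch with an assumed three-state chain (intact; one nonadjacent member failed; system failed, absorbing); the transition probabilities are hypothetical, chosen only for illustration.

```python
def markov_state(p0, P, k):
    """State probabilities of a stationary Markov chain after k load cycles:
    {P_k} = {P_0}[P]^k with a row-stochastic transition matrix P."""
    state = list(p0)
    for _ in range(k):
        state = [sum(state[i] * P[i][j] for i in range(len(P)))
                 for j in range(len(P[0]))]
    return state

# Assumed 3-state chain: 0 = intact, 1 = one (nonadjacent) member failed,
# 2 = two adjacent members failed (system failure, absorbing state).
P = [
    [0.95, 0.04, 0.01],   # intact: survive, lose one member, or collapse
    [0.00, 0.90, 0.10],   # one failure: survive or fail an adjacent member
    [0.00, 0.00, 1.00],   # collapse is absorbing
]
p0 = [1.0, 0.0, 0.0]      # start undamaged
p30 = markov_state(p0, P, 30)
print(p30[2])  # probability of system failure within 30 load cycles
```

The absorbing third state plays the role of the P_{i,n} terms: once entered, the system never leaves it, so its probability is nondecreasing in the number of cycles.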
The stationarity assumption is questionable, however, given the complex nature of stochastic damage
accumulation associated with member failure (Rosowsky and Ellingwood, 1990). A nonlinear relationship
between load and member failure probability exists as a result of the highly nonlinear damage accumulation
models typically used to model the cumulative damage of wood. Damage increments during successive loads
are apparently not statistically independent, although the results discussed in Section 3.5, indicating a
constant failure rate for wood members subjected to damage accumulation, suggest that the assumption of
stationarity may be a reasonable approximation. Some work based on system simulations has examined the possibility of
allowing more than one member to fail during a single load cycle and the inclusion of nonstationarity
(Bulleit and Vacca, 1989; Bulleit, 1990).
Serviceability reliability of wood parallel member systems has been somewhat neglected. Some
probabilistic modeling of deflection behavior of beams (Foschi et al., 1989) was mentioned in Section
4.3. A limited amount of work on reliability for floor vibration limit states has been performed. Reliability
analyses were performed using the floor analysis program FAP (Foschi et al., 1989). The analyses
were based on a performance function related to a static deflection limit. This static deflection limit
was established from a survey of owner attitudes about the performance of their floors (Onysko, 1986).
Foschi and Gupta (1987) attempted, with limited success, to examine the reliability of floors, using
criteria based on vibration frequency and amplitude.
The resistance distribution for a metal plate wood truss was developed using simulation results (Gupta
and Gebremedhin, 1992). Three truss models were used: pin connected, rigidly connected, and semi-
rigidly connected. Member property correlations were included using the multivariate normal approach
of Taylor and Bender (1991). Failure of the truss was defined as failure of a single member. The member
failure criterion was based on the combined bending plus axial stress limit state equation shown in Eq.
(29-5). This form of the interaction equation was used for both tension and compression stresses.
Moment magnification and buckling were not included. Member failures alone were considered; con-
nection failures were not included in the analyses. For each of the three truss models, 400 trusses were
simulated and the failure loads for each were fit to a lognormal distribution. The reliability indices for
each of three truss types under snow load were 3.0, 3.5, and 3.5 for pin, rigid, and semirigid, respec-
tively. These values must be considered upper bounds because load duration, connection failure, and
buckling failures were not included.
Rojiani and Tarbell (1984) used mean value methods and FORM to estimate the reliability of truss
members under combined loads. Member property correlations were included. They found member
reliability indices ranging from β = 2.0 to 5.0. No attempt was made to estimate the system reliability
using this information.
Foschi et al. (1989) examined the reliability of pin-connected wood trusses. Failure was defined as
failure of the first tension member. The truss was viewed as a series system composed only of tension
members. Failure probability bounds were found from reliability analyses of each of the tension mem-
bers. By considering two locations in Canada, three species, various lengths of trusses, and two heights
of parallel chord trusses, the reliability index ranged from 2.0 to 5.5. The length of the truss had the
greatest influence on reliability, with long trusses (9-12 m) exhibiting much lower reliabilities than
short trusses (4-6 m). Thus, size effect in trusses is important but has yet to be incorporated into design
code provisions.
that the assumed deflection pattern of the bridge affected the ultimate capacity calculation. The as-
sumption of uniform deflection for all members proved to be conservative.
The work of Bakht and Jaeger (1991) has led to load sharing factors (system factors) for multiple
member bridges. Using the conservative assumption that all members in the system are subjected to
the same deformation at ultimate capacity, they determined system factors as a function of number of
members in the system and grade of lumber. They determined their system factors by setting the
reliability for a single component equal to that for multiple components, recognizing that the variability
in ultimate load decreased as the number of members in the system increased. Their system factors
range from 1.13 to 1.69, with the smallest value occurring for the system with the fewest number of
members and the least variable material. This is the expected behavior for a system factor of this sort.
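The mechanism behind such factors can be illustrated in the lognormal reliability format. The sketch below assumes, hypothetically, that the system capacity has the same mean as a single member but a COV reduced by √n (the ideal equal-deflection case), and computes the load increase that keeps β unchanged; it illustrates the idea, not Bakht and Jaeger's actual calculation.

```python
import math

def system_factor(beta, cov_single, n, cov_load=0.25):
    """Multiplier on the allowable load that keeps the lognormal-format
    reliability index `beta` unchanged when the capacity COV drops from
    cov_single to cov_single / sqrt(n).  All inputs are assumptions."""
    v1 = math.sqrt(cov_single ** 2 + cov_load ** 2)
    vn = math.sqrt(cov_single ** 2 / n + cov_load ** 2)
    return math.exp(beta * (v1 - vn))

for n in (2, 4, 8):
    print(n, round(system_factor(beta=3.0, cov_single=0.4, n=n), 2))
```

With these assumed numbers the factor grows from about 1.33 at n = 2 to about 1.74 at n = 8, and it is smaller for less variable material, the same trends noted above.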
7. CONCLUDING REMARKS
A wide range of applications of probabilistic mechanics and probabilistic modeling to wood structures
has been described in this chapter. The coverage of topics is intended to be deep enough to familiarize
the reader with the procedures and results from the various studies. A comprehensive list of references
is also provided. The reader with a deeper interest in any given topic should find enough references to
gain further knowledge.
REFERENCES
AITC (American Institute of Timber Construction) (1985). Timber Construction Manual, 3rd ed. Vancouver, Wash-
ington: American Institute of Timber Construction.
ANSI (American National Standards Institute) (1982). Minimum Design Loads for Buildings and Other Structures.
ANSI A58.1-1982. New York: American National Standards Institute.
APLIN, E. N., and F. J. KEENAN (1977). Limit states design in wood. Forest Products Journal 27(7):14-18.
APLIN, E. N., D. W. GREEN, J. W. EVANS, and J. D. BARRETT (1986). The Influence of Moisture Content on the
Flexural Properties of Douglas-Fir Dimension Lumber. USDA Forest Service Research Paper FPL 475.
Madison, Wisconsin: Forest Products Laboratory.
ASCE (American Society of Civil Engineers) (1988). Load and Resistance Factor Design for Engineered Wood
Construction. New York: American Society of Civil Engineers.
ASCE (American Society of Civil Engineers) (1992). Load and Resistance Factor Design: Specification for En-
gineered Wood Construction. Standards Committee on Design of Engineered Wood Construction. New York:
American Society of Civil Engineers.
BAKHT, B. (1983). Statistical analysis of timber bridges. ASCE Journal of Structural Engineering 109(8):1761-
1779.
BAKHT, B., and L. G. JAEGER (1991). Load sharing factors in timber bridge design. Canadian Journal of Civil
Engineering 18:312-319.
BARRETT, J. D. (1974). Effect of size on tension perpendicular to grain strength of Douglas-fir. Wood and Fiber
6(2):126-143.
BARRETT, J. D., and R. O. FOSCHI (1978). Duration of load and probability of failure in wood. I and II. Canadian
Journal of Civil Engineering 5(4):505-532.
BECHTEL, F. K. (1988). A model to account for length effect in the tensile strength of lumber. In: Proceedings of
the 1988 International Conference on Timber Engineering, Vol. 1. Madison, Wisconsin: Forest Products
Research Society, pp. 355-361.
BECHTEL, F. K. (1992). Length effect in a correlated element model of tensile strength. Forest Products Journal
42(2):53-56.
BENDER, D. A., F. E. WOESTE, E. L. SCHAFFER, and C. M. MARX (1985). Reliability Formulation for the Strength
and Fire Endurance of Glued-Laminated Beams. USDA Forest Service Research Paper FPL 460. Madison,
Wisconsin: Forest Products Laboratory.
BOHANNON, B. (1966). Effect of Size on Bending Strength of Wood Members. USDA Forest Service Research Paper
FPL 56. Madison Wisconsin: Forest Products Laboratory.
BULLEIT, W. M. (1985). Relative reliability of dimension lumber in bending. ASCE Journal of the Structural
Division 111(9):1948-1963.
BULLEIT, W. M. (1986). Reliability model for wood structural systems. ASCE Journal of the Structural Division
112(5): 1125-1132.
BULLEIT, W. M. (1987). Markov model for wood structural systems. ASCE Journal of Structural Engineering
113(9):2023-2031.
BULLEIT, W. M. (1990). Experiences with a Markov model for structural systems with time variant member
resistances. Structural Safety 7(2-4):209-218.
BULLEIT, W. M., and J. G. ERNST (1987). Resistance factor for wood in bending or tension. ASCE Journal of
Structural Engineering 113(5): 1079-1091.
BULLEIT, W. M., and C. G. SCHOCH, III. (1986). Simulation of load-duration effects in wood. Wood Science and
Technology 20:157-167.
BULLEIT, W. M., and P. J. VACCA, JR. (1989). In-time behavior of wood structural systems. In: Proceedings of the
2nd Pacific Timber Engineering Conference, Vol. 1. New Zealand: University of Auckland, pp. 203-205.
BULLEIT, W. M., and J. L. YATES (1991). Probabilistic analysis of wood trusses. ASCE Journal of Structural
Engineering 117(10):3008-3025.
BURK, A. G., and D. A. BENDER (1989). Simulating finger-joint performance based on localized constituent lumber
properties. Forest Products Journal 39(3):45-50.
CHALK, P., and R. B. COROTIS (1980). A probability model for design live loads. ASCE Journal of the Structural
Division 106(ST10):2017-2034.
COROTIS, R. B., and w.-Y. TSAY (1983). Probabilistic load duration model for live loads. ASCE Journal of Struc-
tural Engineering 109(4):859-874.
CRISWELL, M. E. (1979a). Response of realistic wood joist floors. In: Probabilistic Mechanics and Structural
Reliability. New York: American Society of Civil Engineers, pp. 156-160.
CRISWELL, M. E. (1979b). Selection of limit states for wood floor design. In: Probabilistic Mechanics and Structural
Reliability. New York: American Society of Civil Engineers, pp. 161-165.
CRISWELL, M. E. (1983). New floor design procedures. In: Wall and Floor Systems: Design and Performance of
Light Frame Structures. Madison, Wisconsin: Forest Products Research Society, pp. 63-86.
CRISWELL, M. E. (1990). Enhancement of system performance by component interaction in wood framing subas-
semblies. Structural Safety 7:281-290.
CSA (Canadian Standards Association) (1989). Engineering Design in Wood (Limit States Design). CAN/CSA-
0.86.1-M89. Toronto, Ontario, Canada: Canadian Standards Association.
DER KIUREGHIAN, A. (1985). Finite element methods in structural safety studies. In: Structural Safety Studies. New
York: American Society of Civil Engineers, pp. 40-52.
DER KIUREGHIAN, A., and P.-L. LIU (1986). Structural reliability under incomplete probability information. ASCE
Journal of Engineering Mechanics 112(1):85-103.
ELLINGWOOD, B. R. (1981). Reliability of wood structural elements. ASCE Journal of the Structural Division
107(ST1):73-87.
ELLINGWOOD, B. R., T. V. GALAMBOS, J. G. MACGREGOR, and C. A. CORNELL (1980). Development of a Probability
Based Load Criterion for American National Standard A58. Special Publication No. 577. Washington, D.C.:
National Bureau of Standards.
FOLZ, B., and R. O. FOSCHI (1989). Reliability-based design of wood structural systems. ASCE Journal of Structural
Engineering 115(7):1666-1680.
FOSCHI, R. O. (1982). Structural analysis of wood floor systems. ASCE Journal of the Structural Division 108(ST7):
1557-1574.
FOSCHI, R. O. (1984). Reliability of wood structural systems. ASCE Journal of Structural Engineering 110(12):
2995-3013.
FOSCHI, R. O., and J. D. BARRETT (1976). Longitudinal shear strength of Douglas-fir. Canadian Journal of Civil
Engineering 3(2):198-208.
FOSCHI, R. O., and J. D. BARRETT (1980). Glued-laminated beam strength: A model. ASCE Journal of the Structural
Division 106(ST8):1735-1754.
FOSCHI, R. O., and J. D. BARRETT (1982). Load-duration effects in western hemlock lumber. ASCE Journal of the
Structural Division 108(ST7):1494-1510.
FOSCHI, R. O., and A. GUPTA (1987). Reliability of floors under impact vibration. Canadian Journal of Civil
Engineering 14(5):683-689.
FOSCHI, R. O., B. R. FOLZ, and F. Z. YAO (1989). Reliability-Based Design of Wood Structures. Report No. 34.
Vancouver, British Columbia, Canada: Department of Civil Engineering, University of British Columbia.
FOX, S. P. (1974). Strength and deformation of pitched-tapered Douglas-fir glued-laminated beams. Wood and Fiber
6(3):242-252.
FRIDLEY, K. J., R. C. TANG, L. A. SOLTIS, and C. H. YOO (1992a). Hygrothermal effects on load-duration behavior
of structural lumber. ASCE Journal of Structural Engineering 118(4):1023-1038.
FRIDLEY, K. J., R. C. TANG, and L. A. SOLTIS (1992b). Creep behavior model for structural lumber. ASCE Journal
of Structural Engineering 118(8):2261-2277.
GERHARDS, C. C. (1977). Effect of Duration and Rate of Loading on Strength of Wood and Wood-Based Materials.
USDA Forest Service Research Paper, FPL 283. Madison, Wisconsin: Forest Products Laboratory.
GERHARDS, C. C. (1979). Time-related effects of loading on wood strength: A linear cumulative damage theory.
Wood Science 11(3):139-144.
GERHARDS, C. C. (1988). Effect of grade on load duration of Douglas-fir lumber in bending. Wood and Fiber
Science 20(1):146-161.
GERHARDS, C. C., and C. L. LINK (1983). Use of a cumulative damage model to predict load duration characteristics
of lumber. In: IUFRO Division 5 Conference, Madison, Wisconsin.
GERHARDS, C. C., and C. L. LINK (1986). Effects of loading rate on bending strength of Douglas-fir 2 by 4's.
Forest Products Journal 36(2):63-66.
GOODMAN, J. R., M. D. VANDERBILT, and M. E. CRISWELL (1983). Reliability-based design of wood transmission
line structures. ASCE Journal of Structural Engineering 109(3):690-704.
GOPU, V. J., and M. H. MANSOUR (1989). Influence of MOE variability on stresses in tapered-curved laminated
timber beams of constant depth. Forest Products Journal 39(3):39-44.
GREEN, D. W., and J. W. EVANS (1988). Mechanical Properties of Visually Graded Lumber, Vols. 1-8. Springfield,
Virginia: National Technical Information Service.
GREEN, D. W., and D. E. KRETSCHMANN (1991). Lumber property relationships for engineering design standards.
Wood and Fiber Science 23(3):436-456.
GREEN, D. W., C. L. LINK, A. L. DEBONIS, and T. E. MCLAIN (1986). Predicting the effect of moisture content
on the flexural properties of southern pine dimension lumber. Wood and Fiber Science 18(1):134-156.
GREEN, D. W., J. W. EVANS, J. D. BARRETT, and E. N. APLIN (1988). Predicting the effect of moisture content on
the flexural properties of Douglas-fir dimension lumber. Wood and Fiber Science 20(1):107-131.
GUPTA, R., and K. G. GEBREMEDHIN (1992). Resistance distributions of a metal-plate connected wood truss. Forest
Products Journal 42(7/8):11-16.
HAUPT, H. (1867). General Theory of Bridge Construction. New York: Appleton, pp. 60-62.
HENDRICKSON, E. M., B. R. ELLINGWOOD, and J. MURPHY (1987). Limit state probabilities for wood structural
members. ASCE Journal of Structural Engineering 113(1):88-106.
HERNANDEZ, R., D. A. BENDER, B. A. RICHBURG, and K. S. KLINE (1992). Probabilistic modeling of glued-
laminated timber beams. Wood and Fiber Science 24(3):294-306.
HOYLE, R. J., JR., W. L. GALLIGAN, and J. H. HASKELL (1979). Characterizing lumber properties for truss research.
In: Proceedings of the Metal Plate Wood Truss Conference. Madison, Wisconsin: Forest Products Research
Society, pp. 32-64.
JAEGER, L. G., and B. BAKHT (1986). Probabilistic assessment of the failure of laminated timber decks. In: Trans-
portation Research Record 1053. Washington, D.C.: Transportation Research Board, pp. 41-48.
KEENAN, F. J., J. KRYLA, and B. KYOKONG (1985). Shear strength of spruce glued laminated timber beams.
Canadian Journal of Civil Engineering 12:661-672.
KLINE, D. E., F. E. WOESTE, and B. A. BENDTSEN (1986). Stochastic model for modulus of elasticity of lumber.
Wood and Fiber Science 18(2):228-238.
LAM, F., and E. VAROGLU (1991). Variation of tensile strength along the length of lumber. Wood Science and
Technology 25:351-359.
LITTLEFORD, T. W., and R. A. ABBOTT (1978). Parallel-to-Grain Compressive Properties of Dimension Lumber
from Western Canada. Information Report VP-X-180. Vancouver, British Columbia, Canada: Forintek Can-
ada Corp.
MALHOTRA, S. K., and A. P. SUKUMAR (1991). Reliability-based design of mechanically connected built-up wood
columns. Canadian Journal of Civil Engineering 18: 171-181.
McCUTCHEON, W. J. (1977). Method for Predicting the Stiffness of Wood Joist Floor Systems with Partial Com-
posite Action. USDA Research Paper FPL 289. Madison, Wisconsin: Forest Products Laboratory.
McCUTCHEON, W. J., M. D. VANDERBILT, 1. R. GOODMAN, and M. E. CRISWELL (1981). Wood Joist Floors: Effects
of Joist Variability on Floor Stiffness. USDA Research Paper FPL 405. Madison, Wisconsin: Forest Products
Laboratory.
MCGOWAN, W. M., B. ROVNER, and T. W. LITTLEFORD (1977). Parallel-to-Grain Tensile Properties of Dimension
Lumber from Several Western Canadian Species. Information Report VP-X-172. Vancouver, British Colum-
bia, Canada: Forintek Canada Corp.
MCLAiN, T. E., A. L. DEBONIS, D. W. GREEN, E J. WILSON, and C. L. LINK (1984). The Influence of Moisture
Content on the Flexural Properties of Southern Pine Dimension Lumber. USDA Forest Service Research
Paper FPL 447. Madison, Wisconsin: Forest Products Laboratory.
McMARTIN, K. C., A. T. QUAILE, and F. J. KEENAN (1984). Strength and structural safety of long-span light wood
trusses. Canadian Journal of Civil Engineering 11(4):978-992.
MOODY, R. C., P. P. DESOUSA, and J. K. LITTLE (1988). Variation in stiffness of horizontally laminated glulam
timber beams, Forest Products Journal 38(10):39-45.
NFPA (National Forest Products Association) (1982). National Design Specification for Wood Construction. Wash-
ington, D.C.: National Forest Products Association.
NFPA (National Forest Products Association) (1991). National Design Specification for Wood Construction. Wash-
ington, D.C.: National Forest Products Association.
NOWAK, A. S., and M. K. BOUTROS (1984). Probabilistic analysis of timber bridge decks. ASCE Journal of Struc-
tural Engineering 110(12):2939-2953.
O'HALLORAN, M. R., J. A. JOHNSON, E. G. ELIAS, and T. P. CUNNINGHAM (1988). Consideration of reliability-
based design for structural composite products. Forest Products Journal 38(4):35-43.
ONYSKO, D. (1986). Serviceability criteria for residential floors based on a field study of consumer response.
Ottawa, Ontario, Canada: Forintek Canada Corp.
PEARSON, R. G. (1980). Potential of the SB and SBB distributions for describing mechanical properties of lumber.
Wood and Fiber 12(4):244-253.
PELLICANE, P. J. (1984). Application of the SB distribution to the simulation of correlated lumber properties data.
Wood Science and Technology 18:147-156.
PELLICANE, P. J. (1985). Goodness-of-fit analysis for lumber data. Wood Science and Technology 19:117-129.
PEYROT, A. H., M. E. CRISWELL, M. D. FOLSE, and J.-P. AZNAVOUR (1982). Reliability analysis of wood trans-
mission poles. ASCE Journal of the Structural Division 108(ST9):1981-1994.
POLENSEK, A. (1976). Finite element analysis of wood-stud walls. ASCE Journal of the Structural Division
102(ST7):1317-1335.
POLENSEK, A., and D. S. GROMALA (1984). Probability distributions for wood walls in bending. ASCE Journal of
the Structural Division 110(3):619-636.
POLENSEK, A., and M. KAZIC (1991). Reliability of nonlinear wood composites in bending. ASCE Journal of
Structural Engineering 117(6): 1685-1702.
ROJIANI, K. B., and K. A. TARBELL (1984). Reliability of wood members under combined stress. In: Probabilistic
Mechanics and Structural Reliability. New York: American Society of Civil Engineers, pp. 86-89.
ROSOWSKY, D. V., and B. R. ELLINGWOOD (1990). Stochastic Damage Accumulation and Probabilistic Codified
Design for Wood. Civil Engineering Report No. 1990-02-02. Baltimore, Maryland: The Johns Hopkins
University.
ROSOWSKY, D., and B. ELLINGWOOD (1991). System reliability and load-sharing effects in light-frame wood con-
struction. ASCE Journal of Structural Engineering 117(4):1096-1114.
ROSOWSKY, D. V., and K. J. FRIDLEY (1992). Moisture content and reliability-based design for wood members.
ASCE Journal of Structural Engineering 118(2):3466-3472.
SALINAS, J. J., R. G. GILLARD, and K. C. McMARTIN (1985). Strength and structural safety of long-span light
wood roof trusses. Reliability analysis using safety index. Canadian Journal of Civil Engineering 12(1):
114-125.
SEXSMITH, R. G., and S. P. FOX (1978). Limit states design concepts for timber engineering. Forest Products
Journal 23(10):49-54.
SHOWALTER, K. L., F. E. WOESTE, and B. A. BENDTSEN (1987). Effect of Length on Tensile Strength in Structural
Lumber. USDA Forest Service Research Paper FPL 482. Madison, Wisconsin: Forest Products Laboratory.
SOLTIS, L. A., and T. L. WILKINSON (1987). Bolted Connection Design. USDA Forest Service Research Report,
FPL GTR 54. Madison, Wisconsin: Forest Products Laboratory.
SOLTIS, L. A., F. K. HUBBARD, and T. L. WILKINSON (1986). Bearing strength of bolted timber joints. ASCE
Journal of Structural Engineering 112(9):2141-2154.
SUDDARTH, S. K., and F. E. WOESTE (1977). Influences of variability in loads and modulus of elasticity on wood
column strength. Wood Science 10(2):62-67.
SUDDARTH, S. K., F. E. WOESTE, and J. T. P. YAO (1975). Effect of E-variability in the deflection behavior of a
structure. Forest Products Journal 25(1):17-20.
SUDDARTH, S. K., F. E. WOESTE, and W. L. GALLIGAN (1978). Differential Reliability: Probabilistic Engi-
neering Applied to Wood Members in Bending/Tension. USDA Forest Service Research Paper FPL 303.
Madison, Wisconsin: Forest Products Laboratory.
TAYLOR, S. E., and D. A. BENDER (1991). Modeling localized tensile strength and MOE properties in lumber.
Wood and Fiber Science 23(4):501-519.
TAYLOR, S. E., D. A. BENDER, D. E. KLINE, and K. S. KLINE (1992). Comparing length effect models for lumber
tensile strength. Forest Products Journal 42(2):23-30.
THOMPSON, E. G., M. D. VANDERBILT, and J. R. GOODMAN (1977). FEAFLO: A program for the analysis of
layered wood systems. Computers and Structures 7:237-248.
TICHY, R. J. (1983). Concomitant strength relationship for lumber. ASCE Journal of Structural Engineering 109(8):
1854-1868.
TURKSTRA, C. J. (1970). Theory of structural safety. In: Solid Mechanics Study 2. Waterloo, Ontario, Canada:
University of Waterloo.
WEIBULL, W. (1939). A statistical theory of the strength of materials. In: Proceedings Swedish Royal Institute for
Engineering Research 141:45.
WHEAT, D. L., D. S. GROMALA, and R. C. MOODY (1986). Static behavior of wood-joist floors at various limit
states. ASCE Journal of the Structural Division 112(7):1677-1691.
WOESTE, F. E., S. K. SUDDARTH, and W. L. GALLIGAN (1979). Simulation of correlated lumber properties data-
a regression approach. Wood Science 12(2):73-79.
WOOD, L. (1950). Variation of Strength Properties in Wood Used for Structural Purposes. Report R1780. Madison,
Wisconsin: Forest Products Laboratory.
WOOD, L. (1951). Relation of Strength of Wood to Duration of Load. Report R1916. Madison, Wisconsin: Forest
Products Laboratory.
ZAHN, J. J. (1977). Reliability-based design procedures for wood structures. Forest Products Journal 27(3):21-28.
ZAHN, J. J. (1986). Design of wood members under combined load. ASCE Journal of Structural Engineering
112(9):2109-2126.
ZAHN, J. J. (1990). Empirical failure criteria with correlated resistance variables. ASCE Journal of Structural
Engineering 116(11):3122-3137.
ZAHN, J. J. (1992). Reliability of bolted wood connections. ASCE Journal of Structural Engineering 118(12):3362-
3376.
ZHAO, W., and F. E. WOESTE (1991). Influence of correlation on tensile strength prediction of lumber. Forest
Products Journal 41(2):45-48.
30
APPLICATIONS IN CERAMIC
STRUCTURES
1. INTRODUCTION
The interest in ceramic materials for construction of engineering components has grown considerably
during the last decade. This is not surprising because ceramics offer excellent physical properties that
are necessary to meet the demands of many high-technology applications. Examples of such properties
are high-temperature endurance, extreme wear resistance, nontoxicity, and biocompatibility. On the other
hand, the brittleness and low fracture resistance of ceramic materials can be major shortcomings. Unlike
metals, ceramics do not yield plastically under sudden load and impact, and they are usually highly
susceptible to scratches and flaws arising during production or use. Consequently, special attention must
be paid by the design engineer to avoid high peak tensile stresses and to use only specimens that are
absolutely flawless, at least when viewed macroscopically. Moreover, because of microscopic flaw size
variations the strength within a batch of ceramic specimens can vary considerably. Another problem is
that the performance behavior of ceramics is time dependent. A ceramic part can fail over time as a
result of stress-corrosion cracking (i.e., the subcritical growth of microscopic cracks inside the stressed
ceramic material resulting from water vapor or other environmental influences) even if the tensile
stresses are below the critical level.
Hence, to guarantee the reliability of a ceramic part, it is necessary to have suitable strength and
toughness data and it is imperative to investigate how these quantities vary statistically within a partic-
ular batch. Moreover, the state of stress in the component during use needs to be analyzed. In the
following, it is shown how these three ingredients, that is, suitable material data, stress analysis, and
statistical methods, can be used to obtain a lifetime diagram for a ceramic structure. The procedure
outlined here has proved to be effective in reducing the complexity (and thus costs) involved in this
process. The material constants can be determined using comparatively simple experiments, such as
bending tests with elementary ceramic specimens (notched or unnotched rectangular bars). The influence
of the actual component geometry and loading conditions can be accounted for later, during the statistical
and stress analysis.
2. NOTATIONS AND ABBREVIATIONS
2.1. Notations
A First subcritical crack growth parameter; specimen surface; index of A-type specimens
Ao Reference surface
a Major axis of Griffith ellipse; crack length parameter; indentation radius
ai Initial crack length
2a_c Critical crack length
B Height of bar specimen; index of B-type specimens
b Minor axis of Griffith ellipse
c Crack length plus radius of Vickers indent
d Width of bar specimen; thickness of ceramic annulus
E Young's modulus
e Lever arm of four-point bending specimen
f(s) Stress-density function for volumetric flaw distributions
G Energy release rate
GIc Critical energy release rate
g(s) Stress-density function for surface flaw distributions
Hv Vickers hardness
K_I Stress intensity factor for Mode I
K_II Stress intensity factor for Mode II
K_III Stress intensity factor for Mode III
K_Ic Fracture toughness
l Length of lower support of bending specimen
m Weibull modulus
N Number of specimens within a batch
n Second subcritical crack growth parameter ("n-value")
P Load applied to bar specimens; probability of failure
r Radial distance in front of crack tip; arbitrary radial point of ceramic annulus
ri Inner radius of ceramic annulus
ro Outer radius of ceramic annulus
S Surface energy per unit thickness; arbitrary static stress; arbitrary tensile stress
Su Smallest tensile stress in uniaxially stressed specimen
s Nadler's parameter
T Lifetime associated with arbitrary static stress
t Time
tc Time to failure
t_c^st Static lifetime
t_c^dyn Dynamic lifetime
U Change in elastic energy due to presence of crack (per unit thickness)
V Specimen volume
Vo Reference volume
v Subcritical crack growth velocity
Y Correction function of stress intensity factor
2.2. Abbreviations

LEFM Linear elastic fracture mechanics
SIF Stress intensity factor

3. FUNDAMENTALS OF LINEAR ELASTIC FRACTURE MECHANICS
For temperatures below 1500°F, linear elastic fracture mechanics (LEFM) has been found to be the
appropriate tool to characterize the fracture mechanical behavior of monolithic and (some) composite
ceramics. The basic concepts of LEFM, such as Griffith's fracture criterion for brittle materials, Irwin's
concept of stress intensity factors, and the standards for determining LEFM data for glasses and ceram-
ics, are briefly summarized in this section. Note that, for high-temperature applications, nonlinear effects
will become more and more relevant. One example is the occurrence of liquid glassy phases resulting
from sinter additives that exhibit viscoelastic material behavior. However, the means to account for such
phenomena are subject to current research and are not dealt with in this chapter. For a thorough intro-
duction to this topic, see the dissertations of Bornhauser (1983), Haug (1985), and Vogel (1987).
W = U + S = −πσ²a²/E' + 4γa  (30-1)

The first term in this equation, U, represents the change in elastic energy associated with the change
of the state of stress due to the presence of the elliptical crack. The second term, S, accounts for the
surface energy contained in the system (γ denotes the specific surface energy; both quantities, U and
S, are per unit thickness of the plane). Note that the factor 4 in the expression for S results from the
fact that the crack has two surfaces of length 2a. Finally, in the case of plane stress, E' is equal to
Young's modulus E, whereas for plane strain conditions it becomes E/(1 − ν²), ν being Poisson's ratio.
The interaction between both energies, U and S, is shown in Fig. 30-2. The surface energy S increases
linearly with crack length parameter a, whereas the elastic energy U that is released is a quadratic
function in a. In other words, a small crack consumes more surface energy in order to grow by the
amount Δa than can be gained from elastic relaxation. Consequently, such a crack proves to be stable.
On the other hand, a large crack releases more elastic energy than necessary to compensate for the
creation of additional surface by growing by the amount Δa. Such cracks are unstable and lead to
catastrophic failure. Figure 30-2 shows the critical crack length 2a_c necessary to cause instability at
constant external stress σ. It can be computed by setting the first derivative of Eq. (30-1) with respect
to the crack length parameter a equal to zero:

dW/da = −2πσ²a/E' + 4γ = 0  (30-2)

which leads to

a_c = 2γE'/(πσ²)  (30-3)

Alternatively, the critical stress σ_c necessary to destabilize a crack of length 2a can be computed
from Eq. (30-2) and becomes

σ_c = √(2γE'/(πa))  (30-4)
The next section shows how these results can be generalized to arbitrary geometries and loading
conditions.
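The Griffith relations of Eqs. (30-3) and (30-4) are straightforward to evaluate numerically. The following sketch does so in Python; the material values in the example (surface energy γ = 1 J/m², E' = 70 GPa, roughly representative of glass) are illustrative assumptions, not data from this chapter:

```python
import math

def critical_crack_length(sigma, gamma, e_prime):
    """Eq. (30-3): half-length a_c at which a Griffith crack under the
    applied stress sigma becomes unstable."""
    return 2.0 * gamma * e_prime / (math.pi * sigma ** 2)

def critical_stress(a, gamma, e_prime):
    """Eq. (30-4): stress needed to destabilize a crack of half-length a."""
    return math.sqrt(2.0 * gamma * e_prime / (math.pi * a))

# Illustrative (assumed) values: gamma in J/m^2, E' in Pa, stress in Pa
gamma, e_prime = 1.0, 70e9
a_c = critical_crack_length(50e6, gamma, e_prime)  # about 18 micrometers
```

Note that the two relations are inverses of each other: the critical stress evaluated at the critical crack length returns the applied stress.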
In the vicinity of a crack tip, the stress field is completely characterized by the stress intensity factor
K_I. In polar coordinates (r, φ) centered at the crack tip, the in-plane normal stresses for Mode I loading
read

σ_xx = [K_I/√(2πr)] cos(φ/2) [1 − sin(φ/2) sin(3φ/2)]  (30-5)

σ_yy = [K_I/√(2πr)] cos(φ/2) [1 + sin(φ/2) sin(3φ/2)]  (30-6)
For all other geometries and loading conditions, the SIF becomes
K_I = Y(···) σ √(πa)  (30-7)
where Y(···) denotes a correction function. Lists of such correction functions are compiled in several
handbooks (e.g., Tada et al. [1985] or Murakami [1988]).
Consequently, the fracture criterion of Eq. (30-2) can be rewritten as

Y σ √(πa) ≥ √(2γE')  (30-8)

The right-hand side of this equation is obviously a measure for the resistance of a body to fracture.
It is commonly known as the fracture toughness K_Ic, and allows the fracture criterion of Eq. (30-2) to be
rewritten in the following concise form:

K_I ≥ K_Ic  (30-9)

which means that the body will break as soon as the stress intensity factor reaches its critical value
(fracture toughness).
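The fracture criterion of Eqs. (30-7) and (30-9) translates directly into a simple design check. The sketch below assumes Y = 1 (the Griffith crack); for real geometries the appropriate correction function from the cited handbooks must be supplied, and the K_Ic value used here is merely taken from the range quoted for alumina in Table 30-1:

```python
import math

def stress_intensity(sigma, a, y=1.0):
    """Eq. (30-7): K_I = Y * sigma * sqrt(pi * a); Y = 1 for the Griffith crack."""
    return y * sigma * math.sqrt(math.pi * a)

def breaks(sigma, a, k_ic, y=1.0):
    """Eq. (30-9): the body breaks as soon as K_I reaches K_Ic."""
    return stress_intensity(sigma, a, y) >= k_ic

# Example: alumina with an assumed K_Ic of 3.5 MPa*m^(1/2);
# a 0.2-mm crack loaded at 200 MPa turns out to be critical
critical = breaks(200e6, 2e-4, 3.5e6)
```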
and σ is the maximum tensile stress in three- and four-point bending arrangements:
e being the lever arm of a four-point bending specimen (typically e = 10 mm). Figure 30-5 shows a
typical four-point bending arrangement (Ramme, 1991).
To minimize the influence of subcritical crack growth, high loading rates (≈12 N/s; Ramme, 1991)
are used to perform the test. The depth of the crack, a, is typically around 1 mm. Precracking has been
suggested to guarantee a sharp crack (Fujii and Nose, 1989). However, it should be pointed out that
this procedure, similar to that used in K_Ic testing of metals, is not common practice for ceramics. The
width of the notch should not exceed 100 μm in order to obtain a reliable K_Ic value (Bertolotti, 1973;
Kriz, 1983). This can be achieved, with reasonable effort, using commercially available diamond saws.
Typically, between 10 and 20 bars should be used to account for the scatter within a batch of specimens.
Besides bending tests, the so-called indentation method was established as a technique for the de-
termination of the fracture toughness of brittle materials. A Vickers indenter is used to create an indent
with cracks originating at its corners (Fig. 30-6). The length of these cracks, c − a, is then related to
the fracture toughness by semiempirical formulas, such as the relation developed by Evans and Charles
(1976):
(30-13)
[Figure 30-5. Four-point bending arrangement (bar height B, width d, loads P/2, lever arm e).]
(30-14)
2a being the indenter width, and H_v being the Vickers hardness. Note that this test is prone to large
scatter and errors, owing to local inhomogeneities in the material. Consequently, it should be used as
an initial screening test rather than as a method for determining the actual fracture toughness of a brittle
material.

[Figure 30-6. Vickers indent with cracks emanating from its corners.]

Table 30-1 shows typical toughness data of commonly used brittle engineering materials (adapted
from Engineering Ceramics, 1990).

Table 30-1. Typical fracture toughness of brittle engineering materials

Material                   Fracture toughness (MPa·m^1/2)
Glass                      0.7-0.8
Alumina (Al2O3)            3-4
Zirconia (ZrO2)            5-8
Silicon carbide (SiC)      3-5
Silicon nitride (Si3N4)    5-8
G ≥ G_Ic  (30-15)

where the energy release rate G and its critical counterpart G_Ic have been introduced. For a crack
subjected to tensile loads normal to its plane, G can be written as the square of K_I divided by E'.
However, in addition to that, in-plane and out-of-plane shear can be applied, as indicated in Fig. 30-7.
To distinguish the three kinds of loading, subscripts I, II, and III are used in reference to the modes of
failure. It can be shown (Anderson, 1991) that the total energy release rate for a crack growing in its
own plane due to the combined action of all three loading modes is given by

G = K_I²/E' + K_II²/E' + K_III²/(2μ)  (30-16)

where μ denotes the shear modulus. In principle, this relation allows critical conditions
due to the interaction of all three modes to be predicted. The state of the art is far from doing this.
Only recently (Suresh et al., 1990) have first attempts been made to measure reliable values of the
fracture toughness of alumina with respect to Mode II and III loading conditions. According to this
work, K_IIc is comparable to the Mode I fracture toughness K_Ic, whereas for K_IIIc much higher values are
reported. Also, multiaxial fracture envelopes have been established that allow critical conditions to be
predicted for at least some alumina materials.
Because of the sparseness and uncertainty of data, no further attempt is made in this chapter to
present a unified theory of multiaxial fracture of brittle materials. For the time being, appropriate safety
factors seem to be the only way to account for multiaxial fracture in a brittle structure.
4. SUBCRITICAL CRACK GROWTH

As we have seen in the preceding sections, brittle materials can fail catastrophically as soon as a certain
critical load level or critical crack size has been reached. However, even specimens that, from a
macroscopic viewpoint, do not show any flaws and that are subjected to a tensile load much lower than
necessary to break them can and will fail after a certain time when they are subjected to a nonvacuum
environment. De facto no specimen is completely faultless. On a microscopic scale there will always
be small flaws, cracks, cavities, etc. Simplistically speaking, water vapor, or other chemical agents
present, will penetrate into the material and move toward the tip of such a microflaw subjected to stress.
By adsorption of the chemical, the surface energy of the material will be reduced, and the crack will
extend slowly over time until, finally, critical conditions are reached and the specimen breaks. This
phenomenon will even be enhanced and accelerated by the enormous stresses present in the neighbor-
hood of the crack (see Eq. [30-5]), which explains the technical term for the phenomenon: stress
corrosion cracking.
For a more detailed and thorough treatment of the atomistic interpretation of stress corrosion cracking
in brittle materials, the works by Charles (1961) and Wiederhorn (1968, 1978) should be consulted.
The following sections concentrate on how subcritical crack growth in brittle materials can be treated
mathematically from a phenomenological point of view. This discussion leads to a subcritical crack
growth law of the Paris type, which is governed by two material constants, the subcritical crack growth
parameters. Experimental methods for determining these constants are also presented.
Information regarding the physical interpretation of trimodal (v, K_I) curves can be found in Wiederhorn
(1968, 1978).
For the design engineer only region I is of immediate interest, because lifetimes associated with the
other two regions are much too short to be considered. Thus their influences are neglected completely
in all the following discussions. The behavior within region I is described by the following Paris-type
subcritical crack growth law:

v = da/dt = A K_I^n  (30-17)

Two new material constants are introduced, A and n, the subcritical crack growth parameters that
Figure 30-9. Subcritical crack growth in ceramic materials.
must be determined from experiments. The next two sections explore how n determines the lifetime of
a specimen made of brittle material subjected to static (i.e., constant applied stress σ) and dynamic (i.e.,
constant applied loading rate σ̇) loading conditions. As will be shown, it is not absolutely necessary to
use the aforementioned technique of double-torsion specimens in order to determine the n-value. Rather,
it is sufficient to measure strength data of at least two batches of unnotched bar specimens at different
loading rates, and analyze them statistically.
Subcritical crack growth due to cyclic loading is not discussed here in detail. Further information
with regard to this topic can be found in the articles by Evans and Fuller (1974) and Evans (1980).
According to experiments reported in these papers no evidence exists that cycling enhances the slow
crack growth observed in ceramic materials.
(30-19)
Now, by neglecting the second term (according to Eq. [30-11], the correction function Y is almost
constant until alB = 0.6), the following relation results:
(30-20)
(30-21)
Note that for other specimen configurations it may not be permissible to neglect the derivative of the
correction function, and numerical integration could become necessary to take this term into account.
Using the empirical Eq. (30-17) the integration can be performed:
(30-22)
As will be shown, the n-value of a brittle material is typically around 20 or more. Moreover, we have
(30-23)
(30-24)
where
C = 2/[Y^n A (n − 2)]  (30-25)
Note that in order to derive this formula use has been made of Eq. (30-10). We now apply Eq. (30-24)
to two specimens, q and r, made of the same material, assuming that they are subjected to different
loads σ_q and σ_r with the same initial crack length a_i. In this case, the constant C drops out and we
obtain

t_q/t_r = (σ_r/σ_q)^n  (30-26)

Thus, if σ_q > σ_r (for example), specimen r will live longer, and the duration of its lifetime relative to
that of specimen q is governed by the n-value of the material.
σ(t) = σ̇t  (30-27)
Figure 30-10. Static and dynamic loading.
(30-29)
and the result can be compared with Eq. (30-22) for the static lifetime. Hence, we conclude
t_c^dyn = (n + 1) t_c^st  (30-30)

This relation confirms, in a quantitative manner, our initial presumption that the dynamic lifetime
should be longer than the static one.
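Equations (30-26) and (30-30) are simple enough to evaluate directly. The sketch below computes the lifetime gain from a stress reduction and converts a static lifetime into the corresponding dynamic one; the numbers in the usage comment are illustrative only:

```python
def lifetime_ratio(sigma_q, sigma_r, n):
    """Eq. (30-26): t_q / t_r = (sigma_r / sigma_q) ** n for two specimens
    of the same material with the same initial crack length."""
    return (sigma_r / sigma_q) ** n

def dynamic_lifetime(t_static, n):
    """Eq. (30-30): a constant loading rate extends the lifetime by (n + 1)."""
    return (n + 1) * t_static

# With n = 20, lowering the applied stress by 10% extends the lifetime
# by a factor of 1.1 ** 20, i.e., roughly 6.7
gain = 1.0 / lifetime_ratio(1.1, 1.0, 20)
```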
(30-32)
where
C̄ = 2(n + 1) a_i^{(2−n)/2} / [Y^n A (n − 2)] = constant  (30-33)
Thus, the fracture strength values of two specimens, q and r, subjected to two different loading rates,
are related as follows:
(σ_q/σ_r)^{n+1} = σ̇_q/σ̇_r  (30-34)
As will be shown in the next two subsections, this relation is used to determine the n-value.
4.4.2. Increasing rank analysis. Consider two batches, 1 and 2, of N specimens each (N is typ-
ically on the order of 20 to 30). The two batches are fractured at two different loading rates, and the
corresponding strength data are recorded. Next, for each batch, the data points are ranked in increasing
order such that
σ_1^1 ≤ σ_2^1 ≤ ··· ≤ σ_i^1 ≤ σ_{i+1}^1 ≤ ··· ≤ σ_N^1  and  σ_1^2 ≤ σ_2^2 ≤ ··· ≤ σ_i^2 ≤ σ_{i+1}^2 ≤ ··· ≤ σ_N^2
Finally, corresponding strength data from both batches are correlated to form N pairs of strength
couples, which are plotted as a (ln σ^1) versus (ln σ^2) diagram, as shown in Fig. 30-11. Referring to Eq.
(30-35), this should result in a linear plot with slope of unity and an intercept that is a function of n.
In the example shown in Fig. 30-11, an n-value of 46 was obtained.
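The increasing rank analysis can be sketched in a few lines of Python. The estimator below recovers n from the mean vertical offset of the rank-paired log strengths, using the relation (σ_q/σ_r)^(n+1) = σ̇_q/σ̇_r of Eq. (30-34); it is a minimal illustration, not the full graphical procedure:

```python
import math

def n_from_rank_pairs(batch1, batch2, rate1, rate2):
    """Increasing rank analysis (Section 4.4.2), a minimal sketch.

    batch1 and batch2 are fracture strengths measured at loading rates
    rate1 and rate2; equal-rank pairs should plot with unit slope and an
    intercept governed by n (cf. Eq. 30-34)."""
    s1, s2 = sorted(batch1), sorted(batch2)
    if len(s1) != len(s2):
        raise ValueError("batches must contain the same number of specimens")
    # mean intercept of the (ln sigma_2, ln sigma_1) pairs
    offset = sum(math.log(a) - math.log(b) for a, b in zip(s1, s2)) / len(s1)
    # (sigma_1 / sigma_2) ** (n + 1) = rate1 / rate2
    return math.log(rate1 / rate2) / offset - 1.0
```

In practice one would also inspect the plot for the expected unit slope before trusting the estimate.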
4.4.3. Median fracture strength plot. Consider now several batches, i = 1, ... , m, which consist
of N_i specimens each (typically m ≥ 5, and N_i varies between 20 and 30 from batch to batch). Each
batch is fractured at a different loading rate, which, preferably, is increased in steps of a factor of 10. The strength
data for each batch are recorded and a mean value is computed according to
σ̄_i = (1/N_i) Σ_{j=1}^{N_i} σ_j^i,   i = 1, ..., m  (30-37)

where σ̄_i is the mean value of strength for batch i and σ_j^i equals the measured strength of specimen j
of batch i.
The logarithms of these mean values are now plotted as a function of the corresponding logarithms
of the loading rate. A comparison with Eq. (30-35) shows that the corresponding regression line should
be linear and its slope should be proportional to l/(n + 1). An example of this method is shown in
Fig. 30-12, resulting in an n-value of 56.
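The median fracture strength method reduces to an ordinary least-squares fit of ln σ̄_i against ln σ̇_i, whose slope equals 1/(n + 1). A minimal sketch:

```python
import math

def n_from_mean_strengths(mean_strengths, loading_rates):
    """Median fracture strength method (Section 4.4.3), a minimal sketch:
    fit ln(mean strength) versus ln(loading rate); slope = 1/(n + 1)."""
    xs = [math.log(r) for r in loading_rates]
    ys = [math.log(s) for s in mean_strengths]
    n_pts = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n_pts * sxy - sx * sy) / (n_pts * sxx - sx * sx)
    return 1.0 / slope - 1.0
```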
Figure 30-11. Increasing rank plot (material: Al2O3; strength of batch 2 versus strength of batch 1, in MPa).
Note that when properly performed, both methods, the increasing rank analysis as well as the method
of median fracture strengths, should lead to the same results.
The two methods used for experimental determination of the subcritical crack growth parameter n (see
Sections 4.4.2 and 4.4.3) indicate that the strength σ within a batch of specimens made of a brittle
material varies randomly. This is not surprising, because a batch contains specimens with flaws of
different size and severity. Consequently, there is considerable scatter in the strength values.
As was pointed out before, the final goal is to arrive at a lifetime prediction for components made
of brittle materials. Because lifetime is governed by the strength of the material, and because the strength
data of brittle materials show random scatter, an appropriate statistical approach is necessary to combine
both aspects. A decisive step in this direction is the Weibull analysis, which is explained in the sub-
sequent sections.
[Figure 30-12. Median fracture strength plot (material: Al2O3; n = 56; mean strength in MPa versus loading rate).]
Figure 30-13. Fracture strength histogram.
Experience shows that the (σ, P) strength data of brittle materials can be fitted reasonably well using
a two-parameter Weibull function:

P = 1 − exp[−(σ/σ_0)^m]  (30-38)

where m is the Weibull modulus and σ_0 the scale parameter.
[Figure 30-14. Cumulative distribution of strength (material: Si3N4; percent failed versus strength in MPa).]
To fit the two parameters, either the method of least squares or the maximum likelihood method can
be used. For the least-squares method the strength data are ranked in increasing order, a failure probability

P_i = (i − 0.3)/(N + 0.4)

is assigned to the i-th value, and the transformation

Y_i = ln ln[1/(1 − P_i)],   X_i = ln σ_i,   A = −m ln σ_0

turns Eq. (30-38) into a straight line Y = mX + A. The Weibull modulus m and the intercept A then
follow from linear regression:

m = [N Σ_{i=1}^{N} X_i Y_i − Σ_{i=1}^{N} X_i Σ_{i=1}^{N} Y_i] / [N Σ_{i=1}^{N} X_i² − (Σ_{i=1}^{N} X_i)²]

A = [Σ_{i=1}^{N} Y_i Σ_{i=1}^{N} X_i² − Σ_{i=1}^{N} X_i Σ_{i=1}^{N} X_i Y_i] / [N Σ_{i=1}^{N} X_i² − (Σ_{i=1}^{N} X_i)²]  (30-39)
The latter method, however, leads to an implicit equation that, in general, must be solved numerically
for m and σ_0:

N/m + Σ_{i=1}^{N} ln σ_i − N [Σ_{i=1}^{N} σ_i^m ln σ_i] / [Σ_{i=1}^{N} σ_i^m] = 0,
σ_0 = [(1/N) Σ_{i=1}^{N} σ_i^m]^{1/m}  (30-40)
From a practical standpoint, it is worth mentioning that the maximum likelihood method tends to
reduce the influence of outliers among the data on the final results, whereas the method of least squares
takes all data into account equally. For a more complete discussion regarding the advantages and dis-
advantages of both methods, in particular from a mathematical point of view, see Pai and Gyekenyesi
(1988).
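The least-squares recipe of Eq. (30-39) is easily implemented; the sketch below returns the Weibull modulus m and the scale parameter σ_0 from a list of measured strengths:

```python
import math

def weibull_least_squares(strengths):
    """Least-squares Weibull fit (Eq. 30-39) with the plotting positions
    P_i = (i - 0.3) / (N + 0.4); returns (m, sigma_0)."""
    data = sorted(strengths)
    n_pts = len(data)
    xs, ys = [], []
    for i, s in enumerate(data, start=1):
        p = (i - 0.3) / (n_pts + 0.4)
        xs.append(math.log(s))
        ys.append(math.log(math.log(1.0 / (1.0 - p))))
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n_pts * sxy - sx * sy) / (n_pts * sxx - sx * sx)
    a = (sy * sxx - sx * sxy) / (n_pts * sxx - sx * sx)  # A = -m ln sigma_0
    return m, math.exp(-a / m)
```

A maximum likelihood variant would instead solve the implicit Eq. (30-40) numerically, e.g., by bisection in m.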
Finally, note that a three-parameter Weibull distribution is sometimes used to fit strength data arising
from simple bending experiments:
P = 1 − exp{−[(σ − σ_u)/σ_0]^m}   for σ > σ_u
P = 0                             for σ ≤ σ_u  (30-41)

where σ_u is a threshold stress below which failure does not occur.
1. Statistical homogeneity and isotropy of the material: On average, the strength of a specimen part does not
depend on its position within the specimen.
2. Statistical independence: The probabilities of failure of subvolumina and subsurfaces of a specimen are to
be multiplied in order to obtain the probability of failure of the whole specimen.
3. The weakest link concept: The weakest part (subvolume/subsurface) of a specimen determines its strength.
On the basis of these principles, the probability of failure for a body subjected to a spatially varying,
but uniaxial, state of stress can be written as

P = 1 − exp{−(1/V_0) ∫_V [σ(x, y, z)/σ_0]^m dx dy dz}  (30-42)

for specimens governed by volumetric flaws, and as

P = 1 − exp{−(1/A_0) ∫_A [σ(x, y)/σ_0]^m dx dy}  (30-43)

for specimens governed by surface flaws.
a PB, point bending; c.s., cross-section; l, length of lower support; l_1, length of upper support; l_2 = l − l_1; for
other symbols, see text.
Source: Adapted in part from Nadler (1972, 1974, 1989).
P = 1 − exp{−(V/V_0)(σ/σ_0)^m ∫_{s_u}^{1} s^m f(s) ds}  (30-44)

for volume-flaw-governed specimens, and

P = 1 − exp{−(A/A_0)(σ/σ_0)^m ∫_{s_u}^{1} s^m g(s) ds}  (30-45)
for surface-flaw-governed specimens. In these equations, σ denotes the maximum tensile stress in the
uniaxially stressed specimen, and s_u = S_u/σ, where S_u is the smallest tensile stress in the system (which
may or may not be equal to zero). The stress-density functions, f(s) and g(s), for specimens governed
by volumetric and surface flaws, respectively, can be determined from the following equations:
∫_s^1 f(s̃) ds̃ = V(s)/V   ⟹   f(s) = −(1/V) dV(s)/ds  (30-46)

∫_s^1 g(s̃) ds̃ = A(s)/A   ⟹   g(s) = −(1/A) dA(s)/ds
The parameter s, introduced by Nadler (1989), is given by s = S/σ, where S denotes any tension
smaller than the maximum tensile stress occurring in the system. Consequently, V(s) and A(s) denote
the volume and surface of the specimen subjected to a stress of S or more. A list of volumetric and
surface-flaw-governed stress-density functions for commonly used engineering specimens can be found
in Tables 30-2 and 30-3.
The advantage of Nadler's method of stress density functions is evident; instead of having to solve
complicated three-dimensional integrals, it now becomes necessary only to evaluate one single integral,
which can often be done by using elementary integration schemes.
Two examples shall help to understand the use of stress-density functions better. Consider first the
three-point bending specimen of rectangular cross-section as shown in Fig. 30-4 (without crack). The
only nonvanishing stress component is given (Timoshenko and Goodier, 1951) by
(30-47)
where (x, y) refer to a rectangular coordinate system, situated in the center of the neutral axis of the
specimen (x coincides with points of the neutral axis).
Thus parameter s is given by
(30-48)
V(s) = 2d ∫_0^{(l/2)(1−s)} [B/2 + y(x)] dx = (Bld/2)(1 − s + s ln s)  (30-49)

Because the total volume V is given by Bld, the stress-density function turns out to be

f(s) = −(d/ds)[(1 − s + s ln s)/2] = −(ln s)/2  (30-50)
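The closed form f(s) = −(ln s)/2 makes the integrals appearing in Eqs. (30-44) and (30-45) easy to check: for the three-point bending bar, ∫_0^1 s^m (−½ ln s) ds = 1/[2(m + 1)²]. The sketch below evaluates such stress-density integrals with a simple midpoint rule, which also works for stress-density functions without a closed-form integral:

```python
import math

def stress_density_integral(f, m, s_u=0.0, steps=100000):
    """Midpoint-rule evaluation of the integral of s**m * f(s) over
    [s_u, 1], as required in Eqs. (30-44) and (30-45)."""
    h = (1.0 - s_u) / steps
    total = 0.0
    for k in range(steps):
        s = s_u + (k + 0.5) * h
        total += s ** m * f(s)
    return total * h

def f_three_point_bending(s):
    """Eq. (30-50): stress-density function of the three-point bending bar."""
    return -0.5 * math.log(s)
```

For m = 10 the numerical value agrees with the closed form 1/242 to many digits.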
Second, consider the ceramic annulus of Fig. 30-15, which is preshrunk onto a circular rod made of
another material. The misfit results in stresses in the ceramic material, which are given (Müller and
Schmauder, 1993) by
(30-51)
where ε is given by

(30-52)
in which t is the misfit (in percent), and E_1, E_2 and ν_1, ν_2 are Young's modulus and Poisson's ratio of
the annulus and the rod, respectively. Moreover, r_i and r_o characterize the inner and outer radius of the
annulus, respectively, and r is any radial point in between them.
Note that the radial stresses σ_rr are compressive in nature, whereas the angular stresses σ_θθ act in a
tensile mode with the stress maximum being at the inner surface of the annulus. Consequently, the
parameters s and s_u become
s = [ε/(1 + ε)](1 + r_o²/r²),   s_u = 2ε/(1 + ε)  (30-53)
Finally, it should be mentioned that, strictly speaking, in the case of the annulus the stress distribution
is no longer uniaxial; there are three principal stresses, the first one (σ_rr) being compressive, the second one
(σ_θθ) tensile, and the third (σ_zz) equal to zero (if plane stress conditions are assumed). However, this approach
is consistent with Weibull's treatment of failure under multiaxial stress conditions, as shown in Section 5.3.
5.2.3. Portability of Weibull parameters using stress-density functions. Equations (30-42) and
(30-43) or Eqs. (30-44) and (30-45) bring up the problem of how to compute the strength distribution
for a specimen configuration B provided that the parameters m and σ_0 are known from measurements
using a different configuration, A.
To answer this question, we compare Eqs. (30-38) and (30-44) and identify the scale parameter of
A-type specimens, σ_0^A, as

σ_0^A = σ_0 [V_0 / (V_A ∫_{s_u}^{1} s^m f_A(s) ds)]^{1/m}  (30-56)
Consequently, the probability of failure for B-type specimens, P_B, can be expressed as

P_B = 1 − exp{−(V_B/V_A) [∫_{s_u}^{1} s^m f_B(s) ds / ∫_{s_u}^{1} s^m f_A(s) ds] (σ/σ_0^A)^m}  (30-57)
Thus, if the Weibull modulus m and the scale parameter σ_0^A are known from experiments using A-
type specimens, the Weibull distribution for another B-type specimen can be computed from Eq. (30-
57), provided that the B-type specimen is subjected to uniaxial stress, that the stress density functions
of both specimens are known, and that the fabrication process of making A- and B-type specimens is
the same. If the third condition is not satisfied (e.g., if not only volumetric but also surface flaws affect
the probability of failure) then the arguments leading to Eq. (30-57) must be repeated using Eq. (30-
45) as a starting point instead of Eq. (30-44).
Using the definition for the mean value of strength σ̄,

σ̄ = ∫_0^∞ σ (∂P/∂σ) dσ  (30-58)
the following expression can be established, linking the mean values of strength, σ̄_A and σ̄_B, of both
specimen configurations:

σ̄_B/σ̄_A = [V_A ∫_{s_u}^{1} s^m f_A(s) ds / (V_B ∫_{s_u}^{1} s^m f_B(s) ds)]^{1/m}  (30-59)
Note that, by replacing the volumetric stress-density functions f_A(s) and f_B(s) with g_A(s) and g_B(s),
respectively, the corresponding relation for specimens governed by surface flaws can be established.
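Equation (30-59) quantifies the familiar size effect: the configuration with the larger highly stressed volume is the weaker one. A minimal sketch, where the integrals I_A and I_B over s^m f(s) may be supplied analytically or numerically:

```python
def mean_strength_ratio(v_a, i_a, v_b, i_b, m):
    """Eq. (30-59): mean strength of configuration B relative to A,
    sigma_B / sigma_A = [(V_A * I_A) / (V_B * I_B)] ** (1 / m),
    where I denotes the integral of s**m * f(s) ds for each configuration."""
    return ((v_a * i_a) / (v_b * i_b)) ** (1.0 / m)

# Example: with identical stress distributions, a tenfold volume lowers
# the mean strength by the factor 10 ** (-1/m)
```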
Weibull's proposal for multiaxial states of stress is to insert the three principal stresses separately:

P = 1 − exp{−(1/V_0) ∫_V [σ_I^m + σ_II^m + σ_III^m]/σ_0^m dx dy dz}  (30-60)
However, this equation clearly neglects the interaction between the principal stress components. Thus,
it might lead to unconservative predictions.
Various other attempts have been made to improve the situation. Of special merit besides Weibull's
original work are the flaw density and orientation concept developed by Batdorf and Crose (1974) and
Batdorf and Heinisch (1978), the elemental strength approach of Evans (1978), and the hypothesis of
positive principal strains developed by Beierlein (1988). A critical comparison of the predictions of
various multiaxial statistical fracture theories for brittle materials has been published by Chao and Shetty
(1990). Also compare the articles (and references cited therein) by Manderscheid and Gyekenyesi (1988)
and Nemeth et al. (1989). No further attempt is made here to present a unified statistical theory of
multiaxial failure. This field of reliability analysis of brittle materials is still the subject of ongoing
research. Experimental verification of the predictions of the various theories is sparse, and again the
reader is referred to the aforementioned and other current literature.
Note that, because of the complexity of multiaxial problems, the stresses in Eq. (30-60) will usually
no longer be computed from analytical equations. Rather, stresses will be determined using suitable
numerical methods, such as finite elements (Nemeth et al., 1989).
6. LIFETIME PREDICTIONS
The techniques and results of Sections 3, 4, and 5 are used in this section to construct lifetime diagrams
for components made of brittle materials. The diagram combines maximum applied static load S, prob-
ability of failure P, and time to failure T, and is also known as SPT representation in the literature.
Figure 30-16. Weibull distributions (experiment vs. theory) of various ceramics (part I): material ZrO2; percent failed versus strength (MPa), including the line for four-point bending predicted from three-point bending tests.
Figure 30-17. Weibull distributions (experiment vs. theory) of various ceramics (part II): material SiC; percent failed versus strength (MPa).
where σ̇ is the loading rate used during the bending tests that were used to determine the two Weibull
parameters, m and σ_0^A. Thus, Eq. (30-61) can be rewritten as follows:

ln ln[1/(1 − P)] = ln(η_B/η_A) + m ln σ − m ln σ_0^A  (30-64)

where η_A = V_A ∫_{s_u}^{1} s^m f_A(s) ds and η_B = V_B ∫_{s_u}^{1} s^m f_B(s) ds. Solving Eq. (30-64) for
ln σ and inserting the result into the lifetime relation yields

ln T = −n ln S + [(n + 1)/m]{ln ln[1/(1 − P)] − ln(η_B/η_A)} + ln[(σ_0^A)^{n+1}/((n + 1)σ̇)]  (30-65)

This equation allows prediction of the lifetime of a specimen of configuration B, provided that the
material parameters m and σ_0^A are known from experiments using a simple, inexpensive configuration
A.

Note that a similar equation can be derived for surface-flaw-governed specimens by using the stress-
density functions g_A(s) and g_B(s) instead of f_A(s) and f_B(s), respectively.
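For a numerical SPT evaluation, the lifetime can be computed point by point over a grid of (S, P) values. The sketch below assumes an SPT relation of the form ln T = −n ln S + [(n + 1)/m]{ln ln[1/(1 − P)] − ln(η_B/η_A)} + ln[(σ_0^A)^(n+1)/((n + 1)σ̇)]; the exact grouping of constants should be verified against Eq. (30-65) before use:

```python
import math

def spt_lifetime(s_static, p_fail, m, n, sigma0_a, rate, eta_ratio=1.0):
    """Lifetime T for static stress s_static at failure probability p_fail
    (cf. Eq. 30-65). eta_ratio = eta_B / eta_A carries the component
    geometry via the stress-density integrals; sigma0_a and rate stem
    from the simple configuration-A bending tests."""
    ln_t = (-n * math.log(s_static)
            + (n + 1.0) / m * (math.log(math.log(1.0 / (1.0 - p_fail)))
                               - math.log(eta_ratio))
            + math.log(sigma0_a ** (n + 1.0) / ((n + 1.0) * rate)))
    return math.exp(ln_t)
```

Halving the static stress multiplies the predicted lifetime by 2^n, consistent with Eq. (30-26).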
7. COMPUTER SOFTWARE
The methods presented in the preceding sections require elementary but tedious data processing and
evaluation. To guarantee an accurate and quick analysis, such work is preferably performed using
suitable computer software.
The mainframe program CARES (an acronym for ceramics analysis and reliability evaluation of
structures) is an excellent example of such software. Developed at the National Aeronautics and Space
Administration Lewis Research Center (Cleveland, Ohio), it uses Weibull and Batdorf fracture statistics
to predict the reliability of isotropic ceramic components. Finite element analysis output from such
programs as NASTRAN or ANSYS is used to provide the stress distribution within the structure. An
outline of the capabilities of CARES can be found in the paper by Nemeth et al. (1989).
An alternative is the IBM PC-AT-based computer program CERAMTEST developed by Müller
(1991). CERAMTEST is a user-friendly, menu-driven, graphics-oriented program that allows complete
analysis of up to 50 specimens per batch, which are subjected to uniaxial loading. The analysis is based
on the methods outlined in the previous sections. Recently, the stress-density function concept has been
implemented (see Tables 30-2 and 30-3 for a list of specimen geometries and loading conditions), which,
by using material data obtained for a specimen configuration A, allows the user to predict the theoretical
lifetime of another configuration B. Multiaxial stress situations, however, cannot be handled at present.
8. SUMMARY

An outline of the state-of-the-art techniques used to assess the quality and lifetime of ceramic materials
and products has been presented. Three major steps that form the essential core of the analysis procedure
can be clearly distinguished:

1. Determination of suitable material data from comparatively simple experiments.
2. Stress analysis of the component during use.
3. Statistical treatment of the scatter of the data.
The final result of this procedure is a lifetime diagram for the ceramic structure that combines lifetime,
probability of failure, and maximal applied (static) stress (see Section 6.2).
Although the lifetime and reliability of simple specimen geometries and loading conditions (bars and rods) can currently be predicted accurately, more complex structures, such as monolithic turbine wheels or rotors, cannot yet be handled efficiently in their full complexity. The experimental and theoretical ingredients necessary for their analysis are the subject of current research and development programs. The same is true for ceramic composite materials.
REFERENCES
ANDERSON, T. L. (1991). Fracture Mechanics: Fundamentals and Applications. Boca Raton, Florida: CRC Press.
BATDORF, S. B., and J. G. CROSE (1974). A statistical theory for the fracture of brittle structures subjected to
nonuniform polyaxial stresses. Journal of Applied Mechanics 41:459-465.
BATDORF, S. B., and H. L. HEINISCH, JR. (1978). Weakest link theory reformulated for arbitrary fracture criterion.
Journal of the American Ceramic Society 61(7-8):355-358.
BEIERLEIN, G. (1988). Festigkeitsverhalten keramischer Werkstoffe unter mehrachsiger mechanischer Beanspruchung. Ph.D. Thesis. Zwickau, Germany: Ingenieurhochschule Zwickau.
BERTOLOTTI, R. L. (1973). Fracture toughness of polycrystalline Al2O3. Journal of the American Ceramic Society 56:107.
BORNHAUSER, A. C. (1983). Direkte Beobachtung der unterkritischen Rißausbreitung in keramischen Werkstoffen bei hohen Temperaturen. Ph.D. Thesis. Stuttgart, Germany: Universität Stuttgart.
CHARLES, R. J. (1961). A review of glass strength. In: Progress in Ceramic Science. J. E. Burke, Ed. Oxford,
England: Pergamon Press, pp. 1-38.
CHAO, L. Y., and D. K. SHETTY (1990). Equivalence of physically based statistical fracture theories for reliability analysis of ceramics in multiaxial loading. Journal of the American Ceramic Society 73(7):1917-1921.
Engineering Ceramics (1990). Think Ceramics. Prospectus from Kyocera, Kyoto, Japan.
EVANS, A. G. (1972). A method for evaluating the time-dependent failure characteristics of brittle materials and
its application to polycrystalline alumina. Journal of Material Science 7(10):1137-1146.
EVANS, A. G., and E. A. CHARLES (1976). Fracture toughness determinations by indentations. Journal of the
American Ceramic Society 59(7-8):371-372.
EVANS, A. G. (1978). A general approach for the statistical analysis of multiaxial fracture. Journal of the American
Ceramic Society 61(7-8):302-308.
EVANS, A. G. (1980). Fatigue in ceramics. International Journal of Fracture 16(6):485-498.
EVANS, A. G., and E. R. FULLER (1974). Crack propagation in ceramic materials under cyclic loading conditions.
Metallurgical Transactions 5:27-33.
EVANS, A. G., and S. M. WIEDERHORN (1974). Crack propagation and failure prediction in silicon nitride at elevated
temperatures. Journal of Material Science 9(2):270-278.
734 Applications in Ceramic Structures
FUJII, T., and T. NOSE (1989). Evaluation of fracture toughness for ceramic materials. ISIJ International 29:717-725.
GRIFFITH, A. A. (1920). The phenomena of rupture and flow in solids. Philosophical Transactions of the Royal Society of London, Series A 221:163-198.
HAUG, T. H. (1985). Der Einfluß einer Glasphase in einer Al2O3-Keramik auf die langsame Rißausbreitung bei Raumtemperatur und im Hochtemperaturbereich. Ph.D. Thesis. Stuttgart, Germany: Universität Stuttgart.
IRWIN, G. R. (1957). Analysis of stresses and strains near the end of a crack traversing a plate. Journal of Applied
Mechanics 24:361-364.
KRIZ, K. (1983). Einfluß der Mikrostruktur auf die langsame Rißausbreitung und mechanische Eigenschaften von heißgepreßtem Siliziumnitrid zwischen Raumtemperatur und 1500°C. Ph.D. Thesis. Erlangen, Germany: Universität Erlangen.
LAWN, B. R., A. G. EVANS, and D. B. MARSHALL (1980). The median/radial crack system. Journal of the American Ceramic Society 63(9-10):574-581.
MANDERSCHEID, J. M., and J. P. GYEKENYESI (1988). Fracture Mechanics Concepts in Reliability Analysis of
Monolithic Ceramics. Report No. E-3743. Washington, D.C.: National Aeronautics and Space
Administration.
MÜLLER, W. H. (1991). PC-Ceramtest: Ein Softwareprogramm für die keramische Werkstoffprüfung. Fortschrittsberichte der Deutschen Keramischen Gesellschaft (Journal).
MÜLLER, W. H., and S. SCHMAUDER (1993). Interface stresses in fiber-reinforced materials with regular fiber arrangements. Composite Structures 24(1):1-21.
MURAKAMI, Y., ED. (1988). Stress Intensity Factors Handbook. Oxford, England: Pergamon Press.
NADLER, P. (1972). Die Anwendung der statistischen Festigkeitstheorie in der keramischen Werkstoffprüfung, Teil 1. Hermsdorfer Technische Mitteilungen 33:1031-1038.
NADLER, P. (1974). Die Anwendung der statistischen Festigkeitstheorie in der keramischen Werkstoffprüfung, Teil 2. Hermsdorfer Technische Mitteilungen 40:1262-1271.
NADLER, P. (1989). Beitrag zur Charakterisierung und Berücksichtigung des spezifisch keramischen Festigkeitsverhaltens. Ph.D. Thesis. Freiberg, Germany: Bergakademie Freiberg.
NEMETH, N. N., J. M. MANDERSCHEID, and J. P. GYEKENYESI (1989). Designing ceramic components with the
CARES computer program. Ceramic Bulletin 68(12):2064-2072.
PAI, S. S., and J. P. GYEKENYESI (1988). Calculation of Weibull Strength Parameters and Batdorf Flaw-Density
Constants for Volume- and Surface Flaw-Induced Fracture in Ceramics. Report No. E-4128. Washington,
D.C.: National Aeronautics and Space Administration.
RAMME, R. (1991). Unpublished research.
SIH, G. C., and H. LIEBOWITZ (1967). On the Griffith energy criterion for brittle fracture. International Journal of
Solids and Structures 3:1-22.
SURESH, S., C. F. SHIH, A. MORRONE, and N. P. O'DOWD (1990). Mixed-mode fracture toughness of ceramic
materials. Journal of the American Ceramic Society 73(5):1257-1267.
TADA, H., P. C. PARIS, and G. R. IRWIN (1985). The Stress Analysis of Cracks Handbook. St. Louis, Missouri:
Paris Productions, Inc.
TIMOSHENKO, S., and J. N. GOODIER (1951). Theory of Elasticity. New York: McGraw-Hill.
TRADINIK, W. (1980). Aspekte der modernen Bruchstatistik. Hausarbeit aus Physik. Institut für Festkörperphysik der Universität Wien und Max-Planck-Institut für Metallforschung, Institut für Werkstoffwissenschaften, Stuttgart, Germany.
VOGEL, W. D. (1987). Einfluß der Mikrostruktur auf die Rißausbreitung in teilstabilisierten ZrO2-Werkstoffen bis zu hohen Temperaturen. Ph.D. Thesis. Stuttgart, Germany: Universität Stuttgart.
INDEX

B
Basic event prioritization (ranking), 194 (see also Component prioritization)
Bayesian updating, 623, 625, 677
Beta-metric (β-metric), 337

C
Ceramic structures, 6, 707
  basic concepts, 709
  computer software, 732
  fracture behavior, 709
Earthquakes (continued)
  post-earthquake investigations, 455
  probable maximum loss, 458
Edgeworth's series approximation, 547
Engineering systems (see Systems, engineering)
Error (see Human error)
Event tree analysis, 196, 388, 447, 537, 646
Existing structures, 6, 181, 182, 185, 630, 677
Expert opinion, 3, 198, 261, 392, 393, 395, 407, 458
  aggregation, 269
  biases:
    cognitive, 263
    motivational, 263
  Delphi method, 273
  display:
    box and whisker display, 270
    histogram, 270
    line, box and circle display, 270
    range display, 270
Extreme value statistics, 421, 425, 473
Extreme-wind risk assessment, 4, 465
  atmospheric pressure change (APC), 466, 495, 497, 499
  bridges, 488
  consequence analysis, 497, 498
  cyclones:
    extratropical, 471, 473, 484
    tropical (see hurricanes)
  data, 469, 471, 473, 474, 476, 477, 480, 483
  design codes, 486, 488
  downbursts, 482
  extreme value analysis, 473
  fragility analysis, 497
  hazard analysis, 469, 497
  hazard curve, 469, 482
  hurricanes, 198, 199, 202, 469, 477, 484
  load, wind-induced, 484
  missile, wind-propelled, 489, 498
  nuclear power plants, 489, 499
  plant risk assessment (see probabilistic risk assessment)
  probabilistic risk assessment, 497
    computer software, 499
  response (see structural response)
  structural reliability analysis, 497
  structural response, 484
  system reliability analysis, 498
  thunderstorms, 476, 484
  tornadoes, 198, 199, 202, 479, 484, 489, 490, 496
  uncertainties, 465, 469, 498
  windspeed map, 481

F
Factor of safety (see Safety factor)
Failure:
  causes of, 215
  criterion, 9
  data, 198, 212, 396
  domain, 10, 125
  function, 11
  hypersurface, 125
  mode, 190 (see also Failure mode analysis and Failure modes and effects analysis)
  surface, 30, 57
Failure mode analysis (of structural systems), 168, 175
Failure modes, effects and criticality analysis, 393, 406 (see also Failure modes and effects analysis)
Failure modes and effects analysis, 190 (see also Failure modes, effects and criticality analysis)
Fast probability integration, 71, 157
Fatigue, 8, 23, 71, 91, 97, 112, 117, 136, 418, 559, 568, 571
  fracture mechanics approach, 148, 151 (see also Fracture mechanics)
  material properties data, 153
  probabilistic analysis (see Probabilistic fatigue analysis)
  reliability (see probabilistic analysis)
  safety check expression, 158
  S-N approach, 149
  statistical analysis of data, 152
  strain life model, 150
  under random stresses, 154
Fault tree, 131, 388, 448, 646
  analysis, 192
    qualitative, 192
    quantitative, 194
    of structural systems, 178
  construction, 192
Finite element method:
  probabilistic (see Probabilistic finite element method)
  stochastic (see probabilistic)
First order reliability method (FORM), 1, 2, 27
  applications, 71, 72, 85, 96, 157, 357, 545, 554, 609, 613, 614, 615, 626, 633, 692-697, 700
  computer software, 48
  correlated variables, 43
  methodology, 38
Flaw size distribution (see Crack size distribution)
Fossil power plants, 137, 204, 391, 411, 551, 554
Fracture index, 402
Fracture mechanics, 8, 23, 71, 87, 418
  basic concepts, 109, 709
L
Life:
  cumulative distribution function, 426, 642
  definition, 420, 422
  extension, 567, 630, 634, 646, 677 (see also prediction)
  prediction, 4, 98, 136, 138, 390, 391, 416, 567, 604, 730 (see also extension)
    cumulative value modeling, 418, 427
    extreme value modeling, 418, 421
    stochastic process modeling, 418
  probability density function, 418
Limit state function, 30, 156, 160, 621, 623, 626, 694 (see also Performance function)
  implicit, 160
Limit states, 10, 57, 585
  discontinuous, 597, 613
  multiple, 613
Limit states method (see Load and resistance factor design)
Linguistic probability (see Fuzzy probability)
Liquefaction, 24
Load-capacity interference method (see Stress-strength interference method)
Load combination, 522, 584, 612, 652, 671
Load factor, 333, 344, 671
Load sharing factor, 701
Load and resistance factor design (LRFD), 6, 333 (see also Design codes)
  bridges, 180, 340, 647
  concrete structures, 677
  offshore structures, 180, 631
  steel structures, 650, 652, 656, 658, 659, 661
  timber structures, 691-695, 698
Logic tree (see Decision tree)
Lognormal format (for reliability analysis), 156, 659
Loss of coolant accident (LOCA), 541, 548

M
Maintenance, 405, 420, 527
Marine structures (see Offshore structures and Ships)
Markov chain, 131, 687, 698

N
Naval vessels (see Ships)
Neuman expansion, 46, 71
Neural network, 3, 317
Nondestructive examination, 3, 112, 238
  acoustic emission, 240, 390
  aircraft structures, 247, 253
  distance amplitude correction (DAC) approach, 249
  eddy currents, 240, 249, 252, 253
  fatigue cracks, 244, 249, 251, 252
  human factors, 245
  liquid penetrant, 239, 247, 248
  magnetic particle, 240, 247, 248, 390
  multiple examinations, 245
  nuclear power plants, 238, 250
  piping, 247, 248, 250-252
  PISC, 249-251
  pressure boundary, 249
  pressure vessels, 247, 249-251
  probability of correct disposition, 242
  probability of detection, 112, 121, 141, 241-254, 624, 628
  probability of nondetection, 136, 241
  radiographic, 240, 247, 248
  relative operating characteristic, 241
  reliability, 238, 395, 408
    data sources, 247
  sampling plan, 256
  sizing accuracy, 241, 249-254
  steam generator tubing, 252, 253
  stress corrosion cracks, 244, 249, 250, 252, 253
  structural reliability, effect on, 254, 257
  ultrasonic, 240, 243, 248-253, 390
  visual, 239
Nuclear power plants, 5, 23, 24, 141, 201, 202, 204, 268, 274, 275, 510
  earthquakes, 436, 510, 514, 516, 520-522
  extreme-wind, 489, 499
  in-service inspection, 391, 400, 401, 404, 405, 406
  nondestructive examination, 238, 250
  piping, 548
  pressure vessels, 536
  wind (see extreme-wind)
Probabilistic finite element method (continued)
  probabilistic response analysis, 76, 77
  reliability analysis, 85
  second order reliability method, 71
  variational principle (see Hu-Washizu variational principle)
Probabilistic fracture mechanics, 2, 23, 106, 202, 203, 390, 427
  analysis techniques, 121
  applications:
    boiler headers, 137
    bridges, 646
    ceramic structures (see Ceramic structures)
    offshore structures, 615, 621
    piping, 534
    pressure vessels, 534
    ships, 134
  basic concepts, 114-121
  computer software, 131, 541, 548, 550
  data, 115, 133
  life prediction, 427, 730
Probabilistic response analysis:
  linear, 76
  nonlinear, 77
Probabilistic risk assessment (PRA), 275, 388, 391, 395, 511, 513, 534, 537, 548
  external event, 202, 511
    extreme-wind, 497
    seismic (see Seismic, risk assessment)
  internal event, 201, 202, 511
Process plants:
  piping, 547, 551
  pressure vessels, 544
  seismic risk, 452

Q
Quality assurance (see Quality control)
Quality control, 181
Quantitative risk assessment, 188n (see also Probabilistic risk assessment)
Quartile, 270

R
Rackwitz algorithm, 36
Rackwitz-Fiessler method, 38, 86, 97, 124, 126, 127, 128, 129, 131, 157, 342, 673, 700
Random field, 48, 73-76
Random numbers, uniform, generation, 55
Random variables generation (see also Random numbers):
  acceptance-rejection method, 57
  inverse transformation method, 56
Ranking (see Prioritization)
Redundancy, 2, 188, 421, 645
  factor, 379
Reliability:
  after accidents (see of damaged structures)
  of damaged structures, 360, 374 (see also residual)
  of deteriorated structures, 185
  reserve, 183, 360
  residual, 2, 183, 360
Reliability block diagram, 190
Reliability index (safety index), 31, 33, 86, 125, 158, 159, 180, 182, 334, 336, 337, 342, 343, 357, 519, 628, 629, 645, 658, 659, 661, 674, 676, 677, 693, 696, 699, 700 (see also Beta-metric)
Reliability models:
  levels of, 609
Repair, 121, 134, 141, 257, 627, 628
Reserve reliability, 183, 360
Residual reliability, 2, 183, 360
Resistance factor, 334, 343, 344, 659, 671, 691
Resources allocation, 628 (see also Cost-benefit analysis and Prioritization)
Risk, 191, 194
  assessment (see Plant, risk assessment and Probabilistic risk assessment)
  definition, 393
  in society, 212
Rosenblatt transformation, 45, 125, 126

S
Safety factor, 4, 23, 29, 334, 642
Safety index (see Reliability index)
Sea loads, 578, 616
Second moment methods, 1, 31, 32, 40, 67, 157, 225, 585, 587, 594, 597, 609
Second order reliability method (SORM), 1, 2, 27, 31, 38
  applications, 71, 85, 157, 160, 609, 613, 614, 615
  computer software, 48
  correlated variables, 43
  methodology, 40
Seismic (see also Earthquakes):
  fragility analysis, 431, 434, 436, 455
    data sources, 445
    equipment, 442
    structures, 440
  fragility curve, 435, 439, 455