An Introduction To Reliability Analysis
Vincent DENOEL
January 2007
The writing of this document and the development of the illustrations were made possible thanks to Prof. H. KUSAMA, School of Design and Architecture, University of Nagoya City, Japan. The author is grateful to the University of Nagoya City, its University Board of Directors and the concerned Faculty and Department Meetings. The redaction of this original document (text and figures) was completed between November 23rd, 2006 and January 25th, 2007, during an invited stay of the author at the University of Nagoya City. The author warmly acknowledges this invitation.
Contents
1 Introduction
2 Reliability analysis
2.1 First Order Second Moment (FOSM)
2.2 Advanced First Order Second Moment (AFOSM)
2.2.1 Presentation of the method
2.2.2 Link between the FOSM and AFOSM methods
2.2.3 Details of the resolution
2.3 Advanced First Order Second Moment for Correlated variables (AFOSMC)
2.4 Second-Order Reliability Methods (SORM)
2.5 First-Order Gaussian Second Moment Method (FOGSM)
2.6 First-Order Gaussian Approximation Method (FOGAM)
2.7 Summary
3 Illustrations
3.1 Bending model (uncorrelated variables)
3.1.1 Probabilistic analysis: analytical approach
3.1.2 Reliability analysis: analytical approach
3.1.3 Reliability analysis: illustration of the numerical resolution (FOSM)
3.1.4 Reliability analysis: illustration of the numerical resolution (AFOSM)
3.1.5 Reliability analysis: illustration of the invariance principle (FOSM vs. AFOSM)
3.1.6 Reliability analysis: 4-variable problem
3.1.7 Reliability analysis: example of divergence
3.1.8 Reliability analysis: a parametric study
3.2 Buckling model (correlated non gaussian variables)
3.2.1 Probabilistic analysis: analytical approach 1
3.2.2 Reliability analysis: analytical approach
3.2.3 Reliability analysis: illustration of numerical resolution (AFOSMC)
3.2.4 Probabilistic analysis: analytical approach 2
3.2.5 Reliability analysis: analytical approach
3.2.6 Reliability analysis: illustration of numerical resolution (AFOSMC)
3.3 Vibration model (non linear failure function)
3.3.1 Probabilistic analysis: analytical approach
3.3.2 Reliability analysis: illustration of numerical resolution (FOGSM)
3.3.3 Reliability analysis: illustration of numerical resolution (FOGAM)
A Computer Programs
A.1 AFOSMC
A.1.1 Call to AFOSMC
A.1.2 AFOSMC subroutine
A.2 FOGSM
A.2.1 Call to FOGSM
Chapter 1
Introduction
Since the early times of civil engineering, the design of structures has been performed in a deterministic way, i.e. under the assumption of given loads acting on structures with given properties, which results in unique displacements and internal forces. During the whole life of the structure it should however be accepted that the loading is not unique and that the material properties are not accurately known in advance. For these reasons a certain variability in the loading as well as in the structural properties may have to be taken into account. This results in a probabilistic design.
Usually the design of a structure consists of two successive steps: first the analysis and then the verifications.
The Monte Carlo Simulation (MCS) technique is the most time-consuming method but leads, without any hypothesis, to the most complete results. It is able to provide the full probability density function of the response and hence any subsequent result. Once the deterministic theories are well understood, its application is straightforward. The method based on fuzzy arithmetics simply consists in the common operations (+, -, *, /) applied to fuzzy numbers. This method thus allows computing the fuzziness of the response (i.e. its variability, which is equivalent to the probability density function).
Figure 1.1: Probabilistic analysis. (a) The random loading and random material properties are specified by their probability density functions. (b) The results of the analysis are the probability density functions of the displacements.
Because of the evident complexity of some problems, the analytical approach cannot be applied in every case; because of their considerable computational cost, and mainly because of the huge quantity of information coming out of the analysis, the MCS technique and fuzzy arithmetics are rather used for some localized checks.
For these reasons, in civil engineering applications, the stochastic analysis and the stochastic verifications are generally performed at once in a so-called reliability analysis¹. The use of this method is also explained by the fact that, at the design stage, the designer is seldom interested in the whole pdfs of the structural displacements but rather in their upper tails. Indeed only the upper tails of the pdfs are used for the design. The idea is thus to focus on this area only and not on the whole pdfs: the analysis consists in evaluating the reliability of the structure, i.e. the probability that, given some probabilistic loading and structural parameters, the load applied on the structure exceeds its resistance. This definition shows that this procedure involves both analysis and verification at the same time. Compared to a Monte Carlo simulation, its application is very fast. As illustrated below, it can indeed be seen as computing one single point of the whole pdf whereas, in its most basic formulation, the MCS technique establishes it entirely.
A reliability analysis consists in both the analysis and the verification of the structure. Exactly as for the verification stage of a deterministic design, a set of checking conditions has to be provided. In order to simplify the problem, in this document, these conditions are considered one by one and the structure is said to be safe if all the conditions are fulfilled. For the sake of simplicity only one condition will be considered in this report.
Although in a deterministic approach the checking condition can be proved to be fulfilled or not —i.e. to the question "Is the resistance larger than the load?", the answer is yes or no—, in a stochastic procedure this condition cannot be assessed otherwise than with a probabilistic measure: the probability of failure. The answer is of this kind: "yes, the resistance is larger than the load with a probability equal to 95%". In the very simple context of two variables, resistance R and loading S, the failure condition is:
Z = R − S      (1.1)
The probability of failure is expressed by:
pf = prob(Z < 0) = ∫∫_(R,S)|Z<0 pRS(R, S) dR dS      (1.2)
¹ Because it combines both analysis and verifications, it should rather be called a "reliability-based design".
Chapter 2
Reliability analysis
In a deterministic design, and in the context of a two-variable problem, resistance R and loading
S, the failure condition is assessed by the relation:
S>R (2.1)
If this condition is fulfilled, failure is said to occur. Within the context of such a
deterministic approach, this inequality can also be written with several equivalent formulations.
For instance:
R − S < 0     or     R/S − 1 < 0      (2.2)
are strictly equivalent to Eq. (2.1). Although in a deterministic approach the checking con-
dition can be proved to be fulfilled or not —i.e. to the question "Is the resistance larger than
the load?", the answer is yes or no—, in a stochastic procedure, this condition can’t be assessed
otherwise than with a probabilistic measure: the probability of failure. The answer is then of this
kind: "yes the resistance is larger than the load with a probability equal to 95%". In a reliability
analysis, a failure condition has to be defined too. It is directly derived from the deterministic
relation and the failure condition is defined by a new random variable G. For example the random
variables corresponding to the deterministic relations in (2.2) are:
G1(R, S) = R − S     or     G2(R, S) = R/S − 1      (2.3)
In this simple context of a two-variable problem the failure condition is:
where pRS (R, S), the joint probability density function between R and S, has to be integrated
on the domain on which the failure condition is fulfilled (where the resistance is smaller than the
loading). This is illustrated at Fig. 2.1.
In practical applications the resistance of the structure is given as a function of several pa-
rameters depending on the considered problem (stiffness, cross-section area, bending modulus,
structural dimensions, etc.). Then in a more general context a failure condition is given as a scalar
implicit relation of more than two parameters:
The purpose of a reliability method consists in estimating the probability of failure as the result of the integral:
pf = prob(G({x}) ≤ 0) = ∫···∫_{x}|G({x})≤0 px(x1, x2, ..., xN) dx1 dx2 ··· dxN      (2.7)
where px (x1 , x2 , ..., xN ) is the joint probability density function between all the variables.
In case of uncorrelated gaussian variables and a linear failure function, the result of this integral can be obtained in closed form. However in practical applications these three conditions are very seldom satisfied together. The next sections will present how approximate methods have been developed in order to give estimations of this integral. The First Order Reliability Methods (FORM) form a large set of available methods and some of them are presented. The most basic one, the First Order Second Moment, is presented first and its lack of rigour is highlighted. Then the Advanced First Order Second Moment (AFOSM) method is introduced as a result of the Hasofer-Lind theory. At this stage the simple theory valid for Gaussian processes as well as the Equivalent Gaussian Method is presented.
The probability of failure, defined as the probability that Z1 is negative (Eq. (2.5)), is repre-
sented by the shaded area in Fig. 2.2 and can be computed by:
pf = Φ(−μZ/σZ) = 1 − Φ(μZ/σZ)      (2.11)
where Φ is the normal cumulative distribution function. This relation shows that it is very
convenient to define a reliability index as:
β = μZ/σZ = (μR − μS) / √(σR² + σS²)      (2.12)
Historically the first idea of the reliability analysis was to compute this reliability index as the
ratio of the mean failure condition and its standard deviation, and then to express the probability
of failure by Eq. (2.13). This development shows that this way to estimate the probability of
failure is rigorous, i.e. returns the exact probability of failure, in case of uncorrelated gaussian
variables and linear failure function. Because it was developed from the consideration of a linear
failure function (First Order) and Gaussian processes (represented by their first two statistical
moments), this method is called the First-Order Second-Moment method (FOSM).
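As a small numerical illustration of Eq. (2.12), with hypothetical values that are not taken from this report, the reliability index and the corresponding probability of failure can be evaluated for instance in Python with SciPy:

from scipy.stats import norm

# Hypothetical data (illustration only): resistance R and loading S
mu_R, sigma_R = 12.0, 2.0
mu_S, sigma_S = 8.0, 1.5

# Reliability index of Z = R - S (Eq. (2.12)) and probability of failure
beta = (mu_R - mu_S) / (sigma_R**2 + sigma_S**2)**0.5
pf = norm.cdf(-beta)
print(beta, pf)   # beta = 1.6, pf close to 0.055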
This procedure for the computation of the probability of failure can be used for the second "equivalent" failure condition Z2 = R/S − 1 < 0. In this case, the mean value of the failure condition μZ and its standard deviation σZ are not easy to compute in closed form. They can however be estimated easily if the failure condition is linearized around the mean value:
Z2(R, S) ≈ Z2(μR, μS) + ∂Z2/∂R |_(R=μR, S=μS) (R − μR) + ∂Z2/∂S |_(R=μR, S=μS) (S − μS)      (2.14)
Figure 2.3: Example of linear failure function and thus linear failure condition
The probability of failure related to this second failure condition is then expressed by pf = Φ(−β) = 1 − Φ(β). Simply because the reliability indices are different (Eqs. (2.12) and (2.17)), the probability of failure is different from the one obtained with the first failure condition. This result indicates that the FOSM method has to be used with extreme care. Indeed it would be expected that several failure conditions giving the same result in a deterministic approach also give the same result in a probabilistic approach. This example shows that it is not the case with the FOSM method. This is known as the lack of the invariance property and is illustrated in section 3.1.5.
Definition 1 A reliability analysis method presents the invariance property if any transformation of a linear failure function returns the same reliability index.
It is important to notice that the invariance property is related to a linear failure condition and not necessarily to a linear failure function. This difference seems to be subtle but is very important. Indeed a non linear failure function, e.g. Z2(R, S) = R/S − 1, can be associated with a linear failure condition (Z2 = 0 ⇒ R − S = 0, which is linear). This is illustrated at Figs. 2.3 to 2.5. The first two show a linear and a non linear failure function leading to the same linear failure condition: Z = 0 is the same straight line on both graphs. Typically, non linear failure functions related to linear conditions (Fig. 2.4) can be obtained by a transformation of a linear failure function. The invariance property concerns these functions. Fig. 2.5 illustrates a non linear function with a non linear failure condition. In this case the limit condition Z = 0 is not linear.
The FOSM method is rigorous in case of a linear failure function only (Fig. 2.3 only) and not for any linear failure condition. Therefore it does not fulfil the invariance property. The FOSM method consists in replacing the exact failure function by a linearized relation, this linearization being done at the mean point. In case of a linear failure function, this linearization returns the same result no matter the point around which the linearization is performed. But Fig. 2.4 shows clearly that if the linearization is performed at a point which does not lie on Z = 0, the resulting hyperplane is different (because the level lines are not parallel). This is precisely why the FOSM method lacks the invariance property.
Despite this major drawback the FOSM method has been widely used for many years. Indeed if the random variables are Gaussian and if the failure function is linear this method returns the exact probability of failure. If the probability distributions are slightly different from the gaussian distribution or if the failure condition is slightly non linear, this method can however still be used to determine estimations of the exact probability of failure. The mathematical definition of the FOSM method can be generalized to the case of several random variables.

Figure 2.4: Example of non linear failure function but linear failure condition

Figure 2.5: Example of non linear failure function and non linear failure condition
Definition 2 . FOSM: Let us suppose a non linear relation of random variables:
G({x}) = G(x1 , x2 , ..., xN ) (2.18)
describing the failure condition G({x}) = 0. The reliability index related to this failure condition
is defined as the ratio of the mean and standard deviation of the failure function:
β = μG / σG      (2.19)
where the mean and standard deviation are estimated by linearizing the failure condition around
the mean variable {µx } :
G({x}) ≈ G({μx}) + ({x} − {μx})^T ∂G/∂{x} |_{x}={μx}      (2.20)
which gives:
µG = G({µx }) (2.21)
σG² = Σ_{i=1}^{N} Σ_{j=1}^{N} ∂G/∂xi |_{x}={μx} · ∂G/∂xj |_{x}={μx} · covXiXj      (2.22)
where covXiXj represents the covariance between variables Xi and Xj. The subsequent probability of failure is rigorous in case of gaussian variables and a linear failure function only.
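A minimal numerical sketch of this definition is given below (Python/NumPy, finite-difference gradient); the failure function g and the data are hypothetical placeholders, not one of the examples treated later in this report:

import numpy as np
from scipy.stats import norm

def fosm(g, mu, cov, h=1e-6):
    """FOSM: linearize the failure function g at the mean point (Definition 2)."""
    mu = np.asarray(mu, dtype=float)
    n = len(mu)
    grad = np.empty(n)
    for i in range(n):                      # finite-difference gradient at the mean
        dx = np.zeros(n); dx[i] = h
        grad[i] = (g(mu + dx) - g(mu - dx)) / (2 * h)
    mu_G = g(mu)                            # Eq. (2.21)
    sigma_G = np.sqrt(grad @ cov @ grad)    # Eq. (2.22)
    beta = mu_G / sigma_G                   # Eq. (2.19)
    return beta, norm.cdf(-beta)

# Hypothetical linear failure function G = R - S with uncorrelated variables
g = lambda x: x[0] - x[1]
print(fosm(g, mu=[12.0, 8.0], cov=np.diag([2.0**2, 1.5**2])))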
This failure condition can also be written as a function of the new variables as:
Ĝ({x̂}) = G(x1(x̂1), x2(x̂2), ..., xN(x̂N))      (2.25)
Since the original failure condition was linear, the linear transformation keeps the linearity of the limits of the domain of integration. Then Ĝ({x̂}) can be written:
Ĝ({x̂}) = 0  ⇔  â1 x̂1 + â2 x̂2 + ... + âN x̂N = â0  ⇔  {â}^T {x̂} = â0      (2.26)
This is schematically illustrated on the 2-variable problem in Fig. 2.6. Because the variables are uncorrelated, the principal axes of the joint probability density function between R and S are parallel to the axes R and S (Fig. 2.6-(a)). The shaded area represents the domain of integration, i.e. the zone in which G < 0, or again the domain in which the resistance R is smaller than the loading S. With the new reduced variables (Fig. 2.6-(b)), the joint probability density function is represented by concentric circular level curves and the failure condition Ẑ = 0 is still a linear relation of R̂ and Ŝ.
Furthermore in the context of uncorrelated variables the joint probability density function can
be factorized in its marginal probability density functions as:
pf = ∫···∫_{x}|G({x})≤0 px1(x1) px2(x2) ... pxN(xN) dx1 dx2 ··· dxN      (2.27)
where px̂i(x̂i) = pxi(xi(x̂i)), i ∈ [1, N], are the marginal probability density functions of the reduced variables, i.e. normal distributions because the distribution of the physical variables is gaussian. By noting the normal probability density function as pz(x), Eq. (2.28) can thus also be written:
pf = ∫···∫_{x̂}|Ĝ({x̂})≤0 pz(x̂1) pz(x̂2) ... pz(x̂N) dx̂1 dx̂2 ··· dx̂N      (2.29)
The factorization of the joint probability density functions reduces the complexity of the definite
integral to the shape of its domain of integration only. This shape is however not really complex
because its limit, represented by the failure condition, is a hyperplane (a line in Fig. 2.6-(b)). In
order to make it simpler again, let us define another transformation:
{x̃} = [R] {x̂}      (2.30)
where [R] is the adequate rotation matrix transforming the domain of integration into a domain
with axes parallel to the reduced directions. It can be understood intuitively that the probability
of failure can be expressed as:
pf = ∫_{−∞}^{+∞} pz(x̃1) dx̃1 ··· ∫_{−∞}^{+∞} pz(x̃N−1) dx̃N−1 ∫_{β}^{+∞} pz(x̃N) dx̃N      (2.31)
where the domain of integration is running from −∞ to +∞ for each variable except for the
last one which has to cover the interval [β, +∞]. Because of Kolmogorov’s first axiom and because
the marginal probability density functions are normal, the probability of failure is thus finally
reduced to:
pf = ∫_{β}^{+∞} pz(x̃N) dx̃N = 1 − Φ(β) = Φ(−β)      (2.32)
Figure 2.6: Successive transformations: (a) Initial physical variables - (b) Reduced variables with
zero mean and unit variance - (c) Rotated reduced variables
where Φ is the normal cumulative distribution function and β is the reliability index defined as the shortest distance, in the reduced space (Fig. 2.6-(b)), between the origin and the failure condition. On the simple two-variable problem with gaussian variables and a linear failure condition, this geometric definition of the reliability index is equivalent to the one given in the previous section (FOSM). It is however different in case of a non linear failure condition, as illustrated in section 3.1.5.
Both the FOSM and AFOSM give the same reliability index and hence the same probability of failure in case of gaussian processes and linear failure functions. If the failure function is non linear but the failure condition is linear, the AFOSM also returns the exact probability of failure. This can be understood intuitively since the AFOSM method is based on a geometric approach and the domain of integration is anyway limited by a hyperplane in case of a linear condition (no matter whether the function itself is linear or not). Furthermore the linearity of the failure function is not required at all in the AFOSM whereas the FOSM is based on this hypothesis.
In case of non linear conditions, the AFOSM method however gives only estimations of the exact probability of failure. Because of the geometric definition of the reliability index, the AFOSM method can still be applied but it should be kept in mind that the resulting error in the probability of failure increases as the non linearity of the condition increases.
Definition 3. AFOSM: Let us suppose a non linear relation of uncorrelated Gaussian variables describing the failure condition. A set of reduced (zero-mean unit-variance) variables is defined by:
x̂i = (xi − μxi) / σxi      (2.34)
and the failure condition, expressed with these new variables, is:
Ĝ({x̂}) = G(x1(x̂1), x2(x̂2), ..., xN(x̂N)) = 0      (2.35)
The reliability index related to this failure condition is defined as the shortest distance in the
reduced space between the origin and this hypersurface.
Theorem 4 To find the minimum of a scalar function F({x̂}) under the condition G({x̂}) = 0 is a usual mathematical problem. It is usually solved by introducing Lagrange multipliers. Indeed, for an extremum of F to exist on G, the gradient of F must line up with the gradient of G; one is then a multiple of the other:
∇F = −λ∇G      (2.36)
where λ is called the Lagrange multiplier. If these two vectors are equal then each of their components is also equal:
∂F/∂xi + λ ∂G/∂xi = 0     for i = 1, ..., N      (2.37)
Together with G({x̂}) = 0, these N relations form a set of N + 1 equations with N + 1 unknowns. The optimum point is obtained by the resolution of this set of non linear equations.
The AFOSM requires the computation of the shortest distance between a given hypersurface and the origin. This can be performed by introducing a Lagrange multiplier. The distance between any point and the origin is expressed by:
β({x̂}) = √( {x̂}^T {x̂} )      (2.38)
The application of Lagrange's theory to the squared distance leads to this set of equations:
∂/∂x̂i ( {x̂}^T {x̂} ) + λ ∂Ĝ/∂x̂i = 0,     i = 1, ..., N
Ĝ(x̂1, x̂2, ..., x̂N) = 0      (2.39)
in which Ĝ({x̂*}) = 0 has been taken into account. It is important to notice that the design point {x̂*} is a priori unknown.
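Since the design point is unknown beforehand, it has to be located numerically. One possible implementation of this geometric definition, sketched below with SciPy's general-purpose SLSQP optimizer and a hypothetical (linear) reduced failure condition, simply minimizes the squared distance of Eq. (2.39) under the constraint Ĝ({x̂}) = 0; the same call can be used for non linear conditions:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical reduced failure condition (not one of the examples of this report)
g_hat = lambda x: 3.0 - x[0] - 2.0 * x[1]

# Shortest distance between the origin and the surface G_hat = 0 (Eqs. (2.38)-(2.39))
res = minimize(lambda x: x @ x,                     # squared distance
               x0=np.zeros(2),
               constraints={'type': 'eq', 'fun': g_hat})
x_star = res.x                                      # design point, here (0.6, 1.2)
beta = np.sqrt(x_star @ x_star)                     # reliability index, here 3/sqrt(5)
print(x_star, beta, norm.cdf(-beta))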
Following the FOSM’s procedure, the reliability index is expressed as the ratio of the mean
value of the failure condition µG and its standard deviation σ G . Thanks to the new linearization
they are expressed by:
μG = E[G({x})] = ({μx} − {x*})^T ∂G/∂{x} |_{x*}      (2.42)
and
σG² = E[ (G({x}) − μG)² ]
    = E[ ( ({x} − {μx})^T ∂G/∂{x} |_{x*} )² ]
    = Σ_{i=1}^{N} Σ_{j=1}^{N} E[(xi − μxi)(xj − μxj)] ∂G/∂xi |_{x*} · ∂G/∂xj |_{x*}
    = Σ_{i=1}^{N} Σ_{j=1}^{N} covxixj ∂G/∂xi |_{x*} · ∂G/∂xj |_{x*}
where covXiXj represents the covariance between variables Xi and Xj. In order to compare more easily with the development related to the AFOSM method, the same development can be written in the reduced space ({μx} → 0, covxixj → δij, x → x̂). The resulting reliability index is then expressed by:
β = μG / σG = −Σ_{i=1}^{N} x̂i* ∂Ĝ/∂x̂i |_{x̂*}  /  √( Σ_{i=1}^{N} ( ∂Ĝ/∂x̂i |_{x̂*} )² )      (2.43)
In the AFOSM method the reliability index is defined as the shortest distance between the origin (in the reduced space) and the limit hyperplane. It has been shown that the resulting expression of the reliability index is (Eq. (2.38)):
β = √( Σ_{i=1}^{N} x̂i*² )      (2.44)
After some trivial transformations and using Eq. (2.40), this relation can also be written:
β = √( Σ_{i=1}^{N} x̂i*² ) = Σ_{i=1}^{N} x̂i*² / √( Σ_{i=1}^{N} x̂i*² )
  = −Σ_{i=1}^{N} x̂i* (−(2/λ) x̂i*) / √( Σ_{i=1}^{N} ((2/λ) x̂i*)² )
  = −Σ_{i=1}^{N} x̂i* ∂Ĝ/∂x̂i |_{x̂*} / √( Σ_{i=1}^{N} ( ∂Ĝ/∂x̂i |_{x̂*} )² )      (2.45)
which results in the same relation as Eq. (2.43). This indicates clearly that the geometric definition
of the reliability index (without requiring any linearization of the failure function!) is strictly
equivalent to the linearization of the failure function around the design point. This demonstration
establishes a direct and clear link between the FOSM method and the AFOSM method. A second
definition can thus be given for the AFOSM method.
Definition 5. AFOSM: Let us suppose a non linear relation of random variables describing the failure condition G({x}) = 0. The reliability index related to this failure condition is defined as the ratio of the mean and standard deviation of the failure function:
β = μG / σG      (2.47)
where the mean and standard deviation are estimated by linearizing the failure condition around the design point {x*}:
G({x}) ≈ ({x} − {x*})^T ∂G/∂{x} |_{x}={x*}      (2.48)
which gives:
μG = ({μx} − {x*})^T ∂G/∂{x} |_{x}={x*}      (2.49)
σG² = Σ_{i=1}^{N} Σ_{j=1}^{N} ∂G/∂xi |_{x}={x*} · ∂G/∂xj |_{x}={x*} · covxixj      (2.50)
where covXiXj represents the covariance between variables Xi and Xj. The subsequent probability of failure is rigorous in case of gaussian variables and a linear failure condition only. Since the linearization is performed around an a priori unknown point (the design point), an iterative resolution scheme has to be adopted.
• the geometric definition of the reliability index (Def. 3) leads to a set of non linear equations. This system (2.40) has to be solved with any numerical procedure. Usually a second-order iterative scheme, like the Newton-Raphson method, is used.
• the reliability index can be obtained as the ratio of the mean to the standard deviation of the failure function, after having linearized it around the design point (Def. 5). Since the design point is a priori unknown, this procedure also requires an iterative technique for the computation of the reliability index.
The equivalence between these two methods is seldom presented and many authors focus on just one or the other without any justification. In this report, the equivalence has been demonstrated (Section 2.2.2) and it is clear that both ways to estimate the reliability index give the same result¹. For this reason, in the following, the second way to consider the reliability index will be adopted.
In his lecture note on reliability analysis ([5]), Kusama adopts this alternative and applies a
convenient iterative resolution procedure. The successive steps of this procedure can be summa-
rized in an algorithmic manner.
Step 5: Compute the estimated mean and standard deviation of the failure condition:
μG = G^(k) + Σ_{i=1}^{N} ni^(k) (μxi − xi^(k))      (2.54)
σG = Σ_{i=1}^{N} αi^(k) ni^(k) σxi      (2.55)
increment k by 1 and loop from Step 2 to Step 7 until the convergence is reached.
1 The major difference between both ways to consider the reliability index concerns the convergence of the
iterative schemes. For some badly conditioned systems (i.e. some complex failure conditions), one method could be
much faster than the other, or even converge correctly whereas the other diverges dramatically.
² The initial guess of the design point could simply be the mean value (the first iteration is then the exact application of the FOSM method). In Kusama's procedure the initial guess of the design point is given by initial guesses on β^(0) and {α^(0)}, where the αi represent the orientation of the design point with respect to the mean values. As a first guess, he recommends to choose β^(0) = 3 and αi^(0) = 1/√N, which seems to be appropriate in some circumstances only. The initial design point is then computed by:
xi^(0) = μxi − αi^(0) β^(0) σxi
Figure 2.7: Illustration of the difference between: (a) the physical space (Algorithm 6) and (b) the reduced space (Algorithm 7).
The developments of the next sections will indicate that the use of reduced variables is not really
necessary but clearly convenient in case of correlated variables. For this reason, this modification
is already adopted at this stage. Algorithm 6 is slightly modified in order to perform the reliability
analysis in the reduced space (zero-mean and unit-variance variables).
Algorithm 7 AFOSM in the reduced space, for uncorrelated gaussian variables
Step 0.1: Define reduced variables:
x̂i = (xi − μi) / σi   ⇔   xi = μi + σi x̂i      (2.58)
Step 0.2: Write the failure condition with the reduced variables:
Ĝ({x̂}) = G({x({x̂})})      (2.59)
Step 1: Give an initial guess of the design point x̂i^(0) and start the iterations at the next step (with k = 0).
Step 2: There is no reason for this point to lie on the reduced failure condition. So, compute
the reduced failure condition at this point:
Ĝ^(k) = Ĝ({x̂^(k)})      (2.60)
Step 3: Compute the gradient of the reduced failure function at this point:
ni^(k) = ∂Ĝ({x̂})/∂x̂i |_{x̂}={x̂^(k)}      (2.61)
Step 5: Compute the estimated mean and standard deviation of the failure condition:
μG = Ĝ^(k) − Σ_{i=1}^{N} ni^(k) x̂i^(k)      (2.63)
σG = Σ_{i=1}^{N} αi^(k) ni^(k)      (2.64)
increment k by 1 and loop from Step 2 to Step 7 until the convergence is reached.
The selection of the first guess for Step 1 of this algorithm is subject to the same remarks as in Algorithm 6. However it should be added that if the first guess is taken as the mean physical variables (possibly reduced), Algorithms 6 and 7 correspond to the simple FOSM after the first step. This algorithm and this remark are illustrated by an example in section 3.1.4 (p. 34).
The application of this algorithm is rigorous in case of linear failure condition only. In the op-
posite case, the method can also be applied but it should be kept in mind that it gives approximate
results only.
{x̂} = [A] ({x} − {μx})      (2.67)
where {μx} = E[{x}] is the vector of mean physical variables and [A] is the transformation matrix which will produce uncorrelated reduced variables {x̂}. The mathematical expectation of this relation shows that this transformation provides zero-mean reduced variables. Furthermore their covariance matrix is expressed by:
[Vx̂] = E[ {x̂}{x̂}^T ] = [A] E[ ({x} − {μx})({x} − {μx})^T ] [A]^T = [A] [Vx] [A]^T      (2.68)
Since the aim of the reduction is to provide zero-mean and unit-variance variables, the matrix [A] will be chosen such as to provide:
[Vx̂] = [A] [Vx] [A]^T = [I]      (2.69)
Figure 2.8:
where [I] is the identity matrix. Hence [A] can simply be estimated from the eigenvectors of the covariance matrix of the physical variables. The normalization of the eigenvectors is of course performed in such a way that Eq. (2.69) is satisfied.
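A minimal sketch of this construction is given below (NumPy, with a hypothetical 2x2 covariance matrix): the eigenvectors of [Vx] are scaled by the inverse square roots of the eigenvalues, and the result is checked against Eq. (2.69):

import numpy as np

# Hypothetical covariance matrix of two correlated physical variables
V_x = np.array([[4.0, 1.2],
                [1.2, 1.0]])

# Eigen decomposition V_x = Phi diag(lam) Phi^T (Phi has orthonormal columns)
lam, Phi = np.linalg.eigh(V_x)

# Scale the eigenvectors so that A V_x A^T = I  (Eq. (2.69))
A = np.diag(1.0 / np.sqrt(lam)) @ Phi.T
print(np.allclose(A @ V_x @ A.T, np.eye(2)))   # True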
In this section it is still supposed that the failure condition is a linear function of the physical
parameters (Eq. 2.24). Since the transformation used to produce uncorrelated reduced variables
is linear, the failure condition expressed with the new variables remains again linear:
G({x}) = 0  ⇔  {a}^T {x} = a0
Ĝ({x̂}) = 0  ⇔  {a}^T ({μx} + [A]⁻¹ {x̂}) = a0  ⇔  {a}^T [A]⁻¹ {x̂} = a0 − {a}^T {μx}      (2.70)
where the new coefficients {â}^T = {a}^T [A]⁻¹ and â0 = a0 − {a}^T {μx} can be identified.
The reduced variables are now uncorrelated, their joint probability density function is axisym-
metric and can be factorized. Then the same developments as presented in the previous section
are still valid. In particular, Eq. (2.29) still holds, the reliability index with the same definition as
previously can be determined and it is again linked rigorously to the probability of failure.
In view of these developments, the former algorithm of the AFOSM method can be slightly
transformed to suit the context of correlated variables.
{x̂} = [A] ({x} − {μx})   ⇔   {x} = {μx} + [A]⁻¹ {x̂}      (2.71)
where [A] is such that [A] [Vx] [A]^T = [I] and [Vx] is the covariance matrix of the physical variables {x}.
Step 0.2: Write the failure condition with the reduced variables:
Ĝ({x̂}) = G({x({x̂})}) = G({μx} + [A]⁻¹ {x̂})      (2.72)
Step 1: Give an initial guess of the design point x̂i^(0) and start the iterations at the next step (with k = 0).
Step 2: There is no reason for this point to lie on the reduced failure condition. So, compute
the reduced failure condition at this point:
Ĝ^(k) = Ĝ({x̂^(k)})      (2.73)
Step 3: Compute the gradient of the reduced failure function at this point:
ni^(k) = ∂Ĝ({x̂})/∂x̂i |_{x̂}={x̂^(k)}      (2.74)
Step 5: Compute the estimated mean and standard deviation of the failure condition:
μG = Ĝ^(k) − Σ_{i=1}^{N} ni^(k) x̂i^(k)      (2.76)
σG = Σ_{i=1}^{N} αi^(k) ni^(k)      (2.77)
increment k by 1 and loop from Step 2 to Step 7 until the convergence is reached.
The application of this algorithm is rigorous in case of linear failure condition only. In the op-
posite case, the method can also be applied but it should be kept in mind that it gives approximate
results only. If the physical variables are uncorrelated, this Algorithm degenerates to Alg. 7 and
the reduction matrix [A] is a diagonal matrix composed of the inverses of the standard deviations
of the random variables.
This Algorithm is illustrated through worked examples in section 3.2.
Figure 2.9: Illustration of a non linear failure function and the corresponding linearized function
(at the design point)
This situation is illustrated in Fig. 2.9. Because of the geometric definition of the reliability index, both failure conditions have the same reliability index. The probability of failure is rigorously related to the reliability index when the failure condition is linear and when the physical variables are normally distributed. For this reason, the security of usual structures against failure is often assessed by means of the reliability index only.
Fig. 2.9 also shows that computing the probability of failure associated with a non linear condition consists in replacing it by the tangent hyperplane at the design point (in this example both failure conditions Z1 and Z2 have the same reliability index). The resulting probability of failure is thus underestimated if the curvature of the actual failure condition faces the origin (as in Fig. 2.9); on the contrary, a curvature turned towards infinity results in an overestimation of the exact probability of failure.
The evaluation of the probability of failure in case of some simple non linear failure functions can be found in the literature, but the cases in which this integral can be computed are very rare. In any case, the explicit computation of the probability of failure for an arbitrary quadratic failure function is not imaginable. For this reason, some approximate methods have been developed. The most usual one is due to Breitung ([1], [2]), who gives simple approximate expressions of the probability of failure as a function of the reliability index (obtained with a FORM method):
pf ≈ Φ(−β) ∏_{i=1}^{n−1} 1 / √(1 − β κi)      (2.80)
pf ≈ Φ(−β) ∏_{i=1}^{n−1} 1 / √(1 − (φ(β)/Φ(−β)) κi)      (2.81)
These approximate results are obtained by asymptotic developments of the integrands. The probability of failure is thus expressed as a function of the extrinsic curvatures κi of the failure function at the design point. It can be checked that, if the curvatures are all equal to 0 (linear failure condition), this result degenerates into Φ(−β), which is indeed correct if the failure condition is linear (FORM). The second relation seems to provide better results because it smoothens the singularity at βκi = 1. In this second expression φ(β) denotes the standard normal density function.
The development of such a method goes beyond the scope of this technical report and is
therefore not illustrated in the following examples. The error associated to the use of a FORM
in case of non linear failure function is however illustrated by comparison with a Monte Carlo
Simulation (Sections 3.2.6, 3.3.2 and 3.3.3).
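Evaluating Breitung's expressions themselves is nevertheless immediate once β and the curvatures κi are known from a FORM analysis; a direct transcription is sketched below (Python/SciPy), with purely hypothetical values in the example call:

import numpy as np
from scipy.stats import norm

def sorm_breitung(beta, kappa):
    """Breitung's corrections (Eqs. (2.80)-(2.81)) of the FORM result Phi(-beta)."""
    kappa = np.asarray(kappa, dtype=float)      # curvatures at the design point
    pf_form = norm.cdf(-beta)
    pf_1 = pf_form * np.prod(1.0 / np.sqrt(1.0 - beta * kappa))      # Eq. (2.80)
    ratio = norm.pdf(beta) / norm.cdf(-beta)
    pf_2 = pf_form * np.prod(1.0 / np.sqrt(1.0 - ratio * kappa))     # Eq. (2.81)
    return pf_1, pf_2

# Hypothetical reliability index and principal curvatures
print(sorm_breitung(2.5, [-0.10, 0.05]))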
Figure 2.10:
In the FOGSM method, the non gaussian physical variables are reduced to gaussian ones. Typically this reduction is a non linear relation (because a linear transformation cannot change the gaussian or non gaussian character of a variable). The failure condition is transformed accordingly (Fig. 2.10-(b)). Even if the original failure condition was linear (in the physical space), the reduced failure condition exhibits a non linear shape because of this reduction.
Then, when the considered random variables are non Gaussian, the reduced failure function is non linear. Adequate resolution techniques (e.g. SORM) should thus be used. However, provided the non linearity remains slight, first order methods (like the AFOSM) can already give reasonable estimations of the probability of failure.
Theorem 9 Transformation of a single random variable. Let x and y be random variables with their respective probability density functions px(x) and py(y). If y is expressed as a function of x by a monotonic relation y = y(x), which leads to the unique reverse relation x = x(y), both probability density functions are related by:
px(x) = py(y) |dy(x)/dx|
py(y) = px(x) |dx(y)/dy|      (2.82)
The proof of this relation is straightforward when considering the cumulative density functions.
For example, the cumulative density function of y is:
and the corresponding probability density function, obtained by simple derivation with respect to y0, is:
py(y) = dFy(y)/dy = dFx(x)/dx · |dx/dy| = px(x) |dx/dy|      (2.84)
This general relation can be used in two interesting contexts.
If x is uniformly distributed on [0, 1], its cumulative density function is expressed by:
Fx(x0) = 0 for x0 < 0,   Fx(x0) = x0 for x0 ∈ [0, 1],   Fx(x0) = 1 for x0 > 1      (2.86)
which demonstrates the theorem (Fx (x0 ) = x0 ). This property is commonly used for the
generation of non uniform random variables (Monte Carlo simulation techniques). Indeed, today’s
computers are all equipped with a random number generator. This generator is often limited to the
generation of a uniform random variable (between 0 and 1). Thanks to the previous relation any
random variable with a given cumulative density function (and hence probability density function)
can be generated.
Theorem 11 Transformation to a Gaussian distribution. Let us suppose that the probability dis-
tribution of x is known. The transformation y(x) that provides a normal variable y is given by:
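The explicit expression of this transformation is not reproduced here; a usual construction, consistent with the illustrations of Fig. 2.11, is y = Φ⁻¹(Fx(x)), where Fx is the cumulative distribution function of x. The short sketch below (Python/SciPy, with a beta(2,2) variable as in Fig. 2.11-(b)) illustrates both Theorems 10 and 11 under this assumption: generation of non uniform samples by inversion of the CDF, and mapping of those samples to a gaussian variable:

import numpy as np
from scipy.stats import beta, norm

rng = np.random.default_rng(0)
u = rng.uniform(size=100000)           # uniform samples on [0, 1]

# Theorem 10: generation of a non uniform variable by inversion of its CDF
x = beta(2, 2).ppf(u)                  # beta(2,2) samples, as in Fig. 2.11-(b)

# Theorem 11 (assumed form y = Phi^-1(Fx(x))): transformation to a gaussian variable
y = norm.ppf(beta(2, 2).cdf(x))
print(y.mean(), y.std())               # close to 0 and 1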
Theorem 12 Transformation of multiple random variables. Let {x} and {y} be sets of random variables with their respective joint probability density functions px({x}) and py({y}). If {y} is uniquely expressed as a function of {x} and reversely as well, both joint probability density functions are related by:
px({x}) = py({y}) |d{y({x})}/d{x}|
py({y}) = px({x}) |d{x({y})}/d{y}|      (2.88)
where |d{y({x})}/d{x}| denotes the absolute value of the determinant of the Jacobian matrix whose (i, j) entry is ∂yj/∂xi, and similarly for the reverse transformation.
Figure 2.11: Illustration of transformation functions. (a) From uniform distribution to Gaussian,
(b) From the standard beta (2, 2) distribution to Gaussian
Step 3: Compute the gradient of the reduced failure function at this point:
ni^(k) = ∂Ĝ({x̂})/∂x̂i |_{x̂}={x̂^(k)}      (2.91)
Step 5: Compute the estimated mean and standard deviation of the failure condition:
μG = Ĝ^(k) − Σ_{i=1}^{N} ni^(k) x̂i^(k)      (2.93)
σG = Σ_{i=1}^{N} αi^(k) ni^(k)      (2.94)
increment k by 1 and loop from Step 2 to Step 7 until the convergence is reached.
Step 0.2: Write the failure condition with the reduced variables:
Ĝ({x̂}) = G({x({x̂})})      (2.98)
Step 1: Compute the corresponding reduced design point x̂i^(0).
Step 2: There is no reason for this point to lie on the reduced failure condition. So, compute
the reduced failure condition at this point:
Ĝ^(k) = Ĝ({x̂^(k)})      (2.99)
Step 3: Compute the gradient of the reduced failure function at this point:
ni^(k) = ∂Ĝ({x̂})/∂x̂i |_{x̂}={x̂^(k)}      (2.100)
Step 5: Compute the estimated mean and standard deviation of the failure condition:
μG = Ĝ^(k) − Σ_{i=1}^{N} ni^(k) x̂i^(k)      (2.102)
σG = Σ_{i=1}^{N} αi^(k) ni^(k)      (2.103)
increment k by 1 and loop from Step 2 to Step 7 until the convergence is reached.
2.7 Summary
In the previous sections several techniques have been presented for the computation of the probability of failure by means of a so-called reliability analysis. The simplest procedure (FOSM), based on the linearization of the failure function around the mean physical variables, leads to erroneous results, even in case of a non linear failure function with a linear failure condition. The AFOSM method consists in performing this linearization in the neighbourhood of the design point, i.e. the most probable point of failure. It is possible to prove that this method provides the exact result (for the probability of failure) in case of gaussian variables and a linear failure condition. If these conditions are not fulfilled, a reliability index can still be computed (thanks to its geometrical definition) but it cannot be linked directly to the probability of failure.
A dedicated procedure, the AFOSMC, has been introduced for correlated variables. Basically its aim is strictly equivalent to that of the AFOSM method. It is therefore also valid for gaussian variables and linear failure conditions only.
In case of non Gaussian variables, the transformation to the reduced variables (as in the AFOSM and AFOSMC methods) is not linear anymore. It then transforms any linear failure condition into a non linear one. A procedure similar to the AFOSM has been presented (FOGSM). Its practical application is not always convenient because of this non linear transformation. For this reason, another method, also presented in this document (the FOGAM), is sometimes applied. The simplicity of this method lies in the replacement of the non gaussian variables by equivalent gaussian ones.
Chapter 3
Illustrations
• to analyse the structure, i.e. compute the rotation θ, or more precisely its probability density
function;
• to decide whether this result is satisfactory or not
In this very simple example, the probability density function of the rotation can be obtained
in an analytical way. Indeed the cumulative distribution function of this variable is:
Then the probability density function of θ can be computed by simple derivation of the cumu-
lative distribution function:
pθ(θ0) = dFθ/dθ0 = 1/(2Lθ0²) ∫_{−∞}^{+∞} P0 pP(P0) pk(P0/(2Lθ0)) dP0      (3.5)
This relation is valid for any marginal probability density functions for k and P . For this
application, let us suppose that they are described by gaussian variables with given means and
standard deviations (µP , µk , σ P , σ k ):
pP(P0) = 1/(√(2π) σP) exp( −(P0 − μP)² / (2σP²) )      (3.6)
pk(k0) = 1/(√(2π) σk) exp( −(k0 − μk)² / (2σk²) )      (3.7)
After some simplifications, the introduction of Eqs. (3.6) and (3.7) into Eq. (3.5) gives:
pθ(θ0) = 2L (2L μP θ0 σk² + μk σP²) / ( √(2π) ((2L σk θ0)² + σP²)^(3/2) ) · exp( −(μP − 2L μk θ0)² / (2 ((2L σk θ0)² + σP²)) )      (3.8)
which indicates that the rotation θ is not a gaussian variable. This relation is the main result of the analysis and can be used to compute any subsequent information, such as the cumulative distribution function of the rotation:
Fθ(θ) = ∫_{−∞}^{θ} pθ(θ0) dθ0      (3.9)
This definite integral has no analytical formulation but could be estimated in a numerical way.
For example, the probability of failure in the sense of the reliability theory, i.e. the probability
that a given rotation is exceeded could be computed:
In order to facilitate the comparison with the following numerical resolution, a numerical
application is given. Let us suppose that L = 1m, and P and k are Gaussian variables with means
and standard deviations respectively equal to:
µP = 10kN ; σP = 1kN
µk = 150kN/m ; σk = 20kN/m (3.11)
Fig. 3.1 represents the probability density function of the rotation resulting from Eq. (3.8). This
example shows that the distribution is skewed to the right and hence that this distribution is
not gaussian anymore. The cumulative distribution function (Fθ (θL )) can be established in a
numerical way and the determination of the associated probability of failure is straightforward
(Fig. 3.1). For the numerical application, it is supposed that the maximum allowed rotation is
θL = 0.05 rad. Then the probability of failure, i.e. the probability that θ > θL, is pf = 0.012674.
Despite the remarkable simplicity of the data, the complexity of these results shows, as men-
tioned in section 1, that analytical developments are clearly limited to some simple cases. In the
following the same example will be studied with a reliability method.
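For comparison, the probability of failure of this numerical application can also be estimated by a crude Monte Carlo simulation of the bending model (rotation θ = P/(2kL), consistent with the failure function used in the next sections); the sketch below, with one million samples, should return a value close to 0.0127:

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
L, theta_L = 1.0, 0.05
P = rng.normal(10.0, 1.0, n)        # load P ~ N(10 kN, 1 kN)
k = rng.normal(150.0, 20.0, n)      # stiffness k ~ N(150 kN/m, 20 kN/m)

theta = P / (2 * k * L)             # rotation of the bending model
pf_mc = np.mean(theta > theta_L)    # estimated probability of failure
print(pf_mc)                        # expected close to 0.0127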
Figure 3.1: Probability density function of the rotation obtained by analytical developments
k̂ = (k − μk) / σk  ;  P̂ = (P − μP) / σP      (3.15)
and the failure condition is accordingly transformed:
Ẑ(P̂, k̂) = θL − (μP + σP P̂) / (2L (μk + σk k̂))      (3.16)
and the shortest distance from the origin to the failure condition, i.e. the reliability index, is:
β = |a1 a2| / √(a1² + a2²) = |2θL L μk − μP| / √(σP² + (2θL L σk)²)      (3.19)
Since the physical variables are gaussian and the failure condition is linear, the reliability index
is rigorously linked to the probability of failure:
pf = 1 − Φ (β) = Φ (−β) (3.20)
The same numerical application as in the previous section can be considered. The successive
estimation of the intercepts, the reliability index and probability of failure results in:
a1 = 5 ; a2 = −2.5
β = 2.2361 (3.21)
pf = 0.012674
which is exactly the same result as obtained in the previous section. The analytical developments of the previous section are often seen as giving more complete information about the probability distribution of the response. Indeed this function could be plotted as a direct result of the computation (Fig. 3.1)¹.
failure. This result should thus be considered as the most interesting result that can be provided by the reliability analysis. This should be seen as a drawback of this method because some others (of course more time consuming), like the MCS, are able to provide a lot of supplementary information: the whole pdf of the result. Actually this result could also be provided by the reliability analysis. Indeed the probability of failure is closely linked to the cumulative distribution function (Eq. (3.10)). So if the failure condition is expressed as:
Z(x1, x2, ..., xN) = P − f(x1, x2, ..., xN)      (3.22)
and if the reliability analysis is repeated for several values of the parameter P, then the probability that P − f(x1, x2, ..., xN) is smaller than zero is computed for several values of P. This is nothing else than the cumulative distribution function of the response f, evaluated at P! In this application the failure condition has the same form as in Eq. (3.22) with P = θL.
The cumulative density function of the rotation angle is thus expressed by:
Fθ (θL ) = Φ (β (θL )) (3.23)
Then the probability density function of θ can be established:
pθ(θL) = dΦ(β(θL))/dθL = dΦ(β)/dβ |_β=β(θL) · dβ(θL)/dθL      (3.24)
It can be checked that this relation is strictly equivalent to Eq. (3.8).
As a conclusion, even if a reliability-based design is often seen as including both the analysis and verification stages, this example showed that the results of the analysis alone can be obtained by choosing the failure condition (verification condition) appropriately.
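This observation is easy to exploit in this example because β(θL) is available in closed form (Eq. (3.19), with the sign of the numerator retained); the minimal sketch below evaluates Fθ(θL) = Φ(β(θL)) over a range of thresholds and recovers the pdf by numerical differentiation, as in Eq. (3.24):

import numpy as np
from scipy.stats import norm

L = 1.0
mu_P, sig_P = 10.0, 1.0
mu_k, sig_k = 150.0, 20.0

theta_L = np.linspace(0.02, 0.08, 200)        # range of rotation thresholds
# Signed reliability index as a function of the threshold (cf. Eq. (3.19))
beta = (2 * theta_L * L * mu_k - mu_P) / np.sqrt(sig_P**2 + (2 * theta_L * L * sig_k)**2)
F_theta = norm.cdf(beta)                      # Eq. (3.23): CDF of the rotation
p_theta = np.gradient(F_theta, theta_L)       # numerical counterpart of Eq. (3.24)
print(F_theta[np.searchsorted(theta_L, 0.05)])   # about 0.987 = 1 - pf near 0.05 rad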
Concerning the estimation of the standard deviation of the failure condition (Eq. (2.22)), its
expression can be slightly simplified because the physical variables, P and k, are independent.
Indeed the double summation can be replaced by a simple one:
σZ² = Σ_{i=1}^{N} ( ∂Z/∂xi |_{x}={μx} )² σXi²      (3.27)
The estimation of the reliability index and the corresponding probability of failure is then
straightforward (Eq. 2.19):
β = μZ / σZ = 0.01667 / 0.00556 = 3      (3.31)
pf = 0.0013499 (3.32)
This result is clearly different from the one obtained in the previous section. This illustrates that the FOSM method is unable to provide a good estimation of either the reliability index or the probability of failure in case of a non linear failure function, even though the failure condition itself is linear here.
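These FOSM values are easy to reproduce; the short sketch below (Python/SciPy) evaluates the failure function Z1 = θL − P/(2kL) and its analytical partial derivatives at the mean point, following Eqs. (2.19), (2.21) and (3.27):

import numpy as np
from scipy.stats import norm

L, theta_L = 1.0, 0.05
mu_P, sig_P, mu_k, sig_k = 10.0, 1.0, 150.0, 20.0

mu_Z = theta_L - mu_P / (2 * mu_k * L)            # mean of Z1 at the mean point, ~0.01667
dZ_dP = -1.0 / (2 * mu_k * L)                     # partial derivatives at the mean point
dZ_dk = mu_P / (2 * mu_k**2 * L)
sigma_Z = np.sqrt((dZ_dP * sig_P)**2 + (dZ_dk * sig_k)**2)   # Eq. (3.27), ~0.00556
beta = mu_Z / sigma_Z                                        # ~3
print(mu_Z, sigma_Z, beta, norm.cdf(-beta))                  # pf ~ 0.00135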
In this section the reliability analysis is performed in a numerical way. Since the physical variables
are not correlated, the algorithms presented in section 2.2 can be applied. Furthermore, since the
failure condition is linear, they should give the exact reliability index and probability of failure.
For this example, the steps of Algorithm 7 (p. 18) are considered.
Step 0.1: The new reduced variables are:
k̂ = (k − 150) / 20     and     P̂ = P − 10      (3.33)
Step 0.2: The same failure condition as in the previous section is considered. Hence the reduced
failure condition is:
Ĝ(P̂, k̂) = 0.05 − (P̂ + 10) / (2 (20 k̂ + 150))      (3.34)
Step 1: The first guess of the design point is given as suggested by Kusama ([5]). It is supposed that the
reliability index and the direction of the design point are:
β = 3  ;  α = (1/√2, 1/√2)      (3.35)
the initial design point is then computed by:
x̂^(0) = −αβ = −3 (1/√2, 1/√2) = (−2.1213, −2.1213)      (3.36)
This point is represented in Fig. 3.2-(a). This figure also shows the level curves of the joint probability density function of k̂ and P̂. Because of the variable reduction, they are concentric circles. The level curves of the reduced failure function are also represented. The reduced failure condition is the particular curve corresponding to the zero level.
Step 2: This point clearly does not lie on the failure condition. Indeed the reduced failure condition at this point is equal to:
Ĝ^(0) = Ĝ(−2.1213, −2.1213) = 1.338·10⁻²
Step 3: The gradient of the reduced failure function at this point is:
n1^(0) = ∂Ĝ/∂P̂ = −4.648·10⁻³  ;  n2^(0) = ∂Ĝ/∂k̂ = 6.808·10⁻³      (3.39)
This vector is indicated by the small arrow starting at x̂^(0) on Fig. 3.2-(a). As it is the gradient of the reduced failure function, it is perpendicular to its level curve at x̂^(0).
Step 4: This vector can be used to define a new direction, i.e. a unit vector:
|n^(0)| = √( (−4.648·10⁻³)² + (6.808·10⁻³)² ) = 8.244·10⁻³      (3.40)
α^(0) = {n^(0)} / |n^(0)| = (−4.648·10⁻³ / 8.244·10⁻³ , 6.808·10⁻³ / 8.244·10⁻³) = (−0.5638, 0.8259)      (3.41)
This direction will be used for the next iteration in the opposite sense, i.e. as the direction from the origin to the new design point. This is the basis of the algorithm. This direction can be linked to the original α used in Step 1. It can be seen as a better approximation of this direction. In this case the first design point was poorly chosen. Besides the direction of the new design point, the new distance to the origin (β) has to be provided.
Step 5: This distance is deduced from the following procedure. The reduced failure function is replaced by its tangent hyperplane at x̂^(0). Since the new design point should lie on the zero-level curve, the intersection between this hyperplane and the horizontal plane Ĝ = 0 is sought. The distance between the resulting line and the current location of the design point x̂^(0) is then used for the computation of the next distance from the origin. It can be shown that this methodology is equivalent to estimating the mean and standard deviation of the reduced failure condition by:
μG = Ĝ^(0) − Σ_{i} ni^(0) x̂i^(0)
   = 0.0134 − [ (−4.648·10⁻³)(−2.1213) + 6.808·10⁻³ · (−2.1213) ]
   = 0.01796      (3.42)
σG = α1 ∂Ĝ/∂P̂ + α2 ∂Ĝ/∂k̂
   = −0.5638 · (−4.648·10⁻³) + 0.8259 · 6.808·10⁻³ = 8.244·10⁻³      (3.43)
Note: another advantage of the computation in the reduced space is that σG is actually equal to |n^(k)| and thus does not need to be estimated separately.
Figure 3.2: Illustration of the iterative resolution (AFOSM in reduced space with uncorrelated
variables, Algorithm 7)
Step 6: ... and then computing the new distance from the origin, i.e. the reliability index, by:
β^(0) = μG / σG = 2.1790      (3.44)
Step 7: Finally the new approximation of the design point is obtained by:
x̂^(1) = −{α^(0)} β^(0) = −(−0.5638, 0.8259) · 2.1790      (3.45)
      = (1.2286, −1.7996) ≡ (P̂^(1), k̂^(1))      (3.46)
This point is represented by the cross and point 1 on Fig. 3.2-(a). The iterations can now start with this new guess for the design point. So, going back to Step 3, the gradient of the reduced failure function is estimated. It is represented on Fig. 3.2-(b) by the line starting from x̂^(1). This gradient is used to compute a new direction (Step 4) and this direction will be used to determine the next design point. The next distance from the origin, i.e. the next reliability index, is obtained by replacing the actual reduced failure function by its tangent hyperplane at x̂^(1) or, again, by the equivalent computation of μG and σG (Steps 5 and 6). This leads to a new estimation of the design point, x̂^(2), represented on Fig. 3.2-(b) too. It can be observed that the convergence is very fast on this simple example.
Numerical values of some representative parameters are given in Table 3.1 for the first five iterations. The results in the first line correspond to the first iteration and are obtained as explained previously. The numerical values given in this table indicate that the convergence is faster for the reliability index (β) than for the position of the design point (P̂, k̂). Indeed three iterations are enough to let the former converge whereas five iterations are necessary for the convergence of the latter. This illustrates the second-order convergence of the reliability index. It can be understood intuitively thanks to the geometrical definition of the reliability index.
Table 3.2 provides more detailed results for the first three iterations. The intermediate results for the six major steps of the AFOSM method (Alg. 7, p. 18) are given. They should be useful to the reader who wishes to implement the same iterative procedure himself.
Table 3.1: Evolution of the design point, reliability index and probability of failure

It.   X̂1* = P̂    X̂2* = k̂    μG          σG          β        pf
1     −2.1213    −2.1213    1.796·10⁻²  8.244·10⁻³  2.1790   1.4666·10⁻²
2     +1.2286    −1.7996    2.169·10⁻²  9.688·10⁻³  2.2388   1.2586·10⁻²
3     +1.0134    −1.9963    2.272·10⁻²  1.016·10⁻²  2.2361   1.2674·10⁻²
4     +0.9996    −2.0002    2.273·10⁻²  1.016·10⁻²  2.2361   1.2674·10⁻²
5     +1.0000    −2.0000    2.273·10⁻²  1.016·10⁻²  2.2361   1.2674·10⁻²
Table 3.2: Detailed results for the first three iterations (AFOSM, 2-variable problem)

                       Iteration 1      Iteration 2      Iteration 3
Step 1   X̂1* = P̂      −2.1213          +1.2286          +1.0134
         X̂2* = k̂      −2.1213          −1.7996          −1.9963
         P             +7.8787          +11.229          +11.013
         k             +107.57          +114.01          +110.07
Step 2   G             +1.3380·10⁻²     +7.5510·10⁻⁴     −2.6977·10⁻⁵
Step 3   ∂Ĝ/∂P̂       −4.6480·10⁻³     −4.3857·10⁻³     −4.5424·10⁻³
         ∂Ĝ/∂k̂       +6.8084·10⁻³     +8.6389·10⁻³     +9.0896·10⁻³
Step 4   α1            −0.5638          −0.4527          −0.4470
         α2            +0.8259          +0.8917          +0.8945
Step 5   μG            +1.7963·10⁻²     +2.1690·10⁻²     +2.2722·10⁻²
         σG            +8.2436·10⁻³     +9.6884·10⁻³     +1.0161·10⁻²
Step 6   β             +2.1790          +2.2388          +2.2361
         pf            +1.4666·10⁻²     +1.2586·10⁻²     +1.2674·10⁻²

Step 1 — Guess design point
Step 2 — Compute failure condition
Step 3 — Compute the gradient of the failure condition
Step 4 — Compute the orientation of the next design point
Step 5 — Compute the mean and standard deviation of the failure condition
Step 6 — Compute the reliability index and probability of failure
For the sake of simplicity the same numerical application is again considered (θL = 0.05, L = 1 m, P = N(10 kN; 1 kN), k = N(150 kN; 20 kN)). The resulting three functions are represented by level curves in Fig. 3.3. It can be checked that the failure conditions, represented by the thick zero-level curves, are all three equivalent. The concentric circles represent the joint probability density function of the reduced variables P̂ and k̂. They are of course identical for the three functions because the input data are the same. The reduced variables are obtained by the same relation as previously (3.15). Since the failure condition and the joint probability density function are the same for all three functions, they should return the same probability of failure. This is precisely the invariance principle.
Table 3.3 gives the numerical results obtained with the FOSM and AFOSM methods for these three functions.
Table 3.3: FOSM and AFOSM results for the three failure functions

Failure function Z1(P, k) = θL − P/(2kL)
                      FOSM        AFOSM
                                  1st iter.   2nd iter.   3rd iter.
Reliab. index (β)     3           2.179       2.2388      2.2361
Prob. failure (pf)    0.0013499   0.014666    0.012586    0.012674

Failure function Z3
                      FOSM        AFOSM
                                  1st iter.   2nd iter.   3rd iter.
Reliab. index (β)     2.4328      1.9394      2.2476      2.2360
Prob. failure (pf)    0.0074915   0.026229    0.0123      0.012675
Concerning the FOSM method, the same procedure as in section 3.1.3 is applied. The reliability indices are different for the three functions (β = 3, β = 2.2361 and β = 2.4328). This illustrates the lack of invariance. The exact result is actually β = 2.2361, which is precisely obtained for the second function (Z2). In this case the failure function is linear, its level curves are parallel to each other and the FOSM method returns the exact result. The conditions under which the FOSM method returns the exact reliability index are thus:
Figure 3.3: Illustration of the invariance principle. Z1 and Z3 are non-linear failure functions, Z2 is a linear failure function, but they all lead to the same linear failure condition.
• gaussian variables
• linear failure function
Concerning the AFOSM method, the same iterative procedure as in section 3.1.4 is applied. The evolution of the reliability index and the corresponding probability of failure are given in Table 3.3 for the first three iterations. For all three functions the convergence is quite fast and after the third iteration the reliability index is almost equal to 2.2361, which is the exact value. The convergence can also be tracked on Fig. 3.3, where the successive estimations of the design point are represented by crosses. In any case the AFOSM method provides the same result for these three functions, since they share the same failure condition. For this reason, the AFOSM method possesses the invariance property. This is easily understandable thanks to the geometrical definition of the reliability index. The conditions under which the AFOSM method returns the exact reliability index are thus:
• gaussian variables
• linear failure condition
It is also interesting to note that no iterations are actually required for function Z2. The exact result is directly obtained at the first iteration, which means that any first guess results in the exact reliability index. As a particular case, starting from the mean physical variables gives the right estimation, which justifies, in another way, why the FOSM method provides the exact result for this function.
In the right column, Fig. 3.3 illustrates the invariance principle by means of a Monte Carlo simulation. A large number (N = 10⁶) of couples (Pi, ki) are generated in such a way that the histograms of the generated series coincide with the given probability density functions of P and k. For each couple, the corresponding value of the failure function is computed, and this operation is repeated for all three failure functions. This results, for each function, in a series of N = 10⁶ values of the failure function. The probability density function of the failure function is approximated by the histogram of this series. These results are represented in Fig. 3.3. The probability of failure is the area under the histogram and below Z = 0. It is identical for all three functions. It is also interesting to note that the probability density function resulting from function Z2 is gaussian (which is due to the linearity of this failure function).
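For illustration, such a Monte Carlo estimation can be sketched in a few MATLAB lines. Only Z1 is shown here (the expressions of Z2 and Z3 are not repeated); the sample size and the way the samples are drawn are assumptions of this sketch.

% Minimal Monte Carlo sketch: probability of failure for Z1(P,k) = thetaL - P/(2kL),
% with P = N(10;1), k = N(150;20), L = 1, thetaL = 0.05.
N  = 1e6;
P  = 10  +  1*randn(N,1);          % samples of P
k  = 150 + 20*randn(N,1);          % samples of k
Z1 = 0.05 - P./(2*k*1);            % failure function evaluated for each couple
pf = mean(Z1 < 0);                 % area of the histogram below Z = 0
fprintf('pf (Monte Carlo) = %.4e\n', pf);   % expected close to 1.27e-2 for this problem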
P = N (10kN ; 1kN )
k = N (150kN ; 20kN )
L = N (1m, 0.1m)
θL = N (0.05, 0.002) (3.50)
Back to the practical application, it is easy to understand why the applied load P can be defined as a random variable. This is also the case for the stiffness k, which can exhibit a significant variability for materials like timber or concrete. Concerning the beam length L, the origin of its randomness is rather linked to the manufacturing process. Thanks to today's improved technologies, the standard deviation of a beam length is generally very small (compared to the mean length). The maximum allowable rotation θL can also exhibit a certain variability.
Indeed the ultimate rotation allowed in a section is sometimes linked to the ductility of the materials, which clearly exhibits an important variability.
The reliability analysis is first performed with the FOSM method (Def. 2, p. 11). The mean value of the failure function is approximated by:

\mu_Z = Z(\{\mu_x\}) = 0.05 - \frac{10}{2\cdot150\cdot1} = 0.01667   (3.51)

and, as in the previous section, the standard deviation is simplified to:

\sigma_Z^2 = \sum_{i=1}^{N} \left(\left.\frac{\partial Z}{\partial x_i}\right|_{\{x\}=\{\mu_x\}}\right)^2 \sigma_{X_i}^2   (3.52)

where the derivatives of the failure function can be expressed in closed form:

\frac{\partial Z}{\partial P} = -\frac{1}{2kL} \;\rightarrow\; \left.\frac{\partial Z}{\partial P}\right|_{\{x\}=\{\mu_x\}} = \frac{-1}{2\cdot150\cdot1} = -0.0033
\frac{\partial Z}{\partial k} = \frac{P}{2k^2L} \;\rightarrow\; \left.\frac{\partial Z}{\partial k}\right|_{\{x\}=\{\mu_x\}} = \frac{10}{2\cdot150^2\cdot1} = 2.222\cdot10^{-4}
\frac{\partial Z}{\partial L} = \frac{P}{2kL^2} \;\rightarrow\; \left.\frac{\partial Z}{\partial L}\right|_{\{x\}=\{\mu_x\}} = \frac{10}{2\cdot150\cdot1} = 0.0333
\frac{\partial Z}{\partial \theta_L} = 1 \;\rightarrow\; \left.\frac{\partial Z}{\partial \theta_L}\right|_{\{x\}=\{\mu_x\}} = 1   (3.53)

The estimated (unique) reliability index and the corresponding probability of failure are finally:

\beta = \frac{\mu_Z}{\sigma_Z} = \frac{0.01667}{0.0067805} = 2.458   (3.55)
p_f = 0.0069851   (3.56)
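For illustration only, the same FOSM computation can be sketched in MATLAB as follows; the closed-form derivatives above are replaced here by a generic finite-difference evaluation, an assumption made only for compactness.

% Minimal FOSM sketch (4-variable bending problem): beta = muZ/sigmaZ with the
% failure function linearized at the mean point.
Z   = @(x) x(4) - x(1)/(2*x(2)*x(3));     % Z(P, k, L, thetaL) = thetaL - P/(2kL)
mX  = [10 150 1 0.05];                    % mean values
sX  = [ 1  20 0.1 0.002];                 % standard deviations
ep  = 1e-6;
muZ = Z(mX);
for i = 1:4                               % derivatives at the mean point
    xp = mX; xp(i) = xp(i) + ep;
    dZ(i) = (Z(xp) - muZ)/ep;
end
sigmaZ = sqrt(sum((dZ.*sX).^2));          % Eq. (3.52)
beta   = muZ/sigmaZ;                      % expected close to 2.458
pf     = erfc(beta/sqrt(2))/2;            % expected close to 6.99e-3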
Despite its lack of rigour, the FOSM method is so simple that it can easily illustrate some interesting features. For instance, in this case, if it is supposed that the standard deviations of the last two variables (L, θL) tend towards zero, Eq. (3.52) shows that the result tends to that of the previous section, i.e. the result corresponding to the 2-variable problem. This variance-continuity property is certainly as important as the invariance principle. It could be used advantageously in a complex problem in order to reduce the number of random variables.
The AFOSM method for non correlated gaussian variables (Alg.7, p. 18) is applied to the
same problem. The numerical results for the successive steps of the first iteration are detailed in
the following.
Step 0.1: The new reduced variables are:

\hat{P} = P - 10 \;;\; \hat{k} = \frac{k - 150}{20}   (3.57)
\hat{L} = \frac{L - 1}{0.1} \;;\; \hat{\theta}_L = \frac{\theta_L - 0.05}{0.002}   (3.58)

Step 0.2: The reduced failure condition is:

\hat{G}(\hat{P}, \hat{k}, \hat{L}, \hat{\theta}_L) = 0.002\hat{\theta}_L + 0.05 - \frac{\hat{P} + 10}{2(0.1\hat{L} + 1)(20\hat{k} + 150)}   (3.59)
Step 1: The first guess of the design point is chosen as suggested by Kusama [5]. It is supposed that the reliability index and the direction of the design point are:

\beta = 3 \;;\; \alpha = \left(\frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}\right)   (3.60)

which indicates that this first candidate is not the actual design point.
Step 3: The gradient of the reduced failure function can be computed in closed form:

\frac{\partial \hat{G}}{\partial \hat{P}} = \frac{-1}{2(0.1\hat{L} + 1)(20\hat{k} + 150)}   (3.63)
\frac{\partial \hat{G}}{\partial \hat{k}} = \frac{20(\hat{P} + 10)}{2(0.1\hat{L} + 1)(20\hat{k} + 150)^2}   (3.64)
\frac{\partial \hat{G}}{\partial \hat{L}} = \frac{0.1(\hat{P} + 10)}{2(0.1\hat{L} + 1)^2(20\hat{k} + 150)}   (3.65)
\frac{\partial \hat{G}}{\partial \hat{\theta}_L} = 0.002   (3.66)

and its evaluation at the current design point (-1.5, -1.5, -1.5, -1.5) gives:

n_1^{(0)} = \frac{\partial \hat{G}}{\partial \hat{P}} = -4.902\cdot10^{-3}   (3.67)
n_2^{(0)} = \frac{\partial \hat{G}}{\partial \hat{k}} = 6.944\cdot10^{-3}   (3.68)
n_3^{(0)} = \frac{\partial \hat{G}}{\partial \hat{L}} = 4.902\cdot10^{-3}   (3.69)
n_4^{(0)} = \frac{\partial \hat{G}}{\partial \hat{\theta}_L} = 2\cdot10^{-3}   (3.70)
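As a quick numerical cross-check (a sketch only, with an arbitrarily chosen finite-difference step), the same gradient vector n^{(0)} can be approximated without the closed-form expressions:

% Finite-difference check of the gradient of the reduced failure function Ghat
% of Eq. (3.59), evaluated at the point (-1.5, -1.5, -1.5, -1.5).
Ghat = @(x) 0.002*x(4) + 0.05 - (x(1)+10)/(2*(0.1*x(3)+1)*(20*x(2)+150));
x0 = [-1.5 -1.5 -1.5 -1.5];
ep = 1e-6;
G0 = Ghat(x0);
for i = 1:4
    xp = x0; xp(i) = xp(i) + ep;
    n(i) = (Ghat(xp) - G0)/ep;     % n = (-4.902e-3, 6.944e-3, 4.902e-3, 2e-3)
end
disp(n)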
Step 4: This vector can be used to define the new direction (used for the next iteration), i.e. a unit vector:

|n^{(0)}| = \sqrt{(-4.902\cdot10^{-3})^2 + (6.944\cdot10^{-3})^2 + (4.902\cdot10^{-3})^2 + (2\cdot10^{-3})^2} = 10.01\cdot10^{-3}   (3.71)

\{\alpha^{(0)}\} = \frac{\{n^{(0)}\}}{|n^{(0)}|} = \left(\frac{-4.902\cdot10^{-3}}{10.01\cdot10^{-3}}, \frac{6.944\cdot10^{-3}}{10.01\cdot10^{-3}}, \frac{4.902\cdot10^{-3}}{10.01\cdot10^{-3}}, \frac{2\cdot10^{-3}}{10.01\cdot10^{-3}}\right) = (-0.4895, 0.6935, 0.4895, 0.1997)   (3.72)
Step 5: The corresponding reliability index is deduced from estimations of the mean and standard deviation of the failure condition, which are both expressed by:

\mu_G = \hat{G}^{(0)} - \sum_{i=1}^{N} n_i^{(0)} \hat{x}_i^{(0)}
      = 5.3333\cdot10^{-3} - [(-4.902\cdot10^{-3})(-1.5) + 6.944\cdot10^{-3}(-1.5) + 4.902\cdot10^{-3}(-1.5) + 2\cdot10^{-3}(-1.5)] = 0.018750
\sigma_G = |n^{(0)}| = 10.01\cdot10^{-3}   (3.73)
Exactly as for the previous example of application of the AFOSM method, the most important steps of the iterative procedure are summarized in Table 3.4 and given in more detail in Table 3.5.
µP = 10kN ; σP = 1kN
µk = 150kN/m ; σk = 80kN/m (3.76)
It has to be underlined that the standard deviation of the stiffness is now four times larger than the value considered in the previous illustrations. The 3-D representation of the surface corresponding to the failure function is given in Fig. 3.4. The failure condition, i.e. the intersection of this failure function with the horizontal plane, is represented by two secant lines. More precisely, one of these lines is not really part of the failure condition because it corresponds to the vanishing of the denominator in the failure function (\hat{k}_{lim} = -\mu_k/\sigma_k). It is however treated as such numerically because the sign of the function changes from one side of this line to the other (+\infty for \hat{k} < \hat{k}_{lim}, -\infty for \hat{k} > \hat{k}_{lim}).
The successive iterates obtained with the usual procedure are represented on the left side. The initial point (represented by 0) is chosen by imposing the reliability index β = 3 and the direction α = (1/√2, 1/√2). This point lies beyond the singularity. It can be observed that the successive iterates lie on the same side of the line. Furthermore they diverge very fast.
Table 3.4: Representative parameters for the first five iterations (AFOSM, 4-variable problem)

It.   X̂1 = P̂   X̂2 = k̂   X̂3 = L̂   X̂4 = θ̂L   β        pf
1     −1.5000   −1.5000   −1.5000   −1.5000    1.8723   3.0579·10⁻²
2     +0.9165   −1.2984   −0.9165   −0.3739    1.9413   2.6111·10⁻²
3     +0.8099   −1.4256   −0.9733   −0.3650    1.9398   2.6205·10⁻²
4     +0.8059   −1.4342   −0.9651   −0.3535    1.9397   2.6205·10⁻²
5     +0.8057   −1.4353   −0.9636   −0.3532    1.9397   2.6205·10⁻²
Table 3.5: Detailed results for the first three iterations (AFOSM, 4-variable problem)

                       Iteration 1      Iteration 2      Iteration 3
Step 1   X̂1* = P̂      −1.5000          +0.9165          +0.8099
         X̂2* = k̂      −1.5000          −1.2984          −1.4256
         X̂3 = L̂       −1.5000          −0.9165          −0.9733
         X̂4 = θ̂L      −1.5000          −0.3739          −0.3650
         P             +8.5000          +10.917          +10.810
         k             +120.00          +124.03          +121.49
         L             +0.8500          +0.9083          +0.9027
         θL            +4.7000·10⁻²     +4.9252·10⁻²     +4.9270·10⁻²
Step 2   G             +5.3333·10⁻³     +8.0501·10⁻⁴     −1.6135·10⁻⁵
Step 3   ∂Ĝ/∂P̂       −4.9020·10⁻³     −4.4380·10⁻³     −4.5594·10⁻³
         ∂Ĝ/∂k̂       +6.9444·10⁻³     +7.8120·10⁻³     +8.1137·10⁻³
         ∂Ĝ/∂L̂       +4.9020·10⁻³     +5.3335·10⁻³     +5.4600·10⁻³
         ∂Ĝ/∂θ̂L      +2.0000·10⁻³     +2.0000·10⁻³     +2.0000·10⁻³
Step 4   α1            −0.4895          −0.4172          −0.4155
         α2            +0.6935          +0.7343          +0.7393
         α3            +0.4895          +0.5014          +0.4975
         α4            +0.1997          +0.1880          +0.1822
Step 5   μG            +1.8750·10⁻²     +2.0652·10⁻²     +2.1287·10⁻²
         σG            +1.0014·10⁻²     +1.0638·10⁻²     +1.0974·10⁻²
Step 6   β             +1.8723          +1.9413          +1.9398
         pf            +3.0579·10⁻²     +2.6111·10⁻²     +2.6205·10⁻²

Step 1 — Guess design point
Step 2 — Compute failure condition
Step 3 — Compute the gradient of the failure condition
Step 4 — Compute the orientation of the next design point
Step 5 — Compute the mean and standard deviation of the failure condition
Step 6 — Compute the reliability index and probability of failure
Figure 3.4: Example of divergent solution (left) and convergent solution (right)
On the other hand, on the right side, the initial point is chosen at the origin of the axes in the reduced space. This point corresponds to the mean physical variables. In this case, the convergence is reached as expected. This starting point is probably better than the former one because it always lies on the safe side of the failure condition. The problem is then much better conditioned and the risk of divergence is reduced. Table 3.6 summarizes the numerical values of some representative variables obtained during the five iterations (with both starting points). The upper part of this table, related to the divergent solution, clearly indicates the divergence.
Table 3.6: Divergent and convergent solutions

Divergent solution
It.   X̂1* = P̂   X̂2* = k̂    μG          σG          β          pf
1     −2.1213    −2.1213     2.025       0.8120      2.4943     6.3094·10⁻³
2     −0.0779    −2.4931     0.5557      0.1626      3.4176     3.1592·10⁻⁴
3     −0.2125    −3.4109     1.791·10⁻¹  2.625·10⁻²  6.8250     4.3960·10⁻¹²
4     −1.0581    −6.7425     7.874·10⁻²  2.686·10⁻³  29.3202    0.0000
5     −14.0180   −25.7521    5.148·10⁻²  2.654·10⁻⁴  193.9559   0.0000

Convergent solution
1     +0.0000    +0.0000     1.667·10⁻²  1.809·10⁻²  0.9214     1.7841·10⁻¹
2     +0.1698    −0.9057     4.679·10⁻²  6.795·10⁻²  0.6885     2.4556·10⁻¹
3     +0.0653    −0.6854     2.793·10⁻²  4.476·10⁻²  0.6240     2.6633·10⁻¹
4     +0.0732    −0.6196     2.497·10⁻²  4.026·10⁻²  0.6202     2.6757·10⁻¹
5     +0.0767    −0.6154     2.481·10⁻²  4.001·10⁻²  0.6202     2.6757·10⁻¹
Figure 3.5: Probability of failure obtained for various standard deviations of the stiffness
from which the (Eulerian) critical load Pcr = kL can be computed. In a probabilistic context the applied load Q is given in a probabilistic way and the verification of the stability of the column requires checking whether Q − Pcr ≥ 0. In this section, it is considered that the applied load Q and the structural stiffness k are random parameters and that the length of the compressed element is given in a deterministic way. The safety against failure is expressed by means of a failure function:

Z(Q, k) = kL - Q   (3.78)

but, contrarily to what is done in a reliability analysis, the aim of the more general probabilistic analysis is to determine the probability density function of this failure function (instead of the probability of failure associated with the failure condition Z = 0). The cumulative density function of the failure function is expressed by:
Considering all the possible values that Q could take, this relation can also be written:

F_Z(Z_0) = \int_{-\infty}^{+\infty} \mathrm{prob}(Q_0 < Q \le Q_0 + dQ_0)\,\mathrm{prob}(kL < Q_0 + Z_0 \mid Q_0 < Q \le Q_0 + dQ_0)\,dQ_0
         = \int_{-\infty}^{+\infty} p_Q(Q_0)\,F_{k|Q}\!\left(\frac{Q_0 + Z_0}{L}\,\Big|\,Q_0\right)dQ_0   (3.80)

where F_{k|Q}(k_0 | Q_0) represents a conditional cumulative density function, i.e. the probability that k is smaller than k_0 provided Q lies in the interval [Q_0; Q_0 + dQ_0]. The probability density function of Z can be computed by simple derivation of its cumulative distribution function:

p_Z(Z_0) = \frac{dF_Z}{dZ_0} = \frac{1}{L}\int_{-\infty}^{+\infty} p_Q(Q_0)\,p_{k|Q}\!\left(\frac{Q_0 + Z_0}{L}\,\Big|\,Q_0\right)dQ_0   (3.81)
The conditional probability density function introduced in this relation pk|Q (k0 | Q0 ) can be
expressed as a function of the joint and marginal probability density functions, thanks to the
factorization theorem:
p_Z(Z_0) = \frac{1}{L}\int_{-\infty}^{+\infty} p_Q(Q_0)\,\frac{p_{kQ}\!\left(\frac{Q_0+Z_0}{L}, Q_0\right)}{p_Q(Q_0)}\,dQ_0
         = \frac{1}{L}\int_{-\infty}^{+\infty} p_{kQ}\!\left(\frac{Q_0+Z_0}{L}, Q_0\right)dQ_0   (3.82)
This relation is valid for any joint probability function between k and Q. In particular, if it is supposed that these random variables are described by correlated gaussian variables, their joint probability density function is:

p_{kQ}(k_0, Q_0) = \frac{1}{2\pi\sigma_k\sigma_Q\sqrt{1-\rho^2}}\,\exp\left\{\frac{-1}{2(1-\rho^2)}\left[\left(\frac{k_0-\mu_k}{\sigma_k}\right)^2 - 2\rho\frac{(k_0-\mu_k)(Q_0-\mu_Q)}{\sigma_k\sigma_Q} + \left(\frac{Q_0-\mu_Q}{\sigma_Q}\right)^2\right]\right\}   (3.83)
The introduction of this relation into Eq. (3.82) results, after some computations, in:

p_Z(Z_0) = \frac{1}{\sqrt{2\pi}\,\sigma_Z}\,\exp\left[-\frac{1}{2}\left(\frac{Z_0-\mu_Z}{\sigma_Z}\right)^2\right]   (3.84)

where \mu_Z = L\mu_k - \mu_Q and \sigma_Z^2 = \sigma_Q^2 - 2\rho\sigma_Q L\sigma_k + L^2\sigma_k^2. This relation shows that the random variable Z is also a gaussian variable. This could have been expected in advance because Z is defined (Eq. (3.78)) as a linear combination of gaussian variables (correlated, indeed, but gaussianity is the only condition to be fulfilled for this property). Any subsequent information resulting from the probability density function can be estimated. For instance the cumulative density function of Z can be computed:

F_Z(Z_0) = \Phi\!\left(\frac{Z_0 - \mu_Z}{\sigma_Z}\right)   (3.85)
As a numerical example, the following values are used as an illustration:

\mu_k = 150 kN/m ; \sigma_k = 20 kN/m
\mu_Q = 100 kN ; \sigma_Q = 30 kN
L = 1 m ; \rho = -0.6   (3.86)

From these it is possible to compute \mu_Z = 50 and \sigma_Z = \sqrt{20^2 - 2\rho\cdot20\cdot30 + 30^2} = \sqrt{1300 - 1200\rho} = 44.94 (for \rho = -0.6). The probability density function and the corresponding cumulative density function are represented in Fig. 3.6. As justified above, the gaussianity of the resulting random variable can also be observed in this figure.
The probabilistic analysis as performed so far provides more information than the reliability analysis. The results of a reliability analysis can be recovered as a particular case. Indeed, by setting Z_0 = 0 in the previous expression of the cumulative density function, the probability of failure related to the failure condition is obtained:

p_f = F_Z(0) = \Phi\!\left(\frac{0 - \mu_Z}{\sigma_Z}\right) = \Phi(-1.1125) = 0.1330   (3.87)
The analytical developments indicate that the correlation coefficient affects the probability of
failure through the standard deviation of the failure function (σ Z ). Its influence on this standard
deviation is considerable. An idea of the range of variation is obtained by computing the standard
deviations obtained for the extreme values (ρ = −1, ρ = +1) of the correlation coefficient:
\rho = -1 \;\rightarrow\; \sigma_Z^2 = \sigma_Q^2 + 2\sigma_Q L\sigma_k + L^2\sigma_k^2 \;\rightarrow\; \sigma_Z = \sigma_Q + L\sigma_k
\rho = +1 \;\rightarrow\; \sigma_Z^2 = \sigma_Q^2 - 2\sigma_Q L\sigma_k + L^2\sigma_k^2 \;\rightarrow\; \sigma_Z = |\sigma_Q - L\sigma_k|   (3.88)
Figure 3.6: Probability density function and cumulative density function of the failure function
(numerical application Eq. (3.86), ρ = −0.6)
Figure 3.7: Probability of failure resulting from the probabilistic analysis (analytical developments)
- Influence of the correlation coefficient
Since the standard deviation of Z appears in the denominator of the argument of the Φ function (Eq. (3.87)), its influence on the resulting probability of failure is really significant. Fig. 3.7 represents the effect of the correlation coefficient on the probability of failure:

p_f(\rho) = \Phi\!\left(\frac{-\mu_Z}{\sigma_Z(\rho)}\right) = \Phi\!\left(\frac{-50}{\sqrt{1300 - 1200\rho}}\right)
This function starts from approximately p_f = 0.16 for ρ = −1, decreases almost uniformly until ρ = +0.9 and then much faster from ρ = +0.9 to ρ = +1.0 (see the logarithmic scale on the right side of the figure). The logarithmic representation of this function shows that a small error on the correlation coefficient affects significantly the estimated probability of failure (in the range ρ ∈ [0.9; 1.0]). For this reason a lot of effort is nowadays put into a more precise estimation of the correlation coefficients involved in many civil engineering applications.
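For illustration, this sensitivity can be reproduced with a couple of MATLAB lines (a sketch of the analytical result above, not part of the original illustrations):

% Probability of failure as a function of the correlation coefficient,
% for muZ = 50, sigmaQ = 30, L*sigmak = 20 (Eq. (3.87)).
rho    = linspace(-1, 1, 201);
sigmaZ = sqrt(30^2 - 2*rho*30*20 + 20^2);        % standard deviation of Z
pf     = erfc((50./sigmaZ)/sqrt(2))/2;           % pf = Phi(-muZ/sigmaZ)
semilogy(rho, pf), xlabel('\rho'), ylabel('p_f') % compare with Fig. 3.7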
It is also interesting to notice, in this simple application, that the decrease of the probability of failure (as a function of the correlation coefficient) can be easily explained. Indeed a negative correlation coefficient means that the random variables Q and k deviate more often in opposite directions (compared to their respective mean values) than in the same direction. In other words, if Q is smaller than its mean value, then it is very likely that k is larger than its mean value, and conversely. This
indicates that the probability of having a large Q and a small k simultaneously is high, and hence that the probability of failure is high. On the contrary, if the correlation coefficient is positive, the likelihood of having Q and k both larger than their corresponding mean values is higher. It is then easy to understand that the probability of failure is reduced in this case.
which will be advantageously used in the following developments. The failure condition is
accordingly transformed:
\tilde{Z}(\tilde{Q}, \tilde{k}) = (\mu_k + \sigma_k\tilde{k})L - (\mu_Q + \sigma_Q\tilde{Q})   (3.93)

The mathematical expectation of Eq. (3.91) shows that the new reduced variables have a zero mean. The computation of their covariance matrix leads to:

[\tilde{V}_{\tilde{Q}\tilde{k}}] = [\,1 \;\; \rho \;;\; \rho \;\; 1\,]   (3.94)

which shows that they have also a unit variance but the same correlation coefficient as the physical variables. This indicates the need for another reduction aiming at providing uncorrelated variables. The theoretical developments of section 2.3 show that this reduction can be realized by means of the matrix [a] defined in such a way that:

[a][\tilde{V}_{\tilde{Q}\tilde{k}}][a]^T = [I]   (3.95)

The eigen values of [\tilde{V}_{\tilde{Q}\tilde{k}}] and the corresponding eigen vectors are:

v_1 = 1 + \rho \;;\; v_2 = 1 - \rho   (3.96)
V_1 = \{1 ; 1\} \;;\; V_2 = \{1 ; -1\}   (3.97)
After having normalized these eigen vectors properly, the evaluation of matrix [a] is then straightforward:

[a] = \frac{1}{\sqrt{2}}\left[\frac{1}{\sqrt{1+\rho}} \;\; \frac{1}{\sqrt{1+\rho}} \;;\; \frac{1}{\sqrt{1-\rho}} \;\; \frac{-1}{\sqrt{1-\rho}}\right] \quad\Leftrightarrow\quad [a]^{-1} = \frac{1}{\sqrt{2}}\left[\sqrt{1+\rho} \;\; \sqrt{1-\rho} \;;\; \sqrt{1+\rho} \;\; -\sqrt{1-\rho}\right]   (3.98)
It can be checked that the injection of this expression of [a] into Eq. (3.95) satisfies trivially
the relation. The second reduction leads finally to the definition of new reduced variables:
\{\hat{x}\} = [a]\{\tilde{x}\} = \underbrace{[a][b]}_{[A]}(\{x\} - \{\mu_x\})   (3.99)
which have now the required properties to be uncorrelated, with zero-mean and unit-variance.
The computation of the reliability index can thus now be performed. By recasting both succes-
sive reductions together, the global reduction matrix [A] is introduced (as in section 2.3). The
expression of the failure function with the new variables is:
à √ ! à !
b + √1 − ρb
1 + ρQ k
√ b − √1 − ρb
1 + ρQ k
b b b
Z(Q, k) = µk + σk √ L − µQ + σQ √ (3.100)
2 2
This reduced failure function is still linear. After some reorganisation this more usual form is
obtained: r r
b b b 1+ρ b 1 − ρb
Z(Q, k) = µk L − µQ + (σ k L − σ Q ) Q + (σ k L + σ Q ) k (3.101)
2 2
The failure condition, obtained by imposing a zero value to this function is then a plane. Its
equation can be written in the form:
T
{a} {x} = a0 (3.102)
where
⎧ q ⎫
⎨ (σ k L − σ Q ) 1+ρ ⎬
{a} = q 2
⎩ (σ L + σ ) 1−ρ ⎭
k Q 2
a0 = µQ − µk L (3.103)
The shortest distance β from the origin to a plane with the generic equation given by (3.102)
can be shown to be β = |a|a|0 | . This is exactly the relation required for the estimation of the
reliability index:
¯ ¯ ¯ ¯
¯µQ − µk L¯ ¯µQ − µk L¯
β=q =q (3.104)
1+ρ 2 1−ρ 2 2 − 2ρσ Lσ + L2 σ 2
2 (σ k L − σ Q ) + 2 (σ k L + σ Q ) σ Q Q k k
which is strictly equivalent to the result of the probabilistic approach (Eq. (3.87)). If the numerical values given in Eq. (3.86) are considered, the reliability index and the probability of failure are:

\beta = \frac{|100 - 150|}{44.94} = 1.1125 \;;\; p_f = \Phi(-\beta) = 0.1330
Z(k, Q) = kL - Q   (3.107)

where Q and k are correlated gaussian variables, characterized by their covariance matrix:

[V_{kQ}] = [\,\sigma_k^2 \;\; \rho\sigma_k\sigma_Q \;;\; \rho\sigma_Q\sigma_k \;\; \sigma_Q^2\,]   (3.108)
The mean values and standard deviations of both random variables are given in Eq. (3.86).
Step 0.1: The first step consists in defining the reduced variables:

\{\hat{x}\} = [A](\{x\} - \{\mu_x\}) \quad\Leftrightarrow\quad \{x\} = \{\mu_x\} + [A]^{-1}\{\hat{x}\}   (3.109)

where [A] is such that [A][V_{kQ}][A]^T = [I], i.e. it is related to the eigen vectors of the covariance matrix. They are computed as follows: first the eigen values are computed as the roots of det([V_{kQ}] - \lambda[I]):

[V_{kQ}] = [\,20^2 \;\; -0.6\cdot20\cdot30 \;;\; -0.6\cdot20\cdot30 \;\; 30^2\,] = [\,400 \;\; -360 \;;\; -360 \;\; 900\,]   (3.110)

det([V_{kQ}] - \lambda[I]) = det[\,400-\lambda \;\; -360 \;;\; -360 \;\; 900-\lambda\,] = 0
\Leftrightarrow \lambda^2 - 1300\lambda + 230400 = 0
\Leftrightarrow \lambda_1 = 211.71 \;\mathrm{or}\; \lambda_2 = 1088.29   (3.111)

Then for each eigen value the corresponding eigen vector is obtained by:

([V_{kQ}] - \lambda_1[I])\{x_1\} = 0 \Leftrightarrow [\,400-211.71 \;\; -360 \;;\; -360 \;\; 900-211.71\,]\{x_1\} = 0 \Rightarrow \{x_1\} = a_1\{1.9119 ; 1\}
([V_{kQ}] - \lambda_2[I])\{x_2\} = 0 \Leftrightarrow [\,400-1088.29 \;\; -360 \;;\; -360 \;\; 900-1088.29\,]\{x_2\} = 0 \Rightarrow \{x_2\} = a_2\{-0.5230 ; 1\}

where the constants a_1 and a_2 can be chosen arbitrarily (except equal to zero). Both eigen vectors can be placed side-by-side into one matrix [X] = [\{x_1\}, \{x_2\}]. The main property of the eigen vectors (and hence of this eigen matrix) is to diagonalize the covariance matrix:

[X]^T[V_{kQ}][X] = [\,985.6\,a_1^2 \;\; 0 \;;\; 0 \;\; 1386\,a_2^2\,]   (3.112)

If a_1 and a_2 are chosen in such a way as to provide unit elements on the diagonal of this matrix, i.e. for example a_1 = -0.0319 and a_2 = 0.0269, then it can be checked that the requested matrix [A] is given by:

[A] = [X]^T = [\,1.9119\cdot(-0.0319) \;\; -0.5230\cdot0.0269 \;;\; -0.0319 \;\; 0.0269\,]^T = [\,-0.0609 \;\; -0.0319 \;;\; -0.0140 \;\; 0.0269\,]   (3.113)
This first step is not effortless but fortunately it must be performed only once, because the iterative method loops over steps 2 to 7. The inverse of matrix [A] is useful for the computation of the inverse relation and must also be computed only once:

[A]^{-1} = [\,-12.893 \;\; -15.290 \;;\; -6.743 \;\; 29.232\,]   (3.114)
The reduced variables are finally expressed by:

\{\hat{x}\} = [A](\{x\} - \{\mu_x\})
\{\hat{k} ; \hat{Q}\} = [\,-0.0609 \;\; -0.0319 \;;\; -0.0140 \;\; 0.0269\,]\left(\{k ; Q\} - \{150 ; 100\}\right)
\Leftrightarrow \{k ; Q\} = \{150 ; 100\} + [\,-12.893 \;\; -15.290 \;;\; -6.743 \;\; 29.232\,]\{\hat{k} ; \hat{Q}\}   (3.115)

where [A] is such that [A][V_x][A]^T = [I] and [V_x] is the covariance matrix of the physical variables \{x\}.
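In practice this reduction is rarely carried out by hand. A possible MATLAB sketch based on the built-in eigen decomposition is given below; it produces a valid reduction matrix, although not necessarily with the same signs and ordering as in Eq. (3.113), since the eigen vectors are defined up to a constant.

% Reduction matrix [A] such that A*Vx*A' = I, from the eigen decomposition
% of the covariance matrix (buckling example, rho = -0.6).
Vx = [400 -360; -360 900];      % covariance matrix of (k, Q), Eq. (3.110)
[V, D] = eig(Vx);               % columns of V: eigen vectors, D: eigen values
A = (V*D^(-1/2))';              % scaling by 1/sqrt(lambda_i), then transposition
disp(A*Vx*A')                   % check: 2-by-2 identity matrix (up to round-off)
invA = inv(A);                  % maps the reduced variables back to the physical ones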
Step 0.2: Write the failure condition with the reduced variables:

\hat{Z}(\{\hat{x}\}) = kL - Q = (150 - 12.893\hat{k} - 15.290\hat{Q})L - (100 - 6.743\hat{k} + 29.232\hat{Q})
                     = 50 - 6.1496\hat{k} - 44.522\hat{Q}   (3.116)
Step 1: Now that the preliminary steps related to the reduction are done, the iterative procedure can start. An initial guess of the design point \hat{x}_i^{(0)} has to be provided. In this analysis the mean physical variables (i.e. \hat{Q} = 0, \hat{k} = 0) are considered.
Step 2: The reduced failure condition at this point is:

\hat{Z}^{(0)} = 50   (3.117)

which indicates that this guess does not lie on the failure condition; the design point thus needs to be estimated more accurately.
Step 3: The gradient of the reduced failure function can be computed in closed form:

n_1^{(0)} = \frac{\partial \hat{Z}(\{\hat{x}\})}{\partial \hat{k}} = -6.1496
n_2^{(0)} = \frac{\partial \hat{Z}(\{\hat{x}\})}{\partial \hat{Q}} = -44.522   (3.118)
In this particular example, they are constant because the failure function is linear.
Step 4: The orientation corresponding to the next design point is given by:

\alpha_1^{(0)} = \frac{n_1^{(0)}}{|n^{(0)}|} = \frac{-6.1496}{44.9447} = -0.1368
\alpha_2^{(0)} = \frac{n_2^{(0)}}{|n^{(0)}|} = \frac{-44.522}{44.9447} = -0.9906   (3.119)
Step 5: The estimated mean and standard deviation of the failure condition are:

\mu_G = \hat{G}^{(0)} - \sum_{i=1}^{N} n_i^{(0)} \hat{x}_i^{(0)} = 50
\sigma_G = |n^{(0)}| = 44.9447   (3.120)
Step 6: Finally the new estimation of the reliability index is obtained by:

\beta^{(0)} = \frac{\mu_G}{\sigma_G} = \frac{50}{44.9447} = 1.1125   (3.121)

Step 7: Compute a better estimation of the design point:

\hat{k}^{(1)} = -\alpha_1^{(0)}\beta^{(0)} = -(-0.1368)\cdot1.1125 = 0.1522
\hat{Q}^{(1)} = -\alpha_2^{(0)}\beta^{(0)} = -(-0.9906)\cdot1.1125 = 1.1020   (3.122)
Since the failure function is linear, no iteration is actually required (see Section 3.1.5). Indeed it could be checked that going back to step 2 would indicate that this new design point lies on the failure condition. Furthermore the direction of the gradient is the same as the direction obtained at the previous step. As a conclusion it can be said that the application of the AFOSMC algorithm is exactly the same as that of the AFOSM algorithm, except for step 0.2, which requires a little more computation for the former.
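As an aside, the whole computation can also be delegated to the AFOSMC program listed in Appendix A.1. A hypothetical call for the present example (ρ = −0.6, failure function kL − Q with L = 1 m) could look as follows, assuming the interface shown in the appendix:

% Hypothetical call to the AFOSMC routine of Appendix A.1 for this example.
mX   = [150 100];                                   % mean values of (k, Q)
sX   = [ 20  30];                                   % standard deviations
r    = -0.6;                                        % correlation coefficient
covX = [sX(1)^2 r*sX(1)*sX(2); r*sX(1)*sX(2) sX(2)^2];
GFct = @(x) x(1)*1 - x(2);                          % Z(k,Q) = kL - Q with L = 1
[Beta, X, XRed, G, GrRed, Alfa, PrFail, MeanG, StdG] = AFOSMC(GFct, mX, covX);
% Beta is expected to converge to 1.1125 and PrFail to 0.1330.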
The same parametric study concerning the influence of the correlation coefficient on the reliability index and the probability of failure can be performed. Figure 3.8 illustrates the results obtained with the AFOSMC algorithm for several correlation coefficients. In the physical space (left subfigures) the same failure function is considered in each case. It is represented by the uniformly distributed parallel lines. The joint probability density function of the variables x1 = k and x2 = Q is also represented by level curves. They are concentric ellipses, inclined to the left for negative correlation coefficients and to the right for positive values. In each diagram the probability of failure is represented by the probability mass under this joint probability density function, behind the failure condition (represented by the thick line). The evolution from the diagram corresponding to ρ = −0.9 to the diagram corresponding to ρ = +0.9 shows clearly that the probability of failure decreases with increasing correlation coefficients. In the reduced space (square subfigures) the reduced failure condition is different in each case. Indeed it depends on the reduction matrix ([A]), which is a function of the correlation coefficient. These diagrams also indicate the decrease of the reliability index (and hence of the probability of failure) with increasing correlation coefficients.
Fig. 3.9 is a copy of Fig.3.7 in which dots have been added in order to represent the results
obtained with the numerical iterative method (AFOSMC). It shows that both methods provide
exactly the same results.
from which the (Eulerian) critical load Pcr = kL can be computed. In a probabilistic context the quantities involved in this relation (k, L) are defined by means of random variables. The probabilistic analysis of the structure requires determining, given the statistical characteristics of k and L, the same information about the critical load Pcr. The computation of this general information is a probabilistic analysis. In this simple example, the major part of this analysis can be performed in an analytical way. These developments are presented in this section and will be compared in the next section to analytical and numerical reliability analyses.
It is supposed that the stiffness and the beam length are represented by correlated gaussian random variables characterized by their mean values (μk, μL), their standard deviations (σk, σL) and the correlation coefficient ρ. The cumulative distribution function of the Eulerian critical load is expressed by:
Figure 3.8: Illustration of the influence of the correlation coefficient on the probability of failure -
Results of the reliability analysis (AFOSMC)
Figure 3.9: Probability of failure resulting from the probabilistic analysis (analytical developments,
continuous line) and from the reliability analysis (numerical iterative method, dots) - Influence of
the correlation coefficient
Considering all possible values that k could take, this relation can also be written:

F_P(P_0) = \int_{-\infty}^{+\infty} \mathrm{prob}(k_0 < k \le k_0 + dk_0)\,\mathrm{prob}(k_0 L < P_0 \mid k_0 < k \le k_0 + dk_0)\,dk_0
         = \int_{-\infty}^{+\infty} p_k(k_0)\,F_{L|k}\!\left(\frac{P_0}{k_0}\,\Big|\,k_0\right)dk_0

where F_{L|k}(L_0 | k_0) represents a conditional cumulative density function, i.e. the probability that L is smaller than L_0 provided k lies in the interval [k_0; k_0 + dk_0]. The probability density function of P can be computed by simple derivation of its cumulative distribution function:

p_P(P_0) = \frac{dF_P}{dP_0} = \int_{-\infty}^{+\infty} \frac{p_k(k_0)}{k_0}\,p_{L|k}\!\left(\frac{P_0}{k_0}\,\Big|\,k_0\right)dk_0
         = \int_{-\infty}^{+\infty} \frac{p_k(k_0)}{k_0}\,\frac{p_{Lk}\!\left(\frac{P_0}{k_0}, k_0\right)}{p_k(k_0)}\,dk_0
         = \int_{-\infty}^{+\infty} \frac{p_{Lk}\!\left(\frac{P_0}{k_0}, k_0\right)}{k_0}\,dk_0   (3.125)
where pLk (L0 , k0 ) is the joint probability density function between L and k. This relation is
valid for any joint probability function between k and L. For this application, it is supposed that
these random variables are described by correlated gaussian variables, then:
p_{Lk}(L_0, k_0) = \frac{1}{2\pi\sigma_L\sigma_k\sqrt{1-\rho^2}}\,\exp\left\{\frac{-1}{2(1-\rho^2)}\left[\left(\frac{L_0-\mu_L}{\sigma_L}\right)^2 - 2\rho\frac{(L_0-\mu_L)(k_0-\mu_k)}{\sigma_L\sigma_k} + \left(\frac{k_0-\mu_k}{\sigma_k}\right)^2\right]\right\}   (3.126)
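As a practical note, the integral of Eq. (3.125) generally has to be evaluated numerically. A minimal MATLAB sketch is given below; it assumes the numerical values used later in Eq. (3.130) (μk = 150, μL = 1, σk = 20, σL = 0.1, ρ = 0.6) and evaluates the density of the critical load at one arbitrary abscissa.

% Numerical evaluation of Eq. (3.125): pdf of Pcr = kL for correlated
% gaussian k and L (values assumed from Eq. (3.130)).
muk = 150; sigk = 20; muL = 1; sigL = 0.1; rho = 0.6;
pLk = @(L0,k0) 1./(2*pi*sigL*sigk*sqrt(1-rho^2)) .* ...
      exp(-1/(2*(1-rho^2)) * ( ((L0-muL)/sigL).^2 ...
          - 2*rho*(L0-muL).*(k0-muk)/(sigL*sigk) + ((k0-muk)/sigk).^2 ));
P0 = 150;                                              % abscissa where the pdf is evaluated
pP = integral(@(k0) pLk(P0./k0, k0)./k0, 50, 300);     % Eq. (3.125), finite integration bounds
fprintf('p_P(%g) = %.4e\n', P0, pP);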
Figure 3.10: Probability density function and cumulative density function of the eulerian critical
load (Pcr = kL) - Application of the analytical developments
The computation of the reliability index requires the reduction of the physical variables to zero-mean, unit-variance, uncorrelated variables. This operation has to be performed exactly as in Section 3.2.2. It is convenient to introduce two successive reductions:

\{\hat{x}\} = [a]\{\tilde{x}\} = \underbrace{[a][b]}_{[A]}(\{x\} - \{\mu_x\})   (3.134)

where

[a] = \frac{1}{\sqrt{2}}\left[\frac{1}{\sqrt{1+\rho}} \;\; \frac{1}{\sqrt{1+\rho}} \;;\; \frac{1}{\sqrt{1-\rho}} \;\; \frac{-1}{\sqrt{1-\rho}}\right] \;;\; [b] = \left[\frac{1}{\sigma_k} \;\; 0 \;;\; 0 \;\; \frac{1}{\sigma_L}\right]   (3.135)
After some modifications, this reduced failure condition can also be written:

\hat{Z}(\hat{k}, \hat{L}) = \sigma_k\sigma_L\left(\frac{1+\rho}{2}\hat{k}^2 - \frac{1-\rho}{2}\hat{L}^2\right) + (\mu_L\sigma_k + \mu_k\sigma_L)\sqrt{\frac{1+\rho}{2}}\,\hat{k} + (\mu_L\sigma_k - \mu_k\sigma_L)\sqrt{\frac{1-\rho}{2}}\,\hat{L} + \mu_k\mu_L - Q   (3.138)
In this example the failure condition is non-linear². More precisely, Eq. (3.138) indicates that the failure condition is a branch of hyperbola. For this reason the reliability index can be computed but cannot be directly linked to the probability of failure. Furthermore, when the failure condition is not linear, the location of the closest point to the origin is much more complex to compute. For the sake of simplicity in the notations, let Eq. (3.138) be rewritten:

\hat{Z}(\hat{k}, \hat{L}) = a_1\hat{k}^2 + a_2\hat{L}^2 + a_3\hat{k} + a_4\hat{L} + a_5   (3.139)

² This was already introduced in the previous section.
The reliability index is found by locating the point lying on this curve and situated the closest to the origin. This condition requires finding the minimum of:

\beta^2(\hat{k}, \hat{L}) = \hat{k}^2 + \hat{L}^2   (3.141)

Following Lagrange's theory, the position of this design point is obtained when the gradients of \hat{Z}(\hat{k}, \hat{L}) and \beta^2(\hat{k}, \hat{L}) line up:

\vec{\nabla}\hat{Z}(\hat{k}, \hat{L}) = \{2a_1\hat{k} + a_3 ; 2a_2\hat{L} + a_4\} = \lambda\vec{\nabla}\beta^2(\hat{k}, \hat{L}) = \lambda\{2\hat{k} ; 2\hat{L}\}   (3.142)

The design point (indicated by the symbol *) is then such that the following relations are fulfilled:

\hat{k}^* = \frac{a_3}{2(\lambda - a_1)}   (3.143)
\hat{L}^* = \frac{a_4}{2(\lambda - a_2)}   (3.144)
Because the design point must also lie on the failure condition, these relations can be injected into Eq. (3.139), leading then to:

a_1\left(\frac{a_3}{2(\lambda - a_1)}\right)^2 + a_2\left(\frac{a_4}{2(\lambda - a_2)}\right)^2 + \frac{a_3^2}{2(\lambda - a_1)} + \frac{a_4^2}{2(\lambda - a_2)} + a_5 = 0   (3.145)

The explicit resolution of this equation is not easy but it can be performed with many usual solvers (after having rearranged this relation into a polynomial form). The resolution of this relation leads to the value of λ which gives, after replacement into Eqs. (3.143) and (3.144), the location of the reduced design point. The estimation of the reliability index β is then straightforward. A probability of failure can be computed but it should not be forgotten that this probability of failure is erroneous because the failure function is non-linear.
As a numerical application, the values given in Eqs. (3.130) are considered. Their introduction
into Eq. (3.138) leads to:
a1 = 1.6 ; a2 = −0.4
a3 = 31.305 ; a4 = 2.23607 (3.146)
a5 = 60
The resolution of Eq. (3.145) gives λ = −5.73545, which in turn allows computing the position of the reduced design point:

\hat{k}^* = \frac{a_3}{2(\lambda - a_1)} = -2.13381   (3.147)
\hat{L}^* = \frac{a_4}{2(\lambda - a_2)} = -0.209548   (3.148)

The reliability index is estimated now as the distance from this point to the origin:

\beta = \sqrt{\hat{k}^{*2} + \hat{L}^{*2}} = 2.1441   (3.149)
This value is slightly different from the exact result obtained in the previous paragraph (pf = 0.016676). The deviation from the exact result is however quite small. This is due to the slight curvature of the failure condition (as illustrated in the next section). Finally, as an informative indication, the physical design variables can be computed. Their estimation requires first the computation of the transformation matrix [A] (Eq. (3.136)). After some computation, this leads to:

k = 109.96
L = 0.81851   (3.151)

It can be checked that kL is equal to 90. Among all the combinations (k, L) such that kL is equal to 90 (e.g. 90·1 = 90, 60·1.5 = 90, 30·3 = 90, 900·0.1 = 90, ...), this combination of both parameters is the most probable (for the considered probability distributions of k and L, i.e. μk = 150, μL = 1, σk = 20, σL = 0.1, ρ = 0.6).
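The search for λ can obviously be bypassed by any constrained optimisation tool. The MATLAB sketch below (an illustration only, assuming the Optimization Toolbox function fmincon is available) minimises k̂² + L̂² under the constraint Ẑ = 0, with the coefficients of Eq. (3.146):

% Design point as the closest point of the reduced failure condition to the
% origin, using a generic constrained minimisation (coefficients of Eq. (3.146)).
a = [1.6 -0.4 31.305 2.23607 60];
Zred = @(x) a(1)*x(1)^2 + a(2)*x(2)^2 + a(3)*x(1) + a(4)*x(2) + a(5);
nonlcon = @(x) deal([], Zred(x));          % equality constraint Zred(x) = 0
xstar = fmincon(@(x) x(1)^2 + x(2)^2, [0 0], [],[],[],[],[],[], nonlcon);
beta = norm(xstar);                        % expected close to 2.1441, Eq. (3.149)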
where [A] is such that [A][V_x][A]^T = [I] and [V_x] is the covariance matrix of the physical variables \{x\}. In this example, [V_x] is equal to:

[V_x] = [\,\sigma_k^2 \;\; \rho\sigma_k\sigma_L \;;\; \rho\sigma_k\sigma_L \;\; \sigma_L^2\,] = [\,400 \;\; 1.2 \;;\; 1.2 \;\; 0.01\,]   (3.153)
They are such that [V_x][V] = [V][D]. It is easy to see that the transformation matrix [A] can be computed as:

[A] = \left([V][D]^{-1/2}\right)^T = [\,0.0375 \;\; -12.5 \;;\; -0.05 \;\; -0.00015\,]   (3.155)
where T denotes matrix transposition. It can effectively be checked that [A][V_x][A]^T is the 2-by-2 identity matrix. This transformation matrix can be computed once and for all. It will be used as such for the subsequent iterations. The relation between the physical variables and the reduced ones is then:
\hat{k} = 0.0375k - 12.5L + 6.875
\hat{L} = -0.05k - 0.00015L + 7.5   (3.156)

k = 0.00024\hat{k} - 20\hat{L} + 150
L = -0.08\hat{k} - 0.06\hat{L} + 1   (3.157)
Step 0.2: The second preliminary step consists in computing the reduced failure condition. The introduction of Eq. (3.157) into the failure condition leads to:

\hat{G}(\hat{k}, \hat{L}) = (0.00024\hat{k} - 20\hat{L} + 150)(-0.08\hat{k} - 0.06\hat{L} + 1) - 90
                          = -1.92\cdot10^{-5}\hat{k}^2 + 1.2\hat{L}^2 + 1.6000144\,\hat{k}\hat{L} - 12.00024\,\hat{k} - 29\hat{L} + 60   (3.158)
Step 1: The first guess of the design point is supposed to correspond to the mean physical variables:

\hat{x}^{(0)} = (0, 0), i.e. x^{(0)} = (150, 1)   (3.159)

This point is represented in Fig. 3.11. This figure also represents the level curves of the joint probability density function of k and L (left side, physical space) and of \hat{k} and \hat{L} (right side, reduced space). Because the physical variables are correlated, the principal axes of the joint probability density function are inclined; the reduction removes this inclination. The level curves of the (possibly reduced) failure function are also represented. The failure condition is the particular curve corresponding to the zero level. It is non-linear in this case, as indicated by the previous relations. The representation in the reduced space (with square axes) shows that the non-linearity of the failure condition is not so strong, and hence that the application of a first-order reliability analysis could lead to an acceptable result.
Figure 3.11: Illustration of the AFOSMC with a non linear failure condition
Step 2: This point clearly does not lie on the failure condition. Indeed the reduced failure condition at this point is equal to:

\hat{G}^{(0)}(0, 0) = 60   (3.160)
Step 3: The next step consists in computing the gradient of the reduced failure function at this point, i.e.:

\frac{\partial \hat{G}}{\partial \hat{k}} = -3.84\cdot10^{-5}\hat{k} + 1.6000144\,\hat{L} - 12.00024   (3.161)
\frac{\partial \hat{G}}{\partial \hat{L}} = 2.4\hat{L} + 1.6000144\,\hat{k} - 29   (3.162)

n_1^{(0)} = \frac{\partial \hat{G}^{(0)}}{\partial \hat{k}} = -12.00024 \;;\; n_2^{(0)} = \frac{\partial \hat{G}^{(0)}}{\partial \hat{L}} = -29   (3.163)

This vector is indicated by the small arrow at \hat{x}^{(0)} on Fig. 3.11 (right side). As it is the gradient of the reduced failure function, it is perpendicular to its level curve at \hat{x}^{(0)}.
Step 4: This vector can be used to define a new direction, i.e. a unit vector:

|n^{(0)}| = \sqrt{(-12.00024)^2 + (-29)^2} = 31.385   (3.164)

\{\alpha^{(0)}\} = \frac{\{n^{(0)}\}}{|n^{(0)}|} = \left(\frac{-12.00024}{31.385}, \frac{-29}{31.385}\right) = (-0.3823, -0.9240)   (3.165)

which is used to find the position of the next iterate.
Step 5: The approximate reliability index is obtained by estimating the mean of the failure condition and its standard deviation:

\mu_G = \hat{G}^{(0)} - \sum_{i=1}^{N} n_i^{(0)} \hat{x}_i^{(0)} = 60   (3.166)
\sigma_G = \alpha_1\frac{\partial \hat{G}}{\partial \hat{k}} + \alpha_2\frac{\partial \hat{G}}{\partial \hat{L}} = |n^{(0)}| = 31.385   (3.167)
Step 6: Their ratio simply gives:

\beta^{(0)} = \frac{\mu_G}{\sigma_G} = 1.9118   (3.168)

An estimation of the probability of failure can be provided as well:

p_f = \Phi(-\beta^{(0)}) = 2.7954\cdot10^{-2}   (3.169)
Step 7: Finally the new approximation of the design point is obtained by:

\{\hat{x}^{(1)}\} = -\{\alpha^{(0)}\}\beta^{(0)} = -(-0.3823, -0.9240)\cdot1.9118   (3.170)
                  = (0.7309, 1.7665) \equiv (\hat{k}^{(1)}, \hat{L}^{(1)})   (3.171)
This point is represented by the cross and point 1 on Fig. 3.11. As in the previous illustrations the iterations can now start. Figure 3.11 shows that the convergence is achieved quite fast. This is also a consequence of the slight non-linearity of the failure condition. Tables 3.7 and 3.8 provide summarized and detailed results of this numerical application, respectively. The results obtained with the numerical procedure and the analytical developments of the previous section
do not coincide perfectly because the transformation matrix was not chosen exactly in the same way (it is not unique!). However, the resulting design point (Eq. (3.151)) is identical for both methods. The reliability indices and the probabilities of failure are also exactly the same.
It is also interesting to notice that the result of the first iteration corresponds to the application of the simple FOSM method. Its application leads to a significantly wrong probability of failure (pf = 2.7954·10⁻²).
For information, a crude Monte Carlo simulation has been applied twice to this problem with
N = 100000 samples. The statistical characteristics obtained for the failure condition are (in both
cases):
µG = 61.2599 61.2086
σG = 31.4826 31.4915
γ 3G = 0.29944 0.30117
γ 4G = 0.12329 0.12303
pf = 0.016902 0.016916
which indicates that this number of samples seems large enough to produce a reliable result. This probability of failure is in very good accordance with the result obtained with the probabilistic approach, pf = 1.6676·10⁻² (section 3.2.4). The probability of failure obtained with the AFOSMC method (pf = 1.6013·10⁻²) is slightly lower because the curvature of the failure function in the reduced space is turned towards the origin.
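Such a crude Monte Carlo estimation can be sketched as follows (an illustration only; the correlated gaussian couples are generated here with a Cholesky factorization of the covariance matrix):

% Crude Monte Carlo sketch for the failure condition G(k,L) = kL - 90 with
% correlated gaussian variables (muk = 150, sigk = 20, muL = 1, sigL = 0.1, rho = 0.6).
N    = 1e5;
covX = [20^2 0.6*20*0.1; 0.6*20*0.1 0.1^2];
xi   = randn(N,2)*chol(covX);             % zero-mean correlated samples
k    = 150 + xi(:,1);
L    = 1   + xi(:,2);
G    = k.*L - 90;
fprintf('muG = %.4f, sigG = %.4f, pf = %.5f\n', mean(G), std(G), mean(G < 0));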
Table 3.7: Representative parameters of the iterations (AFOSMC, buckling model)

It.   X̂1* = k̂   X̂2* = L̂   μG      σG      β        pf
1     +0.0000    +0.0000    60.00   31.38   1.9118   2.7954·10⁻²
2     +0.7309    +1.7665    54.19   25.31   2.1409   1.6142·10⁻²
3     +0.7759    +1.9953    52.75   24.60   2.1441   1.6013·10⁻²
4     +0.7676    +2.0020    52.73   24.59   2.1441   1.6013·10⁻²
Table 3.8: Detailed results for the first three iterations (AFOSMC, buckling model)

                       Iteration 1      Iteration 2      Iteration 3
Step 1   X̂1* = k̂      +0.0000          +0.7309          +0.7759
         X̂2* = L̂      +0.0000          +1.7665          +1.9953
         k             150.00           114.67           110.09
         L             1.0000           0.83553          0.81821
Step 2   G             +60.000          +5.8106          +7.9291·10⁻²
Step 3   ∂Ĝ/∂k̂       −12.000          −9.1733          −8.8072
         ∂Ĝ/∂L̂       −29.000          −23.591          −22.970
Step 4   α1            −0.3823          −0.3624          −0.3580
         α2            −0.9240          −0.9320          −0.9337
Step 5   μG            +60.000          +54.189          +52.745
         σG            +31.385          +25.312          +24.600
Step 6   β             +1.9118          +2.1409          +2.1441
         pf            +2.7954·10⁻²     +1.6142·10⁻²     +1.6013·10⁻²

Step 1 — Guess design point
Step 2 — Compute failure condition
Step 3 — Compute the gradient of the failure condition
Step 4 — Compute the orientation of the next design point
Step 5 — Compute the mean and standard deviation of the failure condition
Step 6 — Compute the reliability index and probability of failure
F_\omega(\omega_L) = \mathrm{prob}(\omega < \omega_L)
                   = \int_{-\infty}^{+\infty} \mathrm{prob}(m_0 < m < m_0 + dm_0)\,\mathrm{prob}\!\left(2\sqrt{\frac{k}{m_0}} < \omega_L\right)dm_0
                   = \int_{-\infty}^{+\infty} p_m(m_0)\,F_k\!\left(\frac{m_0\omega_L^2}{4}\right)dm_0   (3.178)

The corresponding probability density function is obtained by simple derivation with respect to ω. By considering that p_m(m_0) is equal to 0 outside [a_m, b_m], the probability density function of the circular frequency is expressed by:

p_\omega(\omega) = \frac{dF_\omega(\omega)}{d\omega} = \frac{\omega}{2(b_m - a_m)}\int_{a_m}^{b_m} m_0\,p_k\!\left(\frac{m_0\omega^2}{4}\right)dm_0

It is advantageous to use the change of variable k = \frac{m_0\omega^2}{4} in order to simplify the integrand to:

p_\omega(\omega) = \frac{8}{\omega^3(b_m - a_m)}\int_{a_m\omega^2/4}^{b_m\omega^2/4} k\,p_k(k)\,dk   (3.179)
The introduction of the actual probability density function of k (Eq. (3.176)) in this relation leads, after some developments, to:

p_\omega(\omega) = \frac{8}{\omega^3(b_m - a_m)(b_k - a_k)}\left[\frac{k^2 - a_k^2}{2}U(k - a_k) - \frac{k^2 - b_k^2}{2}U(k - b_k)\right]_{k = a_m\omega^2/4}^{k = b_m\omega^2/4}   (3.180)

where the final evaluation has not been performed for the sake of compactness of the notations. It can be checked that this relation corresponds well to a probability density function (\int_{-\infty}^{+\infty} p_\omega(\omega)\,d\omega = 1). Furthermore, this relation indicates that the probability distribution of the circular frequency is neither uniform nor gaussian.
A numerical example is illustrated in Fig. 3.12. It is a simple application of the previous
analytical developments with:
ak = 180kN/m ; bk = 220kN/m
am = 30kg ; bm = 70kg (3.181)
• \int U(x - x_0)\,dx = (x - x_0)U(x - x_0) + cst
• \int (x - x_0)U(x - x_0)\,dx = \frac{(x - x_0)^2}{2}U(x - x_0) + cst
• \int x\,U(x - x_0)\,dx = \frac{x^2 - x_0^2}{2}U(x - x_0) + cst
The uniform probability density functions of k and m are represented on the top line of this figure. As a numerical application of Eq. (3.180), the probability density function of ω is also represented. It is composed of three pieces of parabola. The corresponding cumulative distribution function is also represented (lower right graph). Because the definition intervals of the input variables (k and m) are bounded, the probability density function of ω is strictly equal to zero outside a finite interval. This important characteristic marks a significant difference compared to a representation with gaussian variables. More precisely, in the context of a reliability analysis, where small failure probabilities, and hence tails of distributions, have to be handled, the representation of the physical variables in their tail regions is very important.
Figure 3.12: Numerical application of the probabilistic analysis (analytical developments) - Prob-
ability density functions of k, m and ω, and cumulative distribution function of ω.
In order to validate these analytical developments, a Monte Carlo simulation has been used to solve the same problem. A set of N = 100000 couples (k, m) is generated in order to match their respective probability density functions. For each couple the corresponding circular frequency is computed and finally the histogram (using 31 bins) of this series of N values is computed. It is represented in Fig. 3.13 together with the corresponding cumulative distribution function. The good correspondence with the results of Fig. 3.12 validates the previous analytical developments of the probabilistic analysis.
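Such a validation can be sketched in a few MATLAB lines (an illustration only, with the numerical values of Eq. (3.181)):

% Monte Carlo sketch of the distribution of the circular frequency
% omega = 2*sqrt(k/m), with k uniform on [180, 220] and m uniform on [30, 70].
N  = 1e5;
k  = 180 + 40*rand(N,1);
m  =  30 + 40*rand(N,1);
om = 2*sqrt(k./m);
histogram(om, 31, 'Normalization', 'pdf');   % compare with the pdf of Fig. 3.12
pf = mean(om < 3.3);                         % prob(omega < omega_L), with omega_L = 3.3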
In order to compare with forthcoming results obtained with a reliability analysis, the proba-
bility that the circular frequency is smaller than ω L = 3.3 can be computed (from Eq. (3.178)):
They are useful for the transformation of the failure function to its reduced form:

\hat{G}(\hat{k}, \hat{m}) = 2\sqrt{\frac{a_k + (b_k - a_k)\Phi(\hat{k})}{a_m + (b_m - a_m)\Phi(\hat{m})}} - \omega_L   (3.188)
This function is represented in Fig. 3.14 by its level curves. In the physical space (left side) the level curves are straight. The non-linear transformation transforms them into non-linear level
curves in the reduced space (right side). As usual, the most important level curve is the failure condition, represented by thick lines. The explicit expression of the reduced failure function and its representation by level curves are not really necessary for the computation but provide a good illustration of the method. In this figure, the joint probability density function of k and m is represented by a rectangular unit step in the physical space (its level curves are located on the limits of the graph) and, as usual, by a gaussian distribution in the reduced space (concentric circles). This kind of graphical illustration has to be considered as extra (non-required) information.
Step 1: After this preliminary step the complete information about the reduced space is known. The iterations start with the guess of an initial design point. It is chosen as the mean physical variables.
Step 2: This point does not lie on the failure condition. The failure condition (or equivalently the reduced failure condition) is equal to:

G(k, m) = 2\sqrt{\frac{k}{m}} - \omega_L = 2\sqrt{\frac{200}{50}} - 3.3 = 0.7
\hat{G}(\hat{k}, \hat{m}) = 2\sqrt{\frac{180 + 40\Phi(0)}{30 + 40\Phi(0)}} - 3.3 = 0.7   (3.190)
Step 3: The third step consists in computing the gradient of the reduced failure function at the design point. In this case these derivatives can be computed in closed form⁴:

\frac{\partial \hat{G}}{\partial \hat{k}} = \frac{(b_k - a_k)\,\phi(\hat{k})}{\sqrt{\left(a_k + (b_k - a_k)\Phi(\hat{k})\right)\left(a_m + (b_m - a_m)\Phi(\hat{m})\right)}}   (3.191)

\frac{\partial \hat{G}}{\partial \hat{m}} = -\sqrt{\frac{a_k + (b_k - a_k)\Phi(\hat{k})}{\left(a_m + (b_m - a_m)\Phi(\hat{m})\right)^3}}\,(b_m - a_m)\,\phi(\hat{m})   (3.192)

where \phi represents the standard normal probability density function (\phi(t) = \exp(-t^2/2)/\sqrt{2\pi}).
\frac{\partial \hat{G}}{\partial \hat{k}} = \frac{40\frac{1}{\sqrt{2\pi}}}{\sqrt{(180 + 40\cdot0.5)(30 + 40\cdot0.5)}} = \frac{2}{\sqrt{50\pi}} = 0.1596   (3.193)

\frac{\partial \hat{G}}{\partial \hat{m}} = -\sqrt{\frac{180 + 40\cdot0.5}{(30 + 40\cdot0.5)^3}}\;40\frac{1}{\sqrt{2\pi}} = -\frac{4}{5}\sqrt{\frac{2}{\pi}} = -0.6383   (3.194)
⁴ This example shows that the computation of the gradient of the reduced failure function is not necessarily easy. In more complex applications, the explicit establishment of the reduced failure function itself (\hat{G}) is more complex. Its gradient can be computed by decomposing the derivations as:

\frac{\partial \hat{G}}{\partial \hat{k}} = \frac{\partial G(k(\hat{k}), m(\hat{m}))}{\partial \hat{k}} = \left.\frac{\partial G(k, m)}{\partial k}\right|_{k = k(\hat{k}),\,m = m(\hat{m})}\frac{\partial k(\hat{k})}{\partial \hat{k}}

In this relation the derivative of G with respect to k is supposed to be computable (it depends on the initial failure function only). The second derivative could be more complex. Indeed, in the most general case, k(\hat{k}) = F_k^{-1}(\Phi(\hat{k})), in which F_k^{-1} may not have an explicit form. The derivation can however be simplified by the following considerations:

\frac{\partial k(\hat{k})}{\partial \hat{k}} = \frac{\partial F_k^{-1}(\Phi(\hat{k}))}{\partial \hat{k}} = \left.\frac{\partial F_k^{-1}(c)}{\partial c}\right|_{c = \Phi(\hat{k})}\frac{\partial \Phi(\hat{k})}{\partial \hat{k}} = \left.\frac{\phi(\hat{k})}{\frac{\partial F_k(k)}{\partial k}}\right|_{k = F_k^{-1}(\Phi(\hat{k}))} = \frac{\phi(\hat{k})}{p_k\!\left(F_k^{-1}(\Phi(\hat{k}))\right)}

These considerations are useful in the context of hand calculations, since the derivative is expressed as a function of accessible quantities. In the worst case scenario, F_k^{-1} has to be found from tabulated values.
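As a numerical illustration of this footnote (a sketch under the assumptions of the present example, i.e. k uniform on [a_k, b_k]), the derivative \partial k(\hat{k})/\partial \hat{k} can be evaluated without writing \hat{G} explicitly:

% Chain-rule evaluation of dk/dkhat for a uniform variable k on [ak, bk]:
% k(khat) = Fk^-1(Phi(khat)), so dk/dkhat = phi(khat)/pk(k(khat)).
ak = 180; bk = 220;
Phi  = @(t) erfc(-t/sqrt(2))/2;                 % standard normal cdf
phi  = @(t) exp(-t.^2/2)/sqrt(2*pi);            % standard normal pdf
kmap = @(khat) ak + (bk - ak)*Phi(khat);        % Fk^-1(Phi(khat)) for the uniform law
pk   = 1/(bk - ak);                             % uniform pdf (constant on [ak, bk])
khat = 0;
k0   = kmap(khat);                              % physical value, here 200 (the median)
dkdkhat = phi(khat)/pk;                         % about 15.96 at the mean point
% combined with dG/dk of the physical failure function, this reproduces Eq. (3.193).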
Step 4: The orientation for the next design point is obtained by:

\alpha_1^{(0)} = \frac{n_1^{(0)}}{|n^{(0)}|} = \frac{0.1596}{\sqrt{0.1596^2 + (-0.6383)^2}} = 0.2425
\alpha_2^{(0)} = \frac{n_2^{(0)}}{|n^{(0)}|} = \frac{-0.6383}{\sqrt{0.1596^2 + (-0.6383)^2}} = -0.9701
Step 5: The distance of this new point from the origin requires the estimation of the mean and
standard deviation of the failure function:
The iterative procedure then requires starting back at step 1 and repeating the operations until convergence. Tables 3.10 and 3.11 give summarized and detailed results of the application of the FOGSM method. They indicate that the convergence seems to be very slow in this application: six iterations are not sufficient for the stabilization of the reliability index.
Table 3.10: Representative parameters of the iterations (FOGSM, vibration model)

It.   X̂1* = k̂   X̂2* = m̂   μG       σG       β        pf
1     +0.0000    +0.0000    0.7000   0.6580   1.0639   0.14369
2     −0.2580    +1.0321    0.5006   0.2911   1.7199   4.2727·10⁻²
3     −0.8147    +1.5147    0.3160   0.1616   1.9554   2.5271·10⁻²
4     −1.2297    +1.5203    0.2771   0.1403   1.9743   2.4173·10⁻²
5     −0.9453    +1.7333    0.2418   0.1247   1.9392   2.6240·10⁻²
6     −1.4046    +1.3370    0.3170   0.1721   1.8419   3.2741·10⁻²
Fig. 3.14 also provides an overview of the results of the FOGSM method. The locations of the reduced and physical variables are represented for the first three iterations. As usual, the small arrows represent the gradient of the reduced failure function at the considered point. Fig. 3.15 presents the same results up to the 30th iteration. The successive positions of the reduced design point are represented by red squares and linked together by red lines. This figure illustrates the lack of convergence of the FOGSM method. This typical divergence is reported in many reference books and is comparable to the divergence of the Newton-Raphson iterative method in the vicinity of inflexion points.
As a solution to this problem, the design point could be considered as the point lying the closest to the reduced failure condition (minimum of |\hat{G}|), i.e. point 4 in this application, for which
Table 3.11: Detailed results for the first three iterations (FOGSM, vibration model)

                       Iteration 1      Iteration 2      Iteration 3
Step 1   X̂1* = k̂      +0.0000          −0.2580          −0.8147
         X̂2* = m̂      +0.0000          +1.0321          +1.5147
         k             200.00           195.93           188.30
         m             50.000           63.960           67.403
Step 2   G             0.7000           0.20045          0.04289
Step 3   ∂Ĝ/∂k̂       +0.15958         +0.13788         +0.10164
         ∂Ĝ/∂m̂       −0.63831         −0.25635         −0.12566
Step 4   α1            +0.2425          +0.4737          +0.6289
         α2            −0.9701          −0.8807          −0.7775
Step 5   μG            0.7000           0.50061          0.31603
         σG            0.65795          0.29108          0.16162
Step 6   β             +1.0639          +1.7199          +1.9554
         pf            1.4369·10⁻¹      4.2727·10⁻²      2.5271·10⁻²

Step 1 — Guess design point
Step 2 — Compute failure condition
Step 3 — Compute the gradient of the failure condition
Step 4 — Compute the orientation of the next design point
Step 5 — Compute the mean and standard deviation of the failure condition
Step 6 — Compute the reliability index and probability of failure
performed in the initial space. Indeed the exact probability of failure can be computed from the simple calculation
of the surface of the triangle under the failure condition.
Step 1: The reduced design point has already been computed (step 0.1) and,
Step 2: the reduced failure function at this point is equal to:

\hat{G}^{(0)}(\hat{k}, \hat{m}) = 0.7   (3.204)
Step 3: Thanks to the simplicity of the transformation, the gradient of the failure function can be computed easily:

n_1^{(0)} = \frac{\partial \hat{G}^{(0)}}{\partial \hat{k}} = \sqrt{\frac{15.906\hat{m} + 50}{15.906\hat{k} + 200}}\,\frac{15.906}{15.906\hat{m} + 50}
n_2^{(0)} = \frac{\partial \hat{G}^{(0)}}{\partial \hat{m}} = -15.906\sqrt{\frac{15.906\hat{m} + 50}{15.906\hat{k} + 200}}\,\frac{15.906\hat{k} + 200}{(15.906\hat{m} + 50)^2}   (3.205)
Step 4: Exactly as for the AFOSM method, the new orientation of the design point is estimated from the knowledge of the gradient:

\alpha_1^{(0)} = \frac{n_1^{(0)}}{|n^{(0)}|} = \frac{0.15906}{0.65581} = 0.2425
\alpha_2^{(0)} = \frac{n_2^{(0)}}{|n^{(0)}|} = \frac{-0.63623}{0.65581} = -0.9701
Step 5: The estimated mean and standard deviation of the failure condition are given by:

\mu_G = \hat{G}^{(0)} - \sum_{i=1}^{N} n_i^{(0)} \hat{x}_i^{(0)} = 0.7   (3.207)
\sigma_G = \sum_{i=1}^{N} \alpha_i^{(0)} n_i^{(0)} = |n^{(0)}| = 0.65581   (3.208)

Because the reduced space is continuously changing (from one iteration to the next), the design point has to be computed, for the FOGAM method, in the physical space:

k^{(0)} = 15.906\hat{k} + 200 = 195.88
m^{(0)} = 15.906\hat{m} + 50 = 66.471   (3.211)
This point is represented by the label '1' in the physical space as well as in the reduced
space. Now the iterations can be repeated, by defining at each iteration a new transformation,
i.e. a new reduced space and new reduced failure conditions. Fig. 3.16 illustrates the results
obtained for the first few iterations. It can be observed that the failure condition is continuously
changing (different level curves) but remains linear. The equivalent gaussian variables are also
continuously changing, as indicated by the different joint probability density functions at each
iteration (concentric ellipses in the physical space).
Tables 3.12 and 3.13 give the summarized and detailed results of the application of this
algorithm. They indicate that the FOGAM method is also not suitable for the estimation of the
probability of failure: the summarized results show a divergent behaviour. Actually, it can be
checked that the location of the design point in the physical space is obtained quite accurately.
The lack of convergence comes from the variability, from one iteration to the next, of the means
and standard deviations of the equivalent variables. Even if the process converges in the physical
space, the iterative resolution can thus be unstable in the reduced space, since the reduced space
is continuously changing. Because the reliability index is defined in the reduced space, this
divergent behaviour makes it impossible to estimate the probability of failure correctly.
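This behaviour can be verified with a simple diagnostic; the sketch below assumes the iteration history of the physical design point has been stored row by row in an array X, as done in the routines of Appendix A, and monitors its relative change between successive iterations:

dX = sqrt(sum(diff(X,1,1).^2,2)) ./ sqrt(sum(X(1:end-1,:).^2,2));
disp(dX')                           % small values indicate convergence in the physical space,
                                    % even when beta and pf keep oscillating in the reduced space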
Table 3.12: Summarized results of the FOGAM iterations.

It.   X̂1* = k̂   X̂2* = m̂      µG         σG        β         pf
 1    +0.0000    +0.0000     0.7000     0.6558    1.0674    0.14290
 2    −0.2610    +1.3517     0.3918     0.2130    1.8388    3.2972e-2
 3    −1.6270    +1.4302     0.2630     0.1465    1.7953    3.6305e-2
 4    −0.9366    +1.4977     6.066e-2   0.09226   0.6575    0.25544
 5    −0.6778    +3.5282     5.437e-2   0.1103    0.4929    0.31106
 6    −0.5022    +3.3566     1.037      0.3150    3.2928    4.9594e-4
Table 3.13: Detailed results of the first three FOGAM iterations.

Iteration               1            2            3
Step 1   X̂1* = k̂      +0.0000      −0.2610      −1.6270
         X̂2* = m̂      +0.0000      +1.3517      +1.4302
         k             +200.00      +195.88      +182.07
         m             +50.000      +66.471      +66.947
Step 2   Ĝ             +0.7000      +0.13331     −1.7062e-3
Step 3   ∂Ĝ/∂k̂        +0.15906     +0.13449     +3.8423e-2
         ∂Ĝ/∂m̂        −0.63623     −0.16523     −0.14135
Step 4   α1             +0.2425      +0.6313      +0.2623
         α2             +0.9843      +0.9986      +0.3832
Step 5   µG             +0.70000     +0.39176     +0.26297
         σG             +0.65581     +0.21305     +0.14648
Step 6   β              +1.0674      +1.8388      +1.7953
         pf             +0.14290     +3.2972e-2   +3.6305e-2
Step 1: Guess the design point
Step 2: Compute the failure condition
Step 3: Compute the gradient of the failure condition
Step 4: Compute the orientation of the next design point
Step 5: Compute the mean and standard deviation of the failure condition
Step 6: Compute the reliability index and the probability of failure
Appendix A
Computer Programs
A.1 AFOSMC
A.1.1 Call to AFOSMC
mX = [150 100];                       % mean values of the physical variables
sX = [ 20 30];                        % standard deviations
r = 0.4;                              % correlation coefficient between the two variables
covX = [sX(1)^2 r*sX(1)*sX(2); r*sX(1)*sX(2) sX(2)^2];   % covariance matrix
GFct = @FailureCond_Ex1_1;            % handle to the failure condition
[Beta, X, XRed, G, GrRed, Alfa, PrFail, MeanG, StdG] = AFOSMC (GFct, mX, covX);
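For completeness, a hypothetical example of what a failure-condition handle compatible with these routines could look like; the expression below is only a placeholder (the actual FailureCond_Ex1_1 used above is the failure condition of the corresponding illustration of Chapter 3), but the interface is the one expected by AFOSMC: a row vector of physical variables in, the scalar value of the failure function out.

GFct_example = @(X) X(1) - X(2);      % placeholder expression, not the document's failure condition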
The core of the AFOSMC routine (excerpt):

Nvar = length(mX);                    % number of random variables
ep = 1e-6;                            % finite-difference step for the gradient
NbIter = 5;                           % number of iterations
switch iStart                         % initial guess of the design point
case 0
beta = 0;                             % start from the mean point
alfa = zeros(1,Nvar);
case 1
beta = 3;                             % alternative start: beta = 3 with equal direction cosines
alfa = sqrt(ones(1,Nvar)/Nvar);
end
for icpt = 1:NbIter                   % main iteration loop (inferred from the use of icpt below)
Xs_red = -beta*alfa;                  % reduced coordinates of the current design point
Xs = Xs_red*invA' + mX;               % back to the physical space (invA is built from covX
                                      % earlier in the routine, not shown in this excerpt)
GXs = GFct(Xs);                       % failure condition at the design point
for i=1:Nvar                          % finite-difference gradient in the reduced space
X_red = Xs_red; X_red(i)=X_red(i)+ep;
X_ = X_red*invA' + mX;
grad(i) = (GFct(X_)-GXs)/ep;
end
SumSq = sqrt(sum(grad.^2));           % norm of the gradient
alfa = grad / SumSq;                  % direction cosines alpha_i
mZ = GXs - sum(grad.*Xs_red);         % mean of the linearized failure condition
sZ = SumSq;                           % standard deviation of the linearized failure condition
beta = mZ/sZ;                         % reliability index
PrFail(icpt) = 1-erfc(-beta/sqrt(2))/2;   % probability of failure, pf = Phi(-beta)
end
A.2 FOGSM
A.2.1 Call to FOGSM
P = [150 30; 100 30];                 % distribution parameters, one row per variable
Typ = {'Ext_Typ.I', 'Ext_Typ.I'};     % both variables follow an extreme value type I distribution
GFct = @FailureCond_Ex1_1;            % handle to the failure condition
[Beta,X,XRed,G,GrRed,Alfa,PrFail,MeanG,StdG] = FOGSM (GFct, P, Typ);
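As a further illustration, a mixed parameterization could be passed in the same way. This is only a sketch with hypothetical parameter values, restricted to the two distribution types handled by the inverse transformation listed below; for 'Ext_Typ.I' the row of P carries mu and sigma, for 'Uniform' it carries the bounds a and b.

P   = [150 30; 50 150];               % hypothetical parameters: [mu sigma] and [a b]
Typ = {'Ext_Typ.I', 'Uniform'};
[Beta,X,XRed,G,GrRed,Alfa,PrFail,MeanG,StdG] = FOGSM (GFct, P, Typ);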
The core of the FOGSM routine (excerpt):

ep = 1e-6;                            % finite-difference step for the gradient
NbIter = 6;                           % number of iterations
Beta= zeros(NbIter,1); X = zeros(NbIter,Nvar); XRed= zeros(NbIter,Nvar);
G = zeros(NbIter,1); GrRed=zeros(NbIter,Nvar); Alfa =zeros(NbIter,Nvar);
PrFail=zeros(NbIter,1);MeanG= zeros(NbIter,1); StdG = zeros(NbIter,1);
grad = zeros(1,Nvar);                 % preallocation of the iteration histories and of the gradient
switch iStart                         % initial guess of the design point
case 0
beta = 0;
alfa = zeros(1,Nvar);
case 1
beta = 3;
alfa = sqrt(ones(1,Nvar)/Nvar);
end
for icpt = 1:NbIter                   % main iteration loop (inferred from the use of icpt below)
Xs_red = -beta*alfa;                  % reduced coordinates of the current design point
Xs=FOGSM_InvTransf(Xs_red,P,Typ);     % back to the physical space through the inverse transformation
GXs = GFct(Xs);                       % failure condition at the design point
for i=1:Nvar                          % finite-difference gradient in the reduced space
X_red = Xs_red; X_red(i)=X_red(i)+ep;
X_ = FOGSM_InvTransf(X_red,P,Typ);
grad(i) = (GFct(X_)-GXs)/ep;
end
SumSq = sqrt(sum(grad.^2));           % norm of the gradient
alfa = grad / SumSq;                  % direction cosines alpha_i
mZ = GXs - sum(grad.*Xs_red);         % mean of the linearized failure condition
sZ = SumSq;                           % standard deviation of the linearized failure condition
beta = mZ/sZ;                         % reliability index
PrFail(icpt) = 1-erfc(-beta/sqrt(2))/2;   % probability of failure, pf = Phi(-beta)
end
Inverse transformation from the reduced standard gaussian variables to the physical variables
(this loop is the core of the FOGSM_InvTransf helper used above):

for t=1:length(XRed)                               % loop over the random variables
switch char(Typ(t))                                % distribution type of variable t
case 'Ext_Typ.I'                                   % extreme value type I (Gumbel)
mu=P(t,1); sigma = P(t,2);
X(t) = evinv(normcdf(XRed(t),0,1),mu,sigma);       % x = F^(-1)( Phi(x_red) )
case 'Uniform'
a=P(t,1); b = P(t,2);
X(t) = unifinv(normcdf(XRed(t),0,1),a,b);
end
end
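A quick sanity check of this transformation (sketch): a zero reduced vector should map every variable onto its median, since normcdf(0) = 0.5.

XRed = [0 0]; P = [150 30; 100 30]; Typ = {'Ext_Typ.I','Ext_Typ.I'};
Xmed = FOGSM_InvTransf(XRed, P, Typ);   % medians of the two type I variables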
A.3 Monte Carlo (non Gaussian)

Generation of samples of the physical variables and Monte Carlo estimation of the statistics of
the failure condition (excerpt):

Nvar = length(Typ);                   % number of random variables
x=rand(N,Nvar); X=zeros(N,Nvar);      % N uniform samples per variable; preallocated physical samples
for t=1:Nvar                          % inverse-transform sampling, variable by variable
switch char(Typ(t))
case 'Ext_Typ.I'                      % extreme value type I (Gumbel)
mu=P(t,1); sigma = P(t,2);
X(:,t)=evinv(x(:,t),mu,sigma);
case 'Uniform'
a = P(t,1); b=P(t,2);
X(:,t)=unifinv(x(:,t),a,b);
case 'Normal'
moy = P(t,1); sig=P(t,2);
X(:,t)=norminv(x(:,t),moy,sig);
case 'Beta'
p1 = P(t,1); p2=P(t,2);
X(:,t)=betainv(x(:,t),p1,p2);
end
end
g = G(X);                             % evaluate the failure condition for all samples
figure;
[nn,xx] = histv(g,Nbins);             % histogram of g (histv: presumably a custom helper, cf. hist below)
subplot(1,2,1); plot(xx,nn); title ('pdf of MCS variable'); grid on
[n,x] = hist(g,Nbins);
F_g = cumtrapz(x,n); F_g=F_g/max(F_g);   % empirical cdf obtained by integrating the histogram
subplot(1,2,2); plot(x,F_g); title ('cdf of MCS variable'); grid on
M = MomentsStatistiques(g,4);         % first four statistical moments of g
disp ([' --> Mean : ' num2str(M(1))])
disp ([' StandDev: ' num2str(M(2))])
disp ([' Skewness: ' num2str(M(3))])
disp ([' Kurtosis: ' num2str(M(4))])
else                                  % closes a check (not shown in this excerpt) that G is a function handle
disp ('G is not a function handle!')
end
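The excerpt above characterizes the distribution of the failure condition; the probability of failure itself could be estimated directly from the same samples, for instance as the fraction of realizations violating the failure condition (a sketch, assuming the usual convention that failure corresponds to G <= 0):

pf_MC = sum(g <= 0) / length(g);      % crude Monte Carlo estimate of the probability of failure
disp ([' Pf (MCS) : ' num2str(pf_MC)])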