Risk Theory - T
General Editors
D.R. Cox, D.V. Hinkley, D. Rubin, B.W. Silverman
32 Analysis of Binary Data, 2nd edition D.R. Cox and E.J. Snell (1989)
35 Empirical Bayes Methods, 2nd edition J.S. Maritz and T. Lwin (1989)
R. E. BEARD
O.B.E., F.I.A., F.I.M.A., PROFESSOR
Leicestershire, England
T. PENTIKÄINEN
PHIL. Dr, PROFESSOR h.c.
Helsinki, Finland
E. PESONEN
PHIL. Dr
Helsinki, Finland
THIRD EDITION
Preface IX
Nomenclature XIV
Appendixes 349
A Derivation of the Poisson and mixed Poisson processes 349
B Edgeworth expansion 355
C Infinite time ruin probability 357
D Computation of the limits for the finite time ruin
probability according to the method of Section 6.9 367
E Random numbers 370
F Solutions to the exercises 373
Bibliography 396
The theory of risk already has its traditions. A review of its classical
results is contained in Bohlmann (1909). This classical theory was
associated with life insurance mathematics, and dealt mainly with
deviations which were expected to be produced by random fluctua-
tions in individual policies. According to this theory, these deviations
are discounted to some initial instant; the square root of the sum of
the squares of the capital values calculated in this way then gives a
measure for the stability of the portfolio. A theory constituted in this
manner is not, however, very appropriate for practical purposes.
The fact is that it does not give an answer to such questions as, for
example, within what limits a company's probable gain or loss will
lie during different periods. Further, non-life insurance, to which
risk theory has, in fact, its most rewarding applications, was mainly
outside the field of interest of the risk theorists. Thus it is quite
understandable that this theory did not receive very much attention
and that its applications to practical problems of insurance activity
remained rather unimportant.
A new phase of development began following the studies of Filip
Lundberg (1909, 1919), which, thanks to H. Cramér (1926), C.O.
Segerdahl and other Swedish authors, became generally known as
the 'collective theory of risk'. As regards questions of insurance, the
problem was essentially the study of the progress of the business from
a probabilistic point of view. In this form the theory has its applica-
tions to non-life insurance as well as to life insurance. This new way
of expressing the problem has proved fruitful. In recent years the
fundamental assumptions of the theory, and the range of applica-
tions, have been significantly enlarged. The advancement of the
general theory of stochastic processes and its numerous sub-
branches and applications has been reflected in the development of
risk theory. The explosive development of computers has made it
claims and to that part of premiums which remains when the loading
for expenses for management has been deducted, i.e. risk premiums
increased by a safety loading. These restrictions are then relaxed,
leading gradually to the construction of a comprehensive model
(see Chapter 10).
In particular, the high rates of inflation prevalent today cannot
be ignored in practical work. To provide a satisfactory basis for
development it is assumed that, when the horizon under considera-
tion is longer than one year, the size of the claim will be corrected
by a factor depending on the assumed value of money.
Figure 1.1.1 A sample path of the claim process.
and the size of each claim is also a random variable. Any particular
realization consisting of an observed flow like that in Fig. 1.1.1
is called a sample path or a realization of the process.
If the observation time t is fixed, then the corresponding outcome,
X(t) in our example, is a random variable having a distribution
function (abbreviated d.f.) F(X; t) = prob{X(t) ≤ X}. Random vari-
ables will be denoted by bold-face letters (see Section 1.5). If the
stochastic process is well defined, then F is uniquely determined
for every t of the observation range. On the other hand, however,
mere definition of F, even if it were valid at every t, is not sufficient
to determine a stochastic process. In addition, transition rules are
needed to describe how the X(t) values related to different times t
are correlated. Hence some care is necessary when the terms
'stochastic processes' and 'stochastic variables' or their distributions
are used.
this book. Instead it is mostly supposed that the initial values are
readily available. The problems caused by parameter uncertainty
are not only a feature of risk-theoretical considerations. In essence
the same problems are always present and are even more critical
in premium rating and in evaluation of technical reserves. In
fact some of the basic data underlie both risk-theoretical and rating
calculations or can be derived from the same basic files. Premium
rates are ultimately based on past experience and more or less
reliable prognoses of future trends, cycles, inflation and other
relevant factors, and they are understandably subject to inaccuracies
and errors. They affect the trading result, which makes it possible to
evaluate their order of magnitude but not until after a time lag,
which in practice may be two or three years and even longer for
some particular classes of insurance. The rates and reserves can
and should be then corrected (within limitations imposed by
competitive market conditions or statutory regulations, if legally
controlled). This control mechanism, which is inherent just from
the uncertainty in the parameters, is one of the important causes of
the underwriting cycles which will be described in Section 2.7 and
incorporated in the model in Chapter 6. In fact the effect of the
parameter uncertainty will be regarded in this indirect way in the
risk-theoretical considerations discussed in Section 6.2.
The effect of parameter inaccuracy can also be investigated
directly if necessary. The technique of sensitivity analysis, which
will be developed in Section 7.6, can be useful for this purpose.
Simply, variations in the initial data can be fed into the risk-theoreti-
cal models and the sensitivity of the outcomes can be used for
evaluation of the effect of inaccuracies arising from the uncertainty
of the initial data.
Even if the estimation inaccuracy is not considered, it must
always be kept in mind as a relevant background factor. For
example, it may be meaningless to apply very laborious techniques
to get very accurate results if the initial data are uncertain. The
selection of approaches, if alternatives are available, should thus be
consistent with the environment under consideration.
Figure 1.3.1 Negative risk sums. A portfolio of current annuities. The death of an annuitant is reflected as a step upward.
Figure 1.4.1 A simple risk process. The state of the process can be checked either at the end of the period (0, T] or continuously as in item (b).
Figure 1.4.2 Checking at points t = 1, 2, ..., T during the observation period.
(d) Claim number and compound processes The analysis of risk pro-
cesses begins in Chapter 2 by considering the number of claims
and the process related to it, which is called the claim number process
or often the counting process, the well-known Poisson and negative
binomial processes being treated as special cases.
The general case where the individual amount of a claim, claim
size, may vary forms the subject matter of Chapter 3 and later parts
of the book. This process, where both the claim number and claim
size are random variables, is called the compound process.
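The interplay of these two sources of randomness is easy to see in a small simulation. The following Python sketch is only an illustration under assumed parameter values (Poisson claim numbers with mean 100 per year, log-normal claim sizes); none of the figures come from the text.

```python
import math
import random

# A minimal sketch of a compound claim process: the number of claims in each
# period is Poisson distributed and every claim gets an individual random size.
# All parameter values below are illustrative assumptions, not data from the text.

def poisson_variate(mean, rng):
    # Inverse-transform sampling of a Poisson variate.
    u, k, p = rng.random(), 0, math.exp(-mean)
    cum = p
    while u > cum:
        k += 1
        p *= mean / k
        cum += p
    return k

def simulate_aggregate_claims(n_years=10, claims_per_year=100.0, seed=1):
    rng = random.Random(seed)
    totals, running = [], 0.0
    for _ in range(n_years):
        k = poisson_variate(claims_per_year, rng)                   # claim number process
        claims = [rng.lognormvariate(0.0, 1.0) for _ in range(k)]   # individual claim sizes
        running += sum(claims)                                      # aggregate claims X(t)
        totals.append(running)
    return totals

if __name__ == "__main__":
    for t, x in enumerate(simulate_aggregate_claims(), start=1):
        print(f"t = {t:2d}   X(t) = {x:10.2f}")
```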
(e) Break-up and going-concern basis One way to define the solvent
state of an insurer is to require that at the end of each fiscal year
the assets should be at least equal to the total amount of liabilities
(possibly increased by some legally prescribed margin). This
situation may be tested by assuming that the activity of the insurer
would be broken at the test time point and the liabilities, such as
those due to outstanding claims, would be cleared up during a
liquidation process. Then assets should be available in step with
the time of claim and other payments. The risk factors that are
involved and which are to be evaluated are the uncertainties of
the magnitude of the claims, including those claims which have
already occurred before the test time point but which may be
notified later. Furthermore, realization of the assets is affected by
changes in market value and the whole process is subject to inflation.
In other words, this 'break-up basis' is involved with uncertainties
arising when both the liabilities and assets go into hypothetical
liquidation. The problem is to evaluate these inaccuracies and to
find a minimum solvency margin in a way which still gives an
adequate guarantee for the fulfilment of the commitments of the
insurer.
Another possibility is to assume that the business of the insurer
will go on. Then, in addition to the errors and inaccuracies concern-
ing gradual liquidation of the assets and the outstanding claims
and other liabilities that have arisen in the past fiscal years, i.e. the
risks involved just with the break-up situation described above, the
continual flow of new claim-causing events and other business
transactions gives rise to further fluctuations. Because this 'going-
concern' basis, by definition, includes the 'break-up' risks as a
partial element, it generates a larger range of fluctuations and leads
to demand for a greater solvency margin and safety loading than
does the break-up basis alone. This assumes, of course, that
consistent principles are followed in the two bases. It may be re-
marked that if, for example, the break-up basis is defined con-
ventionally, such as by statute, there may be incompatibility with
the practical reality of the going-concern basis.
The going-concern basis was tacitly assumed in the previous items
and it will be followed in this book generally. The problems involved
with the outstanding claims, which constitute the most important
break-up risks, will be discussed in item 3.1 (c). Asset risks will be
considered in Section 6.3.
∫ g(X) dF(X),   (1.5.1)
are often needed, where g is some auxiliary function. For the types
of distribution functions mentioned, this becomes
Figure 1.5.1 A mixed type d.f.
who are not familiar with this topic can regard it as an abbreviation
and replace any such integral by the last form in (1.5.2), since for
our purposes the mixed case, as defined above, is general enough.
This extended integral has the same general features as the conven-
tional integral.
(e) Unit step function  It is often convenient to use the unit step function ε(X) defined as follows

ε(X) = 0 for X < 0,
ε(X) = 1 for X ≥ 0.   (1.5.3)

It can be interpreted as the d.f. of a 'degenerate' variable, where the whole probability mass is concentrated at the origin.
An example of the application of the step function in integration is as follows (see the previous item)

α_j = α_j(x) = E(x^j) = ∫_{−∞}^{+∞} X^j dF(X).   (1.5.5a)
The central moments μ_j of x are

μ_j = μ_j(x) = E{(x − E(x))^j} = ∫_{−∞}^{+∞} (X − E(x))^j dF(X),   (1.5.5b)
F*G(X) = ∫_{−∞}^{+∞} F(X − Y) dG(Y),   (1.5.8)
F*G(X) = ∫_{−∞}^{+∞} F(X − Y) g(Y) dY,   (1.5.9)
(i) A list of symbols and notation has been given after the Preface
for the convenience of readers.
M(s) = ∫_{−∞}^{+∞} e^{sX} f(X) dX,   (1.6.2)

where α_j are the moments (1.5.5a) about zero. If M(s) is known, they can be obtained in terms of derivatives

α_j = [d^j M(s)/ds^j]_{s=0}.   (1.6.5)

(ii) The d.f. is uniquely determined by its m.g.f. (if the m.g.f. exists), i.e. if two distributions have the same m.g.f. they are identical.
(iii) The linear transformation y = ax + b transforms the m.g.f. into the form

M_y(s) = e^{bs} M_x(as).   (1.6.6)

(iv) If x_1 and x_2 are two independent stochastic variables, then the m.g.f. of the sum x_1 + x_2 is obtained by multiplication

M(s) = M_1(s) M_2(s).   (1.6.7)

In terms of distribution functions this means that the m.g.f. transforms convolutions of distributions into products of m.g.f.s.
2.1 Introduction
(2.2.1)

for every t > 0, where p ≥ 0 is a parameter indicating, as will be seen later, the average number of claims in a time unit. The process k is called a Poisson process. The occurrence of a claim event depends both on the number of cases (risk units) exposed to risk and on the risk intensity, i.e. the chance that a particular case gives rise to a claim. The Poisson process arises as a product of these components.
Exercise 2.4.1  Derive the expressions for α_1 and α_2 by direct summation from (2.4.1).
(b) The shape of the Poisson distribution for three small values of
n is depicted in Fig. 2.5.1.
Figure 2.5.1 Poisson probabilities p_k. (To help visual shaping of the distributions the discrete probability values are linked as step curves.)
Figure 2.5.2 F or 1 − F for the normalized Poisson distribution (step lines) for some values of n (n = 10, 25, 50) and for the normal approximation (2.5.3) (logarithmic scale).
(e) Gamma formula  The d.f. of the Poisson variable can also be expressed in terms of the incomplete gamma function, as will be shown in Exercise 2.9.7.
reinsurer pays the third and subsequent benefits. What is the risk
premium ( = expected amount of claims) for the reinsurance?
(2.6.4)

This makes it possible to apply the Poisson law to cases where the risk exposure, measured by n_t, may vary in some defined way, such as by following a cycle or a trend. This aspect has already been discussed in item 2.3(b).
(t > 0).
i.e. also that the sum process satisfies the conditions (i)-(iii) of
Section 2.2.
Figure 2.7.2 Annual claim frequency for the period 1951 to 1963.
Figure 2.7.3 Workers' compensation time series 1960 to 1975: the relative deviation (n − n̄)/n̄ and the exposure n/10 000.
Figure 2.7.2 gives the annual claim frequency for the period 1951
to 1963, the exposure increasing from about 7000 to 27 000 over the
period, and shows a long-term periodic effect, with a probable trend
upwards over the period. This longer period shows that the declining
tendency in Fig. 2.7.1 was a downward phase of one of the longer-
term variations and shows the need to look at fairly long series of
values.
The workers' compensation time series depicted in Fig. 2.7.3 shows
a similar behaviour. Further analysis shows that the cycles are
Figure 2.7.4 A process where the risk parameter q related to consecutive time periods varies at random.
P(k) = ∫_0^∞ [ Σ_{i=0}^{k} p_i(nq) ] dH(q) = ∫_0^∞ F(k; nq) dH(q),   (2.7.4)
where F(k; nq) denotes the Poisson d.f. with mean value nq of the number of claims. Note that the formula could also have been obtained directly using the same reasoning as applied for (2.7.2), i.e. by considering that P is, so to speak, the weighted mean of all possible Poisson functions, the weights for each value of nq being the probability of occurrence of this expected number of claims.
The claim number process k just introduced is called a weighted
or mixed Poisson process as distinguished from the 'simple' Poisson
process considered in previous sections. Naturally, types of claim
number processes other than the Poisson one can be similarly
weighted. The variable q will be designated the structure variable and its d.f. H the structure distribution.
In the following the notation P is often replaced by F if it is clear in the context that the weighted Poisson d.f. is under consideration.
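The weighting in (2.7.4) is straightforward to carry out numerically when H is given as a discrete set of weights. The Python sketch below uses an assumed five-point structure distribution with mean E(q) = 1; the values are illustrative, not those of Table 2.8.1.

```python
import math

# A sketch of the weighting in (2.7.4): mixed Poisson probabilities obtained by
# averaging simple Poisson probabilities over a discrete structure distribution H.
# The q-values and weights below are illustrative assumptions.

def poisson_pmf(k, mean):
    return math.exp(-mean) * mean ** k / math.factorial(k)

def mixed_poisson_pmf(k, n, q_values, weights):
    # The weights should sum to one and the q-values should have mean E(q) = 1.
    return sum(w * poisson_pmf(k, n * q) for q, w in zip(q_values, weights))

if __name__ == "__main__":
    n = 100
    q_values = [0.8, 0.9, 1.0, 1.1, 1.2]    # assumed support of q
    weights  = [0.1, 0.2, 0.4, 0.2, 0.1]    # assumed structure distribution H
    for k in (80, 90, 100, 110, 120):
        print(k, round(poisson_pmf(k, n), 5),
              round(mixed_poisson_pmf(k, n, q_values, weights), 5))
```

As expected, the mixed probabilities are more spread out around the mean than the simple Poisson ones.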
Hence these moments are obtained from those of the simple Poisson d.f. by a similar weighting as for p_k and F(k) in the previous section. Substituting the expressions (2.4.3), where n is to be replaced by nq, it follows that
E(x) = E(E(x | v)) = ∫_{−∞}^{+∞} E(x | v = V) dH(V),
Table 2.8.2 Examples of simple and mixed Poisson probabilities for n = 100 and for the H function given in Table 2.8.1, with weights h_r: 0.10, 0.05, 0.05, 0.10, 0.12, 0.16, 0.12, 0.10, 0.05, 0.05, 0.10.
(2.9.2)
where the upper limit is chosen to give the mean value E(q) = 1,
after which one parameter h is still open to choice.
E(q) = 1
σ_q² = 1/h,   σ_q = 1/√h
μ_3(q) = 2/h²,   γ_q = 2/√h
μ_4(q) = 6/h³ + 3/h²,   γ_2(q) = 6/h.   (2.9.3)
(c) The shape of the Γ-distribution is shown in Fig. 2.9.1 for various values of the parameter h. Values can be found in Pearson (1954) or they can be computed using expansions which will be given in Section 3.12.
Figure 2.9.1 The density H'(q) for various values of h (e.g. h = 500).
p_k(n) = ( (h + k − 1) choose k ) ( h/(n + h) )^h ( n/(n + h) )^k.   (2.9.9)
and

p_k = (a + b/k) p_{k−1},   (2.9.13)

where

a = n/(n + h)
b = n(h − 1)/(n + h).
It may be noted that integer-valued distributions having a recursion
rule of the form (2.9.13), where a and b are constants not depending
on k, constitute an important class of distributions. The Poisson
distribution (a = 0, b = n) is a member of this family (see (2.5.2)).
Generalizing a result of Panjer (1981), Jewell and Sundt (1981)
have recently shown that the compound Poisson function is also
computable by a recursion formula in certain conditions. This is
discussed in Section 3.8.
A few numerical values are provided in Table 2.9.1 and give some
idea of the flexibility of (2.9.9) compared with the simple Poisson
dJ.
h = ∞   100   20   10   5
n   k
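The probabilities (2.9.9) are conveniently generated with the recursion (2.9.13), starting from p_0 = (h/(n + h))^h. The following Python sketch does this for the assumed values n = 5 and h = 10 (cf. Exercise 2.9.5); it is only an illustration of the recursion.

```python
# A sketch of the recursion (2.9.13) for the Polya (negative binomial) claim
# number probabilities, with p_0 = (h/(n+h))**h from (2.9.9) with k = 0.
# The values n = 5, h = 10 are illustrative assumptions.

def polya_probabilities(n, h, k_max):
    a = n / (n + h)
    b = n * (h - 1) / (n + h)
    p = [(h / (n + h)) ** h]           # p_0
    for k in range(1, k_max + 1):
        p.append((a + b / k) * p[-1])  # p_k = (a + b/k) p_{k-1}
    return p

if __name__ == "__main__":
    probs = polya_probabilities(n=5.0, h=10.0, k_max=20)
    print("sum of p_0..p_20 =", round(sum(probs), 6))
    for k, pk in enumerate(probs[:6]):
        print(k, round(pk, 5))
```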
prob{k = k} = ( s choose k ) [ Π_{u=0}^{k−1} (N_1 + uC) · Π_{v=0}^{s−k−1} (N_2 + vC) ] / Π_{w=0}^{s−1} (N + wC),   (2.9.14)
fixed observation period, say in one year, will be kept constant and
equal to n. It can be proved that the passage leads to the negative
binomial formula (2.9.9). Hence the negative binomial can be
derived by assuming contamination between the risk units, e.g.
epidemic diseases or the spreading of fire. For this reason the
negative binomial distribution is often called a Polya distribution
and the corresponding mixed process is called a Polya process.
Exercise 2.9.1  Prove that the moments about zero of Γ(x; h) are

α_i = Γ(h + i)/Γ(h),   (2.9.17)

and calculate the characteristics (2.9.3) and (2.9.10).
Exercise 2.9.5  Calculate and plot in the same diagram the Poisson function p_k and the corresponding Polya p_k for n = 5 and h = 10.
Exercise 2.9.7  Prove that the d.f. F_n(k) of the Poisson variable can be expressed in terms of the gamma d.f.

1 − F(k; n) = Γ(n, k + 1).
Exercise 2.9.8  For which value of k does p_k(n) as given by (2.9.9) achieve its maximum?
claims in the year 1968, the average number of claims per policy being 0.13174 and the variance 0.13852. The column headed 'Poisson' sets out the distribution that would result if the occurrence of claims had followed the Poisson law with n = 0.13174, i.e. the expected number of claims per policy in one year. As will be apparent, the Poisson distribution is theoretically shorter than the data, an observation confirmed by the chi-squared test. In other words, the hypothesis that the risk proneness is different for different policies is confirmed.
The insufficiency of the Poisson law could also be anticipated
from the fact that the variance is greater than the mean, whereas they
should be equal if the Poisson law were valid, as will be seen from
(2.4.3).
The column headed 'Negative binomial' sets out the distribution according to this law with parameters n = 0.13174 and h = 2.555, the latter being found by the method of maximum likelihood. The
value of chi-square is 6.9 which gives a probability of 0.14 for 4
degrees of freedom, so that the representation is acceptable. There
is a slight indication that the negative binomial may be under-
representing the tail and for some applications it might be desirable
to elaborate the model, but for applications which have no signi-
ficantly large skewness the model may be safely used.
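The two laws compared here can be reproduced from their formulae. The Python sketch below evaluates the Poisson and negative binomial probabilities for the quoted parameters n = 0.13174 and h = 2.555; it illustrates the formulae only and does not reproduce the observed column or the chi-squared computation.

```python
import math

# A sketch comparing the Poisson law (n = 0.13174) with the negative binomial
# law (n = 0.13174, h = 2.555) discussed above.  Probabilities are per policy
# and per year.

def poisson_pk(k, n):
    return math.exp(-n) * n ** k / math.factorial(k)

def negbin_pk(k, n, h):
    coef = math.gamma(h + k) / (math.gamma(h) * math.factorial(k))
    return coef * (h / (n + h)) ** h * (n / (n + h)) ** k

if __name__ == "__main__":
    n, h = 0.13174, 2.555
    print(" k    Poisson    Neg. binomial")
    for k in range(5):
        print(f"{k:2d}   {poisson_pk(k, n):8.5f}   {negbin_pk(k, n, h):8.5f}")
    # The negative binomial variance n + n**2/h is close to the observed 0.13852.
    print("mean =", n, " variance (neg. bin.) =", round(n + n * n / h, 5))
```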
(d) Terminology For the purposes of this book the inner variation in
the collective is not relevant. The collective will be treated as a
whole and the heterogeneity taken care of by the expected number
of claims n. Thus in what follows H(q) and the term 'structure
function' will represent only short-term variations in n, i.e. the random
fluctuation from one accounting period to another. Longer-term
variations are dealt with later in Chapter 6.
The reader will appreciate that this terminology deviates from
the practice sometimes assumed in the literature where a structure
function may refer mainly to the internal heterogeneity of collectives.
CHAPTER 3
Compound Poisson process
Figure 3.1.1 (a) A continuous function; (b) a discrete function; (c) mixed type.
errors of the outstanding claims from the model. Like any other
inaccuracy of the basic assumption, it gives rise to extra fluctuations
in underwriting results, probably in a periodic manner as assumed
in item 1.1(f) and as will be further discussed in Section 6.2. When
the model parameters are calibrated on the basis of observed actual
fluctuations, the effect of these inaccuracies will be automatically
taken into account. Furthermore, there is no essential obstacle to
the introduction of the outstanding claims or rather their estimation
error as a particular entry to the model. A brief indication of such an
approach will be presented in item 1O.2(e).
On the other hand, systematic under- or overestimation can give
rise to considerable bias in the balance sheet and thus in evaluation
of the actual solvency margins (risk reserves). The consideration of
these as well as many kinds of 'non-stochastic' aspects, e.g. in-
calculable risks jeopardizing the existence of insurers such as major
failures in investments or risk evaluation, misfeasance or malfeas-
ance of management, etc., are essential parts of the general solvency
control of the insurance industry, but they fall outside the scope of this book (see Pentikäinen, 1982; Section 2.9).
Analysis of the development of claims estimates is a normal part
of business routine and will indicate the need for some extra pro-
visions; it might be thought, for example, that a margin is needed
to deal with inflationary changes with respect to both the average
level of inflation as well as the need for emergency measures in cases
when the rate of inflation may occasionally be soaring. Even though
inflation will be incorporated into the model assumptions, its special
effects on the claims reserve will not be further discussed. Such
adjustments might be needed in assessing the parameters of the
overall model and will have to be dealt with in individual circum-
stances.
be calculated from the recurrence formula.
and the following important formula is obtained for the d.f. of the aggregate claims

F(X) = Σ_{k=0}^{∞} p_k S^{k*}(X).

The moments about zero of the claim size d.f. S,

a_i = ∫_0^∞ Z^i dS(Z),   (3.3.1)

are needed for the moments about zero of X,

α_j = Σ_{k=0}^{∞} p_k a_j^{(k)},

where a_j^{(k)} is the jth moment about zero of the sum of k individual claims

Z_1 + Z_2 + ... + Z_k.
The terms of this sum are mutually independent according to assumptions made in Section 3.1 and have the same d.f. S. Hence the first moment is

a_1^{(k)} = k a_1 = km,   (3.3.4)
and the second and third central moments of the sum of claims can be summed from its components:

μ_j^{(k)} = μ_j(Z_1) + ... + μ_j(Z_k)   (j = 2 or 3).
Then the second moment about zero, needed for (3.3.3), can be calculated as follows

a_2^{(k)} = μ_2^{(k)} + (a_1^{(k)})²
         = k μ_2(Z_1) + k² m²   (3.3.5)
         = k(a_2 − m²) + k² m²
         = k a_2 + k(k − 1) m².
(d) In the Poisson case, i.e. when the structure variable q is constant (= 1), (3.3.7) is reduced as follows

μ_X = nm
σ_X² = n a_2
γ_X = a_3/(a_2^{3/2} √n) = r_3/(r_2^{3/2} √n).   (3.3.9)
(e) For the Polya case, i.e. when the structure function is of gamma type, the corresponding expressions are derived by using (2.9.10)

μ_X = nm
σ_X² = n a_2 + n² m²/h.
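A small Python sketch of these moment formulae may be helpful; the claim size moments a_1, a_2 and the portfolio size n used below are assumed for illustration, with σ_q² = 1/h in the Polya case.

```python
import math

# A sketch of the mixed compound Poisson moments used above: mu_X = n*m and
# sigma_X**2 = n*a2 + n**2 * m**2 * sigma_q2 (for the Polya case sigma_q2 = 1/h).
# The claim size moments a1, a2 and n are illustrative assumptions.

def compound_moments(n, a1, a2, sigma_q2=0.0):
    m = a1
    mean = n * m
    variance = n * a2 + (n * m) ** 2 * sigma_q2
    return mean, variance

if __name__ == "__main__":
    n, a1, a2 = 10000.0, 1.0, 5.0
    for label, s2 in (("simple Poisson", 0.0),
                      ("Polya h = 100", 1.0 / 100.0),
                      ("Polya h = 1000", 1.0 / 1000.0)):
        mean, var = compound_moments(n, a1, a2, s2)
        print(f"{label:15s}  mu_X = {mean:9.1f}  sigma_X = {math.sqrt(var):9.1f}")
```

The growing share of the structure term in the standard deviation for large n is exactly the effect illustrated in Table 3.3.1.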
Table 3.3.1 The share p of the standard deviation due to the structure variation. Columns: n; 100σ_X/P (standard); 100p (standard, Polya h = 100, Polya h = 1000).

Table 3.3.2 The components (3.3.12) as a percentage of the total variance σ_X².
F(X) = ∫_0^∞ F_{nq}(X) dH(q),   (3.3.18)

where F_{nq} is the simple compound Poisson d.f. (σ_q = 0) having the expected number of claims nq. The merit of (3.3.18) is that it is not necessary to try to find any analytical presentation for H in cases where it is evaluated, for example from empirical data.
Exercise 3.3.4  Show that if Z_1, ..., Z_k are mutually independent random variables, then the third central moments satisfy the equation

μ_3( Σ_{j=1}^{k} Z_j ) = Σ_{j=1}^{k} μ_3(Z_j).
(b) In the Poisson case, when H(q) = ε(q − 1) (see (1.5.3)), the following formula is obtained as a special case of (3.4.2)

M(s) = exp[n(M_Z(s) − 1)].   (3.4.3)

In the Polya case, where H is of gamma type, (3.4.2) gives

M(s) = [1 − (n/h)(M_Z(s) − 1)]^{−h}.   (3.4.4)
Exercise 3.4.2  Check that the m.g.f. (2.8.7) is obtained from (3.4.2) by substituting S(Z) = ε(Z − 1).
3.5 Estimation of S
(b) Policy files as basis First a method is given for computing the
S function starting from the individual policies of an insurance
• The term 'probability of claim' is often used in this connection, but q_i must, in fact, be regarded as a frequency or, what is the same, the expected number of events (which might even be ≥ 1). If the number of claims is distributed in a Poisson form during a certain interval, and the parameter q is very small, the probability of occurrence of at least one event is clearly p = 1 − e^{−q} ≈ q. In this sense, reference is sometimes made in a rather loose way to the probability of an event, when, in fact, the expected number of events during this interval is meant.
(a) Claim statistics In this method the actual claims of the portfolio
in question are collected in a table according to the amounts of the
claims, as in Table 3.5.1 which sets out claims arising from a combined
experience of Finnish insurance portfolios comprising industrial
fire risks.
i   Z × 10⁻³ £   n_i   ΔS = n_i/n   S = Σ ΔS
1   0.010    283   0.033953   0.033953
2   0.016    280   0.037664   0.071617
3   0.025    157   0.045479   0.117096
4   0.040    464   0.055413   0.172509
5   0.063    710   0.063707   0.236216
6   0.100    781   0.068234   0.304450
7   0.158    530   0.070466   0.374915
8   0.251    446   0.070370   0.445285
9   0.398    491   0.071745   0.517030
10  0.631    673   0.074009   0.591039
11  1.000    779   0.075761   0.666800
12  1.585    741   0.073025   0.739825
13  2.512    520   0.064899   0.804724
14  3.981    425   0.052757   0.857481
15  6.310    323   0.040152   0.897633
16  10.000   179   0.029698   0.927331
17  15.849   173   0.021660   0.948990
18  25.119   112   0.015765   0.964755
19  39.811    94   0.011310   0.976065
20  63.096    57   0.008222   0.984287
21  100.000   39   0.005599   0.989886
22  158.489   22   0.003767   0.993653
23  251.189   17   0.002424   0.996077
24  398.107   12   0.001582   0.997659
25  630.957    5   0.001022   0.998680
(d) Smoothing The empirical data of Table 3.5.1 and Fig. 3.5.1
were mechanically smoothed by replacing each of them by a moving
Figure 3.5.1 Claim size densities of Finnish industrial fire insurance. The data
of Table 3.5.1 are plotted on a double logarithmic graph. The points indicate
observed data. Unit for Z is £1000.
but in the absence of other methods it does provide some basis for
further calculation. For life insurance the method is easier to use,
because of the absence of partial damages. The method involves a
rough idealization, since for example the risk premium is used as
a measure of individual risk, whereas in practice the basis of a risk
premium involves an equalization over some groups of policies.
If the portfolio is large, so that there are many cases over the
limit (in the above example £100000), suitably selected samples
for the various risk sums may be taken and only the largest cases
treated individually.
S(Z) = Σ_i p_i (1 − e^{−c_i Z}),   (3.5.5)

where Σ_i p_i = 1.
Γ(aZ + b, α) = (1/Γ(α)) ∫_0^{aZ+b} e^{−u} u^{α−1} du   (Z ≥ 0, aZ + b ≥ 0),   (3.5.6)

as an estimate for the claim size d.f. S. There are three parameters available for fitting the curve according to the actual d.f., which can be determined so that the distribution will have the given mean (μ), standard deviation (σ) and skewness (γ). First it is useful to standardize the variable Z

z = (Z − μ)/σ,   (3.5.7)
where

(3.5.9)

S(Z) = ( w^α / (e^w Γ(α + 1)) ) [ 1 + w/(α + 1) + w²/((α + 1)(α + 2)) + ... ],   (3.5.10)

is convenient, where

w = α + z√α.   (3.5.11)
A good approximation for the complete gamma function is obtained from the formula

Γ(α) ≈ 1 + b_1(α − 1) + b_2(α − 1)² + ... + b_8(α − 1)⁸,   (3.5.12)

where

b_1 = −0.577191652    b_5 = −0.756704078
b_2 =  0.988205891    b_6 =  0.482199394
b_3 = −0.897056937    b_7 = −0.193527818
b_4 =  0.918206857    b_8 =  0.035868343.

This formula requires that the parameter α is 1 ≤ α ≤ 2. This can be achieved by making use of the recursive formula

Γ(α) = (α − 1)Γ(α − 1).   (3.5.13)
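The polynomial approximation together with the recursion is easily programmed. The following Python sketch (an illustration, not part of the text) evaluates Γ(α) in this way and compares the result with the library gamma function.

```python
import math

# A sketch of the polynomial approximation (3.5.12) with the recursion (3.5.13),
# checked against math.gamma.  Arguments alpha >= 1 are assumed.

B = [-0.577191652, 0.988205891, -0.897056937, 0.918206857,
     -0.756704078, 0.482199394, -0.193527818, 0.035868343]

def gamma_poly(alpha):
    # Reduce the argument to 1 <= alpha <= 2 using Gamma(a) = (a-1)*Gamma(a-1).
    factor = 1.0
    while alpha > 2.0:
        alpha -= 1.0
        factor *= alpha
    x = alpha - 1.0
    poly = 1.0 + sum(b * x ** (i + 1) for i, b in enumerate(B))
    return factor * poly

if __name__ == "__main__":
    for a in (1.3, 2.7, 4.5, 6.0):
        print(a, round(gamma_poly(a), 7), round(math.gamma(a), 7))
```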
The formulae given above are useful if the skewness is not too small. Troubles arise if this condition is not valid, because α and w grow to such an extent that the formulae are no longer easily workable and a special technique is needed. For example, the following
Figure 3.5.2 Examples of gamma densities having mean = 0, standard deviation = 1 and varying skewness γ.
Figure 3.5.3 A family of log-normal densities having the joint moments a_1 = 0 and m_2 = 1 but varying skewness γ. The whole curve is plotted on a linear scale (upper figure) and the tail on a double logarithmic scale (lower figure).
Make use of this function and calculate the moments a_k of the log-normally distributed variable Z in the case a = 0.
(ii) Show that if Z is log-normally distributed with parameters a, μ, σ, then

a_1 = E(Z) = e^μ e^{σ²/2} + a
m_2 = var(Z) = e^{2μ} e^{σ²}(e^{σ²} − 1)
γ = γ_Z = (e^{σ²} + 2)√(e^{σ²} − 1).
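The moment formulae of part (ii) can be checked by simulation. The Python sketch below draws a shifted log-normal sample with assumed parameters a, μ, σ and compares the sample mean, variance and skewness with the formulae above.

```python
import math
import random

# A sketch checking the log-normal moment formulae by simulation.
# Z = a + exp(mu + sigma*N(0,1)); the parameter values are illustrative.

def theoretical(a, mu, sigma):
    w = math.exp(sigma ** 2)
    mean = a + math.exp(mu) * math.sqrt(w)
    var = math.exp(2 * mu) * w * (w - 1)
    skew = (w + 2) * math.sqrt(w - 1)
    return mean, var, skew

def simulated(a, mu, sigma, size=200_000, seed=1):
    rng = random.Random(seed)
    zs = [a + math.exp(mu + sigma * rng.gauss(0.0, 1.0)) for _ in range(size)]
    m = sum(zs) / size
    var = sum((z - m) ** 2 for z in zs) / size
    mu3 = sum((z - m) ** 3 for z in zs) / size
    return m, var, mu3 / var ** 1.5

if __name__ == "__main__":
    a, mu, sigma = 0.0, 0.5, 0.6
    print("theory    :", theoretical(a, mu, sigma))
    print("simulation:", simulated(a, mu, sigma))
```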
S(Z) = 1 − (Z/Z_0)^{−α}   (Z_0 ≤ Z; α > 1).   (3.5.20)
Figure 3.5.4 A family of Pareto densities S'(Z) = αZ_0^α/Z^{α+1} (double logarithmic scale).
(b) Large claims Experience has shown that the Pareto distribution
is often appropriate for representing the tail of distributions where
large claims may occur. As was demonstrated in Section 3.5.2, the
Pareto dJ. can be combined with other types of distributions;
that is, S(Z) can be piecewise composed of several functions, each
of them being valid in disjoint intervals of the Z-axis.
(Exercise 3.5.7). For example, the first and second moments exist for α > 2 only.
Seal (1980) has collected empirical α values.
(b) Experience has shown that the behaviour of the tail in practice
is often between that of the Pareto and log-normal types. Therefore
there is an obvious need to find distribution functions which have
greater flexibility. Two such distributions are presented in this
section and others will be considered in subsequent sections.
Figure 3.5.5 Comparison of the exponential, log-normal and Pareto densities (double logarithmic scale).
Figure 3.5.6 The two-parameter Pareto densities, α = 1.8 and β varying. Note that for β = 1 the one-parameter Pareto is obtained (double logarithmic scale).
where α and β are positive parameters, Z_0 is the limit for the tail for which the formula is fitted, and b indicates the weight of the probability mass which is situated in the tail area Z ≥ Z_0, i.e.

b = 1 − S(Z_0).

Shapes of the distribution are shown in Fig. 3.5.6 for selected parameter values. As can be seen, the desired flow between the Pareto case (β = 1) and the log-normal type can be achieved by varying the parameter β (> 1). Some actuaries, e.g. Gary Patric (Prudential, New Jersey, unpublished letter), have reported successful results concerning the fit of (3.5.23) to actual distributions.
Figure 3.5.7 Quasi-log-normal densities S'(Z) as a function of Z/Z_0 (double logarithmic scale).
It is defined by the formula

S(Z) = 1 − b (Z/Z_0)^{−α − β ln(Z/Z_0)},   (3.5.24)

where the meaning of the parameters is in principle the same as in (3.5.23).
Examples of the distributions are plotted in Fig. 3.5.7. For β = 0 the Pareto case is obtained. Analysis of this d.f. can be found in Shpilberg (1977). The name 'quasi-log-normal' reflects the fact that the curves closely approximate the log-normal ones for positive β values (Dumouchel and Olsten, 1974).
(a) The idea of finding more flexibility for curve fitting can be
extended. For this purpose Benktander (1970), following the earlier
work of Benktander and Segerdahl (1960), suggested a family of
distributions which contains as special members both the Pareto
and exponential and also approximately the log-normal distribu-
tions. By suitable adjustment of parameters a better fit with actual
data can be obtained; general experience of the type of the portfolio
in question and the crucial choice of the distribution type, as
mentioned above, are not so significant and may be replaced by
parameter estimation.
The analysis is again focused on the tail Z ~ Zo of claim size
distribution above some suitably chosen limit Zo. For values
Z < Zo some other expression or directly observed frequencies in
tabular form can be used.
(b) Extinction rate  Suppose that the claim size d.f. S(Z) is known or assumed for Z ≥ Z_0 and that the necessary integrals and derivatives exist; an auxiliary function m(Z) will be introduced as follows

m(Z) = E{z − Z | z ≥ Z}
     = [1/(1 − S(Z))] ∫_Z^∞ (V − Z) dS(V)   (3.5.25)
     = [1/(1 − S(Z))] ∫_Z^∞ (1 − S(V)) dV.
The function m(Z) can be interpreted as the mean value of the claims excesses over Z or as the distance of the centre of gravity of the shaded area in Fig. 3.5.8 from the Z vertical. The latter form is obtained from the former by partial integration.
If (3.5.25) is differentiated, a differential equation for m(Z) is obtained; its solution expresses 1 − S(Z) in terms of m.
(c) Examples  It is easily verified that for the exponential d.f. (3.5.3)

m(Z) = 1/c,   (3.5.28)

and for the Pareto d.f. (3.5.20)

m(Z) = Z/(α − 1).   (3.5.29)
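The behaviour of m(Z) is easy to examine empirically. The following Python sketch simulates a Pareto sample (Z_0 = 1, α = 2.5, both assumed) and compares the empirical mean excess over a few thresholds with the theoretical value Z/(α − 1) of (3.5.29).

```python
import random

# A sketch of the extinction-rate idea: the empirical mean excess m(Z) of a
# simulated Pareto sample is compared with Z/(alpha - 1) from (3.5.29).
# Z0 = 1 and alpha = 2.5 are illustrative assumptions.

def pareto_sample(alpha, z0, size, seed=1):
    rng = random.Random(seed)
    # Inverse transform: S(Z) = 1 - (Z/Z0)**(-alpha)  =>  Z = Z0*(1-U)**(-1/alpha)
    return [z0 * (1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(size)]

def mean_excess(sample, threshold):
    excesses = [z - threshold for z in sample if z > threshold]
    return sum(excesses) / len(excesses) if excesses else float("nan")

if __name__ == "__main__":
    alpha, z0 = 2.5, 1.0
    sample = pareto_sample(alpha, z0, size=200_000)
    for z in (1.5, 2.0, 3.0, 5.0):
        print(z, round(mean_excess(sample, z), 3), round(z / (alpha - 1.0), 3))
```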
Benktander and Segerdahl have investigated a number of actual
distributions containing large claims. Their results (see Benktander
and Segerdahl, 1960) have the same general behaviour as that
depicted in Fig. 3.5.9.
Figure 3.5.9 The function m calculated for the unsmoothed claims frequencies given in Fig. 3.5.1. Unit is £1 000 000.
or

m(Z) = Z^{1−b}/a,   0 ≤ b ≤ 1   (Type II),   (3.5.30b)

where

(3.5.31)
and a and b are parameters which can be chosen within the limits
given so that the best possible fit with actual experience can be
achieved. As is seen immediately, the exponential and Pareto
cases are members of these function families. Type I gives a smaller
deviation from the Pareto straight line than type II.
Substituting the functions (3.5.30) into (3.5.27), the following
claim size distributions are obtained
(3.5.32a)
and
(3.5.32b)
The constant c is chosen so that continuous linking with the function
chosen for Z < Zo can be achieved.
Examples of the distribution of type I are given in Fig. 3.5.10.
It is appropriate to use Zo as a unit on the Z-axis.
Benktander (1970) has proved that the log-normal distribution,
which falls (depending, of course, on the parameter choice) between
the exponential and Pareto extreme cases, can be quite closely
approximated by the functions (3.5.32). As is seen from Fig. 3.5.10,
the Pareto distribution (b = 0) is the 'most dangerous' one in that
it gives the greatest probability of occurrence for very large claims.
Benktander has also derived this conclusion analytically.
Figure 3.5.10 A bunch of Benktander type I densities; a = 1.8, c = 0.1 and b = 0, 0.1, 0.2, 0.3, 0.4, 0.5 and 1; Z_0 = 1.
(c) The Weibull d.f. is also often a suitable alternative and it may be written (see Johnson and Kotz, Vol. 2, 1970, Chapter 20) as

S(Z) = 1 − exp{ −[(Z − Z_0)/a]^b }.   (3.5.34)
(d) The inverse normal for which the density function is (see
Johnson and Kotz, 1970, Chapter 15)
(3.5.35)
is also sometimes suggested for the claim size d.f. It is, in fact, a modified Bessel function.
According to the excess of loss treaty the reinsurer pays that part of each claim Z_tot which exceeds an agreed limit M, and hence the cedent's share is Z = min(Z_tot, M). Then the d.f. S_M of the amount of one claim, so far as the cedent is concerned, can be expressed in terms of the d.f. S of the total claim Z_tot as follows

S_M(Z) = S(Z)   for Z < M
S_M(Z) = 1      for Z ≥ M.   (3.6.2)
From (3.3.1) the moments of S_M are given by

a_i(M) = ∫_0^M Z^i dS(Z) + M^i (1 − S(M)).
Table 3.6.1

i   Z or M       ΔS(Z)      S(Z)       a_1(M)      a_2(M)      a_3(M)      r_2(M)      r_3(M)
1   1.000E−02    0.033953   0.033953   1.000E−02   1.000E−04   1.000E−06   1.000E+00   1.000E+00
2   1.585E−02    0.037664   0.071617   1.565E−02   2.461E−04   3.880E−06   1.005E+00   1.012E+00
3   2.512E−02    0.045479   0.117096   2.426E−02   5.986E−04   1.490E−05   1.017E+00   1.044E+00
4   3.981E−02    0.055413   0.172509   3.723E−02   1.441E−03   5.661E−05   1.040E+00   1.097E+00
5   6.310E−02    0.063707   0.236216   5.650E−02   3.424E−03   2.123E−04   1.073E+00   1.177E+00
where im is the highest value of the row index taken into the table
(see Table 3.5.1 where im = 41).
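For a claim size d.f. given in closed form the censored moments can be evaluated directly by numerical integration. The Python sketch below does this for an assumed exponential claim size d.f.; the trapezoidal integration and the parameter values are illustrative choices, not the procedure behind Table 3.6.1.

```python
import math

# A sketch of the excess of loss moments a_i(M) = int_0^M Z**i dS(Z) + M**i*(1 - S(M))
# and the risk indices r_i(M) = a_i(M)/a_1(M)**i, evaluated for an exponential
# claim size d.f. S(Z) = 1 - exp(-Z/mean).  The exponential choice and the crude
# trapezoidal rule are illustrative assumptions.

def censored_moments(mean, M, i_max=3, steps=20_000):
    density = lambda z: math.exp(-z / mean) / mean
    tail = math.exp(-M / mean)                  # 1 - S(M)
    h = M / steps
    moments = []
    for i in range(1, i_max + 1):
        grid = [(k * h) ** i * density(k * h) for k in range(steps + 1)]
        integral = h * (sum(grid) - 0.5 * (grid[0] + grid[-1]))
        moments.append(integral + M ** i * tail)
    return moments

if __name__ == "__main__":
    mean = 1.0
    for M in (0.5, 1.0, 2.0, 5.0):
        a1, a2, a3 = censored_moments(mean, M)
        print(f"M = {M:4.1f}  a1 = {a1:.4f}  r2 = {a2 / a1**2:.3f}  r3 = {a3 / a1**3:.3f}")
```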
(d) Robustness related to the selection of rules  Some tests have proved that fortunately the risk-theoretical behaviour of the claims process is fairly robust to variations in the claim size function S in so far as the top risks are cut off by reinsurance. It seems even to be possible to get an idea of the order of magnitude of the fluctuations of the aggregate claims on the insurer's net retention simply by applying the excess of loss technique presented in Section 3.6.2. Then M is to be taken as the highest level of M applied in practice (see Pentikäinen (1982), Heiskanen (1982)). These results indicate that the considerations are tolerant of fairly rough approximations if these are necessary in some special environments.
Figure 3.6.1 Surplus reinsurance. The shaded area indicates that part of the
range of the two-dimensional total claim size distribution where Z ~ Z.
Then
(3.6.11)
The double integral is to be taken over the shaded area A z of Fig.
3.6.1.
r = Z_tot/Q.   (3.6.12)

It follows from the definition that 0 < r ≤ 1. The conditional d.f. G of r as well as the (marginal) d.f. W of Q can be obtained from S(Z_tot, Q) (or rather they may be derived directly, e.g. from empirical data if available) as follows

G(r | Q) = prob{r ≤ r | Q = Q} = prob{Z_tot ≤ rQ | Q = Q},   (3.6.13)

and

W(Q) = prob{Q ≤ Q} = S(Q, Q).   (3.6.14)
Note that W(Q) is not the same d.f. as could be obtained by directly
recording the sums insured of the portfolio files of the policies in
force. The policy sums Q in (3.6.14) are weighted according to the
risk proneness, which causes the differences.
= ∫_0^∞ E(Z^k | Q = Q) dW(Q)
= ∫_0^M E(Z_tot^k | Q = Q) dW(Q) + ∫_M^∞ E(Z^k | Q = Q) dW(Q),   (3.6.15)
where
(see Straub, 1978 and Venezian and Gaydos, 1980; the last considera-
tion follows Heiskanen, 1982).
Figure 3.6.3 Density of loss degree for some insured sums Q (in units of 1000 FIM as plotted at curves). Industrial fire insurance in Finland 1973-78. Compiled by Harri Lanka (unpublished material).
(3.7.1 )
Let
(3.7.2)
a_i = Σ_j (n_j/n) ∫_0^∞ Z^i dS_j(Z),   (3.7.5)

where S_j is the d.f. of claim size assumed for the section j. As in the foregoing, for convenience of notation the first moments a_1j of the sections will be denoted by m_j and a_1 by m.
The quantity a_i can be interpreted as 'a weighted moment about zero' of the compound portfolio. This concept proves to be useful, as will be seen in what follows.
(b) Mean value Now the expected amount of the aggregate claims
and

r_2 = a_2/m²,   (3.7.11)

are introduced. The former expression is the extension of the structure variance concept and the latter of the risk index (3.3.8).
Note that q here does not refer to any real structure variable q; σ_q² simply conveys the composite effect of the section variables q_j on the portfolio variance. Furthermore, note the complete formal similarity of the composite expression (3.7.9) with (3.3.7).
γ_X = μ_3(X)/σ_X³ = [ r_3/n² + (3/n) Σ_j π_j² σ_qj² a_2j/(m_j m) + Σ_j π_j³ γ_qj σ_qj³ ] / (r_2/n + σ_q²)^{3/2}.   (3.7.14)
This formula as well as the expression in parentheses in the final
formulation of (3.7.9) are of dimension zero in respect of the
monetary unit, which makes them 'immune' to the direct effect of
inflation. This simplifies matters when (as later) periods longer than
one year are studied.
(e) In the Poisson case the S function for the whole portfolio exists and can be composed of the section functions S_j making use of the m.g.f.s as follows (see (3.4.3)). The m.g.f. of section j is exp[n_j(∫_0^∞ e^{sZ} dS_j(Z) − 1)], and the m.g.f. of the aggregate, being the product of the section m.g.f.s (see (1.6.7)), is

M(s) = exp[ Σ_j n_j( ∫_0^∞ e^{sZ} dS_j(Z) − 1 ) ]   (3.7.16)
     = exp[ n( ∫_0^∞ e^{sZ} d( Σ_j (n_j/n) S_j(Z) ) − 1 ) ].
But this is again the m.g.f. of a compound Poisson d.f. for which (see (3.4.3), (3.4.1))

S(Z) = Σ_j (n_j/n) S_j(Z).   (3.7.17)
1     2     3     4     5
1     2     4     8     16
0.8   0.1   0.05  0.02  0.03
Calculate μ_X and σ_X for the whole block in the following cases:
(i) Assume first that the structure variables q_j = q are the same for all risk units.
(ii) Assume that the structure variables q_j are mutually independent.
can occur as the claim size. For brevity Z_1 will be taken as the monetary unit; hence Z_i = Z/Z_1 = i. Let the corresponding frequencies be

q_i = prob{z = i}.   (3.8.2)

Of course, any distribution can be approximated by a d.f. of this kind. In principle it is not necessary to limit the number of Z_i values, but unfortunately the numerical computations very soon become laborious if the number of points grows large. Hence the index is generally limited to some rather small range 1, ..., s so that

E_k = E{ z_k | Σ_{i=1}^{k} z_i = x }.
Owing to the symmetry this conditional expected value has the same value E_k for all z_i (i = 1, 2, ..., k) and the sum of all these expected values is x. Hence kE_k = x or E_k = x/k. Substituting in (3.8.7), the equation

q_x^{k*} = (k/x) Σ_{i=1}^{x} i q_i q_{x−i}^{(k−1)*},   (3.8.8)
p_k = (a + b/k) p_{k−1}.   (3.8.9)
This formula was introduced in item 2.9(g) and it was stated that the Poisson and negative binomial distributions belong to it. Then the frequency f(x) for x > 0 can be manipulated into a form where it is expressed by the f values calculated for x − 1, x − 2, ....
To obtain this expression it must first be noted that according to (3.8.9)

f(x) = Σ_{k=1}^{∞} (a + b/k) p_{k−1} q_x^{k*}
     = Σ_{k=1}^{∞} a p_{k−1} q_x^{k*} + Σ_{k=1}^{∞} (b/k) p_{k−1} q_x^{k*}.
The convolution can be lowered one step making use of (3.8.6) and
(3.8.8) and by denoting m = min (x, s)
The inner sums are equal to f(x − i) as seen from (3.8.4). So the recursion formula is obtained

f(x) = Σ_{i=1}^{min(x,s)} (a + ib/x) q_i f(x − i).   (3.8.10)
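The recursion (3.8.10) is a few lines of code. The following Python sketch applies it in the Poisson case (a = 0, b = n, f(0) = e^{−n}) with an assumed three-point claim size distribution; since the claim sizes start at 1, no correction for a probability mass at zero is needed.

```python
import math

# A sketch of the recursion (3.8.10) for the compound distribution when the
# claim number probabilities satisfy p_k = (a + b/k) p_{k-1}.  Poisson case:
# a = 0, b = n, f(0) = exp(-n).  The claim size distribution q is an assumption.

def compound_recursion(a, b, f0, q, x_max):
    s = len(q)                       # q[i-1] = prob{claim size = i}, i = 1..s
    f = [f0]
    for x in range(1, x_max + 1):
        total = 0.0
        for i in range(1, min(x, s) + 1):
            total += (a + i * b / x) * q[i - 1] * f[x - i]
        f.append(total)
    return f

if __name__ == "__main__":
    n = 3.0
    q = [0.5, 0.3, 0.2]              # claim sizes 1, 2, 3 (assumed)
    f = compound_recursion(a=0.0, b=n, f0=math.exp(-n), q=q, x_max=30)
    print("total probability up to 30:", round(sum(f), 6))
    m = 1 * 0.5 + 2 * 0.3 + 3 * 0.2
    print("mean:", round(sum(x * fx for x, fx in enumerate(f)), 4), " (n*m =", n * m, ")")
```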
(b) References Panjer (1981), Bertram (1981) and Jewell and Sundt
(1981) have proved the recursion rule to be valid for more general
assumptions than above. Jewell and Sundt also present an exhaustive
study of the family satisfying (3.8.9) and extend the consideration to
some more general distributions. The recursion formula (3.8.12) for
the Poisson case was presented by Adelson (1966).
Table 3.8.1 Examples of F(x) for x < 0 and 1 − F for x > 0, per thousand. Pareto claim size d.f. (3.8.13), Poisson claim number d.f., n = 50, x = (X − μ_X)/σ_X the normed variable.

Number of intervals r
x   20   10   5   2
(3.9.2)

N(x) = (1/√(2π)) ∫_{−∞}^{x} e^{−u²/2} du,   (3.9.3)
'"1 2.00 i
I I
I I
I
I I :
I
I I I I
I I I
1.00 -------1--
I
--r-------.-------,
I l
I I
I
iI
•
0.50 I I
I
I
0.30
0.10 -I
:
••
I
I
I
0.05
,,I
,,
I
0.03 ,
,
I
I
,••
I
,••
,•
~01+---_+----+_--_+----+_--_+----~--~--~
-3 -2 -1 0 3 4 5
x
Figure 3.9.1 Comparison of the normal d.f. (N) and the NP d.f. (N_γ) as a function of x and γ. The relative deviation 100(N − N_γ)/N_γ for x < 0 and 100[(1 − N) − (1 − N_γ)]/(1 − N_γ) for x > 0 is computed and then the value pairs x, γ are sought and plotted for which these deviations are equal to −75, −50, ..., 50, 100, so constituting 'a map' to give the altitudes of the deviations. For example, for x = 2 and skewness γ = 0.1 one can read that the relative deviation is about −11%, i.e. the normal approximation gives a value for 1 − F which is 11% less than the one obtained by the NP method. The discontinuities at x = 0 are due to the presentation of F and 1 − F at either side of this line, due to the fact that F(0) ≠ 1 − F(0) for γ > 0.
(d) Numerical values of the normal d.f. can be obtained from
standard textbooks or can be programmed making use of the
following expansion (Abramowitz and Stegun, 1970).
First calculate

R = (1/√(2π)) e^{−x²/2} (b_1 t + b_2 t² + b_3 t³ + b_4 t⁴ + b_5 t⁵),   (3.9.5a)
where

t = 1/(1 + 0.2316419|x|),

and the values of b_i, i = 1, 2, ..., 5 are respectively

0.319381530, −0.356563782, 1.781477937, −1.821255978, 1.330274429.

Then

N(x) = R        for x ≤ 0
N(x) = 1 − R    for x > 0.   (3.9.5b)
When the value N = N(x) of the function is given and the matching argument value x is requested, first calculate

t = √(−2 ln N)          for 0 < N ≤ 0.5
t = √(−2 ln(1 − N))     for 0.5 < N < 1,   (3.9.6a)

and

y = t − (c_0 + c_1 t + c_2 t²)/(1 + d_1 t + d_2 t² + d_3 t³),   (3.9.6b)

where

c_0 = 2.515517, c_1 = 0.802853, c_2 = 0.010328
d_1 = 1.432788, d_2 = 0.189269, d_3 = 0.001308.

Then

x = −y   for 0 < N ≤ 0.5
x =  y   for 0.5 < N < 1.   (3.9.6c)

The absolute amount of the error is estimated to be < 7.5 × 10⁻⁸ for (3.9.5b) and < 4.5 × 10⁻⁴ for (3.9.6c).
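Both approximations are easily programmed. The Python sketch below implements (3.9.5) and (3.9.6) with the coefficients quoted above and checks the forward direction against the exact normal d.f. obtained from the error function; it is an illustration only.

```python
import math

# A sketch of (3.9.5) for the normal d.f. and (3.9.6) for its inverse.

B = [0.319381530, -0.356563782, 1.781477937, -1.821255978, 1.330274429]
C = [2.515517, 0.802853, 0.010328]
D = [1.432788, 0.189269, 0.001308]

def normal_cdf(x):
    t = 1.0 / (1.0 + 0.2316419 * abs(x))
    r = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi) * sum(
        b * t ** (i + 1) for i, b in enumerate(B))
    return r if x <= 0 else 1.0 - r

def normal_ppf(p):
    t = math.sqrt(-2.0 * math.log(p if p <= 0.5 else 1.0 - p))
    y = t - (C[0] + C[1] * t + C[2] * t * t) / (1.0 + D[0] * t + D[1] * t * t + D[2] * t ** 3)
    return -y if p <= 0.5 else y

if __name__ == "__main__":
    for x in (-2.0, -1.0, 0.5, 2.326, 3.090):
        exact = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
        approx = normal_cdf(x)
        print(f"x = {x:6.3f}  N(x) ~ {approx:.7f}  exact {exact:.7f}  "
              f"round trip x ~ {normal_ppf(approx):6.3f}")
```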
remainder term contains 1/n at least in power 3/2 in the Poisson case.
The Edgeworth expansion is most simply obtained by means of
the characteristic function of F, expanding the exponential in a
MacLaurin series and reverting back to the distribution functions
after integration, making use of the correspondence of the charac-
teristic function and the distribution function. Details of the deriva-
tion of the formula are given in Appendix B.
Reference to (3.10.1) shows that the normal approximation is
merely the form given by the Edgeworth expansion when the first
term only is retained, i.e. by ignoring terms of O(1/√n). If the explicit
expressions of higher derivatives of the normal function N are
introduced into (3.10.1), it can be shown that the error of the
Edgeworth expansion tends to infinity as the number of terms
increases without limit. The Edgeworth expansion is not a conver-
gent but a divergent series. However by taking a suitable number of
terms it gives acceptable results in the neighbourhood of the mean
value. It can be generally expected that the result is good up to a
distance of twice the standard deviation from the mean, but for points
outside this interval the result soon deteriorates. From the point of
view of risk theory this is unfortunate since in most problems the main
interest arises from points at a distance of two to three times the
standard deviation to the right of the mean. For this reason some
improvement on this series is needed, and this is given in the following
sections.
F(X) ≈ N[ −3/γ_X + √( 9/γ_X² + 1 + 6(X − μ_X)/(γ_X σ_X) ) ].   (3.11.8)
If the next two terms in (3.11.5) are taken into account and γ_X and γ_2(X) are again denoted briefly by γ and γ_2, the extended version is obtained
(3.11.9)
The mean, the standard deviation and the skewness of this distribution are approximately 0, 1 and γ (= free parameter) respectively if the short version (3.11.6) is used. It is convenient to say that x is NP-distributed, denoted NP(0, 1, γ). The variable X = μ_X + xσ_X (see (3.9.1)) is then also NP distributed having m_X, σ_X, γ = γ_X approximately as mean, standard deviation and skewness. Note that a linear transformation does not change the skewness, i.e. x and X have the same skewness. In brief it can be said that X is NP(μ_X, σ_X, γ_X) distributed having the d.f. N_γ[(X − m_X)/σ_X].
As the compound Poisson distribution was approximated by the normal d.f. N(μ_X, σ_X²) in Section 3.9, so it will now be approximated by the NP(μ_X, σ_X, γ_X) distribution. The crucial benefit of this approach is, of course, that now three parameters are available instead of only two. In fact, the normal d.f. is extended to a family of functions having an extra parameter γ available. For γ = 0 the NP function reduces to the N function.
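In computational work the NP approximation amounts to one extra line compared with the normal approximation. The following Python sketch evaluates F(X) ≈ N(y) with y taken from (3.11.8); the moments μ, σ, γ in the example are assumed values, and the formula is used here only for the right-hand tail for which it is intended.

```python
import math

# A sketch of the NP approximation (3.11.8): F(X) is approximated by N(y) with
# y = -3/gamma + sqrt(9/gamma**2 + 1 + 6*(X - mu)/(gamma*sigma)).
# The moments mu, sigma, gamma below are illustrative assumptions.

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def np_cdf(X, mu, sigma, gamma):
    if gamma == 0.0:
        return normal_cdf((X - mu) / sigma)
    y = -3.0 / gamma + math.sqrt(9.0 / gamma ** 2 + 1.0 + 6.0 * (X - mu) / (gamma * sigma))
    return normal_cdf(y)

if __name__ == "__main__":
    mu, sigma, gamma = 100.0, 10.0, 0.3
    for x_std in (1.0, 2.0, 3.0):
        X = mu + x_std * sigma
        print(f"x = {x_std:.1f}   normal 1-F = {1 - normal_cdf(x_std):.5f}   "
              f"NP 1-F = {1 - np_cdf(X, mu, sigma, gamma):.5f}")
```

The heavier NP tail for positive skewness is exactly the effect visible in Fig. 3.11.1.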
Since the proof is based on the assumption that F can be represented
Figure 3.11.1 Example of the Normal, Edgeworth and NP approximated values. Polya d.f. F with h = 100, n = 25, claim size d.f. Pareto with α = 2 truncated and discretized in points Z = 1, 2, ..., 21 (see Table 3.11.2). The slight irregular bends are due to the calculation of this strictly discrete function at equidistant points which did not coincide with the steps of F, and with irregular rounding off small errors resulted.
x   N_γ   F   N_γ − F   (N_γ − F)/F %
for positive x-values the skewness is increasing up to, say 1.5 or 2, first
at the periphery of the x values and then over the whole range. For
negative values of the argument x the deterioration of the relative
deviations for larger x values begins to appear from skewness values
rather less than 1, even if the absolute differences are still quite small.
Table 3.11.2 is intended to illustrate the critical area. For this
purpose, rather heterogeneous distributions were taken as examples
and in addition the parameter n, the expected number of claims as
an indicator of the size of the portfolio, is very small. Hence another
critical condition is that n should not be very small ( < 25).
Fig. 3.11.1 illustrates typical behaviour of the different formulae.
The normal d.f. is symmetric and therefore incapable of approximat-
ing skewed distributions. The Edgeworth expansion is clearly better,
but nowhere near so effective as the NP formula, which gives, even
for as small n as 25, a quite good approximation over the whole
relevant range.
In practice the size of the portfolio in insurance companies usually
makes the skewness parameter small because the volume parameter
n is in the denominator of its expression (see (3.3.7)). It is mostly of
order of magnitude of 0.1-0.4, and often less. The examples given
Table 3.11.2 Polya distributed values, h and n as given in the table. Truncated Pareto claim size d.f. (α = 2, 1 ≤ Z ≤ 21), discretized in 21 points. In the upper block F for x < 0 or 1 − F for x ≥ 0, and in the lower block the relative deviations of NP values from the corresponding F or 1 − F values.

−1   0.1605   0.1605   0.1593   0.1518   0.1558   0.1445   0.1644   0.1195

[(N_γ − F)/F] × 100

The slight irregularities in relative deviations are partially due to the same rounding-off effects as mentioned at Fig. 3.11.1.
(i) Summary  For convenience the original formula (3.11.8) and the modified formulae are summarized as follows.
(i) X given, F(X) to be found:

x = (X − μ_X)/σ_X   (for μ, σ and γ see (3.3.7))
g = γ/6;   x_0 = −√(7/4)

y = √(1 + 1/(4g²) + x/g) − 1/(2g)           for x ≥ 1
  = 1/(2g) − √(−1 + 1/(4g²) − x/g)          for x_0 ≤ x < 1     (3.11.16)
  = √(D − Q) − √(D + Q) + 1/(12g)           for x < x_0.
and finally

X = μ_X + xσ_X.   (3.11.17)
f(x) = (1/√(2π)) e^{−y²/2} dy/dx,   (3.11.18)

dy/dx = 1/(1 + 2gy)                                  for x ≥ 1
      = 1 − 2gx + g²(12x² − 7) ε(x_0 − x)            for x < 1.
Exercise 3.11.1  Let the moments about zero of the claim size d.f. be a_1 = £10³, a_2 = £²10⁸ and a_3 = £³10¹⁴, and let the standard deviation and the skewness of the structure distribution be 0.1 and 0.5 and the expected number of claims n = 10000. Calculate the probability that the total amount of claims exceeds 14 × £10⁶ by using (a) the Normal approximation, (b) the NP formula and (c) the Wilson-Hilferty formula.
f(x) = [Γ(p + q)/(Γ(p)Γ(q))] x^{p−1}(1 − x)^{q−1}   for 0 < x < 1
     = 0                                            for x ≤ 0 or x ≥ 1,   (3.13.1)

was used by Campagne when the rules for the EEC convention for solvency margins were under consideration. Here x is the claims ratio defined as the claims paid for the insurer's own account divided by the premiums, including also the loading for expenses. p and q are parameters, the mean of x being

μ_x = p/(p + q)

and variance

σ_x² = pq/[(p + q)²(p + q + 1)].
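Given an observed mean and variance of the claims ratio, the parameters p and q follow directly from these two moment expressions. The Python sketch below performs this method-of-moments fit; the numerical inputs are assumptions for illustration and are not Campagne's figures.

```python
# A sketch of a method-of-moments fit for the beta distribution of the claims
# ratio x: mu = p/(p+q) and sigma**2 = p*q/((p+q)**2 * (p+q+1)).
# The numerical values below are illustrative assumptions.

def beta_parameters(mean, variance):
    common = mean * (1.0 - mean) / variance - 1.0   # this equals p + q
    p = mean * common
    q = (1.0 - mean) * common
    return p, q

if __name__ == "__main__":
    mean, std = 0.65, 0.10
    p, q = beta_parameters(mean, std ** 2)
    print("p =", round(p, 3), " q =", round(q, 3))
    # check: recompute the moments from p and q
    print("mean =", round(p / (p + q), 4),
          " variance =", round(p * q / ((p + q) ** 2 * (p + q + 1)), 6))
```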
F(X) = (1/(2π)) lim_{T→∞} ∫_{−T}^{+T} [ (1 − e^{−isX})/(is) ] φ(s) ds + ½F(0).   (3.14.1)
Figure 3.15.1 Decomposition of the S function.
Applications related to
one-year time-span
(4.1.4)
Figure 4.1.1 The risk process as a difference of incoming premiums and outgoing claims.
(e) Basic equation  To get the equation in a form which gives the interdependence of the involved variables explicitly, it is assumed that the NP approximation is applicable.
For brevity U is here and later often scaled by putting U_r = 0, and U_0 will be denoted by U if the clarity in the context concerned does not necessitate the subscript. Furthermore, let y_ε be the value of the standardized variable which corresponds to the ruin probability ε according to the equation

ε = N(−y_ε) = 1 − N(y_ε).   (4.1.6)

For example, y_0.01 = 2.326 and y_0.001 = 3.090. The shorter form y will sometimes be used in place of y_ε.
(for Y ;?; 1) and (3.11.17) the relation (4.1.5) can be written in the form
of the following basic equation
(4.1.7a)
Assuming X as a compound Poisson variable and substituting the
expressions of the standard deviation and the skewness from (3.3.7)
this equation can be written
U = y,pJ(r 2 /n + (J~) - AP
+ iP(y; - 1) x (r3/n2 + 3r2(J~/n + Yq (J!)/(r 2/n + (J~). (4.1.7b)
In the particular case where the normal approximation is applicable
the above equations are reduced to the shorter form
U = Y,(Jx - AP = YEP J(r 2/n + (J~) - AP. (4.1.8)
These equations contain either explicitly or implicitly the quantities

ε, λ, M, U, σ_q, γ_q and n or P.   (4.1.9)
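For numerical work the basic equation in the form (4.1.7b) is a direct function of the quantities (4.1.9). The Python sketch below evaluates it; σ_q, γ_q, λ, P and y_ε follow values quoted in this chapter, while n, r_2 and r_3 are placeholder assumptions, so the result is not intended to reproduce the U = 15.72 £10⁶ of the standard data.

```python
import math

# A sketch of the basic equation (4.1.7b): the initial reserve U needed for a
# one-year ruin probability epsilon.  n, r2 and r3 below are placeholder
# assumptions; the other inputs follow values quoted in the text.

def basic_equation_U(P, n, r2, r3, sigma_q, gamma_q, lam, y_eps):
    var_term = r2 / n + sigma_q ** 2
    skew_term = (r3 / n ** 2 + 3.0 * r2 * sigma_q ** 2 / n + gamma_q * sigma_q ** 3) / var_term
    return (y_eps * P * math.sqrt(var_term)
            - lam * P
            + P * (y_eps ** 2 - 1.0) * skew_term / 6.0)

if __name__ == "__main__":
    U = basic_equation_U(P=73.0, n=10_000, r2=50.0, r3=5_000.0,
                         sigma_q=0.038, gamma_q=0.25, lam=0.05, y_eps=2.326)
    print("required initial reserve U =", round(U, 2), "(in £10**6)")
```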
initial amount U = U(O) such that the reserve will not be exhausted
at the end of the accounting period (Fig. 4.2.1(b)). The latter ap-
proach will be assumed in this chapter but the former problem
setting will prove important in subsequent chapters.
σ_q = 0.038
γ_q = 0.25
S_M(Z) according to Table 3.6.1.
The value of U which satisfies (4.1.7), when the above data are substituted in it, is U = 15.72 £10⁶. The risk premium income corresponding to the data is P = 73.0 £10⁶.
Figure 4.2.2 U as a function of P for M = 10, 4, 0.4 and 0.1.
Figure 4.2.3 The solvency ratio u = U/P as a function of P and λ (λ = 0, 0.05, 0.10, 0.20). Standard data (4.2.1).
(f) Ruin probability ε = f(n, U)  Figure 4.2.5 sets out the ruin probability ε = 1 − F as a function of n and U.
In order to demonstrate the influence of the structure function its variance was removed in two cases (σ_q = 0), dotted in the figure. As seen, the change for large collectives (n large) is quite crucial. This means that the structure variation is the main cause of the fluctuations of large portfolios, whereas the 'pure Poisson' fluctuation is dominant for small portfolios. This very same feature was already anticipated by Table 3.3.2.
Figure 4.2.4 The net retention M as a function of U and n (n/1000 = 1, 5, 10, 20, 50, 100). Monetary unit is £10⁶ (double logarithmic scale).
Figure 4.2.5 The ruin probability ε = 1 − F as a function of n and U (U = 2, 5, 10, 20, 50).
Figure 4.2.6 The solvency ratio U/P as a function of the net retention M for Pareto tails with α = 0.9, 1.1, 1.5 and 2.5, and for the standard distribution.
(g) The effect of the claim size d.f. To show the effect of the choice
of claim size function S, calculations based on some Pareto distribu-
tions are set out in Fig. 4.2.6 together with values for our standard
distribution.
It was assumed that S(Z) is in all cases the same and equal to the standard up to the value Z_0 = £10⁶, and the tail is upwards Pareto distributed (see (3.5.20)). Even though only 0.072% (see Table 3.6.1, line 26) of the probability mass of the S(Z) distribution is located in the area Z > Z_0, its influence on the behaviour of the tail of the function u = U/P = u(M) is quite significant. This means,
among other things, that if a portfolio gross of reinsurance is
considered, its solvency properties may depend quite crucially on
the assumptions concerning the upper tail of the claim size distribu-
tion. As already expected (see item 3.5.8(c) and Table 3.5.1) the
standard distribution is to be classified as 'dangerous'.
136 APPLICATIONS RELATED TO ONE-YEAR TIME-SPAN
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ~1 1 !1 1 1 1 1 1 1 9. 2 n = 20000
111111111111111111111111111113.4 ~ = 0.1
mllllllllllllllllllllllllllllllllllllllllllllllllllill111111111111111111 8.9 A = 0
1111111111111111111111111111111111111111111111111111111111111111111111111111111111 9 . 7 M=1
1111111111111111111111111111111111114.3 !Tq = 0
1111111111111111111111111111111111111111111111111111111111111111111111111111119.3 E= 0.001
Figure 4.2.7 Dependence of the risk reserve U according to the basic equation
(4.1.7). U is calculated first to the basic combination n = 10000, M = OA,
A = 0.05, O'q = 0.04, Yq = 0.25, P = 54.6,8 = 0.01, having unit = £10 6 . Then the
variables changed as given in the figure and the value of U is plotted individually
at each bar.
Exercise 4.2.4 Let the claim size d.f. be of Pareto type, equal to
1 - z-a for z ~ 1 expressed in suitable monetary units; excess of
loss reinsurance is arranged with maximum net retention M. Cal-
culate M by one decimal from (4.1.7) when IY. = 2.5, ~ = 20, n = 100,
A. = 0.1, 0' q = 0 and Ye = 2.33. Hint: Derive an expression for U as
a function of M and then find (by trial and error) the requested
numerical value of M.
V ( y2 - I )
P -+ Y + -6-l'q O"q - A, (4.3.2)
f: f:
claim sizes are limited to ~ M, the inequality
can be utilized. The equality is valid only if the size of one claim is
constant and equal to M. Putting this limit in (4.3.1) and using the
convention P = mn, a distribution-free upper limit for U is obtained
U ~ yJ(PM) - AP + i(y2 - l)M. (4.3.4)
r- ...
0.5 r-- i'oo.
0.3
0.2
0.1
10 100 1000 10000
M/£1000
Figure 4.3.1 The factor K (see (4.3.5)) as a function of M. Claim size df as
given in Table 3.6.1.
140 APPLICATIONS RELATED TO ONE-YEAR TIME-SPAN
of the main terms in (4.1.8) and (4.3.4) is plotted in Fig. 4.3.1 for
the standard d.f. (Table 3.6.1). This figure as well as numerous other
examples suggests that for the commonly used values of M the
factor K lies in the interval 0.5 to 0.8 and that the value K ~ 0.6
can be used as an estimate. K is (possibly with some rare local
exceptions) a decreasing function of M and it tends to zero when
M --t 00.
It follows that the approximation (4.3.4) can safely be improved
into the form
u ~ KyJ(PM) -).p +i(/- I)M, (4.3.6)
40
A:-0.10
U
-0.05
30
°
20
_------0.05
10
Basic eq.
0.15
50 100 150 200
P
(4.3.8)
80 Premium 8
legislation is above all to ensure that the values of reserve funds are
adequate also when applied to the weakest cases, it is reasonable to
assume that the safety loading A. is not positive. Then, according to
(4.3.7) and Fig. 4.3.2, U obviously must be an increasing function of
the volume of the company, i.e. an increasing function of the premium
income P. Hence it is of the type which is plotted by solid line in
Fig. 4.4.1, being a parabola for A. = O. According to practice, instead of
the risk premiums the gross pre~iums E, including also loading for
expenses, are used as a basis for the rule.
The parabolic curve can be approximated by a broken line as
presented in the figure. Expressed as a formula it is
(4.4.1)
Solvency margin rules of just this type are applied, for example,
in the decrees of the EEC (European Economic Community) in
1973. According to the EEC rule, a = 0.18, b = 0.02 and Bo is 10
million monetary units. Alternatively the basis is defined as
aggregate claims instead of the gross premiums and then the
constants are a = 0.26, b = 0.03 and Bo = 7 million units. The
constant U 0 is 0, but instead a certain minimum amount for U is
defined (for details see Kimball and Pfennigstorf, 1981).
In Great Britain up to 1978 a similar rule was applied with
a = 0.2, b = 0.1 and Bo = £2 500000. In Finland, where this type
of formula had already been introduced in 1953, U 0 = 0.2 millions
of FIM, a = 0.2, b = 0.1 and Eo = 4 millions of FIM.
(b) The effect of the level M of the net retention has already been
studied in Figs 4.2.2, 4.2.4 and 4.2.6 for excess of loss treaty applying
the data given in Table 3.6.1. Similar figures can be obtained also for
other types of reinsurance by virtue of the technique considered in
Section 3.6.
Owing to the fact that the solvency properties are fairly robust
for the dJ. of the claim size as long as the net retention limit M is
not very high (see Pentikiiinen, 1982, Section 4.2.3), the values obtain-
ed for excess of loss type of reinsurance may be used as an
approximate guide for surplus treaties as well. This is useful owing to
the fact that the latter type of treaties is rather inconvenient to handle
as discussed in Section 3.6.4. This conclusion was confirmed by
Heiskanen (1982) who calculated various cases by both excess ofloss
and surplus rules.
100
----_ ..
U
--- --- -- --
-- --
70
---
p -----
50
-- -- --
--'"
30 ",-
20
10
7
5 ----------------------------
i
3 ii
2
1~--~~~~~--~~~~~~--~~~~~.
!
0.01 0.020.03 0.05 0.1 0.2 0.3 0.5 2 3 5 7 10
lot
Figure 4.5.1 Risk reserve U as a function of the net retention M. Unit is
£10 6 • Standard data according to item 4.2(b).
(e) The effect of the background factors (see (4.1.9)) can be examined
148 APPLICATIONS RELATED TO ONE-YEAR TIME-SPAN
2.0
M
0.7
0.5
0.3
0.2
and solving
(4.5.1)
W=~(~+U+2A)'
K Y u
(4.5.3)
where
(4.5.4)
The ratio w is a hyperbolic function of the solvency ratio u as
plotted in Fig. 4.5.3.
If the coefficient f3 is positive, then the curve has a minimum
u1 = J f3
w =-~~-
J
2(A + f3)
(4.5.5)
1 K2/
If f3 is negative, then the curve is increasing for u > 0 and w is negative
for
w=MIU
0.15
0.10
0.05
M ~
2()' + J().2 - i( 2)) U.
2 2 q (4.5.7a)
Ky
In the special case, when no variation of the basic probabilities
exists (or it is omitted or included in ).), i.e. a q = 0, we have
4
M=~).U~)'U (for 8 = 0.001) (4.5.7b)
Ky
This formula is, in fact, the same as (4.3.8). It leads to a rule ofthumb
which is often used in practice by insurers: that the net retention
should be a certain percentage of the reserves U which the company
is willing to lose for covering losses during a year. If ). is taken to
be e.g. 5%, M is 0.05U. This estimate is based, however, on such a
weak premise that it may not be very useful except in a few special
cases. Neither does it offer any noticeable simplification compared
with(4.5.1).
Another rule, according to which M is related to the premium
income, could be obtained from the latter part of (4.3.8).
U'(M) = [JMyJn
(a (M))
2
- ).nJ(1 - S(M)). (4.5.8)
finite M? Note that the type of the solution depends on the sign of the
derivative at the origin (see exercise 4.5.2).
Exercise 4.5.4 Suppose that the standard data (other than M) and
distributions given in item 4.2(b) are valid and U = 10 million.
What should the maximum net retention M be according to the
basic equation (4.1.7)? Calculate also an approximate value by
means of (4.5.1) taking P = £65 million and K = 0.6.
Exercise 4.5.8 Let the claim size dJ. be, in suitable monetary units,
What should the maximum net retention M be, when excess of loss
152 APPLICATIONS RELATED TO ONE-YEAR TIME-SPAN
(a) The problem The basic equation in its short form (4.1.8) will
now be utilized to deal with the case when the portfolio is subdivided
into independent sections indexed by j = 1, 2, ... , r, each of which
has its own claim size dJ. Sj' safety loading Aj > 0, expected number
of claims n.,
J
net retention M.J and structure variable qJ.. First the
moments ak about zero are defined separately for each section. They
depend on the retentions M j (see (3.6.3) and (3.6.8)) and can be
denoted by akiM/ For brevity, let mj = miM) = a1iM/
The problem is to determine the Mjs so that:
(i) the expected amount of profit as a function of the Mjs
f(Ml' M 2, ... , Mr) = I.Ajnjmj' (4.6.1)
is maximized; and
(ii) the basic equation (4.1.8) is satisfied for the whole portfolio:
(4.6.2)
a'k/Mj) = a~J f: j
ZkS'i Z ) dZ + (1 - SiM))M; ]
The extremal values can be found among the joint zero points of
Q and of these derivatives. Putting the factors in braces equal to
zero it follows that
O"x(p-IL 2
M.} = A.. - n.m.O" .. (4.6.4)
yp } }} q}
Solving the Mjs from these equations and substituting into (4.6.2),
p can be determined.
Equations (4.6.3) give only the necessary conditions for solution.
In actual cases, where the data involved are known, it has to be
investigated whether solutions exist which among other things
depend on the values of U. A further step is finding a numerical
solution when the distributions and data are given. This may
give rise to considerable problems, because the variables M j are
buried in the moment expressions.
The problem concentrates on the search for variables which give
an absolute maximum for the profit function f on the surface
Q = O. Note that also one or more of the factors I - S(M) in (4.6.3)
may be 0, which means that the section in question needs no rein-
surance and the consideration is to be limited to the remaining
part of the portfolio.
5' rZ)
I I
I I
I
I
I
A z
Figure 4.7.1 Excess of loss treaty. The total amount of a claim Z,o, exceeding
the limit (net retention) A is divided between the cedent and the reinsurer:
Z,o, = Z + Zre'
Exercise 4.7.2 The upper tail of the claim size is supposed to follow
the Pareto dJ.
S(Z) = I - b(Zo/ZY
Calculate the excess of loss premium (4.7.2) for Zo < A < B.
X ={~l - c)(X - A)
when X ~ A
r• when A <X <B
(1- c)(B - A) when X ~ B,
where X is the total claim.
The reinsurance risk premium PsL(A, B) is easily expressed as
follows
f:
so that the net premium becomes
(b) Discussion Like the excess of loss premium, the stop loss pre-
mium is sensitive to uncertainties of the tail of the claim size dis-
tribution as well as to inflation. In addition it is sensitive to in-
accuracies and to the structure variations of the number of claims
n. Table 4.8.1 demonstrates how the stop loss premium depends
on the variation of the basic probabilities indicated by O"q and Yq
(see Section 2.7) as well as on other environmental aspects.
It is important to notice that not only the short-term oscillation
of the basic probabilities (assumed when the mixed compound
Poisson process was defined in item 2.7(c») but also the long-term
cycles and trends mentioned in item 2.7(b) (consideration of which
is left until Section 6.1) very strongly affect the stop loss premium.
It can be assumed that allowance will be made for trends and that
the parameters, especially n, will be adjusted accordingly. Even if
the long-term cycles, e.g. when ruin probabilities are evaluated,
are in general intricate to handle, they can be considered in connec-
tion with the rating problem as a random fluctuation of the same
158 APPLICATIONS RELATED TO ONE-YEAR TIME-SPAN
5000 7.30 44.0 4.5 0.04 0.30 1.25 1.75 0.0005 0.0000 0.0005
5000 8.90 169.0 122.7 0.04 0.30 1.25 1.75 0.0134 0.0001 0.0133
5000 9.60 681.0 4314.0 0.04 0.30 1.25 1.75 0.1014 0.0390 0.0624
100 7.30 44.0 4.5 0.04 0.30 1.25 1.75 0.1868 0.0836 0.1032
300 7.30 44.0 4.5 0.04 0.30 1.25 1.75 0.0721 0.0118 0.0603
1000 7.30 44.0 4.5 0.04 0.30 1.25 1.75 0.0168 0.0002 0.0166
3000 7.30 44.0 4.5 0.04 0.30 1.25 1.75 0.0020 0.0000 0.0020
3000 7.30 44.0 4.5 0.00 0.00 1.25 1.75 0.0015 0.0000 0.0015
3000 7.30 44.0 4.5 0.05 0.50 1.25 1.75 0.0023 0.0000 0.0023
3000 7.30 44.0 4.5 0.10 1.00 1.25 1.75 0.0062 0.0000 0.0062
3000 7.30 44.0 4.5 0.20 1.50 1.25 1.75 0.0275 0.0014 0.0261
(4.9.l)
and for the formulae (3.11.16) the skewness is the same as given in
(3.3.7) (note that a linear transformation, here X --+ X/P, does not
affect the skewness).
The computation of the indices r 2 and r3 and the evaluation
for (J q and yq can be computed from the statistics in question, or
(if they are not known and an advance estimation of the error is
needed) they may be obtained from the general experience of the
insurance class in question.
As an example let n = 100, r2 = 10, r3 = 200, (Jq = 0.1 and Yq = O.
Then
(Jx/P = j(10/100 + 0.1 2 ) = 0.33,
and
Yx = (200/100 2 + 3 x 10 X 0.1 2 /100 + 0)/(0.3W = 0.64,
and for e = 0.05, y = ± 1.96 according to the nomogram of Fig.
3.11.2 Xl = - 1.7 and x 2 = 2.3. Hence
- 1.7 x 0.33 ~ N If ~ 2.3 x 0.33,
or
- 0.6 ~ N / f ~ 0.8.
to arise due to trends and cycles (see item 2.7(b)), which should
be estimated in one way or another (see McGuinness, 1970).
f
k 0xo F(X) dX ~ AE(X). (4.10.2)
k Il +A)P Ny (X ax P) dX ~ AP,
(4.10.3)
(4.10.6a)
Because for minor risk collectives, for which the experience rating
is usually applied, the variation of the basic probabilities may be
less significant than the other fluctuation (see Table 3.3.2), the
formula can be simplified by putting 0' q = O. Then
(4.10.7)
is of special interest.
166 APPLICATIONS RELATED TO ONE-YEAR TIME-SPAN
p 10% 5% 1%
((X,qi
q as follows
Now let us assume that for the risk unit concerned a sequence of
total claims Xl' ... , X t for t years is observed as illustrated in Fig.
4.10.1. It can be expected that they are clustered in a more or less
narrow area on the X -axis. Heuristically it is easily conceived that
this kind of accrued experience makes it possible to conclude what
is the order of magnitude of the unknown parameter q. Obviously
those values are most probable which correspond to the curves
having their modes in just that area where the X values are clustered.
The well-known Bayesian rule enables us to find an expression for
the probability density of the unknown parameter q by the condition
that X has the observed sequence of values. Then an obvious idea
is to amend the premium formula (4.10.12) by weighting P(q) by
these conditional probabilities as follows
4.10 EXPERIENCE RATING, CREDIBILITY THEOR Y 169
o
p(q)
lJ~ ,
[l!(X p q) dH(q)
t 1.
[l !(X i , q) d H(q)
o i= 1
(4.10.13)
The expression in brackets is just the conditional Bayesian pro-
bability density.
If the density function! is known, as assumed, then in principle
this expression gives an estimate for the premium for the next year.
The expression (4.10.13) has been widely examined and it is
proved inter alia that in the Polya case where H is the incomplete
gamma function and F the mixed Poisson (and, as Jewell (1976)
has shown, also for some other function mixtures) the formula
can be reduced to the very simple form
(4.10.14)
where
(4.10.15)
var(X):::::: V(X):::::: ai = t
ro
(X - E(X))2 dF(X) , (5.1.1)
50 100
X tot
Figure 5.1.1 The stop loss treaty (X*) and excess of loss treaty (X). The
realizations (amounts retained on the cedent's own account) of X* are on the
straight lines plotted solidly and the realizations of X, shown by circles,
are distributed in the half axis angle 0 ~ X ~ X'o'. Compound Poisson aggregate
claim df n = 20, claim sizes Pareto-distributed, IX = 2, Zo = 1 (see (3.5.20»,
net retention for the excess of loss arrangement M = 5.
(5.1.8)
Thus a reinsurance of quota share form (see Section 3.6.3) gives the
desired result.
Hence a quite general theorem is proved that if the reinsurance
premium increases with the reinsurer's variance, and is thus of the
5.2 RECIPROCITY OF TWO COMPANIES 175
V:
point P move along EI(V~). Then V~ is preserved unchanged but
is changing all the time according to which of the ellipses E2 is
intercepted until a point is reached in which El(V~) is tangential to
one of the E2 ellipses. Then V:has reached its minimum. So a point
5.2 RECIPROCITY OF TWO COMPANIES 177
J(
c,
J(~)
Figure 5.2.1 The families of level ellipses. E'1 and E~ plotted by solid lines:
the cases V~ = VIand V~ = V 2.
(ii) Prove that Po is the minimum for all collectives having the
same P, n and structure function H, if the mixed compound
Poisson distribution is assumed.
(iii) Rewrite the basic equation (4.l.8) making use of p.
2 3 4 5 6 7 8 9 10 II 12 13
j n.
J r2j cr qj mj PJ Uj ij Uij Gij Gi G~, G,V
1 10000 5 0.05 0.0028 28.0 4.6 12 26.8 4.2 2 1.2 0.3
2 5000 150 0.10 0.0088 44.0 26.4 13 8.1 3.2 5.1 7.1 9.2
3 1000 5 0.80 0.0028 2.8 6.7 23 27.2 5.9 3 1.8 0.6
taken to be 1. Then the difference can be called 'the gain G' obtained
when the groups are united as one collective:
G I23 = VI + V z + V3 - V I23 = 10.1.
Now the problem is how to divide reasonably this gain among the
groups, i.e. to find amounts Gi satisfying the condition
3
L G; = G123 · (5.3.2)
i= I
6.1 Claims
As was specified in item 3.1 (c), X(t) includes both the paid and
outstanding claims.
X (1,T)
r
Figure 6.1.1 A realization of a claims process extended over T years.
(c) The trends in the parameter n are partially due to the gradual
change of the size of portfolio and are partially affected also by
alterations in the risk exposure inside the portfolio. They can be
taken into account by assuming that the model parameter n is
time dependent. This can be done conveniently by means of a
growth factor rit) = n(t)/n(t - I), or equivalently
t
n(t) = n TI ri'r), (6.1.2a)
t= I
and by putting
n(t) = nr~. (6.1.2b)
Another approach would be to use linear formula
n(t) = n + r~t. (6.1.2c)
(d) The cycles Besides the trends there are the periodic variations
in claims frequencies; these are considered next. Even though the
variations concerned are often in practice rather irregular, they are
by convention called 'cycles'. They are distinguished from the short-
term 'oscillation' already introduced in Section 2.7; these are
composed of only variations appearing as waves which extend over
two or more years, whereas the 'structure variations' of consecutive
years are mutually independent. The simplest way is to find some
suitable deterministic formula to indicate the relative deviations,
denoted by z(t) = Lln(t)/n(t) of n(t) from its trend flow (6.1.2b). A
deterministic sinusoidal form
z(t) = zm sin(wt + v), (6.1.3)
is assumed as an example (Fig. 6.1.2), where Zm is the amplitude,
z (tJ
v=o
, ~~--_
'-, v=-1T/2
,,,
",'
"
,,
,,
\
,
, ,, /
I "
I
,"
I
,, "
,-- " "
;
"
Tz
°
the consecutive variable values.
If b 1 = b2 = ... = then (6.1.5a) is reduced to a so-called auto-
regressive, briefly AR process. A benefit of the ARMA approach is
that a stationary time series may often be described by a model
involving fewer parameters than a pure AR process (Chatfield 1975,
paragraph 3.4.5).
According to Wold's decomposition theorem (see Chatfield, 1978,
Section 3.5) any discrete stationary process can be expressed as the
sum oftwo uncorrelated processes, one purely deterministic and one
purely indeterministic. The best-known examples of purely deter-
ministic processes are those whose realizations are simply of the
sinusoidal form (6.1.3).
18
14
10
19641
-~9t62
-6
Figure 6.1.4 Relative changes in the amount of losses per vehicle. German
motor third-party insurance. The actual values (solid line) and the forecast
(dashed line) are according to (6.1.7). Reproduced by permission from Becker
(1981).
wit)
t Time
(0) Mean values After the definition in item (n) we are now ready to
calculate the main characteristics of the annual claims expenditure
X(t) and its accumulated amount X(l, t). The mean values are obtain-
ed directly from the sum (6.1.1)
t
flx(l, t) = I flx(r), (6.1.13)
r~ 1
(6.1.17)
(6.1.21)
(q) Skewness For the calculation of the skewness the third central
moment is needed and is obtained in a similar way to the variance
(see (3.3.7))
t
and the mean value and standard deviations are to be taken from
(6.1.13) and (6.1.20). The skewness
Y = Yx(1, t) = ,u3(X(1, t))/a x(1, t)3, (6.1.25)
is obtained by means of (6.1.22).
Then
x = x(l, t) = (X - ,ux(1, t))/ax (1, t), (6.1.26)
is transformed into y according to (3.11.14), after which
F(X; 1, t) = F(x; 1, t) ~ Nix) = N(y). (6.1.27)
(b) Risk premium Now a formula for the pure risk premium for
year t can be given by making use of(4.1.1) and (6.1.14)
(6.2.3)
t= 1
5 10 15 20 25
Time t
Figure 6.2.1 A simulated solution of the difference equation (6.2.7b) for
a 1 = - 0.5, a 2 = - 0.5, Ao = 0 and Il a normally distributed variable having
mean 0 and standard deviation (J = 0.05. The variation range is approximately
± 0.1 and the wavelength 3.2 years.
or
(6.2.10)
providing that the relative stochastic deviations are not large and
given that X ~ P. Hence, the fluctuation of f is about the same,
whether both AX and AP fluctuate or whether their sum is replaced
by a modified AX' having a range of fluctuation about the same as
the original sum of these variables; that is, if P is taken as determinis-
tic and the standard deviation and skewness of the claim fluctuation
are adjusted to correspond to the fluctuation of the loss ratio.
This approach would be expected to be most useful in cases where the
long-term variations, i.e. trends and 'cycles', are programmed to be
deterministic, and consequently the values of AX (and AP) for
consecutive years are mutually independent.
Clearly 'the asset risks', i.e. the risks involved with investments,
are much more diversified in different countries than are the 'liability
risks' generated by the claims expenditure and rating. Therefore it
seems to be quite difficult to find any proven detailed technique for
their consideration which could be valid to a satisfactory degree
in the very varying circumstances existing in different countries.
Only some rather general aspects can be discussed in this section, and
then the investment income will be incorporated into the general
model as a special entry in Chapter 10, where business planning is
considered.
While a more sophisticated technique for the handling of asset
risks in connection with risk theoretical considerations is at present
deemed to be beyond the scope of this book, pending future develop-
ment of the theory, this need not occasion any serious gap in the
model. As in the case of many other uncertainties, it is always
possible to make deterministic or semi-deterministic assumptions
about the anticipated future development of the return rates and
asset risks. In fact this will be provided in Chapter 10. If, in turn,
reasonably optimistic, pessimistic and likely hypotheses are used
as input data, useful information about the sensitivity of the model
to asset risks can be obtained. Surely such an approach, even though
seemingly crude, is better than complete ignorance of the asset risks.
15
1975 1980
~ 300
."
.s
~
1ii"
« 200
100
the interest rate is seen in Fig. 6.3.1, in particular in the flow of the
rate in the USA in the years 1979-81.
Still more volatile is the current market value of the shares.
This is illustrated in Fig. 6.3.2, where the actuaries' all share index
in the UK for years 1972-82 is depicted. A temporary plunge down
more than 50 per cent can be seen; however, the yields on shares vary
much less than the share prices. A well-planned investment policy
may very much preclude the possibility of capital losses. Some
shares may decline in value, but if only sound securities are bought,
they will refind the original price level if given enough time.
It is highly doubtful whether modelled forecasting of interest
rates is feasible for more than perhaps a few years ahead. For
analysis of the general behaviour of an insurer's economy and for
some particular applications, simulation of the gain rates may be
useful by employing cycle generating time series (Godolphin and the
Maturity Guarantees working party, 1980). The problem and appli-
cation under consideration will determine which kind of prognoses
and techniques for the calculation of the yield on investments can be
accepted. If the solvency of an insurer is evaluated, then a cautious
assumption as to rate and possibly some kind of hypothetical
depreciation may be advisable. In rate-making and corporate plan-
ning alternative prognoses can be applied in order to get a feel for
the sensitivity of the model outcomes to the various assumptions.
6.4 PORTFOLIO DIVIDED IN SECTIONS 211
(f) Investment losses Besides the yield, the depreciation (or possi-
bly appreciation) of the values of investments, especially securities,
needs to be considered. If the market value slides below the book
value, the depreciation of the latter must immediately be reflected
as negative income. The possibility of this event should be taken
into account in one way or another, at least when the factors
affecting the solvency of insurer are evaluated.
A current practice in many countries is not to book the assets
at market values. In particular, due to inflation, considerable
underestimations in assets may arise, which are a cushion against
unexpected losses in investments and can well be taken into account
in solvency evaluations and also when the model assumptions are
deliberated.
More details of the non-stochastic risks including the asset risks
are discussed in Pentikainen 1982, Section 2.9.
= L n(r)m('r), (6.4.2)
r::::;: tl
u= 1
rg/u),
and
n .(r)
mer) =
J
n r )m.(r)
L. ~( J
= ~v.(r)m.(r) = ~v.(r)m.
L.
j
J J L.
j
J J
n r .(u).
u= 1
XJ
(d) Skewness Finally the third moment, needed for the calculation
6.4 PORTFOLIO DIVIDED IN SECTIONS 213
(6.4.4a)
t,j
(i) The model parameters are first derived and calculated on the
basis of the section parameters as weighted averages as given
above, and then the computation (either simulation or direct
analytical handling of the problem) is performed for the whole
portfolio.
(ii) All the simulations and calculations are done separately for
each section, and finally the section outcomes are summed up.
The first approach is, of course, very much more economical
than the second as far as computation time and other resource
requirements are concerned. It means, however, that the cycles,
real growth and even inflation must be the same for all sections as
mentioned in item (e). If this cannot be assumed, approach (ii)
may be more rational, or a special technique is needed to calculate
the weighted average parameters.
(b) Broad and narrow approaches The basic equation (6.5.1) can
be treated by quite general conditions, taking into account numerous
inside and outside impacts and presses and business strategies. As a
rule the model gets so complicated that often only simulation can
cope with it. This general study will be postponed to Chapter 7.
It is useful before that to deal with the model by making some simpli-
fying assumptions. In this way it is possible to throw light on many
special problems which are of interest and which can serve as build-
ing blocks for more comprehensive approaches.
(c) The yield of interest will first be partitioned into two compo-
nents
(6.5.2)
where W is the technical reserve, i j is the rate of interest (see item
6.3(a)) and the subscript - 1 indicates time shift to the previous year.
By using this formula and replacing .1U by U - U _ l ' the basic
formula (6.5.1) assumes the form
(6.5.3)
The ratio u will be called the solvency ratio. Even though B may be
stochastic in some applications by virtue of the safety loading 1 (see
item (6.2(e», the 'deterministic' symbol B will be mostly used.
(6.5.7)
Hence
(6.5.8a)
and
rjgp = rJr gp' (6.5.8b)
All the factors and ratios may be time dependent, although the
argument t is not written out for brevity reasons. The subscripts are
chosen to indicate the factors interest, growth and premium inflation
involved (see notation, item 1.5(1)). This rate and factor can be
interpreted as generalizations of the classical ij (interest rate) and
rj (accumulation factor = 1 + i), operated in the 'static' interest
calculus based on constant values of the money. The rate ijgp and
the factor rjgp are their counterparts in 'dynamic calculus' where
both the value of money and the real volume of business are chang-
ing. Note that the rule rj = 1 + ij cannot be extended to r jgp ' i.e.
generally rjgp =1= 1 + ijgp'
Then (6.5.5) can be rewritten
(6.5.9)
(6.5.14)
(6.5.15)
The interpretation of the basic equation (6.5.13) is obvious. The
relative amount of the risk reserve u (solvency ratio) changes from
year to year for the following reasons:
220 RISK PROCESSES WITH TIME-SPAN OF SEVERAL YEARS
r(r, t) = n
.=t+1
rjgp(v) = n t;V)()
v=t+1 r g vrpv
= B((r» x
Bt
n
v=t+1
rj(v). (6.6.1a)
6.6 DISTRIBUTION OF THE SOLVENCY RATIO u 221
In the particular case where all the factors involved are constants
this is reduced to a power expression
r(r, t) = r:;t. (6.6.1b)
If the growth and inflation are omitted, i.e. B is constant, it is further
reduced to the well-known accumulation factor r:-
t = (1 + iJ-t.
In the particular case where the rates of interest, growth and inflation
are not time dependent this equation is modified in form
t t
u(t) = u(O)r: gp + L pir)r:g-;,t - L f(r)r:;pt. (6.6.3b)
t=1 t=1
222 RISK PROCESSES WITH TIME-SPAN OF SEVERAL YEARS
where
t
u{-t:}
Ruin barrier
nm x (1 + z(r)) x nt
rg(v)rx(v)
/lr(r) = t v=1
nm x TI r g(v)r p(v)/(1 - Ab - c)
v=1
I
where
D(r) = VD1 rx(v) VD1 rp(v), (6.6.7)
The first two terms of the last expression give the average flow of
the solvency ratio without the cyclical fluctuation, and the last
term sets forth the cycles.
If rjgp < 1, as often happens in practical applications, the
possible downward or upward trend declines when t grows and
/lo(t) continues oscillation around a horizontal equilibrium level.
This equilibrium level can be obtained from (6.6.8) by dropping
the effect of cycles, i.e. by putting z(r) == 0. Then the formula is
transformed to a geometric series, the sum of which is
_ (O)t
() -u ,1-r:gp (6.6.8a)
/lot rjgp+IL~,
rigp
224 RISK PROCESSES WITH TIME-SPAN OF SEVERAL YEARS
ii It I
A
(l-r )
'gp
Figure 6.6.2 The trend of the average solvency ratio and the deviation caused
by the cycles according to (6.6.8) when r igp < 1.
En =
1
l~m Jlu(t) = -_A1
00 r igp
. (6.6.9)
as follows
Giu) = G(u o , u; 1, t)
= prob{u(t)::;; ulu(O) = uo}
= I-prob{f(I,t)::;;R(t)-ulu(O)=u o }
= 1 - pro b {x ::;; x}
~ 1 - N(y), (6.6.17b)
where, since ,ur(1, t) = R(t) - ,uo(t) and O"r(1, t) = O"u(t) (see Fig.
6.6.1),
(6.6.18)
and, as above, y = v;
l(X) with y = Yr(1, t) = - yu(t).
Here and in the following the case whereby the dJ. F or G is
discontinuous is ignored, which could provide an auxiliary term
prob{u(t) = u} (or similar) when the passage from u(t) to f(l, t)
is made.
In addition, either gamma approximation, presented in Section
3.12, or the Wilson-Hilferty formula (item 3.11(m)) can be used for
the calculation of F(X; 1, t) and Gt(u) in a similar way to the NP
formula based on the same characteristics as above.
U
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
U(O)
U(T)
:ur
I
(t!
t -1 T Time
Figure 6.7.3 A crosscut of the process, the distribution of the state of the
business measured by means of the risk reserve U(t) at time t. The function gt
is the density of all cases and wt represents the portion of such sample paths which
still survived at t - 1.
f:
or equivalently
(6.7.9)
(I) The set of all paths which begin at U(O) = U 0 and satisfy U(T) ~
U, irrespective of whether the process is in a state of ruin before
the year T or not.
232 RISK PROCESSES WITH TIME-SPAN OF SEVERAL YEARS
(II) The set of all those sample paths which are in a state of ruin,
i.e. satisfy Vet) < Vr(t), at one or more of the intervening points
t = 1,2, ... , T - 1.
computed
[W(v, t)]v,t = [W(U/t) - vh, U/T);t, T)]v,t
for t = T - 1, T - 2, ,,, , 1; v = 0, 1,2, ... , (6.7.13)
after which 'I'T is obtained from (6.7.12) and (6.7.11) by another
series of numerical integrations.
The benefit of this method is that the intervals over which the
integrals are to be taken are shorter than in the direct truncated
convolution and mainly consist of the tails of the distributions.
Whilst the integration is not extended over the mode of the distri-
bution, the rounding-off errors do not cause problems.
Note that the calculation algorithm can be arranged also by
defining the set II according to the time when the ruin first occurs, as
will be shown in exercise 6.7.1. The benefit of the approach given is,
however, that the matrix (6.7.13) does not depend on the initial
capital U 0 and can be used unchanged when 'I'T is computed for
several values of U o.
~
for r ~ 0
R(r) {; for 0 < r ~ 1 (6.8.l )
for r > 1.
This distribution is also called rectangular owing to the shape of
its density function. A sequence of uniformly distributed random
234 RISK PROCESSES WITH TIME-SPAN OF SEVERAL YEARS
F F
1---------------------- 1 -----------.-----------------------:~-
~
F(X)
! F~Epi
r f-~~~._j,-----'
x x x x
II
s • s •
Table 6.8.l Examples of the absolute and relative limits of the simulation
errors according to (6.8.4). Confidence level 1 - e = 0.95; Y, = 1.96.
0.4
F'(X)
0.3
0.2
0.1
2 3 4 5 6 7 B 9 10
X
Figure 6.8.2 Exact values of r'(X; 2) (solid line) and its simulated estimate
(the heights of the dotted pillars); sample size 2000.
°
from the open interval (0, 1). For many applications it is necessary
to employ generators which do not give or 1 as output.
Roughly speaking, a computer takes about the same time to
generate a random number as it takes for a couple of multiplications.
°
consists of normally distributed mutually independent random
numbers having mean and standard deviation 1.
A simple way of generating approximately normally distributed
random numbers is first to generate m uniformly distributed mutual-
ly independent random numbers. Their sum X is approximately
normally N(mI2, m/12) distributed. Often m = 12 is chosen. Then
X - 6 is approximately normally N(O, 1) distributed. It gives, how-
ever, a bias in the periphery of the distribution range and is therefore
unsuitable for many risk -theoretical applications.
6.8 MONTE CARLO METHOD 239
°
random variables having mean = 0, (J = 0.03. Plot A{t) for
t = 1,2, ... , 25 first putting (J = and then three (or more) realiza-
tions regarding the given value of (J (readers who have no random
number generator available can use numbers given in Appendix
E). Verify that the wavelength is the same as that obtained by the
analytic calculation presented in exercise 6.2.1.
(iii) Set the counter: c(X) ..... c(X) + 1 if X sim :::; Xj (j = 1,2, ... ).
Finally
F(X) ~ c(X)/s (j = 1,2, ... ). (6.8.9)
of size wk ' the so-called strata size, will be assigned for each k. The
strata sizes are to be determined so that optimal efficiency can be
achieved. For details consult, for example, Rubinstein (1981)
Chapter 4.3.
(i) The claim number process is of the mixed Poisson type with
a Poisson parameter (the 'conditional' expected number of
claims) n(t) = n(t)q(t) for t = 0, 1,2, ... , T. The structure vari-
ables q(t) are mutually independent and have a joint dJ. H.
(ii) The trends are given by a deterministically defined factor
242 RISK PROCESSES WITH TIME-SPAN OF SEVERAL YEARS
The dJ.
Fn(X) = F(X In(t) = n). (6.8.14)
t= 1
and
) r3
y(t = J(nr~) . (6.8.17)
(a) u process The whole risk process, i.e. the claims as well as the
flow ofthe solvency ratio u(t), can now be simulated. For this purpose
the sequence X(1), X(2), ... ,X(T) or rather the corresponding
ratios f(t) = X(t)/B(t) are to be generated as presented in Section
6.8.3. Furthermore assumptions and basic data for premium calcula-
tion and for the yield of interest are now needed as described in
Sections 6.2 and 6.3. The approaches may be either deterministic
or stochastic and dynamic control rules can be programmed as
examples in later sections demonstrate. Equation (6.5.15) then
immediately gives u(t) and a sample path can be generated such as
that in Fig. 6.8.3 where, to begin with a simple case, safety loading
and rates for interest, inflation and growth were kept constant.
1.0
<Xl
"-
;:)
"
"o
:;::
e
~0.5
c
~
oVl
5 10 15 20 25
Time
Figure 6.8.3 A sample path of the business flow process; algorithm (6.5.13)
has been used.
1.5
Ruin barrier
5 10 15 20 25
Time
1.5
o
:c
~
>-
u
~ 1.0
o
Vl
0.5
Figure 6.8.5 Risk process subject to cyclical variation of the basic probabilities
having the amplitude zm = O.l, and wavelength T z = 12 years. Inflation varying
as will be explained in item 7.6(d). Other data as in Fig. 6.8.4. Ruins 18/100. The
stars indicate the ruin of the sample path.
6.8 MONTE CARLO METHOD 247
over the ruin barrier (e.g. the legal minimum amount of solvency
margin), it indicates a solvent state.
1.5
"
e
o
>-
u
~ 1.0
~
0.5
5 10 15 20 TIme 25
Figure 6.8.6 The same process as in Fig. 6.8.5, but now the phase of the business
cycle is randomized.
248 RISK PROCESSES WITH TIME-SPAN OF SEVERAL YEARS
to the whole number of the sample paths is an estimate for the ruin
probability 'P T (in Fig. 6.8.5, 18/100).
(e) Dynamics The model given is still simplified and was aimed
solely at demonstrating the simulation method. It will be further
developed later, taking into account dynamics which may appear
in actual circumstances. When an insurer approaches a critical
state measures will be probably sought to improve profitability, e.g.
economising in costs, reducing risks such as lowering the net
retentions, acquiring additional solvency margin, etc. On the other
hand if theory and practice are compared, i.e. by asking whether or
not the model outcomes are in acceptable coincidence with actual
experience, it must be kept in mind that insurers, being in a weak
solvency situation, seldom become bankrupt. They are instead
usually merged with other companies. So the actual number of
observed 'ruins' is not comparable with the theoretical ruin number;
rather the latter is comparable with the number of dissolved compan-
ies (mergers included) even though, of course, a merger can also
arise for reasons other than the threat of an imminent ruin.
50
40
20
10
50
Time
u(ti
1.0
0.5
The middle term, the union of the sets At' is the set of all the sample
paths which are at one or more times in a state of ruin. Since it is just
the finite time ruin probability required, the limit inequalities
T
max qJ(t) ~ 'P T ~
1 <;;t<;;T
L qJ(t), (6.9.3)
t= 1
are obtained.
The upper limit deduced can be illustrated by means of Fig. 6.7.3,
writing
(6.9.4)
t= 1
The area C in the figure is equal to A'P t = 'Pt - 'Pt - 1 , which is always
a part of the area C + D = qJ(t); from this (6.9.3) follows.
252 RISK PROCESSES WITH TIME-SPAN OF SEVERAL YEARS
(C) Improved limits The above limits can be still further improved
in a simple way. For this purpose, note that the sample paths
which are in a state of ruin in more than one year are counted several
times in the upper limit. An exact value for ruin probability could
be found if each sample path, being ruined one or more times, could
be counted only once. This is, however, difficult to achieve without
unreasonable complications, but it is fairly easy to remove from
each term cp(t) the counting of those paths which were in state of
ruin already atthe previous time t - 1. To show this we will introduce
an auxiliary quantity, the probability cp(t - 1, t) that a sample path
is in state of ruin at both t - 1 and t. Its evaluation will be dealt
with in item (e). Providing it is obtainable, a corrected probability
(6.9.5)
can be introduced, giving the probability that the sample path is in a
state of ruin at time t but was not at t - 1.
By similar reasoning, the inequalities (6.9.3) can now be replaced
by the following inequalities
T
max [1P(t - 1) + IPc(t)] ~ 'PT ~ L IPc(t). (6.9.6)
2';;t';;T t=l
) l-'I't_1 ()
ep ee(t = 1 _ ep (t _ 1) x ep ct. (6.9.7)
in the upper limit of (6.9.6). As seen from (6.9.4) epe(t) is, in fact,
aimed to approximate L\ 'l't and it should be replaced by another
expression which is more closely related to L\'l't. Now L\ 'l't represents
all those sample paths which are first time ruined at t. Hence they
emanate from the set of paths which are not ruined during the first
t - 1 years. They are presented by the area A t - 1 = I - 'l't-1
(Fig. 6.7.3 applied for the year t - 1). The paths which were counted
above for epc(t) emanate from the whole set of sample paths which
are not in a state of ruin at t - 1, the corresponding probability
being 1 - ep{t - 1) = area A t - 1 + B t - 1 • Evidently L\ 'l't/epe{t) is
smaller than the ratio of the probabilities of the parent sets
(l - 'l't-1)/(1 - ep(t - 1)) = A t - 1 /(A t - 1 + B t - 1 )· This follows from
the fact that those paths which are already ruined before t - 1 are,
according to our auxiliary assumption, on average in a worse
solvency situation than those paths never ruined, and the former
ones are therefore more likely to fall into a state of ruin at t. Hence,
introducing the notation (6.9.7), we have L\ 'l't ~ epcc(t) which justifies
T
q>(t - 1, t) = f_
ur <t-l)
00 G(u, ur(t); t - 1, t)d u G(u o' u; 1, t - 1),
(6.9.9)
where ur(t) = U.(t)/B(t) is again the (relative) ruin barrier. The
differential term gives the probability that a sample path at t - 1
is going through the infinitesimal interval (u, u + duJ, and the
integrand term gives the conditional probability that it is still under
the ruin barrier ur(t) at time t.
The numerical calculation can be made, for example, by virtue of
Simpson's rule substituting the G function by the NP (or gamma)
approximations (see items 6.6(f) and (g)). Formulae for the derivative
were given in item 3.11(j).
For readers' convenience the formulae needed for calculation of
the limits derived are recorded in Appendix D in a form which is
readily programmable.
(f) Examples of the limits (6.9.3), (6.9.6) and (6.9.7) are given in
Table 6.9.1 for a process illustrated in Fig. 6.9.1.
As the example illustrates the effectiveness of the limit formula,
i.e. the closeness of the limits, depended on the fact that the test
processes were of a type which dipped down only for a couple of
Table 6.9.1 Probabilities related to the example given in Fig. 6.9.1 'Plower and
'P upper are the limitsfor 'P, as given in theformulae specified in the table.
U
0.9
0.8
0.7 W1r
0.61~
0.5
0.4
0.3
0.2
0.1 i==========~~~:;;::==~R~u:;in~ba;;rr~ler
2 3 4 5 6 7 8 9 10
TIme
Figure 6.9.1 An example of the application of the limit formulae. rp(t) values
calculated by the NP method are given in Table 6.9.1. Standard data given in
item 7.l(d) exceptfor u(O) = 0.7 and the amplitude of sinusoidal cycles zm = 0.05.
years and then recovered. Hence only very few years during the
time span were critical in this example. The same is the case in
examples given in the figures of Section 6.8. If, on the other hand,
the process is of such a type that the lower confidence boundary of
the bundle is directed for a long time parallel with the ruin barrier,
then the limits may be expected to be more distant from each other.
In any case, it is advisable to calculate both the upper and the lower
limit to gauge the accuracy of the approximation when the upper
limit is used as an estimate for 'P T"
A major benefit of the limit formulae is the relative simplicity
of the computations compared with other known techniques, and
the ability to control the accuracy.
Ij1 (ui
T
u
Figure 6.9.2 The ruin probability'll T(U) as a function of u applying the limit
(6.9.8) for T = 5 or 10. Finding of u when'll T is given. Data according to standards
given in item 7.1(d).
given and the initial minimum solvency ratio u(O) (or briefly u) or
the corresponding initial risk reserve (solvency margin) U = uB is
requested. For the purpose 'P T is first calculated as a function of u
(Fig. 6.9.2). Then the u values corresponding to any given s can be
read from the figure.
It is convenient to program the computer to seek the required
value u = ue directly as a continuation of the calculations producing
'P T so that no graphical or other separate determination of u is
needed. This approach is precisely what is mostly needed for
various applications.
Note that the approximate formulae (6.9.6) or (6.9.8) give still
better accuracy than quoted in item (0 when they are used for
the calculation of u. This is due to the fact that the curve 'P T as a
function u is quite steep for larger u values (note the half-logarithmic
scale in Fig. 6.9.2), hence inaccuracies in 'P T values do not affect u
values considerably.
For comparison of the limit formulae and the Monte Carlo
method some examples are calculated using both (Table 6.9.2).
Generally the direct evaluation dealt with in this section is far
more expedient in cases where it is applicable. The benefit of
6.9 LIMITS FOR THE FINITE TIME RUIN PROBABILITY 'PT 257
Table 6.9.2 Examples of the finite time ruin probabilities 'P T calculated using
the limit formulae and by the Monte Carlo method, T = 10 years, ruin barrier =
O.IB (Rantala, 1982, pp. 3.1-5).
qs10 (ul
Figure 7.1.1 The ten year ruin probability as a function of the expected
number of claims n and the initial solvency ratio u. Other variables have the
standard values given in item (d). The dotted line was obtained by shortening
the time-span from 10 to 5 years in the case n = 20000. Explanation for the
slight deviation of '¥ 5 from '¥ 10 can be found in the analysis in Section 7.5.
(d) Standard data The great number of the variables and para-
meters involved in the model makes it very difficult to get any
insight into the structure of the system and into the interdependence
of the variables and background assumptions. One way to get
round this difficulty is first to fix average values for the parameters
and variables and then to calculate the required analyses for this
set of basic values. In other words, a special 'standard insurer' will
be first constructed and examined. Then one or more variables
in turn are changed and the resulting reactions calculated. In this
way it is possible to get insight into the multidimensional structure
of the model. This idea is illustrated in the profile in Section 7.8.
The standard data are as follows:
n = 10000, the expected average number of claims; see (6.1.9)
}p = - 0.086, the premium related safety loading which provides
the value 1.71 for the coefficient w, see item 6.5 (h)
Ie = 0.040, the total safety loading; see item 6.5(h)
M = £10 6 , the maximum net retention; see Section 3.6 and Table
3.6.1
c = 0.28, the loading for expenses; see (6.2.8)
(J q = 0.038, the standard deviation of the structure variable q;
see (2.8.5)
Yq = 0.25, the skewness of the structure variable q; see (2.8.5)
T = 10, the time-span of the finite time ruin probability
G = 0.01 (= '1'10)' the ruin probability; see (6.7.1)
ix = 0.09, the constant or the average rate of the claims inflation;
see item 6.1 U)
7.1 FINITE TIME RISK PROCESSES 261
The claim size dJ. is the one given in Table 3.6.1 providing
excess ofloss reinsurance and the maximum net retention M = £10 6 •
As seen from Table 3.6.1, line 26, the mean claim size is £7302 and
the risk indexes r2 = 44.1 and r3 = 4465. In those applications
which are directly borrowed from the Finnish research works
already mentioned, the dJ. of industrial fire and loss of profit
insurance was used (Rantala, 1982, Appendix 1). For moderate
retentions the results calculated by either of these dJ. do not deviate
notably.
It follows from the above data that:
U 1.5
1.0 1.0
0.5 0.5
0.0 0.0
--------------
<---~-.~---~-~_
(a)
10 20 t (b)
10 20 t
1.5
U(n 1.5 t
1.0 U(t:.o.~
0.5 0.5 •
_ u _ n _________________ ._. __ :______ ~
0.0 <---~-~-~-~_ 0.0 ~
(e) 10 20 t (d) 10 20 t
Figure 7.1.2 The main types of risk processes generated by means of algorithm
(6.5.13) (a) r igp = 1.027, no cycles; (b) r igp = 0.938, no cycles; (c) r igp = 1.027,
cycles; (d) r igp = 0.938, cycles.
2.0
veo)
1.5
~~
-e. _ e __& _ ::1, cycles
Q.5 \~~ ~
M=~e a a e a a e e
1.5
U(I)
1.0
0.5
This divergence arises from the long time-span (T = 10) and as seen
in Fig. 7.21 from the cycles now assumed.
n/l000= 20 10 5 2.5 1
/
3.00
M
2.00
1.00
0.70
0.50
0.30
0.20
0.10
0.07
0.05
0.03
0.02
2.0
U
1.5
- - - - -_ _ _ _ _ _ _ _=zm;.15
1.0
--------------0.10
--------------0.05
0.5
----------------------0
5000 10000 15000 20000
n
Figure 7.4.l The initial solvency ratio u as a function of the amplitude Zm
of the cycle waves and the volume parameter n. The values of u are calculated
for intervals of length 3000 of n. M = £400 000. Default data according to
standards given in item 7.l(d).
U: 0.5
0.9
-1
10
1.3
10-2 ________ _
-3
10 . . . . . . _.... . ___ A' /____ •
/
/
/
/
/
/---------- 0.9
1O-4-1----'----'-_J,...!-"--_ _~---~---~~-.=.._
5 10 15 20 25
T
Figure 7.5.1 Example of the effect of the time span T. Solid lines: processes
involved with cycles of type (d) in Fig. 7.1.2. Dotted lines: no cycles, types (a)
or (b) in Fig. 7.1.2.
Then clearly the first few years are critical from the point of view of
the solvency effects. When they have passed, the stochastic bundle is
soaring high enough that extension of the period T no longer has
any noticeable influence on the probability ofruin and hence on the
solvency ratio u. The situation is quite different if the growth factor
r igp is less than 1. This is seen e.g. in Fig. 7.1.2(d), where also cycles are
assumed. Then the stochastic bundle dives down periodically to the
critical area. These periods are reflected as upward jumps in the
curves of Fig. 7.5.1, which coincide with the beginning of the adverse
halves of the cycles.
1.5
U III
1.0
0.5
10 20 30 40 50 60 70 80 90 100 years
(c) Varying inflation has effects which are very different from those
of steady inflation. The claims expenditure can be assumed (see
items 6.l(j) and 6.2(a)) to be affected nearly immediately, but it is
likely that the premium rates are corrected after some time lag tp'
During this time lag a loss occurs which may be compensated later
when the adjustment of rates has become effective. The claim in-
flation and the premium inflation were distinguished in item 6.2(a)
for the study of this effect. Let us recall the connecting equation (6.2.2):
(7.6.1)
7.6 EFFECT OF INFLATION 269
1.5
U(II
1.0
0.5
0.0~------~5------~10--------15--------2~0-------'2·5
t
Figure 7.6.2 Solid line with circles: only a steady inflation rate of 9% per
annum was assumed. The other line: a shock inflation rate of 20% per annum was
effective in years 3 and 4. The shaded area marks the difference between the
original and changed flows.
factor r igp ' the two curves of Fig. 7.6.2 do not coincide completely
even in the long run. In fact, the correction of rates according to
(7.6.1) overcompensates the losses.
In the example the inflation shock caused an extra loss of some
20% ofthe premium income and is consequently one of the factors to
be taken into account when the dimensioning of solvency margins is
deliberated.
(e) Effect of the length of the time lagtp was studied assigning to
it values 0, 1 and 2 years respectively; a sample path was driven
for the each of them, keeping the sequence of random numbers
the same (Fig. 7.6.3). The change of inflation was now programmed
according to (7.6.2) with cj = 0.5.
As expected, the time lag gives rise to considerable loss when
inflation is in its increasing phase in this case also. The loss is
compensated during the decreasing phase. The assumed syn-
chronism enlarges the amplitude of the cycles, as concluded easily
from (7.6.2) and as seen in Fig. 7.6.3. In effect, about the same
°
outcome as given by the synchronization (7.6.2) can be achieved by
taking cj = and by making the amplitude zm larger, which
procedure somewhat simplifies the considerations. This was given
as another alternative for zm in the standard list of item 7.1 (d).
More details on this relationship can be found in Pentikainen
(1982, item 4.2.6.4).
1.5
U(tJ
1.0
0.5
1.5
"o
;::
~
>-
~ 1.0
>
(5
Ul
0.5
5 10 15 20 25
Time
Figure 7.6.4 For examination of the effect of the time lag tp it is here removed
from the process that was exhibited in Fig. 6.8.5; otherwise the two processes
are the same.
U It)
A ItJ< i\o
~--~~~~-------------------------------~
------------------------------R;
~----------------------~--~-------------~
t
Figure 7.7.1 Dynamics of the safety loading.
If the solvency ratio u(t) exceeds a given limit R2 then the safety
loading is reduced after a time lag t 1 • On the other hand, if u(t)
drops below another limit Rl then the safety loading is enhanced.
Here AO is a target value of the safety loading and c1 and c2 free
parameters. The changed safety loading is valid either until (accord-
ing to the rules (7.7.1)) another change is coming, the absolute value
of which is larger than the previous one, or until the solvency ratio
falls in the normal zone, defined by limits R'l' R~. In this zone the
safety loading has the target value AO'
The rule described is an example of the so-called dynamic or
adaptive processes. The process is made self-correcting by means
of ,autoregressive' rules (7.7.1).
The rule suggested was made fairly simple because the aim is
mainly to illustrate the control technique. The applied simulation
procedure also allows, without serious complications, more sophi-
sticated systems; e.g. the programmed changes in A (and possibly
in other control variables, too) may depend on the profits or losses
of the previous accounting years or on some joint combination of
the profitability and solvency position. For example, the works of
7.7 DYNAMIC CONTROL RULES 275
UlfJ
1.0
0.5
0.0 .j.---~---~---~---~--_
5 10 15 20 25
f
Figure 7.7.2 Effect of the lowerlimit RI which is either 0.35 or O. The coefficient
c 1 = 0.3.
1.5
u(tJ
0.5
o.o~------~--------~-------- __--------~------~
5 10 15 20 25
1.5
o
:c
~
>-
u
"~ 1.0
o
If)
0.5
5 10 15 20 25
Time
o
.r:
~
>-
u
c:
~
3i 1.0
------- ------------- -------------R2
0.5
5 10 15 20 25
Time
o
;:;
1:'
~
c:
">
01.0
(J)
05
Ruin barrier
5 10 20 . 25
TIme
(1) The case r > 1 In most of the foregoing examples the important
accumulation factor r igp was < 1 according to the standard bases
given in item 7.1(d). In essence, the same simulation technique and
many of the straightforward calculations as employed earlier are
equally applicable for processes where r igp > 1, i.e. in cases of low
inflation and low real growth of the portfolio. This is demonstrated
in Figs 7.7.5 and 7.7.6. As seen in Fig. 7.7.5, the uncontrolled flow
of u(t) tends to infinity (dotted line). This is, of course, an unrealistic
situation in view of the applications. Hence, additional assumptions
are unavoidable to make the model workable in those cases where
rigp> 1. The rules (7.7.1) suggest an approach for the purpose. More
sophisticated models will be discussed in Chapter 10.
+
Claim size ~ 21.0
+
Structure variation 111111111111111111111111 24.7
+
Cyel es 1111111111111111111111111111111111111111111111111111111111111111111111111111 81.2
+
Inflation 111111111111111111111111111111l1li1111111111111111111111111111111111111111111111111111111111111111111 107.0
Figure 7.8.1 The dependence of the minimum solvency ratio on the different
basic assumptions.
280 APPLICATIONS RELATED TO FINITE TIME-SPAN T
cycles. The rate of inflation also has a marked effect on the solvency
condition.
Standard insurer
1IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIUllllllllllllllllllllllll107 Inflation
1
Insurer's size 1 ix = 5Of, ..•• 111111111111111""111""'111"'' ' ' "11"'' ' ' ' " 92
n 1 :I~Z: '~ .'. : : : : : : : : : : : : : : : : :~ :~ : : ~: : : ~: : : I ;~ 125
5000 ....... !!!!!!!!!!~!~!~!!!!~!!~~~=!~:::~!l!!:~!:'I:~~122
I
~~~·.·~·.·.··.·.lIllrn'=='III,£ffi~8 TIme lag between claim and premium inflation
o years 1iI111111111111111111111111!1!I!II!l1II1II11II 76 i
Ne~ retention 1 year .. Jllllllllllllllllillllllll!lllllllllll!!!Iilllillllll!! gl
M Cycle amplitude 1
0.1 ........... ~~ 1
0.5............. 1
1.0 ........ . 1
I 107
Size and net retention 1 iiiiiiiiiiiiiiiiiiiiiiiiiiii~_0IID149
1
n M I
5000 0.1 U1i1lu1l1l 11llDJ1i1lmJ1I1II1I1II1II1II1II11I1II1I1II1II 100 Cycle length
10 000 0.5 IIIIIIIlIIlIDJlIIlIIIIlIIlIIIIIIIIIIIUlUlillullllllllllll 100
20000 1.0 111111111111111111111111111111111111111111111111011111111111101
1
Time span T 1
1
1 year ... /OOOIDO 27 1 Safety loadmg Ab
2 years ..~ 45 1 1
5 years . 1111111111111111111111111111111111111111111111111111111111111111104
-0.10 ... "'I"''''""'''IIIIII'''"'''""IiIIllIll""lllllllllllll"i"1ll116
10 years ... 11111111111111111111111111111111111111111111111111111111111111111107 -0.05 ... IIIIIIIIIIIIIIIIUIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII 86 1
1 - 0.00 . UIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII 61 I
Structure variation 1 +0.05 .... -.um41 1
1
O"q Yq 1 Real growth 1
o
0 1111111111111111111111111111111111111111111111111111111111111100 1
0.05 0.25 II 1 1 1 II 112 ig =0 . 1111111111111111111111111111111111111111111111111111 85 1
Figure 7.8.2 Values of the minimum initial solvency ratio u(O) for different
parameter sets. T = 10 years. Band M given in £ million.
7.9 EVALUATION OF THE VARIATION RANGE OF u(t) 281
::::
"o
:;:
e
>-
u
~
'0
V>
/
(t+s)
t+s Time t
Table 7.9.1 The minimum solvency margin (7.9.3) as function of some basic
parameters (7.1.1), T = Tz •
Zm Tz A ri rx rg tp Ci M Rs
(d) Legal stipulations for the solvency margins were considered al-
ready in Section 4.4, deriving some rules on the basis of a one-year
planning horizon. The background of the short horizon is the view
that public intervention to control the insurance industry can be
limited mainly to safeguarding the interests of those insured.
Therefore only that margin is necessary which is needed to protect
the financial state of a possibly weak insurer from direct bankruptcy
for a 'warning period' during which the weakened solvency is to
be restored on pain of winding up. On the other hand a short
planning period is, of course, not sufficient from the point of view
of the insurer. The objective should be a safe continuation of the
284 APPLICATIONS RELATED TO FINITE TIME-SPAN T
Figure 7.10.1 The lowest still solvent positions of the stochastic bundle;
r igp < 1, rg > 1.
7.10 SAFETY LOADING 285
between the cases when the 'dynamic discount factor' rjgp (see
item 6.5(g)) is either < 1 or > 1.
(b) The case Yjgp < 1. The flow of the midline of the bundle was
given by (6.6.8) {see Fig. 6.6.2). Assuming a constant rate for interest,
inflation and growth it tends towards an equilibrium level Eu
(6.6.9). Furthermore, the measure for the breadth of the bundle
is required. It was studied in the previous section and denoted
t
by Rv' Clearly Eu must be at a distance at least Rv from the ruin
barrier, or in terms of the quantities given
Eu > ur + tRy.
Substituting (6.6.9) for Eu and solving an inequality for A, the
following is obtained
(7.1 0.1)
Assuming that ur = O.l, Rv = l.2 (see item 7.9(c)) and (according
to the standards in item 6.l (d)) rjgp = 0.938, a numerical value is
found for the lower limit of A ~ 0.043.
Another way to study minimum conditions for the safety loading
is to use the asymptotic expressions (6.6.9) and (6.6.12). This is
done in Exercise 7.l O.l.
The interpretation of this outcome is as follows. A minimum
amount of safety loading is necessary to counteract the eroding
effect of inflation and to increase the solvency margin in keeping
with the real growth of the business volume, in so far as the yield
of interest is not sufficient for these purposes, as is the case when
Yjgp < l. The situation is more complicated if the insurer can acquire
fresh capital to reinforce the solvency margin V, as may be possible
at least in the case of proprietary companies. Then the demand
of the safety loading which is needed to compensate the eroding
capital is smaller, but on the other hand the stockholders expect a
return for their investment, which increases the need for margins
or loadings in rates.
It will be recalled that the safety loading, as defined above, is
an aggregate quantity composed of the yield of interest earned for
the underwriting reserves and of the 'ordinary' loading accrued
from premiums according to (6.5.12). Furthermore, for the above
considerations the total amount of the safety income AB is relevant,
as was discussed in item 4.l(c), without paying any attention to the
286 APPLICATIONS RELATED TO FINITE TIME-SPAN T
1.0
0.5
5 10 15 20
Time
Exercise 7.10.1 Find a lower limit for A. in the case where rigp < 1
by using the limit expressions (6.6.9) and (6.6.12) and applying a
similar method as in item (b). What is its numerical value calculated
for the standard data of item 7.1(d)?
2.0
y(O,t)
5 10 15 20 25 35 40
Time t
Figure 8.1.1 The cohort random process (8.1. 7b) calculated using the following
assumptions and data: q(t) = 0.0006 + exp[O, lISt - 8.125],/(0) = 10 000, r i =
LOS, w = 40, d(t) simple Poisson variable with expectation n = I(t)q(t). Expense
loading C(O) = 0.02; C(t) = 0.002 for t ~ 1.
(8.2.2)
8.2 LINK TO CLASSIC INDIVIDUAL RISK THEORY 293
The last term is due to the maturity at the end of year w - 1. The
terms related to expenses (see (8.1.6» do not appear, because the
consideration is now limited to risk premiums only.
Note that putting P 1 = 0 and a = - P the conventional endow-
ment treaty with all term risk premium P is obtained as a special
case, and putting S = 0 the capital value of an annuity a.
Now consider one policy entering into the cohort at t = O.
There are w + 1 different ways it can terminate, i.e. either death
in one of the years t = 0, 1, ... , w - 1 or maturity at the beginning
of the year w. These events can be described letting the termination
time t be a discrete random variable with frequencies
d(t) l(t)q(t)
pet) = prob {t = t} = 1(0) = l(O) (t = 0, 1, ... , w - 1)
let)
(t = w).
1(0)
The second line represents the case of maturity. Noting that
lew) = lew - 1) x (1 - q(w - 1», and combining the events related
to the end of the last policy year w - 1 and the beginning of the
next year, these expressions can be rewritten as follows
( ) _ d(t) _ l(t)q(t)
p t - 1(0) -l(O) for t = 0, 1, ... , w - 2
Next a profit (or loss) Z(t) arising from termination at year t can
be introduced
1
=P 1 - a (S -I-v
[--+
I-v
-a)
- v1+1J .
(8.2.6)
Introducing the notation
w-l
A(v) = I p(t)v l + 1, (8.2.7)
1=0
PI = L
w-l [I
p(t) a -
vl + 1
+Svl+ 1
]
1=0 I - v
= Wfp(t)[_a
1=0 I - v
+(s __v)v1+1] a
I -
( a )W-l (8.2.8)
a
=--+ s--- I p(t)V + 1 I
= _a +
I-v
(s __ I-v
a )A(V),
and the following classic expression for the variance (J2 can be
derived by some algebra (exercise 8.2.2)
(i) In the case of the endowment policy with whole term premiums,
put a = - P where P is the pure annual risk premium (B
without expense loading C); now.P 1 = O.
(ii) In the case of single premium endowment treaty put a = o.
(iii) In the case of a single premium annuity ( = a) put S = O.
Exercise 8.2.1 Write down the formula for risk premium P (without
loadings) of the endowment policy.
t= 1
(8.3.5)
The above ideas are illustrated in Figs 8.3.1, 8.3.2 and 8.3.3.
In order to get a general idea about the effect of the background
assumptions, first the process was made deterministic in Figs
8.3.1 and 8.3.2. Explanations can be found in the captions of the
figures. Figure 8.3.2, demonstrates how the model can be used to test
what degree of inflation the supposed secondary bases still tolerate.
Then one ofthe processes was selected for Monte Carlo simulation
in Fig. 8.3.3. Both the deterministic average flow without bonus
barrier and the bundle of stochastically varying flows provided
with bonus barrier are exhibited. The hypothetical example set
out in Fig. 8.3.3 illustrates a case where the accumulated profit
was not sufficient to cover the adverse development at the final
phase before maturity. It arose in this example mainly due to the
fact that owing to inflation the 'actual' expenses widely exceeded
the loading calculated for them.
It is seen that the system is sensitive to changes in the primary
bases, whereas the stochastic variation of the number of deaths
has a minor though noticeable effect.
The steepness of curves 1 and 3 confirm the well-known
experience that in particular the interest rate and inflation are of
crucial significance for the long-range balance of that type of life
y(O,t)
/
1.5 4
1.0
0.5
0.0
-0.5
-1.0
3
-1.5
5 10 15 20 25 30 35 40
Time
2.0
Y (O,t)
1.5
9 9
1.0
0.5
0.0
-0.5
11
-1.0
-1.5
5 10 15 20 25 30 35 40
Time
2.0
y(O,t)
1.5
1.0
0.5
0.0
-0.5
-1.0
-1.5
-2.0 iii ( i i
5 10 15 20 40
Figure 8.3.3 Flows of the accumulated cohort profit. The curve with circles
is deterministic, assuming no bonus barrier. The other curves are simulated
letting deaths be randomly varying according to the Poisson law and adopting
a bonus barrier b = 0.2, rj = 1.09, rx = 1.09, mortality and lapses as in Fig. 8.3.1.
........
o
.t:.
o
U
f_.., __ _
------- - -- --------- I
L_.-_
I
Current time
n
t
S(te' t) = S(O, 0) x r!c x rlr), (8.4.1 )
't=tc+ 1
2.0
0
:i: 1.5
E ix x100= 5
>-
u
""
1.0
>
'0
Vl
0.5 8
0.0
10
-0.5
11
-1.0
-1.5
-2.0
5 10 15 20 25
Time t
~ 0.5
::J
o
e
>-
u
c
~
o
<f) 0.0 j----- ----- ____________ .. ____ ... ___ .. ___________________""
-0.5
Figure 8.4.3 Simulation of the case inflation rate ix = 0.09 and the bonus
barrier b = 0.2. Other data are as in Fig. 8.4.2.
These examples are still simplified such that the study of the
effect of the differing entry ages and different types of policies
was omitted. However, it can be expected that these examples
already reveal typical features in life insurance structures. As
is well known from earlier experience and already stated in Section
8.3 on the basis of an analysis of one single cohort, the long-range
validity of the basic data like interest rate, mortality, loading and
the effects of inflation are crucial, whereas the 'ordinary' random
fluctuation has minor, even though not negligible, dimensions
compared with the effects of the basis aspects. For pure risk in-
surance (temporary life insurance) and disablement benefits the
latter variation range is expected to be larger than in the present
examples.
The model can be further developed regarding all the aspects
which specify a cohort. Furthermore, time-dependent changes in
relevant factors, for instance inflation, interest rate, mortality,
lapse rates, etc. can be programmed and the sensitivity of the
outcomes of each can be investigated in a similar way to the case
of general insurance in the foregoing chapters. Also dynamic rules
can be incorporated into the model to simulate possible measures
taken by management, e.g. in the case of adverse development.
8.4 GENERAL SYSTEM 305
y(O,tJ
t
Figure 8.4.4 Demonstration of the effect of remedial action when an adverse
flow of the solvency margin is imminent. The solid line shows the flow if no
special measures are taken; the dotted line shows the rectified flow.
(c) Link with general risk theory If the planning horizon is short
or if the portfolio is already more or less in a state of equilibrium,
306 RISK THEORY ANALYSIS OF LIFE INSURANCE
y(O,t) t
1.5
1.0 ."
~~======~~26
0.5
'-----------~--~~--~~-------~------~~~~
0.0
* * * * * -~<:o:::: l~ -11
--<1<6
-------------36
-0.5
-1.0
-1.5
- 2.0
5 10
Figure 8.4.5 Examples of the profit (or loss) flow of a supposed portfolio
and eight of its cohorts having the duration from the entry 1,6, 11, '" , 36 years
respectively. Interest rate 7%, rg = 1.04 and other data are as in Fig. 8.3.1.
The curve with asterisks represents the joint result of all the cohorts.
9.1 Introduction
,
,, 0.3
I
~ &-&-e-e-e-e.-&-e
,
I
,Ill'
,
I
I
,,
! , -------
0.4
5 10 15 20 Time T 25
(9.1.2)
equal to one or less than one. It follows directly from (6.9.3) that
the condition
ao
L cp(t) < 1, (9.1.3)
t=to
(ii) Show that 'P 00 = 1 still applies when (9.1.4) is replaced by the
weaker condition
00
L prob {X(t) > M} = 00. (9.1.4')
t= 1
Note that (9.1.4) holds, for example, in the case where the variables
X(t) have a d.f. of the compound Poisson type.
** Exercise 9.1.2 Prove that (9.1.3) is a sufficient condition for 'P 00 < 1
provided that the condition ~ 'P t ~ qJ cc(t) holds for t > to (see (6.9.7))
and that 'P to < 1.
satisfied for t> to' then 'P 00 < 1. (Make use of the Tchebyshev
inequality.)
(a) Special assumptions The famous formula for the infinite time
ruin probability 'P 00' often briefly denoted 'P and called 'ruin pro-
bability', is discussed in this chapter. Consideration is limited to
the very special case, where the risk process is stationary and no
fluctuations in basic probabilities, either short term or long term,
are present. Also the safety loading A, and thus implicitly the rating,
is never changed; furthermore, interest and inflation are omitted
and the continuous checking of the state is adopted (see item 1.4(b)).
The assumption concerning stationarity can be replaced by a more
general assumption allowing a preprogrammed deterministic change
of the parameter n, the expected number of claims. Anyway, the
independence of the stochastically defined increments is assumed.
Hence it will be assumed that the distribution function F of the
total amount of claims during one year remains unchanged, or
at least that its changes are generated merely by changes in the size
of the portfolio giving rise to changes in the expected number of
claims each year. However, it will always be assumed that the dis-
tribution of the size of a claim is independent of time.
312 RUIN PROBABILITY DURING INFINITE TIME PERIOD
(9.2.1 )
1 + -(1
h - e-(l H)mnR1h) = foo e RZ dS(Z). (9.2.3)
n 0
(9.2.4)
from which R can be found by a few trials. By using the first or first
two terms and the risk indexes r 2 and r 3 (see (3.3.8)) the following
approximations are obtained:
2A 1
R=-- (9.2.5a)
r2 m
1
R= m
J'[6A~ + 9r~J 3r
4r~ - 2r m'
2 (9.2.5b)
3
10 20 30 40 50
U
Figure 9.2.1 The (approximate) ruin probability 'P ~ e- RU as a function of
U and A. The default data are the standards defined in item 7.1(d) (semi-logarith-
mic scale).
314 RUIN PROBABILITY DURING INFINITE TIME PERIOD
-RU
e
10 20 30 40 50
U
Figure 9.2.2 The (approximate) ruin probability as a function of U and M;
default data as standards of item 7.1(d) (semi-logarithmic scale).
closely the value of'llIe is linked to the safety loading Aand the initial
reserve U. If A = 0, it can be proved that'll = 1, i.e. ruin is certain.
(a) The purpose of the model The many applications of the theory
of risk described in the previous chapters, such as the estimation
of a suitable level for the maximum net retention, the evaluation of
stability, the safety loading and the magnitude of the funds, have
been treated as isolated aspects of an insurance business. In this
chapter an endeavour will be made to build up a picture of the
management process in its entirety and to place the risk theoretical
aspects in the context of other management aspects, many of which
are not of an actuarial nature. In this way many of the applications
previously dealt with can be integrated with the concepts of modern
business planning, in particular the techniques of long-range
planning and dynamic programming.
The object is to describe the actual management process of the
insurance business by means of a mathematical model.
(c) Risk theory models Model building is useful also for the study
of risk-theoretical behaviour of portfolios, e.g. for analysis of
the solvency conditions. It can be developed as a natural extension
to the conventional risk theory, with the purpose of giving a com-
prehensive view over the structures as a whole and of trying to find
tie-ins between the numerous aspects involved. Then it is sufficient
to select from the very numerous variables and details which are
needed in comprehensive corporate models only those which are
most relevant for solvency properties. This line will be followed in
the sequel. A model of this type was experimented with in a Finnish
solvency study (Pentikainen, 1982 and Rantala, 1982).
In order to facilitate the reading only the main lines will be drawn.
For example, the decomposition of the portfolio in sections is
omitted; even though it would not be difficult in principle to accom-
plish using the technique described in Section 6.4, its implementation
can be laborious. Furthermore, problems concerning rate-making,
technical reserves and many other aspects which are relevant for
corporate planning are not dealt with, because it has been necessary
to limit the scope of this book to aspects which are suitably handled
by the technique ofrisk theory. For the same reason 'non-stochastic
risks' like failures in investments, political risks, consequences of
misfeasance or malfeasance by management are not discussed,
notwithstanding the fact that they may be quite important and can
seriously jeopardize the solvency of insurers (see Pentikainen, 1982,
Section 2.9).
The presentation will be limited to approaches which are best
fitted to non-life insurance. However, life insurance can also be
handled in a similar way, in particular if the planning horizon is
fairly short. For the long-range consideration of life insurance,
modifications and attention to the special features along the lines
discussed in Chapter 8 are necessary. It is a matter of course also
that generally, in all kinds of business, the capability of models of
producing reliable analyses rapidly deteriorates when the time-
span grows because most of the basic distributions and data distort
in a way which is barely predictable, and because the environment
10.1 GENERAL FEATURES OF THE MODELS 321
I1U Claims
Earned paid
risk ou~s~anding
co premiums p
III
E
:J Expenses
'E
Cl) Safety loading
L.. U Reinsurance
Cl.
Loading for (ne~ balance)
expenses
Ne~ inves~men~
Dividends
earnings
B = Bo + Bre = Bo + P re (1 + Are)
X = Xo + Xre ,
where the subscripts '0' and 're' refer to the insurer's own account
and to the reinsurers. Pre is the ceded part of the risk premiums and
Are the loading including net compensation of the reinsurers'
expenses, safety margin and profit margin. Substituting in (10.2.1),
the reinsurance terms offset each other and the equation is trans-
formed in the following net of reinsurance form
(10.2.3)
324 APPLICATION OF RISK THEORY
(10.2.4)
where
(10.2.5)
(10.2.6)
This approach is based on the fact that ~Xre does not affect the
trading result ~U because it is included, with the + sign in X
and the - sign in C re • Hence, when the outcome on the insurer's net
retention only is concerned it is unnecessary to simulate the stochas-
tic value of Xre because the net result is the same as if it is replaced
by its expected value.
The benefit of the version (10.2.4) is that the reinsurance cost
is expressly shown, which is particularly useful when different
reinsurance policies are investigated and compared (see the example
given in item 4.5(d)). The type of the reinsurance treaty as well as
its parameters like the retention limits M and the target level of the
loading A. re are model parameters.
A more detailed analysis should necessarily concern also the effect
ofthe yield ofinterest earned for the shares of reserves or for the agreed
depositions.
In what follows the choice of the approach is left open: the user
can understand the basic variables either as gross amounts or net
amounts and in the latter case drop the term C re from the equations,
if it is taken into account in the definition of the variable values
calculated on the insurer's net retention. It is also dependent on
the application concerned whether Are and consequently Cre are to be
assumed stochastic or deterministic.
(c) Premiums have already been defined in Sections 4.1 and 6.2,
and the technique to handle stochastic control was discussed in
connection with (10.2.4). The growth of the premium income is
10.2 AN EXAMPLE OF RISK THEORY MODELS 325
appropriate to treat the rate of interest ij(t) and the inflation rates
ix (t) and ip (t) as separate variables, trying to find appropriate
assumptions for their combinations. Experience has convincingly
shown that all these quantities are subject to long-term major
variations and, in addition, to short-term oscillations having a
minor amplitude. The abundant econometric literature may suggest
ideas for the construction of proper submodels for these features. At
present there are no well-developed theoretical disciplines available
for insurance applications. As has already been suggested in item
6.3(b), the sensitivity of the model outcomes to the investment
entries may be tested by making varying optimistic and pessimistic
deterministic (or semi-deterministic) assumptions about the anti-
cipated future level of the return rates and their relation to inflation
and growth.
In risk theory models the investment earnings can be handled in
different ways. They can be kept as a separate variable or they can be
divided between the underwriting reserves and the solvency margin
as was suggested in Section 6.5 (see (6.5.2) and (6.5.12)).
(I) Expenses Usually the insurers also use special accounting for
their expenses. Its outcomes can be linked with the model as a
special entry through the terms Co and Cm. Co represents all the
sales and operating expenses the insurer may have. Cm includes
different kind of miscellaneous entries in the loss and profit account.
These items may be either stochastically or deterministically
defined. Probably it will be convenient to keep them deterministic
and move the uncertainties to the safety loading l (see item 6.2(d)).
This is supposed to have been done in the present considerations,
328 APPLICATION OF RISK THEORY
and these variables are denoted accordingly (no longer using bold
letters).
It is convenient to divide the expenses Co into two parts. On the
one hand every insurer has some normal costs which cannot be
controlled much by the insurer, at least not in any radical way in the
short term. Some of these costs are related to the volume of sales of
new policies, some to the number of policies in force and some to the
number of claims. On the other hand, some other costs, e.g. those
related to the sales, are very much under the control of the manage-
ment, and it is advisable to introduce a special variable Q for them.
Hence the total costs are composed of a normal amount C 1 and a
moving part Q as follows.
Co = C1 + Q.
As will be seen in Section 10.3, Q can be one of the key issues in a
dynamic program, because e.g. sales campaigns and other similar
operation policies can be introduced into the model by using it.
Furthermore, alternative actions concerning personal, office
facilities, etc. can be considered. For example, investments in new
buildings and inventories will obviously first cause a rise in expenses
but later on will reduce them.
The normal costs C 1 should be programmed to depend on the
forecasted business volume. Often a sufficient approximation for a
risk theory model may be to assume them proportional to the
premium income P(t), as was presented in item 6.2(f), i.e. C1(t) =
c'P(t), where c' was adopted as a coefficient related to unloaded risk
premiums P instead of the coefficient c which was used in (6.2.8).
In fact, c' = c/(1 - c - A.b ) as seen from (6.2.11). It is natural in this
connection to base the supposed actual costs on a quantity which is
not stochastically varying. This was the rationale for the change of
the definition.
Tax is one of the relevant items of the model. Its consideration
and programming depends on local legislation and practice. We
assume it to be included in expenses in one way or other. Some
actuaries have suggested it as a separate entry in the basic equation
(10.2.3) and provided rules on its dependence on the book profit
(and dividends).
State variables define the state of the system (insurer's fiscal status)
at times t = 0,1,2, .... Volume parameters n, assets, liabilities,
solvency margin V, etc. are examples.
Transition equations like (10.2.9) determine the increments of the
state variables from each time t to the next t + 1.
System parameters are needed to define both the state and transitions.
Some of them, such as inflation, depend wholly or at least mainly
on the outside circumstances and environment. Some others,
so-called decision parameters or rather decision variables, can be
controlled at least to some degree by the management of each
insurer, e.g. premium rates, net retentions and allocation of
resources like dividends D and sales efforts Q. Some parameters
are of mixed character, being partially dependent on exogenous
factors and to some degree on the insurer's own policy decisions.
Rate of interest is an example. A set of the numerically assigned
decision variables is called a strategy. Examples of the variables
and factors involved with the model are listed in Fig. 10.1.1.
(l0.3.3a)
and
(d) Algorithm When all the assumptions are settled and values
for the model parameters determined, then the model can be
operated by means of the Monte Carlo method, just as was presented
for the simple underwriting processes in Chapters 6 and 7. The
difference is that the number of model variables may be greater and,
in addition to the claims, other stochastic variables, e.g. the yield
of interest and premiums, via the response functions can also now be
stochastic. For different state variables, and above all for the
solvency margin U, 'a random walk' can again be obtained as is
illustrated in Fig. 10.3.1. In principle it fully corresponds to the
simulation figures of the previous chapters, e.g. Figs 6.8.3 and
6.8.4; however, now only the boundaries of the confidence region
are plotted.
I
I
I
I
... I
I
......... _ . I
- - _____________________ -4
Ruin barrier
Time t
Figure 10.3.1 An outcome of a stochastic model.
334 APPLICATION OF RISK THEORY
,
".
8(0) 8
Figure 10.3.2 An outcome of a business strategy in the B, U plane. Stochastic
bundle and its confidence region.
10.3 STOCHASTIC DYNAMIC PROGRAMMING 335
U (0) .- .- .-
, .-
_ _ _ _ _ _ _ _ ----
....
,
............ Ruin barrier
.... _------- --
8(0) 8
Figure 10.3.3 A conservative strategy, I, and a risk-taking one, II.
336 APPLICATION OF RISK THEOR Y
Solvency Expansion
U B
Bes~
stra~egy ?
Dividends
D
B(O),U(O)
e
B
Figure 10.4.2 Search for an optimal strategy. The labelled points represent
outcomes of different strategies.
(d) The concept of utility The choice of the strategy options in the
approach described was left very much to the subjective weighting
between the contradictory alternative outcomes. Attempts to find
more objective criteria can be made by introducing the so-called
10.4 BUSINESS OBJECTIVES 339
The wealth if
I II
Insurance Insurance
taken not taken
In case of
(i) no incidence of loss
(ii) incidence of loss
340 APPLICATION OF RISK THEORY
stU)
Wealth U
the value of the utility function G and on the other hand the pro-
bability that just the outcome in question may occur. The weighted
utility of the option I is then
GI = (1- p)G(U o - B) + pG(Uo - B) = G(U o - B),
and that of the option II, i.e. not to insure, is
Gn = (1 - p)G(U 0) + pG(U 0 - X).
Insurance should be taken if GI > GIl' otherwise not. This depends
on the selection of the utility function G and also on the initial
state U0 and on the values of p, X and B.
Note that in the particular case in which G is linear, equal to
aU + b (against the axioms (l0.4.1)),
GI - Gn = -aB + apX.
--- ----
u
® No ruins
-
U
\I
,,-Ruins
10
100
185
B
Figure 10.4.5 Example of the search for favourable strategies presented by
Pentikiiinen (1976). A. is the safety loading and f is a parameter that allocates
the sales efforts between the two classes of insurance presented in the example.
The midpoints only for each outcome are plotted. An acceptable or not accept-
able level of the risk of ruin is indicated by circles or asterisks respectively.
B
Figure 10.5.1 Competitive market of three insurers I, II and III. The curves
represent the flow of the average outcome EB(t), EU(t) as functions of time
for respective insurers (see Figs 10.3.2 and 10.3.3).
346 APPLICATION OF RISK THEORY
40
_ _ _ _- - 2
20 -1
etc. are examined. If the strategy changes are made in short time
intervals the scenario is much akin to systems which are treated by
the theory of differential games, a survey of which theory is edited by
Grote (1975).
Both for staff training and for business planning so-called business
games have been developed. Special teams play the role of the
management of different enterprises and their decisions are simul-
taneously put into a computer programmed to give as output the
market reactions. These are then fed back to the playing teams for
the next step. Obviously the models drafted in this chapter can well
be used for such purposes. If stochasticity is programmed into the
model, it can give a new dimension to conventional game models,
which may mostly be deterministic.
APPENDIX A
(ii) any interval should contain at least one claim with the probability
1; thus the interval (0, t] would contain an infinite number of claims
with the probability 1, contrary to (iii). Consequently the constant
can be denoted e-P(p ~ 0) giving
(p ~ 0), (A.2)
This is true for any positive rational number t. Further, since
is everywhere non-increasing, 1to(t) cannot have steps and is
1t o(t)
accordingly continuous. Thus (A.2) is also true for irrational ts.
In order to calculate 1tk(t) for k > 0 let h be an integer such that
n = 2h > k and write down the disjoint partition
i- 1 i ]
1.= ( --t,-t (i = 1, 2, ... , n),
, n n
of the interval (0, t].
This partition has the property that when h increases then the
pre-existent division points remain division points. From the set of
all possible realizations of. the process (both the number of claims
k and their placement in (0, t] vary) two subsets are now selected
for a fixed h as follows:
Ah is the set of all realizations (sample functions) such that at least
one claim occurs in exactly k intervals Ii; that is to say, realiza-
tions for which exactly n - k intervals remain without claims
(nowk ~ k)
Bh is the set of all realizations such that at least in one interval
Ii at least two claims occur (hence k ~ 2)
Evidently
{k(t) = k} c Ah U B h ,
since, in order that exactly k claims occur, it is necessar)1that either
k different intervals must include claims or some interval must have
at least two claims. On the other hand,
Ah - Bh c {k(t) = k},
since the left-hand side includes only realizations where claims
occur in exactly k intervals and nowhere. more than 1. Thus
prob(Ah - B h ) ~ 1t k (t) ~ prob(A h u B h )
or, a fortiori
A.l POISSON PROCESS 351
Hence,
(p ~ 0). (A.4)
352 APPENDIX A
A.2 Extensions
The theory of Poisson processes can be extended by replacing one
or more of the assumptions (i) to (iii) by weaker ones. The new
claim number processes obtained in this way playa central role in
the advanced theory of risk.
As has been seen, the conditions (i), (ii) - or (ii)' -lead to (A.2),
which gives 1t o(t) as an exponential function of t. Suppose now that
condition (ii) is substituted by a weaker condition assuming only
that:
prob{k'('1 + 'z) = O}
= prob{k'('1 + 'z) -k'('I) = O} prob{k'('I) = O},
and hence, because prob{k'(,) = O} = e- pr
prob {k'(, 1 + 'z) - k'(, 1) = O} = e -p r2 = prob {k'(,z) = O}.
A.2 EXTENSIONS 353
This proves that the process k'(r) satisfies the condition (ii)'.
Accordingly
It can be said that the process k(t) is a Poisson process in the trans-
formed new time scale r, in so-called operational time.
Conditions (i) to (iii) lead to a process where only the constant
p remains to be estimated in applications. The weakened condition
(ii)* instead of (ii) leads to a process where the function r(t) remains
as a 'parameter' to be estimated or assumed in applications. The
product pr(t) gives, in this case, the expected number of claims in
the interval (0, t]. The derivative r'(t) can be called the intensity
of the process. In applications the intensity can be assumed to be,
for example, increasing in accordance with some prognosis con-
cerning the future volume of the insurance collective in question or,
perhaps, due to the anticipated changes in the frequencies of claims.
The process (A.5) gives an example of processes with non-stationary
increments, also called heterogeneous in time, whereas the Poisson
process (A.4) is a process with stationary increments, also called
homogeneous in time.
A further extension of risk processes is obtained if the constant
p is thought of as a random variable p, which varies due to some
outer factors, e.g. random effects of weather conditions. Suppose
that the claim number k(t) satisfies the conditions (i), (ii)*, and (iii)
on condition that p has a given value p. Then the conditional dis-
tribution ofk(t) is again a Poisson distribution.
A more general case is obtained, which is also more realistic,
if p is dependent on time, hence being a general stochastic process
p(t). In order to give a short survey, let p(t) be a realization, i.e. a
sample function of this process, and suppose that, for the fixed p(t),
conditions (i) (ii)*, and (iii) are satisfied. Then again, for any value
of t, prob{k(t) = k} (on condition that this sample function p(t)
of the process 'occurs') is evidently dependent only on the expected
value of the number of claims in the interval (0, t], i.e. of the product
pr, where p is the value which the sample function takes for t, but
it is not dependent on the values that the sample function takes
for other values of time. Generally, since the operational time r
is calculated separately for different realizations p(t), it is dependent
354 APPENDIX A
prob{k(t) = k} = 1 o
00 (qt)k
e- ql - , dqH(q, t).
k.
(A.6)
Edgeworth expansion
N(X;m,u)=J(2n)u
1 fX [1
-2 (z - m)2 ]
-00 exp -u- dz
=N( X ~ m ; 0, 1) = N( X ~ m ).
clearly satisfies these conditions, the characteristic function of its
kth derivative is
(B.l)
where
(B.2)
+ ... + remainder,
where the terms omitted are of the form Cn i NU) (X; n m, j(n az ))
withj/2 - i ~ 3/2, C being independent of n. Hence, if x = (X - mn)/
j(na z ), according to (B.3),
F(X) = N(x) - ia 3n(j(na z ))- 3 N(3)(x)
+ z14a4n(j(naz))-4N(4)(x)
+ /Z a~ nZ( j(na z )) - 6 N(6)(x) + ... + remainder,
where the terms omitted are ofthe form
C'ni(J(na z ))- j N(j)(x) = C" ni - jlZ N(j)(x) = C" n -k N(j)(x),
C.I Assumptions
We are now going to extend to a special case the consideration
of the risk processes related to a finite time interval as considered
in Section 6.5, letting the length of the time interval grow to infinity.
The assumptions and notation are modified as follows.
Let A denote a time unit. It is assumed that the state of the risk
process is tested at times A, 2A, ... , tA, ....
The aggregate claims amount during the· period (( t - 1)A, tA]
is denoted by X(t) and the underwriting profit (or loss if negative)
will be defined for each period by the equation
Wltl
TIme In A Unl~S
o ----..........
'..... .... ........ , ,."
..... --,, I
,----- /'
, I
, I
, I
Figure C.l. Two realizations of the accumulated profit W(t) with L'1 as length
of test period. For the other realization a ruin (asterisk) is observed.
Then the rum probability related to the time period (0, T!1] IS
'PT = 'PT(U O )
= 1 - prob {W(t) ~ - U0 for t = 1,2, ... , T} (C.3)
where U0 is the initial risk reserve (see Fig. C.Ll).
Note that in terms the notation of Chapter 6 W(t) = U(t) - U 0
(if!1 = 1).
The irifinite time ruin probability 'P = 'P 00' which is obtained
when T tends to infinity, depends, as well as on the risk process,
on the choice of the time unit !1: the smaller !1 is the larger 'P be-
comes. When !1 ~ 0 the testing is carried out continually at every
time point.
In general the assumptions given above concerning the variables
Y(t) do not remain valid if !1 is replaced by some other time unit
!1', unless !1' = k!1, where k is some positive integer. However, if
the risk process under consideration is a simple compound Poisson
process, it is obvious that the assumptions remain valid for any
positive !1.
(CII)
or
(C.l2)
This formula is one of the main results of the theory of risk. In
it T can be any positive integer and therefore it holds also when T
tends to infinity.
Before developing the theory further it is of great interest to get
some idea as to how much 'I' = 'I' 00 differs in reality from the upper
limit e- RUo , i.e. would it be feasible to use the approximation
(C13)
C.2 ESTIMATION OF 'I' 361
measurements are made, and thus it will hold good even if the
possibility of insolvency is measured at every time.
A formula corresponding to (C.I8) can also be obtained when the
fluctuating basic probabilities are assumed to be in accordance with
the distribution (2.9.9). In this case the m.gJ. in (C.17) is given by
(3.4.4), and thus the equation
f oo
o eRZ dS(Z) = 1 +
l_e-(1+J.)nmR/h
nih ' (C.19)
d--V=W(t~l+U
~----I 1 0
Time
0--------4-----4-----~--------~
<t,-1lt.
,
Wet -1)+U =v-z -------~
, 0 1 ~
= El(e-RV+RZ1) (C.2l)
= El {E 1 (e- RV eRZ1 1 V)}
= El {e-RVE 1 (e RZ1 1 V)}.
= e- RV I fro
eRZ dS(Z)
1- S(V) v
0.2 Calculations
n(t) = n TI rg(r)
<=1
Note that by the conditions given the standard deviations are the
same for U and X but the third moments and skewness have opposite
signs.
a 2(t) = (r 2/n(t) + a;)Jlx(t)2
a 2(1, t) = a 2(1, t - I)r; + a 2(t)
Jl3(t) = (r 3/n(tf + 3r 2a~/n(t) + a!yq)Jlx(t)3
Jl 3(1, t) = Jl3(1, t - 1)r; + Jl3(t)
y(t) = Jl 3(t)/a 3(t)
y(1, t) = Jl 3(1, t)/a 3(1, t)
where
d
D(t)=[I-N(xl'y(t))]dU N(x 2 ;y(l,t-1))
U I = Ur(t - 1)
Xl = (r i U + p;Jt) - Ur(t) - Jlx(t))/a(t)
regarding cp(O, I) = 0.
If (6.9.7) is required, the algorithm should be applied
I _ '1''''
'1'''' = '1''''
I 1-1
+ l_cp(t_I)'t'
t-l (m(t) - m(t t - I)).
't',
APPENDIX E
Random Numbers
0.6296 0.2398 0.4581 0.3662 0.4208 0.9293 0.0621 0.0482 0.3030 0.4816
0.8251 0.4668 0.9510 0.7583 0.8647 0.8345 0.7651 0.9910 0.2975 0.7888
0.3556 0.0782 0.0987 0.5638 0.7772 0.6325 0.7109 0.9119 0.5130 0.7772
0.4041 0.7764 0.2097 0.6930 0.3849 0.7126 0.1610 0.9153 0.6557 0.0600
0.5427 0.3808 0.6239 0.8641 0.3968 0.2245 0.4621 0.5486 0.3656 0.3263
0.2164 0.3367 0.1768 0.6085 0.6667 0.4235 0.8771 0.0819 0.9354 0.8975
0.8170 0.6431 0.8471 0.4127 0.3852 0.6215 0.0381 0.6105 0.4928 0.0758
0.8154 0.2912 0.8076 0.8412 0.7663 0.3364 0.7387 0.3261 0.1178 0.3806
0.1709 0.0589 0.8460 0.2330 0.9489 0.9852 0.5352 0.0496 0.1115 0.5144
0.9877 0.2549 0.8766 0.3164 0.5828 0.2l38 0.3750 0.8691 0.3863 0.6536
E.2 Normally (0,1) distributed random numbers
0.6211 -1.1983 -0.5891 1.8947 1.2064 0.9722 -0.7708 0.1443 -0.3138 0.9395
-1.3769 -0.6988 -0.9492 -0.9243 0.2836 -0.6349 -0.3449 -0.0277 0.3379 -0.1070
0.9971 -0.3024 -0.5235 -0.0315 0.2423 0.3936 -0.3664 -1.4520 -0.2174 -2.0894
0.1015 -1.1994 -2.0946 1.1515 2.6083 1.0364 -0.3801 -0.7835 -1.3011 -1.1076
0.3792 -0.1713 -1.2002 -0.6104 0.4016 -0.2125 -0.8872 0.5802 1.2419 0.5373
0.2474 0.7294 0.6982 0.6237 0.6423 -1.3561 -0.1343 0.4261 0.0666 -0.9646
-0.1009 -0.2162 -0.5640 -1.4068 -0.1834 0.0118 -0.3795 -1.9057 0.7346 -1.7216
1.2772 0.3481 0.6472 0.3538 0.4539 -0.0924 -0.3890 1.4981 0.8572 -0.7642
-1.0352 -0.2686 -1.4689 -0.3880 -1.4175 0.3399 1.7439 -1.7855 3.0626 -0.0526
1.9466 -0.6060 -0.2452 0.6367 -0.8552 0.2417 0.3301 0.4039 0.6269 -1.4241
0.6947 -0.4526 -1.4981 2.3178 -0.1857 -0.4655 -1.4804 -1.0227 -1.2948 -0.2342
0.3982 0.1092 1.8420 -1.1016 - 1.4655 -0.1568 -0.4998 -1.0772 -0.4776 0.3044
-1.9760 0.3013 -1.0754 2.2538 0.2810 1.0067 3.8306 -0.1465 -0.2595 -1.1833
1.2749 -1.3010 -0.8147 0.2485 -1.5294 0.2597 -0.3160 -0.7570 3.2355 -0.2540
-l.U~06 -0.1517 -0.6264 -0.3259 -0.2138 0.2344 -1.4570 1.0950 0.2183 0.1515
-1.1553 2.2340 -0.5611 -0.5717 -0.9523 -0.2603 -0.3859 -0.0134 -0.2194 -0.8178
4.2132 -0.6998 0.8881 -0.7762 -0.2138 -0.1504 -0.4196 2.0429 -1.0294 0.2643
0.3140 -0.4378 3.7994 -1.2701 -0.4573 -0.3335 1.0099 -0.6327 0.1963 -0.1462
-0.8154 -0.3648 1.4191 -0.8595 -1.1017 1.7996 -0.0024 0.4220 -0.3136 -1.2075
-0.8118 0.1803 0.3486 2.5939 0.5823 -1.0132 -0.5640 -0.5312 - 1.1353 0.0013
APPENDIX F
F.l Chapter 2
2.4.1
co nk co nk- 1
0: 2 = k~O k 2 e- nk! = n e-nk~l (k _ 1)!k (substitute k = k - 1 + 1)
(a) prob {k:O:; 17} = 0.986 and prob {k:O:; 18} = 0.993. Hence, round-
ed upwards to the next integer value of k
U O=(l8-11)S=7S=7000.
(c) U 0 = 7000.
2.5.2 54300
374 APPENDIX F
e-1
L: (k -
00
2.6.3 Conditions (i) and (ii) are obvious. For proof of condition
(iii) define a variable W~ U= 1,2; k = 1,2, ... ) which gives the time
when the kth event of the process j occurs. The variables W: and
W; are independent and their distribution functions continuous
(see exercise 2.6.2). Then W: - W; is also continuously distributed.
Hence, W: i= W; by probability 1, i.e. the probability of a multiple
event is zero.
f
2.9.2
OO 1
M (s) = eSz _ e- z Zh-l d7.
r 0 r(h) -
Substitute u = (1 - s)z
1 1 foo
Mr(z) = (1 _ S)h x r(h) 0 e- uuh - 1 du
=(l-s)-h.
( h+k-l)(_h )h(_n )k
k n+h n+h
(h+k-l)(h+k-2) ... (h+ l)h 1 nk _nn k
-'-------~--,----------'-------'--'---
k!
X
(l+iY X ----+e
(n+h)k
-
k!'
2.9.5
k 0 I 2 3 4 5 6 7 8 9 10 II 12 13 14 15
P.( x 10 000) 67 337 842 1404 1755 1755 1462 1044 653 363 181 82 34 13 5 2
P.( x 10000) 173 578 1060 1413 1531 1429 1191 907643428271 164 96 54 30 14
376 APPENDIX F
0·2
-,L_-,
I
I
L_
0·1 I
L __
5 10 k 15
Figure F.1.l Poisson probabilities (solid line) and Polya probabilities;
n=5,h=lO.
- 2 [ h J2h [ 2h J2h
M(S) = h + n(l - eS ) = 2h + 2n(1 - e') .
= F(k, n),
(see exercise 2.6.2).
2.9.8
F.2 Chapter 3
3.3.1
n = 100 x 0.01 =1
It is convenient for the computations to take £100 as the monetary
unit. S is a step function having a step 2/3 at I and 1/3 at 2. The
total amount of the claims can be only a non-negative integer
X = 0, 1,2, ... , N, .... Constructing all the possible combinations
of the sums 1 and 2 which can lead to N, the following expansions
are obtained making use ofthe abbreviation Pk(l) = Pk (the difference
compared with (3.2.3) is that N, not k, is taken as the variable):
F(X)=p o =e- 1 =0.37 whenO~X<1
F(X) = Po + P1(t) = 0.61 when 1 ~ X < 2
F(X) = F(l) + P2(t)2 + P1(t) = 0.82 when 2 ~ X < 3.
The following steps are constructed in a similar way: F(3) = 0.92,
F(4) = 0.97, F(5) = 0.99, F(6) = 1.00.
3.3.2
E(X) = 1 x (1 x f + 2 x t) = 1.33 or £133
(J = J[1 X (12 x f+ 22 x t)] = 1.41 or £141.
3.3.3
3.3.4
The assertion follows from the fact that at least one of the factors
included in the last two sums is always zero.
3.3.5
i= 1 i= 1
Let Fn be the simple compound Poisson dJ. with the same claim size
dJ. as X and having n as expected number of claims, and let Gn be
the corresponding standardized dJ., i.e.
Therefore
k
Gn(x) -+ I hic(x - qJ = H(x) as h -+ 00.
i= 1
3.4.3 The proof is the same as for exercise 2.9.3 when eS is substi-
tuted by M z(s).
3.5.2 Assuming the formula valid for k - I the next step is verified
f:
by substituting (3.5.4) and the expression of S into:
it follows
at(az + b) = aat(z) + b = a
a,{/!:, + b) = a2 a2 (z) + 2aba t (z) + b2 = a(a + 1)
a3 (az + b) = a3 a3 (z) + 3a 2 ba 2 (z) + 3ab 2 a t (z) + b 3 = a(a + l)(a + 2).
e-l1.al1.
1 - r(w, a) = r(a + 1)
fro el1.-u (u)l1.-t
w ~ du
h
= 3C(q(w) + 4q(w + h) + 2q(w + 2h) + 4q(w + 3h) + ... ),
1 1 + -~
llC =j(2na) ( 1 + ~- 1 1 + ... ) .
l2a 288a 2
The series should be extended until the terms vanish within the
required limits of accuracy. The integration step h can be taken
as h = (l/k)ja, where k is another coefficient; e.g. k = 10 seems to
give a satisfactory accuracy. For z < 0 the integral can be taken from
w to - 00 giving r(w, a) and h should be negative.
F.2 CHAPTER 3 381
3.5.5
prob{Z:::; Z} = prob {Y:::; In(Z - a)}
1 f1n(z-a)
= -- e-(1/2a 2 )(Y-/l) dy,
afo -00
3.5.7
rJ. .
a.=--ZJ for j < rJ..
J rx-j 0
3.6.1
for Z < M
for Z ~ M.
3.8.1
X= 0 1 2 3 4 5 6
F = 0.162 0.215 0.441 0.521 0.695 0.760 0.858
382 APPENDIX F
3.11.2
prob{X < - nm 2 2)} = prob{X < O} < s.
J (na 2 +n Z
m a q
This implies
nm/ j(na 2 + n2m2a~) > - xe where xe = N y-l(S).
The condition is obtained from this inequality.
For calculation of xe the skewness y = Yx is needed. However, this
depends, according to (3.3.7), on the unknown n. Hence an iteration
is necessary.
Note that a conservative evaluation is obtained by taking xe =
N- 1(s). In this case x = N- 1(lO-4) = - 3.72 which can be taken as
an initial value for iteration. It follows from (3.11.15) that n = 143.
Furthermore, substituting in (3.3.7) this value and the given value
y = 0.531 are obtained. Then according to (3.11.15) and (3.11.16)
x = - 3.01. Repeating the iteration loop four times more the final
evaluation n = 85 is obtained.
If the initial s were 10- 3 , then n = 60 would result.
a 1 = a/a = J1.x
a 2 = a(a + 1}/a 2 = ai + J1.i .
Solving these equations the following is obtained
a= J1.x/ai, a = J1.i/ a i and Yx = 2ax/J1.x·
3.12.2
y = y/6 - 6jy + 3(2jy)2 /3(2/y)1 /3[1 + yz/2]1/ 3.
Develop the term ( )1 /3 as a Taylor series
F.3 Chapter 4
4.2.1
U = yj(na ) - ).P; U + LlU = yj(0.9na 2) - ).P. Hence LlU =
(U + ).P)(:]0.9 - 1) = (20 + 0.05 x 100)(~O.9 - 1) = -1.28
F.3 CHAPTER 4 383
4 3yZ
p = "3 - 4nA Z ~ 0.52,
4.2.3
U = yJ(rz/n + (j'~)P - AP = 0
A = yJ(r z/n + (j'~)
4.2.4 Substitute
a i M-~+i
ai =--·---· .
a-I a-I
into
and calculate U for suitably chosen M values. After some trials the
requested value is obtained: M = 2.0.
4.5.4 Reading from Fig. 4.5.1 one finds M ~ £0.5 million. The
approximate value is £0.45 million.
4.5.6 Let M move upwards until the rate of ir(M), calculated in the
previous exercise, reaches i o. It is convenient to solve M 2 /a 2 from
the equation. From the given data the value 1130 is found for it.
The line most nearly corresponding to it in Table 3.6.1 is M = £2.512
million. Then P = £80.8 million and U = £18.4 million are obtained
from (4.1.8).
F.3 CHAPTER 4 385
Substituting these expressions and the given data into (4.l.8) the
equation
1
20 = j(253.3 + 217.21nM) - 4+M
4.7.1 Applying (3.3.7) and counting also the cases Zre = 0 into n we
have
f:
where
4.7.2
nbZ~[_l_ __
IX - 1 A,,-l
1_J
B"- 1 •
4.7.4
Sre(Zre) = (S(M + Zre) - S(M))/(1 - S(M))
= 1 - e- czre
4.9.1 7 years.
4.10.1
2 1 - (1 - Z)2t 2 2
0" p = 1 _ (1 _ zf Z 0"
For proof of the second part of the exercise replace the individual
variances by their joint upper limit.
4.10.2
4.10.3
Z_ 0.1 _
- 2J(10/200) - 0.224
180000
Pi = 0.224 x 1000 + 0.776 x 300 = £273.
F.4 CHAPl ER 5 387
F.4 Chapter 5
5.1.1 (i) is obvious and (ii) is simply the formula E(E(Y 1 X)) = E(Y)
given in exercise 2.8.1. To prove (iii) note that quite generally
V(Y) = V(E(YIX)) + E(V(YIX)) ~ V(E(YIX)),
where V(Y 1X) = E[ (Y - E(Y 1X) )21 X] is the conditional variance
of Y. In fact, since E(Y 1X) = R(X) is a function of X, it is seen that
V(Y) - V(E(YIX)) = E(y2) - E(R2(X))
= E[E(y2 - R 2(X)IX)]
= E[E(y2 - 2R(X)Y + R 2(X)IX)]
= E{E[(Y - R(X))2IX]} = E[(Y - R(X)f] ~ 0
Then
5.2.2
388 APPENDIX F
(iii) v= YPP-AP.
F.5 Chapter 6
with
with T = 3.25. Verify the result for t = 1,2, ... comparing with the
values which are obtained directly from the difference equation;
the first step being
f(l) = - tf(O) - tf( - 1) = - 0.075.
Compared with Fig. 6.2.1 the oscillation is now rapidly vanishing
as t increases. This shows that the stationary oscillation in Fig. 6.2.1
is maintained by the continued impulses due to the 'noise' e(t).
6.4.1 Now
n(r) = nr;(1 + z(r»
Jlx(r) = n(r)m(r) = nmr~r~(l + z(r»
P(r) = Jlx(r)/(l + z(r»
where nand m are obtained from (3.7.2) and (3.7.4). Then (see (3.7.9»
t2
6.7.1 The second term of the right hand side of (6.7.11) is replaced
by
6.8.1 As primary entries in all the following cases are either uni-
formly (0,1) generated random numbers r or normally N(O, 1)
distributed numbers y.
390 APPENDIX F
(i)
(I) n small
First make a program for the calculation of
k
F(k) = L Pi(n),
i=O
(1) Generate r
(2) k=0
(3) If r :::; F(k) deliver k
(4) k --+ k + 1; go to (3).
(II) n large
Denote by y the argument of N[ ] in (3.5.14), substitute (see
(2.4.5» Y= llJn and z=(k-n)l)n and solve k:
(a) k = [(y + A)IBr - n - t,
where y is normally (0, I) distributed and
(1) Generate y
(2) Calculate k from (a).
F.S CHAPTER 6 391
(ii)
1
X = --In(l- F).
a
(1) Generate r
1
(2) X = - -In r.
a
Note that rand 1 - r are equally distributed.
(iii)
(1) Generate rl' r 2 , ••• , rh
h 1 1 h
(2) X = L --Inri = --In Il rio
i=l C C i=l
(iv)
(1) Generate y
(2) X = a + exp(yu -IL).
(v)
(1) Generate r
(2) X = X or- 1 /a..
). It}
0.4
-0.4
5 10 15 20 25
t
Figure F.S.l.
392 APPENDIX F
x o 2 4
c(X) 1352 0 2394 2013 1494 986 703 439 256 159
F(X) 0.1354 0.1354 0.3749 0.5763 0.7257 0.8244 0.8947 0.9385 0.9641 0.9800
x 10 11 12 13 14 15 16 17 18 19
c(X) 85 40 28 17 15 6 2
F(X) 0.9885 0.9924 0.9952 0.9969 0.9984 0.9990 0.9992 0.9996 0.9997 0.9998
F.6 Chapter 7
7.10.1 From
it follows
A
- - - ~ Yt (1 - c - Ab)O'q (1 - r~19p) - t
1 - r.
+ Z m + ur ,
19p
A = Y,O'qj(1 - rigp).
1 + r igp
Substituting the numerical values from item 7.1(d) (note Ye = 2.33,
ur and w = 1.71) we have A~ 0.028.
Hence
t[A
L=r u(0) +---y
r- l'
J(r2(1- C - Ab)2(1 -
n(l - p)
pt)p)] - -A-
r - l'
which proves that L --+ 00 when t --+ 00 if p < 1 and the expression
in the brackets is positive.
F.7 Chapter 8
8.2.1
F.8 Chapter 9
and since the variables X(t) are mutually independent it follows that
k = flu(t)/O"u(t).
Then
(a) cp(t) < prob {I u(t) - flu(t) I> flu(t)} < O"u(t)2 / flJt)2.
Denoting briefly r = r igp ' it follows from equation (6.6.8a)
1 - rt
(b) fl (t) = u(O)rt + A - - = A[l + o(rt)],
u 1- r
where A is a constant and the expression 0(') vanishes when its
argument tends to zero. Furthermore, according to (6.6.11) (see
exercise 7.1 0.1)
r 1 _pt
(c) 0"2(t) = --.l.(1 - C- A )2 pr2t _ _ .
U n b I-p
F.9 CHAPTER 10 395
and then
qJ(t) < [r 21 - (r2p)l]C,
where Band C are constant. Because rand r2 p = l/rg are by assump-
tion < 1 the premises of exercise 9.1.2 are satisfied which proves the
assertion.
9.2.2
e -RU/(e -RU e -RM) =e RM =e
(-RU)-M/U .
For e- RU = 0.Q1 and M/U :::; 0.02 this is :::; 1.096 and for e- RU = 0.001
it is :::; 1.148.
F.9 Chapter 10
E(H(U)) = aE(G(U)) + b,
from which the assertion follows directly.
Bibliography
Cornish, E.A. and Fisher, R.A. (1937) Moments and cumulants in the
specification of distributions, Rev. Int. Statist. Inst., 5, 307.
Cox, D.R. and Miller, H.D. (1965), The Theory of Stochastic Processes,
Methuen. London.
Cramer, H. (1926) Review of F. Lundberg, SA.
Cramer, H. (1930) On the mathematical theory of risk, Skandia Jubilee
Volume, Stockholm.
Cramer, H. (1945) Mathematical Methods of Statistics, Almqvist and Wick-
sells, Uppsala.
Cramer, H. (1955) Collective risk theory, a survey of the theory from the point
of view of the theory of stochastic processes, Skandia Jubilee Volume,
Stockholm.
De Wit, G.W. and Kastelijn, W.M. (1980) The solvency margin in non-life
insurance companies, AB.
D'Hooge, L. and Goovaerts, M.J. (1976) Numerical treatment of the deter-
mination of the structure function of a tariff class, CA.
DuMouchel, W.H. and Olsten, R.A. (1974) On the distribution of claim
costs, Berkeley Actuarial Research Conference on Credibility.
Eggenberger, F. and Polya, G. (1923) Uber die Statistik der vergetteter
Vorgiinge, Zeitschriftfiir angewandte Mathematik und Mechanik, I.
Ferrari, J.R. (1968) The relationship of underwriting, investment, leverage,
and exposure to total return on owners' equity, peAS.
Forrester, J.W. (1972) Industrial Dynamics, The MIT Press, Massachusetts.
Friedman, J.W. (1977) Oligopoly and the Theory of Games, North Holland
Publishing Company, Amsterdam, New York and Oxford.
Frisque, A. (1974) Dynamic model for insurance company's management,
AB.
Galbraith, J.K. (1973) Economics and the Public Purpose.
Galitz, L. (1982) The ASIR model, GP.
Gelder, H.v. and Schauwers, C. (1981) Planning in theory and practice with
reference to insurance, Delta Lloyd Insurance Group, Amsterdam.
General Electric Company (1962) Tables of the Individual and Cumulative
Terms of Poisson Distribution, D. van Nostrand Co., Princeton.
Gerber, H.U. (1979) An Introduction to the Mathematical Risk Theory,
S.S. Huebner foundation monographs, University of Pennsylvania.
Gerber, H.U. (1982) On the numerical evaluation of the distribution of
aggregate claims and its stop loss premiums, Insurance: Mathematics and
Economics, I.
Geusau, A.B.J.J. von (1981) Some applicable actuarial forecasting models,
Nederlandske Reassurantie Groep.
Godolphin, E.J. (1980) Specifying univariate models for the Zoete equity
index, Maturity Guarantees Working Party.
Gossiaux, A-M. and Lemaire, J. (1981) Methodes d'ajustement de distri-
butions de sinistres, MS.
BILIOGRAPHY 399