
Stochastic Reserving in

General Insurance

Peter England, PhD


EMB

GIRO 2002
Aims
• To provide an overview of stochastic
reserving models, using England and
Verrall (2002, BAJ) as a basis.
• To demonstrate some of the models
in practice, and discuss practical
issues
Why Stochastic Reserving?
• Computer power and statistical methodology make it
possible
• Provides measures of variability as well as location
(changes emphasis on best estimate)
• Can provide a predictive distribution
• Allows diagnostic checks (residual plots etc)
• Useful in DFA analysis
• Useful in satisfying FSA Financial Strength proposals
Actuarial Certification
• An actuary is required to sign that the
reserves are “at least as large as those
implied by a ‘best estimate’ basis
without precautionary margins”
• The term ‘best estimate’ is intended to
represent “the expected value of the
distribution of possible outcomes of the
unpaid liabilities”
Conceptual Framework

Reserve estimate (Measure of location)

Variability (Prediction error)

Predictive distribution

Example

Incremental claims triangle (origin years down the rows, development years across the columns). The figure after the bar in each row is the chain-ladder reserve estimate for that origin year, followed by the total reserve and the chain-ladder development factors.

357848 766940 610542 482940 527326 574398 146342 139950 227229 67948 | 0
352118 884021 933894 1183289 445745 320996 527804 266172 425046 | 94,634
290507 1001799 926219 1016654 750816 146923 495992 280405 | 469,511
310608 1108250 776189 1562400 272482 352053 206286 | 709,638
443160 693190 991983 769488 504851 470639 | 984,889
396132 937085 847498 805037 705960 | 1,419,459
440832 847631 1131398 1063269 | 2,177,641
359480 1061648 1443370 | 3,920,301
376686 986608 | 4,278,972
344014 | 4,625,811
Total reserve: 18,680,856
Development factors: 3.491  1.747  1.457  1.174  1.104  1.086  1.054  1.077  1.018  1.000
Prediction Errors
Prediction errors (% of reserve) by origin year and model:

Year  | Mack's Distribution-Free | Over-dispersed Poisson | Bootstrap | Negative Binomial | Gamma | Log-Normal
2     | 80  | 116 | 117 | 116 | 48 | 54
3     | 26  | 46  | 46  | 46  | 36 | 39
4     | 19  | 37  | 36  | 36  | 29 | 32
5     | 27  | 31  | 31  | 30  | 26 | 28
6     | 29  | 26  | 26  | 26  | 24 | 26
7     | 26  | 23  | 23  | 22  | 24 | 26
8     | 22  | 20  | 20  | 19  | 26 | 28
9     | 23  | 24  | 24  | 23  | 29 | 31
10    | 29  | 43  | 43  | 41  | 37 | 41
Total | 13  | 16  | 16  | 15  | 15 | 16
Figure 1. Predictive Aggregate Distribution of Total Reserves

[Histogram; x-axis: Total Reserves]
Stochastic Reserving Model Types
• “Non-recursive”
– Over-dispersed Poisson
– Log-normal
– Gamma
• “Recursive”
– Negative Binomial
– Normal approximation to Negative Binomial
– Mack’s model
Stochastic Reserving Model Types
• Chain ladder “type”
– Models which reproduce the chain ladder results
exactly
– Models which have a similar structure, but do not
give exactly the same results
• Extensions to the chain ladder
– Extrapolation into the tail
– Smoothing
– Calendar year/inflation effects
• Models which reproduce chain ladder results
are a good place to start
Definitions
Assume that the data consist of a triangle of incremental claims:

$\{ C_{ij} : j = 1, \ldots, n-i+1;\ i = 1, \ldots, n \}$

The cumulative claims are defined by

$D_{ij} = \sum_{k=1}^{j} C_{ik}$

and the development factors of the chain-ladder technique are denoted by

$\{ \lambda_j : j = 2, \ldots, n \}$
Basic Chain-ladder
$\hat\lambda_j = \dfrac{\sum_{i=1}^{n-j+1} D_{ij}}{\sum_{i=1}^{n-j+1} D_{i,j-1}}$

$\hat D_{i,n-i+2} = \hat\lambda_{n-i+2}\, D_{i,n-i+1}$

$\hat D_{i,j} = \hat\lambda_j\, \hat D_{i,j-1}, \qquad j = n-i+3, \ldots, n$
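As a concrete illustration, here is a minimal Python sketch of the basic chain-ladder calculations above. The function name and the convention of marking unobserved cells with np.nan in a float array are assumptions of this example, not part of the original slides; the later sketches (bootstrapping, Mack's model) reuse this helper.

```python
import numpy as np

def chain_ladder(D):
    """Development factors, completed triangle and reserves for a cumulative
    claims triangle D (n x n float array, np.nan in unobserved future cells)."""
    n = D.shape[0]
    lam = np.ones(n)                      # lam[j]: factor taking column j-1 to column j
    for j in range(1, n):
        rows = ~np.isnan(D[:, j])         # accident years with column j observed
        lam[j] = D[rows, j].sum() / D[rows, j - 1].sum()
    full = D.copy()
    for i in range(n):
        for j in range(n - i, n):         # project beyond the latest diagonal
            full[i, j] = full[i, j - 1] * lam[j]
    latest = np.array([D[i, n - 1 - i] for i in range(n)])
    reserves = full[:, -1] - latest       # estimated ultimate less latest cumulative
    return lam, full, reserves
```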
Over-Dispersed Poisson

$C_{ij}$ ~ independent over-dispersed Poisson, with

$E[C_{ij}] = m_{ij}, \qquad \log m_{ij} = \eta_{ij}$

$Var[C_{ij}] = \phi\, m_{ij}$

$\text{log-likelihood} \propto \sum_{ij} \left( C_{ij} \log m_{ij} - m_{ij} \right)$
What does Over-Dispersed
Poisson mean?
• Relax strict assumption that variance=mean
• Key assumption is that the variance is proportional to the mean
• Data do not have to be positive integers
• Quasi-likelihood has same form as Poisson
likelihood up to multiplicative constant
Predictor Structures

$\eta_{ij} = c + a_i + b_j$  (chain-ladder type)

$\eta_i(t) = c + a_i + b\,t + d \log(t)$  (Hoerl curve)

$\eta_i(t) = c + a_i + s_1(t) + s_2(\log(t))$  (smoother)

plus many others
Chain-ladder
$\eta_{ij} = c + a_i + b_j, \qquad a_1 = 0, \quad b_1 = 0$

$\log m_{ij} = \eta_{ij}$

$\text{log-likelihood} \propto \sum_{ij} \left( C_{ij} \log m_{ij} - m_{ij} \right)$

Other constraints are possible, but this is usually the easiest.
This model gives exactly the same reserve estimates as the chain-ladder technique.
Excel
• Input data
• Create parameters with initial values
• Calculate Linear Predictor
• Calculate mean
• Calculate log-likelihood for each point in the
triangle
• Add up to get log-likelihood
• Maximise using Solver Add-in
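The same recipe can be followed outside Excel. The sketch below mirrors the steps above in Python, with scipy's optimiser playing the role of the Solver add-in. The function name, the parameter vector layout (c, a_2..a_n, b_2..b_n) and the np.nan convention for future cells are assumptions of this illustration, not part of the original slides.

```python
import numpy as np
from scipy.optimize import minimize

def fit_odp(C):
    """Maximise the quasi-Poisson log-likelihood for the chain-ladder predictor.
    C is an n x n array of incremental claims with np.nan in unobserved cells."""
    n = C.shape[0]
    obs = [(i, j) for i in range(n) for j in range(n) if not np.isnan(C[i, j])]

    def neg_loglik(theta):
        # theta = (c, a_2..a_n, b_2..b_n); corner constraints a_1 = b_1 = 0
        c = theta[0]
        a = np.concatenate(([0.0], theta[1:n]))
        b = np.concatenate(([0.0], theta[n:]))
        ll = 0.0
        for i, j in obs:
            eta = c + a[i] + b[j]                 # linear predictor
            ll += C[i, j] * eta - np.exp(eta)     # quasi-Poisson log-likelihood kernel
        return -ll

    theta0 = np.zeros(2 * n - 1)
    theta0[0] = np.log(np.nanmean(C))             # start intercept near log(average claim)
    res = minimize(neg_loglik, theta0, method="BFGS")
    return res.x, -res.fun
```

In practice the same model is usually fitted as a quasi-Poisson GLM with a log link and the chain-ladder design matrix, which is faster and also provides the standard errors needed for prediction errors later.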
Recovering the link ratios
The fitted incremental values are (illustrated for a 4 × 4 triangle):

$\begin{array}{cccc}
e^{c} & e^{c+b_2} & e^{c+b_3} & e^{c+b_4} \\
e^{c+a_2} & e^{c+a_2+b_2} & e^{c+a_2+b_3} & \\
e^{c+a_3} & e^{c+a_3+b_2} & & \\
e^{c+a_4} & & &
\end{array}$
Recovering the link ratios
Calculate ratios of cumulatives, which are the same for each row, e.g. row 2:

Column 2 to Column 1:

$\dfrac{e^{c+a_2} + e^{c+a_2+b_2}}{e^{c+a_2}} = 1 + e^{b_2}$

Column 3 to Column 2:

$\dfrac{e^{c+a_2} + e^{c+a_2+b_2} + e^{c+a_2+b_3}}{e^{c+a_2} + e^{c+a_2+b_2}} = \dfrac{1 + e^{b_2} + e^{b_3}}{1 + e^{b_2}}$
Recovering the link ratios
In general, remembering that $b_1 = 0$:

$\lambda_n = \dfrac{e^{b_1} + e^{b_2} + \cdots + e^{b_n}}{e^{b_1} + e^{b_2} + \cdots + e^{b_{n-1}}}$
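A quick numerical check of this identity, using hypothetical column parameters chosen purely for illustration:

```python
import numpy as np

# Hypothetical column parameters (b_1 = 0 by the corner constraint)
b = np.array([0.0, -0.3, -0.9, -1.6])

# Cumulative sums of exp(b_k) give the fitted cumulative pattern of any row
cum = np.cumsum(np.exp(b))

# Development factors implied by the log-linear model; they do not involve a_i,
# so the same factors apply to every origin year
lambdas = cum[1:] / cum[:-1]
print(lambdas)
```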
Variability in Claims Reserves
• Variability of a forecast
• Includes estimation variance and process
variance

$\text{prediction error} = \left( \text{process variance} + \text{estimation variance} \right)^{\frac{1}{2}}$

• Problem reduces to estimating the two components
Prediction Variance

$E\left[ (y - \hat y)^2 \right] = E\left[ \left\{ (y - E[y]) - (\hat y - E[y]) \right\}^2 \right]$

$\qquad = E\left[ \left\{ (y - E[y]) - (\hat y - E[\hat y]) \right\}^2 \right]$  (taking $E[\hat y] \approx E[y]$)

$\qquad = E\left[ (y - E[y])^2 \right] - 2\,E\left[ (y - E[y])(\hat y - E[\hat y]) \right] + E\left[ (\hat y - E[\hat y])^2 \right]$

$\qquad = E\left[ (y - E[y])^2 \right] + E\left[ (\hat y - E[\hat y])^2 \right]$  (the cross term vanishes because the future observation $y$ is independent of the estimator $\hat y$)

Prediction variance = process variance + estimation variance


Prediction Variance (ODP)

Individual cell:

$\text{MSE}(\hat C_{ij}) \approx \phi\, m_{ij} + m_{ij}^2\, Var(\hat\eta_{ij})$

Row/overall total:

$\text{MSE} \approx \sum \phi\, m_{ij} + \sum m_{ij}^2\, Var(\hat\eta_{ij}) + 2 \sum m_{ij}\, m_{ik}\, Cov(\hat\eta_{ij}, \hat\eta_{ik})$
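These formulae can be evaluated directly from a fitted quasi-Poisson GLM. A minimal sketch, assuming incremental claims C with np.nan in the unobserved cells; the design() helper building the chain-ladder dummy design matrix and the function name are assumptions of this illustration:

```python
import numpy as np
import statsmodels.api as sm

def odp_prediction_error(C):
    """Reserve and root-MSEP for the overall total under the ODP model."""
    n = C.shape[0]
    rows, cols = np.indices((n, n))
    obs = ~np.isnan(C)

    def design(i, j):
        # Intercept plus row/column dummies with the corner constraints a_1 = b_1 = 0
        x = np.zeros(2 * n - 1)
        x[0] = 1.0
        if i > 0:
            x[i] = 1.0
        if j > 0:
            x[n - 1 + j] = 1.0
        return x

    X_obs = np.array([design(i, j) for i, j in zip(rows[obs], cols[obs])])
    fut = ~obs
    X_fut = np.array([design(i, j) for i, j in zip(rows[fut], cols[fut])])

    model = sm.GLM(C[obs], X_obs, family=sm.families.Poisson())
    res = model.fit(scale="X2")                 # Pearson chi-squared scale (phi)

    eta = X_fut @ res.params
    m = np.exp(eta)                             # future fitted incrementals
    cov_eta = X_fut @ res.cov_params() @ X_fut.T

    process = res.scale * m.sum()               # sum of phi * m_ij
    estimation = m @ cov_eta @ m                # includes the covariance terms
    return m.sum(), np.sqrt(process + estimation)
```

For the example triangle earlier, this should reproduce the over-dispersed Poisson prediction errors quoted in the table above (up to rounding).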
Bootstrapping
• Used where standard errors are difficult
to obtain analytically
• Can be implemented in a spreadsheet
• England & Verrall (BAJ, 2002) method
gives results analogous to ODP
• When supplemented by simulating
process variance, gives full distribution
Bootstrapping - Method
• Re-sampling (with replacement) from
data to create new sample
• Calculate measure of interest
• Repeat a large number of times
• Take standard deviation of results

• Common to bootstrap residuals in regression-type models
Bootstrapping the Chain Ladder
(simplified)

1. Fit chain ladder model
2. Obtain Pearson residuals: $r_P = \dfrac{C - \hat m}{\sqrt{\hat m}}$
3. Resample residuals
4. Obtain pseudo data, given the resampled residuals $r_P^*$: $C^* = r_P^* \sqrt{\hat m} + \hat m$
5. Use chain ladder to re-fit model, and estimate future incremental payments
Bootstrapping the Chain Ladder
6. Simulate observation from process
distribution assuming mean is
incremental value obtained at Step 5
7. Repeat many times, storing the
reserve estimates, giving a predictive
distribution
8. Prediction error is then standard
deviation of results
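A minimal sketch of steps 1 to 8, reusing the chain_ladder() helper sketched earlier. It omits the residual bias adjustment and the handling of zero cells used in England & Verrall (2002), and the function name and np.nan convention are assumptions of this illustration.

```python
import numpy as np

def bootstrap_reserves(C, n_sims=5000, seed=0):
    """Bootstrap the chain ladder for an incremental triangle C (np.nan = future)."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    obs = ~np.isnan(C)
    D = np.where(obs, np.cumsum(np.nan_to_num(C), axis=1), np.nan)

    # Step 1: fit the chain ladder and back out fitted incrementals m
    lam, _, _ = chain_ladder(D)
    fitted_cum = np.full((n, n), np.nan)
    for i in range(n):
        last = n - 1 - i
        fitted_cum[i, last] = D[i, last]
        for j in range(last, 0, -1):
            fitted_cum[i, j - 1] = fitted_cum[i, j] / lam[j]
    m = np.diff(np.c_[np.zeros(n), fitted_cum], axis=1)

    # Step 2: Pearson residuals and the Pearson scale parameter phi
    r = (C - m) / np.sqrt(m)
    r_obs = r[obs]
    phi = np.nansum(r ** 2) / (obs.sum() - (2 * n - 1))

    reserves = np.empty(n_sims)
    for s in range(n_sims):
        # Steps 3-4: resample residuals and form pseudo incremental data
        r_star = np.where(obs, rng.choice(r_obs, size=(n, n)), np.nan)
        C_star = r_star * np.sqrt(m) + m
        D_star = np.where(obs, np.cumsum(np.nan_to_num(C_star), axis=1), np.nan)

        # Step 5: refit the chain ladder and project future incremental payments
        _, full_star, _ = chain_ladder(D_star)
        future = np.diff(np.c_[np.zeros(n), full_star], axis=1)[~obs]

        # Step 6: add process variance (gamma with mean mu and variance phi * mu)
        sims = rng.gamma(shape=np.maximum(future, 1e-6) / phi, scale=phi)

        # Step 7: store the simulated total reserve
        reserves[s] = sims.sum()

    # Step 8: prediction error is the standard deviation of the results
    return reserves.mean(), reserves.std(ddof=1), reserves
```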
Log Normal Models
• Log the incremental claims and use a
normal distribution
• Easy to do, as long as incrementals are
positive
• Deriving fitted values, predictions, etc is
not as straightforward as ODP
Log Normal Models

$\log C_{ij} \sim \text{independent } N(\mu_{ij}, \sigma^2), \qquad \mu_{ij} = \eta_{ij}$

$E[C_{ij}] = m_{ij}$

$\hat m_{ij} = \exp\left( \hat\eta_{ij} + \tfrac{1}{2}\hat\sigma_{ij}^2 \right), \qquad \text{where } \hat\sigma_{ij}^2 = Var(\hat\eta_{ij}) + \hat\sigma^2$
Log Normal Models
• Same range of predictor structures
available as before
• Note component of variance in the
mean on the untransformed scale
• Can be generalised to include non-
constant process variances
Prediction Variance
Individual cell:

$\text{MSE}(\hat C_{ij}) \approx \hat m_{ij}^2 \left( \exp(\hat\sigma_{ij}^2) - 1 \right)$

Row/overall total:

$\text{MSE} \approx \sum \hat m_{ij}^2 \left( \exp(\hat\sigma_{ij}^2) - 1 \right) + 2 \sum \hat m_{ij}\, \hat m_{ik} \left( \exp\left( Cov(\hat\eta_{ij}, \hat\eta_{ik}) \right) - 1 \right)$
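A minimal sketch of the log-normal calculations: the chain-ladder predictor is fitted by ordinary least squares on the logged incrementals, and the cell-level formula above is then applied. It assumes strictly positive incremental claims, and the design() helper and function name are illustrative only.

```python
import numpy as np

def lognormal_fit(C):
    """Fitted values and cell-level prediction errors under the log-normal model."""
    n = C.shape[0]
    rows, cols = np.indices((n, n))
    obs = ~np.isnan(C)

    def design(i, j):
        x = np.zeros(2 * n - 1)              # [c, a_2..a_n, b_2..b_n], a_1 = b_1 = 0
        x[0] = 1.0
        if i > 0:
            x[i] = 1.0
        if j > 0:
            x[n - 1 + j] = 1.0
        return x

    X = np.array([design(i, j) for i, j in zip(rows[obs], cols[obs])])
    y = np.log(C[obs])

    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    p = X.shape[1]
    sigma2 = rss[0] / (len(y) - p)           # residual variance on the log scale
    XtX_inv = np.linalg.inv(X.T @ X)

    fut = ~obs
    X_fut = np.array([design(i, j) for i, j in zip(rows[fut], cols[fut])])
    eta = X_fut @ beta
    var_eta = np.einsum("ij,jk,ik->i", X_fut, XtX_inv, X_fut) * sigma2

    sig2_ij = var_eta + sigma2               # total variance on the log scale
    m = np.exp(eta + 0.5 * sig2_ij)          # fitted values on the original scale
    msep = m ** 2 * (np.exp(sig2_ij) - 1.0)  # cell-level prediction variance
    return m, np.sqrt(msep)
```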
Over-Dispersed Negative
Binomial
$C_{ij}$ ~ over-dispersed negative binomial, with

mean $(\lambda_j - 1)\, D_{i,j-1}$ and

variance $\phi\, \lambda_j (\lambda_j - 1)\, D_{i,j-1}$
Over-Dispersed Negative
Binomial

$D_{ij}$ ~ over-dispersed negative binomial, with

mean $\lambda_j\, D_{i,j-1}$ and

variance $\phi\, \lambda_j (\lambda_j - 1)\, D_{i,j-1}$
Derivation of Negative
Binomial Model from ODP
• See Verrall (IME, 2000)
• Estimate Row Parameters first
• Reformulate the ODP model, allowing for the
fact that the Row Parameters have been
estimated
• This gives the Negative Binomial model,
where the Row Parameters no longer
appear
Prediction Errors
Prediction variance = process variance +
estimation variance

Estimation variance is larger for ODP than NB

but

Process variance is larger for NB than ODP

End result is the same


Estimation variance and
process variance
• This is now formulated as a recursive
model
• We require recursive procedures to
obtain the estimation variance and
process variance
• See Appendices 1&2 of England and
Verrall (BAJ, 2002) for details
Normal Approximation to
Negative Binomial

$D_{ij}$ ~ normal, with

mean $\lambda_j\, D_{i,j-1}$ and

variance $\phi_j\, D_{i,j-1}$
Joint modelling
1. Fit 1st stage model to the mean, using
arbitrary scale parameters (e.g. $\phi_j = 1$)
2. Calculate (Pearson) residuals
3. Use squared residuals as the response in a
2nd stage model
4. Update scale parameters in 1st stage model,
using fitted values from stage 3, and refit
5. (Iterate for non-Normal error distributions)
Estimation variance and
process variance
• This is also formulated as a recursive
method
• We require recursive procedures to
obtain the estimation variance and
process variance
• See Appendices 1&2 of England and
Verrall (BAJ, 2002) for details
Mack’s Model

Specifies the first two moments only:

$D_{ij}$ has mean $\lambda_j\, D_{i,j-1}$ and variance $\sigma_j^2\, D_{i,j-1}$
Mack’s Model
Provides estimators for $\lambda_j$ and $\sigma_j^2$:

$\hat\lambda_j = \dfrac{\sum_{i=1}^{n-j+1} w_{ij}\, f_{ij}}{\sum_{i=1}^{n-j+1} w_{ij}}$

where $w_{ij} = D_{i,j-1}$ and $f_{ij} = \dfrac{D_{ij}}{D_{i,j-1}}$
Mack’s Model

$\hat\sigma_j^2 = \dfrac{1}{n-j} \sum_{i=1}^{n-j+1} w_{ij} \left( f_{ij} - \hat\lambda_j \right)^2$

$\text{MSEP}\left( \hat R_i \right) \approx \hat D_{in}^2 \sum_{k=n-i+2}^{n} \dfrac{\hat\sigma_k^2}{\hat\lambda_k^2} \left( \dfrac{1}{\hat D_{i,k-1}} + \dfrac{1}{\sum_{q=1}^{n-k+1} D_{q,k-1}} \right)$
Comparison

• The Over-dispersed Poisson and Negative Binomial
models are different representations of the same thing
• The Normal approximation to the Negative Binomial
and Mack's model are essentially the same
The Bornhuetter-Ferguson Method
• Useful when the data are unstable
• First get an initial estimate of ultimate
• Estimate chain-ladder development
factors
• Apply these to the initial estimate of
ultimate to get an estimate of
outstanding claims
Estimates of outstanding
claims
To estimate ultimate claims using the chain-ladder technique, you
would multiply the latest cumulative claims in each row by f, a
product of development factors.

Hence, an estimate of what the latest cumulative claims should be is
obtained by dividing the estimate of ultimate by f. Subtracting this
from the estimate of ultimate gives an estimate of outstanding
claims:

 1
Estimated Ultimate  1  
 f 
The Bornhuetter-Ferguson Method
Let the initial estimate of ultimate claims for accident year i be $M_i$.

The estimate of outstanding claims for accident year i is

$M_i \left( 1 - \dfrac{1}{\lambda_{n-i+2}\, \lambda_{n-i+3} \cdots \lambda_n} \right) = M_i\, \dfrac{1}{\lambda_{n-i+2}\, \lambda_{n-i+3} \cdots \lambda_n} \left( \lambda_{n-i+2}\, \lambda_{n-i+3} \cdots \lambda_n - 1 \right)$
Comparison with Chain-ladder

$M_i\, \dfrac{1}{\lambda_{n-i+2}\, \lambda_{n-i+3} \cdots \lambda_n}$ replaces the latest cumulative
claims for accident year i, to which the usual chain-ladder
parameters are applied to obtain the estimate of outstanding
claims. For the chain-ladder technique, the estimate of
outstanding claims is

$D_{i,n-i+1} \left( \lambda_{n-i+2}\, \lambda_{n-i+3} \cdots \lambda_n - 1 \right)$
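A small worked sketch of this comparison for a single accident year, using purely hypothetical figures for the prior ultimate, the latest cumulative claims and the remaining development factors:

```python
import numpy as np

# Hypothetical inputs for one accident year i
M_i = 1_000_000            # prior (initial) estimate of ultimate claims
D_latest = 650_000         # latest cumulative claims D_{i,n-i+1}
lam_future = [1.20, 1.05]  # remaining development factors lambda_{n-i+2}, ..., lambda_n

f = np.prod(lam_future)                   # cumulative factor to ultimate

# Bornhuetter-Ferguson: apply the development pattern to the prior ultimate
bf_outstanding = M_i * (1 - 1 / f)

# Chain ladder: apply the same pattern to the latest cumulative claims
cl_outstanding = D_latest * (f - 1)

print(bf_outstanding, cl_outstanding)
```

The Bornhuetter-Ferguson estimate applies the development pattern to the prior ultimate $M_i$, whereas the chain ladder applies it to the latest cumulative claims, so the BF reserve is pulled towards the prior.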
Multiplicative Model for
Chain-Ladder
$C_{ij}$ ~ independent over-dispersed Poisson, with

$E[C_{ij}] = m_{ij} = x_i\, y_j, \qquad \text{where } \sum_{k=1}^{n} y_k = 1$

$x_i$ is the expected ultimate for origin year i

$y_j$ is the proportion paid in development year j
BF as a Bayesian Model
Put a prior distribution on the row parameters.
The Bornhuetter-Ferguson method assumes there
is prior knowledge about these parameters, and
therefore uses a Bayesian approach. The prior
information could be summarised as the
following prior distributions for the row
parameters:
$x_i \sim \text{independent } \Gamma(\alpha_i, \beta_i)$
BF as a Bayesian Model
• Using a perfect prior (very small
variance) gives results analogous to the
BF method
• Using a vague prior (very large
variance) gives results analogous to the
standard chain ladder model
• In a Bayesian context, uncertainty
associated with a BF prior can be
incorporated
Stochastic Reserving and
Bayesian Modelling
• Other reserving models can be fitted in a
Bayesian framework
• When fitted using simulation methods, a
predictive distribution of reserves is
automatically obtained, taking account of
process and estimation error
• This is very powerful, and obviates the need
to calculate prediction errors analytically
Limitations
• Like traditional methods, different stochastic
methods will give different results
• Stochastic models will not be suitable for all
data sets
• The model results rely on underlying
assumptions
• If a considerable level of judgement is
required, stochastic methods are unlikely to
be suitable
• All models are wrong, but some are useful!
References
England, PD and Verrall, RJ (2002) Stochastic Claims
Reserving in General Insurance, British Actuarial Journal
Volume 8 Part II (to appear).
Verrall, RJ (2000) An investigation into stochastic claims
reserving models and the chain ladder technique,
Insurance: Mathematics and Economics, 26, 91-99.

Also see list of references in the first paper.
