
Aarhus University November 17, 2005

Department of Computer Science Ole Østerby

Romberg integration

Extrapolation and error estimation

1. Mathematical preliminaries

Many problems in Numerical Analysis involve the transformation of a continuous problem into a discrete one. Our example here is the calculation of the value of a definite integral, which is turned into computing a weighted sum of function values. An important parameter in this discretization is the mesh length or step size, h:
$$\int_a^b f(x)\,dx \approx h \sum_{i=1}^{n} f\Big(a + \frac{2i-1}{2}\,h\Big); \qquad h = \frac{b-a}{n}. \tag{1}$$

Geometrically speaking, we approximate the area under f(x) by the sum of the areas of a succession of rectangles of width h and height equal to the function value at the mid-point.

Fig. 1. The mid-point rule
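Formula (1) translates directly into a few lines of code. Here is a minimal sketch in Python; the name midpoint_rule and the choice of exp(x) as test integrand are mine, not from the text:

```python
import math

def midpoint_rule(f, a, b, n):
    """Approximate the integral of f over [a, b] by n mid-point rectangles, cf. (1)."""
    h = (b - a) / n
    return h * sum(f(a + (2 * i - 1) / 2 * h) for i in range(1, n + 1))

# Example: integrate exp(x) from 0 to 1; the exact value is e - 1 = 1.71828...
print(midpoint_rule(math.exp, 0.0, 1.0, 8))   # 1.71716..., cf. the n = 8 row of Fig. 2 below
```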

If the integrand is sufficiently smooth, the error of this approximation can be expressed as a series of terms involving even powers of the step size h.
To see this in a simple case, consider the integral of f(x) over a single subinterval of length h placed symmetrically around 0:

$$\int_{-h/2}^{h/2} f(x)\,dx.$$

Now replace f(x) by its Taylor series around 0:

$$f(x) = \sum_{k=0}^{\infty} \frac{x^k}{k!}\, f^{(k)}(0) = f(0) + x f'(0) + \frac{1}{2} x^2 f''(0) + \cdots \tag{2}$$

Disregarding small details about convergence and the interchange of integration and summation, we have

$$\int_{-h/2}^{h/2} f(x)\,dx = \sum_{k=0}^{\infty} \frac{1}{k!}\, f^{(k)}(0) \int_{-h/2}^{h/2} x^k\,dx.$$

Now all the odd-term integrals are 0 and we get

$$\int_{-h/2}^{h/2} f(x)\,dx = \sum_{j=0}^{\infty} \frac{1}{(2j)!}\, f^{(2j)}(0) \cdot 2 \int_0^{h/2} x^{2j}\,dx = \sum_{j=0}^{\infty} \frac{2}{(2j+1)!}\, f^{(2j)}(0) \Big(\frac{h}{2}\Big)^{2j+1} = h f(0) + \frac{1}{24} h^3 f''(0) + \frac{1}{1920} h^5 f^{(4)}(0) + \cdots \tag{3}$$

So the integral of f(x) equals the function value at the mid-point times the interval length, plus a remainder series involving only odd powers of h.
The above expression sheds some light on what we mean by f(x) being smooth: f(x) should be differentiable a certain number of times with a continuous derivative of some high order, such that we can replace the three dots in the Taylor expansion by a suitable remainder term.
Now consider the integral over [a, b], divided into n subintervals each of length h = (b − a)/n. Using formula (3) in each subinterval and adding terms we get

$$\int_a^b f(x)\,dx = h \sum_{i=1}^{n} f(m_i) + R, \tag{4}$$

where $m_i = a + \frac{2i-1}{2}\,h$ are the mid-points of the subintervals, and the error term R can be written as

$$R = \sum_{j=1}^{\infty} \frac{h^{2j+1}}{(2j+1)!\,2^{2j}} \sum_{i=1}^{n} f^{(2j)}(m_i). \tag{5}$$

Using a generalized mean value theorem, a sum of n values of a continuous function can be written as n times a function value at some intermediate point. With nh = b − a we get

$$R = (b-a) \sum_{j=1}^{\infty} \frac{h^{2j}}{(2j+1)!\,2^{2j}}\, f^{(2j)}(\xi_{jn}). \tag{6}$$

Since $\xi_{jn}$ may depend on n this is not quite what we need. So we look for another argument to get the ξ independent of n. If we apply (4) to f'', turn the formula around, and use it together with (5) we get

$$h \sum_{i=1}^{n} f''(m_i) = \int_a^b f''(x)\,dx - \sum_{j=1}^{\infty} \hat{c}_j h^{2j+1}. \tag{7}$$

So we can express a sum of function values as an integral (independent of n) and a remainder term which involves even powers of h times similar sums of function values. Using this argument repeatedly we arrive at

$$\int_a^b f(x)\,dx = h \sum_{i=1}^{n} f(m_i) + c_2 h^2 + c_4 h^4 + \cdots \tag{8}$$

where the c-coefficients involve corresponding derivatives of f with numerical factors which we do not wish to compute analytically.

2. Extrapolation

Before we carry on, let us generalize (8) a bit. Let M(f) be a mathematical problem (such as determining the value of a definite integral) and let N(f, h) be a numerical method to produce an approximation to M based on a step size h. Furthermore assume that the leading terms in the remainder contain h to the powers p and q with coefficients $c_p$ and $c_q$ independent of h:

$$M(f) = N(f, h) + c_p h^p + c_q h^q + \cdots \tag{9}$$

A calculation with twice the step size leads to

$$M(f) = N(f, 2h) + c_p (2h)^p + c_q (2h)^q + \cdots \tag{10}$$

Subtracting (9) from (10) gives

$$N(f, h) - N(f, 2h) = c_p (2^p - 1) h^p + c_q (2^q - 1) h^q + \cdots \tag{11}$$

The leading error term in (9) can now be expressed as

$$c_p h^p = \frac{N(f, h) - N(f, 2h)}{2^p - 1} - \tilde{c}_q h^q - \cdots \tag{12}$$

with

$$\tilde{c}_q = \frac{2^q - 1}{2^p - 1}\, c_q. \tag{13}$$
So we have a formula for estimating (the leading term of) the error if we know p.
But what if we do not know p, or if we think we know it but are not too sure?
Well, we can perform a third calculation, doubling the step size once again:

$$M(f) = N(f, 4h) + c_p (4h)^p + c_q (4h)^q + \cdots \tag{14}$$

Subtract (10) from (14):

$$N(f, 2h) - N(f, 4h) = c_p (2^p - 1)(2h)^p + c_q (2^q - 1)(2h)^q + \cdots \tag{15}$$

and divide (15) by (11):

$$\frac{N(f, 2h) - N(f, 4h)}{N(f, h) - N(f, 2h)} = 2^p\, \frac{c_p + \tilde{c}_q (2h)^{q-p} + \cdots}{c_p + \tilde{c}_q h^{q-p} + \cdots} \tag{16}$$
As the step size becomes smaller, the leading terms in the numerator and the denominator become more and more dominant and the whole expression will approach $2^p$ (if p < q). So we have an experimental method for determining or verifying the value of p. And once we know p we can use formula (12) to estimate (the leading term of) the error. Even more, we can use this expression to modify the computed result:

$$N(f, h) + \frac{N(f, h) - N(f, 2h)}{2^p - 1} = M(f) + (\tilde{c}_q - c_q)\, h^q + \cdots \tag{17}$$
So by adding the correction term we have eliminated the leading term of the error and have arrived at a method of order q instead of p. The constant in front of $h^q$ has changed, but this is of less concern since we did not know it from the beginning anyway. This process is known as extrapolation and is often associated with the name Richardson, who described it in a special case [7]. According to Eduard Stiefel [10] the process was known as early as 1654, when Christiaan Huygens used it to calculate approximations to π [5].
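As a concrete illustration, here is a hedged sketch of (16) and (17) in Python; the function names are mine, and midpoint_rule is the sketch from Section 1:

```python
import math

def observed_quotient(N_h, N_2h, N_4h):
    """The ratio (16); for small h it should be close to 2^p."""
    return (N_2h - N_4h) / (N_h - N_2h)

def extrapolate(N_h, N_2h, p):
    """The correction (17): eliminate the h^p term of the error."""
    return N_h + (N_h - N_2h) / (2 ** p - 1)

# Three mid-point calculations with step sizes h, 2h, 4h (n = 4, 2, 1):
N_4h, N_2h, N_h = (midpoint_rule(math.exp, 0.0, 1.0, n) for n in (1, 2, 4))
print(observed_quotient(N_h, N_2h, N_4h))   # about 3.89, suggesting p = 2
print(extrapolate(N_h, N_2h, 2))            # much closer to e - 1 than N_h itself
```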

3. Back to integration

A necessary condition for formula (8) to hold is that f is differentiable a certain number of times; and for the process of extrapolation to be useful it is also important that these derivatives do not grow too fast. Instead of spending a few hours on a tedious pencil-and-paper analysis, it would be far more efficient to have the computer spend a few extra seconds on calculations that help us make decisions. But is it possible? And is it safe? In my opinion the answer to both questions is yes.
Formula (16) provides the necessary tool if used properly. In our integration procedure we expect the powers of h to be 2, 4, (6, 8, . . . ). Therefore we expect the ratio in (16) to be close to 4 (= $2^2$). Now what do we mean by ‘close to’? And what if it is not?
Well, we should not base far-reaching conclusions on just a single number. Rather we could imagine a series of calculations, each with half the step size of the previous one: $h = \frac{b-a}{n}$; $n = 2^k$; $k = 0, 1, 2, \ldots$
With k calculations we have k − 1 differences and k − 2 quotients. As h gets
smaller the effect of the second term (cq ) and subsequent terms in the error will
diminish and we would expect to see a series of quotients with values approaching
4.00. In Fig. 2 we show the results of a series of calculations using the mid-point
formula on the integral of exp(x) from 0 to 1.
k n N(f, h) = R_{k,0} difference quotient
0 1 1.6487212707001282
1 2 1.7005127166502081 0.0517914459500799
2 4 1.7138152797710871 0.0133025631208790 3.893
3 8 1.7171636649956870 0.0033483852245999 3.973
4 16 1.7180021920526605 0.0008385270569735 3.993
5 32 1.7182119133838587 0.0002097213311982 3.998
6 64 1.7182643493168632 0.0000524359330045 4.000
7 128 1.7182774586501623 0.0000131093332991 4.000
8 256 1.7182807360053651 0.0000032773552028 4.000
9 512 1.7182815553455351 0.0000008193401699 4.000
10 1024 1.7182817601806615 0.0000002048351264 4.000

Fig. 2.
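A table like Fig. 2 can be produced with a short loop. This sketch is mine, not the author's program, and assumes midpoint_rule from Section 1:

```python
import math

prev_val = prev_diff = None
for k in range(11):
    n = 2 ** k
    val = midpoint_rule(math.exp, 0.0, 1.0, n)          # R_{k,0}
    diff = None if prev_val is None else val - prev_val
    quot = None if diff is None or prev_diff is None else prev_diff / diff
    print(k, n, f"{val:.16f}",
          "" if diff is None else f"{diff:.16f}",
          "" if quot is None else f"{quot:.3f}")
    prev_val, prev_diff = val, diff
```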

It is rather obvious that the quotients approach 4.000 as n increases, in accordance with the fact that exp(x) is a smooth function. It is therefore justified to extrapolate, which in this case amounts to adding 1/3 of the difference to N(f, h). Renaming N(f, h) as $R_{k,0}$ we can write

$$R_{k,1} = R_{k,0} + \frac{R_{k,0} - R_{k-1,0}}{3}. \tag{18}$$
3

In this way we eliminate the $h^2$-term in (8). The resulting extrapolated values ($R_{k,1}$) will satisfy a similar expansion with exponents 4, 6, 8, . . . in h. So why not continue the process, now with p = 4, q = 6? In Fig. 3 we show $R_{k,1}$ together with the corresponding differences and quotients.
k n R_{k,1} difference quotient
1 2 1.7177765319669014
2 4 1.7182494674780466 0.0004729355111452
3 8 1.7182797934038869 0.0000303259258403 15.595
4 16 1.7182817010716516 0.0000019076677646 15.897
5 32 1.7182818204942580 0.0000001194226065 15.974
6 64 1.7182818279611980 0.0000000074669400 15.994
7 128 1.7182818284279286 0.0000000004667307 15.998
8 256 1.7182818284570993 0.0000000000291707 16.000
9 512 1.7182818284589250 0.0000000000018257 15.978
10 1024 1.7182818284590369 0.0000000000001119 16.313

Fig. 3.

Again the quotients display a convergent behaviour, now towards $2^4$ = 16.000. We can conclude that exp(x) is indeed smooth, that the error can be assumed to be of the form in (9), and consequently that we can extrapolate once more:

$$R_{k,2} = R_{k,1} + \frac{R_{k,1} - R_{k-1,1}}{15}.$$

In general we can define

$$R_{k,m} = R_{k,m-1} + \frac{R_{k,m-1} - R_{k-1,m-1}}{2^{2m} - 1} \tag{19}$$

because the leading term in the error of $R_{k,m-1}$ contains $h^{2m}$ when the integrand is smooth. But it is always a good idea to verify that extrapolation is justified by checking that these quotients actually look like $2^{2m}$.
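Recursion (19), with the mid-point values as the first column, yields the whole triangular scheme. A minimal sketch, assuming midpoint_rule from Section 1 (the name romberg_midpoint is mine):

```python
import math

def romberg_midpoint(f, a, b, kmax):
    """Build the triangular array R[k][m], 0 <= m <= k, from (19)."""
    R = [[midpoint_rule(f, a, b, 2 ** k)] for k in range(kmax + 1)]
    for k in range(1, kmax + 1):
        for m in range(1, k + 1):
            R[k].append(R[k][m - 1]
                        + (R[k][m - 1] - R[k - 1][m - 1]) / (2 ** (2 * m) - 1))
    return R

R = romberg_midpoint(math.exp, 0.0, 1.0, 6)
print(R[6][4], R[6][4] - (math.e - 1))   # R_{6,4} of Fig. 6 and its tiny error
```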

4. Romberg integration

The R's in the previous section should remind us of the Norwegian mathematician Werner Romberg [8], who first described this systematic extrapolation procedure in connection with a related numerical integration formula, the trapezoidal rule, which is based on the function values at the end points of each subinterval and has an error series similar to (8). The use of the mid-point rule was also mentioned by Romberg and later by Tore Håvie [6].

5. Accuracy – rounding errors

The quotients of $R_{k,0}$ (see Fig. 2) show a very nice monotonic convergence to 4.000. The quotients of $R_{k,1}$ (see Fig. 3) converge monotonically to 16.000 in the beginning but display an erratic behaviour for high values of k. The reason lies in the two sources of error in our computed integrals. The first one, the truncation error, is the one we have been concerned with hitherto, cf. formula (8). This error is a smooth function of h in the case shown; thus the nice behaviour for large values of h. As h becomes smaller, the effect of the higher order terms in (8) diminishes and we have convergence. But there is another contribution to the total error, due to the fact that we have limited accuracy when we calculate function values and add them together. This rounding error grows with n, although in an erratic fashion, and from a certain point it dominates the total error. This becomes even more evident when we form the differences and quotients of $R_{k,2}$, see Fig. 4.
k n R_{k,2} difference quotient
2 4 1.7182809965121231
3 8 1.7182818151322763 0.0000008186201532
4 16 1.7182818282495025 0.0000000131172262 62.408
5 32 1.7182818284557650 0.0000000002062626 63.595
6 64 1.7182818284589940 0.0000000000032290 63.879
7 128 1.7182818284590440 0.0000000000000500 64.631
8 256 1.7182818284590440 0.0000000000000000 Inf
9 512 1.7182818284590466 0.0000000000000027 0.000
10 1024 1.7182818284590444 -0.0000000000000022 -1.200

Fig. 4.

The monotone behaviour seen in the first three quotients is overtaken by the erratic effects of the rounding error in the last four values. We can see enough, though, to be convinced that the error is of sixth order and that extrapolation is justified, at least in the beginning of the table. In the last three rows the differences, which are due exclusively to round-off, are so small that after division by 63 the resulting corrections have no visible effect.
For $R_{k,3}$ the effect of the truncation error can only be seen in the first two quotients (see Fig. 5), which are sufficiently close to 256 (with time we become less choosy) that we dare to extrapolate once more.
When round-off errors take over, the computed values are often very similar and the differences therefore small. For the higher order extrapolations we divide these differences by 63, 255, 1023, or the like, so quite often these corrections cannot be seen. We should not perform these extrapolations in the first place, but if we do anyway they do little harm; usually they do nothing at all.
k n R_{k,3} difference quotient
3 8 1.7182818281262471
4 16 1.7182818284577124 0.0000000003314653
5 32 1.7182818284590391 0.0000000000013267 249.839
6 64 1.7182818284590453 0.0000000000000062 213.393
7 128 1.7182818284590449 -0.0000000000000004 -14.000
8 256 1.7182818284590440 -0.0000000000000009 0.500
9 512 1.7182818284590466 0.0000000000000027 -0.333
10 1024 1.7182818284590444 -0.0000000000000022 -1.200

Fig. 5.

k R_{k,0} R_{k,1} R_{k,2} R_{k,3} R_{k,4}
0 1.6487212707001282
1 1.7005127166502081 1.7177765319669014
2 1.7138152797710871 1.7182494674780466 1.7182809965121231
3 1.7171636649956870 1.7182797934038869 1.7182818151322763 1.7182818281262471
4 1.7180021920526605 1.7182817010716516 1.7182818282495025 1.7182818284577124 1.7182818284590122
5 1.7182119133838587 1.7182818204942580 1.7182818284557650 1.7182818284590391 1.7182818284590442
6 1.7182643493168632 1.7182818279611980 1.7182818284589940 1.7182818284590453 1.7182818284590453

Fig. 6. The Romberg scheme

Looking at the extrapolated values we note that $R_{6,4}$ is accurate to within 2 bits. So with just 127 function evaluations we do much better with extrapolation than with the original mid-point rule with 2047 function evaluations.

6. Error estimation

‘No numerical result is worth anything without a reliable error estimate’, my old Numerical Analysis teacher instructed me. In this context the correction term

$$\frac{R_{k,m-1} - R_{k-1,m-1}}{2^{2m} - 1} \tag{20}$$

can be viewed as an estimate of the error in $R_{k,m-1}$, and the reliability hinges on whether 2m is the right order. This can be ascertained if we have one (or preferably more than one) quotient with a value close to $2^{2m}$. And when we say ‘close’ we mean that there can be a small deviation in the second digit (4.2 for 4, and 210 for 256). Furthermore, if $R_{k,m-1}$ is close to $R_{k-1,m-1}$ then the order may not even be that important if we intend to use (20) as an error estimate.
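In code, this reliability check might look as follows; the sketch and the acceptance tolerance are my own, not prescribed by the text:

```python
def error_estimate(R_curr, R_prev, quotient, m, tol=0.2):
    """Return (20) as an error estimate for R_curr = R_{k,m-1}, where
    R_prev = R_{k-1,m-1}, or None if the observed quotient is not
    reasonably close to the expected 2^(2m)."""
    expected = 2 ** (2 * m)
    if abs(quotient / expected - 1) > tol:
        return None                      # order 2m not confirmed; distrust (20)
    return (R_curr - R_prev) / (expected - 1)
```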

7. Efficiency

Usually function values are more expensive to calculate than simple arithmetic
operations, so when we measure the efficiency of a numerical quadrature rule we
count the number of function evaluations needed to achieve a specified accuracy.
As k increases the number of subintervals and therefore the number of function
evaluations grows exponentially. Each time we increase k by 1 we double the
work. It is therefore extremely important to keep k small. When efficiency is
important we should really compute the triangular Romberg-scheme (Fig. 6) row
by row instead of column by column.
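A sketch of the row-by-row strategy with an early stop; the stopping rule (last correction below a tolerance) is my own illustration, and midpoint_rule is again the sketch from Section 1:

```python
import math

def romberg_rows(f, a, b, tol=1e-12, kmax=20):
    """Build the Romberg triangle row by row; stop as soon as the last
    correction in a row is below tol, keeping k (and the work) small."""
    rows = [[midpoint_rule(f, a, b, 1)]]
    for k in range(1, kmax + 1):
        row = [midpoint_rule(f, a, b, 2 ** k)]
        for m in range(1, k + 1):
            row.append(row[m - 1]
                       + (row[m - 1] - rows[k - 1][m - 1]) / (2 ** (2 * m) - 1))
        rows.append(row)
        if abs(row[-1] - row[-2]) < tol:   # last correction already negligible
            return row[-1], k
    return rows[-1][-1], kmax

print(romberg_rows(math.exp, 0.0, 1.0))    # (about e - 1, reached at a small k)
```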

8. What happens if we extrapolate with the wrong p

The value of p is essential in the extrapolation because we divide the difference by $2^p - 1$. What happens if we guess wrong? We can use our previous analysis and put $c_p = 0$, since p is not the right order, and look at the effect on some other term, $c_q h^q$. From formula (17) we have

$$M(f) = N(f, h) + \frac{N(f, h) - N(f, 2h)}{2^p - 1} + \hat{c}_q h^q + \cdots \tag{21}$$

So with extrapolation the error changes from $c_q h^q$ to $\hat{c}_q h^q$ where

$$\hat{c}_q = c_q - \tilde{c}_q = c_q \Big(1 - \frac{2^q - 1}{2^p - 1}\Big) = \frac{2^p - 2^q}{2^p - 1}\, c_q. \tag{22}$$

If p < q, i.e. we try to eliminate a low order term which isn't there, then the (leading term of the) error changes sign and may increase in magnitude if the difference between p and q is large enough. For example, if a method is of order 2 but we think it is of order 1 and act accordingly, then we shall roughly double the error, as illustrated in Fig. 7, where M indicates the true value, $N_h$ and $N_{2h}$ two numerical approximations, and Ex the (wrong) extrapolation.

Fig. 7. The positions of $N_{2h}$, $N_h$, M, and Ex on the number line.
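A toy numerical check of this doubling: take a fictitious problem with true value M = 0 and a method whose error is exactly h² (so the real order is 2), and extrapolate as if p = 1. All numbers here are made up for illustration:

```python
def N(h):
    return h ** 2     # the whole error: c_p h^p with c_p = 1, p = 2, and M(f) = 0

h = 0.1
wrong = N(h) + (N(h) - N(2 * h)) / (2 ** 1 - 1)   # correction (17) with p = 1
print(N(h), wrong)    # about 0.01 and -0.02: error doubled and changed sign
```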

If p > q, i.e. we miss a low order term (q), then the error keeps its sign and is always reduced by a factor, but the (low) order remains. So the effect of extrapolation is limited but at least not harmful.
If p ≈ q, i.e. we guess almost right, then the error is reduced by a substantial factor, but the order remains.
In practice it may be difficult to distinguish between order q and order p when p < q and $|c_p| \ll |c_q|$. For large values of h, $c_q h^q$ will dominate and the quotients (16) will be close to $2^q$, but as h becomes smaller they will tend towards $2^p$ (unless rounding errors take over before that). In the large interval where the two terms interfere, the order determination is difficult.

9. Aitken extrapolation

If the order is p then the quotients (16) will approach $2^p$. We do not really need the numerical value of p by itself, but only $2^p - 1$, for error estimation or extrapolation; so why not just take the best value for the quotient, subtract 1, and use the result in the denominator of (17)? This procedure is known as Aitken extrapolation [1]. Well, if we have no idea what p really should be, even after having studied the quotients, then we can use this procedure, but of course only if the quotients seem to converge to something. Still, Aitken extrapolation should only be used as a last resort. The reason is given in the previous section: since we probably do not have the right value of p, we do not reduce the order. If the value is almost right then we get a reduction in the size of the error, and this of course is worth something. But we do not make life easier for further extrapolations, because all the different orders are still present, the smaller ones perhaps with reduced coefficients.
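In code, one Aitken step might look like this sketch (the name aitken_step is mine):

```python
def aitken_step(N_h, N_2h, N_4h):
    """Extrapolate with the observed quotient (16), minus 1, as the
    denominator in (17), instead of a theoretical 2^p - 1."""
    quotient = (N_2h - N_4h) / (N_h - N_2h)
    return N_h + (N_h - N_2h) / (quotient - 1)
```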

10. An example of a non-smooth function

Consider

$$\int_0^1 \sqrt{x}\,dx.$$
The integrand is not differentiable at 0, and all the derivatives attain rather large values in an interval to the right of 0. Computing $R_{k,0}$ gives the results shown in Fig. 8.
The differences become smaller and so does the error, and the quotients seem to converge, although not towards 4. If we go ahead as usual and try to eliminate a second order term we get the results of Fig. 9.

k n R_{k,0} difference quotient
0 1 0.7071067811865476
1 2 0.6830127018922193 -0.0240940792943283
2 4 0.6729773970061621 -0.0100353048860572 2.401
3 8 0.6690321721300020 -0.0039452248761601 2.544
4 16 0.6675366756806553 -0.0014954964493467 2.638
5 32 0.6669826864780721 -0.0005539892025832 2.700
6 64 0.6667805032151449 -0.0002021832629272 2.740
7 128 0.6667074406562306 -0.0000730625589144 2.767
8 256 0.6666812141233761 -0.0000262265328544 2.786
9 512 0.6666718428880157 -0.0000093712353604 2.799
10 1024 0.6666685049669573 -0.0000033379210584 2.808

Fig. 8.

k n R_{k,1} difference quotient
1 2 0.6749813421274432
2 4 0.6696322953774764 -0.0053490467499668
3 8 0.6677170971712819 -0.0019151982061945 2.793
4 16 0.6670381768642064 -0.0006789203070755 2.821
5 32 0.6667980234105444 -0.0002401534536620 2.827
6 64 0.6667131087941692 -0.0000849146163752 2.828
7 128 0.6666830864699258 -0.0000300223242434 2.828
8 256 0.6666724719457580 -0.0000106145241678 2.828
9 512 0.6666687191428956 -0.0000037528028624 2.828
10 1024 0.6666673923266044 -0.0000013268162912 2.828

Fig. 9.

The error is reduced slightly in passing from $R_{k,0}$ to $R_{k,1}$, and the convergence towards $2\sqrt{2}$ is now evident. With this information we can look back at Fig. 8 and realize that those quotients were in the process of converging to $2\sqrt{2}$, but that the interference of the second order term was substantial. We can now deduce that the error probably looks like

$$c_{3/2}\, h^{3/2} + c_2 h^2 + \cdots$$

In order to eliminate the first term we should extrapolate using

$$\hat{R}_{k,1} = R_{k,0} + \frac{R_{k,0} - R_{k-1,0}}{2^{3/2} - 1},$$
which gives

k n R̂_{k,1} difference quotient
1 2 0.6698352123559569
2 4 0.6674889065137966 -0.0023463058421603
3 8 0.6668744569963907 -0.0006144495174059 3.819
4 16 0.6667187615129443 -0.0001556954834464 3.946
5 32 0.6666796997222362 -0.0000390617907081 3.986
6 64 0.6666699255168198 -0.0000097742054164 3.996
7 128 0.6666674814158784 -0.0000024441009414 3.999
8 256 0.6666668703562605 -0.0000006110596179 4.000
9 512 0.6666667175892070 -0.0000001527670535 4.000
10 1024 0.6666666793973107 -0.0000000381918963 4.000

Fig. 10.

From now on the error series fits the usual pattern with exponents 2, 4, 6, . . . The only change was the extra $h^{3/2}$-term. Why? Well, we can always be wise after the event and argue that since the integrand involves a broken exponent it is not unnatural to look for related broken exponents. In this special case we can actually supply a mathematical argument. Consider the error of the mid-point rule in the very first subinterval of length h:

$$\int_0^h \sqrt{x}\,dx - h\sqrt{\frac{h}{2}} = \frac{2}{3}\, h^{3/2} - \frac{1}{\sqrt{2}}\, h^{3/2}.$$
Here is clearly a contribution of order 3/2 to the error, and since the integrand is smooth in the rest of the interval up to 1, it is perhaps not surprising that this is the only deviation from the normal case. The idea of generalizing Romberg integration to other exponents was first suggested by Leslie Fox [3]; a sketch of such a scheme follows below.
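A hedged sketch of a generalized scheme, where the exponents to be eliminated are passed as a list, here (3/2, 2, 4, . . .) for √x. It assumes midpoint_rule from Section 1 and is my own reading of the idea, not Fox's algorithm:

```python
import math

def romberg_general(f, a, b, exponents, kmax):
    """Extrapolate mid-point values, eliminating h^p for each p in turn."""
    R = [[midpoint_rule(f, a, b, 2 ** k)] for k in range(kmax + 1)]
    for k in range(1, kmax + 1):
        for m, p in enumerate(exponents[:k], start=1):
            R[k].append(R[k][m - 1]
                        + (R[k][m - 1] - R[k - 1][m - 1]) / (2 ** p - 1))
    return R

R = romberg_general(math.sqrt, 0.0, 1.0, [1.5, 2, 4, 6], 10)
print(R[10][1])   # cf. the R̂_{10,1} entry of Fig. 10: 0.66666667939...
```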
Should we necessarily eliminate the error terms in the right order? Is there anything wrong with proceeding as in Fig. 8 and then, on the basis of the quotients in Fig. 9, eliminating the 3/2-term once we are sure it is there? In principle we should eliminate the error terms in order of importance, but small deviations from this rule only produce small deviations in the results, due to rounding errors in the computer, so it is hard to reject one procedure in favour of the other.

References

[1] Alexander Craig Aitken. On Bernoulli's Numerical Solution of Algebraic Equations. Proc. Roy. Soc. Edinburgh, 46:289–305, 1926.

[2] Thøger Busk. Numerisk Integration. Numerisk Institut, DtH, 1973.

[3] Leslie Fox. Romberg Integration for a Class of Singular Integrands. Comput. J., 10:87–93, 1967.

[4] Leslie Fox and Linda Hayes. On the Definite Integration of Singular Integrands. SIAM Review, 12:449–457, 1970.

[5] Christiaan Huygens. Oeuvres Complètes, Vol. 1. Correspondence 1638–1656.

[6] Tore Håvie. On a Modification of Romberg's Algorithm. BIT, 6:24–30, 1966.

[7] Lewis F. Richardson and J. Arthur Gaunt. The Deferred Approach to the Limit I–II. Trans. Roy. Soc. London, 226A:299–361, 1927.

[8] Werner Romberg. Vereinfachte Numerische Integration. Norske Vid. Selsk. Forh., Trondheim, 28:30–36, 1955.

[9] J. A. Shanks. Romberg Tables for Singular Integrands. Comput. J., 15:360–361, 1972.

[10] Eduard Stiefel. Altes und Neues über numerische Quadratur. ZAMM, 41:408–413, 1961.
