Romberg integration
1. Mathematical preliminaries
Geometrically speaking, we approximate the area under f(x) by the sum of the areas of a succession of rectangles of width h and height equal to the function value at the mid-point.
Consider first the integral of f(x) over an interval of length h placed symmetrically around 0:

$$\int_{-h/2}^{h/2} f(x)\,dx.$$

Expanding f in a Taylor series around 0,

$$f(x) = \sum_{k=0}^{\infty} \frac{x^k}{k!}\, f^{(k)}(0) = f(0) + x f'(0) + \tfrac{1}{2} x^2 f''(0) + \cdots \tag{2}$$

and integrating term by term (the odd powers integrate to zero over the symmetric interval), we get

$$\int_{-h/2}^{h/2} f(x)\,dx = \sum_{j=0}^{\infty} \frac{2}{(2j+1)!}\, f^{(2j)}(0) \left(\frac{h}{2}\right)^{2j+1} = h f(0) + \frac{1}{24} h^3 f''(0) + \frac{1}{1920} h^5 f^{(4)}(0) + \cdots \tag{3}$$
So the integral of f(x) is equal to the function value at the mid-point times the interval length, plus a remainder series involving only odd powers of h.
The above expression sheds some light on what we mean by f(x) being smooth: f(x) should be differentiable a certain number of times with a continuous derivative of some high order, so that we can replace the three dots in the Taylor expansion by a suitable remainder term.
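As a quick illustration of (3), one can compare the exact integral over a symmetric interval with the first few terms of the series. The following is a minimal sketch in Python, with f(x) = e^x as an assumed test integrand (my choice, not from the text); all derivatives of e^x at 0 equal 1, so the first three terms are h + h³/24 + h⁵/1920.

```python
import math

h = 0.1
exact = math.exp(h / 2) - math.exp(-h / 2)   # integral of exp(x) over [-h/2, h/2]
approx = h + h**3 / 24 + h**5 / 1920         # first three terms of (3) for f = exp
print(exact - approx)                        # roughly h**7 / 322560, the next term of the series
```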
Now for the integral over [a, b], which is divided into n subintervals each of length h = (b − a)/n. Using formula (3) in each subinterval and adding terms we get

$$\int_a^b f(x)\,dx = h \sum_{i=1}^{n} f(m_i) + R, \tag{4}$$

where m_i = a + (2i − 1)h/2 are the mid-points of the subintervals, and the error term R can be written as

$$R = \sum_{j=1}^{\infty} \frac{h^{2j+1}}{(2j+1)!\,2^{2j}} \sum_{i=1}^{n} f^{(2j)}(m_i). \tag{5}$$
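A direct transcription of the composite rule (4) might look as follows (a minimal Python sketch; the function name `midpoint_rule` and its interface are my own):

```python
def midpoint_rule(f, a, b, n):
    """Composite mid-point rule (4): h times the sum of f at the n mid-points."""
    h = (b - a) / n
    return h * sum(f(a + (2 * i - 1) * h / 2) for i in range(1, n + 1))
```

With f = exp, a = 0, b = 1 this should reproduce, up to rounding, the values R_{k,0} listed in Fig. 2 below.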
Using a generalized mean value theorem, a sum of n values of a continuous function can be written as n times a function value at some intermediate point. With nh = b − a we get

$$R = (b - a) \sum_{j=1}^{\infty} \frac{h^{2j}}{(2j+1)!\,2^{2j}}\, f^{(2j)}(\xi_{jn}). \tag{6}$$
Since ξ_{jn} may depend on n this is not quite what we need. So we look for another argument to get the ξ independent of n. If we apply (4) to f'', turn the formula around, and use it together with (5) we get

$$h \sum_{i=1}^{n} f''(m_i) = \int_a^b f''(x)\,dx - \sum_{j=1}^{\infty} \hat{c}_j h^{2j+1}. \tag{7}$$
2. Extrapolation
$$M(f) = N(f, h) + c_p h^p + c_q h^q + \cdots \tag{9}$$
with

$$\tilde{c}_q = \frac{2^q - 1}{2^p - 1}\, c_q. \tag{13}$$
So we have a formula for estimating (the leading term of) the error if we know p.
But what if we do not know p, or if we think we know it but are not too sure?
Well, we can perform a third calculation doubling the step size once again:
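A minimal sketch of this three-computation check, in Python (the helper name and interface are my own): with results at step sizes h, 2h and 4h, the quotient of consecutive differences should be close to 2^p when p is the true order (the ratio referred to in (16)), and the correction with denominator 2^p − 1 (cf. (17) and (18)) removes the leading error term.

```python
def check_and_extrapolate(N, h, p):
    """Three computations with step sizes h, 2h, 4h: return the observed quotient
    of differences (to be compared with 2**p) and the extrapolated value."""
    n_h, n_2h, n_4h = N(h), N(2 * h), N(4 * h)
    quotient = (n_2h - n_4h) / (n_h - n_2h)
    return quotient, n_h + (n_h - n_2h) / (2**p - 1)
```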
3. Back to integration
make decisions. But is it possible? And is it safe? In my opinion the answer to both questions is yes.
Formula (16) provides the necessary tool if used properly. In our integration procedure we expect the powers of h to be 2, 4, (6, 8, . . . ). Therefore we expect the ratio in (16) to be close to 4 (= 2²). Now what do we mean by ‘close to’? And what if it is not?
Well, we should not base far-reaching conclusions on just a single number. Rather we could imagine a series of calculations, each with half the step size of the previous one: h = (b − a)/n; n = 2^k; k = 0, 1, 2, . . .
With k calculations we have k − 1 differences and k − 2 quotients. As h gets
smaller the effect of the second term (cq ) and subsequent terms in the error will
diminish and we would expect to see a series of quotients with values approaching
4.00. In Fig. 2 we show the results of a series of calculations using the mid-point
formula on the integral of exp(x) from 0 to 1.
k n N(f, h) = R_{k,0} difference quotient
0 1 1.6487212707001282
1 2 1.7005127166502081 0.0517914459500799
2 4 1.7138152797710871 0.0133025631208790 3.893
3 8 1.7171636649956870 0.0033483852245999 3.973
4 16 1.7180021920526605 0.0008385270569735 3.993
5 32 1.7182119133838587 0.0002097213311982 3.998
6 64 1.7182643493168632 0.0000524359330045 4.000
7 128 1.7182774586501623 0.0000131093332991 4.000
8 256 1.7182807360053651 0.0000032773552028 4.000
9 512 1.7182815553455351 0.0000008193401699 4.000
10 1024 1.7182817601806615 0.0000002048351264 4.000
Fig. 2.
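The table in Fig. 2 is easy to reproduce. A minimal Python sketch (the small `midpoint` helper is my own and is repeated here so the snippet is self-contained):

```python
import math

def midpoint(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (2 * i - 1) * h / 2) for i in range(1, n + 1))

# R_{k,0} for n = 2**k, k = 0..10, with differences and quotients as in Fig. 2.
R = [midpoint(math.exp, 0.0, 1.0, 2**k) for k in range(11)]
for k in range(2, 11):
    quotient = (R[k - 1] - R[k - 2]) / (R[k] - R[k - 1])
    print(k, 2**k, R[k], R[k] - R[k - 1], round(quotient, 3))   # quotients approach 4
```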
$$R_{k,1} = R_{k,0} + \frac{R_{k,0} - R_{k-1,0}}{3}. \tag{18}$$
In this way we eliminate the h²-term in (8). The resulting extrapolated values (R_{k,1}) will satisfy a similar expansion with exponents 4, 6, 8, . . . in h. So why not continue the process, now with p = 4, q = 6? In Fig. 3 we show R_{k,1} together with the corresponding differences and quotients.
k n R_{k,1} difference quotient
1 2 1.7177765319669014
2 4 1.7182494674780466 0.0004729355111452
3 8 1.7182797934038869 0.0000303259258403 15.595
4 16 1.7182817010716516 0.0000019076677646 15.897
5 32 1.7182818204942580 0.0000001194226065 15.974
6 64 1.7182818279611980 0.0000000074669400 15.994
7 128 1.7182818284279286 0.0000000004667307 15.998
8 256 1.7182818284570993 0.0000000000291707 16.000
9 512 1.7182818284589250 0.0000000000018257 15.978
10 1024 1.7182818284590369 0.0000000000001119 16.313
Fig. 3.
$$R_{k,2} = R_{k,1} + \frac{R_{k,1} - R_{k-1,1}}{15}.$$
In general we can define

$$R_{k,m} = R_{k,m-1} + \frac{R_{k,m-1} - R_{k-1,m-1}}{2^{2m} - 1} \tag{19}$$

because the leading term in the error of R_{k,m-1} contains h^{2m} when the integrand is smooth. But it is always a good idea to verify that extrapolation is justified by checking that these quotients actually look like 2^{2m}.
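A sketch of the full triangular array defined by (19), in Python (the names are my own); each row starts from a new mid-point value R_{k,0} and the inner loop applies (19) across the row:

```python
def romberg_table(f, a, b, kmax):
    """Triangular array R[k][m] of (19), built row by row from the mid-point rule."""
    R = []
    for k in range(kmax + 1):
        n = 2**k
        h = (b - a) / n
        row = [h * sum(f(a + (2 * i - 1) * h / 2) for i in range(1, n + 1))]   # R_{k,0}
        for m in range(1, k + 1):
            row.append(row[m - 1] + (row[m - 1] - R[k - 1][m - 1]) / (2**(2 * m) - 1))
        R.append(row)
    return R
```

Before trusting column m one should, as argued above, look at the quotients of consecutive differences within column m − 1 and check that they resemble 2^{2m}.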
4. Romberg integration
5. Accuracy – rounding errors
The quotients of R_{k,0} (see Fig. 2) show a very nice monotonic convergence to 4.000. The quotients of R_{k,1} (see Fig. 3) converge monotonically to 16.000 in the beginning but display an erratic behaviour for high values of k. The reason for this is the two sources of error in our computed integrals. The first one, the truncation error, is the one we have been concerned with hitherto, cf. formula (8). This error is a smooth function of h in the case shown, hence the nice behaviour for large values of h. As h becomes smaller the effect of the higher order terms in (8) diminishes and we have convergence. But there is another contribution to the total error, due to the fact that we have a limited accuracy when we calculate function values and add them together. This rounding error will grow with n, although in an erratic fashion, and will from a certain point dominate the total error. This becomes even more evident when we form differences and quotients of R_{k,2}, see Fig. 4.
k n R_{k,2} difference quotient
2 4 1.7182809965121231
3 8 1.7182818151322763 0.0000008186201532
4 16 1.7182818282495025 0.0000000131172262 62.408
5 32 1.7182818284557650 0.0000000002062626 63.595
6 64 1.7182818284589940 0.0000000000032290 63.879
7 128 1.7182818284590440 0.0000000000000500 64.631
8 256 1.7182818284590440 0.0000000000000000 Inf
9 512 1.7182818284590466 0.0000000000000027 0.000
10 1024 1.7182818284590444 -0.0000000000000022 -1.200
Fig. 4.
The monotone behaviour seen in the first three quotients is overtaken by the erratic effects of the rounding error in the last four values. We can see enough, though, to be convinced that the error is of sixth order and that extrapolation is justified, at least in the beginning of the table. In the last three rows the differences, which are exclusively due to round-off, are already so small that after division by 63 the resulting correction terms have no visible effect.
For R_{k,3} the effect of the truncation error can only be seen in the first two quotients (see Fig. 5), which are sufficiently close to 256 (with time we become less choosy) that we dare to extrapolate once more.
When round-off errors take over, the computed values are often very similar and the differences therefore small. For the higher order extrapolations we divide these differences by 63, 255, 1023, or the like, so quite often these corrections cannot be seen. We should not perform these extrapolations in the first place, but if we do anyway they do little harm; usually they do nothing at all.
k n R_{k,3} difference quotient
3 8 1.7182818281262471
4 16 1.7182818284577124 0.0000000003314653
5 32 1.7182818284590391 0.0000000000013267 249.839
6 64 1.7182818284590453 0.0000000000000062 213.393
7 128 1.7182818284590449 -0.0000000000000004 -14.000
8 256 1.7182818284590440 -0.0000000000000009 0.500
9 512 1.7182818284590466 0.0000000000000027 -0.333
10 1024 1.7182818284590444 -0.0000000000000022 -1.200
Fig. 5.
Looking at the extrapolated values we note that R_{6,4} is accurate to within 2 bits. So with just 127 function evaluations (1 + 2 + 4 + · · · + 64 mid-points) we do much better with extrapolation than with the original mid-point rule with 2047 function evaluations.
6. Error estimation
‘No numerical result is worth anything without a reliable error estimate’ my old
Numerical Analysis teacher instructed me. In this context the correction term
$$\frac{R_{k,m-1} - R_{k-1,m-1}}{2^{2m} - 1} \tag{20}$$
can be viewed as an estimate of the error in R_{k,m-1} and the reliability hinges on whether 2m is the right order. This can be ascertained if we have one (or preferably more than one) quotient with a value close to 2^{2m}. And when we say ‘close’ we mean that there can be a small deviation in the second digit (4.2 for 4, and 210 for 256). Furthermore, if R_{k,m-1} is close to R_{k-1,m-1} then the order may not even be that important if we intend to use (20) as an error estimate.
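In code, the reliability check might be expressed like this (an illustrative Python helper; the name and the 20% tolerance are my own choices, not from the text):

```python
def error_estimate(r_curr, r_prev, quotient, m, tol=0.2):
    """Correction term (20) for R_{k,m-1}, together with a flag saying whether the
    observed quotient of differences is reasonably close to 2**(2*m)."""
    expected = 2**(2 * m)
    order_ok = abs(quotient - expected) < tol * expected
    return (r_curr - r_prev) / (expected - 1), order_ok
```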
7. Efficiency
Usually function values are more expensive to calculate than simple arithmetic
operations, so when we measure the efficiency of a numerical quadrature rule we
count the number of function evaluations needed to achieve a specified accuracy.
As k increases the number of subintervals and therefore the number of function
evaluations grows exponentially. Each time we increase k by 1 we double the
work. It is therefore extremely important to keep k small. When efficiency is
important, we should really compute the triangular Romberg scheme (Fig. 6) row by row instead of column by column.
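A row-by-row version with a stopping criterion might look as follows (a Python sketch under my own naming and tolerance choices): it stops as soon as the last correction term, used as an error estimate in the spirit of (20), falls below eps. A more careful version would also verify the quotients before each extrapolation, as discussed above.

```python
import math

def romberg(f, a, b, eps=1e-10, kmax=12):
    """Row-by-row Romberg scheme based on the mid-point rule, stopping early."""
    rows = []
    for k in range(kmax + 1):
        n = 2**k
        h = (b - a) / n
        row = [h * sum(f(a + (2 * i - 1) * h / 2) for i in range(1, n + 1))]
        correction = math.inf
        for m in range(1, k + 1):
            correction = (row[m - 1] - rows[k - 1][m - 1]) / (2**(2 * m) - 1)
            row.append(row[m - 1] + correction)
        rows.append(row)
        if abs(correction) < eps:
            return row[-1]
    return rows[-1][-1]

print(romberg(math.exp, 0.0, 1.0))   # about 1.718281828459045
```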
$$\hat{c}_q = c_q - \tilde{c}_q = c_q \left(1 - \frac{2^q - 1}{2^p - 1}\right) = c_q\, \frac{2^p - 2^q}{2^p - 1}. \tag{22}$$
If p < q, i.e. we try to eliminate a low order term which isn't there, then the (leading term of the) error changes sign and may increase in magnitude if the difference between p and q is large enough. For example, if a method is of order 2 but we think it is of order 1 and act accordingly, then we shall roughly double the error, as illustrated in Fig. 7 where M indicates the true value, N_h and N_{2h} two numerical approximations, and Ex the (wrong) extrapolation.
Fig. 7. (Number line showing N_{2h}, N_h, the true value M, and the wrong extrapolation Ex, in that order.)
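The doubling effect is easy to check numerically. A tiny illustration (Python; the model error c·h² stands in for a genuine order-2 method, and all names are my own):

```python
M = 0.0                                      # pretended true value
N = lambda h: M + 1.0 * h**2                 # idealized approximation of true order 2
h = 0.1
Ex = N(h) + (N(h) - N(2 * h)) / (2**1 - 1)   # extrapolate as if the order were 1
print(N(h) - M, Ex - M)                      # errors 0.01 and -0.02: sign flips, size doubles
```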
If p > q, i.e. we miss a low order term (q), then the error is of the same sign and always reduced by a factor, but the (low) order remains. So the effect of extrapolation is limited but at least not harmful.
If p ≈ q, i.e. we guess almost right, then the error is reduced by a substantial factor, but the order remains.
In practice it may be difficult to distinguish between order q and order p when p < q and |c_p| ≪ |c_q|. For large values of h, c_q h^q will dominate and the quotients (16) will be close to 2^q, but as h becomes smaller they will tend towards 2^p (unless rounding errors take over before that). In the large interval where the two terms interfere the order determination is difficult.
9. Aitken extrapolation
If the order is p then the quotients (16) will approach 2^p. We really do not need the numerical value of p by itself, but only 2^p − 1 for error estimation or extrapolation, so why not just use the best value for the quotient, subtract 1, and use the result in the denominator of (17)? This procedure is known as Aitken extrapolation [1]. Well, if we have no idea what p really should be, even after having studied the quotients, then we can use this procedure, but of course only if the quotients seem to converge to something. But Aitken extrapolation should only be used as a last resort. The reason for this is given in the previous section. Since we probably do not have the right value of p we do not reduce the order. If the value is almost right then we shall get a reduction in the size of the error and this of course is worth something. But we do not make life easier for further extrapolations because all the different orders are still present, the smaller ones perhaps with reduced coefficients.
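A minimal sketch of the procedure (Python; the names are my own): given three successive values with the step size halved each time, the observed quotient of differences replaces 2^p in the denominator.

```python
def aitken(r0, r1, r2):
    """Aitken extrapolation from three successive approximations r0, r1, r2
    (step size halved each time): the observed quotient of differences stands in
    for 2**p, so the correction divides by (quotient - 1) instead of 2**p - 1."""
    quotient = (r1 - r0) / (r2 - r1)     # empirical estimate of 2**p
    return r2 + (r2 - r1) / (quotient - 1)
```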
Consider

$$\int_0^1 \sqrt{x}\,dx.$$
The integrand is not differentiable at 0 and all the derivatives attain rather large
values in an interval to the right of 0. Computing R_{k,0} gives the results shown in Fig. 8.
The differences become smaller and so does the error, and the quotients seem to
converge although not towards 4. If we go ahead as usual and try to eliminate a
second order term we get the results of Fig. 9.
k n R_{k,0} difference quotient
0 1 0.7071067811865476
1 2 0.6830127018922193 -0.0240940792943283
2 4 0.6729773970061621 -0.0100353048860572 2.401
3 8 0.6690321721300020 -0.0039452248761601 2.544
4 16 0.6675366756806553 -0.0014954964493467 2.638
5 32 0.6669826864780721 -0.0005539892025832 2.700
6 64 0.6667805032151449 -0.0002021832629272 2.740
7 128 0.6667074406562306 -0.0000730625589144 2.767
8 256 0.6666812141233761 -0.0000262265328544 2.786
9 512 0.6666718428880157 -0.0000093712353604 2.799
10 1024 0.6666685049669573 -0.0000033379210584 2.808
Fig. 8.
Fig. 9. (R_{k,1} with differences and quotients; the quotients approach 2√2.)
The error is reduced slightly in passing from R_{k,0} to R_{k,1} and the convergence towards 2√2 is now evident. With this information we can look back at Fig. 8 and realize that those quotients were in the process of converging to 2√2 but that the interference of the second order term was substantial. We can deduce now that the error probably looks like

$$c_{3/2} h^{3/2} + c_2 h^2 + \cdots$$
k n R̂_{k,1} difference quotient
1 2 0.6698352123559569
2 4 0.6674889065137966 -0.0023463058421603
3 8 0.6668744569963907 -0.0006144495174059 3.819
4 16 0.6667187615129443 -0.0001556954834464 3.946
5 32 0.6666796997222362 -0.0000390617907081 3.986
6 64 0.6666699255168198 -0.0000097742054164 3.996
7 128 0.6666674814158784 -0.0000024441009414 3.999
8 256 0.6666668703562605 -0.0000006110596179 4.000
9 512 0.6666667175892070 -0.0000001527670535 4.000
10 1024 0.6666666793973107 -0.0000000381918963 4.000
Fig. 10.
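Judging from the numbers in Fig. 10, R̂_{k,1} eliminates the h^{3/2} term directly from R_{k,0}: the correction is divided by 2^{3/2} − 1 ≈ 1.828 instead of 3. A Python sketch of this (the names are my own; the exact value of the integral is 2/3):

```python
import math

def midpoint(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (2 * i - 1) * h / 2) for i in range(1, n + 1))

R0 = [midpoint(math.sqrt, 0.0, 1.0, 2**k) for k in range(11)]
# Eliminate the 3/2-order term: divide the correction by 2**1.5 - 1 instead of 3.
R1 = [R0[k] + (R0[k] - R0[k - 1]) / (2**1.5 - 1) for k in range(1, 11)]
print(R1[0], R1[-1])   # should match, up to rounding, the first and last entries of Fig. 10
```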
From now on the error series fits the usual pattern with exponents 2, 4, 6, 8, . . . The only change was the extra h^{3/2}-term. Why? Well, we can always be wise after the event and argue that since the integrand involves a broken exponent it is not unnatural to look for related broken exponents. In this special case we can actually supply a mathematical argument. Consider the error of the mid-point rule in the very first subinterval of length h:

$$\int_0^h \sqrt{x}\,dx - h\sqrt{\frac{h}{2}} = \frac{2}{3} h^{3/2} - \frac{1}{\sqrt{2}}\, h^{3/2}.$$
Here there is clearly a contribution of order 3/2 to the error, and since the integrand is smooth in the rest of the interval up to 1 it is perhaps not surprising that this is the only deviation from the normal case. The idea of generalizing Romberg integration to other exponents was first suggested by Leslie Fox [3].
Should we necessarily eliminate the error terms in the right order? Is there anything wrong with proceeding as in Fig. 8 and, on the basis of the quotients in Fig. 9, then eliminating the 3/2-term once we are sure it is there? In principle we should eliminate the error terms in order of importance, but small deviations from this rule only produce small deviations in the results (due to rounding errors in the computer), so it is hard to reject one procedure in favour of the other.
References
[1] Alexander Craig Aitken. On Bernoulli's Numerical Solution of Algebraic Equations. Proc. Roy. Soc. Edinburgh, 46:289–305, 1926.
[3] Leslie Fox. Romberg Integration for a Class of Singular Integrands. Comput. J., 10:87–93, 1967.
[4] Leslie Fox and Linda Hayes. On the Definite Integration of Singular Integrands. SIAM Review, 12:449–457, 1970.
[7] Lewis F. Richardson and J. Arthur Gaunt. The Deferred Approach to the Limit I–II. Trans. Roy. Soc. London, 226A:299–361, 1927.
[9] J. A. Shanks. Romberg Tables for Singular Integrands. Comput. J., 15:360–361, 1972.
[10] Eduard Stiefel. Altes und Neues über numerische Quadratur. ZAMM, 41:408–413, 1961.