
Laplace’s Approximation of the Gamma Function,

A Direct Approach
With the term “Laplace’s Approximation” we want to denote the following approximation of
\[
\Gamma(s+1) = \int_0^{\infty} e^{-x} x^s \, dx
\]
if the modulus of the complex number s ∈ C+ (the set of complex numbers with a positive real part) tends to ∞:
\[
\Gamma(s+1) = \sqrt{2\pi s}\; s^s e^{-s} \left( 1 + \sum_{k=1}^{n} \frac{1\cdot 3\cdot 5\cdots (2k+1)\; a_{2k+1}}{2^{k+1/2}\, s^k} + r_n(s) \right), \tag{1}
\]
where for the residual r_n (n ∈ N) the relation
\[
r_n(s)\, |s|^n \to 0 \qquad (|s| \to \infty) \tag{2}
\]
is valid and the coefficients a_j are given by
\[
a_1 = \sqrt{2}, \qquad a_j = \frac{2\, a_{j-1}}{(j+1)\, a_1} - \frac{1}{2 a_1} \sum_{i=2}^{j-1} a_i\, a_{j+1-i} \qquad (j \ge 2). \tag{3}
\]
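
As a numerical aside (not part of the original text), the recursion (3) and the truncated expansion (1) are easy to evaluate for real s; the sketch below is my own, the function names are invented for illustration, and it assumes only the Python standard library (math.prod needs Python 3.8 or newer).

\begin{verbatim}
import math

def laplace_coefficients(m):
    """Coefficients a_1, ..., a_m from the recursion (3)."""
    a = {1: math.sqrt(2.0)}
    for j in range(2, m + 1):
        tail = sum(a[i] * a[j + 1 - i] for i in range(2, j))
        a[j] = 2.0 * a[j - 1] / ((j + 1) * a[1]) - tail / (2.0 * a[1])
    return a

def gamma_laplace(s, n=3):
    """Approximation (1) for real s > 0, truncated after n correction terms."""
    a = laplace_coefficients(2 * n + 1)
    total = 1.0
    for k in range(1, n + 1):
        dfact = math.prod(range(1, 2 * k + 2, 2))   # 1*3*5*...*(2k+1)
        total += dfact * a[2 * k + 1] / (2.0 ** (k + 0.5) * s ** k)
    return math.sqrt(2.0 * math.pi * s) * s ** s * math.exp(-s) * total

print(gamma_laplace(10.0))   # approximately 3628800 (= 10!)
print(math.gamma(11.0))      # 3628800.0
\end{verbatim}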

Around 1730, Stirling and, at the same time, his friend de Moivre found similar asymptotic
formulas for log s! by using special series expansions. The now most common version
\[
\log s! = \left(s + \tfrac{1}{2}\right)\log s - s + \tfrac{1}{2}\log\tfrac{\pi}{2} + \log 2 + \sum_{k \ge 1} \frac{B_{2k}}{2k(2k-1)\, s^{2k-1}},
\]
which is very often, and somewhat erroneously, named after Stirling, was found and
deduced by Euler in 1755. Euler’s reasoning was essentially based on “his” summation formula.
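
As a small numerical aside (my own illustration, not in the original text): truncating Euler's series after the Bernoulli numbers B_2 = 1/6, B_4 = −1/30, B_6 = 1/42 already reproduces log s! very accurately for moderate s. The sketch below assumes only the Python standard library; the function name is invented.

\begin{verbatim}
import math

def euler_log_factorial(s, bernoulli=(1/6, -1/30, 1/42)):
    """Euler's series for log s!, truncated after the given Bernoulli numbers."""
    total = (s + 0.5) * math.log(s) - s + 0.5 * math.log(math.pi / 2) + math.log(2.0)
    for k, b2k in enumerate(bernoulli, start=1):
        total += b2k / (2 * k * (2 * k - 1) * s ** (2 * k - 1))
    return total

print(euler_log_factorial(10.0))   # approximately 15.10441
print(math.lgamma(11.0))           # log(10!) = 15.10441...
\end{verbatim}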
Laplace, on the other hand, found a direct approach to an asymptotic series expansion for
s!. In modern terms, his method was a special case of applying the now so-called “method of steepest
descent” to the function Γ(s + 1); Laplace originated this method for real functions in 1774.
Thereafter, Laplace deduced the first terms of the series in (1) at several places in his work. In the
18th century, however, no mathematician was troubled by the “modern” problem of estimating
the residuals of such expansions (like r_n(s) in (1)), because everybody was convinced of a very
fast “convergence” of the respective series, owing to the daily numerical practice of the time.
Moreover, the law for the general coefficients (3) was apparently not found by Laplace, and
the equivalence of Laplace’s and Euler’s expansions was not clear. As late as 1844, Cauchy
considered the general term in Laplace’s series to be unknown. Dirichlet, however, in his course
on integral calculus of 1838, had already given a method for deducing (3), as unpublished lecture
notes reveal. The problem of a closer discussion of the residual was solved within the framework of
Euler’s method around 1850, and in the case of Laplace’s method only by modern expositions
(for the latter see [Copson 1965, 53-57], with an application of Watson’s lemma). For details
on the history of asymptotic approximations to the Gamma function see [Tweddle 2003]
(on Stirling and de Moivre), [Fischer 2000, 17–23; 73–76] (on Euler, Laplace, Dirichlet, and
Cauchy), and [Hald 1998, 203–228] (on Laplace).
In the classroom, the typical difficulty with both methods, Euler’s as well as Laplace’s, is that
the deduction of the respective asymptotic series, at least in the majority of textbooks,
is based on a more or less complicated “theory” which has to be treated first. In the following,
I present an elementary approach to Laplace’s method which can be traced back directly to
its historical roots.

Laplace’s Method

The Laplacian idea for deducing an asymptotic expansion of Γ(s + 1) (for a positive real s) was
based on a special representation of the integrand in
\[
\Gamma(s+1) = \int_0^{\infty} e^{-x} x^s \, dx = \int_{-s}^{\infty} e^{-(z+s)} (z+s)^s \, dz. \tag{4}
\]
Laplace only considered the case of a positive real s. The integrand attains its maximal value
M = e^{-s} s^s for x = s, i.e. z = 0. Laplace set e^{-s} e^{-z} (z+s)^s equal to M e^{-t(z)^2}. From this
representation he developed the idea of a transformation of the variable of integration from z
to t by
\[
t := \begin{cases} -\sqrt{z - s\log(1 + z/s)} & \text{for } -s < z \le 0, \\[4pt] +\sqrt{z - s\log(1 + z/s)} & \text{for } 0 \le z. \end{cases} \tag{5}
\]
Laplace expanded t(z) as a series of powers in z, and then, vice versa, z as a series of powers
in t. Thus he obtained
\[
\Gamma(s+1) = M \int_{-\infty}^{\infty} e^{-t^2} \sqrt{2s}\left( 1 + \frac{4t}{3\sqrt{2s}} + \frac{t^2}{6s} + \cdots \right) dt
= s^{s+1/2} e^{-s} \sqrt{2\pi}\left( 1 + \frac{1}{12s} + \frac{1}{288 s^2} + \cdots \right).
\]
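
This last display can be checked symbolically. Writing v = z/s and t = √s·x (as in the next section), Laplace's inverse series becomes z(t) = s·g(t/√s), where g has the Taylor coefficients a_j of (3), so dz/dt = Σ_j j a_j t^{j-1} s^{(2-j)/2}. The sketch below (my own, assuming sympy is available) integrates e^{-t²} dz/dt term by term over the real line and compares with √(2πs)(1 + 1/(12s) + 1/(288 s²)).

\begin{verbatim}
import sympy as sp

t, s = sp.symbols('t s', positive=True)

a = {1: sp.sqrt(2)}
for j in range(2, 6):
    a[j] = 2*a[j - 1]/((j + 1)*a[1]) - sum(a[i]*a[j + 1 - i] for i in range(2, j))/(2*a[1])

dz_dt = sum(j*a[j]*t**(j - 1)*s**sp.Rational(2 - j, 2) for j in a)
integral = sp.integrate(sp.expand(sp.exp(-t**2)*dz_dt), (t, -sp.oo, sp.oo))

target = sp.sqrt(2*sp.pi*s)*(1 + 1/(12*s) + 1/(288*s**2))
print(sp.simplify(integral - target))   # prints 0
\end{verbatim}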
As already mentioned above, Laplace did not care whether his operations were valid
in a modern rigorous sense. Thus, from our point of view, some modifications and further
considerations are necessary for a proof of (1) and (2).

The Transformation of Variable

We use a representation of Γ(s + 1) (s ∈ C+), which is slightly different from Laplace’s, and
which results directly from (4):
\[
\Gamma(s+1) = s^{s+1} e^{-s} \int_{-1}^{\infty} e^{-vs}\, (v+1)^s \, dv. \tag{6}
\]
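
A quick numerical sanity check of (6) for one real value of s (my own aside; it assumes scipy is available):

\begin{verbatim}
import math
from scipy.integrate import quad

s = 7.5
integral, _ = quad(lambda v: math.exp(-v * s) * (1.0 + v) ** s, -1.0, math.inf)
print(s ** (s + 1) * math.exp(-s) * integral)   # approximately 14034.4
print(math.gamma(s + 1))                        # Gamma(8.5) = 14034.4...
\end{verbatim}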

We now use the transformation of variable
\[
x := \operatorname{sign}(v)\,\sqrt{v - \log(1+v)} =: f(v) \qquad (-1 < v), \tag{7}
\]
such that
\[
\Gamma(s+1) = s^{s+1} e^{-s} \int_{-\infty}^{\infty} e^{-s x^2}\, v'(x)\, dx.
\]

For |v| < 1 we have
\[
f(v) = v\,\sqrt{\frac{1}{2} - \frac{v}{3} + \frac{v^2}{4} - \cdots}, \tag{8}
\]
from which the existence of derivatives (of arbitrary order) in a neighborhood of v = 0 can be
seen. In particular, f'(0) = 1/√2. Altogether, we have f ∈ C^∞(]−1; ∞[).
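
A short sympy check (my own aside) of the radicand in (8) and of the value f'(0) = 1/√2:

\begin{verbatim}
import sympy as sp

v = sp.symbols('v', positive=True)
print(sp.series((v - sp.log(1 + v)) / v**2, v, 0, 4))
# 1/2 - v/3 + v**2/4 - v**3/5 + O(v**4), the radicand in (8)
print(sp.limit(sp.sqrt(v - sp.log(1 + v)) / v, v, 0))
# sqrt(2)/2, i.e. f'(0) = 1/sqrt(2)
\end{verbatim}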
For a closer discussion of the inverse function g(x) of the function f(v) we start with the remark
that g ∈ C^∞(R) because f'(v) ≠ 0. From (f(v))² = v − log(1 + v) we get by substitution of
v = g(x):
\[
x^2 = g(x) - \log(1 + g(x)).
\]
By differentiating this equation with respect to x we obtain:
\[
g(x)\, g'(x) = 2x\,(1 + g(x)). \tag{9}
\]
The use of this differential equation for finding the law of Laplace’s series was essentially
Dirichlet’s idea. By iteration of (9) one gets:
\[
g(x)\, g^{(n+1)}(x) = \begin{cases}
-(g'(x))^2 + 2\,[x\,g'(x) + g(x)] + 2 & \text{for } n = 1, \\[6pt]
-\sum_{j=1}^{n} \binom{n}{j}\, g^{(j)}(x)\, g^{(n+1-j)}(x) + 2\,[x\,g^{(n)}(x) + n\,g^{(n-1)}(x)] & \text{for } n \ge 2.
\end{cases} \tag{10}
\]

For x ≠ 0, by (10), and by substitution of x = f(g(x)) = sign(g(x)) √(g(x) − log(1 + g(x))), we
get with the abbreviation v = g(x):
\[
\begin{aligned}
v' &= \frac{2\,\operatorname{sign}(v)\,\sqrt{v - \log(1+v)}\,(1+v)}{v}, \\[4pt]
v'' &= \frac{2\,(1+v)\,(v^2 - 2v + 2\log(1+v))}{v^3}, \\[4pt]
v''' &= \operatorname{sign}(v)\,\frac{4\,\sqrt{v - \log(1+v)}\,(1+v)\,(v^2 + 6v - 4v\log(1+v) - 6\log(1+v))}{v^5}.
\end{aligned}
\]
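
These closed forms also make a direct numerical test of the transformed integral easy. The sketch below (my own illustration; it assumes scipy is available and the function names are invented) inverts f by root bracketing, evaluates v'(x) = 2x(1+v)/v via (9), and compares with Γ(s+1) for a real s.

\begin{verbatim}
import math
from scipy.integrate import quad
from scipy.optimize import brentq

def f(v):
    """x = sign(v) * sqrt(v - log(1+v)) on (-1, infinity)."""
    return math.copysign(math.sqrt(max(v - math.log1p(v), 0.0)), v)

def g(x):
    """Inverse of f; for x < 0 solve in u = 1 + v to stay accurate near v = -1."""
    if x == 0.0:
        return 0.0
    if x > 0.0:
        hi = 1.0
        while f(hi) < x:
            hi *= 2.0
        return brentq(lambda v: f(v) - x, 0.0, hi)
    return brentq(lambda u: -math.sqrt(max(u - 1.0 - math.log(u), 0.0)) - x,
                  1e-300, 1.0) - 1.0

def v_prime(x):
    if x == 0.0:
        return math.sqrt(2.0)             # v'(0) = sqrt(2)
    v = g(x)
    return 2.0 * x * (1.0 + v) / v        # differential equation (9)

s = 6.0
integral, _ = quad(lambda x: math.exp(-s * x * x) * v_prime(x), -5.0, 8.0)
print(s ** (s + 1) * math.exp(-s) * integral)   # approximately 720
print(math.gamma(s + 1))                        # 6! = 720
\end{verbatim}

Cutting the range off at x = −5 and x = 8 is harmless here: outside this interval the factor e^{-s x²} is far below double precision.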
Generally, based on (10), we are able to prove the following lemma by induction:

Let x ≠ 0, or v ≠ 0, respectively. Then, for n ≥ 1,
\[
v^{(n)} = \begin{cases}
\operatorname{sign}(v)\,\sqrt{v - \log(1+v)}\,(1+v)\, \dfrac{\Pi_n(v, \log(1+v))}{v^{2n-1}} & \text{for } n \text{ odd}, \\[10pt]
(1+v)\, \dfrac{\Pi_n(v, \log(1+v))}{v^{2n-1}} & \text{for } n \text{ even},
\end{cases} \tag{11}
\]
where Π_n(v, log(1 + v)) is a polynomial in v and log(1 + v) with
\[
\Pi_1 = 2, \qquad \Pi_2 = 2\,(v^2 - 2v + 2\log(1+v)), \qquad \deg(\Pi_n) < 2n - 3 \quad (n \ge 3). \tag{12}
\]
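
Before turning to the inductive proof, here is a quick symbolic spot check of the case n = 2 on the branch v > 0 (my own aside, assuming sympy; it treats v as the independent variable and uses v'' = v' · d(v')/dv with v' taken from the explicit formula above):

\begin{verbatim}
import sympy as sp

v = sp.symbols('v', positive=True)                       # branch v > 0, sign(v) = 1
v1 = 2*sp.sqrt(v - sp.log(1 + v))*(1 + v)/v              # v'   (Pi_1 = 2)
v2 = sp.simplify(v1*sp.diff(v1, v))                      # v''  = v' * d(v')/dv
target = 2*(1 + v)*(v**2 - 2*v + 2*sp.log(1 + v))/v**3   # claimed form with Pi_2
print(sp.simplify(v2 - target))                          # prints 0
\end{verbatim}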

For the proof, we firstly have to note that the assertion is true for n ≤ 3. Now, let’s assume
that the assertion is true up to the n-th derivative, and that n is an even number. Then, by
(10), for v ≠ 0:
\[
\begin{aligned}
v\,v^{(n+1)} &= -\sum_{j=1}^{n/2} \binom{n+1}{j}\, v^{(j)}\, v^{(n+1-j)} + 2\left[ v^{(n)}\,\operatorname{sign}(v)\sqrt{v-\log(1+v)} + n\,v^{(n-1)} \right] \\
&= -\operatorname{sign}(v)\sqrt{v-\log(1+v)}\,(1+v)\left[ (n+1)\,(1+v)\,\frac{\Pi_1\Pi_n}{v\,v^{2n-1}} + \frac{(n+1)n}{2}\,(1+v)\,\frac{\Pi_2\Pi_{n-1}}{v^3\,v^{2n-3}} + \cdots + \binom{n+1}{n/2}\,(1+v)\,\frac{\Pi_{n/2}\,\Pi_{n/2+1}}{v^{n-1}\,v^{n+1}} \right] \\
&\qquad + 2\,\operatorname{sign}(v)\sqrt{v-\log(1+v)}\,(1+v)\left[ \frac{\Pi_n\,v}{v^{2n}} + n\,\frac{\Pi_{n-1}\,v^3}{v^{2n}} \right].
\end{aligned}
\]
We have:
\[
\deg\big((1+v)\,\Pi_1\Pi_n\big) < 2n-2, \qquad \deg\big((1+v)\,\Pi_2\Pi_{n-1}\big) < 2n-2, \qquad
\deg\big((1+v)\,\Pi_j\Pi_{n-j+1}\big) < 2n-3 \quad (\tfrac{n}{2} \ge j \ge 3).
\]
Moreover:
\[
\deg(\Pi_n\,v) < 2n-2, \qquad \deg(\Pi_{n-1}\,v^3) < 2n-2.
\]
Altogether, v v^{(n+1)} is equal to sign(v)√(v − log(1+v))(1+v) times a fraction whose numerator
is a polynomial in v and log(1+v) with degree less than 2n − 2, and whose denominator is v^{2n}.
Now, let’s assume that n is an odd number. For abbreviation we introduce
\[
p_j(v) := \begin{cases} 1 & \text{for } j \text{ even}, \\ v - \log(1+v) & \text{for } j \text{ odd} \end{cases} \qquad (j \in \mathbb{N}).
\]
Then, for v ≠ 0:
\[
\begin{aligned}
v\,v^{(n+1)} &= -\left[ \sum_{j=1}^{(n-1)/2} \binom{n+1}{j}\, v^{(j)}\, v^{(n+1-j)} + \binom{n}{\frac{n+1}{2}}\, v^{\left(\frac{n+1}{2}\right)} v^{\left(\frac{n+1}{2}\right)} \right]
+ 2\left[ v^{(n)}\,\operatorname{sign}(v)\sqrt{v-\log(1+v)} + n\,v^{(n-1)} \right] \\
&= -(1+v)^2\left[ (n+1)\,(v-\log(1+v))\,\frac{\Pi_1\Pi_n}{v\,v^{2n-1}} + \frac{(n+1)n}{2}\,\frac{\Pi_2\Pi_{n-1}}{v^3\,v^{2n-3}} + \cdots \right. \\
&\qquad\qquad \left. + \binom{n+1}{\frac{n-1}{2}}\, p_{\frac{n-1}{2}}(v)\,\frac{\Pi_{\frac{n-1}{2}}\,\Pi_{\frac{n+3}{2}}}{v^{n-2}\,v^{n+2}} + \binom{n}{\frac{n+1}{2}}\, p_{\frac{n+1}{2}}(v)\,\frac{\Pi_{\frac{n+1}{2}}\,\Pi_{\frac{n+1}{2}}}{v^{n}\,v^{n}} \right] \\
&\qquad + 2\,(1+v)\left[ (v-\log(1+v))\,\frac{\Pi_n\,v}{v^{2n}} + n\,\frac{\Pi_{n-1}\,v^3}{v^{2n}} \right].
\end{aligned}
\]
We have:
\[
\begin{gathered}
\deg\big((1+v)(v-\log(1+v))\,\Pi_1\Pi_n\big) < 2n-1, \qquad \deg\big((1+v)\,\Pi_2\Pi_{n-1}\big) < 2n-2, \\
\deg\big((1+v)(v-\log(1+v))\,\Pi_j\Pi_{n-j+1}\big) < 2n-2 \quad (\tfrac{n-1}{2} \ge j \ge 3 \text{ odd}), \\
\deg\big((1+v)\,\Pi_j\Pi_{n-j+1}\big) < 2n-3 \quad (\tfrac{n-1}{2} \ge j \ge 3 \text{ even}).
\end{gathered}
\]
Moreover:
\[
\deg\big((v-\log(1+v))\,\Pi_n\,v\big) < 2n-1, \qquad \deg(\Pi_{n-1}\,v^3) < 2n-2.
\]
Altogether, v v^{(n+1)} is equal to (1+v) times a fraction whose numerator is a polynomial in v
and log(1+v) with degree less than 2n − 1, and whose denominator is v^{2n}.

From the two cases n even and n odd we can see that the assertion is true for n + 1.
Because of (11) we are able to conclude that for n ≥ 3:
\[
\lim_{x \to -\infty} g^{(n)}(x) = \lim_{v \to -1} v^{(n)} = 0 \qquad \text{and} \qquad \lim_{x \to \infty} g^{(n)}(x) = \lim_{v \to \infty} v^{(n)} = 0.
\]
Because g^{(n)}(x) is continuous for x ∈ R, and because, additionally, g''(x) is bounded, the result
is:

For n ≥ 2 the derivative g^{(n)}(x) is a bounded function.

Making Laplace’s Idea Rigorous

We are now ready for a closer discussion of the coefficients in the expansion
\[
g'(x) = g'(0) + g''(0)\,x + \cdots + \frac{g^{(m)}(0)\, x^{m-1}}{(m-1)!} + \frac{g^{(m+1)}(\vartheta x)}{m!}\, x^m \qquad (0 < \vartheta < 1). \tag{13}
\]
1.) Let a_j = g^{(j)}(0)/j!. Then, by substitution of x = 0 = g(0) in (10) we obtain for n ≥ 2:
\[
0 = -\sum_{j=1}^{n} \binom{n}{j}\, j!\, a_j\, (n+1-j)!\, a_{n+1-j} + 2n\,(n-1)!\, a_{n-1},
\]

and after a little algebra:
\[
(n+1)\, a_1\, a_n = 2\, a_{n-1} - \sum_{j=2}^{n-1} (n+1-j)\, a_j\, a_{n+1-j}.
\]

Because of the identity
\[
\sum_{j=2}^{n-1} (n+1-j)\, a_j\, a_{n+1-j} = \frac{n+1}{2} \sum_{j=2}^{n-1} a_j\, a_{n+1-j}
\]

we come to the result:
\[
a_j = \frac{2\, a_{j-1}}{(j+1)\, a_1} - \frac{1}{2 a_1} \sum_{i=2}^{j-1} a_i\, a_{j+1-i} \qquad (j \ge 2). \tag{14}
\]
We already know that a_1 = √2. By (14), a_2 = 2/3, a_3 = 1/(9√2) (and so on) can be calculated.
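
For illustration (the values a_4 = −2/135 and a_5 = 1/(540√2) used below also follow from (14), but are not stated in the text), these coefficients reproduce the familiar leading Stirling corrections in (1):
\[
\frac{1\cdot 3\; a_3}{2^{3/2}\, s} = \frac{3}{9\sqrt{2}\cdot 2\sqrt{2}\; s} = \frac{1}{12\, s},
\qquad
\frac{1\cdot 3\cdot 5\; a_5}{2^{5/2}\, s^2} = \frac{15}{540\sqrt{2}\cdot 4\sqrt{2}\; s^2} = \frac{1}{288\, s^2}.
\]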
2.) By the lemma above, we can conclude immediately that for m ≥ 1:
\[
\frac{g^{(m+1)}(\vartheta x)}{m!} = K_{m+1}(x),
\]
where K_{m+1}(x) is a bounded function, and altogether
\[
g'(x) = \sum_{j=1}^{m} j\, a_j\, x^{j-1} + K_{m+1}(x)\, x^m \qquad (m \ge 1). \tag{15}
\]

Now, we obtain for n ∈ N_0 on account of (15):
\[
\Gamma(s+1) = s^{s+1} e^{-s} \int_{-\infty}^{\infty} e^{-s x^2}\, v'(x)\, dx
= s^{s+1} e^{-s} \int_{-\infty}^{\infty} \left( \sum_{j=1}^{2n+2} j\, a_j\, x^{j-1} + K_{2n+3}(x)\, x^{2n+2} \right) e^{-s x^2}\, dx.
\]

By the well known relations
\[
\int_{-\infty}^{\infty} e^{-st^2}\, dt = \sqrt{\frac{\pi}{s}}
\]
and
\[
\int_{-\infty}^{\infty} t^{2n}\, e^{-st^2}\, dt = \frac{(2n-1)!!\;\sqrt{\pi}}{s^{n+1/2}\; 2^n} \qquad (n \in \mathbb{N})
\]
— it is a nice problem for homework to show that these equations are valid also for s ∈ C+ —
we get
\[
\Gamma(s+1) = s^s e^{-s}\,\sqrt{2\pi s}\left( 1 + \sum_{k=1}^{n} \frac{(2k+1)\, a_{2k+1}\, (2k-1)!!}{\sqrt{2}\; s^k\, 2^k} + r_n(s) \right),
\]

where
\[
|r_n(s)| \le \frac{C_n}{|s|^{n+1}} \int_{-\infty}^{\infty} t^{2n+2}\, e^{-t^2}\, dt,
\]
with a positive constant C_n. From this we can see that the assertions (1) and (2) are true.
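
The “homework” claim above, i.e. the validity of the two Gaussian relations for s ∈ C+, is easy to probe numerically. A small sketch (my own, assuming mpmath is available):

\begin{verbatim}
from mpmath import mp, mpc, quad, exp, sqrt, pi, fac2, inf

mp.dps = 30
s = mpc(3, 2)          # a complex s with positive real part
n = 2

lhs0 = quad(lambda t: exp(-s*t**2), [-inf, inf])
lhs1 = quad(lambda t: t**(2*n) * exp(-s*t**2), [-inf, inf])

print(lhs0 - sqrt(pi/s))                                   # ~ 0
print(lhs1 - fac2(2*n - 1)*sqrt(pi)/(s**(n + 0.5)*2**n))   # ~ 0
\end{verbatim}

Here the principal branches of the square root and of s^{n+1/2} are used, which is the natural continuation from positive real s.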

Concluding Remark

The advantage of the line of argument just described is that the case of complex s is automatically
included. On the other hand, we have the disadvantage of lacking numerically usable estimates
for the residual r_n(s) (for r_1(s) and r_2(s), obtained by other elementary considerations, see
[Michel 2002]). For such estimates, a general characterization of the polynomials Π_n(v, log(1+v)),
which seems to be difficult though not impossible, might be very useful.
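
To illustrate the first point (my own sketch, not from the paper; mpmath assumed), one can compare the expansion (1), truncated after three correction terms, with mpmath's gamma at a complex s of large modulus in C+:

\begin{verbatim}
from mpmath import mp, mpc, gamma, exp, sqrt, pi

mp.dps = 30
s = mpc(20, 10)                       # Re(s) > 0, |s| about 22

a = {1: sqrt(2)}                      # coefficients a_1, ..., a_7 from (3)
for j in range(2, 8):
    a[j] = 2*a[j-1]/((j + 1)*a[1]) - sum(a[i]*a[j+1-i] for i in range(2, j))/(2*a[1])

series = 1
dfact = 1
for k in range(1, 4):
    dfact *= 2*k + 1                  # 1*3*...*(2k+1)
    series += dfact * a[2*k + 1] / (2**(k + 0.5) * s**k)

approx = sqrt(2*pi*s) * s**s * exp(-s) * series
print(approx / gamma(s + 1))          # very close to 1
\end{verbatim}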

References
Copson, E.T. 1965. Asymptotic Expansions. Cambridge: University Press.
Fischer, H. 2000. Die verschiedenen Formen und Funktionen des zentralen Grenzwertsatzes in der Entwicklung
von der klassischen zur modernen Wahrscheinlichkeitsrechnung. Aachen: Shaker.
Hald, A. 1998. A History of Mathematical Statistics from 1750 to 1930. New York: Wiley.
Michel, R. 2002. On Stirling’s Formula. American Mathematical Monthly 109, 388–390.
Tweddle, I. 2003. James Stirling’s Methodus Differentialis: An Annotated Translation of Stirling’s Text. London:
Springer.
