Laplace's Approximation of The Gamma Function
A Direct Approach
With the term “Laplace’s Approximation” we denote the following approximation of
$$\Gamma(s+1) = \int_{0}^{\infty} e^{-x} x^{s}\, dx$$
as the modulus of the complex number $s \in \mathbb{C}^{+}$ (the set of complex numbers with a positive real part) tends to ∞:
$$\Gamma(s+1) = \sqrt{2\pi s}\; s^{s} e^{-s}\left(1 + \sum_{k=1}^{n} \frac{1\cdot 3\cdot 5\cdots (2k+1)\, a_{2k+1}}{2^{k+1/2}\, s^{k}} + r_{n}(s)\right). \tag{1}$$
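As a quick numerical illustration (a sketch only, not part of the original argument; it assumes the third-party mpmath library for the Gamma function of complex arguments), the relative error of the leading term $\sqrt{2\pi s}\,s^{s}e^{-s}$ indeed shrinks as $|s|$ grows, both for real $s$ and along a ray in $\mathbb{C}^{+}$:

    # Sketch: compare Gamma(s+1) with the leading term of (1) for growing |s|.
    # Assumes the third-party library "mpmath" (arbitrary precision, complex gamma).
    import mpmath as mp

    mp.mp.dps = 30  # working precision in decimal digits

    def leading_term(s):
        # sqrt(2*pi*s) * s^s * e^(-s), i.e. expansion (1) without the sum and r_n(s)
        return mp.sqrt(2 * mp.pi * s) * mp.power(s, s) * mp.exp(-s)

    for s in [mp.mpf(5), mp.mpf(50), mp.mpf(500),
              mp.mpc(5, 5), mp.mpc(50, 50), mp.mpc(500, 500)]:
        rel_err = abs(mp.gamma(s + 1) - leading_term(s)) / abs(mp.gamma(s + 1))
        print(s, rel_err)  # the relative error decays roughly like 1/(12*|s|)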
Around 1730, Stirling and, at the same time, his friend de Moivre found similar asymptotic formulas for log s! by using special series expansions. The now most common version,
$$\log s! = \left(s+\frac{1}{2}\right)\log s - s + \frac{1}{2}\log\frac{\pi}{2} + \log 2 + \sum_{k\ge 1}\frac{B_{2k}}{2k(2k-1)\,s^{2k-1}},$$
which very often, in a slightly erroneous way, is named after Stirling, was found and deduced by Euler in 1755. Euler’s reasoning was essentially based on “his” summation formula.
Laplace, on the other hand, found a direct approach to an asymptotic series expansion for s!. In modern terms, his method was a special case of the now so-called “method of steepest descent,” applied to the function Γ(s + 1); Laplace originated this method for real functions in 1774. Thereafter, Laplace deduced the first series terms of (1) at several places in his work. In the 18th century, however, no mathematician was troubled by the “modern” problem of estimating the residuals of the expansions (such as $r_n(s)$ in (1)), because everybody was convinced of a very fast “convergence” of the respective series, owing to the daily numerical practice of that time. Moreover, the law for the general coefficients (3) apparently was not found by Laplace, and the equivalence of Laplace’s and Euler’s expansions was not clear. As late as 1844, Cauchy considered the general term in Laplace’s series to be unknown. Dirichlet, however, in his course on integral calculus of 1838, had already given a method for deducing (3), as unpublished lecture notes reveal. The problem of a closer discussion of the residual was solved within the framework of Euler’s method around 1850, and in the case of Laplace’s method only by modern expositions (for the latter see [Copson 1965, 53–57] with an application of Watson’s lemma). For details on the history of asymptotic approximations to the Gamma function see [Tweddle 2003]
(on Stirling and de Moivre), [Fischer 2000, 17–23; 73–76] (on Euler, Laplace, Dirichlet, and
Cauchy), and [Hald 1998, 203–228] (on Laplace).
In the classroom, the typical difficulty with both methods, Euler’s as well as Laplace’s, is that the deduction of the respective asymptotic series, at least according to the majority of textbooks, is based on a more or less complicated “theory” which has to be treated first. In the following, I present an elementary approach to Laplace’s method which can be traced back directly to its historical roots.
Laplace’s Method
The Laplacian idea for deducing an asymptotic expansion of Γ(s + 1) (for a positive real s) was
based on a special representation of the integrand in
$$\Gamma(s+1) = \int_{0}^{\infty} e^{-x} x^{s}\, dx = \int_{-s}^{\infty} e^{-(z+s)} (z+s)^{s}\, dz. \tag{4}$$
Laplace only considered the case of a positive real s. The integrand attains its maximal value $M = e^{-s}s^{s}$ for $x = s$, or $z = 0$. Laplace set $e^{-s}e^{-z}(z+s)^{s}$ equal to $M e^{-t(z)^{2}}$. From this representation he developed the idea of a transformation of the variable of integration from z to t by
$$t := \begin{cases} -\sqrt{z - s\log(1+z/s)} & \text{for } -s < z \le 0,\\[0.5ex] +\sqrt{z - s\log(1+z/s)} & \text{for } 0 \le z. \end{cases} \tag{5}$$
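Indeed, with $x = z + s$ and $M = e^{-s}s^{s}$ one checks at once (a step worth making explicit) that
$$e^{-x}x^{s} = e^{-(z+s)}(z+s)^{s} = e^{-s}s^{s}\, e^{-z}\left(1+\frac{z}{s}\right)^{s} = M\, e^{-\left(z - s\log(1+z/s)\right)} = M\, e^{-t(z)^{2}},$$
and $z - s\log(1+z/s) \ge 0$ for all $z > -s$, so that $t$ in (5) is real.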
Laplace expanded t(z) as a series of powers in z, and then, vice versa, z as a series of powers
in t. Thus he obtained
$$\Gamma(s+1) = M\int_{-\infty}^{\infty} e^{-t^{2}}\,\sqrt{2s}\left(1 + \frac{4t}{3\sqrt{2s}} + \frac{t^{2}}{6s} + \cdots\right) dt = s^{s+1/2}\, e^{-s}\,\sqrt{2\pi}\left(1 + \frac{1}{12s} + \frac{1}{288s^{2}} + \cdots\right).$$
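The last step uses only the elementary Gaussian integrals
$$\int_{-\infty}^{\infty} e^{-t^{2}}\, dt = \sqrt{\pi}, \qquad \int_{-\infty}^{\infty} t\, e^{-t^{2}}\, dt = 0, \qquad \int_{-\infty}^{\infty} t^{2} e^{-t^{2}}\, dt = \frac{\sqrt{\pi}}{2};$$
the three displayed terms of $dz/dt$ thus contribute $M\sqrt{2\pi s}\,\bigl(1 + 0 + \tfrac{1}{12s}\bigr) = s^{s+1/2}e^{-s}\sqrt{2\pi}\,\bigl(1 + \tfrac{1}{12s}\bigr)$, while the term $\tfrac{1}{288s^{2}}$ requires the next orders of the expansion of $dz/dt$.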
As already mentioned above, Laplace did not care whether his operations were valid
in a modern rigorous sense. Thus, from our point of view, some modifications and further
considerations are necessary for a proof of (1) and (2).
By means of the further substitutions $v := z/s$ and $x := \mathrm{sign}(v)\sqrt{v - \log(1+v)}$ (so that $t^{2} = s x^{2}$), and writing $v = g(x)$ for the corresponding inverse relation, the variable of integration is changed once more, such that
$$\Gamma(s+1) = s^{s+1} e^{-s} \int_{-\infty}^{\infty} e^{-s x^{2}}\, v'(x)\, dx.$$
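In detail, combining (4), (5) and the substitution just described (with $z = sv$, $dz = s\,v'(x)\,dx$ and $t^{2} = s x^{2}$):
$$\Gamma(s+1) = M\int_{-s}^{\infty} e^{-t^{2}}\, dz = e^{-s}s^{s}\int_{-\infty}^{\infty} e^{-s x^{2}}\, s\, v'(x)\, dx,$$
which is the stated representation.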
The use of the differential equation (9) for finding the law of Laplace’s series was essentially Dirichlet’s idea. By iteration of (9) one gets:
$$g(x)\,g^{(n+1)}(x) = \begin{cases} -(g'(x))^{2} + 2\bigl[x g'(x) + g(x)\bigr] + 2 & \text{for } n = 1,\\[0.5ex] -\sum_{j=1}^{n} \binom{n}{j}\, g^{(j)}(x)\, g^{(n+1-j)}(x) + 2\bigl[x g^{(n)}(x) + n\, g^{(n-1)}(x)\bigr] & \text{for } n \ge 2. \end{cases} \tag{10}$$
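For orientation: differentiating the defining relation $x^{2} = g(x) - \log(1+g(x))$ once gives $g(x)g'(x) = 2x\bigl(1+g(x)\bigr)$; differentiating this identity $n$ further times by Leibniz’s rule and isolating $g(x)g^{(n+1)}(x)$ yields exactly the right-hand sides in (10). For $n = 1$, for instance,
$$(g'(x))^{2} + g(x)g''(x) = \frac{d}{dx}\bigl[2x + 2xg(x)\bigr] = 2 + 2g(x) + 2xg'(x),$$
which is the first line of (10).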
For $x \neq 0$, by (10), and by substitution of $x = f(g(x)) = \mathrm{sign}(g(x))\sqrt{g(x) - \log(1+g(x))}$, we get, with the abbreviation $v = g(x)$:
$$v' = \frac{2\,\mathrm{sign}(v)\,\sqrt{v - \log(1+v)}\,(1+v)}{v},$$
$$v'' = \frac{2\,(1+v)\bigl(v^{2} - 2v + 2\log(1+v)\bigr)}{v^{3}},$$
$$v''' = \mathrm{sign}(v)\,\frac{4\,\sqrt{v - \log(1+v)}\,(1+v)\bigl(v^{2} + 6v - 4v\log(1+v) - 6\log(1+v)\bigr)}{v^{5}}.$$
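These closed forms can be spot-checked numerically; the following sketch (assuming the third-party sympy library, and restricted to the branch $v > 0$) differentiates the inverse relation via the chain rule $d/dx = (1/f'(v))\, d/dv$ and compares with the formulas above at $v = 7/10$:

    # Sketch: numerical spot-check of the closed forms for v', v'', v''' (branch v > 0).
    # Assumes the third-party library "sympy".
    import sympy as sp

    v = sp.symbols('v', positive=True)
    f = sp.sqrt(v - sp.log(1 + v))  # x = f(v) on the branch v > 0

    def ddx(expr):
        # chain rule: d/dx = (dv/dx) d/dv = (1/f'(v)) d/dv
        return sp.diff(expr, v) / sp.diff(f, v)

    v1 = ddx(v)    # v'   expressed as a function of v
    v2 = ddx(v1)   # v''
    v3 = ddx(v2)   # v'''

    L = sp.log(1 + v)
    claims = [
        2 * sp.sqrt(v - L) * (1 + v) / v,
        2 * (1 + v) * (v**2 - 2*v + 2*L) / v**3,
        4 * sp.sqrt(v - L) * (1 + v) * (v**2 + 6*v - 4*v*L - 6*L) / v**5,
    ]
    for got, want in zip([v1, v2, v3], claims):
        # each difference should evaluate to (numerically) zero
        print(sp.N((got - want).subs(v, sp.Rational(7, 10))))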
Generally, based on (10), we are able to prove the following lemma by induction:

Let $x \neq 0$, or $v \neq 0$, respectively. Then, for $n \ge 1$,
$$v^{(n)} = \begin{cases} \mathrm{sign}(v)\,\sqrt{v - \log(1+v)}\,(1+v)\,\dfrac{\Pi_{n}(v,\log(1+v))}{v^{2n-1}} & \text{for } n \text{ odd},\\[1.5ex] (1+v)\,\dfrac{\Pi_{n}(v,\log(1+v))}{v^{2n-1}} & \text{for } n \text{ even}, \end{cases} \tag{11}$$
where $\Pi_{n}$ denotes a polynomial in the two arguments $v$ and $\log(1+v)$.
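In particular, the derivatives computed above are of the form (11) with
$$\Pi_{1} = 2, \qquad \Pi_{2} = 2\bigl(v^{2} - 2v + 2\log(1+v)\bigr), \qquad \Pi_{3} = 4\bigl(v^{2} + 6v - 4v\log(1+v) - 6\log(1+v)\bigr).$$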
For the proof, we first have to note that the assertion is true for n ≤ 3. Now, let us assume that the assertion is true up to the n-th derivative, and that n is an even number. Then, by (10), for $v \neq 0$:
$$vv^{(n+1)} = -\sum_{j=1}^{n/2} \binom{n+1}{j} v^{(j)} v^{(n+1-j)} + 2\Bigl[v^{(n)}\,\mathrm{sign}(v)\sqrt{v - \log(1+v)} + n\, v^{(n-1)}\Bigr] =$$
$$= -\,\mathrm{sign}(v)\sqrt{v - \log(1+v)}\,(1+v)\left((n+1)(1+v)\frac{\Pi_{1}\Pi_{n}}{v\, v^{2n-1}} + \frac{(n+1)n}{2}(1+v)\frac{\Pi_{2}\Pi_{n-1}}{v^{3}\, v^{2n-3}} + \cdots + \binom{n+1}{n/2}(1+v)\frac{\Pi_{n/2}\Pi_{n/2+1}}{v^{n-1}\, v^{n+1}}\right)$$
$$+\; 2\,\mathrm{sign}(v)\sqrt{v - \log(1+v)}\,(1+v)\left(\frac{\Pi_{n}\, v}{v^{2n}} + n\,\frac{\Pi_{n-1}\, v^{3}}{v^{2n}}\right).$$
We have:
$$\deg\bigl((1+v)\Pi_{1}\Pi_{n}\bigr) < 2n-2, \qquad \deg\bigl((1+v)\Pi_{2}\Pi_{n-1}\bigr) < 2n-2,$$
$$\deg\bigl((1+v)\Pi_{j}\Pi_{n-j+1}\bigr) < 2n-3 \quad (\tfrac{n}{2} \ge j \ge 3).$$
Moreover:
$$\deg(\Pi_{n}\, v) < 2n-2, \quad\text{and}\quad \deg(\Pi_{n-1}\, v^{3}) < 2n-2.$$
Altogether, $vv^{(n+1)}$ is equal to $\mathrm{sign}(v)\sqrt{v - \log(1+v)}\,(1+v)$ times a fraction whose numerator is a polynomial in $v$ and $\log(1+v)$ with degree less than $2n-2$, and whose denominator is $v^{2n}$.
Now, let us assume that n is an odd number. For abbreviation we introduce
$$p_{j}(v) := \begin{cases} 1 & \text{for } j \text{ even},\\ v - \log(1+v) & \text{for } j \text{ odd} \end{cases} \qquad (j \in \mathbb{N}).$$
Then, for $v \neq 0$:
$$vv^{(n+1)} = -\left(\sum_{j=1}^{(n-1)/2} \binom{n+1}{j} v^{(j)} v^{(n+1-j)} + \binom{n}{\frac{n+1}{2}} v^{\left(\frac{n+1}{2}\right)} v^{\left(\frac{n+1}{2}\right)}\right) + 2\Bigl[v^{(n)}\,\mathrm{sign}(v)\sqrt{v - \log(1+v)} + n\, v^{(n-1)}\Bigr] =$$
$$= -(1+v)^{2}\left((n+1)(v - \log(1+v))\frac{\Pi_{1}\Pi_{n}}{v\, v^{2n-1}} + \frac{(n+1)n}{2}\,\frac{\Pi_{2}\Pi_{n-1}}{v^{3}\, v^{2n-3}} + \cdots + \binom{n+1}{\frac{n-1}{2}}\, p_{\frac{n-1}{2}}(v)\,\frac{\Pi_{\frac{n-1}{2}}\Pi_{\frac{n+3}{2}}}{v^{n-2}\, v^{n+2}} + \binom{n}{\frac{n+1}{2}}\, p_{\frac{n+1}{2}}(v)\,\frac{\Pi_{\frac{n+1}{2}}\Pi_{\frac{n+1}{2}}}{v^{n}\, v^{n}}\right)$$
$$+\; 2(1+v)\left((v - \log(1+v))\frac{\Pi_{n}\, v}{v^{2n}} + n\,\frac{\Pi_{n-1}\, v^{3}}{v^{2n}}\right).$$
We have:
$$\deg\bigl((1+v)(v - \log(1+v))\Pi_{1}\Pi_{n}\bigr) < 2n-1, \qquad \deg\bigl((1+v)\Pi_{2}\Pi_{n-1}\bigr) < 2n-2,$$
$$\deg\bigl((1+v)(v - \log(1+v))\Pi_{j}\Pi_{n-j+1}\bigr) < 2n-2 \quad (\tfrac{n-1}{2} \ge j \ge 3 \text{ odd}),$$
$$\deg\bigl((1+v)\Pi_{j}\Pi_{n-j+1}\bigr) < 2n-3 \quad (\tfrac{n-1}{2} \ge j \ge 3 \text{ even}).$$
Moreover:
$$\deg\bigl((v - \log(1+v))\Pi_{n}\, v\bigr) < 2n-1, \quad\text{and}\quad \deg(\Pi_{n-1}\, v^{3}) < 2n-2.$$
Altogether, $vv^{(n+1)}$ is equal to $(1+v)$ times a fraction whose numerator is a polynomial in $v$ and $\log(1+v)$ with degree less than $2n-1$, and whose denominator is $v^{2n}$.
From the two cases n even and n odd we can see that the assertion is true for n + 1.
Because of (11), we are able to conclude that, for n ≥ 3,
$$\lim_{x\to-\infty} g^{(n)}(x) = \lim_{v\to-1} v^{(n)} = 0 \qquad\text{and}\qquad \lim_{x\to\infty} g^{(n)}(x) = \lim_{v\to\infty} v^{(n)} = 0.$$
Because $g^{(n)}(x)$ is continuous for $x \in \mathbb{R}$, and because, additionally, $g''(x)$ is bounded, the result is:
For n ≥ 2 the derivative $g^{(n)}(x)$ is a bounded function.
We are now ready for a closer discussion of the coefficients in the expansion of $g'(x)$. By Taylor’s theorem, with Lagrange’s form of the remainder (some $0 < \vartheta < 1$), we may set
$$\frac{g^{(m+1)}(\vartheta x)}{m!} = K_{m+1}(x),$$
where $K_{m+1}(x)$ is a bounded function, and altogether
$$g'(x) = \sum_{j=1}^{m} j\, a_{j}\, x^{j-1} + K_{m+1}(x)\, x^{m} \qquad (m \ge 1). \tag{15}$$
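For illustration, the first coefficients can be obtained by inverting the series $x^{2} = v - \log(1+v) = \frac{v^{2}}{2} - \frac{v^{3}}{3} + \frac{v^{4}}{4} - \cdots$ with the ansatz $v = g(x) = \sum_{j\ge 1} a_{j}x^{j}$; comparing coefficients gives
$$a_{1} = \sqrt{2}, \qquad a_{2} = \frac{2}{3}, \qquad a_{3} = \frac{\sqrt{2}}{18},$$
and the summand with $k = 1$ in (1) becomes $\frac{1\cdot 3\, a_{3}}{2^{3/2}\, s} = \frac{1}{12s}$, in accordance with Laplace’s result above.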
Now we obtain, for $n \in \mathbb{N}_{0}$, on account of (15):
$$\Gamma(s+1) = s^{s+1} e^{-s} \int_{-\infty}^{\infty} e^{-s x^{2}}\, v'(x)\, dx = s^{s+1} e^{-s} \int_{-\infty}^{\infty} \left(\sum_{j=1}^{2n+2} j\, a_{j}\, x^{j-1} + K_{2n+3}(x)\, x^{2n+2}\right) e^{-s x^{2}}\, dx.$$
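The integrals of the single powers of $x$ are elementary (for $\mathrm{Re}\, s > 0$): the odd powers, i.e. the terms with $j$ even, contribute nothing, while for $j = 2k+1$
$$\int_{-\infty}^{\infty} (2k+1)\, a_{2k+1}\, x^{2k} e^{-s x^{2}}\, dx = (2k+1)\, a_{2k+1}\, \frac{1\cdot 3\cdots (2k-1)}{(2s)^{k}} \sqrt{\frac{\pi}{s}} = \sqrt{\frac{\pi}{s}}\; \frac{1\cdot 3\cdot 5\cdots (2k+1)\, a_{2k+1}}{2^{k}\, s^{k}}.$$
Dividing by the leading term $a_{1}\sqrt{\pi/s} = \sqrt{2\pi/s}$, which together with the prefactor $s^{s+1}e^{-s}$ yields $\sqrt{2\pi s}\, s^{s} e^{-s}$, one obtains exactly the summands of (1); the contribution of the remainder term $K_{2n+3}(x)\, x^{2n+2}$ is collected in $r_{n}(s)$,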
where the remainder satisfies
$$|r_{n}(s)| \le \frac{C_{n}}{|s|^{\,n+1}} \int_{-\infty}^{\infty} t^{2n+2} e^{-t^{2}}\, dt$$
with a positive constant $C_{n}$. From this we can see that the assertions (1) and (2) are true.
Concluding Remark
The advantage of the line of argument just described is that the case of complex s is automatically included. On the other hand, we have the disadvantage of lacking numerically usable estimates for the residual $r_n(s)$ (for $r_1(s)$ and $r_2(s)$, obtained by other elementary considerations, see [Michel 2002]). For such estimates, a general characterization of the polynomials $\Pi_n(v, \log(1+v))$, which seems to be difficult though not impossible, might be very useful.
References
Copson, E.T. 1965. Asymptotic Expansions. Cambridge: University Press.
Fischer, H. 2000. Die verschiedenen Formen und Funktionen des zentralen Grenzwertsatzes in der Entwicklung von der klassischen zur modernen Wahrscheinlichkeitsrechnung. Aachen: Shaker.
Hald, A. 1998. A History of Mathematical Statistics from 1750 to 1930. New York: Wiley.
Michel, R. 2002. On Stirling’s Formula. American Mathematical Monthly 109, 388–390.
Tweddle, I. 2003. James Stirling’s Methodus Differentialis: An Annotated Translation of Stirling’s Text. London:
Springer.