Special Functions
Reading Problems
Outline
Background
Definitions
Theory
    Factorial function
    Gamma function
    Digamma function
    Incomplete Gamma function
    Beta function
    Incomplete Beta function
Assigned Problems
References
Background
Louis François Antoine Arbogast (1759-1803), a French mathematician, is generally credited
with being the first to introduce the concept of the factorial as a product of a fixed number
of terms in arithmetic progression. In an effort to generalize the factorial function to non-
integer values, the Gamma function was later presented in its traditional integral form by
Swiss mathematician Leonhard Euler (1707-1783). In fact, the integral form of the Gamma
function is referred to as the second Eulerian integral. Later, because of its great importance,
it was studied by other eminent mathematicians like Adrien-Marie Legendre (1752-1833),
Carl Friedrich Gauss (1777-1855), Christoph Gudermann (1798-1852), Joseph Liouville (1809-
1882), Karl Weierstrass (1815-1897), Charles Hermite (1822 - 1901), as well as many others.1
The first reported use of the gamma symbol for this function was by Legendre in 1811.2
The first Eulerian integral was introduced by Euler and is typically referred to by its more
common name, the Beta function. The use of the Beta symbol for this function was first
used in 1839 by Jacques P.M. Binet (1786 - 1856).
At the same time as Legendre and Gauss, Christian Kramp (1760-1826) worked on the
generalized factorial function as it applied to non-integers. His work on factorials was
independent of that of Stirling, although Stirling often receives credit for this effort. He did
achieve one "first" in that he was the first to use the notation n!, although he seems not to
be remembered today for this widely used mathematical notation.3
A complete historical perspective of the Gamma function is given in the work of Godefroy4
as well as other associated authors given in the references at the end of this chapter.
1. http://numbers.computation.free.fr/Constants/Miscellaneous/gammaFunction.html
2. Cajori, Vol. 2, p. 271
3. Éléments d'arithmétique universelle, 1808
4. M. Godefroy, La fonction Gamma; Théorie, Histoire, Bibliographie, Gauthier-Villars, Paris (1901)
Definitions
1. Factorial
2. Gamma
also known as: generalized factorial, Euler’s second integral
The factorial function can be extended to include all real valued arguments
excluding the negative integers as follows:
z! = ∫₀^∞ e^{-t} t^z dt        z ≠ −1, −2, −3, …

Γ(z) = ∫₀^∞ e^{-t} t^{z-1} dt = (z − 1)!        z ≠ −1, −2, −3, …
3. Digamma
also known as: psi function, logarithmic derivative of the gamma function
ψ(z) = d ln Γ(z)/dz = Γ′(z)/Γ(z)        z ≠ 0, −1, −2, −3, …
4. Incomplete Gamma
The gamma function can be written in terms of two components as follows:
γ(z, x) = ∫₀^x e^{-t} t^{z-1} dt        x > 0

Γ(z, x) = ∫_x^∞ e^{-t} t^{z-1} dt        x > 0
5. Beta
also known as: Euler’s first integral
B(y, z) = ∫₀^1 t^{y-1} (1 − t)^{z-1} dt = Γ(y) Γ(z) / Γ(y + z)
6. Incomplete Beta
B_x(y, z) = ∫₀^x t^{y-1} (1 − t)^{z-1} dt        0 ≤ x ≤ 1

I_x(y, z) = B_x(y, z) / B(y, z)
Theory
Factorial Function
The classical case of the integer form of the factorial function, n!, consists of the product of
n and all integers less than n, down to 1, as follows
n! = { n(n − 1)(n − 2) ⋯ 3 · 2 · 1        n = 1, 2, 3, …
       1                                  n = 0                (1.1)

where by definition, 0! = 1.
The integer form of the factorial function can be considered as a special case of two widely
used functions for computing factorials of non-integer arguments, namely the Pochham-
mer’s polynomial, given as
(z)_n = z(z + 1)(z + 2) ⋯ (z + n − 1) = Γ(z + n)/Γ(z) = (z + n − 1)!/(z − 1)!        n > 0
(z)_n = 1 = 0!                                                                        n = 0        (1.2)
While it is relatively easy to compute the factorial function for small integers, it is easy to see
how manually computing the factorial of larger numbers can be very tedious. Fortunately
given the recursive nature of the factorial function, it is very well suited to a computer and
can be easily programmed into a function or subroutine. The two most common methods
used to compute the integer form of the factorial are
direct computation: use iteration to produce the product of all of the counting numbers
between n and 1, as in Eq. 1.1
recursive computation: define a function in terms of itself, where values of the factorial
are stored and simply multiplied by the next integer value in the sequence
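To make the two approaches concrete, here is a short Python sketch; the function names are illustrative and not part of these notes:

    def factorial_iterative(n: int) -> int:
        """Direct computation: product of the counting numbers from 1 to n (Eq. 1.1)."""
        result = 1
        for k in range(2, n + 1):
            result *= k
        return result

    def factorial_recursive(n: int) -> int:
        """Recursive computation: n! = n * (n - 1)!, with 0! = 1."""
        return 1 if n == 0 else n * factorial_recursive(n - 1)

    assert factorial_iterative(5) == factorial_recursive(5) == 120

Both return the same value; the iterative form avoids the recursion depth limit for large n.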
Another form of the factorial function is the double factorial, defined as

n!! = { n(n − 2) ⋯ 5 · 3 · 1        n > 0 odd
        n(n − 2) ⋯ 6 · 4 · 2        n > 0 even        (1.4)
        1                           n = −1, 0

The first few values are

0!! = 1        5!! = 15
1!! = 1        6!! = 48
2!! = 2        7!! = 105
3!! = 3        8!! = 384
4!! = 8        9!! = 945
While there are several identities linking the factorial function to the double factorial, perhaps
the most convenient is

n! = n!! (n − 1)!!        (1.5)

Potential Applications

1. Combinations: the number of ways of choosing k items from a set of n is

C(n, k) = n! / [k!(n − k)!]        (1.6)
Gamma Function
The factorial function can be extended to include non-integer arguments through the use of
Euler’s second integral given as
z! = ∫₀^∞ e^{-t} t^z dt        (1.7)

Through a simple translation of the z-variable we can obtain the familiar gamma function
as follows

Γ(z) = ∫₀^∞ e^{-t} t^{z-1} dt = (z − 1)!        (1.8)
The gamma function is one of the most widely used special functions encountered in advanced
mathematics because it appears in almost every integral or series representation of other
advanced mathematical functions.
Let’s first establish a direct relationship between the gamma function given in Eq. 1.8 and
the integer form of the factorial function given in Eq. 1.1. Given the gamma function
Γ(z + 1) = z! use integration by parts as follows:
∫ u dv = uv − ∫ v du

u = t^z        ⇒        du = z t^{z-1} dt
dv = e^{-t} dt        ⇒        v = −e^{-t}

which leads to

Γ(z + 1) = ∫₀^∞ e^{-t} t^z dt = [−e^{-t} t^z]₀^∞ + z ∫₀^∞ e^{-t} t^{z-1} dt
Given the restriction of z > 0 for the integer form of the factorial function, it can be seen
that the first term in the above expression goes to zero since, when
t → 0        ⇒        t^z → 0
t → ∞        ⇒        e^{-t} t^z → 0

Therefore

Γ(z + 1) = z ∫₀^∞ e^{-t} t^{z-1} dt = z Γ(z),        z > 0        (1.9)

Γ(1) = 0! = ∫₀^∞ e^{-t} dt = [−e^{-t}]₀^∞ = 1

and in turn

Γ(2) = 1 · Γ(1) = 1 · 1 = 1!
Γ(3) = 2 · Γ(2) = 2 · 1 = 2!
Γ(4) = 3 · Γ(3) = 3 · 2 = 3!
⋮
Γ(n + 1) = n!        n = 1, 2, 3, …        (1.10)
The gamma function constitutes an essential extension of the idea of a factorial, since the
argument z is not restricted to positive integer values, but can vary continuously.
Γ(z) = Γ(z + 1)/z

From the above expression it is easy to see that when z = 0, the gamma function approaches
∞, or in other words Γ(0) is undefined.

Given the recursive nature of the gamma function, it is readily apparent that the gamma
function approaches a singularity at each negative integer.

However, for all other values of z, Γ(z) is defined and the use of the recurrence relationship
for factorials, i.e.

Γ(z + 1) = z Γ(z)

effectively removes the restriction that z be positive, which the integral definition of the
factorial requires. Therefore,

Γ(z) = Γ(z + 1)/z,        z ≠ 0, −1, −2, −3, …        (1.11)
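As a brief illustration, the following Python sketch (assuming only the standard-library math module; the function name is illustrative) applies the recurrence of Eq. 1.11 repeatedly to reach a negative non-integer argument:

    import math

    def gamma_by_recurrence(z: float) -> float:
        """Evaluate Γ(z) for non-integer z < 1 by repeated use of
        Γ(z) = Γ(z + 1)/z (Eq. 1.11) until the argument exceeds 1."""
        if z <= 0 and z == int(z):
            raise ValueError("Γ(z) is singular at z = 0, -1, -2, ...")
        factor = 1.0
        while z < 1.0:
            factor *= z        # accumulate the denominators z, z+1, ...
            z += 1.0
        return math.gamma(z) / factor

    print(gamma_by_recurrence(-1.5))   # ≈ 2.3633, i.e. 4·sqrt(pi)/3
    print(math.gamma(-1.5))            # cross-check with the library value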
Several other definitions of the Γ-function are available that can be attributed to the pio-
neering mathematicians in this area
Gauss

Γ(z) = lim_{n→∞} [ n! n^z / (z(z + 1)(z + 2) ⋯ (z + n)) ],        z ≠ 0, −1, −2, −3, …        (1.12)

Weierstrass

1/Γ(z) = z e^{γz} ∏_{n=1}^{∞} (1 + z/n) e^{-z/n}        (1.13)
where γ is the Euler-Mascheroni constant

γ = lim_{n→∞} ( Σ_{k=1}^{n} 1/k − ln n ) = 0.57721 56649 … ≈ (∛10 − 1)/2 = 0.57721 73 …        (1.14)

[Figure: plot of Γ(z) over the range −4 ≤ z ≤ 4.]
Other forms of the gamma function are obtained through a simple change of variables, as
follows
Γ(z) = 2 ∫₀^∞ y^{2z-1} e^{-y²} dy        by letting t = y²        (1.15)

Γ(z) = ∫₀^1 [ln(1/y)]^{z-1} dy        by letting e^{-t} = y        (1.16)
Relations Satisfied by the Γ-Function
Recurrence Formula

Γ(z + 1) = z Γ(z)        (1.17)

Duplication Formula

2^{2z-1} Γ(z) Γ(z + 1/2) = √π Γ(2z)        (1.18)

Reflection Formula

Γ(z) Γ(1 − z) = π / sin(πz)        (1.19)

Γ(1/2) = (−1/2)! = 2 ∫₀^∞ e^{-y²} dy = √π        (1.20)
Combining the results of Eq. 1.20 with the recurrence formula, we see
Γ(1/2) = √π

Γ(3/2) = (1/2) Γ(1/2) = √π / 2

Γ(5/2) = (3/2) Γ(3/2) = (3/2)(√π/2) = 3√π / 4

⋮

Γ(n + 1/2) = [1 · 3 · 5 ⋯ (2n − 1) / 2^n] √π        n = 1, 2, 3, …

For z > 0, Γ(z) has a single minimum within the range 1 ≤ z ≤ 2 at z = 1.46163 21450,
where Γ(z) = 0.88560 31944. Some selected 10 decimal place values of Γ(z) are found in
Table 1.1.

For other values of z (z ≠ 0, −1, −2, …), Γ(z) can be computed by means of the
recurrence formula.
Approximations
Asymptotic expansions of the factorial and gamma functions have been developed for
z ≫ 1. The expansion for the factorial function is

z! = Γ(z + 1) = √(2πz) z^z e^{-z} A(z)        (1.21)

where

A(z) = 1 + 1/(12z) + 1/(288z²) − 139/(51840z³) − 571/(2488320z⁴) + ⋯        (1.22)

The corresponding expansion for the logarithm of the gamma function is

ln Γ(z) = (z − 1/2) ln z − z + (1/2) ln(2π) + 1/(12z) − 1/(360z³) + 1/(1260z⁵) − 1/(1680z⁷) + ⋯        (1.23)

The absolute value of the error is less than the absolute value of the first term neglected.

For large values of z, i.e. as z → ∞, both expansions lead to Stirling's Formula, given as

z! ≈ √(2π) z^{z+1/2} e^{-z}        (1.24)

Even though the asymptotic expansions in Eqs. 1.21 and 1.23 were developed for very large
values of z, they give remarkably accurate values of z! and Γ(z) for small values of z. Table
1.2 shows the relative error between the asymptotic expansion and known accurate values
for arguments in the range 1 ≤ z ≤ 7, where the relative error is the difference between the
approximate and the accurate value divided by the accurate value.
Table 1.2: Comparison of Approximate value of z! by Eq. 1.21 and Γ(z) by Eq. 1.23 with
the Accurate values of Mathematica 5.0
The asymptotic expansion for Γ(z) converges very quickly to give accurate values for rela-
tively small values of z. The asymptotic expansion for z! converges less quickly and does
not yield 9 decimal place accuracy even when z = 7.
More accurate values of Γ(z) for small z can be obtained by means of the recurrence formula.
For example, if we want Γ(1+z) where 0 ≤ z ≤ 1, then by means of the recurrence formula
we can write
Γ(1 + z) = Γ(n + z) / [(1 + z)(2 + z)(3 + z) ⋯ (n − 1 + z)]        (1.25)

With n = 5 and z = 0.3,

Γ(1 + 0.3) = Γ(5.3) / [(1.3)(2.3)(3.3)(4.3)] = 0.89747 0699

This value can be compared with the 10 decimal place value given previously in Table 1.1.
We observe that the absolute error is approximately 3 × 10⁻⁹. Comparable accuracy can
be obtained by means of the above equation with n = 6 and 0 ≤ z ≤ 1.
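The same idea can be sketched in Python; the helper names below are illustrative and the series is truncated exactly as written in Eq. 1.23:

    import math

    def ln_gamma_asymptotic(z: float) -> float:
        """Asymptotic series of Eq. 1.23, accurate for large z."""
        return ((z - 0.5) * math.log(z) - z + 0.5 * math.log(2.0 * math.pi)
                + 1.0/(12.0*z) - 1.0/(360.0*z**3) + 1.0/(1260.0*z**5) - 1.0/(1680.0*z**7))

    def gamma_small_argument(z: float, n: int = 6) -> float:
        """Γ(1 + z) for 0 <= z <= 1 via Eq. 1.25: shift the argument up by n,
        use the asymptotic series there, then divide out (1+z)(2+z)...(n-1+z)."""
        num = math.exp(ln_gamma_asymptotic(n + z))
        den = 1.0
        for k in range(1, n):
            den *= (k + z)
        return num / den

    print(gamma_small_argument(0.3))   # ≈ 0.8974707, compare Γ(1.3)
    print(math.gamma(1.3))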
Polynomial Approximation of Γ(z + 1) within 0 ≤ z ≤ 1
Numerous polynomial approximations which are based upon the use of Chebyshev polyno-
mials and the minimization of the maximum absolute error have been developed for varying
degrees of accuracy. One such approximation developed for 0 ≤ z ≤ 1 due to Hastings8 is
Γ(z + 1) = z! = 1 + b₁z + b₂z² + ⋯ + b₈z⁸ + ε(z)        (1.26)

where the coefficients b₁, …, b₈ are tabulated by Hastings and

|ε(z)| ≤ 3 × 10⁻⁷
Series Expansion of 1/Γ(z) for |z| ≤ ∞
The function 1/Γ(z) is an entire function defined for all values of z. It can be expressed as
a series expansion according to the relationship
1/Γ(z) = Σ_{k=1}^{∞} C_k z^k,        |z| < ∞        (1.27)

where the coefficients C_k for 1 ≤ k ≤ 26, accurate to 16 decimal places, are tabulated in
Abramowitz and Stegun¹. For 10 decimal place accuracy one can write

1/Γ(z) = Σ_{k=1}^{19} C_k z^k        (1.28)
Potential Applications
1. Gamma Distribution: The probability density function can be defined based on the
Gamma function as follows:
f(x; α, β) = x^{α-1} e^{-x/β} / [Γ(α) β^α]
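A minimal Python sketch of this density (illustrative name, standard-library math only):

    import math

    def gamma_pdf(x: float, alpha: float, beta: float) -> float:
        """Gamma distribution density f(x; α, β) = x^(α-1) e^(-x/β) / (Γ(α) β^α), x > 0."""
        return x**(alpha - 1.0) * math.exp(-x / beta) / (math.gamma(alpha) * beta**alpha)

    # density of a Gamma(α = 2, β = 1.5) variate evaluated at x = 1
    print(gamma_pdf(1.0, 2.0, 1.5))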
Digamma Function
The digamma function is the regularized (normalized) form of the logarithmic derivative of
the gamma function and is sometimes referred to as the psi function.
ψ(z) = d ln Γ(z)/dz = Γ′(z)/Γ(z)        (1.29)
The digamma function is shown in Figure 1.2 for a range of arguments between −4 ≤ z ≤ 4.
[Figure 1.2: Plot of ψ(z) for −4 ≤ z ≤ 4.]
The ψ-function satisfies relationships which are obtained by taking the logarithmic derivative
of the recurrence, reflection and duplication formulas of the Γ-function. Thus
ψ(z + 1) = 1/z + ψ(z)        (1.30)

ψ(1 − z) − ψ(z) = π cot(πz)        (1.31)
These formulas may be used to obtain the following special values of the ψ-function:

ψ(1) = −γ        (1.32)

ψ(1/2) = −γ − 2 ln 2        (1.33)

where γ is the Euler-Mascheroni constant defined in Eq. (1.14). Using Eq. (1.30)

ψ(n + 1) = −γ + Σ_{k=1}^{n} 1/k        n = 1, 2, 3, …        (1.34)

ψ(n + 1/2) = −γ − 2 ln 2 + 2 Σ_{k=1}^{n} 1/(2k − 1),        n = 1, 2, 3, …        (1.36)
The ψ-function has simple representations in the form of definite integrals involving the
variable z as a parameter. Some of these are listed below.
ψ(z) = −γ + ∫₀^1 (1 − t)^{-1} (1 − t^{z-1}) dt,        z > 0        (1.37)

ψ(z) = −γ − π cot(πz) + ∫₀^1 (1 − t)^{-1} (1 − t^{-z}) dt,        z < 1        (1.38)

ψ(z) = ∫₀^∞ [ e^{-t}/t − e^{-zt}/(1 − e^{-t}) ] dt,        z > 0        (1.39)

ψ(z) = ∫₀^∞ [ e^{-t} − (1 + t)^{-z} ] dt/t,        z > 0
     = −γ + ∫₀^∞ [ (1 + t)^{-1} − (1 + t)^{-z} ] dt/t,        z > 0        (1.40)

ψ(z) = ln z + ∫₀^∞ [ 1/t − 1/(1 − e^{-t}) ] e^{-zt} dt,        z > 0
     = ln z − 1/(2z) − ∫₀^∞ [ 1/(1 − e^{-t}) − 1/t − 1/2 ] e^{-zt} dt,        z > 0        (1.41)
Series Representation of ψ(z)
ψ(z) = −γ − Σ_{k=0}^{∞} [ 1/(z + k) − 1/(1 + k) ]        z ≠ 0, −1, −2, −3, …        (1.42)

ψ(z) = −γ − 1/z + z Σ_{k=1}^{∞} 1/[k(z + k)]        z ≠ 0, −1, −2, −3, …        (1.43)

ψ(z) = ln z − Σ_{k=0}^{∞} [ 1/(z + k) − ln(1 + 1/(z + k)) ]        z ≠ 0, −1, −2, −3, …        (1.44)

ψ(z) = ln z − 1/(2z) − Σ_{n=1}^{∞} B_{2n}/(2n z^{2n})        z → ∞        (1.45)

where the B_{2n} are the Bernoulli numbers

B₀ = 1          B₆ = 1/42
B₂ = 1/6        B₈ = −1/30        (1.46)
B₄ = −1/30      B₁₀ = 5/66

so that

ψ(z) = ln z − 1/(2z) − 1/(12z²) + 1/(120z⁴) − 1/(252z⁶) + ⋯        z → ∞        (1.47)
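A small Python sketch combining the recurrence of Eq. 1.30 with the asymptotic series of Eq. 1.47; the shift threshold of 10 and the function name are illustrative choices:

    import math

    def digamma(z: float) -> float:
        """ψ(z) for z > 0: raise the argument with ψ(z) = ψ(z + 1) - 1/z (Eq. 1.30)
        until it is large, then apply the truncated series of Eq. 1.47."""
        if z <= 0:
            raise ValueError("sketch restricted to z > 0")
        shift = 0.0
        while z < 10.0:
            shift -= 1.0 / z       # ψ(z) = ψ(z+1) - 1/z
            z += 1.0
        series = (math.log(z) - 1.0/(2.0*z) - 1.0/(12.0*z**2)
                  + 1.0/(120.0*z**4) - 1.0/(252.0*z**6))
        return series + shift

    print(digamma(1.0))    # ≈ -0.5772156649 = -γ, as in Eq. 1.32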
The Incomplete Gamma Function γ(z, x), Γ(z, x)
We can generalize the Euler definition of the gamma function by defining the incomplete
gamma function γ(z, x) and its complement Γ(z, x) by the following variable limit integrals

γ(z, x) = ∫₀^x e^{-t} t^{z-1} dt        z > 0        (1.48)

and

Γ(z, x) = ∫_x^∞ e^{-t} t^{z-1} dt        z > 0        (1.49)
so that

γ(z, x) + Γ(z, x) = Γ(z)        (1.50)
Figure 1.3 shows plots of γ(z, x), Γ(z, x) and Γ(z) all regularized with respect to Γ(z).
We can clearly see that the addition of γ(z, x)/Γ(z) and Γ(z, x)/Γ(z) leads to a value of
unity or Γ(z)/Γ(z) for each value of z.
Some special values, integrals and series are listed below for convenience
γ(1 + n, x) = n! [ 1 − e^{-x} Σ_{k=0}^{n} x^k/k! ]        n = 0, 1, 2, …        (1.51)

Γ(1 + n, x) = n! e^{-x} Σ_{k=0}^{n} x^k/k!        n = 0, 1, 2, …        (1.52)

Γ(−n, x) = [(−1)^n/n!] [ Γ(0, x) − e^{-x} Σ_{k=0}^{n-1} (−1)^k k!/x^{k+1} ]        n = 1, 2, 3, …        (1.53)
[Figure 1.3: Plots of Γ(z, x)/Γ(z), γ(z, x)/Γ(z) and Γ(z)/Γ(z) versus x for z = 1, 2, 3, 4, illustrating that

γ(z, x)/Γ(z) + Γ(z, x)/Γ(z) = Γ(z)/Γ(z) ]
Integral Representations of the Incomplete Gamma Functions
γ(z, x) = x^z cosec(πz) ∫₀^π e^{x cos θ} cos(zθ + x sin θ) dθ        x ≠ 0, z > 0, z ≠ 1, 2, …        (1.54)

Γ(z, x) = [e^{-x} x^z / Γ(1 − z)] ∫₀^∞ e^{-t} t^{-z} / (x + t) dt        z < 1, x > 0        (1.55)

Γ(z, xy) = y^z e^{-xy} ∫₀^∞ e^{-ty} (t + x)^{z-1} dt        y > 0, x > 0, z > 1        (1.56)

γ(z, x) = Σ_{n=0}^{∞} (−1)^n x^{z+n} / [n! (z + n)]        (1.57)

Γ(z, x) = Γ(z) − Σ_{n=0}^{∞} (−1)^n x^{z+n} / [n! (z + n)]        (1.58)

Γ(z, x) = e^{-x} x^z Σ_{n=0}^{∞} L_n^z(x)/(n + 1)        x > 0        (1.59)

where L_n^z(x) denotes the generalized Laguerre polynomial.

Γ(z + n, x)/Γ(z + n) = Γ(z, x)/Γ(z) + e^{-x} Σ_{k=0}^{n-1} x^{z+k} / Γ(z + k + 1)        (1.62)

dγ(z, x)/dx = −dΓ(z, x)/dx = x^{z-1} e^{-x}        (1.63)
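As an illustration of the series in Eq. 1.57, a short Python sketch (illustrative function name, standard-library math only); the term count is an arbitrary cutoff adequate for moderate x:

    import math

    def lower_incomplete_gamma(z: float, x: float, terms: int = 60) -> float:
        """γ(z, x) from Eq. 1.57: Σ_{n>=0} (-1)^n x^(z+n) / (n! (z + n))."""
        total = 0.0
        for n in range(terms):
            total += (-1.0)**n * x**(z + n) / (math.factorial(n) * (z + n))
        return total

    # Check against the closed form γ(1, x) = 1 - e^{-x}
    x = 2.0
    print(lower_incomplete_gamma(1.0, x) / math.gamma(1.0))   # ≈ 0.864665
    print(1.0 - math.exp(-x))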
Asymptotic Expansion of Γ(z, x) for Large x
Γ(z, x) = x^{z-1} e^{-x} [ 1 + (z − 1)/x + (z − 1)(z − 2)/x² + ⋯ ]        x → ∞        (1.64)

Γ(z, x) may also be written as a continued fraction,

Γ(z, x) = e^{-x} x^z / ( x + (1 − z)/( 1 + 1/( x + (2 − z)/( 1 + 2/( x + (3 − z)/( 1 + ⋯ ))))))        (1.65)
Beta Function B(a, b)
Another definite integral which is related to the Γ-function is the Beta function B(a, b)
which is defined as
B(a, b) = ∫₀^1 t^{a-1} (1 − t)^{b-1} dt,        a > 0, b > 0        (1.72)
The relationship between the B-function and the Γ-function can be demonstrated easily. By
means of the new variable
u = t/(1 − t)

we obtain

B(a, b) = ∫₀^∞ u^{a-1} / (1 + u)^{a+b} du        a > 0, b > 0        (1.73)

Now consider the integral

∫₀^∞ e^{-pt} t^{z-1} dt = Γ(z)/p^z        (1.74)

which is obtained from the definition of the Γ-function with the change of variable s = pt.
Setting p = 1 + u and z = a + b, we get

1/(1 + u)^{a+b} = [1/Γ(a + b)] ∫₀^∞ e^{-(1+u)t} t^{a+b-1} dt        (1.75)
and substituting this result into the Beta function in Eq. 1.73 gives
B(a, b) = [1/Γ(a + b)] ∫₀^∞ e^{-t} t^{a+b-1} [ ∫₀^∞ e^{-ut} u^{a-1} du ] dt

        = [Γ(a)/Γ(a + b)] ∫₀^∞ e^{-t} t^{b-1} dt

        = Γ(a) · Γ(b) / Γ(a + b)        (1.76)
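A one-line Python sketch of Eq. 1.76; using lgamma to avoid overflow is an implementation choice, and the function name is illustrative:

    import math

    def beta(a: float, b: float) -> float:
        """B(a, b) = Γ(a)Γ(b)/Γ(a+b) computed through log-gamma values."""
        return math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))

    print(beta(2.5, 1.5))   # ≈ 0.19635, i.e. π/16 by Eq. 1.76 and Eq. 1.20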
[Figure: plot of B(y, z) for z = 0.5 over −4 ≤ y ≤ 4.]
All the properties of the Beta function can be derived from the relationships linking the
Γ-function and the Beta function.
Other forms of the beta function are obtained by changes of variables. Thus
B(a, b) = ∫₀^∞ u^{a-1} / (1 + u)^{a+b} du        by t = u/(1 + u)        (1.77)

B(a, b) = 2 ∫₀^{π/2} sin^{2a-1} θ cos^{2b-1} θ dθ        by t = sin² θ        (1.78)
Potential Applications
1. Beta Distribution: The probability density of the Beta distribution is the normalized
integrand of the Beta function. It can be used to estimate the average time of completing
selected tasks in time management problems.
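A short Python sketch of that density (illustrative names only; the normalization follows Eq. 1.72 and Eq. 1.76):

    import math

    def beta_pdf(x: float, a: float, b: float) -> float:
        """Beta distribution density: the integrand of Eq. 1.72 divided by B(a, b)."""
        B = math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))
        return x**(a - 1.0) * (1.0 - x)**(b - 1.0) / B

    # e.g. a task-duration estimate rescaled to the interval [0, 1]
    print(beta_pdf(0.4, 2.0, 3.0))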
Incomplete Beta Function B_x(a, b)

The incomplete beta function is defined as

B_x(a, b) = ∫₀^x t^{a-1} (1 − t)^{b-1} dt        0 ≤ x ≤ 1        (1.79)

and the regularized incomplete beta function as

I_x(a, b) = B_x(a, b) / B(a, b)        (1.80)

with I₁(a, b) = 1.
The incomplete beta function and I_x(a, b) satisfy the following relationships:
Symmetry

I_x(a, b) = 1 − I_{1−x}(b, a)

Recurrence Formulas
[Figure 1.5: Plot of the Incomplete Beta Function B_x(y, 0.5) versus y for x = 0.25, 0.75 and 1.0.]
Assigned Problems
Problem Set for Gamma and Beta Functions
1. Use the definition of the gamma function with a suitable change of variable to prove
that
i) ∫₀^∞ e^{-ax} x^n dx = Γ(n + 1)/a^{n+1}        with n > −1, a > 0

ii) ∫_a^∞ exp(2ax − x²) dx = (√π/2) exp(a²)
2. Prove that
∫₀^{π/2} sin^n θ dθ = ∫₀^{π/2} cos^n θ dθ = √π Γ([1 + n]/2) / [2 Γ([2 + n]/2)]
3. Show that
Γ(1/2 + x) Γ(1/2 − x) = π / cos(πx)

4. Evaluate Γ(−1/2) and Γ(−7/2).
5. Show that the area enclosed by the axes x = 0, y = 0 and the curve x4 + y 4 = 1 is
[Γ(1/4)]² / (8√π)

Use both the Dirichlet integral and a conventional integration procedure to substantiate
this result.
6. Express each of the following integrals in terms of the gamma and beta functions and
simplify when possible.
i) ∫₀^1 (1/x − 1)^{1/4} dx

ii) ∫_a^b (b − x)^{m-1} (x − a)^{n-1} dx,        with b > a, m > 0, n > 0

iii) ∫₀^∞ dt / [(1 + t)√t]
Note: Validate your results using various solution procedures where possible.
7. – 8. For the curve (x/a)^n + (y/b)^n = 1, show that the enclosed area A satisfies

A/(4ab) = [Γ(1/n)]² / [2n Γ(2/n)]

and, for n = 0.2, 0.4, 0.8, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 100.0, determine

a) the first quadrant area bounded by the curve and two axes
b) the centroid (x̄, ȳ) of this area
c) the volume generated when the area is revolved about the y-axis
d) the moment of inertia of this volume about its axis

Note: Validate your results using various solution procedures where possible.
9. Starting with

Γ(1/2) = ∫₀^∞ e^{-t} t^{-1/2} dt

show that

[Γ(1/2)]² = 4 ∫₀^∞ ∫₀^∞ exp[−(x² + y²)] dx dy

Further prove that the above double integral over the first quadrant when evaluated
using polar coordinates (r, θ) yields

Γ(1/2) = √π
References
1. Abramowitz, M. and Stegun, I.A., (Eds.), “Gamma (Factorial) Function” and
“Incomplete Gamma Function.” §6.1 and §6.5 in Handbook of Mathematical Functions
with Formulas, Graphs and Mathematical Tables, 9th printing, Dover, New York, 1972,
pp. 255-258 and pp 260-263.
2. Andrews, G.E., Askey, R. and Roy, R., Special Functions, Cambridge University
Press, Cambridge, 1999.
3. Artin, E. The Gamma Function, Holt, Rinehart, and Winston, New York, 1964.
4. Barnes, E.W., "The Theory of the Gamma Function," Messenger Math., (2), Vol.
29, 1900, pp. 64-128.
5. Borwein, J.M. and Zucker, I.J., “Elliptic Integral Evaluation of the Gamma Func-
tion at Rational Values and Small Denominator,” IMA J. Numerical Analysis, Vol.
12, 1992, pp. 519-526.
6. Davis, P.J., “Leonhard Euler’s Integral: Historical profile of the Gamma Function,”
Amer. Math. Monthly, Vol. 66, 1959, pp. 849-869.
7. Erdelyi, A., Magnus, W., Oberhettinger, F. and Tricomi, F.G., "The Gamma
Function,” Ch. 1 in Higher Transcendental Functions, Vol. 1, Krieger, New York,
1981, pp. 1-55.
8. Hastings, C., Approximations for Digital Computers, Princeton University Press,
Princeton, NJ, 1955.
9. Hochstadt, H., Special Functions of Mathematical Physics, Holt, Rinehart and Win-
ston, New York, 1961.
10. Koepf, W., "The Gamma Function," Ch. 1 in Hypergeometric Summation: An
Algorithmic Approach to Summation and Special Identities, Vieweg, Braunschweig,
Germany, 1998, pp. 4-10.
11. Krantz, S.C., “The Gamma and Beta Functions,” §13.1 in Handbook of Complex
Analysis, Birkhauser, Boston, MA, 1999, pp. 155-158.
12. Legendre, A.M., Memoires de la classe des sciences mathematiques et physiques de
l’Institut de France, Paris, 1809, p. 477, 485, 490.
13. Magnus, W. and Oberhettinger, F., Formulas and Theorems for the Special Func-
tions of Mathematical Physics, Chelsea, New York, 1949.
14. Saibagki, W., Theory and Applications of the Gamma Function, Iwanami Syoten,
Tokyo, Japan, 1952.
15. Spanier, J. and Oldham, K.B., “The Gamma Function Γ(x)” and “The Incomplete
Gamma γ(ν, x) and Related Functions,” Chs. 43 and 45 in An Atlas of Functions,
Hemisphere, Washington, DC, 1987, pp. 411-421 and pp. 435-443.
Error and Complementary Error Functions
Reading Problems
Outline
Background
Definitions
Theory
    Gaussian function
    Error function
    Complementary Error function
Assigned Problems
References
Background
The error function and the complementary error function are important special functions
which appear in the solutions of diffusion problems in heat, mass and momentum transfer,
probability theory, the theory of errors and various branches of mathematical physics. It
is interesting to note that there is a direct connection between the error function and the
Gaussian function and the normalized Gaussian function that we know as the “bell curve”.
The Gaussian function is given as
G(x) = A e^{-x²/(2σ²)}

The Gaussian function can be normalized so that the accumulated area under the curve is
unity, i.e. the integral from −∞ to +∞ equals 1. If we note that the definite integral

∫_{−∞}^{∞} e^{-ax²} dx = √(π/a)

then the normalized Gaussian function becomes

G(x) = [1/(√(2π) σ)] e^{-x²/(2σ²)}

If we let

t² = x²/(2σ²)        and        dt = dx/(√2 σ)

then

∫_{−x}^{x} G(x) dx = (1/√π) ∫_{−x/(√2σ)}^{x/(√2σ)} e^{-t²} dt

or, recognizing that the normalized Gaussian is symmetric about the y-axis, we can write

∫_{−x}^{x} G(x) dx = (2/√π) ∫₀^{x/(√2σ)} e^{-t²} dt = erf( x/(√2 σ) )

The complementary relationship is

erfc x = 1 − erf x = (2/√π) ∫_x^∞ e^{-t²} dt
Historical Perspective
The normal distribution was first introduced by de Moivre in an article in 1733 (reprinted in
the second edition of his Doctrine of Chances, 1738 ) in the context of approximating certain
binomial distributions for large n. His result was extended by Laplace in his book Analytical
Theory of Probabilities (1812 ), and is now called the Theorem of de Moivre-Laplace.
Laplace used the normal distribution in the analysis of errors of experiments. The important
method of least squares was introduced by Legendre in 1805. Gauss, who claimed to have
used the method since 1794, justified it in 1809 by assuming a normal distribution of the
errors.
The name bell curve goes back to Jouffret who used the term bell surface in 1872 for a
bivariate normal with independent components. The name normal distribution was coined
independently by Charles S. Peirce, Francis Galton and Wilhelm Lexis around 1875 [Stigler].
This terminology is unfortunate, since it reflects and encourages the fallacy that “everything
is Gaussian”.
Definitions
1. Gaussian Function
The normalized Gaussian curve represents the probability distribution with standard
deviation σ and mean µ of a random variable.

G(x) = [1/(√(2π) σ)] e^{-(x-µ)²/(2σ²)}

This is the curve we typically refer to as the "bell curve", where the mean is zero and
the standard deviation is unity.

2. Error Function

The error function equals twice the integral of a normalized Gaussian function between
0 and x/(σ√2).

y = erf x = (2/√π) ∫₀^x e^{-t²} dt        for x ≥ 0, y ∈ [0, 1]

where

t = x/(√2 σ)

3. Complementary Error Function

1 − y = erfc x = 1 − erf x = (2/√π) ∫_x^∞ e^{-t²} dt        for x ≥ 0, y ∈ [0, 1]

4. Inverse Error Function

inverf y exists for y in the range −1 < y < 1 and is an odd function of y with a
Maclaurin expansion of the form

inverf y = Σ_{n=1}^{∞} c_n y^{2n-1}
Theory
Gaussian Function
The Gaussian function or the Gaussian probability distribution is one of the most fundamen-
tal functions. The Gaussian probability distribution with mean µ and standard deviation σ
is a normalized Gaussian function of the form
G(x) = [1/(√(2π) σ)] e^{-(x-µ)²/(2σ²)}        (1.1)
where G(x), as shown in the plot below, gives the probability that a variate with a Gaussian
distribution takes on a value in the range [x, x + dx]. Statisticians commonly call this
distribution the normal distribution and, because of its shape, social scientists refer to it as
the “bell curve.” G(x) has been normalized so that the accumulated area under the curve
between −∞ ≤ x ≤ +∞ totals to unity. A cumulative distribution function, which totals
the area under the normalized distribution curve is available and can be plotted as shown
below.
[Plots of G(x) and the cumulative distribution D(x) for −4 ≤ x ≤ 4.]
When the mean is set to zero (µ = 0) and the standard deviation or variance is set to unity
(σ = 1), we get the familiar normal distribution
G(x) = [1/√(2π)] e^{-x²/2}        (1.2)

which is shown in the curve below. The normal distribution function N(x) gives the prob-
ability that a variate assumes a value in the interval [0, x]

N(x) = [1/√(2π)] ∫₀^x e^{-t²/2} dt        (1.3)
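In code, N(x) can be written directly in terms of the error function introduced below; a minimal Python sketch assuming the standard-library math.erf (the function name is illustrative):

    import math

    def normal_area(x: float) -> float:
        """N(x) of Eq. 1.3: area under the standard normal curve from 0 to x,
        which equals (1/2) erf(x / sqrt(2))."""
        return 0.5 * math.erf(x / math.sqrt(2.0))

    print(normal_area(1.0))   # ≈ 0.3413, the familiar one-sigma area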
[Plot of N(x) for −4 ≤ x ≤ 4.]
Gaussian distributions have many convenient properties, so random variates with unknown
distributions are often assumed to be Gaussian, especially in physics, astronomy and various
aspects of engineering. Many common attributes such as test scores, height, etc., follow
roughly Gaussian distributions, with few members at the high and low ends and many in
the middle.
Potential Applications
1. Statistical Averaging:
Error Function
The error function is obtained by integrating the normalized Gaussian distribution.
erf x = (2/√π) ∫₀^x e^{-t²} dt        (1.4)
where the coefficient in front of the integral normalizes erf (∞) = 1. A plot of erf x over
the range −3 ≤ x ≤ 3 is shown as follows.
[Plot of erf x for −3 ≤ x ≤ 3.]
The error function is defined for all values of x and is considered an odd function in x since
erf x = −erf (−x).
The error function can be conveniently expressed in terms of other functions and series as
follows:
erf x = (1/√π) γ(1/2, x²)        (1.5)

      = (2x/√π) M(1/2, 3/2, −x²) = (2x/√π) e^{-x²} M(1, 3/2, x²)        (1.6)

      = (2/√π) Σ_{n=0}^{∞} (−1)^n x^{2n+1} / [n!(2n + 1)]        (1.7)
where γ(·) is the incomplete gamma function, M(·) is the confluent hypergeometric function
of the first kind and the series solution is a Maclaurin series.
Computer Algebra Systems

Potential Applications

1. Transient Conduction: the one-dimensional diffusion equation

∂²T/∂x² = (1/α) ∂T/∂t

for a semi-infinite solid, initially at temperature Ti, whose surface temperature is suddenly
changed to Ts, has the solution

[T(x, t) − Ts] / (Ti − Ts) = erf( x/(2√(αt)) )
Complementary Error Function

The complementary error function is defined as

erfc x = 1 − erf x = (2/√π) ∫_x^∞ e^{-t²} dt        (1.8)
[Plot of erfc x for −3 ≤ x ≤ 3.]
and similar to the error function, the complementary error function can be written in terms
of the incomplete gamma functions as follows:
erfc x = (1/√π) Γ(1/2, x²)        (1.9)
As shown in Figure 2.5, the superposition of the error function and the complementary error
function when the argument is greater than zero produces a constant value of unity.
[Figure 2.5: Plot of erf x, erfc x and erf x + erfc x.]

Potential Applications

1. Diffusion: In a similar manner to the transient conduction problem described for the
error function, the complementary error function is used in the solution of the diffusion
equation when the boundary conditions are constant surface heat flux, where q_s = q₀,
and surface convection, where

−k ∂T/∂x |_{x=0} = h [T∞ − T(0, t)]

for which

[T(x, t) − Ti]/(T∞ − Ti) = erfc( x/(2√(αt)) ) − exp( hx/k + h²αt/k² ) erfc( x/(2√(αt)) + h√(αt)/k )
Relations and Selected Values of Error Functions
erf 0 = 0 erfc 0 = 1
erf ∞ = 1 erfc ∞ = 0
erf (−∞) = −1
∫₀^∞ erfc x dx = 1/√π

∫₀^∞ erfc² x dx = (2 − √2)/√π

Ten decimal place values for selected values of the argument appear in Table 2.1.
Approximations
Since
erf x = (2/√π) ∫₀^x e^{-t²} dt = (2/√π) ∫₀^x Σ_{n=0}^{∞} (−1)^n t^{2n}/n! dt        (1.10)

and the series is uniformly convergent, it may be integrated term by term. Therefore

erf x = (2/√π) Σ_{n=0}^{∞} (−1)^n x^{2n+1} / [(2n + 1) n!]        (1.11)

      = (2/√π) [ x/(1·0!) − x³/(3·1!) + x⁵/(5·2!) − x⁷/(7·3!) + x⁹/(9·4!) − ⋯ ]        (1.12)
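A direct Python sketch of the truncated series in Eq. 1.11 (illustrative name; 30 terms is an arbitrary cutoff, adequate for small arguments):

    import math

    def erf_series(x: float, terms: int = 30) -> float:
        """Maclaurin series of Eq. 1.11; the alternating terms cancel badly for x > ~2."""
        total = 0.0
        for n in range(terms):
            total += (-1.0)**n * x**(2*n + 1) / ((2*n + 1) * math.factorial(n))
        return 2.0 / math.sqrt(math.pi) * total

    print(erf_series(1.0), math.erf(1.0))   # both ≈ 0.8427007929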
Since

erfc x = (2/√π) ∫_x^∞ e^{-t²} dt = (2/√π) ∫_x^∞ (1/t) e^{-t²} t dt

we may integrate by parts with

u = 1/t        dv = e^{-t²} t dt
du = −t^{-2} dt        v = −(1/2) e^{-t²}

therefore

∫_x^∞ (1/t) e^{-t²} t dt = [uv]_x^∞ − ∫_x^∞ v du = e^{-x²}/(2x) − (1/2) ∫_x^∞ e^{-t²}/t² dt
Thus

erfc x = (2/√π) { e^{-x²}/(2x) − (1/2) ∫_x^∞ e^{-t²}/t² dt }        (1.13)

Continuing the integration by parts leads to the asymptotic expansion

√π x e^{x²} erfc x = 1 + Σ_{n=1}^{∞} (−1)^n [1 · 3 · 5 ⋯ (2n − 1)] / (2x²)^n        (1.15)
This series does not converge, since the ratio of the nth term to the (n−1)th does not remain
less than unity as n increases. However, if we take n terms of the series, the remainder,
[1 · 3 ⋯ (2n − 1) / 2^n] ∫_x^∞ e^{-t²}/t^{2n} dt

is bounded, since

∫_x^∞ e^{-t²}/t^{2n} dt < e^{-x²} ∫_x^∞ dt/t^{2n} = e^{-x²} / [(2n − 1) x^{2n-1}]
We can therefore stop at any term taking the sum of the terms up to this term as an
approximation of the function. The error will be less in absolute value than the last term
retained in the sum. Thus for large x, erfc x may be computed numerically from the
asymptotic expansion.
√π x e^{x²} erfc x = 1 + Σ_{n=1}^{∞} (−1)^n [1 · 3 · 5 ⋯ (2n − 1)] / (2x²)^n

                   = 1 − 1/(2x²) + 1·3/(2x²)² − 1·3·5/(2x²)³ + ⋯        (1.16)
Some other representations of the error functions are given below:
erf x = (2/√π) e^{-x²} Σ_{n=0}^{∞} x^{2n+1} / (3/2)_n        (1.17)

      = (2x/√π) M(1/2, 3/2, −x²)        (1.18)

      = (2x/√π) e^{-x²} M(1, 3/2, x²)        (1.19)

      = (1/√π) γ(1/2, x²)        (1.20)

erfc x = (1/√π) Γ(1/2, x²)        (1.21)
The symbols γ and Γ represent the incomplete gamma functions, and M denotes the con-
fluent hypergeometric function or Kummer’s function.
The derivatives of the error function follow directly from its integral definition:

d/dx erf x = (2/√π) e^{-x²}        (1.22)

d²/dx² erf x = d/dx [ (2/√π) e^{-x²} ] = −(2/√π)(2x) e^{-x²}        (1.23)

d³/dx³ erf x = −d/dx [ (2/√π)(2x) e^{-x²} ] = (2/√π)(4x² − 2) e^{-x²}        (1.24)

d^{n+1}/dx^{n+1} erf x = (−1)^n (2/√π) H_n(x) e^{-x²}        (n = 0, 1, 2, …)        (1.25)

where H_n(x) is the Hermite polynomial of degree n.
Repeated Integrals of the Complementary Error Function
The repeated integrals of the complementary error function are defined by

i^n erfc x = ∫_x^∞ i^{n-1} erfc t dt        n = 0, 1, 2, …        (1.26)

where

i^{-1} erfc x = (2/√π) e^{-x²}        (1.27)

i⁰ erfc x = erfc x        (1.28)

i erfc x = ∫_x^∞ erfc t dt = (1/√π) exp(−x²) − x erfc x        (1.29)

i² erfc x = ∫_x^∞ i erfc t dt
          = (1/4) [ (1 + 2x²) erfc x − (2/√π) x exp(−x²) ]
          = (1/4) [ erfc x − 2x · i erfc x ]        (1.30)

At the origin,

i^n erfc 0 = 1 / [ 2^n Γ(1 + n/2) ]        (n = −1, 0, 1, 2, 3, …)        (1.32)

The function y = i^n erfc x satisfies the differential equation

d²y/dx² + 2x dy/dx − 2ny = 0        (1.33)
The general solution of Eq. 1.33 is of the form

y = A i^n erfc x + B i^n erfc(−x)

Two useful differentiation formulas are

d/dx [ i^n erfc x ] = −i^{n-1} erfc x        (n = 0, 1, 2, 3, …)        (1.36)

d^n/dx^n [ e^{x²} erfc x ] = (−1)^n 2^n n! e^{x²} i^n erfc x        (n = 0, 1, 2, 3, …)        (1.37)
Some Integrals Associated with the Error Function
∫₀^{x²} e^{-t}/√t dt = √π erf x        (1.38)

∫₀^x e^{-y²t²} dt = [√π/(2y)] erf(xy)        (1.39)

∫₀^1 e^{-x²t²}/(1 + t²) dt = (π/4) e^{x²} [ 1 − {erf x}² ]        (1.40)

∫₀^∞ e^{-tx}/√(y + t) dt = √(π/x) e^{xy} erfc(√(xy))        x > 0        (1.41)

∫₀^∞ e^{-xt²}/(t² + y²) dt = [π/(2y)] e^{xy²} erfc(y√x)        x > 0, y > 0        (1.42)

∫₀^∞ e^{-tx}/[(t + y)√t] dt = (π/√y) e^{xy} erfc(√(xy))        x > 0, y > 0        (1.43)

∫₀^∞ e^{-tx} erf(√(yt)) dt = (√y/x) (x + y)^{-1/2}        (x + y) > 0        (1.44)

∫₀^∞ e^{-tx} erfc(√(y/t)) dt = (1/x) e^{-2√(xy)}        x > 0, y > 0        (1.45)

∫_{−a}^∞ erfc(t) dt = i erfc(a) + 2a = i erfc(−a)        (1.46)

∫_{−a}^{a} erf(t) dt = 0        (1.47)

∫_{−a}^{a} erfc(t) dt = 2a        (1.48)

∫_{−a}^∞ i erfc(t) dt = i² erfc(−a) = 1/2 + a² − i² erfc(a)        (1.49)

∫_a^∞ i^n erfc[(t + c)/b] dt = b i^{n+1} erfc[(a + c)/b]        (1.50)
Numerical Computation of Error Functions
The power series form of the error function is not recommended for numerical computations
when the argument approaches and exceeds the value x = 2 because the large alternat-
ing terms may cause cancellation, and because the general term is awkward to compute
recursively. The function can, however, be expressed as a confluent hypergeometric series.
erf x = (2/√π) x e^{-x²} M(1, 3/2, x²)        (1.51)
in which all terms are positive, and no cancellation can occur. If we write
erf x = b Σ_{n=0}^{∞} a_n        0 ≤ x ≤ 2        (1.52)

with

b = (2x/√π) e^{-x²}        a₀ = 1        a_n = a_{n-1} · x² / [(2n + 1)/2]        n ≥ 1

then erf x can be computed very accurately (e.g. with an absolute error less than 10⁻⁹).
Numerical experiments show that this series can be used to compute erf x up to x = 5
to the required accuracy; however, the time required for the computation of erf x is much
greater due to the large number of terms which have to be summed. For x ≥ 2 an alternate
method that is considerably faster is recommended which is based upon the asymptotic
expansion of the complementary error function.
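A Python sketch of the series of Eq. 1.52 with its term recurrence (illustrative name; the stopping tolerance is an arbitrary choice):

    import math

    def erf_positive_series(x: float, tol: float = 1e-12) -> float:
        """Eq. 1.52: erf x = b * Σ a_n with b = (2x/√π) e^{-x²}, a_0 = 1,
        a_n = a_{n-1} x² / ((2n+1)/2).  All terms are positive, so no cancellation."""
        b = 2.0 * x / math.sqrt(math.pi) * math.exp(-x * x)
        a = 1.0
        total = a
        n = 1
        while a > tol:
            a *= x * x / ((2*n + 1) / 2.0)
            total += a
            n += 1
        return b * total

    print(erf_positive_series(2.0), math.erf(2.0))   # ≈ 0.9953222650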
erfc x = (2/√π) ∫_x^∞ e^{-t²} dt

       = [e^{-x²}/(√π x)] ₂F₀(1/2, 1; −1/x²)        x → ∞        (1.53)
which cannot be used to obtain arbitrarily accurate values for any x. An expression that
converges for all x > 0 is obtained by converting the asymptotic expansion into a continued
fraction
√π e^{x²} erfc x = 1 / ( x + (1/2)/( x + 1/( x + (3/2)/( x + 2/( x + (5/2)/( x + ⋯ ))))))        x > 0        (1.54)

or equivalently

erfc x = (e^{-x²}/√π) [ 1/(x +) (1/2)/(x +) 1/(x +) (3/2)/(x +) 2/(x +) ⋯ ]        x > 0        (1.55)

It can be demonstrated experimentally that for x ≥ 2 the 16th approximant gives erfc x
with an absolute error less than 10⁻⁹. Thus we can write

erfc x = (e^{-x²}/√π) [ 1/(x +) (1/2)/(x +) 1/(x +) (3/2)/(x +) ⋯ 8/x ]        x ≥ 2        (1.56)
Using a fixed number of approximants has the advantage that the continued fraction can be
evaluated rapidly beginning with the last term and working upward to the first term.
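A Python sketch of this bottom-up evaluation of Eq. 1.56 (illustrative name; 16 approximants as suggested above, so it is intended for x ≥ 2):

    import math

    def erfc_continued_fraction(x: float, approximants: int = 16) -> float:
        """erfc x = (e^{-x²}/√π) · 1/(x + (1/2)/(x + 1/(x + (3/2)/(x + ...)))),
        evaluated from the last partial numerator upward."""
        value = 0.0
        # partial numerators are 1/2, 1, 3/2, 2, ..., taken here in reverse order
        for k in range(approximants, 0, -1):
            value = (k / 2.0) / (x + value)
        return math.exp(-x * x) / math.sqrt(math.pi) * (1.0 / (x + value))

    print(erfc_continued_fraction(2.0), math.erfc(2.0))   # ≈ 0.0046777350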
Rational Approximations of the Error Functions (0 ≤ x < ∞)
Numerous rational approximations of the error functions have been developed for digital
computers. The approximations are often based upon the use of economized Chebyshev
polynomials and they give values of erf x from 4 decimal place accuracy up to 24 decimal
place accuracy.
erf x = 1 − [ t(a₁ + t(a₂ + a₃t)) ] e^{-x²} + ε(x)        0 ≤ x        (1.57)

where

t = 1/(1 + px)

p = 0.47047
a₁ = 0.3480242
a₂ = −0.0958798
a₃ = 0.7478556

This approximation has a maximum absolute error of |ε(x)| ≤ 2.5 × 10⁻⁵.
Another, more accurate, rational approximation is

erf x = 1 − [ t(a₁ + t(a₂ + t(a₃ + t(a₄ + a₅t)))) ] e^{-x²} + ε(x)        (1.58)

where

t = 1/(1 + px)
and the coefficients are
p = 0.3275911
a1 = 0.254829592
a2 = −0.284496736
a3 = 1.421413741
a4 = −1.453152027
a5 = 1.061405429
This approximation has a maximum absolute error of |ε(x)| ≤ 1.5 × 10⁻⁷.
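The same approximation in Python (coefficients exactly as listed above; the function name is illustrative):

    import math

    P = 0.3275911
    A = (0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429)

    def erf_rational(x: float) -> float:
        """erf x for x >= 0 via Eq. 1.58; maximum absolute error about 1.5e-7."""
        t = 1.0 / (1.0 + P * x)
        poly = t * (A[0] + t * (A[1] + t * (A[2] + t * (A[3] + t * A[4]))))
        return 1.0 - poly * math.exp(-x * x)

    print(erf_rational(1.0), math.erf(1.0))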
Assigned Problems
Problem Set for Error and Complementary Error Functions        Due Date: February 12, 2004
1. Evaluate the following integrals to four decimal places using either power series, asymp-
totic series or polynomial approximations:
a) ∫₀^2 e^{-x²} dx                b) ∫_{0.001}^{0.002} e^{-x²} dx

c) (2/√π) ∫_{1.5}^∞ e^{-x²} dx        d) (2/√π) ∫_5^{10} e^{-x²} dx
∞
1.5 r
1 2 1
Z Z
−x2 −x2
e) e dx f) e dx
1 2 π 1 2
2. The value of erf 2 is 0.995 to three decimal places. Compare the number of terms
required in calculating this value using:
Compare the approximate errors in each case after two terms; after ten terms.
3. For the function ierfc(x) compute to four decimal places when x = 0, 0.2, 0.4, 0.8,
and 1.6.
4. Prove that
i) √π erf(x) = γ(1/2, x²)

ii) √π erfc(x) = Γ(1/2, x²)

where γ(1/2, x²) and Γ(1/2, x²) are the incomplete Gamma functions defined as:
γ(a, y) = ∫₀^y e^{-u} u^{a-1} du

and

Γ(a, y) = ∫_y^∞ e^{-u} u^{a-1} du
5. Show that θ(x, t) = θ₀ erfc( x/(2√(αt)) ) is the solution of the following diffusion
problem:

∂²θ/∂x² = (1/α) ∂θ/∂t        x ≥ 0, t > 0

with

θ(0, t) = θ₀, constant

θ(x, t) → 0 as x → ∞
6. Given θ(x, t) = θ₀ erf( x/(2√(αt)) ):

i) Obtain expressions for ∂θ/∂t and ∂θ/∂x at any x and all t > 0

ii) For the function

(√π/2) (x/θ₀) ∂θ/∂x

show that it has a maximum value when x/(2√(αt)) = 1/√2 and that the maximum value
is 1/√(2e).
7. Given the transient point source solution valid within an isotropic half space

T = [q/(2πkr)] erfc( r/(2√(αt)) ),        dA = r dr dθ

derive the expression for the transient temperature rise at the centroid of a circular
area (πa²) which is subjected to a uniform and constant heat flux q. Superposition of
point source solutions allows one to write

T₀ = ∫₀^a ∫₀^{2π} T dA
8. For a dimensionless time Fo < 0.2 the temperature distribution within an infinite
plate −L ≤ x ≤ L is given approximately by

[T(ζ, Fo) − Ts]/(T₀ − Ts) = 1 − [ erfc( (1 − ζ)/(2√Fo) ) + erfc( (1 + ζ)/(2√Fo) ) ]

The mean temperature is

T̄ = ∫₀^1 T(ζ, Fo) dζ

The initial and surface plate temperatures are denoted by T₀ and Ts, respectively.

The three-term half-space solution is

θ(ζ, Fo) = 1 − Σ_{n=1}^{3} (−1)^{n+1} [ erfc( ((2n − 1) − ζ)/(2√Fo) ) + erfc( ((2n − 1) + ζ)/(2√Fo) ) ]

and the three-term series solution is

θ(ζ, Fo) = Σ_{n=1}^{3} [ 2(−1)^{n+1}/δ_n ] e^{-δ_n² Fo} cos(δ_n ζ)
Table 1: Exact values of θ(0, F o) for the Infinite Plate
Fo θ(0, F o)
0.02 1.0000
0.06 0.9922
0.10 0.9493
0.40 0.4745
1.0 0.1080
2.0 0.0092
References
1. Abramowitz, M. and Stegun, I.A., Handbook of Mathematical Functions, Dover,
New York, 1965.
3. Hochstadt, H., Special Functions of Mathematical Physics, Holt, Rinehart and Win-
ston, New York, 1961.
4. Jahnke, E., Emde, F. and Lösch, F., Tables of Higher Functions, 6th Edition,
McGraw-Hill, New York, 1960.
7. Magnus, W., Oberhettinger, F. and Soni, R.P., Formulas and Theorems for the
Functions of Mathematical Physics, 3rd Edition, Springer-Verlag, New York, 1966.
9. Sneddon, I.N., Special Functions of Mathematical Physics and Chemistry, 2nd Edi-
tion, Oliver and Boyd, Edinburgh, 1961.
10. National Bureau of Standards, Applied Mathematics Series 41, Tables of Error
Function and Its Derivatives, 2nd Edition, Washington, DC, US Government Printing
Office, 1954.
11. Hastings, C., Approximations for Digital Computers, Princeton University Press,
Princeton, NJ, 1955.
Chebyshev Polynomials
Reading Problems
Chebyshev's differential equation is

(1 − x²) d²y/dx² − x dy/dx + n² y = 0        n = 0, 1, 2, 3, …

If we let x = cos t we obtain

d²y/dt² + n² y = 0

whose general solution is

y = A cos nt + B sin nt

or as

y = A cos(n cos⁻¹ x) + B sin(n cos⁻¹ x)        |x| ≤ 1

or equivalently

y = A Tn(x) + B Un(x)        |x| ≤ 1

where Tn(x) and Un(x) are defined as Chebyshev polynomials of the first and second kind
of degree n, respectively.

If we let x = cosh t we obtain

d²y/dt² − n² y = 0

whose general solution is

y = A cosh nt + B sinh nt

or as

y = A cosh(n cosh⁻¹ x) + B sinh(n cosh⁻¹ x)        |x| > 1

or equivalently

y = A Tn(x) + B Un(x)        |x| > 1

The polynomial Tn(x) may also be written in the complex form

Tn(x) = (1/2) [ (x + i√(1 − x²))^n + (x − i√(1 − x²))^n ]

The sum of the last two relationships gives the same result for Tn(x).
Chebyshev Polynomials of the First Kind of Degree n
The Chebyshev polynomials Tn(x) can be obtained by means of Rodrigues' formula

Tn(x) = [(−2)^n n! / (2n)!] √(1 − x²) d^n/dx^n (1 − x²)^{n-1/2}        n = 0, 1, 2, 3, …

The first twelve Chebyshev polynomials are listed in Table 1 and then as powers of x in
terms of Tn(x) in Table 2.
Table 1: Chebyshev Polynomials of the First Kind
T0(x) = 1
T1(x) = x
T2(x) = 2x² − 1
T3(x) = 4x³ − 3x
T4(x) = 8x⁴ − 8x² + 1
T5(x) = 16x⁵ − 20x³ + 5x
T6(x) = 32x⁶ − 48x⁴ + 18x² − 1
T7(x) = 64x⁷ − 112x⁵ + 56x³ − 7x
T8(x) = 128x⁸ − 256x⁶ + 160x⁴ − 32x² + 1
T9(x) = 256x⁹ − 576x⁷ + 432x⁵ − 120x³ + 9x
T10(x) = 512x¹⁰ − 1280x⁸ + 1120x⁶ − 400x⁴ + 50x² − 1
T11(x) = 1024x¹¹ − 2816x⁹ + 2816x⁷ − 1232x⁵ + 220x³ − 11x
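For numerical work the polynomials are usually evaluated with the standard three-term recurrence T_{n+1}(x) = 2x Tn(x) − T_{n−1}(x) rather than the explicit forms of Table 1; a short Python sketch (illustrative name):

    def chebyshev_T(n: int, x: float) -> float:
        """Evaluate T_n(x) with T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), T_0 = 1, T_1 = x."""
        if n == 0:
            return 1.0
        t_prev, t_curr = 1.0, x
        for _ in range(1, n):
            t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
        return t_curr

    # T_3(0.5) = 4(0.5)^3 - 3(0.5) = -1.0, matching Table 1
    print(chebyshev_T(3, 0.5))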
Table 2: Powers of x as functions of Tn (x)
1 = T0
x = T1
x² = (1/2)(T0 + T2)
x³ = (1/4)(3T1 + T3)
x⁴ = (1/8)(3T0 + 4T2 + T4)
x⁵ = (1/16)(10T1 + 5T3 + T5)
x⁶ = (1/32)(10T0 + 15T2 + 6T4 + T6)
x⁷ = (1/64)(35T1 + 21T3 + 7T5 + T7)
x⁸ = (1/128)(35T0 + 56T2 + 28T4 + 8T6 + T8)
x⁹ = (1/256)(126T1 + 84T3 + 36T5 + 9T7 + T9)
x¹⁰ = (1/512)(126T0 + 210T2 + 120T4 + 45T6 + 10T8 + T10)
x¹¹ = (1/1024)(462T1 + 330T3 + 165T5 + 55T7 + 11T9 + T11)
Generating Function for Tn (x)
The Chebyshev polynomials of the first kind can be developed by means of the generating
function
(1 − tx)/(1 − 2tx + t²) = Σ_{n=0}^{∞} Tn(x) t^n

Tn(−1) = (−1)^n
Orthogonality Property of Tn (x)
We can determine the orthogonality properties for the Chebyshev polynomials of the first
kind from our knowledge of the orthogonality of the cosine functions, namely,
∫₀^π cos(mθ) cos(nθ) dθ = { 0        (m ≠ n)
                            π/2      (m = n ≠ 0)
                            π        (m = n = 0)

Then substituting

Tn(x) = cos(nθ),        cos θ = x

we have

∫_{−1}^{1} Tm(x) Tn(x) / √(1 − x²) dx = { 0        (m ≠ n)
                                          π/2      (m = n ≠ 0)
                                          π        (m = n = 0)

We observe that the Chebyshev polynomials form an orthogonal set on the interval
−1 ≤ x ≤ 1 with the weighting function (1 − x²)^{-1/2}.
A function f(x) that is continuous and single-valued on −1 ≤ x ≤ 1 may be expanded in a
series of Chebyshev polynomials

f(x) = Σ_{n=0}^{∞} An Tn(x)

where the coefficients An are given by

A₀ = (1/π) ∫_{−1}^{1} f(x)/√(1 − x²) dx        n = 0

and

An = (2/π) ∫_{−1}^{1} f(x) Tn(x)/√(1 − x²) dx        n = 1, 2, 3, …
The following definite integrals are often useful in the series expansion of f (x):
∫_{−1}^{1} dx/√(1 − x²) = π                ∫_{−1}^{1} x³ dx/√(1 − x²) = 0

∫_{−1}^{1} x dx/√(1 − x²) = 0              ∫_{−1}^{1} x⁴ dx/√(1 − x²) = 3π/8

∫_{−1}^{1} x² dx/√(1 − x²) = π/2           ∫_{−1}^{1} x⁵ dx/√(1 − x²) = 0
The Chebyshev polynomials Tm(x) are also orthogonal over a discrete set of points. Consider
the N + 1 points xi corresponding to the equally spaced angles

θi = 0, π/N, 2π/N, …, (N − 1)π/N, π

where

xi = cos θi

We have
(1/2) Tm(−1) Tn(−1) + Σ_{i=2}^{N-1} Tm(xi) Tn(xi) + (1/2) Tm(1) Tn(1) = { 0        (m ≠ n)
                                                                         N/2      (m = n ≠ 0)
                                                                         N        (m = n = 0)
The Tm(x) are also orthogonal over the following N points ti, equally spaced in θ,

θi = π/(2N), 3π/(2N), 5π/(2N), …, (2N − 1)π/(2N)

and

ti = cos θi

for which

Σ_{i=1}^{N} Tm(ti) Tn(ti) = { 0        (m ≠ n)
                              N/2      (m = n ≠ 0)
                              N        (m = n = 0)

The set of points ti are clearly the midpoints in θ of the first case. The unequal spacing of
the points xi (and ti) compensates for the weight factor

W(x) = (1 − x²)^{-1/2}
Additional Identities of Chebyshev Polynomials
The Chebyshev polynomials are both orthogonal polynomials and the trigonometric cos nx
functions in disguise, therefore they satisfy a large number of useful relationships.
The differentiation and integration properties are very important in analytical and numerical
work. We begin with

Tn(x) = cos nθ,        x = cos θ

and

dTn(x)/dx = n sin nθ / sin θ

so that

T′_{n+1}(x)/(n + 1) − T′_{n−1}(x)/(n − 1) = [sin(n + 1)θ − sin(n − 1)θ] / sin θ

or

T′_{n+1}(x)/(n + 1) − T′_{n−1}(x)/(n − 1) = 2 cos nθ sin θ / sin θ = 2 Tn(x)        n ≥ 2

Therefore

T′₁(x) = T0
T′₀(x) = 0
We have the formulas for the differentiation of Chebyshev polynomials, therefore these for-
mulas can be used to develop integration for the Chebyshev polynomials:
∫ Tn(x) dx = (1/2) [ T_{n+1}(x)/(n + 1) − T_{n−1}(x)/(n − 1) ] + C        n ≥ 2

∫ T1(x) dx = (1/4) T2(x) + C

∫ T0(x) dx = T1(x) + C
The shifted Chebyshev polynomials, defined for the interval 0 ≤ x ≤ 1 by Tn*(x) = Tn(2x − 1), are

T0* = 1
T1* = 2x − 1
T2* = 8x² − 8x + 1
1 = T0∗
1
x = (T0∗ + T1∗ )
2
1
x2 = (3T0∗ + 4T1∗ + T2∗ )
8
1
x3 = (10T0∗ + 15T1∗ + 6T2∗ + T3∗ )
32
1
x4 = (35T0∗ + 56T1∗ + 28T2∗ + 8T3∗ + T4∗ )
128
The recurrence relationship for the shifted polynomials is

T*_{n+1}(x) = (4x − 2) T*_n(x) − T*_{n−1}(x),        T*₀(x) = 1

or

x T*_n(x) = (1/4) T*_{n+1}(x) + (1/2) T*_n(x) + (1/4) T*_{n−1}(x)

where, for the polynomials Tn(x),

x Tn(x) = (1/2) [ T_{n+1}(x) + T_{n−1}(x) ]        n = 1, 2, 3, …

x T0(x) = T1(x)
To illustrate the method, consider x4
x⁴ = x²(x T1) = (x²/2)[T2 + T0] = (x/4)[T1 + T3 + 2T1]

   = (1/4)[3x T1 + x T3] = (1/8)[3T0 + 3T2 + T4 + T2]

   = (1/8) T4 + (1/2) T2 + (3/8) T0
Given a truncated power series

f(x) = Σ_{n=0}^{N} a_n x^n + E_N(x)        |x| ≤ 1
where |En (x)| does not exceed an allowed limit, it is possible to reduce the degree of the
polynomial by a process called economization of power series. The procedure is to convert
the polynomial to a linear combination of Chebyshev polynomials:
Σ_{n=0}^{N} a_n x^n = Σ_{n=0}^{N} b_n Tn(x)        n = 0, 1, 2, …
It may be possible to drop some of the last terms without permitting the error to exceed
the prescribed limit. Since |Tn (x)| ≤ 1, the number of terms which can be omitted is
determined by the magnitudes of the coefficients b_n.
The Chebyshev polynomials are useful in numerical work for the interval −1 ≤ x ≤ 1
because
3. The maxima and minima are spread reasonably uniformly over the interval
−1 ≤ x ≤ 1
5. They are easy to compute and to convert to and from a power series form.
The following table gives the Chebyshev polynomial approximation of several power series.
Table 3: Power Series and its Chebyshev Approximation
1. f(x) = a₀
   f(x) = a₀T₀

2. f(x) = a₀ + a₁x
   f(x) = a₀T₀ + a₁T₁

3. f(x) = a₀ + a₁x + a₂x²
   f(x) = (a₀ + a₂/2)T₀ + a₁T₁ + (a₂/2)T₂

4. f(x) = a₀ + a₁x + a₂x² + a₃x³
   f(x) = (a₀ + a₂/2)T₀ + (a₁ + 3a₃/4)T₁ + (a₂/2)T₂ + (a₃/4)T₃

5. f(x) = a₀ + a₁x + a₂x² + a₃x³ + a₄x⁴
   f(x) = (a₀ + a₂/2 + 3a₄/8)T₀ + (a₁ + 3a₃/4)T₁ + (a₂/2 + a₄/2)T₂ + (a₃/4)T₃ + (a₄/8)T₄

6. f(x) = a₀ + a₁x + a₂x² + a₃x³ + a₄x⁴ + a₅x⁵
   f(x) = (a₀ + a₂/2 + 3a₄/8)T₀ + (a₁ + 3a₃/4 + 5a₅/8)T₁ + (a₂/2 + a₄/2)T₂ + (a₃/4 + 5a₅/16)T₃ + (a₄/8)T₄ + (a₅/16)T₅
Table 4: Formulas for Economization of Power Series
x = T1
x² = (1/2)(1 + T2)
x³ = (1/4)(3x + T3)
x⁴ = (1/8)(8x² − 1 + T4)
x⁵ = (1/16)(20x³ − 5x + T5)
x⁶ = (1/32)(48x⁴ − 18x² + 1 + T6)
x⁷ = (1/64)(112x⁵ − 56x³ + 7x + T7)
x⁸ = (1/128)(256x⁶ − 160x⁴ + 32x² − 1 + T8)
x⁹ = (1/256)(576x⁷ − 432x⁵ + 120x³ − 9x + T9)
x¹⁰ = (1/512)(1280x⁸ − 1120x⁶ + 400x⁴ − 50x² + 1 + T10)
x¹¹ = (1/1024)(2816x⁹ − 2816x⁷ + 1232x⁵ − 220x³ + 11x + T11)
For easy reference the formulas for economization of power series in terms of Chebyshev are
given in Table 4.
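A short Python sketch of the economization process, assuming NumPy's polynomial.chebyshev helpers are available (poly2cheb performs the same power-to-Chebyshev conversion tabulated in Table 4); names and the example series are illustrative:

    from numpy.polynomial import chebyshev as C

    # Degree-5 Maclaurin polynomial of e^x on [-1, 1]
    power_coeffs = [1, 1, 1/2, 1/6, 1/24, 1/120]
    cheb_coeffs = C.poly2cheb(power_coeffs)
    print(cheb_coeffs)        # trailing coefficients are small because |T_n(x)| <= 1

    # Dropping the last two Chebyshev terms adds at most the sum of their magnitudes
    economized = cheb_coeffs[:-2]
    bound = abs(cheb_coeffs[-1]) + abs(cheb_coeffs[-2])
    print("added error bound:", bound)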
Assigned Problems
1. Obtain the first three Chebyshev polynomials T0(x), T1(x) and T2(x) by means of
Rodrigues' formula.
3. By means of the recurrence formula obtain Chebyshev polynomials T2 (x) and T3 (x)
given T0 (x) and T1 (x).
Show that

Tn(x) = (1/2) [ (x + i√(1 − x²))^n + (x − i√(1 − x²))^n ]

where i = √−1.

Determine the coefficients An in the expansion

x² = Σ_{n=0}^{3} An Tn(x)
Dirichlet Integral
Contents
Theorem
Proof 1
Proof 2
Proof 3
Proof 4
Proof 5
Source of Name
Theorem
∫₀^∞ (sin x / x) dx = π/2
Proof 1
By Fubini's Theorem:
∫₀^∞ ( ∫₀^∞ e^{-xy} sin x dy ) dx = ∫₀^∞ ( ∫₀^∞ e^{-xy} sin x dx ) dy

Then:

∫₀^∞ e^{-xy} sin x dy = [ −e^{-xy} (sin x)/x ]_{y=0}^{y→∞} = (sin x)/x        (Primitive of e^{ax})

and:

∫₀^∞ e^{-xy} sin x dx = [ −e^{-xy} (y sin x + cos x)/(y² + 1) ]_{x=0}^{x→∞} = 1/(y² + 1)        (Primitive of e^{ax} sin bx)

Hence:

∫₀^∞ (sin x / x) dx = ∫₀^∞ dy/(y² + 1)
                    = [arctan y]₀^∞        (Primitive of 1/(x² + a²))
                    = π/2        (as lim_{y→∞} arctan y = π/2)
Proof 2
∫₀^∞ (sin x / x) dx is convergent as an improper integral.

Indeed, for all n ∈ N:

∫₀^{2πn} (sin x / x) dx = Σ_{k=0}^{2n−1} ∫_{πk}^{π(k+1)} (sin x / x) dx
                        = Σ_{k=0}^{2n−1} (−1)^k ∫₀^π sin x / (x + πk) dx
                        = Σ_{k=0}^{2n−1} [(−1)^k / (πk)] ∫₀^π sin x / (1 + x/(πk)) dx

But

| ∫₀^π sin x / (1 + x/(πk)) dx − 2 | ≤ ∫₀^π sin x | 1/(1 + x/(πk)) − 1 | dx ≤ (1/(kπ)) ∫₀^π x sin x dx

so that

∫₀^π sin x / (1 + x/(kπ)) dx → 2 as k → ∞

Hence,

∫₀^{2πn} (sin x / x) dx = Σ_{k=0}^{n−1} [ (1/(2πk)) ∫₀^π sin x / (1 + x/(2πk)) dx − (1/(π(2k + 1))) ∫₀^π sin x / (1 + x/(π(2k + 1))) dx ]

and each bracketed difference behaves like (2/π) · 1/[2k(2k + 1)],
which is the term of an absolutely convergent series.
Now take α > 0. Since

| e^{-αx} sin x / x | ≤ e^{-αx}

and, from Laplace Transform of Real Power,

∫₀^∞ e^{-αx} dx = 1/α

the Comparison Test for Improper Integrals shows that

∫₀^∞ e^{-αx} (sin x / x) dx converges whenever α > 0.
So, we can define a real function I : (0 . . ∞) → R by:

I(α) = ∫₀^∞ e^{-αx} (sin x / x) dx

for each α ∈ (0 . . ∞).
Then:

I′(α) = d/dα ∫₀^∞ e^{-αx} (sin x / x) dx
      = ∫₀^∞ ∂/∂α [ e^{-αx} (sin x / x) ] dx        (Leibniz's Integral Rule)
      = −∫₀^∞ e^{-αx} sin x dx        (Derivative of Exponential Function)
      = [ e^{-αx} (α sin x + cos x)/(α² + 1) ]₀^∞        (Primitive of e^{ax} sin bx)
      = −1/(α² + 1)

so that

I(α) = −arctan α + K

for some K ∈ R.
We also have:

|I(α)| = | ∫₀^∞ e^{-αx} (sin x / x) dx |
       ≤ ∫₀^∞ | e^{-αx} (sin x / x) | dx        (Triangle Inequality for Definite Integrals)
       ≤ 1/α

so:

lim_{α→∞} |I(α)| = 0

That is:

lim_{α→∞} I(α) = 0

Therefore I(α) = π/2 − arctan α, since arctan α → π/2 as α → ∞.

Note that we have:

I(α) → π/2 as α → 0
We now need to show that

I(α) → ∫₀^∞ (sin x / x) dx as α → 0

Observe for this purpose that

I(α) = ∫₀^∞ (sin 2x / x) e^{-2αx} dx        (change of variable x ⇝ 2x)
     = 2 ∫₀^∞ (sin x / x) e^{-2αx} cos x dx

where all the improper integrals appearing here are convergent by the Comparison Test for
Improper Integrals, as used above for defining I(α).

Therefore,

I(α) = α ∫₀^∞ (sin² x / x) e^{-2αx} dx + ∫₀^∞ (sin x / x)² e^{-2αx} dx
We also have

∫₀^∞ (sin x / x) dx = 2 ∫₀^∞ (sin x / x) cos x dx
                    = 2 [ sin² x / x ]₀^∞ − 2 ∫₀^∞ (sin x / x) cos x dx + 2 ∫₀^∞ (sin x / x)² dx
                    = −∫₀^∞ (sin x / x) dx + 2 ∫₀^∞ (sin x / x)² dx

where the improper integrals on the right hand side are convergent because the first one
identifies with ∫₀^∞ (sin x / x) dx, and the second one because sin² x / x² is integrable
on (0 . . ∞), since it has a finite limit at 0 and is smaller than 1/x² at ∞.

Hence ∫₀^∞ (sin x / x) dx = ∫₀^∞ (sin x / x)² dx, where the second integral is absolutely convergent.
Moreover

α ∫₀^∞ (sin² x / x) e^{-2αx} dx = α ∫₀^∞ (sin²(x/α) / x) e^{-2x} dx
                                ≤ α ( ∫₀^α (x/α²) dx + ∫_α^1 (1/x) dx + ∫_1^∞ e^{-2x} dx )
                                = α ( 1/2 − ln α + 1/(2e²) )
                                → 0 as α → 0

whenever α ≤ 1, and

∫₀^∞ (sin x / x)² e^{-2αx} dx → ∫₀^∞ (sin x / x)² dx as α → 0
Indeed,

∫₀^∞ (sin x / x)² (1 − e^{-2αx}) dx = ∫₀^{1/√α} (sin x / x)² (1 − e^{-2αx}) dx + ∫_{1/√α}^∞ (sin x / x)² (1 − e^{-2αx}) dx

                                    ≤ (1 − e^{-2√α}) ∫₀^∞ (sin x / x)² dx + ∫_{1/√α}^∞ (sin x / x)² dx

                                    → 0 as α → 0

because sin² x / x² is integrable on (0 . . ∞).
Finally, we have

I(α) = α ∫₀^∞ (sin² x / x) e^{-2αx} dx + ∫₀^∞ (sin x / x)² e^{-2αx} dx
     → ∫₀^∞ (sin x / x)² dx as α → 0
     = ∫₀^∞ (sin x / x) dx

as well as

I(α) = π/2 − arctan α → π/2 as α → 0

Hence ∫₀^∞ (sin x / x) dx = π/2.

■
Proof 3

Let C_R be the arc of the circle of radius R centred at the origin connecting R and −R anticlockwise.

We also have:

| ∫_{C_R} (e^{ix}/x) dx | ≤ π max_{0 ≤ θ ≤ π} | 1/(R e^{iθ}) |        (Jordan's Lemma)
                          = π/R
                          → 0 as R → ∞

Therefore:

lim_{R→∞} ∫_{C_R} dx/x = lim_{R→∞} ∫_{−R}^{R} (e^{ix} − 1)/x dx = ∫_{−∞}^{∞} (e^{ix} − 1)/x dx

and

∫_{C_R} dx/x = ∫₀^π ( iR e^{iθ} / (R e^{iθ}) ) dθ        (Definition of Complex Contour Integral)
             = i ∫₀^π dθ
             = πi

So:

∫_{−∞}^{∞} (e^{ix} − 1)/x dx = πi

Hence:

∫₀^∞ (sin x / x) dx = π/2

■
Proof 4
From Integral to Infinity of Function over Argument:

∫₀^∞ (f(x)/x) dx = ∫₀^{→∞} F(u) du

for a real function f and its Laplace transform L{f} = F, provided they exist.

Here f(x) = sin x, for which:

L{f(x)} = 1/(s² + 1)

Hence:

∫₀^∞ (sin x / x) dx = ∫₀^{→∞} du/(u² + 1)
                    = [arctan u]₀^∞        (Primitive of 1/(x² + a²))
                    = π/2

■
Proof 5
Let M ∈ R_{>0} and define

I_M(α) = ∫₀^M (sin x / x) e^{-αx} dx

Then:

|I_M(α)| ≤ ∫₀^M | (sin x / x) e^{-αx} | dx        (Absolute Value of Definite Integral)
         ≤ ∫₀^M e^{-αx} dx        (Sine Inequality: |sin x| ≤ |x|)
         = [ e^{-αx}/(−α) ]₀^M        (Primitive of e^{ax})
         = 1/α − e^{-αM}/α

(1):     ≤ 1/α

Also:

I_M′(α) = ∫₀^M ∂/∂α [ (sin x / x) e^{-αx} ] dx        (Definite Integral of Partial Derivative)
        = ∫₀^M −sin x e^{-αx} dx
        = [ e^{-αx} (α sin x + cos x)/(α² + 1) ]₀^M        (Primitive of e^{ax} sin bx)

(2):    = −1/(α² + 1) + [e^{-αM}/(α² + 1)] cos M + [α e^{-αM}/(α² + 1)] sin M

Thus:

I_M(A) − I_M(0) = ∫₀^A I_M′(α) dα        (Fundamental Theorem of Calculus)
                = −∫₀^A dα/(α² + 1) + cos M ∫₀^A [e^{-αM}/(α² + 1)] dα + sin M ∫₀^A [α e^{-αM}/(α² + 1)] dα        (by (2))

Thus:

| I_M(A) − I_M(0) + ∫₀^A dα/(α² + 1) | ≤ | cos M ∫₀^A [e^{-αM}/(α² + 1)] dα + sin M ∫₀^A [α e^{-αM}/(α² + 1)] dα |
                                        ≤ | cos M ∫₀^A [e^{-αM}/(α² + 1)] dα | + | sin M ∫₀^A [α e^{-αM}/(α² + 1)] dα |        (Triangle Inequality for Real Numbers)
                                        ≤ ∫₀^A [e^{-αM}/(α² + 1)] dα + ∫₀^A [α e^{-αM}/(α² + 1)] dα
                                        ≤ 2 ∫₀^∞ e^{-αM} dα

(3):                                    ≤ 2/M        (similarly to (1))

Therefore:

| I_M(0) − π/2 | = | I_M(A) − ( I_M(A) − I_M(0) + ∫₀^A dα/(α² + 1) ) + ∫₀^A dα/(α² + 1) − π/2 |
                 ≤ |I_M(A)| + | I_M(A) − I_M(0) + ∫₀^A dα/(α² + 1) | + | ∫₀^A dα/(α² + 1) − π/2 |        (Triangle Inequality for Real Numbers)
                 ≤ 1/A + 2/M + | ∫₀^A dα/(α² + 1) − π/2 |        (by (1) and (3))
                 → 2/M as A → +∞        (by Definite Integral to Infinity of 1/(x² + a²))

As:

I_M(0) = ∫₀^M (sin x / x) dx

we have shown:

∀ M ∈ R_{>0} :  | ∫₀^M (sin x / x) dx − π/2 | ≤ 2/M

In particular:

∫₀^∞ (sin x / x) dx = lim_{M→+∞} ∫₀^M (sin x / x) dx = π/2

■
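A quick numerical illustration of the bound just derived, |∫₀^M (sin x/x) dx − π/2| ≤ 2/M, using composite Simpson's rule in Python (illustrative names; the step count is an arbitrary choice):

    import math

    def dirichlet_partial(M: float, steps: int = 200000) -> float:
        """∫_0^M (sin x / x) dx by composite Simpson's rule (steps must be even)."""
        h = M / steps
        def f(x: float) -> float:
            return 1.0 if x == 0.0 else math.sin(x) / x
        s = f(0.0) + f(M)
        for i in range(1, steps):
            s += (4.0 if i % 2 else 2.0) * f(i * h)
        return s * h / 3.0

    for M in (10.0, 100.0, 1000.0):
        err = abs(dirichlet_partial(M) - math.pi / 2)
        print(M, err, "<=", 2.0 / M)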
Source of Name
This entry was named for Johann Peter Gustav Lejeune Dirichlet.