
Lectures on Random Matrix Theory

Govind Menon and Thomas Trogdon

March 12, 2018


Contents

1 Overview 7
1.1 What is a random matrix? . . . . . . . . . . . . . . . . . . . . . . 7
1.2 The main limit theorems . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.1 The semicircle law . . . . . . . . . . . . . . . . . . . . . . 9
1.2.2 Fluctuations in the bulk: the sine process . . . . . . . . . 11
1.2.3 Fluctuations at the edge: the Airy point process . . . . . 12
1.2.4 Fredholm determinants, Painlevé equations, and integrable
systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.2.5 Universality . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.3 Connections to other areas of mathematics . . . . . . . . . . . . 15
1.3.1 Number theory . . . . . . . . . . . . . . . . . . . . . . . . 15
1.3.2 Combinatorics and enumerative geometry . . . . . . . . . 17
1.3.3 Random permutations . . . . . . . . . . . . . . . . . . . . 18
1.3.4 Spectral and inverse spectral theory of operators . . . . . 19

2 Integration on spaces of matrices 21


2.1 Integration on O(n) and Symm(n) . . . . . . . . . . . . . . . . . 22
2.2 Weyl’s formula on Symm(n) . . . . . . . . . . . . . . . . . . . . . 25
2.3 Diagonalization as a change of coordinates . . . . . . . . . . . . . 28
2.4 Independence and Invariance implies Gaussian . . . . . . . . . . 29
2.5 Integration on Her(n) and U(n) . . . . . . . . . . . . . . . . . . . 31
2.6 Integration on Quart(n) and USp(n) . . . . . . . . . . . . . . . . 33

3 Jacobi matrices and tridiagonal ensembles 37


3.1 Jacobi ensembles . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.2 Householder tridiagonalization on Symm(n) . . . . . . . . . . . . 38
3.3 Tridiagonalization on Her(n) and Quart(n) . . . . . . . . . . . . . 41
3.4 Inverse spectral theory for Jacobi matrices . . . . . . . . . . . . . 42
3.5 Jacobians for tridiagonal ensembles . . . . . . . . . . . . . . . . . 48
3.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

4 Determinantal formulas 53
4.1 Probabilities as determinants . . . . . . . . . . . . . . . . . . . . 53
4.2 The m-point correlation function . . . . . . . . . . . . . . . . . . 55


4.3 Determinants as generating functions . . . . . . . . . . . . . . . . 57


4.4 Scaling limits of independent points . . . . . . . . . . . . . . . . 59
4.5 Scaling limits I: the semicircle law . . . . . . . . . . . . . . . . . 61
4.6 Scaling limits II: the sine kernel . . . . . . . . . . . . . . . . . . . 62
4.7 Scaling limits III: the Airy kernel . . . . . . . . . . . . . . . . . . 64
4.8 The eigenvalues and condition number of GUE . . . . . . . . . . 65
4.9 Notes on universality and generalizations . . . . . . . . . . . . . 66
4.9.1 Limit theorems for β = 1, 4 . . . . . . . . . . . . . . . . . 66
4.9.2 Universality theorems . . . . . . . . . . . . . . . . . . . . 66
4.10 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

5 The equilibrium measure 69


5.1 The log-gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.2 Energy minimization for the log-gas . . . . . . . . . . . . . . . . 71
5.2.1 Case 1: bounded support . . . . . . . . . . . . . . . . . . 71
5.2.2 Weak lower semicontinuity . . . . . . . . . . . . . . . . . 72
5.2.3 Strict convexity . . . . . . . . . . . . . . . . . . . . . . . . 72
5.2.4 Case 2: Measures on the line . . . . . . . . . . . . . . . . 75
5.3 Fekete points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

6 Other random matrix ensembles 81


6.1 The Ginibre ensembles . . . . . . . . . . . . . . . . . . . . . . . . 82
6.1.1 Schur decomposition of GinC (n) . . . . . . . . . . . . . . 86
6.1.2 QR decomposition of GinC (m, n) . . . . . . . . . . . . . . 87
6.1.3 Eigenvalues and eigenvectors of GinR (n) . . . . . . . . . . 88
6.1.4 QR decomposition of GinR (m, n) . . . . . . . . . . . . . . 91
6.2 SVD and the Laguerre (Wishart) ensembles . . . . . . . . . . . . 91
6.2.1 The Cholesky decomposition . . . . . . . . . . . . . . . . 92
6.2.2 Bidiagonalization of Ginibre . . . . . . . . . . . . . . . . . 95
6.2.3 Limit theorems . . . . . . . . . . . . . . . . . . . . . . . . 96

7 Sampling random matrices 101


7.1 Sampling determinantal point processes . . . . . . . . . . . . . . 101
7.2 Sampling unitary and orthogonal ensembles . . . . . . . . . . . . 101
7.3 Brownian bridges and non-intersecting Brownian paths . . . . . . 101

8 Additional topics 103


8.1 Joint distributions at the edge . . . . . . . . . . . . . . . . . . . . 103
8.2 Estimates of U(n) . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
8.3 Iterations of the power method . . . . . . . . . . . . . . . . . . . 103

A The Airy function 105


A.1 Integral representation . . . . . . . . . . . . . . . . . . . . . . . . 105
A.2 Differential equation . . . . . . . . . . . . . . . . . . . . . . . . . 106
A.3 Asymptotics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106

B Hermite polynomials 107


B.1 Basic formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
B.2 Hermite wave functions . . . . . . . . . . . . . . . . . . . . . . . 108
B.3 Small x asymptotics . . . . . . . . . . . . . . . . . . . . . . . . . 109
B.4 Steepest descent for integrals . . . . . . . . . . . . . . . . . . . . 110
B.5 Plancherel–Rotach asymptotics . . . . . . . . . . . . . . . . . . . 112
B.5.1 Uniform bounds . . . . . . . . . . . . . . . . . . . . . . . 118

C Fredholm determinants 121


C.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
C.2 Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
C.2.1 Change of variables and kernel extension . . . . . . . . . . 124
C.3 Computing Fredholm determinants . . . . . . . . . . . . . . . . . 124
C.4 Separable kernels . . . . . . . . . . . . . . . . . . . . . . . . . . . 124

D Notation 125
D.1 Indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
D.2 Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
D.3 Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
D.4 Lie groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
D.5 Banach spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Chapter 1

Overview

1.1 What is a random matrix?


There are two distinct points of view that one may adopt. On one hand, our
intuitive ideas of randomness are intimately tied to the notion of sampling a
realization of a random variable. Thus, given a random number generator,
one may build a random Hermitian matrix, M ∈ Her(n), by choosing its real
diagonal and complex upper-triangular entries independently at random. It is
conventional to assume further that all the diagonal entries have the same law,
and that all the upper-triangular entries have the same law. For example, we
may assume that the entries on the diagonal are ±1 with probability 1/2, and
that the upper-triangular entries are ±1 ± i with probability 1/4. It is also
conventional to have the variance of the diagonal entries to be twice that of the
off-diagonal entries. Random matrices of this kind are said to be drawn from
Wigner ensembles.
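The construction above is easy to simulate. A minimal sketch (assuming NumPy; the helper name `wigner_sign` is ours, not standard) builds the ±1 / ±1 ± i Wigner matrix described in the text and checks that it is Hermitian:

```python
import numpy as np

def wigner_sign(n, rng):
    """Sample a Wigner matrix with +/-1 diagonal and +/-1 +/- i upper triangle."""
    diag = rng.choice([-1.0, 1.0], size=n)            # +/-1 with probability 1/2
    re = rng.choice([-1.0, 1.0], size=(n, n))
    im = rng.choice([-1.0, 1.0], size=(n, n))
    upper = np.triu(re + 1j * im, k=1)                # +/-1 +/- i with probability 1/4
    return upper + upper.conj().T + np.diag(diag)

rng = np.random.default_rng(0)
M = wigner_sign(6, rng)
assert np.allclose(M, M.conj().T)   # M is Hermitian by construction
```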
On the other hand, one may adopt a more analytic view. The Hilbert-
Schmidt inner product of two Hermitian matrices, \mathrm{Tr}(MN) = \sum_{j,k=1}^{n} M_{jk}\bar{N}_{jk},
defines a natural metric \mathrm{Tr}(dM^2) and volume form DM on Her(n) (see Lec-
ture 2). In this text, unless otherwise stated, \|M\| = \sqrt{\mathrm{Tr}(M^*M)}. Thus, each
positive function p : Her(n) → [0, ∞) that decays sufficiently fast as \|M\| → ∞
may be normalized to define a probability measure. A fundamental example is
the law of the Gaussian Unitary Ensemble (GUE)

p_{\mathrm{GUE}}(M)\, DM = \frac{1}{Z_n} e^{-\frac{1}{2}\mathrm{Tr}(M^2)}\, DM.    (1.1.1)

Here Zn is a normalization constant that ensures pGUE is a probability density


(we use the same notation for different ensembles; thus the numerical value
of Zn must be inferred from the context). The term GUE was introduced by
Freeman Dyson [9], and refers to an important invariance property of pGUE .
Each U ∈ U(n) defines a transformation Her(n) → Her(n), M \mapsto UMU^*. It
is easily checked that the volume form DM is invariant under the map M \mapsto
UMU^*, as is the measure p_{\mathrm{GUE}}(M)\,DM. More generally, a probability measure


on Her(n) is said to be invariant if p(M)\,DM remains invariant under the map
M \mapsto UMU^*, for each U ∈ U(n). Important examples of invariant ensembles
are defined by polynomials in one variable of the form

g(x) = a_{2N}x^{2N} + a_{2N-1}x^{2N-1} + \cdots + a_0, \qquad a_j \in \mathbb{R},\ j = 0, 1, \ldots, 2N,\ a_{2N} > 0.    (1.1.2)

Then the following probability measure is invariant:

p(M)\, DM = \frac{1}{Z_n} e^{-\mathrm{Tr}\, g(M)}\, DM.    (1.1.3)
We have assumed that all matrices are Hermitian simply to be concrete. The
above notions extend to ensembles of matrices from Symm(n) and Quart(n).
The notion of invariance in each case is distinct: for Symm(n), the natural
transformation is M \mapsto QMQ^T, Q ∈ O(n); for Quart(n) it is M \mapsto SMS^D, S ∈
USp(n). The standard Gaussian ensembles in these cases are termed GOE (the
Gaussian Orthogonal Ensemble) and GSE (the Gaussian Symplectic Ensemble),
and they are normalized as follows:

p_{\mathrm{GOE}}(M)\, dM = \frac{1}{Z_n} e^{-\frac{1}{4}\mathrm{Tr}(M^2)}\, dM, \qquad p_{\mathrm{GSE}}(M)\, dM = \frac{1}{Z_n} e^{-\mathrm{Tr}(M^2)}\, dM.    (1.1.4)
The differing normalizations arise from the different volume forms on Symm(n),
Her(n) and Quart(n) as will be explained in Lecture 2. For now, let us note that
the densities for all the Gaussian ensembles may be written in the unified form
Z_n(\beta)^{-1} e^{-\frac{\beta}{4}\mathrm{Tr}(M^2)}    (1.1.5)

where β = 1, 2, and 4 for GOE, GUE and GSE respectively. While it is true
that there are no other ensembles that respect fundamental physical invariance
(in the sense of Dyson), many fundamental results of random matrix theory
can be established for all β > 0. These results follow from the existence of
ensembles of tridiagonal matrices, whose eigenvalues have a joint distribution
that interpolates those of the β = 1, 2, and 4 ensembles to all β > 0 [8].

1.2 The main limit theorems


The basic question in random matrix theory is the following: what can one
say about the statistics of the eigenvalues of a random matrix? For example,
what is the probability that the largest eigenvalue lies below a threshold? Or,
what is the probability that there are no eigenvalues in a given interval? The
difficulty here is that even if the entries of a random matrix are independent,
the eigenvalues are strongly coupled.
Gaussian ensembles play a very special, and important, role in random ma-
trix theory. These are the only ensembles that are both Wigner and invari-
ant (see Theorem 18 below). Pioneering, ingenious calculations by Dyson [9],

Gaudin and Mehta [26, 25] on the Gaussian ensembles served to elucidate the
fundamental limit theorems of random matrix theory. In this section we outline
these theorems, assuming always that the ensemble is GUE. Our purpose is
to explain the form of the main questions (and their answers) in the simplest
setting. All the results hold in far greater generality as is briefly outlined at the
end of this section.
By the normalization (1.1.1), a GUE matrix has independent standard nor-
mal entries on its diagonal (mean zero, variance 1). The off-diagonal entries
have mean zero, with real and imaginary parts each of variance 1/2. We denote
the ordered eigenvalues of the GUE matrix by λ_1 ≤ λ_2 ≤ \cdots ≤ λ_n. A
fundamental heuristic for GUE matrices (that will be proven later, and may
be easily simulated) is that the largest and smallest eigenvalues have size
O(\sqrt{n}). In fact, λ_1 ≈ −2\sqrt{n} and λ_n ≈ 2\sqrt{n} as n → ∞. Since there are n
eigenvalues, the gap between consecutive eigenvalues is typically O(1/\sqrt{n}).
There are thus two natural scaling limits to consider as n → ∞:
1. Rescale M \mapsto n^{-1/2}M so that the spectral radius is O(1). In this scaling
limit, n eigenvalues are contained within a bounded interval, and we obtain
a deterministic limit called the semicircle law.
2. Rescale M \mapsto n^{1/2}M so that the gaps between eigenvalues are O(1). In
this scaling limit, we expect a random limiting point process. The limiting
point process is a determinantal point process called the Sine_2 process.
In fact, the situation is more subtle. While the expected gap between eigenvalues
of the rescaled matrix n^{-1/2}M is indeed O(1/n) in the bulk, the gaps are O(n^{-2/3})
near the edge of the spectrum. There is an entirely different scaling limit,
called the Airy_2 process, obtained by rescaling the spectrum of M ± 2\sqrt{n}\,I.
In all that follows, we consider a sequence of random matrices of size n
sampled from GUE(n). To make this explicit, the matrix is denoted M_n, and
its ordered eigenvalues are denoted λ_1^{(n)} ≤ λ_2^{(n)} ≤ \cdots ≤ λ_n^{(n)}.

1.2.1 The semicircle law


Definition 1. The probability density and distribution function

p_{sc}(x) = \frac{1}{2\pi}\sqrt{4 - x^2}\,\mathbb{1}_{|x|\le 2}, \qquad F_{sc}(x) = \int_{-\infty}^{x} p_{sc}(x')\, dx'    (1.2.1)

are called the semicircle density and the semicircle distribution respectively.

Theorem 2. Let M_n be a sequence of GUE matrices of size n. The rescaled
empirical spectral measures

\mu_n(dx) = \frac{1}{n}\sum_{j=1}^{n} \delta_{n^{-1/2}\lambda_j^{(n)}}(dx)    (1.2.2)

converge weakly to the semicircle density almost surely.
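Theorem 2 is easy to test numerically. A minimal sketch (assuming NumPy; `gue` is our own helper, normalized as in (1.1.1)) samples one large GUE matrix and checks that the rescaled eigenvalues fill out [−2, 2] with the semicircle profile:

```python
import numpy as np

def gue(n, rng):
    """Sample GUE normalized as in (1.1.1): diag variance 1, off-diag 1/2 per component."""
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A + A.conj().T) / 2

rng = np.random.default_rng(1)
n = 400
lam = np.linalg.eigvalsh(gue(n, rng)) / np.sqrt(n)   # rescaled spectrum

# Empirical checks of the semicircle law: support close to [-2, 2], median near 0,
# and F_sc(1) = 1/2 + sqrt(3)/(4 pi) + 1/6, approximately 0.8045.
assert np.abs(lam).max() < 2.1
assert abs((lam < 0).mean() - 0.5) < 0.05
assert abs((lam < 1).mean() - 0.8045) < 0.05
```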



Theorem 2 may also be interpreted as the statement that the empirical spec-
tral distribution of the matrices M_n/\sqrt{n} converges to the semicircle distribution.
The shortest proof of Theorem 2 uses the following integral transform.

Definition 3. Assume μ is a measure on R that satisfies the finiteness condition

\int_{-\infty}^{\infty} \frac{1}{\sqrt{1 + x^2}}\, \mu(dx) < \infty.    (1.2.3)

The Stieltjes transform of μ is the function

\hat{\mu}(z) = \int_{-\infty}^{\infty} \frac{1}{x - z}\, \mu(dx), \qquad z \in \mathbb{C}\setminus\mathbb{R}.    (1.2.4)

The Stieltjes transform is of fundamental importance in the theory of or-
thogonal polynomials and spectral theory. This is because there are natural
Stieltjes transforms associated to the resolvent (M − z)^{-1}, such as

\mathrm{Tr}(M - z)^{-1} \quad \text{and} \quad v^*(M - z)^{-1}v, \qquad v \in \mathbb{C}^n,\ |v| = 1.    (1.2.5)

The general proof of Theorem 2 uses a recursive expression for the law of \mathrm{Tr}(M_n − z)^{-1}. As n → ∞, the fixed point of this recursion, R_{sc}, solves
the quadratic equation

R^2 + zR + 1 = 0.    (1.2.6)

It is then easy to verify that

R_{sc}(z) = \frac{1}{2}\left(-z + \sqrt{z^2 - 4}\right), \qquad z \in \mathbb{C}\setminus[-2, 2].    (1.2.7)

We recover the semicircle law from R_{sc}(z) by evaluating the jump in \mathrm{Im}(R_{sc}(z))
across the branch cut [−2, 2].
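Both facts are easy to check numerically. A sketch (NumPy only; note the sign: with the convention (1.2.4), where \hat{\mu}(z) \sim -1/z at infinity, R_{sc} as given in (1.2.7) satisfies R^2 + zR + 1 = 0):

```python
import numpy as np

def R_sc(z):
    """Stieltjes transform (1.2.7); sqrt branch chosen so R_sc(z) ~ -1/z at infinity."""
    return 0.5 * (-z + np.sqrt(z - 2) * np.sqrt(z + 2))

# With the sign convention of Definition 3, R_sc solves R^2 + z R + 1 = 0.
z = 1.5 + 0.3j
R = R_sc(z)
assert abs(R**2 + z * R + 1) < 1e-12

# Stieltjes inversion: (1/pi) Im R_sc(x + i0) recovers the semicircle density.
x = 0.7
density = R_sc(x + 1e-8j).imag / np.pi
assert abs(density - np.sqrt(4 - x**2) / (2 * np.pi)) < 1e-6
```

The second assertion is exactly the "jump across the branch cut" statement: the imaginary part of R_{sc} just above [−2, 2] is π p_{sc}(x).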
Remark 4. The heuristic to determine the typical spacings is the following.
Define γ_j^{(n)} ∈ [−2, 2] by the relation

\frac{j}{n} = \int_{-\infty}^{\gamma_j^{(n)}} p_{sc}(x)\, dx, \qquad j = 1, 2, \ldots, n.

Then the approximation λ_j^{(n)} ≈ \sqrt{n}\,γ_j^{(n)} should hold¹. We have

\frac{1}{n} = \int_{\gamma_j^{(n)}}^{\gamma_{j+1}^{(n)}} p_{sc}(x)\, dx \approx \left(\gamma_{j+1}^{(n)} - \gamma_j^{(n)}\right) p_{sc}\!\left(\gamma_j^{(n)}\right).    (1.2.8)

If j = j(n) is chosen so that γ_j^{(n)} → r, r ∈ (−2, 2) (i.e. in the "bulk") we have

\lambda_{j+1}^{(n)} - \lambda_j^{(n)} \approx \frac{1}{\sqrt{n}\, p_{sc}(r)}.

¹This is made rigorous and quantitative by Erdős, Yau and Yin [13].
At the edge, consider (noting that γ_1^{(n)} > −2)

\frac{1}{n} = \int_{-2}^{\gamma_1^{(n)}} p_{sc}(x)\, dx \approx \int_{-2}^{\gamma_1^{(n)}} \frac{1}{\pi}\sqrt{2 + x}\, dx = \frac{2}{3\pi}\left(\gamma_1^{(n)} + 2\right)^{3/2},

so that γ_1^{(n)} + 2 ≈ c\,n^{-2/3}, and

2\sqrt{n} + \lambda_1^{(n)} = O(n^{-1/6}), \qquad \lambda_n^{(n)} - 2\sqrt{n} = O(n^{-1/6}).    (1.2.9)

1.2.2 Fluctuations in the bulk: the sine process

We now rescale so that the gaps between eigenvalues are O(1), and the scaling
limit is a random process. This random process will always be denoted Sine_2
(and Sine_β for the general β-ensembles). Each realization of the Sine_2 process is a
countable set of points \{x_k\}_{k=-\infty}^{\infty}. One of the fundamental statistics associated
to a point process is the probability of having k points in an interval. In order to
state a typical fluctuation theorem that describes these probabilities, we must
define the sine kernel and its Fredholm determinants.

Definition 5. The sine kernel is the integral kernel on R × R given by

K_{sine}(x, y) = \frac{\sin \pi(x - y)}{\pi(x - y)}, \qquad x \neq y,    (1.2.10)

and K_{sine}(x, x) = 1.
In the following theorem we will assume that x and y are restricted to a finite
interval (a, b). The sine kernel defines an integral operator on L^2(a, b) that we
denote by K_{sine} 1_{(a,b)}. The kernel K_{sine}(x, y) is clearly continuous, thus bounded,
for x, y ∈ (a, b). Thus K_{sine} 1_{(a,b)} is trace-class, and it has a well-defined
Fredholm determinant

\det\left(1 - K_{sine} 1_{(a,b)}\right) = 1 + \sum_{m=1}^{\infty} \frac{(-1)^m}{m!} \int_{(a,b)^m} \det\left(K_{sine}(x_j, x_k)\right)_{1\le j,k\le m} dx_1\, dx_2 \cdots dx_m.    (1.2.11)

Though perhaps mysterious at first sight, the origin of this formula is rather
simple. Integral operators with some smoothness and boundedness (in particu-
lar, continuous kernels K whose trace \int_a^b |K(x, x)|\, dx is finite) may be
approximated on a discrete grid of size h by a finite-dimensional discretization
K_h. The determinant det(I − K_h) is then the usual determinant of a matrix, and we
may use the definition of the determinant to expand det(I − K_h) in a finite se-
ries, which is nothing but the infinite series above in the instance when all terms
beyond m = rank(K_h) vanish. This approach was pioneered by Fredholm in
1900, before the development of functional analysis. From a probabilistic point
of view, this formula arises from the inclusion-exclusion principle, taken to the
limit. The operator theory pioneered by Fredholm allows that limit to
be understood.

Theorem 6 (Gaudin-Mehta [26]). For each finite interval (a, b) ⊂ R, and
r ∈ (−2, 2),

\lim_{n\to\infty} \mathbb{P}\left(\sqrt{n}\, p_{sc}(r)\left(\lambda_k^{(n)} - r\sqrt{n}\right) \notin (a, b),\ 1 \le k \le n\right) = \det\left(1 - K_{sine} 1_{(a,b)}\right).    (1.2.12)

The probabilities of the Sine_2 process can be expressed without reference to
the matrices M_n. For each interval (a, b) let N_{(a,b)} = \sum_{k=-\infty}^{\infty} 1_{x_k \in (a,b)}. Then,

\mathbb{P}\left(N_{(a,b)} = 0\right) = \det\left(1 - K_{sine} 1_{(a,b)}\right).    (1.2.13)

For comparison, if we had a Poisson process \{\tilde{x}_k\}_{k=-\infty}^{\infty} with rate λ(dx), the
associated count \tilde{N}_{(a,b)} would satisfy

\mathbb{P}\left(\tilde{N}_{(a,b)} = 0\right) = \exp\left(-\int_a^b \lambda(dx)\right).

1.2.3 Fluctuations at the edge: the Airy point process

Remark 4 and Theorem 6 reveal that the gaps between consecutive eigenvalues
λ_j^{(n)} and λ_{j+1}^{(n)} are O(n^{-1/2}). However, the fluctuations at the edge are much
larger, O(n^{-1/6}). The point process of shifted and scaled eigenvalues converges
in distribution to a limiting point process \{y_k\}_{k=1}^{\infty} called the Airy_2 process. In
order to describe the law of this process, we must define the Airy function and
the Airy kernel.

Definition 7. The Airy function is defined by the oscillatory integral

\mathrm{Ai}(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{ikx}\, e^{ik^3/3}\, dk.    (1.2.14)

The Airy function is one of the classical special functions [1]. It admits
several alternative definitions. For instance, the oscillatory integral in (1.2.14)
may be deformed into an absolutely convergent integral in the complex plane.
This argument allows us to establish that the Airy function is entire and to
determine its asymptotic expansions as x → ±∞.
These properties may also be established using the theory of ordinary dif-
ferential equations in the complex plane [17]. It is easy to verify from (1.2.14),
after deformation, that Ai(x) satisfies the differential equation

\varphi''(x) = x\,\varphi(x), \qquad -\infty < x < \infty.    (1.2.15)

Equation (1.2.15) admits two linearly independent solutions, only one of which
decays as x → ∞. Up to a (fixed by convention, but otherwise arbitrary)
normalization constant, the decaying solution is Ai(x).
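These properties can be checked numerically. A sketch using SciPy's `airy` (which returns Ai, Ai′, Bi, Bi′) verifies the differential equation (1.2.15) by finite differences, together with the decay of Ai and growth of Bi as x → ∞:

```python
import numpy as np
from scipy.special import airy

x = np.linspace(-5, 5, 2001)
ai, aip, bi, bip = airy(x)

# Second derivative by central differences; should match x * Ai(x) by (1.2.15).
h = x[1] - x[0]
ai_xx = (ai[2:] - 2 * ai[1:-1] + ai[:-2]) / h**2
assert np.max(np.abs(ai_xx - x[1:-1] * ai[1:-1])) < 1e-3

# Ai is the decaying solution as x -> +infinity; the second solution Bi grows.
assert ai[-1] < 1e-3 and bi[-1] > 1e2
```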

Definition 8. The Airy kernel is the continuous integral kernel on R × R given
by

K_{Airy}(x, y) = \frac{\mathrm{Ai}(x)\mathrm{Ai}'(y) - \mathrm{Ai}'(x)\mathrm{Ai}(y)}{x - y}, \qquad x \neq y,

and by continuity at x = y.

Observe that both the sine and Airy kernels have the form

K(x, y) = \frac{f(x)f'(y) - f'(x)f(y)}{x - y}, \qquad x \neq y,    (1.2.16)

where f solves a second-order linear differential equation. Similar kernels arise
in various limiting models in random matrix theory. For instance, the Bessel
kernel, corresponding to f(x) = J_α(x), the Bessel function with parameter α,
describes fluctuations about the singular values of random positive definite
Hermitian matrices.

Theorem 9. For each interval (a, b) ⊂ R, −∞ < a < b ≤ ∞,

\lim_{n\to\infty} \mathbb{P}\left(n^{1/6}\left(\lambda_k^{(n)} - 2\sqrt{n}\right) \notin (a, b),\ 1 \le k \le n\right) = \det\left(1 - K_{Airy} 1_{(a,b)}\right).    (1.2.17)

As in the remarks following Theorem 6, the expression \det\left(1 - K_{Airy} 1_{(a,b)}\right)
gives the probability that no points of a realization of the Airy_2 point process
lie in (a, b).

1.2.4 Fredholm determinants, Painlevé equations, and integrable systems

It is immediate from Theorem 6 and Theorem 9 that the Fredholm determinants
\det\left(1 - K_{sine} 1_{(a,b)}\right) and \det\left(1 - K_{Airy} 1_{(a,b)}\right) are positive for all (a, b). This is
astonishing, if one treats (1.2.11) as a starting point, since it is by no means clear
that the signed infinite series sums to a positive number! It is, in fact, rather
challenging to extract meaningful information, such as the asymptotics of tails,
from the expression of probabilities as Fredholm determinants. A crucial piece
of the puzzle lies in the connection between Fredholm determinants and the
theory of integrable systems. More precisely, the Fredholm determinants satisfy
differential equations in a and b (or, more generally, in the endpoints of intervals,
when one considers the obvious extensions of Theorem 6 and Theorem 9 to
a collection of intervals \prod_{j=1}^{m}(a_j, b_j)). These ordinary differential equations
have a special, integrable structure that allows their analysis. The following
theorems illustrate this aspect of random matrix theory.

Theorem 10 (Jimbo-Miwa-Mori-Sato [18]). For all t > 0,

\det\left(1 - K_{sine} 1_{(-\frac{t}{2}, \frac{t}{2})}\right) = \exp\left(\int_0^t \frac{\sigma(s)}{s}\, ds\right),    (1.2.18)

where σ(t) is the solution to the Painlevé-5 equation

\left(t\sigma''\right)^2 + 4\left(t\sigma' - \sigma\right)\left(t\sigma' - \sigma + (\sigma')^2\right) = 0,    (1.2.19)

which satisfies the asymptotic condition

\sigma(t) = -\frac{t}{\pi} - \frac{t^2}{\pi^2} - \frac{t^3}{\pi^3}, \qquad t \downarrow 0.    (1.2.20)
Theorem 11 (Tracy-Widom distribution [36]). For all real t,

F_2(t) := \det\left(1 - K_{Airy} 1_{(t,\infty)}\right) = \exp\left(-\int_t^{\infty} (s - t)\, q^2(s)\, ds\right),    (1.2.21)

where q is the solution to the Painlevé-2 equation

q'' = tq + 2q^3, \qquad -\infty < t < \infty,    (1.2.22)

which satisfies the asymptotic condition

q(t) \sim \mathrm{Ai}(t), \qquad t \to \infty.    (1.2.23)
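Theorem 9 and Theorem 11 together say that for large n the rescaled largest eigenvalue n^{1/6}(\lambda_n^{(n)} - 2\sqrt{n}) is approximately Tracy-Widom distributed. A quick Monte Carlo sketch (NumPy only; the sample size is modest, so only the rough location and spread are asserted; the F_2 distribution has mean approximately −1.77 and standard deviation approximately 0.90):

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 100, 200
samples = np.empty(trials)
for i in range(trials):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    lam_max = np.linalg.eigvalsh((A + A.conj().T) / 2)[-1]   # largest eigenvalue
    samples[i] = n ** (1 / 6) * (lam_max - 2 * np.sqrt(n))

# Fluctuations are O(1) on this scale, centered near the Tracy-Widom mean.
assert -3.0 < samples.mean() < -0.5
assert 0.4 < samples.std() < 2.0
```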

The Painlevé differential equations are a special family of nonlinear ordi-
nary differential equations that generalize the classical theory of linear dif-
ferential equations in the complex plane and the associated theory of special
functions [17]. For example, the Painlevé-2 equation (1.2.22) may be viewed
as a nonlinear analogue of the Airy differential equation (1.2.15). Broadly, the
Painlevé differential equations represent a complete classification of second-order
differential equations with the Painlevé property (their only movable singular-
ities, that is, singularities movable by changing initial conditions, are poles)
that are not solvable with elementary functions. The theory of Painlevé equations
was developed in the early 1900s by Boutroux and Painlevé, but fell into obscurity². It was
reborn in the 1970s with the discovery of their importance in integrable systems
and exactly solvable models in statistical mechanics, such as the Ising model in
2D [24]. We illustrate these links with a fundamental integrable system: the
Korteweg-de Vries (KdV) equation
Korteweg-de Vries (KdV) equation

ut + 6uux + uxxx = 0, −∞ < x < ∞, t ≥ 0. (1.2.24)

Despite the fact that KdV is nonlinear, it may be solved explicitly through the
inverse scattering transform. We will not discuss this method in detail. But in
order to make the connection with random matrix theory, let us note that if one
seeks self-similar solutions to KdV of the form

u(x, t) = \frac{1}{(3t)^{2/3}}\, q\!\left(\frac{x}{(3t)^{2/3}}\right)    (1.2.25)

²Paul Painlevé was rather restless: he began in mathematics, became an early aviation
enthusiast, and then turned to politics. He rose to become the Prime Minister of France for
part of World War I, and was later the designer of the disastrous Maginot line.

then q = v^2 + v' and v satisfies the Painlevé-2 equation (1.2.22). In fact, it is in


this context that Hastings and McLeod established the existence of a solution
to (1.2.22) that satisfies the asymptotic condition (1.2.23) [15]. It is remarkable
that it is exactly this solution that describes the Tracy-Widom distribution
F2 (t)!

1.2.5 Universality
We have restricted attention to matrices from GUE to present some of the
central theorems in the subject in an efficient manner. One of the main achieve-
ments of the past decade has been the establishment of universality – informally,
this is the notion that the limiting fluctuations in the bulk and edge described
by the Sine2 and Airy2 processes, hold for both Wigner and invariant ensembles
which satisfy natural moment assumptions. The idea of universality is of clear
practical importance (we need to understand only a few universal limits). It also
appears to hold the key to some of the connections between random matrix the-
ory and other areas of mathematics. The explanation of these connections may
lie in the fact that determinantal point processes, such as the Sine2 and Airy2
process, have the simplest structure of strongly interacting point processes. By
contrast, Poisson processes, while universal, describe non-interacting points.

1.3 Connections to other areas of mathematics


Random matrix theory has deep connections with many areas of mathematics,
many of which are still poorly understood. A brief overview of some of these
connections is presented below. While some of these notions, such as the connections
with stochastic PDE, require more background than we assume, some
other connections (e.g. with quantum gravity) are in fact more elementary (and
fundamental) than one may naively expect. Our purpose here is to present a
small sample of the rich set of ideas that make the subject so attractive.

1.3.1 Number theory


The Riemann zeta function is defined by the infinite sum

\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}, \qquad \mathrm{Re}(s) > 1.    (1.3.1)

The function ζ(s) is central to number theory, since it provides a generating


function for the distribution of the prime numbers via Euler’s product formula

\sum_{n=1}^{\infty} \frac{1}{n^s} = \prod_{p\ \mathrm{prime}} \frac{1}{1 - p^{-s}}, \qquad \mathrm{Re}(s) > 1.    (1.3.2)

For instance, the divergence of the harmonic series at s = 1 provides a proof


that there are infinitely many primes. The study of ζ(s) by complex analysis is

the cornerstone of analytic number theory. The basic facts are as follows. The
function ζ(s) extends to a meromorphic function on C by analytic continuation,
with a simple pole at s = 1 where the residue is 1. A closely related
function is the Riemann ξ-function

\xi(s) = \frac{1}{2\pi^{s/2}}\, s(s - 1)\, \Gamma\!\left(\frac{s}{2}\right) \zeta(s).    (1.3.3)

Recall that the Γ-function is a 'continuous interpolation' of the factorial, defined
by the integral

\Gamma(s) = \int_0^{\infty} e^{-x} x^{s-1}\, dx, \qquad \mathrm{Re}(s) > 0.    (1.3.4)

The Γ-function extends to a meromorphic function on C, with simple poles at
0, −1, −2, \ldots, where the residue at s = −n is (−1)^n/n!. These poles cancel the
'trivial' zeros of the ζ-function, and the essential difficulties related to the study
of the ζ-function are more transparent for the ξ-function. It satisfies the functional equation
ξ(s) = ξ(1 − s), s ∈ C. (1.3.5)
The celebrated Riemann Hypothesis is the conjecture that all zeros of the ξ
function lie on the critical line Re(s) = 1/2 (this line is the symmetry axis for
the functional equation above). In his fundamental paper on the distribution
of prime numbers (translated in [12] and [30]) Riemann presented a series of
asymptotic expansions that would imply rigorous bounds on the distribution of
primes if the Riemann Hypothesis is true.
The connection between random matrix theory and the Riemann Hypoth-
esis is two-fold. First, if one could construct a Hermitian operator with point
spectrum whose eigenvalues coincide with the zeros of s \mapsto ξ(1/2 + is), then the
Riemann Hypothesis would follow immediately (since all eigenvalues of a Her-
mitian operator are real). The catch, of course, is to determine such an operator.
Nevertheless, as we discuss below, random matrix theory has shed new light on
the spectral theory of several operators, deterministic and random. Thus, the
theory provides a catalog of 'guesses'. Second, if one assumes the Riemann hy-
pothesis, the fluctuations in the zeros of ζ(s) are described by the sine kernel!
Under the Riemann hypothesis, the non-trivial zeros of ζ(s) may be written
γ_n = \frac{1}{2} ± it_n with 0 < t_1 < t_2 < \cdots. Let

w_n = \frac{t_n}{2\pi} \log\left(\frac{t_n}{2\pi}\right), \qquad \text{and} \qquad N(x) = \sum_{n=1}^{\infty} 1_{w_n \le x}.    (1.3.6)

This rescaling is chosen so that \lim_{x\to\infty} N(x)/x = 1, in accordance with the
Prime Number Theorem.
Despite the fact that the zeros w_n are deterministic, we may introduce proba-
bilistic notions by counting the (rescaled) zeros up to a level x > 0. For example,
we may define the empirical probability measure

\mu_1(dw; x) = \frac{1}{N(x)} \sum_{k=1}^{N(x)} \delta_{w_k}(dw).    (1.3.7)

In order to study the gaps between eigenvalues, we must consider instead the
empirical measures

\mu_2(dl; x) = \frac{1}{x} \sum_{1\le j,k\le N(x);\, j\neq k} \delta_{w_j - w_k}(dl).    (1.3.8)

The expectation of a continuous function with respect to μ_2(dl; x) is denoted

R_2(f; x) = \int_{-\infty}^{\infty} f(l)\, \mu_2(dl; x) = \frac{1}{x} \sum_{1\le j,k\le N(x);\, j\neq k} f(w_j - w_k).    (1.3.9)

Under the assumption that f is band-limited, i.e. that its Fourier transform has
compact support, Montgomery established the following.

Theorem 12 (Montgomery). Assume the Riemann Hypothesis. Assume f is
a Schwartz function whose Fourier transform \hat{f} is supported in [−1, 1]. Then

\lim_{x\to\infty} R_2(f; x) = \int_{-\infty}^{\infty} f(l)\, \mu_2(dl), \qquad \mu_2(dl) = \left(1 - \left(\frac{\sin \pi l}{\pi l}\right)^2\right) dl.    (1.3.10)

The point here is that the right hand side of (1.3.10) is precisely the 2-point
function for the sine process. More generally, Montgomery's theorem is now
known to hold for the distribution of n consecutive gaps. That is, the rescaled
fluctuations converge to the Sine_2 process in distribution. Bourgade's thesis is
an excellent review of the state of the art [4].
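The 2-point function of a determinantal point process is the 2 × 2 determinant of its kernel, so the density in (1.3.10) is exactly the sine-kernel determinant evaluated at the pair (0, l). A one-line check (NumPy only):

```python
import numpy as np

def K(x, y):
    """Sine kernel (1.2.10), with K(x, x) = 1."""
    return 1.0 if x == y else np.sin(np.pi * (x - y)) / (np.pi * (x - y))

l = 0.37
two_point = np.linalg.det(np.array([[K(0.0, 0.0), K(0.0, l)],
                                    [K(l, 0.0), K(l, l)]]))
montgomery = 1 - (np.sin(np.pi * l) / (np.pi * l)) ** 2
assert abs(two_point - montgomery) < 1e-12   # same density as (1.3.10)
```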

1.3.2 Combinatorics and enumerative geometry


We will present two problems of enumerative combinatorics that connect with
random matrix theory. As a first example, we note that the 2m-th moment of
the semicircle law is

    ∫_{−2}^{2} x^{2m} psc(x) dx = (1/(m + 1)) (2m choose m) = Cm,    (1.3.11)

the m-th Catalan number. An analytic proof of this identity follows from a
comparison between the Stieltjes transform Rsc(z) and the generating function

    Ĉ(x) = Σ_{m≥0} Cm x^m = (1 − √(1 − 4x))/(2x).    (1.3.12)

The Catalan numbers describe the solution to many combinatorial problems.³
For example, Cm enumerates the Bernoulli excursions, or Dyck paths, of length
2m: these are walks Sk, 0 ≤ k ≤ 2m, such that S0 = S2m = 0, Sk ≥ 0 for
0 ≤ k ≤ 2m, and |Sk+1 − Sk| = 1.
3 Stanley lists 66 examples in [32, Exercise 6.19].
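The moment identity (1.3.11) is easy to test numerically. The sketch below (the helper names are ours) compares a midpoint-rule quadrature of the even semicircle moments with the Catalan numbers:

```python
import math

def semicircle_moment(m, npts=200000):
    # 2m-th moment of p_sc(x) = sqrt(4 - x^2)/(2 pi) on [-2, 2],
    # computed by the midpoint rule.
    h = 4.0 / npts
    total = 0.0
    for i in range(npts):
        x = -2.0 + (i + 0.5) * h
        total += x ** (2 * m) * math.sqrt(4.0 - x * x) / (2.0 * math.pi) * h
    return total

def catalan(m):
    # C_m = (1/(m+1)) binom(2m, m); the division is exact.
    return math.comb(2 * m, m) // (m + 1)

for m in range(5):
    print(m, catalan(m), round(semicircle_moment(m), 4))
```

The printed pairs agree to quadrature accuracy: 1, 1, 2, 5, 14, the same numbers that count Dyck paths of length 2m.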

A deeper set of connections between integrals on Her(n) and geometry was
first noticed by the physicist 't Hooft [34]. Ignoring (for now) the physicists'
motivation, let us illustrate a particular computational technique that underlies
their work. Consider a matrix integral of the form

    Zn(z) = ∫_{Her(n)} e^{−z Tr(M⁴)} pGUE(M) DM,    Re(z) > 0.    (1.3.13)

The quartic nonlinearity prevents us from expressing this integral in closed form.
Nevertheless, this integral may be expanded in a Taylor series

    Zn(z) = Σ_{k=0}^∞ ((−z)^k/k!) ∫_{Her(n)} (Tr(M⁴))^k pGUE(M) DM,    Re(z) > 0.    (1.3.14)

A fundamental lemma on Gaussian integrals on ℝ^N (Wick's lemma) allows us
to reduce each integral above to a sum over pairings of indices. It is convenient
to keep track of these pairings with a graphical description, called a Feynman
diagram. 't Hooft observed that when ℝ^N ≡ Her(n) the Feynman diagram
associated to each term in (1.3.14) enumerates embedded graphs on a Riemann
surface. This characterization was independently discovered by mathematicians.

Lemma 1 (Harer–Zagier [14]). Let εg(m) denote the number of ways to pair
the edges of a symmetric 2m-gon to form an orientable surface with genus g.
Then

    f(m, n) = Σ_{g=0}^∞ εg(m) n^{m+1−2g} = ∫_{Her(n)} Tr(M^{2m}) pGUE(M) DM.    (1.3.15)

Note that only finitely many terms in the sum are non-zero. The series above
is an instance of a genus-expansion. It illustrates the beautiful fact that matrix
integrals serve as the generating functions for Riemann surfaces with a given
combinatorial decomposition!
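For small m, (1.3.15) can be checked by Monte Carlo. The sketch below assumes the normalization E M_jj² = 1, E|M_jk|² = 1 for pGUE (our assumption about the unscaled GUE convention); with m = 2 the only surfaces are the sphere (ε0(2) = 2) and the torus (ε1(2) = 1), so the genus expansion predicts E Tr(M⁴) = 2n³ + n.

```python
import numpy as np

rng = np.random.default_rng(0)

def gue(n):
    # Unscaled GUE: diagonal entries N(0,1), off-diagonal complex
    # entries with E|M_jk|^2 = 1 (a normalization assumption).
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A + A.conj().T) / 2

n, trials = 3, 40000
estimate = sum(np.trace(np.linalg.matrix_power(gue(n), 4)).real
               for _ in range(trials)) / trials

prediction = 2 * n**3 + n  # genus expansion for m = 2
print(estimate, prediction)
```

With n = 3 the prediction is 57, and the Monte Carlo estimate should land within a few percent of it.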

1.3.3 Random permutations


Consider the symmetric group S(n) of permutations of size n. We have that
|S(n)| = n! and any element of S(n) can be represented as an ordering of the
integers 1, 2, . . . , n. For example, three elements of S(5) are

    π1 = 54312,    π2 = 12435,    π3 = 45123.

We define a function ℓ : S(n) → ℕ by ℓ(π) = length of the longest increasing
subsequence of π. For example,

    ℓ(π1) = 2,    ℓ(π2) = 4,    ℓ(π3) = 3.

There is a natural probability distribution Uni(n) on S(n), the uniform distribution,
or Haar measure. If Πn ∼ Uni(n) then P(Πn = π) = 1/n! for any π ∈ S(n).

This problem was one of the first, if not the first, to be investigated by
Monte Carlo simulation on a computer — Ulam performed simulations in the
early 60's [37] and conjectured that

    (1/√n) E[ℓ(Πn)] → c.

It was later established by Vershik and Kerov that c = 2 [38]. The detailed
numerical computations of Odlyzko and Rains [28] indicated

    E[ℓ(Πn)] − 2√n = O(n^{1/6}).    (1.3.16)

The comparison between (1.2.9) and (1.3.16) should be striking. Indeed, the
following is often called the Baik–Deift–Johansson Theorem and it makes this
scaling rigorous.

Theorem 13 ([2]). Let S(n), ℓ and Πn be as above. Then for all t ∈ ℝ

    lim_{n→∞} P( (ℓ(Πn) − 2√n)/n^{1/6} ≤ t ) = det(1 − KAiry 1_{(t,∞)}).

That is, the limit is the same as that of the largest eigenvalue of a random
Hermitian matrix.

This theorem is discussed in great detail in [3]. This surprising connection


was explored further by Johansson [19] leading to many connections to random
growth processes and the KPZ equation.
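Ulam's experiment is easy to repeat today: patience sorting computes ℓ(π) in O(n log n) time. A sketch (the function name is ours):

```python
import bisect
import random

def lis_length(perm):
    # Patience sorting: piles[i] is the smallest possible tail value of an
    # increasing subsequence of length i + 1; the pile count equals l(perm).
    piles = []
    for v in perm:
        i = bisect.bisect_left(piles, v)
        if i == len(piles):
            piles.append(v)
        else:
            piles[i] = v
    return len(piles)

print(lis_length([5, 4, 3, 1, 2]))  # 2
print(lis_length([1, 2, 4, 3, 5]))  # 4
print(lis_length([4, 5, 1, 2, 3]))  # 3

# Ulam's Monte Carlo experiment: E[l(Pi_n)]/sqrt(n) approaches 2 from below.
random.seed(0)
n, trials = 4000, 200
mean = sum(lis_length(random.sample(range(n), n)) for _ in range(trials)) / trials
print(mean / n ** 0.5)
```

The last ratio is slightly below 2, consistent with the negative-mean n^{1/6} correction in (1.3.16).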

1.3.4 Spectral and inverse spectral theory of operators


While Theorem 2–Theorem 9 associate limits to the spectrum of the operators
Mn , it is natural to ask if there are limiting operators that may be naturally
associated to the limiting spectra. Thus, for Theorem 2 we ask for a ‘natural’
operator that has spectral density given by the semicircle law, psc , and for
Theorem 6 and Theorem 9 we seek ‘natural’ random operators that have pure
point spectra with the law of the Sine2 and Airy2 point processes. What is a
'natural' operator is, of course, a subjective idea, but convincing candidate
operators are suggested by inverse spectral theory.
We say that a matrix T ∈ Symm(n) is a Jacobi matrix if it is tridiagonal and
all its off-diagonal entries are strictly positive. The spectral measure of a Jacobi
matrix is the measure whose Stieltjes transform is e₁ᵀ(T − z)⁻¹e₁. There is a 1−1
correspondence between the space of n × n Jacobi matrices and probability measures
on the line with n atoms. This correspondence extends naturally to semi-infinite
Jacobi matrices. The essence of this theory (due to Stieltjes) is that the entries of T
may be determined from the continued fraction expansion of e₁ᵀ(T − z)⁻¹e₁.
This correspondence will be considered in detail in Lecture 3, but here is a
concrete example. By applying Stieltjes’ procedure to the semicircle law, we

discover that psc(x) is the spectral density for the semi-infinite tridiagonal
matrix that is 1 on the off-diagonal, and 0 in all other entries. This follows from
the continued fraction expansion

    Rsc(−z) = 1/(z − 1/(z − 1/(z − · · ·))).    (1.3.17)
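The continued fraction (1.3.17) can be checked numerically: for z > 2, iterating w ↦ 1/(z − w) converges to the decaying root w = (z − √(z² − 4))/2 of w² − zw + 1 = 0, which is the value of Rsc(−z) under the convention that Rsc is the Stieltjes transform of psc (our assumption about the text's normalization). A sketch:

```python
import math

def cf_value(z, depth=200):
    # Truncation of the continued fraction 1/(z - 1/(z - 1/(z - ...))).
    w = 0.0
    for _ in range(depth):
        w = 1.0 / (z - w)
    return w

def rsc_minus(z):
    # Decaying root of w**2 - z*w + 1 = 0, valid for z > 2.
    return (z - math.sqrt(z * z - 4.0)) / 2.0

for z in [2.5, 3.0, 5.0]:
    print(z, cf_value(z), rsc_minus(z))
```

The iteration is a contraction for z > 2, so the truncated fraction agrees with the closed form to machine precision.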
Ensembles of tridiagonal matrices are of practical importance in numerical
linear algebra. For instance, a key pre-processing step in solving symmetric
eigenvalue problems is to transform the matrix to tridiagonal form by Householder's
procedure (Lecture 3). Dumitriu and Edelman pushed forward the
Gaussian measures under this procedure to obtain a family of tridiagonal ensembles,
known as the general-β ensembles [8]. Further, Edelman and Sutton
made a formal expansion of these operators, and observed that as n → ∞, the
tridiagonal operators appeared to converge to the stochastic Airy operator [11]:

    Hβ = −d²/dx² + x + (2/√β) ḃ,    0 < x < ∞,    (1.3.18)

with Dirichlet boundary conditions at x = 0. Here ḃ denotes (formally) white
noise (it is not hard to define Hβ rigorously).
Theorem 14 (Ramírez–Rider–Virág [29]). The spectrum σ(Hβ) of the operator
Hβ is almost surely a countably infinite number of eigenvalues μ1 < μ2 < μ3 <
· · ·. Moreover, σ(Hβ) has the same law as the Airyβ point process.

In particular, for β = 2, the spectrum of the stochastic Airy operator describes
the limiting fluctuations at the edge of the spectrum of GUE matrices.
Despite the simplicity of this characterization, it is not known how to recover the
explicit determinantal formulas of Tracy and Widom from this formulation.
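The operator Hβ is simple to discretize. The sketch below (the grid sizes and the noise discretization are our choices, not from the text) uses centered finite differences on [0, L] with Dirichlet conditions, replacing the white noise ḃ on each grid cell by an independent N(0, 1/h) sample. Switching the noise off recovers the deterministic operator −d²/dx² + x, whose smallest Dirichlet eigenvalue is the negative of the first zero of Ai, approximately 2.33811.

```python
import numpy as np

def stochastic_airy_min_eig(beta, L=10.0, h=0.02, rng=None):
    # Finite-difference model of H_beta = -d^2/dx^2 + x + (2/sqrt(beta)) b'
    # on [0, L], Dirichlet at both ends; white noise ~ N(0, 1/h) per cell.
    rng = rng or np.random.default_rng(0)
    x = np.arange(1, int(L / h)) * h
    n = len(x)
    noise = (2.0 / np.sqrt(beta)) * rng.standard_normal(n) / np.sqrt(h)
    H = (np.diag(2.0 / h**2 + x + noise)
         - np.diag(np.ones(n - 1) / h**2, 1)
         - np.diag(np.ones(n - 1) / h**2, -1))
    return np.linalg.eigvalsh(H)[0]

# Noise effectively off: the first Airy zero emerges.
print(stochastic_airy_min_eig(beta=1e16))  # approx 2.338

# beta = 2: a sample related to the Tracy-Widom GUE law.
print(stochastic_airy_min_eig(beta=2.0))
```

For β = 2, histogramming −μ1 over many independent runs reproduces the Tracy–Widom distribution, in line with Theorem 14.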
Chapter 2

Integration on spaces of matrices

In this section, we review the geometry of the classical Lie groups, as well as the
spaces Symm(n), Her(n) and Quart(n), and explain how to integrate over these
groups and spaces. Given a point on a manifold, M ∈ ℳ, we use dM to
denote the differential of M, i.e., an infinitesimal element of the tangent space
T_M ℳ at M. We reserve DM to refer to a (naturally induced) volume form
defined using an inner-product on the tangent space. Note that for x ∈ ℝ,
dx = Dx. Our main goal is the following.

Theorem 15 (Weyl's formula).

    DM = |Δ(Λ)|^β DΛ DU,    (2.0.1)

where Δ(Λ) is the Vandermonde determinant

    Δ(Λ) = (−1)^{n(n−1)/2} Π_{1≤j<k≤n} (λj − λk),    Λ = diag(λ1, . . . , λn),    (2.0.2)

DΛ is Lebesgue measure on ℝⁿ, and DU denotes (unnormalized) Haar measure
on O(n), and an appropriately defined measure on U(n)/Tⁿ and USp(n)/Tⁿ,
in the cases β = 1, 2 and 4 respectively.

The main strategy to prove Theorem 15 is to treat the mapping from ma-
trices with distinct eigenvalues to their eigenvalues and eigenvectors. Then we
identify the tangent spaces, and give a formula that relates the tangent space
for the eigenvalues and the tangent space for the eigenvectors to the tangent
space for the matrix. This formula allows one to change variables in the metric
tensor and therefore in the volume form.

Remark 16. It is common to normalize the Haar measure such that it is a


probability measure. We have ignored this constant here, though it is explored


in the exercises. The essential aspect of (2.0.1) is that the Jacobian for diagonalization
is given by |Δ(Λ)|^β. This has far-reaching consequences for random
matrix theory and has the interesting physical interpretation of eigenvalue
repulsion.
In what follows, we first present a detailed description of integration on O(n)
and Symm(n). The ideas are then extended to Her(n) and Quart(n).

2.1 Integration on O(n) and Symm(n)


All matrices in this section lie in Rn×n .
There is a natural volume form on each finite-dimensional inner-product
space of dimension p. For example, on ℝᵖ the standard inner product defines
the metric with infinitesimal length element ds² = Σ_{j=1}^p dxj² and the volume
form Dx = dx1 dx2 · · · dxp (we follow the notation of [39] for volume forms).
More generally, each g ∈ Symm+(p) defines an inner-product and metric on ℝᵖ:

    ⟨x, y⟩_g = Σ_{j,k=1}^p gjk xj yk,    ds² = Σ_{j,k=1}^p gjk dxj dxk.    (2.1.1)

The associated p-dimensional volume form is

    Dx = √(det(g)) dx1 · · · dxp.    (2.1.2)
The following calculation demonstrates why this choice of volume form,
stemming from an inner-product, is important. Let X and Y be two finite-dimensional
inner-product spaces (over ℝ) with inner-products ⟨·, ·⟩_X, ⟨·, ·⟩_Y,
and let F : X → Y be a (linear)¹ isometry, i.e. ⟨F x, F y⟩_Y = ⟨x, y⟩_X for
all x, y ∈ X. Assume the dimension of these spaces is p. Then select bases
(u_i)_{1≤i≤p} of X and (v_i)_{1≤i≤p} of Y. We then get the isomorphism

    T_X : X → ℝᵖ,    T_X x = [⟨x, u1⟩_X, ⟨x, u2⟩_X, . . . , ⟨x, up⟩_X]ᵀ,    (2.1.3)

and a similar isomorphism for Y. T_X becomes an isometry if we equip ℝᵖ with
the inner-product

    ⟨a, b⟩_{T_X} = Σ_{j,k=1}^p g^X_{jk} aj bk,    g^X_{jk} = ⟨uj, uk⟩_X.    (2.1.4)

Following the previous discussion, we arrive at two naturally induced volume
forms on ℝᵖ:

    Dx = √(det(g^X)) dx1 · · · dxp,    (2.1.5)
    Dy = √(det(g^Y)) dy1 · · · dyp.    (2.1.6)

¹It is not necessary to assume that the transformation is linear, but it takes more work to
prove that an isometry must be affine.



Now, let f : X → ℝ. We say that f is measurable if f ◦ T_X⁻¹ is Lebesgue
measurable, and define

    ∫_X f DX = ∫_{ℝᵖ} f(T_X⁻¹ x) √(det(g^X)) dx1 · · · dxp,    (2.1.7)

with a similar expression for g : Y → ℝ. One should check that this definition
is independent of the choice of basis. Now choose f(x) = g(F x). We find

    ∫_X f DX = ∫_{ℝᵖ} g(F T_X⁻¹ x) √(det(g^X)) dx1 · · · dxp.    (2.1.8)

If (u_i)_{1≤i≤p} is a basis for X then (F u_i)_{1≤i≤p} is a basis for Y, and we perform
all these calculations for this basis. Consider the transformation

    S : ℝᵖ → ℝᵖ,    Sx = T_Y F T_X⁻¹ x.    (2.1.9)

But, after careful consideration, we see that S = I and det(g^Y) = det(g^X).
Then, using the change of variables y = Sx, we find

    ∫_{ℝᵖ} g(F T_X⁻¹ x) √(det(g^X)) dx1 · · · dxp = ∫_{ℝᵖ} g(T_Y⁻¹ y) √(det(g^Y)) dy1 · · · dyp.    (2.1.10)

This establishes the change of variables formula

    ∫_X (g ◦ F) DX = ∫_Y g DY,    (2.1.11)

whenever F is an isometry from X to Y. In particular, this shows how the inner-product
structures on X and Y are converted to properties of volume.
The Lie group O(n) is the group, under composition, of linear transformations
of ℝⁿ that preserve the standard metric g = I. For each O ∈ O(n) and
each x ∈ ℝⁿ we must have (Ox)ᵀ(Ox) = xᵀx. Thus, O(n) is equivalent to
the group of matrices O such that OᵀO = I. The group operation is matrix
multiplication. It is easy to check that the group axioms are satisfied, but a
little more work is required to check that O(n) is a differentiable manifold, and
that the group operation is smooth.
We now introduce the natural volume forms on Symm(n) and O(n). We first
note that the space Symm(n) is isomorphic to ℝᵖ, p = n(n + 1)/2, via the map

    M ↦ ξ = (M11, . . . , Mnn, M12, . . . , M_{n−1,n}).    (2.1.12)

Thus, all that is needed to define integrals over Symm(n) is a choice of inner-product.
We will always use the Hilbert–Schmidt inner product

    Symm(n) × Symm(n) → ℝ,    (M, N) ↦ Tr(MᵀN) = Tr(MN).    (2.1.13)

The associated infinitesimal length element is

    ds² = Tr(dMᵀ dM) = Σ_{j=1}^n dMjj² + 2 Σ_{j<k} dMjk².    (2.1.14)

In ξ coordinates on ℝᵖ, the associated metric tensor g is diagonal and takes
the value 1 for the first n coordinates (diagonal terms), and the value 2 for
all the other coordinates (off-diagonal terms). Thus, the metric tensor g ∈
Symm+(p) has determinant 2^{n(n−1)/2}. We apply formula (2.1.2) to find the
following volume form on Symm(n):

    DM = 2^{n(n−1)/4} Π_{j=1}^n dMjj Π_{1≤j<k≤n} dMjk.    (2.1.15)

Each O ∈ O(n) defines a map Symm(n) → Symm(n), M ↦ OMOᵀ. This map
is an isometry of Symm(n) with the metric above. It is in this sense that (2.1.13)
is the natural inner-product. Since this map is an isometry, the volume element
DM is also invariant.
O(n) is a differentiable manifold. Thus, in order to define a volume form
on O(n), we must identify its tangent space T O(n), and then introduce an
inner-product on T O(n). Further, the 'natural' inner-product must be invariant
under the group operations. The tangent space at the identity to O(n), TI O(n),
is isomorphic to the Lie algebra o(n) of O(n). In order to compute o(n) we
consider smooth curves (−a, a) → O(n), a > 0, t ↦ Q(t) with Q(0) = I,
differentiate the equation Q(t)ᵀQ(t) = I with respect to t, and evaluate at
t = 0 to find

    Q̇(0)ᵀ = −Q̇(0).    (2.1.16)

Thus, each matrix in o(n) is antisymmetric. Conversely, given an antisymmetric
matrix A, the curve t ↦ e^{tA} gives a smooth curve in O(n) that is tangent to I
at t = 0. Thus,

    TI O(n) = o(n) = {A | A = −Aᵀ}.    (2.1.17)

The tangent space at an arbitrary O ∈ O(n) is obtained by replacing (2.1.16) with
the condition that OᵀȮ is antisymmetric. Thus,

    TO O(n) = {OA | A ∈ o(n)}.    (2.1.18)

Finally, given A, Ã ∈ o(n), we define their inner product ⟨A, Ã⟩ = Tr(AᵀÃ) =
−Tr(AÃ). This inner-product is natural, because it is invariant under left-translation.
That is, for two vectors OA, OÃ ∈ TO O(n) we find Tr((OA)ᵀ(OÃ)) =
Tr(AᵀÃ). The associated volume form on O(n) is called Haar measure. It is
unique, up to a normalizing factor, and we write

    DO = 2^{n(n−1)/4} Π_{1≤j<k≤n} dAjk.    (2.1.19)

Now let f : O(n) → ℝ be a bounded, measurable function. Define a neighborhood
of O ∈ O(n) by Bε(O) = {Õ ∈ O(n) : ‖O − Õ‖ < ε}. Then for
ε > 0 sufficiently small, we can find a diffeomorphism (i.e., a chart) φO : UO →
Bε(O) ⊂ O(n), UO open, satisfying

    0 ∈ UO ⊂ TO O(n),    φO(0) = O.    (2.1.20)

Then for such ε > 0 define

    ∫_{Bε(O)} f DO = 2^{n(n−1)/4} ∫_{φO⁻¹(Bε(O))} f(φO(A)) Π_{1≤j<k≤n} dAjk.    (2.1.21)

It can be verified that this is independent of the choice of φO. So, now consider
such a mapping at the identity, φI, and choose

    φO(A) = OφI(OᵀA).    (2.1.22)

We find

    ∫_{Bε(O)} f DO = 2^{n(n−1)/4} ∫_{OφI⁻¹(Bε(I))} f(OφI(OᵀA)) Π_{1≤j<k≤n} dAjk.    (2.1.23)

Comparing with (2.1.11), we use the fact that O furnishes an isometry from
TI O(n) to TO O(n), so that

    ∫_{Bε(O)} f DO = 2^{n(n−1)/4} ∫_{φI⁻¹(Bε(I))} f(OφI(A)) Π_{1≤j<k≤n} dAjk.    (2.1.24)

In particular, if we choose f ≡ 1, then ∫_{Bε(O)} DO does not depend on O ∈ O(n),
showing that this is indeed uniform measure on O(n).
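In computations one rarely works with charts; Haar measure on O(n) is usually sampled directly. A standard recipe (not part of the construction above): apply a QR factorization to a matrix of iid standard Gaussians and fix the sign ambiguity so that R has positive diagonal. A sketch:

```python
import numpy as np

def haar_orthogonal(n, rng=None):
    # QR of a Gaussian matrix; fixing the signs so that diag(R) > 0
    # removes the ambiguity of the factorization and yields a
    # Haar-distributed Q on O(n).
    rng = rng or np.random.default_rng(0)
    Z = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))

Q = haar_orthogonal(5)
print(np.allclose(Q.T @ Q, np.eye(5)))  # True
```

Rotation invariance of the Gaussian density (cf. (2.2.3) below) is what makes this work: the distribution of Z, and hence of Q, is unchanged by left multiplication by any fixed O ∈ O(n).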

2.2 Weyl’s formula on Symm(n)


Let us now recall some basic facts about Symm(n). Each matrix M ∈ Symm(n)
has n real eigenvalues and an orthonormal basis of real eigenvectors. We write
Λ for the matrix diag(λ1 , . . . , λn ) of eigenvalues, and Q for a matrix whose k-
th column is a normalized eigenvector of M associated to the eigenvalue λk ,
1 ≤ k ≤ n. Since the columns of Q are orthogonal and normalized to length 1,
it is immediate that Q ∈ O(n). Thus,

    MQ = QΛ and M = QΛQᵀ.    (2.2.1)

In what follows, we will view the transformation M 7→ (Λ, Q) as a change


of variables, from Symm(n) → Rn × O(n). Strictly speaking, this change of
variables is not well-defined since (2.2.1) is unaffected if we replace the k-th
column Qk of Q by −Qk . This issue is considered more carefully in Lemma 3
and Lemma 5 below. In a loose sense, diagonalization is analogous to polar
coordinates in Rn ,
x
Rn → [0, ∞) × S n−1 , x 7→ (r, u) , r = |x|, u = . (2.2.2)
r
Polar coordinates are natural for rotation invariant probability density on Rn .
For example, the standard Gaussian measure on Rn may be written
|x|2 r2
e− 2 Dx = Cn e− 2 r n−1 dr Du, (2.2.3)

where Du denotes the normalized (n − 1)-dimensional measure on S^{n−1} and Cn
is a universal constant. The factor r^{n−1} is the Jacobian of this transformation.
Weyl's formula shows that the Jacobian for (2.2.1) is |Δ(Λ)|. The proof of
Weyl's formula relies on an orthogonal decomposition of TM Symm(n).

Lemma 2. Let M have distinct eigenvalues. Then

    TM Symm(n) ≅ ℝⁿ ⊕ o(n),    (2.2.4)

and these subspaces are orthogonal.

Proof. We first assume that M = Λ is diagonal. Consider a smooth curve
(−a, a) → Symm(n), a > 0, t ↦ M(t) = Q(t)Λ(t)Q(t)ᵀ such that M(0) =
Λ(0) = Λ, and Q(0) = I. We differentiate² this expression with respect to t
and evaluate it at t = 0 to find the following expression for a tangent vector in
TΛ Symm(n):

    Ṁ = Λ̇ + [Q̇, Λ].    (2.2.5)

Here Λ̇ can be an arbitrary diagonal matrix, and Q̇ an arbitrary antisymmetric
matrix. By the assumption of distinct eigenvalues, given Ṁ and Λ, we have
Λ̇ = diagonal(Ṁ), and Q̇ is uniquely determined: Q̇jk = Ṁjk/(λk − λj) for
j ≠ k. Since the diagonal terms of [Q̇, Λ] vanish, the two summands in (2.2.5)
are orthogonal. Thus,

    TΛ Symm(n) ≅ ℝⁿ ⊕ o(n).    (2.2.6)

When M = QΛQᵀ is not diagonal, we consider a curve M(t) as above, with
M(0) = M, Λ(0) = Λ and Q(0) = Q. Now equation (2.2.5) is replaced by

    Ṁ = Q(Λ̇ + [QᵀQ̇, Λ])Qᵀ.    (2.2.7)

The matrices QᵀQ̇ are antisymmetric and span o(n). Again, Q̇ is uniquely
determined by Ṁ, Λ̇ and Λ. For arbitrary Λ̇ and A we find that M(t) := Qe^{tA}(Λ +
tΛ̇)e^{−tA}Qᵀ is a smooth curve in Symm(n) satisfying M(0) = M.

Remark 17. The proof of Lemma 2 reveals that all matrices of the form

    M + ∫_0^t Q(s)[Q(s)ᵀQ̇(s), Λ]Q(s)ᵀ ds    (2.2.8)

lie on an isospectral manifold – i.e. a manifold of matrices in Symm(n) with
the same spectrum as Λ. And if one makes the ansatz Q(t) = e^{tA} for an
antisymmetric matrix A, one has

    Ṁ = [A, M].    (2.2.9)
2 Differentiability is guaranteed by classical perturbation theory [21, Theorem 5.4].
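Equation (2.2.5) contains the classical first-order perturbation formula: reading off the diagonal in the eigenbasis of M gives λ̇j = (QᵀṀQ)jj. A numerical sketch of this fact (the step size and dimensions are our choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_symm(n):
    X = rng.standard_normal((n, n))
    return (X + X.T) / 2

n = 5
M, Mdot = rand_symm(n), rand_symm(n)
lam, Q = np.linalg.eigh(M)

# Predicted eigenvalue velocities along M + t*Mdot (cf. (2.2.5)).
predicted = np.diag(Q.T @ Mdot @ Q)

# Finite-difference velocities.
t = 1e-6
fd = (np.linalg.eigvalsh(M + t * Mdot) - lam) / t

print(np.max(np.abs(fd - predicted)))  # small
```

The agreement relies on the eigenvalues of M being distinct, exactly the hypothesis of Lemma 2.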
2.2. WEYL’S FORMULA ON SYMM(N ) 27

Proof of Weyl’s formula for β = 1. We now have two coordinate systems on


TM Symm(n) provided that the eigenvalues of M are distinct. We take up this
issue below and show that the set of all symmetric matrices with distinct eigen-
values is open. The coordinates ξα , 1 ≤ α ≤ p give the metric(2.1.14).
 The
second coordinate system, which is always locally defined, is Λ̇, Ȧ , where
Λ̇ is a diagonal matrix and Ȧ is an antisymmetric matrix. We use (2.2.7) to
find the infinitesimal length element in this coordinate system. On the subset
of Symm(n) consisting of matrices with distinct eigenvalues, using that M is
symmetric, and QT dQ = dA, A ∈ o(n),
Tr dM 2 = Tr(dM )T dM = Tr Q(dΛ + [dA, Λ])T (dΛ + [dA, Λ])QT
= Tr dΛ2 + 2 Tr dΛ[dA, Λ] + Tr[dA, Λ]2 (2.2.10)
2 2
= Tr dΛ + Tr[dA, Λ] .
Expanding out this last trace, we find
Tr[dA, Λ]2 = Tr(dAΛ − ΛdA)2
= Tr dAΛdAΛ + Tr ΛdAΛdA − Tr ΛdA2 Λ − Tr dAΛ2 dA
X n
n X X n
n X X n
n X
=2 dAjk dAkj λk λj − dAjk dAkj λ2j − dAjk dAkj λ2k
j=1 k=1 j=1 k=1 j=1 k=1
X
2
=2 (λj − λk ) dA2jk .
j<k
(2.2.11)
Therefore
n
X X
ds2 = Tr(dM 2 ) = dλ2j + 2 (λj − λk )2 dA2jk . (2.2.12)
j=1 1≤j<k≤n

Thus, the metric tensor in these coordinates is a diagonal matrix in Symm+ (p)
that takes the value 1 on the first n coordinates, and the value 2(λj − λk )2 for
each term Ajk . By (2.1.2), the volume form is
n
Y Y
DM = 2n(n−1)/4 dλj |λj − λk | dAjk = |4(Λ)| DΛ DO. (2.2.13)
j=1 1≤j<k≤n

To interpret Weyl’s formula, in a neighborhood UM of a matrix with distinct


eigenvalues, one needs to construct a well-defined map φ(M ) = (Λ, Q) from
symmetric matrices in this neighborhood to these “spectral” variables. Then
for f with compact support in UM
Z Z
f (M )DM = f (QΛQT )|4(Λ)|DΛDO. (2.2.14)
φ(UM )

We now work to understand how to define such a map, and why matrices with
repeated eigenvalues do not cause further issues.
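The factor |Δ(Λ)| in (2.2.13) is what produces eigenvalue repulsion. For 2 × 2 matrices this is visible in closed form: if the diagonal entries are N(0, 1) and the off-diagonal entry is N(0, 1/2) (our normalization of a Gaussian ensemble of the type in Section 2.4), then the gap s = λ2 − λ1 = √((M11 − M22)² + 4M12²) is Rayleigh-distributed with density (s/2)e^{−s²/4}: it vanishes linearly at s = 0 and has mean √π. A sketch:

```python
import math
import numpy as np

rng = np.random.default_rng(2)
trials = 200000

d1 = rng.standard_normal(trials)                # M11 ~ N(0, 1)
d2 = rng.standard_normal(trials)                # M22 ~ N(0, 1)
o = rng.standard_normal(trials) / math.sqrt(2)  # M12 ~ N(0, 1/2)
gaps = np.sqrt((d1 - d2) ** 2 + 4 * o ** 2)

print(gaps.mean(), math.sqrt(math.pi))  # both approx 1.772
print(np.mean(gaps < 0.05))             # near-degeneracies are rare
```

Without the Vandermonde factor (e.g., for independent diagonal matrices), small gaps would occur with positive density at 0; here they are suppressed linearly.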

2.3 Diagonalization as a change of coordinates

Some care is needed when treating the map M ↦ (Λ, Q) as a change of variables.
First, the map is not even well-defined in general, since the sign of each
normalized eigenvector is arbitrary. Second, even if we fix the signs, the choice
of eigenvectors is degenerate when M has repeated eigenvalues. Third, Λ is
not uniquely defined if we do not specify an ordering of the eigenvalues. The
following lemmas address these issues. Define the Weyl chamber

    Wⁿ = {Λ ∈ ℝⁿ | λ1 < λ2 < · · · < λn}.    (2.3.1)

Lemma 3. Assume M0 ∈ Symm(n) has distinct eigenvalues. Then there exists
ε > 0 such that for each s ∈ {±1}ⁿ, there is a C^∞ map

    h^{(s)} : Bε(M0) → Wⁿ × O(n),    M ↦ (Λ, Q^{(s)}),

that is a C^∞ diffeomorphism onto its image.


Proof of Lemma 3. An outline of the proof is presented. The remaining details
are left to the exercises.
Standard perturbation theory (see [21], for example) demonstrates that the
map is C^∞. The choice of s corresponds to fixing the signs of the eigenvectors
as follows. Let a basis of normalized eigenvectors of M0 be fixed. Call the
associated matrix of eigenvectors Q0. For each s, let Q0^{(s)} = Q0 diag(s1, . . . , sn).
Each Q0^{(s)} is also an eigenvector matrix for M0. Since the eigenvalues of M0
are distinct, we may use the implicit function theorem to solve the algebraic
equations that determine the eigenvalues and eigenvectors, in a way that is
consistent with the choice of s.
Lemma 4 (Wielandt–Hoffman inequality). Let M1, M2 ∈ Symm(n) and use
λj(Mi) to denote the j-th eigenvalue (in increasing order) of Mi. Then

    Σ_{j=1}^n |λj(M1) − λj(M2)|² ≤ ‖M1 − M2‖².

Proof. See [35, Section 1.3] for a particularly nice proof.


Lemma 5. Assume that M ∈ Symm(n) has a repeated eigenvalue. Then for
every ε > 0 there exists Mε ∈ Symm(n), such that kM − Mε k < ε and Mε
has distinct eigenvalues. Furthermore, the set of all matrices in Symm(n) with
distinct eigenvalues is open.
Proof. Exercise.
Lemma 3 shows that the map M ↦ (Λ, Q) provides a local coordinate
system near each matrix with distinct eigenvalues. Lemma 5 shows that the set of
such matrices is dense. More is true: the set of all matrices with both distinct
eigenvalues and non-vanishing first entries in their eigenvectors is of full measure.
This follows from (3.3.3) and Lemma 8 below; here one has to note that
the procedure of reducing a full matrix to a tridiagonal matrix that is used to
establish (3.3.3) does not affect the first row of the eigenvector matrix.

2.4 Independence and Invariance implies Gaussian
Fix M ∈ Symm(n) with spectrum σ(M). Fix an interval (a, b) ⊂ ℝ and let
Symm(n)_{(a,b)} denote the set of M ∈ Symm(n) with spectrum σ(M) ⊂ (a, b).
Each function f : (a, b) → ℝ extends naturally to a map Symm(n)_{(a,b)} →
Symm(n) as follows:

    f(M) = Qf(Λ)Qᵀ,    M = QΛQᵀ,    f(Λ) = diag(f(λ1), . . . , f(λn)).    (2.4.1)

Clearly, Tr(f(M)) = Tr(f(Λ)) = Σ_{j=1}^n f(λj). Each f : ℝ → ℝ that grows
sufficiently fast as x → ±∞ defines an invariant distribution on Symm(n):

    μ(DM) = (1/Z) exp(−Tr(f(M))) DM.    (2.4.2)

This is the most general form of an invariant probability distribution.
By contrast, a Wigner distribution relies on independence of the entries of
M. This means that if a Wigner distribution has a density, then it must be of
the form

    μ(DM) = (1/Z) Π_{j=1}^n fj(Mjj) Π_{1≤j<k≤n} fjk(Mjk) DM.    (2.4.3)

Theorem 18. Assume a probability measure μ on Symm(n) is both a Wigner
distribution and an invariant distribution. Assume further that μ(DM) has a
strictly positive, smooth density of the form (2.4.2) and (2.4.3). Then μ(DM)
is a Gaussian ensemble,

    μ(DM) = (1/Z) e^{−Tr(M − γI)²/(2σ²)} DM,    (2.4.4)

with variance σ² and mean γI, for some γ ∈ ℝ.

Proof. We first illustrate the essential calculation for 2 × 2 matrices. Suppose

    μ(DM) = p(M) DM = (1/Z) f(M11) g(M22) h(M12) dM11 dM12 dM22.    (2.4.5)

We compute the variation in μ along an isospectral curve (see Remark 17).
Consider the curve M(t) = Q(t)MQ(t)ᵀ with

    Q(t) = e^{tR},    R = ( 0  −1 ; 1  0 ).    (2.4.6)

The matrix R spans so(2). We differentiate M(t) with respect to t to obtain

    Ṁ(0) = [R, M] = ( −2M12  M11 − M22 ; M11 − M22  2M12 ).    (2.4.7)

Thus, the infinitesimal change in the density p(M(t)) is

    (1/p) dp/dt|_{t=0} = (f′(M11)/f(M11)) Ṁ11 + (g′(M22)/g(M22)) Ṁ22 + (h′(M12)/h(M12)) Ṁ12
                      = −2M12 (f′(M11)/f(M11) − g′(M22)/g(M22)) + (M11 − M22) h′(M12)/h(M12).    (2.4.8)

On the other hand, since μ(DM) is invariant, p(M(t)) = p(M) and

    dp/dt|_{t=0} = 0.    (2.4.9)

We equate (2.4.8) and (2.4.9), and separate variables to obtain

    (1/(M11 − M22)) (f′(M11)/f(M11) − g′(M22)/g(M22)) = c = (1/(2M12)) h′(M12)/h(M12),    (2.4.10)

for some constant c ∈ ℝ. Equation (2.4.10) immediately implies that

    h(M12) = h(0) e^{cM12²}.    (2.4.11)

Separating variables again in (2.4.10), we find, with a second constant b ∈ ℝ,

    f′/f = cM11 + b,    g′/g = cM22 + b,    (2.4.12)

which integrates to

    f(M11) = f(0) e^{cM11²/2} e^{bM11},    g(M22) = g(0) e^{cM22²/2} e^{bM22}.    (2.4.13)

We combine all the terms to obtain

    p(M) = f(0)g(0)h(0) e^{c Tr(M²)/2} e^{b Tr(M)}.    (2.4.14)

Since p(M) integrates to 1, we must have c < 0, say c = −1/σ². The scalar b is
arbitrary and contributes a shift in the mean that is a scalar multiple of I. The
combination of constants f(0)g(0)h(0) may be absorbed into the normalization
constant Z⁻¹. We have thus proved Theorem 18 for n = 2.
In order to prove Theorem 18 for arbitrary n we generalize the above argument
as follows. Fix a pair of off-diagonal indices 1 ≤ l < m ≤ n. We consider
a rotation in ℝⁿ that rotates the xl xm plane as above, and leaves the other coordinates
invariant. This entails replacing the matrix R in the argument above
with the matrix R^{lm} ∈ so(n) with coordinates R^{lm}_{jk} = δjl δkm − δjm δkl. The

argument above now shows that the density of p in the Mll, Mlm and Mmm
coordinates is a Gaussian distribution of the form (2.4.14):

    p(M^{lm}) = e^{c Tr((M^{lm})²)/2} e^{b Tr(M^{lm})},    (2.4.15)

where M^{lm} denotes the 2 × 2 matrix

    M^{lm} = ( Mll  Mlm ; Mlm  Mmm ).

At this stage, the constants c and b depend on l and m. But now note that
since the same argument applies to every pair of indices 1 ≤ l < m ≤ n, the
constants c and b must be independent of l and m.

2.5 Integration on Her(n) and U(n)


The space of Hermitian matrices Her(n) is a vector space of real dimension n²,
as may be seen from the isomorphism Her(n) → ℝ^{n²},

    M ↦ ξ = (M11, . . . , Mnn, Re M12, . . . , Re M_{n−1,n}, Im M12, . . . , Im M_{n−1,n}).    (2.5.1)

The Hilbert–Schmidt inner product on Her(n) is

    Her(n) × Her(n) → ℂ,    (M, N) ↦ Tr(M*N).    (2.5.2)

The associated infinitesimal length element is

    ds² = Tr(dM²) = Σ_{j=1}^n dMjj² + 2 Σ_{1≤j<k≤n} (d Re Mjk² + d Im Mjk²).    (2.5.3)

Thus, in the coordinates ξ, the metric is an n² × n² diagonal matrix whose first
n entries are 1 and all other entries are 2. We apply (2.1.2) to obtain the volume
form on Her(n):

    DM = 2^{n(n−1)/2} Π_{j=1}^n dMjj Π_{1≤j<k≤n} d Re Mjk d Im Mjk.    (2.5.4)

The unitary group U(n) is the group of linear isometries of ℂⁿ equipped
with the standard inner-product ⟨x, y⟩ = x*y. Thus, U(n) is equivalent to the
group of matrices U ∈ ℂ^{n×n} such that U*U = I. The inner-product (2.5.2) and
volume form (2.5.4) are invariant under the transformation M ↦ UMU*.
The Lie algebra u(n) is computed as in Section 2.1. We find

    u(n) = TI U(n) = {A ∈ ℂ^{n×n} | A = −A*},    TU U(n) = {UA | A ∈ u(n)}.    (2.5.5)
The transformation M ↦ iM is an isomorphism between Hermitian and anti-Hermitian
matrices. In fact, the map Her(n) → U(n), M ↦ e^{iM}, is onto and
locally one-to-one. The inner-product (A, Ã) ↦ Tr(A*Ã) is invariant under left
application of U(n). Thus, we obtain the volume form for Haar measure on U(n):

    DŨ = 2^{n(n−1)/2} Π_{j=1}^n dAjj Π_{1≤j<k≤n} d Re Ajk d Im Ajk.    (2.5.6)

However, when viewing diagonalization M ↦ UΛU* as a change of variables
on Her(n), it is necessary to quotient out the following degeneracy: for each
θ = (θ1, . . . , θn) ∈ Tⁿ, the diagonal matrix D = diag(e^{iθ1}, . . . , e^{iθn}) is unitary,
and M = UΛU* if and only if M = UDΛD*U*. Thus, for Her(n), the measure
DŨ must be replaced by a measure on U(n)/Tⁿ. The form of this measure on
U(n)/Tⁿ follows from the following assertion, which is proved as in Section 2.1.

Lemma 6. Each matrix Ṁ ∈ TM Her(n) is of the form

    Ṁ = U(Λ̇ + [U*U̇, Λ])U*,   Λ̇ ∈ TΛ ℝⁿ,  U̇ ∈ TU U(n),  diagonal(U*U̇) = 0.    (2.5.7)

The matrices Λ̇ and U*U̇ are orthogonal under the inner-product (2.5.2).

Thus, the volume form on the quotient U(n)/Tⁿ is locally equivalent to a
volume form on the subspace of matrices with zero diagonal:

    DU = 2^{n(n−1)/2} Π_{1≤j<k≤n} d Re Ajk d Im Ajk.    (2.5.8)

Furthermore, B ↦ φ(B) = Ue^{U*B} provides a locally one-to-one mapping from
PTU U(n) = U PTI U(n) to U(n)/Tⁿ.
Lemma 6 shows that the mapping ℝⁿ ⊕ PTI U(n) → TM Her(n), PTI U(n) =
{A ∈ TI U(n) | diagonal(A) = 0}, defined by (Λ̇, Ȧ) ↦ U(Λ̇ + [Ȧ, Λ])U*, maps onto
TM Her(n). Again, the two spaces are isomorphic if M has distinct eigenvalues.
Proof of Weyl’s formula for β = 2. We write, on the subset of Symm(n) con-
sisting of matrices with distinct eigenvalues, using that M is Hermitian, and
U ∗ dU = dA, A ∈ TI U(n), diag(A) = 0,
Tr dM 2 = Tr dΛ2 + 2 Tr dΛ[dA, Λ] + Tr[dA, Λ]∗ [dA, Λ]
(2.5.9)
= Tr dΛ2 + Tr[dA, Λ]∗ [dA, Λ].
Expanding out this last trace, using that dA = dReA + i dImA, we need only
collect the real part
Tr[dA, Λ]∗ [dA, Λ] = Tr(d ReA)Λ(d ReA)Λ + Tr Λ(d ReA)Λ(d ReA)
− Tr Λ(d ReA)2 Λ − Tr(d ReA)Λ2 (d ReA)
+ Tr(d ImA)Λ(d ImA)Λ + Tr Λ(d ImA)Λ(d ImA)
(2.5.10)
− Tr Λ(d ImA)2 Λ − Tr(d ImA)Λ2 (d ImA)
X X
=2 (λj − λk )2 dReA2jk + 2 (λj − λk )2 dImA2jk .
j<k j<k
2.6. INTEGRATION ON QUART(N ) AND USP(N ) 33

Then it follows that the associated volume form satisfies

DM = |4(Λ)|2 DΛDU. (2.5.11)

2.6 Integration on Quart(n) and USp(n)


The field of quaternions, ℍ, is the linear space of elements

    x = c0 + c1e1 + c2e2 + c3e3,    ci ∈ ℝ, i = 0, 1, 2, 3,    (2.6.1)

equipped with the non-commutative rules of multiplication

    e1² = e2² = e3² = e1e2e3 = −1.    (2.6.2)

These rules ensure that the product of any two quaternions is again a quaternion.
Each x ∈ ℍ has a complex conjugate x̄ = c0 − c1e1 − c2e2 − c3e3, and its absolute
value |x| is determined by

    |x|² = x̄x = c0² + c1² + c2² + c3².    (2.6.3)

Each non-zero x ∈ ℍ has a multiplicative inverse 1/x = x̄/|x|². Thus, ℍ is
indeed a (skew) field.
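The rules (2.6.2) determine every product: multiplying e1e2e3 = −1 on the left and right by the units gives e1e2 = e3, e2e3 = e1, e3e1 = e2, with a sign change for reversed order. A sketch of quaternion arithmetic as 4-tuples (c0, c1, c2, c3) (the helper functions are ours), checking the inverse formula and the multiplicativity of the absolute value:

```python
import math

def qmul(a, b):
    # Hamilton product of a = a0 + a1 e1 + a2 e2 + a3 e3 with b.
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def qconj(a):
    return (a[0], -a[1], -a[2], -a[3])

def qabs(a):
    return math.sqrt(sum(c * c for c in a))

e1, e2, e3 = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(qmul(e1, e2), e3))  # (-1, 0, 0, 0): the rule e1 e2 e3 = -1

x, y = (1.0, 2.0, -1.0, 0.5), (0.0, 1.0, 3.0, -2.0)
print(qabs(qmul(x, y)), qabs(x) * qabs(y))  # equal: |xy| = |x||y|

inv = tuple(c / qabs(x) ** 2 for c in qconj(x))
print(qmul(x, inv))  # approx (1, 0, 0, 0)
```

The identity |xy| = |x||y| is what fails for products of more than four squares, and is the reason ℍ carries a norm compatible with its multiplication.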
The normed linear vector space ℍⁿ consists of vectors x = (x1, . . . , xn)ᵀ with
inner product ⟨x, y⟩ = Σ_{j=1}^n x̄j yj. The adjoint, M†, of a linear transformation
M : ℍⁿ → ℍⁿ is defined by the inner-product

    ⟨M†x, y⟩ := ⟨x, My⟩.    (2.6.4)

It follows that the entries of M† are (M†)jk = M̄kj. We say that an operator is
self-adjoint if M = M†. It is anti self-adjoint if M = −M†. The space of self-adjoint
operators is denoted Quart(n). We equip this space with the Hilbert–Schmidt
norm as before.
The group USp(n) is the set of linear transformations of ℍⁿ that preserve
this inner product. We thus require that for each x, y ∈ ℍⁿ

    ⟨x, y⟩ = ⟨Ux, Uy⟩ = ⟨U†Ux, y⟩.    (2.6.5)

Thus, USp(n) is equivalent to the set of U ∈ ℍ^{n×n} such that U†U = I. As for
U(n), we find that its Lie algebra usp(n) is the space of anti self-adjoint matrices.
The inner-product on usp(n) and Haar measure are defined exactly as in Section 2.5,
as is the analogue of Lemma 6 and the Weyl formula. It is also clear from how the
proof of Weyl's formula extends to β = 2 that, because the field of quaternions
is a four-dimensional space over ℝ, the factor |Δ(Λ)|⁴ will arise; see (2.5.10).

Exercises

2.1. Show that

$$\Delta(\Lambda) = \det \begin{pmatrix} 1 & \dots & 1 \\ \lambda_1 & \dots & \lambda_n \\ \vdots & & \vdots \\ \lambda_1^{n-1} & \dots & \lambda_n^{n-1} \end{pmatrix}. \qquad (2.6.6)$$
2.2. The Pauli matrices,

$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad (2.6.7)$$

allow a representation of the quaternions in terms of Hermitian matrices.

(a) Show that the Pauli matrices together with the identity matrix span Her(2).

(b) Show that the matrices {iσ_1, iσ_2, iσ_3} form a basis of su(2). (This is the subalgebra of u(2) consisting of trace-free matrices.)

(c) Verify that if e_j = −iσ_j, the rules (2.6.2) hold (replace 1 by I_2).
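Part (c) can be checked numerically. The sketch below (assuming NumPy) uses the sign choice e_j = −iσ_j, for which both e_j² = −1 and e₁e₂e₃ = −1 hold:

```python
import numpy as np

# Pauli matrices of (2.6.7)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# quaternion units represented as 2x2 complex matrices
e1, e2, e3 = -1j * s1, -1j * s2, -1j * s3
for e in (e1, e2, e3):
    assert np.allclose(e @ e, -I2)        # e_j^2 = -1
assert np.allclose(e1 @ e2 @ e3, -I2)     # e_1 e_2 e_3 = -1
assert np.allclose(e1 @ e2, e3)           # Hamilton's relation ij = k
```

With this representation, a quaternion c₀ + c₁e₁ + c₂e₂ + c₃e₃ becomes c₀I₂ + c₁e₁ + c₂e₂ + c₃e₃, and |x|² = x̄x corresponds to the determinant of that matrix.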

2.3. The canonical symplectic matrix of size 2n × 2n, denoted J_n or simply J, is the matrix

$$J = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}, \qquad (2.6.8)$$

where 0 and I denote the n × n zero and identity matrices. The symplectic group Sp(2n, R) (not to be confused with the unitary symplectic group USp(n)!) is

$$\mathrm{Sp}(2n, \mathbb{R}) = \left\{ S \in \mathbb{R}^{2n \times 2n} \mid S^T J S = J \right\}. \qquad (2.6.9)$$

Verify that Sp(2n, R) is a group and compute its Lie algebra sp(2n, R).
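Before proving anything, one can experiment numerically. The sketch below (the shear S and the block-diagonal A are illustrative choices, not part of the exercise) checks the defining relation (2.6.9) and the infinitesimal condition AᵀJ + JA = 0 obtained by differentiating (e^{tA})ᵀ J e^{tA} = J at t = 0:

```python
import numpy as np

n = 2
I, Z = np.eye(n), np.zeros((n, n))
J = np.block([[Z, I], [-I, Z]])            # canonical symplectic matrix (2.6.8)

B = np.array([[1.0, 2.0], [2.0, -3.0]])    # any symmetric n x n block
S = np.block([[I, B], [Z, I]])             # a shear; symplectic because B = B^T
assert np.allclose(S.T @ J @ S, J)         # S satisfies (2.6.9)

# Infinitesimal version: A^T J + J A = 0 characterizes the Lie algebra sp(2n, R)
A = np.block([[B, Z], [Z, -B.T]])
assert np.allclose(A.T @ J + J @ A, np.zeros((2 * n, 2 * n)))
```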
2.4. Use the Gaussian integral

$$\int_{\mathbb{R}^n} e^{-\frac{|x|^2}{2}} \, dx_1 \dots dx_n$$

to compute the (n − 1)-dimensional volume ω_{n−1} of the unit sphere S^{n−1}. Determine the asymptotic behavior of ω_{n−1} as n → ∞.
Hint: Do the integral two ways: once in Cartesian coordinates, and once in polar coordinates.
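As a numerical sanity check for this exercise (a sketch; the closed form ω_{n−1} = 2π^{n/2}/Γ(n/2) is what equating the two evaluations of the integral yields):

```python
import math

def sphere_area(n):
    """(n-1)-dimensional volume of the unit sphere S^{n-1} in R^n, from
    equating the Cartesian and polar evaluations of the Gaussian integral:
    (2 pi)^{n/2} = omega_{n-1} * 2^{n/2 - 1} * Gamma(n/2)."""
    return 2.0 * math.pi ** (n / 2) / math.gamma(n / 2)

assert abs(sphere_area(2) - 2 * math.pi) < 1e-12   # circumference of S^1
assert abs(sphere_area(3) - 4 * math.pi) < 1e-12   # surface area of S^2
assert sphere_area(50) < 1e-5                       # omega_{n-1} -> 0 as n -> infinity
```

The rapid decay for large n, governed by Stirling's formula for Γ(n/2), is the asymptotic behavior the exercise asks for.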
2.5. Assume given a C¹ function f : (a, b) → R, and extend it to a function f : Symm(n) → Symm(n) as in (2.4.1). Compute the Jacobian of this transformation. Apply this formula to the function f(x) = e^{ix} to compute the analogue of Weyl's formula on U(n) (note that each U ∈ U(n) is of the form e^{iM} for some M ∈ Her(n)).
2.6. Prove Lemma 4.

2.7. Let A ∈ Rm×n for m < n. Show that {x | Ax = 0} ⊂ Rn has zero Lebesgue
measure.
2.8. Assume f : R → (0, ∞) satisfies the functional equation

f (x + y) = f (x)f (y), x, y ∈ R. (2.6.10)

It is easy to check that for each a ∈ R functions of the form f (x) = eax
solve (2.6.10). Show that these are the only solutions to (2.6.10) assuming
only that f is continuous. (Do not assume that f is differentiable).
Remark 19. The use of row operations in Problem (1) underlies the introduction of orthogonal polynomials. Problems (2) and (3) may be combined to show that Sp(2n, C) ∩ U(2n) ≅ USp(n). The approach in Problem (4) yields the volume of O(n), U(n) and USp(n) when applied to GOE, GUE and GSE. The assumptions of Problem (7) may be weakened further: measurability is enough! You could try to develop a similar approach for the functional equation implicit in the proof of Theorem 18. That is, can you establish a stronger form of Theorem 18 that does not assume differentiability?
2.9. Show that the mapping A ↦ (I − A)(A + I)^{−1} from o(n) to O(n) maps a neighborhood of 0 bijectively onto a neighborhood of the identity. Construct an atlas of O(n) using this mapping.
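A minimal numerical illustration of this Cayley-type map (a sketch with a randomly chosen antisymmetric A):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 4))
A = X - X.T                                   # A in o(n): antisymmetric
n = A.shape[0]
# I + A is invertible since the eigenvalues of A are purely imaginary
Q = (np.eye(n) - A) @ np.linalg.inv(A + np.eye(n))
# (I - A) and (I + A)^{-1} commute, so Q^T Q = I and Q is orthogonal
assert np.allclose(Q.T @ Q, np.eye(n), atol=1e-10)
assert abs(abs(np.linalg.det(Q)) - 1) < 1e-10
```

The key identity is that (I − A) and (I + A) commute, so QᵀQ = (I − A)^{−1}(I + A)(I − A)(I + A)^{−1} = I.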
2.10. Using the Submersion Theorem [5, Proposition 3.42] (also called the Regular Value Theorem) show that O(n) is a smooth manifold.
Hint: Consider φ : R^{n×n} → Symm(n) defined by φ(X) = X^T X. Then show that I is a regular value and therefore φ^{−1}(I) = O(n) is a smooth manifold.
Chapter 3

Jacobi matrices and tridiagonal ensembles

3.1 Jacobi ensembles

The space of real n × n tridiagonal matrices is denoted Tridiag(n). A typical matrix in Tridiag(n) is written

$$T = \begin{pmatrix} a_1 & b_1 & 0 & \dots & 0 \\ b_1 & a_2 & b_2 & & 0 \\ 0 & b_2 & a_3 & \ddots & \vdots \\ \vdots & & \ddots & \ddots & b_{n-1} \\ 0 & 0 & \dots & b_{n-1} & a_n \end{pmatrix}. \qquad (3.1.1)$$

Jacobi matrices, and their closure within the space Tridiag(n), are the manifolds

$$\mathrm{Jac}(n) = \{ T \in \mathrm{Tridiag}(n) \mid b_j > 0, \ 1 \le j \le n-1 \}, \qquad (3.1.2)$$
$$\overline{\mathrm{Jac}}(n) = \{ T \in \mathrm{Tridiag}(n) \mid b_j \ge 0, \ 1 \le j \le n-1 \}.$$

Jacobi matrices, or more generally Jacobi operators, are of fundamental importance in spectral theory. A self-adjoint operator K on a Hilbert space can be decomposed using its cyclic subspaces. On each of these cyclic subspaces an orthonormal basis for span{K^j x | j = 0, 1, 2, …} can be found, and the operator K becomes tridiagonal in this basis. This idea is used by the conjugate gradient algorithm [16]. Jacobi matrices also play an important role in approximation theory, the theory of orthogonal polynomials, and more widely in numerical linear algebra. An essential step in the symmetric eigenvalue problem is the reduction of a full symmetric matrix to an isospectral tridiagonal matrix (tridiagonalization) by a sequence of orthogonal reflections. Under this procedure, the Gaussian ensembles push forward to ensembles of tridiagonal matrices whose laws have the following simple description.


Definition 20 (Dumitriu–Edelman [8]). For each β > 0, the Hermite(β) ensemble consists of T ∈ Tridiag(n) such that a_k, 1 ≤ k ≤ n, are iid normal random variables with mean zero and variance 2/β, and b_k, 1 ≤ k ≤ n − 1, are independent χ_{(n−k)β}(1/β) random variables.

The density for χ_k(σ²) is supported on [0, ∞) and is proportional to

$$t^{k-1} e^{-\frac{t^2}{2\sigma^2}}.$$
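Definition 20 translates directly into a sampler. The sketch below assumes NumPy, and realizes a χ_k(1/β) variable as 1/√β times a χ_k variable:

```python
import numpy as np

def sample_hermite_beta(n, beta, rng=None):
    """Draw T ~ Hermite(beta) as in Definition 20: diagonal entries
    a_k ~ N(0, 2/beta), off-diagonal entries b_k ~ chi_{(n-k)beta}(1/beta)."""
    rng = np.random.default_rng() if rng is None else rng
    a = rng.normal(0.0, np.sqrt(2.0 / beta), size=n)
    dof = beta * np.arange(n - 1, 0, -1)        # (n-k)*beta for k = 1, ..., n-1
    b = np.sqrt(rng.chisquare(dof) / beta)      # chi variable with sigma^2 = 1/beta
    return np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
```

One practical payoff of the tridiagonal model: eigenvalues of a symmetric tridiagonal matrix can be computed far more cheaply than for a full matrix, so large-n experiments with general β become feasible.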

The point here is that the Hermite(β) ensembles are the push-forwards of the Gaussian ensembles when β = 1, 2 or 4. They thus interpolate Dyson's classification of ensembles to every β > 0. When combined with classical spectral theory, they provide a distinct, and important, perspective on the limit theorems of random matrix theory. Our immediate goal in this chapter is the following.

Theorem 21. Fix β > 0 and assume T ∼ Hermite(β). Then the marginal distribution of its eigenvalues is

$$p_{\mathrm{Hermite}(\beta)}(\Lambda)\, D\Lambda = \frac{1}{Z_{n,\beta}} e^{-\frac{\beta}{4}\operatorname{Tr}(\Lambda^2)} |\Delta(\Lambda)|^{\beta} \, D\Lambda. \qquad (3.1.3)$$
The chapter concludes with a more refined version of Theorem 21 that includes the distribution of the spectral measure of matrices T ∼ Hermite(β).

3.2 Householder tridiagonalization on Symm(n)


Each M ∈ Symm(n) may be diagonalized M = QΛQ^T. However, the computation of Λ depends on solving the characteristic equation det(zI − M) = 0. For n ≥ 5, there is no general closed-form solution for the roots of the characteristic polynomial¹. Nevertheless, every matrix always admits the following reduction.
Theorem 22. For every M ∈ Symm(n) there exists a tridiagonal matrix T and Q ∈ O(n) such that

$$M = Q T Q^T. \qquad (3.2.1)$$

The transformation (3.2.1) is given by a change of variables

$$\mathrm{Symm}(n) \to \mathrm{Jac}(n) \times S^{n-2} \times S^{n-3} \times \dots \times S^1, \qquad (3.2.2)$$

under which the volume form DM on Symm(n) transforms as follows:

$$DM = C_n \prod_{j=1}^{n} da_j \prod_{k=1}^{n-1} b_k^{(n-k)-1} \, db_k \prod_{l=1}^{n-2} D\omega_l, \qquad (3.2.3)$$

where Dω_l denotes uniform measure on the sphere S^l, and C_n is a universal constant.

¹Practical numerical schemes for eigenvalue decomposition are unaffected by this algebraic obstruction, since they rely on iteration.



The set of attainable matrices Q is given by a mapping

$$h : S^{n-2} \times S^{n-3} \times \dots \times S^1 \to O(n). \qquad (3.2.4)$$

This mapping is given below, explicitly in terms of Householder reflections. As the dimension of the domain of this mapping is less than ½n(n − 1), the dimension of O(n), not all matrices in O(n) are attainable.
Remark 23. The space Tridiag(n) clearly inherits the inner product Tr(T²) = ∑_{j=1}^{n} a_j² + 2∑_{j=1}^{n−1} b_j² from Symm(n). However, the volume form obtained from this metric is not the same as the volume form in (3.2.3) above.
Remark 24. (For algebraists!) The proof will also show that T and Q may be
computed with a finite number of the following algebraic operations: addition,
multiplication and square-roots.
Definition 25. Suppose v ∈ Rn is a unit vector. The Householder reflection
in v is the matrix
Pv = I − 2vv T . (3.2.5)
Lemma 7. The matrix Pv has the following properties:
(a) Pv2 = I.
(b) Pv ∈ O(n).
Proof. Decompose Rn into the orthogonal subspaces span{v} and v ⊥ . Then
Pv v = −v and Pv |v⊥ = I. Thus, Pv2 = I. This proves (a). By construction
PvT = Pv . Thus, by (a), we also have PvT Pv = I.
Proof of Theorem 22. 1. The proof relies on a sequence of Householder reflections that progressively introduce zeros in a sequence of matrices similar to M. The first such matrix is the following. Let w₁ = (M_{21}, …, M_{n1})^T ∈ R^{n−1} denote the last n − 1 entries of the first column of M. If the first coordinate of w₁ is non-negative and all other coordinates vanish, there is nothing to do. If not, we may choose a Householder reflection (in R^{n−1}) that maps w₁ to |w₁|e₁^{(n−1)} (here the superscript (n−1) denotes that we consider the basis vector e₁ ∈ R^{n−1}). Geometrically, such a reflection is obtained by choosing v₁ to be the unit vector that lies in between w₁ and |w₁|e₁^{(n−1)}. Explicitly, we set²

$$\tilde{v}_1 = |w_1| e_1^{(n-1)} - w_1, \qquad v_1 = \frac{\tilde{v}_1}{|\tilde{v}_1|}, \qquad P^{(1)} = P_{v_1}. \qquad (3.2.6)$$

By Lemma 7, P^{(1)} ∈ O(n − 1) is a Householder reflection that maps w₁ to |w₁|e₁^{(n−1)}. It may be extended to a Householder reflection in O(n) by defining

$$Q^{(1)} = \begin{pmatrix} 1 & 0 \\ 0 & P^{(1)} \end{pmatrix}. \qquad (3.2.7)$$

²If one is using this method numerically and |ṽ₁| is small, instabilities can be introduced. In this case one should use −|w₁|e₁^{(n−1)} − w₁.

Then the matrix

$$M^{(1)} := Q^{(1)} M \left(Q^{(1)}\right)^T = Q^{(1)} M Q^{(1)} \qquad (3.2.8)$$

is similar to M. By construction, the first row of M^{(1)} is (M_{11}, |w_1|, 0, \dots, 0), and the first column is (M_{11}, |w_1|, 0, \dots, 0)^T. Thus, we may write

$$M^{(1)} = \begin{pmatrix} T^{(1)} & |w_1| \left( e_1^{(n-1)} \right)^T \\ |w_1| e_1^{(n-1)} & N^{(1)} \end{pmatrix}, \qquad (3.2.9)$$

where T^{(1)} is a (trivial) 1 × 1 tridiagonal matrix and N^{(1)} ∈ Symm(n − 1).

2. The proof is completed by induction. Assume that M^{(k)} ∈ Symm(n) has the form

$$M^{(k)} = \begin{pmatrix} T^{(k)} & |w_k| \left( e_1^{(n-k)} \right)^T \\ |w_k| e_1^{(n-k)} & N^{(k)} \end{pmatrix}, \qquad (3.2.10)$$

where T^{(k)} ∈ Tridiag(k) and N^{(k)} ∈ Symm(n − k), 1 ≤ k ≤ n − 1. We apply the procedure of step 1 to N^{(k)} to obtain a vector v_k, a Householder reflection P^{(k)} = P_{v_k}, and an orthogonal transformation of M^{(k)},

$$Q^{(k)} = \begin{pmatrix} I_k & 0 \\ 0 & P^{(k)} \end{pmatrix} \in O(n), \qquad M^{(k+1)} = Q^{(k)} M^{(k)} Q^{(k)}. \qquad (3.2.11)$$

Note that Q^{(k)} leaves the first k rows and columns of M^{(k)} unchanged; thus it does not destroy the tridiagonal structure of the first k rows and columns. Hence, M^{(k+1)} has the form (3.2.10) with the index k replaced by k + 1. The procedure terminates when k = n − 2, and yields

$$M = Q T Q^T, \qquad Q = Q^{(n-2)} Q^{(n-3)} \dots Q^{(1)}. \qquad (3.2.12)$$

3. It is simplest to prove (3.2.3) probabilistically. Informally, the k-th step of the procedure above is a change to polar coordinates in R^{n−k}, with b_k ≥ 0 playing the role of the radius, and the factor b_k^{n−k−1} db_k Dω_{n−1−k} being the push-forward of Lebesgue measure in R^{n−k} to polar coordinates. More precisely, assume that M ∼ GOE(n). We note that the first step of the above procedure leaves M_{11} alone. Thus, a₁ = M_{11} ∼ N(0, 1). Moreover, the term b₁ is the length of the first column of M, not including the diagonal term M_{11}. Since a χ²_m random variable has the same law as the squared length of a vector in R^m whose entries are iid N(0, 1) random variables, we see that b₁ ∼ χ_{n−1}. Further, the vector ω₁ = w₁/|w₁| is uniformly distributed on S^{n−2} and independent of both a₁ and b₁ (see Exercise 3.1). We next observe that by the independence and invariance of the Gaussian ensembles, the matrix N^{(1)} in (3.2.9) has law GOE(n − 1). Indeed, M̃₁, the lower-right (n − 1) × (n − 1) block of M, is a GOE(n − 1) matrix, and the reflection P^{(1)} is independent of M̃₁. Thus, N^{(1)} = P^{(1)} M̃₁ P^{(1)} has law GOE(n − 1) and is independent of b₁, a₁ and ω₁ (see Exercise 3.2). Thus, a₂ ∼ N(0, 1) and b₂ ∼ χ_{n−2}. An obvious induction now shows that if M ∼ GOE then T ∼ Hermite(1), and the vectors ω_k = w_k/|w_k| are uniformly distributed on S^{n−1−k}, 1 ≤ k ≤ n − 2. Comparing the two laws, we find (with β = 1)

$$e^{-\frac{\beta \operatorname{Tr}(M^2)}{2}} DM = C_n e^{-\frac{\beta \operatorname{Tr}(T^2)}{2}} \prod_{j=1}^{n} da_j \prod_{k=1}^{n-1} b_k^{n-k-1} \, db_k \prod_{l=1}^{n-2} D\omega_l. \qquad (3.2.13)$$

The exponential weights cancel, and yield the Jacobian formula (3.2.3).
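The constructive proof above is easy to transcribe into code. The following sketch (plain NumPy, without the scaling safeguards of footnote 2) reduces a symmetric matrix by the reflections of steps 1 and 2:

```python
import numpy as np

def tridiagonalize(M):
    """Reduce M in Symm(n) to tridiagonal form by the Householder
    reflections of the proof of Theorem 22 (a sketch, not a numerically
    hardened routine)."""
    M = np.array(M, dtype=float)
    n = M.shape[0]
    Q = np.eye(n)
    for k in range(n - 2):
        w = M[k + 1:, k]
        vt = np.linalg.norm(w) * np.eye(n - k - 1)[:, 0] - w   # v~ = |w| e1 - w
        if np.linalg.norm(vt) < 1e-14 * (1 + np.linalg.norm(w)):
            continue                       # w is already |w| e1: nothing to do
        v = vt / np.linalg.norm(vt)
        Qk = np.eye(n)
        Qk[k + 1:, k + 1:] -= 2.0 * np.outer(v, v)   # extend P_v to O(n)
        M = Qk @ M @ Qk                               # similar matrix M^{(k+1)}
        Q = Q @ Qk
    return M, Q
```

The routine returns (T, Q) with M = Q T Qᵀ, so the spectrum is untouched while all entries below the first subdiagonal are annihilated.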

3.3 Tridiagonalization on Her(n) and Quart(n)


Theorem 22 admits a natural extension to Her(n) and Quart(n).
Theorem 26. For every M ∈ Her(n) (resp. Quart(n)) there exists a tridiagonal matrix T ∈ Jac(n) and Q ∈ U(n) (resp. USp(n)) such that

$$M = Q T Q^*. \qquad (3.3.1)$$

The transformation (3.3.1) is given by a change of variables

$$\mathrm{Her}(n) \to \mathrm{Jac}(n) \times S_{\mathbb{F}}^{n-2} \times S_{\mathbb{F}}^{n-3} \times \dots \times S_{\mathbb{F}}^1, \qquad (3.3.2)$$

where S_F^l denotes the unit sphere in F^l, with F = C (resp. H). The volume form DM on Her(n) (resp. Quart(n)) transforms as follows:

$$DM = C_n \prod_{j=1}^{n} da_j \prod_{k=1}^{n-1} b_k^{\beta(n-k)-1} \, db_k \prod_{l=1}^{n-2} D\omega_l, \qquad (3.3.3)$$

where Dω_l denotes uniform measure on the sphere S_F^l, and C_n is a universal constant.
For a vector w ∈ C^n with independent standard normal complex entries, w_j ∼ (1/√2)(N₁ + iN₂), where N₁, N₂ ∼ N(0, 1) are independent, one has |w| ∼ (1/√2)χ_{2n}. For a quaternion vector w, one finds |w| ∼ (1/2)χ_{4n}. So, β is introduced in this way.
Remark 27. Note that the matrix T is always real, whereas the entries of M
and Q are in C or H.
The proof of Theorem 26 is in the same vein as that of Theorem 22. It is only necessary to replace the Householder reflections in O(n) with reflections in U(n) and USp(n). For example, given v ∈ C^n with |v| = 1, the associated Householder reflection in U(n) is P_v = I − 2vv*. Step 3 in the proof of Theorem 26 also explains the role of the parameter β in the definition of the Hermite(β) ensembles. The k-th step of the Householder transformation maps a standard Gaussian vector in C^{n−k} to its magnitude and direction. The law of the magnitude is now χ_{2(n−k)} (or χ_{β(n−k)} with β = 2). Similarly, the direction of the Gaussian vector is uniformly distributed on the unit sphere in C^{n−k}.

3.4 Inverse spectral theory for Jacobi matrices


Bounded Jacobi operators admit a complete and beautiful spectral theory that is intimately tied to orthogonal polynomials and continued fractions. We first introduce this theory for finite Jacobi matrices, since it underlies Theorem 21. As usual, write

$$T = Q \Lambda Q^T, \qquad Q \in O(n), \qquad (3.4.1)$$

for the diagonalization of T. We also recall the Weyl chamber

$$W^n = \{ \Lambda \in \mathbb{R}^n \mid \lambda_1 < \lambda_2 < \dots < \lambda_n \}. \qquad (3.4.2)$$

For each Λ ∈ W^n, its isospectral manifold is the set

$$M_\Lambda = \{ T \in \mathrm{Jac}(n) \mid T = Q \Lambda Q^T \ \text{for some } Q \in O(n) \}. \qquad (3.4.3)$$

The following theorem shows that the interior of the isospectral manifold is diffeomorphic to the positive orthant S_+^{n−1} = {u ∈ R^n | |u| = 1, u_j > 0, j = 1, 2, …, n} of the unit sphere. Given T, we uniquely define Q by forcing the first non-zero entry in each column to be positive.

Theorem 28. The spectral mapping

$$S : \mathrm{Jac}(n) \to W^n \times S_+^{n-1}, \qquad T \mapsto (\Lambda, Q^T e_1), \qquad (3.4.4)$$

is an analytic diffeomorphism.

We prove this in stages below. See Figure 3.4.1.


The isospectral manifold admits several distinct parametrizations. First, it is clear that we could use the simplex Σ_n instead of the orthant S_+^{n−1}. Indeed, let u = Q^T e₁ denote the first row of the matrix of eigenvectors and define c_k = u_k², 1 ≤ k ≤ n. Since Q ∈ O(n), ∑_{k=1}^{n} u_k² = 1. Thus, u ∈ S^{n−1} and c ∈ Σ_n. But we shall use S_+^{n−1}. Lemma 8 below shows that u_k can be chosen to be strictly positive, which allows us to restrict attention to the positive orthant S_+^{n−1}.

Theorem 28 may also be viewed as a mapping to the spectral measure

$$T \mapsto \mu = \sum_{j=1}^{n} u_j^2 \, \delta_{\lambda_j} = \sum_{j=1}^{n} c_j \, \delta_{\lambda_j}. \qquad (3.4.5)$$

It is often more convenient to work with the Cauchy transform of the spectral measure μ. Define the τ-function,

$$\mu \mapsto \tau(z) = \int_{\mathbb{R}} \frac{1}{x - z} \, \mu(dx) = \sum_{j=1}^{n} \frac{u_j^2}{\lambda_j - z}, \qquad z \in \mathbb{C} \setminus \{\lambda_1, \dots, \lambda_n\}. \qquad (3.4.6)$$

The inverse τ ↦ μ is obtained by computing the poles and residues of τ.



[Figure 3.4.1: a schematic diagram relating T ∈ Jac(n), its entries {a_j}_{j=1}^{n} and {b_j}_{j=1}^{n−1}, the spectral data (Λ, Q^T e₁), the spectral measure ∑_j u_j² δ_{λ_j}, and the monic orthogonal polynomials {π_j} via the three-term recurrence.]
Figure 3.4.1: The construction of the spectral map and its inverse. The transformation to spectral variables is computed by computing eigenvalues and taking the first components of the (normalized) eigenvectors. Then a spectral measure (3.4.5) is created from this data and is used to define monic orthogonal polynomials (3.4.16). These polynomials satisfy a three-term recurrence relation (see Lemma 11), and the coefficients in the relation allow for the (unique) reconstruction of T; see (3.4.21). This shows the spectral map from Jac(n) to W^n × S_+^{n−1} is invertible.

The τ-function may also be written as a ratio of polynomials of degree n − 1 and n respectively. Let T_k ∈ Jac(k) denote the lower-right k × k submatrix of T, 1 ≤ k ≤ n. It follows from Cramer's rule that

$$\tau(z) = e_1^T (T - z)^{-1} e_1 = \frac{\det(T_{n-1} - zI)}{\det(T - zI)} = \frac{\prod_{j=1}^{n-1} \left( \lambda_j^{(n-1)} - z \right)}{\prod_{j=1}^{n} \left( \lambda_j^{(n)} - z \right)}, \qquad (3.4.7)$$

where Λ^{(k)} denotes the diagonal matrix of eigenvalues of T_k and Λ^{(n)} = Λ. We will show that the ordered eigenvalues of T_{n−1} and T_n interlace, i.e.

$$\lambda_1^{(n)} < \lambda_1^{(n-1)} < \lambda_2^{(n)} < \dots < \lambda_{n-1}^{(n-1)} < \lambda_n^{(n)}. \qquad (3.4.8)$$

Thus, interlacing sequences provide another parametrization of Jac(n). A convenient visual description of interlacing sequences, called diagrams, was introduced by Kerov and Vershik [22]. The importance of these alternate parametrizations (spectral measures, τ-functions, diagrams) is that they provide a transparent framework for the analysis of the limit n → ∞.
The surprising aspect of Theorem 28 is that the spectral data (Λ, u) provides enough information to reconstruct the matrix T. There are three reconstruction procedures. The first involves orthogonal polynomials, the second uses the theory of continued fractions, and a third involves the explicit solution of the equation T Q = QΛ for T. We explain the use of orthogonal polynomials below, and outline the theory of continued fractions in the exercises. In order to develop these procedures, it is first necessary to establish basic properties of the eigenvalues of Jacobi matrices.

Lemma 8. Assume T ∈ Jac(n). Then:

1. The first entry of each non-zero eigenvector is non-zero. In particular, we may normalize the eigenvectors to ensure u_k > 0 for 1 ≤ k ≤ n.

2. The eigenvalues of T are distinct.

Proof. We write the eigenvalue equation T v = zv in coordinates:

$$b_{k-1} v_{k-1} + (a_k - z) v_k + b_k v_{k+1} = 0, \qquad 1 \le k \le n, \qquad (3.4.9)$$

with the convention b₀ = b_n = 0. Since the off-diagonal terms b_k are strictly positive, we may solve this linear system recursively. Given v₁, we find

$$v_2 = \frac{v_1 (z - a_1)}{b_1}, \qquad v_3 = \frac{v_1 \left( (a_2 - z)(a_1 - z) - b_1^2 \right)}{b_1 b_2}, \quad \text{etc.} \qquad (3.4.10)$$

Thus, v ≡ 0 ∈ R^n if v₁ = 0. Further, the solution space to the eigenvalue equation T v = λv has dimension at most 1.
Lemma 9. The characteristic polynomials d_k(z) = det(zI − T_k) satisfy the recurrence relations

$$d_{k+1}(z) = (z - a_{n-k}) d_k(z) - b_{n-k}^2 \, d_{k-1}(z), \qquad 1 \le k \le n - 1, \qquad (3.4.11)$$

with the initial condition d₀(z) ≡ 1 and the convention b_n = 0.

Proof. Expand the determinant det(zI − T_{k+1}) about its first row, and compute the minors associated to z − a_{n−k} and b_{n−k}.
Lemma 10. The eigenvalues of T_k and T_{k+1} interlace, 1 ≤ k ≤ n − 1.

Proof. We consider the τ-functions for the minors T_k,

$$\tau_k(z) = \frac{\det(T_k - zI)}{\det(T_{k+1} - zI)} = -\frac{d_k(z)}{d_{k+1}(z)}. \qquad (3.4.12)$$

By the recurrence relation (3.4.11), we have

$$-\frac{1}{\tau_k(z)} = z - a_{n-k} + b_{n-k}^2 \, \tau_{k-1}(z). \qquad (3.4.13)$$

We claim that on the real line, τ_k(x) is strictly increasing between its poles. Indeed, it is clear that τ₁(x) = (a_n − x)^{−1} has this property, and upon differentiating (3.4.13) we find that

$$\frac{1}{\tau_k^2} \tau_k' = 1 + b_{n-k}^2 \, \tau_{k-1}' > 0,$$

except at poles. The claim follows by induction.

Since τ_k is strictly increasing between poles, by the intermediate value theorem it has exactly one zero between any two poles. By (3.4.12), the zeros of τ_k are the eigenvalues of T_k, and the poles of τ_k are the eigenvalues of T_{k+1}. Thus, they interlace.
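Lemma 10 is easy to observe numerically. The sketch below (assuming NumPy) builds a random Jacobi matrix and checks strict interlacing of the eigenvalues of the lower-right minors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
a = rng.normal(size=n)
b = np.abs(rng.normal(size=n - 1)) + 0.1     # strictly positive off-diagonal
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

def minor_eigs(k):
    """Sorted eigenvalues of the lower-right k x k minor T_k."""
    return np.linalg.eigvalsh(T[n - k:, n - k:])

for k in range(1, n):
    lo, hi = minor_eigs(k), minor_eigs(k + 1)
    # each eigenvalue of T_k lies strictly between consecutive ones of T_{k+1}
    assert np.all(hi[:-1] < lo) and np.all(lo < hi[1:])
```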

A remarkable feature of the spectral theory of Jacobi matrices is that the orthogonal polynomials associated to the spectral measure μ(T) may be used to reconstruct T. In order to state this assertion precisely, let us recall some basic facts about orthogonal polynomials. Assume given a probability measure μ on R that has finite moments of all orders, i.e.,

$$\int_{\mathbb{R}} |x|^\alpha \, \mu(dx) < \infty, \qquad \alpha > 0. \qquad (3.4.14)$$

We may apply the Gram–Schmidt procedure to the monomials {x^k}_{k=0}^{\infty} to construct a sequence of polynomials that are orthogonal in L²(R, μ). There are two standard normalizations that one may adopt.

Orthonormal polynomials, denoted {p_k}_{k=0}^{\infty}, have the property that p_k is of degree k, k = 0, 1, 2, …, and

$$\int_{\mathbb{R}} p_k(x) p_l(x) \, \mu(dx) = \delta_{kl}. \qquad (3.4.15)$$

Monic polynomials, denoted {π_k}_{k=0}^{\infty}, have the property that π_k(x) is of degree k and the coefficient of x^k is 1. Further,

$$\int_{\mathbb{R}} \pi_k(x) \pi_l(x) \, \mu(dx) = 0, \qquad k \ne l. \qquad (3.4.16)$$

Lemma 11 (Three-term recurrence for orthogonal polynomials). Given (Λ, u) ∈ W^n × S_+^{n−1}, let μ(Λ, u) = ∑_{k=1}^{n} u_k² δ_{λ_k}. Then the associated monic orthogonal polynomials {π_k}_{k=0}^{n} satisfy the three-term recurrence

$$\pi_k(z) = (z - a_k)\pi_{k-1}(z) - b_{k-1}^2 \, \pi_{k-2}(z), \qquad 1 \le k \le n, \qquad (3.4.17)$$

where the coefficients a_k and b_k are given by

$$a_k = \frac{\int_{\mathbb{R}} x \pi_{k-1}^2(x) \, \mu(dx)}{\int_{\mathbb{R}} \pi_{k-1}^2(x) \, \mu(dx)}, \qquad b_k^2 = \frac{\int_{\mathbb{R}} x \pi_k(x) \pi_{k-1}(x) \, \mu(dx)}{\int_{\mathbb{R}} \pi_{k-1}^2(x) \, \mu(dx)}, \qquad k = 1, \dots, n, \qquad (3.4.18)$$

with π_{−1} = 0 and hence b₀ = 0. Recall that π₀ = 1. The recurrence (3.4.18) defines a Jacobi matrix T(μ).

Remark 29. If µ is not a discrete measure of the form (3.4.5), but has bounded
support, the recurrence (3.4.17) defines a bounded Jacobi operator on l2 (C).

Proof. Given any μ as in (3.4.14), we obtain the sequence {π_k} using the Gram–Schmidt procedure. When μ is of the form (3.4.5), the vector space L²(R, μ) has dimension n and the Gram–Schmidt procedure yields an orthogonal basis {π₀, π₁, …, π_{n−1}} for L²(R, μ).

The three-term recurrence for the orthogonal polynomials is obtained as follows. Since xπ_k(x) is a polynomial of degree k + 1, it can be expressed as a linear combination xπ_k(x) = ∑_{j=0}^{k+1} c_{j,k} π_j(x). Since the π_j are monic, we must have c_{k+1,k} = 1. Moreover, for j = 0, …, k − 2,

$$\int_{\mathbb{R}} x \pi_k(x) \pi_j(x) \, \mu(dx) = \int_{\mathbb{R}} \pi_k(x) \, x \pi_j(x) \, \mu(dx) = 0,$$

since xπ_j lies in the span of {π₀, …, π_{k−1}}. Thus, c_{j,k} = 0 for j = 0, …, k − 2 and we find

$$x \pi_k(x) = \pi_{k+1}(x) + c_{k,k} \pi_k(x) + c_{k-1,k} \pi_{k-1}(x). \qquad (3.4.19)$$

It remains to show that c_{k−1,k} > 0. By orthogonality, ∫_R xπ_{k−1}(x)π_k(x) μ(dx) = ∫_R π_k²(x) μ(dx). Thus, c_{k−1,k} > 0 for all k such that π_{k−1} does not vanish in L²(R, μ). Assume π_l does not vanish in L²(R, μ) for l = 0, 1, 2, …, k − 1 < n − 1. Then the recurrence defines π_k, which is not the zero polynomial since it is monic. For Λ ∈ W^n, the measure μ has n distinct atoms, so p(x) ≢ 0 implies ∫_R p²(x) μ(dx) > 0 if p is a polynomial of degree less than n. This is (3.4.17) aside from a change in notation.
Proof of Theorem 28. We have defined a forward map T ↦ μ(T) as follows. The matrix T defines a τ-function τ(z) = e₁^T (T − zI)^{−1} e₁, which is expressed as a ratio of characteristic polynomials in (3.4.7). The poles of τ(z) are the eigenvalues of T. The norming constants are the residues at the poles, and are given by

$$u_k^2 = -\frac{d_{n-1}(\lambda_k)}{d_n'(\lambda_k)}, \qquad 1 \le k \le n. \qquad (3.4.20)$$

The inverse map μ ↦ T(μ) is given by Lemma 11. The orthogonal polynomials defined by μ satisfy a three-term recurrence whose coefficients determine T.
We only need to show that the map μ ↦ T(μ) ↦ μ(T(μ)) is the identity. Let μ ≅ (Λ, u) be given and define T(μ) by the recurrence relations. We will show that

$$e_1^T (T - zI)^{-1} e_1 = \int_{\mathbb{R}} \frac{1}{x - z} \, \mu(dx) = \sum_{k=1}^{n} \frac{u_k^2}{\lambda_k - z}. \qquad (3.4.21)$$

We first show that the eigenvalues of T coincide with {λ_k}. Define p_j(x) = π_j(x) ∏_{k=1}^{j} b_k^{−1}, p₀(x) = π₀(x); then

$$x p_0(x) = a_1 p_0(x) + b_1 p_1(x), \qquad x p_k(x) = b_k p_{k-1}(x) + a_{k+1} p_k(x) + b_{k+1} p_{k+1}(x), \quad k > 0.$$

Because p_n(λ_j) = 0 for all j, we conclude that

$$(p_0(\lambda_j), p_1(\lambda_j), \dots, p_{n-1}(\lambda_j))^T$$

is a non-trivial eigenvector for eigenvalue λ_j. We expand both sides of (3.4.21) for large z, to see that all we have to establish is the relation

$$e_1^T T^k e_1 = \int_{\mathbb{R}} x^k \, \mu(dx), \qquad 0 \le k \le n - 1. \qquad (3.4.22)$$

To see this, consider

$$T e_1 = a_1 e_1 + b_1 e_2, \qquad T e_k = b_{k-1} e_{k-1} + a_k e_k + b_k e_{k+1}, \quad k > 1.$$

Define new basis vectors f_j = e_j ∏_{k=1}^{j−1} b_k, f₁ = e₁; this is possible because b_j > 0 for all j = 1, 2, …, n − 1. We then have

$$T f_1 = a_1 f_1 + f_2, \qquad T f_k = b_{k-1}^2 f_{k-1} + a_k f_k + f_{k+1}, \quad k > 1.$$

We then diagonalize, setting T = QΛQ^T, f̂_j = Q^T f_j, to find

$$\Lambda \hat{f}_1 = a_1 \hat{f}_1 + \hat{f}_2, \qquad \Lambda \hat{f}_k = b_{k-1}^2 \hat{f}_{k-1} + a_k \hat{f}_k + \hat{f}_{k+1}, \quad k > 1.$$

Component-wise, this is the same three-term recurrence as for the monic polynomials. So, taking into account f₁ = e₁, we find

$$\hat{f}_j = \pi_{j-1}(\Lambda) Q^T e_1, \qquad f_j = \pi_{j-1}(T) e_1.$$

Then, because x^k = ∑_{j=0}^{k} c_{jk} π_j(x), we have T^k e₁ = ∑_{j=0}^{k} c_{jk} π_j(T) e₁ = ∑_{j=0}^{k} c_{jk} f_{j+1}, and

$$e_1^T T^k e_1 = c_{0k}.$$

Similarly,

$$\int_{\mathbb{R}} x^k \, \mu(dx) = \sum_{j=0}^{k} c_{jk} \int_{\mathbb{R}} \pi_j(x) \, \mu(dx) = c_{0k}.$$
This proves the theorem. The approach extends to bi-infinite Jacobi operators [6].
Remark 30. Observe that the recurrence relation (3.4.17) may be rewritten as the matrix equation

$$\begin{pmatrix} a_1 - z & 1 & 0 & \dots & 0 \\ b_1^2 & a_2 - z & 1 & \dots & 0 \\ 0 & b_2^2 & a_3 - z & \ddots & 0 \\ \vdots & & \ddots & \ddots & 1 \\ 0 & 0 & \dots & b_{n-1}^2 & a_n - z \end{pmatrix} \begin{pmatrix} \pi_0(z) \\ \pi_1(z) \\ \vdots \\ \pi_{n-1}(z) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ -\pi_n(z) \end{pmatrix}. \qquad (3.4.23)$$

Since π₀(z) = 1, each zero of π_k(z) is an eigenvalue of the matrix above. Thus, π_k(z) = det(zI − T̃_k), where T̃_k denotes the upper-left k × k submatrix of T (compare with T_k and d_k(z) = det(zI − T_k)).
Thus, given μ, the entries of T are obtained from "top to bottom". However, given T, the τ-function is built from the τ-functions −d_k(z)/d_{k+1}(z), computed "bottom to top".

Remark 31. Consider the sequence of polynomials

$$p_k(x) = \left( \prod_{j=1}^{k} b_j \right)^{-1} \pi_k(x), \qquad k = 1, 2, \dots, n - 1. \qquad (3.4.24)$$

This is actually an orthonormal sequence, and it satisfies the three-term recurrence

$$b_k p_k(x) = (x - a_k) p_{k-1}(x) - b_{k-1} p_{k-2}(x). \qquad (3.4.25)$$

3.5 Jacobians for tridiagonal ensembles


We can now combine Theorem 28 with the definition of the Hermite(β) ensembles to state a refined version of Theorem 21.

Theorem 32. For each β > 0, the law of the Hermite(β) ensemble in spectral variables (Λ, u) ∈ W^n × S_+^{n−1} is given by

$$p_{\mathrm{Hermite}}(\Lambda, u)\, D\Lambda\, Du = \frac{1}{Z_{n,\beta}} e^{-\frac{\beta}{4}\operatorname{Tr}(\Lambda^2)} |\Delta(\Lambda)|^{\beta} \, D\Lambda \left( \prod_{j=1}^{n} u_j^{\beta-1} \right) Du, \qquad (3.5.1)$$

where (∏_{j=1}^{n} u_j^{β−1}) Du refers to the joint density for n independent χ_β random variables, normalized so that the sum of their squares is one. In particular, Λ and u are independent.
Theorem 32 follows from a computation of the Jacobian of the spectral map S : Jac(n) → W^n × S_+^{n−1}.

Theorem 33. The volume forms on Jac(n) and W^n × S_+^{n−1} are related by

$$DT = \prod_{j=1}^{n} da_j \prod_{k=1}^{n-1} b_k^{n-k-1} \, db_k = C_n \, \Delta(\Lambda)\, D\Lambda \left( \prod_{k=1}^{n} u_k \right) Du, \qquad (3.5.2)$$

where C_n is a universal constant.


Remark 34. We have suppressed the explicit form of the universal constants in the statement of the theorem to focus on the marginals on W^n and S_+^{n−1} respectively. The computation of the constants is an interesting exercise (see [8]).

While Theorem 33 is an analytic/geometric assertion, the simplest proof uses probabilistic reasoning, as in step 3 of the proof of Theorem 22. Since we have computed the Jacobian for the diagonalizing map Symm(n) → R^n × O(n) (Weyl's formula) and the tridiagonalizing map Symm(n) → Jac(n) (Theorem 22), the ratio of these Jacobians may be used to compute the Jacobian of the spectral map Jac(n) → W^n × S_+^{n−1}. The main point is that by the O(n) invariance of GOE, the top row of the eigenvector matrix must be uniformly distributed on S^{n−1} and is independent of Λ. This gives the term ∏_{j=1}^{n} u_j Du in equation (3.5.2). As Dumitriu and Edelman remark, this is a "true random matrix theory" calculation. Another approach to (3.5.2) uses symplectic geometry.
Lemma 12 (Vandermonde determinant in (a, b) coordinates).

$$\Delta(\Lambda) = \prod_{j<k} (\lambda_j - \lambda_k) = \frac{\prod_{k=1}^{n-1} b_k^{n-k}}{\prod_{j=1}^{n} u_j}. \qquad (3.5.3)$$

Proof. 1. Recall that Λ^{(l)} denotes the diagonal matrix of eigenvalues of T_l and that d_l(x) = ∏_{j=1}^{l} (x − λ_j^{(l)}). Therefore, we have the identity

$$\prod_{j=1}^{l} \prod_{k=1}^{l-1} \left( \lambda_j^{(l)} - \lambda_k^{(l-1)} \right) = \prod_{j=1}^{l} d_{l-1}\!\left( \lambda_j^{(l)} \right) = \prod_{k=1}^{l-1} d_l\!\left( \lambda_k^{(l-1)} \right). \qquad (3.5.4)$$

Since d_{l−1} and d_l are related through the three-term recurrence

$$d_l(x) = (x - a_{n-l+1}) d_{l-1}(x) - b_{n-l+1}^2 \, d_{l-2}(x),$$

we have

$$\prod_{k=1}^{l-1} d_l\!\left( \lambda_k^{(l-1)} \right) = b_{n-l+1}^{2(l-1)} \prod_{k=1}^{l-1} d_{l-2}\!\left( \lambda_k^{(l-1)} \right) = b_{n-l+1}^{2(l-1)} \prod_{j=1}^{l-2} d_{l-1}\!\left( \lambda_j^{(l-2)} \right).$$

We apply this identity repeatedly, starting with l = n, to obtain

$$\prod_{k=1}^{n-1} d_n\!\left( \lambda_k^{(n-1)} \right) = b_1^{2(n-1)} \prod_{j=1}^{n-2} d_{n-1}\!\left( \lambda_j^{(n-2)} \right) = b_1^{2(n-1)} b_2^{2(n-2)} \prod_{k=1}^{n-3} d_{n-2}\!\left( \lambda_k^{(n-3)} \right) = \dots = \prod_{k=1}^{n-1} b_k^{2(n-k)}.$$

2. The coefficients u_j² are the residues of τ_n(z) at the poles λ_j, i.e.

$$u_k^2 = \frac{d_{n-1}(\lambda_k)}{d_n'(\lambda_k)}, \qquad 1 \le k \le n. \qquad (3.5.5)$$

Observe also that

$$d_n'(\lambda_k) = \prod_{j \ne k} (\lambda_j - \lambda_k), \qquad \text{and} \qquad \prod_{k=1}^{n} d_n'(\lambda_k) = \Delta(\Lambda)^2. \qquad (3.5.6)$$

Therefore,

$$\prod_{j=1}^{n} u_j^2 = \frac{1}{\Delta(\Lambda)^2} \prod_{k=1}^{n} |d_{n-1}(\lambda_k)| = \frac{\prod_{k=1}^{n-1} b_k^{2(n-k)}}{\Delta(\Lambda)^2}. \qquad (3.5.7)$$

[Figure 3.5.1: a commutative diagram with GOE on Symm(n) mapping to Hermite-1 on Jac(n) via Householder tridiagonalization, to spectral variables (Λ, Q^T e₁) via the map to spectral variables, and back via the inverse spectral map.]

Figure 3.5.1: We have already computed the push-forward of GOE under Householder reflections (3.2.13) and the push-forward of GOE onto spectral variables via Weyl's formula (2.2.13). The composition of the map to spectral variables and the inverse spectral map must give us the reduction to tridiagonal form via Householder reflections. This allows the computation of the Jacobian of the inverse spectral map.

Proof of Theorem 33. 1. Our goal is to compute the Jacobian of the spectral mapping S,

$$DT = \left| \frac{\partial (T(a, b))}{\partial (\Lambda, u)} \right| D\Lambda \, Du, \qquad (3.5.8)$$

where Du is uniform measure on {u ∈ R^n | |u| = 1, u_j > 0 for all j}. Rather than compute the change of variables directly, we will compute the push-forward of GOE onto Jac(n) and W^n × S_+^{n−1} separately, and obtain the Jacobian above; see Figure 3.5.1.

2. Consider the push-forward of GOE under the map M ↦ (Λ, u), where M = QΛQ^T is the diagonalization of M, with the normalization that the first non-zero entry in each column of Q is positive. Since Λ and the matrix of eigenvectors Q are independent, Λ and u = Q^T e₁ are independent. Since Q is distributed according to Haar measure on O(n), the vector u is uniformly distributed on S_+^{n−1}, and the push-forward of GOE is the measure

$$p(\Lambda, u)\, D\Lambda\, Du = c_n e^{-\frac{1}{4}\operatorname{Tr}(\Lambda^2)} \Delta(\Lambda)\, D\Lambda\, Du. \qquad (3.5.9)$$

3. Next consider the push-forward of GOE under the map M ↦ T, where M = QTQ^T denotes the tridiagonalization of M. As we have seen in the proof of Theorem 22, T and the variables ω_l are independent, and the marginal distribution of T is given by

$$\tilde{p}(T)\, DT = C_n e^{-\frac{1}{4}\operatorname{Tr}(T^2)} \prod_{j=1}^{n} da_j \prod_{k=1}^{n-1} b_k^{n-k-1} \, db_k. \qquad (3.5.10)$$

4. Since T ∈ Jac(n) and (Λ, u) ∈ W^n × S_+^{n−1} are in bijection, we have

$$p(\Lambda, u) = \tilde{p}(T(\Lambda, u)) \left| \frac{\partial (T(a, b))}{\partial (\Lambda, u)} \right|. \qquad (3.5.11)$$

We compare the expressions in (3.5.9) and (3.5.10) and use Lemma 12 to obtain

$$\left| \frac{\partial (T(a, b))}{\partial (\Lambda, u)} \right| = \frac{C_n \prod_{k=1}^{n-1} b_k}{c_n \prod_{j=1}^{n} u_j}. \qquad (3.5.12)$$

The constants are computed in [8].

Proof of Theorem 32. We change variables using the spectral mapping and Theorem 33 to obtain the following identity for the law of the Hermite(β) ensembles:

$$C_{n,\beta} \, e^{-\frac{\beta}{4}\operatorname{Tr}(T^2)} \prod_{k=1}^{n-1} b_k^{(\beta-1)(n-k)} \, DT \qquad (3.5.13)$$

$$= C_{n,\beta} \, e^{-\frac{\beta}{4}\operatorname{Tr}(\Lambda^2)} \Delta(\Lambda)^{\beta} \, D\Lambda \left( \prod_{j=1}^{n} u_j^{\beta-1} \right) Du. \qquad (3.5.14)$$

Since the distribution factors, Λ and u are independent with the laws stated in Theorem 32.

Exercises
3.1. Let w ∈ Rn have iid N (0, 1) components. Show that |w| and w/|w| are
independent.
3.2. Let U ∈ O(n) be a random orthogonal matrix. For example, U could be a Householder reflection associated to a random vector w. Now assume A ∼ GOE. Show that B := U A Uᵀ ∼ GOE and that B is independent of U.
3.3. Write a numerical code to sample matrices from both GOE and the Hermite(1) ensemble. Verify numerically that a suitably normalized density of eigenvalues for the GOE matrix approaches the semicircle law as n increases (n = 100 should be ample). Is the same true for the Hermite(1) ensemble? Why or why not?

3.4. Consider the tridiagonal matrix T ∈ Jac(n) that has entries a_j = 0,
1 ≤ j ≤ n, and b_k = 1, 1 ≤ k ≤ n − 1.

(a) Compute explicitly the spectral measure using Chebyshev polynomials
(compare T with the recurrence relations for the Chebyshev polynomials).

(b) Plot histograms of two distributions related to T for n = 100: (i) the
empirical distribution of eigenvalues ((1/n) Σ_{k=1}^{n} δ_{λ_k}); (ii) the
spectral density Σ_{k=1}^{n} u_k² δ_{λ_k}. Can you identify the limit in (i)?

(This exercise will be relevant for an enumeration problem relating Brownian
excursion to the Riemann ζ-function.)
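A quick numerical check related to part (a) of the exercise above (this assumes the standard fact that the characteristic polynomials of this T satisfy the Chebyshev recurrence, so the eigenvalues are 2 cos(kπ/(n+1))):

```python
import numpy as np

n = 100
# Free Jacobi matrix: zero diagonal, unit off-diagonal.
T = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
evals = np.sort(np.linalg.eigvalsh(T))

# det(x I - T) obeys the Chebyshev (second kind) recurrence, so the
# eigenvalues are 2 cos(k*pi/(n+1)), k = 1, ..., n.
k = np.arange(1, n + 1)
exact = np.sort(2 * np.cos(k * np.pi / (n + 1)))
assert np.allclose(evals, exact)
```

For the histogram in (b)(i), the empirical law of these eigenvalues approaches the arcsine density 1/(π√(4 − x²)) on (−2, 2), not the semicircle law.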

3.5. Establish uniqueness and smoothness in the proof of Theorem 28.


3.6. Use equation (3.4.12) to recursively expand τ_n as a continued fraction.
Combine this with the uniqueness step in Exercise 3.5 to deduce an alternative
approach to Theorem 28 that avoids the theory of orthogonal polynomials.

3.7. The following property of the function −z^{−1} is relevant in the
continued fraction scheme. Symmetric matrices have a partial order: given
A, B ∈ Symm(n) we say that A ≥ B if u^T A u ≥ u^T B u for every u ∈ R^n.
Suppose A ≥ B > 0. Show that −A^{−1} ≥ −B^{−1}.
3.8. This problem is a follow-up to exercise 5 in HW 1. Given a map f as in
that exercise, compute an (explicit) expression for its derivative Df .
3.9. Compute the following normalization constants:

(a) The normalization constants Z_{n,β} in the standard definitions of GOE,
GUE and GSE with exponential weight e^{−(β/4) Tr(M²)}.

(b) The constant C_{n,β} in (3.5.13).

(c) The constant C_n in the Jacobian for ensembles (3.2.3) (compare with your
calculation of the volume of the unit sphere in HW1).

3.10. The proofs of Dumitriu and Edelman finesse the following issue: given
T ∈ Jac(n) it requires some care to find a decomposition for the tangent space
T_T Jac(n), especially the isospectral manifold, M_T, that is analogous to Lemma
2. As in that lemma, we may split T_T Jac(n) into orthogonal subspaces that
correspond to diagonal matrices Λ̇ and Q^T Q̇ ∈ o(n). However, while each
Q^T Q̇ ∈ o(n) generates a curve in T_T Symm(n), not all Q^T Q̇ give rise to curves
in T_T Jac(n). Verify this. Explore this issue further by trying to find a basis
for the isospectral manifold M_T (see equation (3.4.3)).

3.6 Notes
To include in improved version.

1. Tridiagonal matrices as weights in enumeration problems.

2. Example: Chebyshev polynomials, Brownian excursion as a scaling limit
of Dyck paths and relation with the ζ-function.
Chapter 4

Determinantal formulas:
From Vandermonde to
Fredholm

Our purpose in this section is to present the elegant determinantal formulas of


Dyson, Gaudin and Mehta for invariant matrix ensembles on Her(n). These
formulas combine three distinct elements: (i) the Weyl formula on Her(n); (ii)
the theory of orthogonal polynomials; (iii) Fredholm determinants. We first
introduce these formulas for GUE. We then use the asymptotic properties of
Hermite polynomials to establish their scaling limits (Theorem 2, Theorem 6
and Theorem 9). While the eigenvalues of GOE and GSE do not have a deter-
minantal structure, they have a related Pfaffian structure, which is described in
a later chapter.

4.1 Probabilities as determinants


In what follows we will adopt the following notation. In order to avoid confusion,
we let x = (x₁, . . . , xₙ) ∈ R^n denote the unordered eigenvalues of M, and
λ = (λ₁, . . . , λₙ) ∈ W_n denote the ordered eigenvalues of M. The probability
density of x, denoted P^{(n)}(x₁, . . . , xₙ), is obtained from Weyl's formula:

    P^{(n)}(x₁, . . . , xₙ) = (1/Z_n) ∆(x)² e^{−(1/2) Σ_{k=1}^{n} x_k²}.       (4.1.1)

Observe that P (n) is invariant under permutations (x1 , . . . , xn ) 7→ (xσ1 , . . . , xσn ),


σ ∈ S(n). In practice, our interest lies not in the joint density of all n eigen-
values, but statistics such as the law of the largest eigenvalue. Thus, what is
required is an analytical technique to extract such information from (4.1.1) by
integrating out degrees of freedom to obtain information on the joint distribu-
tion of m-eigenvalues, 1 ≤ m ≤ n. More precisely, given m and a Borel function


f : R^m → R we consider random variables of the type

    N_f = Σ_{(j₁,...,j_m) ∈ ⟦1,n⟧^m, j_k distinct} f(x_{j₁}, . . . , x_{j_m}).           (4.1.2)

Expectations of random variables of the form (4.1.2) are given by

    E(N_f) = ∫_{R^m} f(x₁, . . . , x_m) R_m^{(n)}(x₁, . . . , x_m) dx₁ . . . dx_m,        (4.1.3)

where R_m^{(n)} is the m-point correlation function

    R_m^{(n)}(x₁, . . . , x_m)                                                 (4.1.4)
      = (n!/(n−m)!) ∫_{R^{n−m}} P^{(n)}(x₁, . . . , x_m, x_{m+1}, . . . , xₙ) dx_{m+1} . . . dxₙ.

The combinatorial factor in (4.1.4) arises as follows. There are (n choose m)
ways of picking subsets of m distinct indices from ⟦1, n⟧. On the other hand,

    R_m^{(n)}(x₁, . . . , x_m) = R_m^{(n)}(x_{σ₁}, x_{σ₂}, . . . , x_{σ_m}),   σ ∈ S(m),  (4.1.5)

and the integral on the right hand side of (4.1.4) appears m! times when
integrating over the complementary n − m variables for each choice of indices
{j₁, . . . , j_m}. We state the following theorem, which is proved in the
following sections.

Theorem 35. The joint density and m-point functions for GUE(n) are

    P^{(n)}(x₁, . . . , xₙ) = (1/n!) det ( K_n(x_j, x_k) )_{1≤j,k≤n},          (4.1.6)

    R_m^{(n)}(x₁, . . . , x_m) = det ( K_n(x_j, x_k) )_{1≤j,k≤m},              (4.1.7)

where the integral kernel K_n is defined by the Hermite wave functions:

    K_n(x, y) = Σ_{k=0}^{n−1} ψ_k(x) ψ_k(y).                                   (4.1.8)

Remark 36. The kernel K_n may be simplified using identities for the Hermite
polynomials. The Christoffel-Darboux formula (B.2.6) allows us to write

    K_n(x, y) = √n ( ψ_n(x)ψ_{n−1}(y) − ψ_n(y)ψ_{n−1}(x) ) / (x − y).          (4.1.9)

Further, eliminating ψ_{n−1} with the identity (B.2.4) yields

    K_n(x, y) = ( ψ_n(x)ψ_n′(y) − ψ_n′(x)ψ_n(y) ) / (x − y) − (1/2) ψ_n(x)ψ_n(y).   (4.1.10)

A particular consequence of Theorem 35 is the following fundamental formula.
Assume S is a bounded Borel set, let 1_S denote its indicator function, and
let A_m(S) denote the probability that the set S contains precisely m eigenvalues
for M ∈ GUE(n).

Theorem 37. The generating function of {A_m(S)}_{m=0}^{∞} is given by the formula

    det (I − zK_n 1_S) = Σ_{m=0}^{∞} A_m(S)(1 − z)^m,   z ∈ C,                 (4.1.11)

where det (I − zK_n 1_S) denotes the Fredholm determinant of the kernel

    K_n 1_S (x, y) = Σ_{k=0}^{n−1} 1_S(x) ψ_k(x) ψ_k(y) 1_S(y).                (4.1.12)

Theorem 35 and Theorem 37 illustrate the general spirit of determinantal


formulas in random matrix theory. The density of a joint distribution is ex-
pressed as a determinant of an integral operator with finite rank. One may then
use the theory of orthogonal polynomials, in particular, results on the asymp-
totics of orthogonal polynomials, to establish the basic limit theorems outlined
in Chapter 1 (see Theorem 38 and Theorem 39 below).
Appendices B and C provide brief introductions to Hermite polynomials and
Fredholm determinants respectively.
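The Hermite wave functions are easy to work with numerically, which is a useful sanity check on the formulas ahead. The sketch below (our own helper names) evaluates ψ_k via NumPy's probabilists' Hermite module (He_k is monic and orthogonal with respect to e^{−x²/2}, matching the h_k of (4.2.2)) and verifies the orthonormality relation (4.2.4) and the trace identity ∫ K_n(x, x) dx = n by simple trapezoid quadrature.

```python
import numpy as np
from math import lgamma
from numpy.polynomial import hermite_e as He

def psi(k, x):
    # Hermite wave function psi_k(x) = h_k(x) e^{-x^2/4} / (sqrt(k!) (2 pi)^{1/4}),
    # with h_k = He_k the monic probabilists' Hermite polynomial.
    c = np.zeros(k + 1)
    c[k] = 1.0
    return (He.hermeval(x, c) * np.exp(-x**2 / 4)
            * np.exp(-0.5 * lgamma(k + 1)) / (2 * np.pi) ** 0.25)

def trap(f, x):
    # simple trapezoid rule
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2)

x = np.linspace(-25, 25, 5001)
# orthonormality (4.2.4): int psi_j psi_k dx = delta_{jk}
for j in range(5):
    for k in range(5):
        val = trap(psi(j, x) * psi(k, x), x)
        assert abs(val - (1.0 if j == k else 0.0)) < 1e-8

# the kernel K_n of (4.1.8) satisfies int K_n(x, x) dx = n
n = 8
Kdiag = sum(psi(k, x) ** 2 for k in range(n))
assert abs(trap(Kdiag, x) - n) < 1e-8
```

Direct polynomial evaluation is adequate for small k; for large n a stable recurrence is preferable (see the semicircle check in Section 4.5).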

4.2 The m-point correlation function


Proof of Theorem 35. We form linear combinations of the rows of the Vandermonde
matrix to obtain

    ∆(x) = det [ h_{j−1}(x_k) ]_{1≤j,k≤n},                                     (4.2.1)

where the j-th row consists of the values h_{j−1}(x₁), . . . , h_{j−1}(xₙ). The
calculation above applies to any set of monic polynomials of degree
0, 1, 2, . . . , n − 1. The Hermite polynomials and wave functions are relevant
because they satisfy the orthogonality relations

    ∫_R h_j(x) h_k(x) e^{−x²/2}/√(2π) dx = δ_{jk} k!,                          (4.2.2)

and allow the inclusion of an exponential weight. Precisely, the Hermite wave
functions

    ψ_k(x) = (1/√(k!)) h_k(x) e^{−x²/4}/(2π)^{1/4},                            (4.2.3)

satisfy the orthogonality relation

    ∫_R ψ_j(x) ψ_k(x) dx = δ_{jk},                                             (4.2.4)

and form a basis for L²(R). Let H denote the matrix with entries H_{jk} =
ψ_{j−1}(x_k). It follows from (4.2.1) and (4.2.3) that

    e^{−(1/2) Σ_k x_k²} ∆(x)² ∝ (det H)² = det(H^T H) = det [K_n(x_j, x_k)]_{1≤j,k≤n},   (4.2.5)

using the identity

    (H^T H)_{jk} = Σ_{l=1}^{n} H_{lj} H_{lk} = Σ_{l=0}^{n−1} ψ_l(x_j) ψ_l(x_k) = K_n(x_j, x_k).   (4.2.6)

Therefore, the joint density P^{(n)}(x) is proportional to det K_n. To determine
the constant of proportionality we recall that the determinant of a matrix
A = [a_{jk}]_{1≤j,k≤n} satisfies

    det A = Σ_{σ∈S(n)} sgn(σ) ∏_{j=1}^{n} a_{σ_j j},                           (4.2.7)

where sgn(σ) denotes the sign of the permutation σ. We then evaluate the
integral

    ∫_{R^n} (det H)² dx₁ . . . dxₙ = ∫_{R^n} ( det [ψ_{j−1}(x_k)]_{1≤j,k≤n} )² dx₁ . . . dxₙ

      = Σ_{σ,τ∈S(n)} sgn(σ) sgn(τ) ∫_{R^n} ∏_{j=1}^{n} ψ_{σ_j−1}(x_j) ψ_{τ_j−1}(x_j) dx₁ . . . dxₙ   (4.2.8)

      = Σ_{σ,τ∈S(n)} sgn(σ) sgn(τ) ∏_{j=1}^{n} δ_{σ_j,τ_j} = Σ_{σ,τ∈S(n)} 1_{σ=τ} = n!.

We combine (4.2.8) and (4.2.6) to obtain the first assertion in Theorem 35:

    P^{(n)}(x₁, . . . , xₙ) = (1/n!) det [K_n(x_j, x_k)]_{1≤j,k≤n}.
The formulas for the correlation functions may be obtained by induction,
beginning with

    R_n^{(n)}(x₁, . . . , xₙ) = det [K_n(x_j, x_k)]_{1≤j,k≤n}.                 (4.2.9)

First, the orthonormality relations (4.2.4) imply

    ∫_R K_n(x, x) dx = n,      ∫_R K_n(x, z) K_n(z, y) dz = K_n(x, y).         (4.2.10)

Assume (4.1.7) holds for an index m + 1 ≤ n. We then have

    R_m^{(n)}(x₁, . . . , x_m) = (1/(n − m)) ∫_R R_{m+1}^{(n)}(x₁, . . . , x_m, x_{m+1}) dx_{m+1}

      = (1/(n − m)) ∫_R det [K_n(x_j, x_k)]_{1≤j,k≤m+1} dx_{m+1}               (4.2.11)

      = (1/(n − m)) Σ_{σ∈S(m+1)} sgn(σ) ∫_R K_n(x₁, x_{σ₁}) · · · K_n(x_{m+1}, x_{σ_{m+1}}) dx_{m+1}.

If σ_{m+1} = m + 1 in this sum, then the first equality in (4.2.10) implies

    ∫_R K_n(x₁, x_{σ₁}) · · · K_n(x_{m+1}, x_{σ_{m+1}}) dx_{m+1}               (4.2.12)
      = n K_n(x₁, x_{σ₁}) · · · K_n(x_m, x_{σ_m}).

If σ_{m+1} ≠ m + 1, there exist j ≤ m and k ≤ m such that σ_j = m + 1 and
σ_{m+1} = k. We then use the second equality in (4.2.10) to find

    ∫_R K_n(x₁, x_{σ₁}) · · · K_n(x_{m+1}, x_{σ_{m+1}}) dx_{m+1}               (4.2.13)
      = ∫_R K_n(x₁, x_{σ₁}) · · · K_n(x_j, x_{m+1}) · · · K_n(x_{m+1}, x_k) dx_{m+1}
      = K_n(x₁, x_{σ′₁}) · · · K_n(x_m, x_{σ′_m}),

where σ′ is the permutation of {1, . . . , m} such that σ′_j = k and σ′_l = σ_l if
l ≠ j. Each permutation σ′ ∈ S(m) may come from m permutations σ ∈ S(m + 1).
Further, sgn(σ′) = −sgn(σ), since these permutations differ by a single swap.
Therefore, using equations (4.2.12) and (4.2.13) we have

    ∫_R det [K_n(x_j, x_k)]_{1≤j,k≤m+1} dx_{m+1} = (n − m) det [K_n(x_j, x_k)]_{1≤j,k≤m}.

4.3 Determinants as generating functions


Proof of Theorem 37. The Fredholm determinant det (I − zK_n 1_S) is an entire
function of z. Thus, equation (4.1.11) is equivalent to the statement

    A_m(S) = (1/m!) (−d/dz)^m det (I − zK_n 1_S) |_{z=1}.                      (4.3.1)

We first prove formula (4.3.1) in the case m = 0. The probability that all

eigenvalues lie outside S is given by


 
    ∫_{R^n} ( ∏_{j=1}^{n} (1 − 1_S(x_j)) ) P^{(n)}(x₁, . . . , xₙ) dx₁ . . . dxₙ         (4.3.2)

      = Σ_{j=0}^{n} (−1)^j ∫_{R^n} ρ_j^n(1_S(x₁), . . . , 1_S(xₙ)) P^{(n)}(x₁, . . . , xₙ) dx₁ . . . dxₙ,

where ρ_j^n(x₁, . . . , xₙ) is the j-th elementary symmetric function in n variables.
For example,

    ρ₀^n(x) = 1,   ρ₁^n(x) = Σ_{j=1}^{n} x_j,   ρ₂^n(x) = Σ_{j<k} x_j x_k,   ρ_n^n(x) = ∏_{j=1}^{n} x_j.

Then, we can express

    ρ_j^n(x) = (1/j!) Σ_{(j₁,...,j_j) ∈ ⟦1,n⟧^j, j_k distinct} ∏_{k=1}^{j} x_{j_k}.

Using (4.1.3) with m = j and f(x₁, . . . , x_j) = ∏_{k=1}^{j} 1_S(x_k), we obtain

    ∫_{R^n} ρ_j^n(1_S(x₁), . . . , 1_S(xₙ)) P^{(n)}(x₁, . . . , xₙ) dx₁ . . . dxₙ = (1/j!) E(N_f)   (4.3.3)

      = (1/j!) ∫_{R^j} det [K_n 1_S(x_k, x_l)]_{1≤k,l≤j} dx₁ . . . dx_j.

In the last equality, we have used (4.1.7) and multiplied the kernel on the left
and right by the diagonal matrix d_S = diag(1_S(x₁), . . . , 1_S(x_j)), so that

    1_S(x₁) · · · 1_S(x_j) R_j^{(n)}(x₁, . . . , x_j) = 1_S²(x₁) · · · 1_S²(x_j) R_j^{(n)}(x₁, . . . , x_j)
      = det ( d_S [K_n(x_k, x_l)]_{1≤k,l≤j} d_S ) = det [K_n 1_S(x_k, x_l)]_{1≤k,l≤j},

where K_n 1_S is defined in (4.1.12). We now combine (4.3.2) and (4.3.3) to
obtain
    Σ_{j=0}^{n} (−1)^j ∫_{R^n} ρ_j^n(1_S(x₁), . . . , 1_S(xₙ)) P^{(n)}(x₁, . . . , xₙ) dx₁ . . . dxₙ
      = det(I − K_n 1_S),                                                      (4.3.4)

using the infinite series (C.1.8) for the Fredholm determinant (only n terms are
non-zero, since K_n 1_S has rank n; see Exercise 4.2).
We now turn to the case m ≥ 1. Equation (4.3.2) must now be modified to
allow exactly m eigenvalues within S and n − m eigenvalues outside S. Define

    f(x₁, . . . , xₙ) = ∏_{j=1}^{m} 1_S(x_j) ∏_{j=m+1}^{n} (1 − 1_S(x_j)).

Then from (4.1.3), when we take into account the m! permutations of the first
m elements, and the (n − m)! permutations of the last n − m elements,

    A_m(S) = (1/(m!(n − m)!)) E(N_f)
      = (1/(m!(n − m)!)) ∫_{R^n} f(x₁, . . . , xₙ) R_n^{(n)}(x₁, . . . , xₙ) dx₁ · · · dxₙ.

We then write

    f(x₁, . . . , xₙ) = ∏_{j=1}^{m} 1_S(x_j) Σ_{k=0}^{n−m} (−1)^k ρ_k^{n−m}(1_S(x_{m+1}), . . . , 1_S(xₙ)).

We use the fact that ρ_k^{n−m}(1_S(x_{m+1}), . . . , 1_S(xₙ)) is given by a sum of
(n−m choose k) terms, each of which is a product of k factors, and all terms
integrate to the same value. So,

    ∫_{R^n} ∏_{j=1}^{m} 1_S(x_j) ρ_k^{n−m}(1_S(x_{m+1}), . . . , 1_S(xₙ)) R_n^{(n)}(x₁, . . . , xₙ) dx₁ · · · dxₙ

      = (n−m choose k) ∫_{R^{m+k}} ∏_{j=1}^{m+k} 1_S(x_j) ( ∫_{R^{n−m−k}} R_n^{(n)}(x₁, . . . , xₙ) dx_{m+k+1} · · · dxₙ ) dx₁ · · · dx_{m+k}

      = ((n − m)!/k!) ∫_{R^{m+k}} ∏_{j=1}^{m+k} 1_S(x_j) R_{m+k}^{(n)}(x₁, . . . , x_{m+k}) dx₁ · · · dx_{m+k}

      = ((n − m)!/k!) ∫_{R^{m+k}} det ( K_n 1_S(x_j, x_l) )_{1≤j,l≤m+k} dx₁ · · · dx_{m+k}.

Then, it follows that

    A_m(S) = (1/m!) Σ_{k=0}^{n−m} ((−1)^k/k!) ∫_{R^{m+k}} det ( K_n 1_S(x_j, x_l) )_{1≤j,l≤m+k} dx₁ · · · dx_{m+k}

      = (1/m!) (−d/dz)^m det(I − zK_n 1_S) |_{z=1}.
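Because K_n 1_S has rank n, the Fredholm determinant in (4.1.11) reduces to an n × n matrix determinant det(I_n − zG), where G_{jk} = ∫_S ψ_j ψ_k dx, so det(I − zK_n 1_S) = ∏_i (1 − z g_i) with g_i the eigenvalues of G. Substituting w = 1 − z, the A_m(S) are the coefficients of ∏_i ((1 − g_i) + g_i w). A minimal Python sketch (helper names and quadrature choices are ours):

```python
import numpy as np
from math import lgamma
from numpy.polynomial import hermite_e as He

def psi(k, x):
    # Hermite wave function, normalization (4.2.3)
    c = np.zeros(k + 1); c[k] = 1.0
    return (He.hermeval(x, c) * np.exp(-x**2 / 4)
            * np.exp(-0.5 * lgamma(k + 1)) / (2 * np.pi) ** 0.25)

def occupancy_probs(n, a, b, m=400):
    # A_0(S), ..., A_n(S) for S = (a, b), via det(I - zG) with
    # G_{jk} = int_S psi_j psi_k dx (Gauss-Legendre quadrature).
    z, w = np.polynomial.legendre.leggauss(m)
    x = (b - a) / 2 * z + (a + b) / 2
    w = (b - a) / 2 * w
    P = np.array([psi(k, x) for k in range(n)])     # n x m values
    G = (P * w) @ P.T
    g = np.linalg.eigvalsh(G)                       # eigenvalues in [0, 1]
    coeffs = np.array([1.0])
    for gi in g:                                    # prod((1-g_i) + g_i w)
        coeffs = np.convolve(coeffs, [1.0 - gi, gi])
    return coeffs                                   # coeffs[m] = A_m(S)

A = occupancy_probs(6, -1.0, 1.0)
assert len(A) == 7 and abs(A.sum() - 1.0) < 1e-10
assert np.all(A > -1e-12)
```

As a cross-check, Σ_m m A_m reproduces the expected number of eigenvalues in S, namely ∫_S K_n(x, x) dx = trace(G).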

4.4 Scaling limits of independent points


Recall the semicircle density psc from (1.2.1). We show in the next section that
the global eigenvalue density, or density of states, for GUE(n) is given by psc as
n → ∞. Before we describe this more precisely, we consider a situation of iid
points to contrast with the distributions that arise in GUE(n).


Consider an iid vector Λ = √n (λ₁, λ₂, . . . , λₙ)^T ∈ R^n where P(λ_j ∈ S) =
∫_S p_sc(x′) dx′. We form the empirical measure

    L_n(dx) = (1/n) Σ_{k=1}^{n} δ_{Λ_k}(dx),                                   (4.4.1)

and consider the deterministic measure E L_n defined by

    ∫ f(x) E L_n(dx) := E⟨L_n, f⟩ = (1/n) Σ_{k=1}^{n} E f(Λ_k),   f ∈ C₀(R).   (4.4.2)

But it is clear, and effectively by definition, that E L_n(dx′) = p(x′) dx′ =
(1/√n) p_sc(x′/√n) dx′, and hence √n p(√n x′) dx′ = p_sc(x′) dx′.
Next, we consider a gap probability in the "bulk". Let s ∈ (−2, 2), let I ⊂ R
be an interval, and consider the rescaled interval I_n = √n ( s + I/(n p_sc(s)) ).
Then by independence

    P( no Λ_j ∈ I_n ) = ( 1 − ∫_{I_n} (1/√n) p_sc(x′/√n) dx′ )^n.              (4.4.3)

We directly find that

    ∫_{I_n} (1/√n) p_sc(x′/√n) dx′ = (|I|/n)(1 + o(1))   as n → ∞.             (4.4.4)

From this it follows that

    lim_{n→∞} P( no Λ_j ∈ I_n ) = exp ( − ∫_I dx′ ) = e^{−|I|}.                (4.4.5)

This is, of course, the gap probability for a Poisson process.


We now consider the distribution of the maximum, i.e. at the "edge". Let
λ̂ = max_j Λ_j. Then, by independence,

    P ( n^{1/6}(2√n − λ̂) > t ) = ( 1 − ∫_{2−n^{−2/3}t}^{2} p_sc(x′) dx′ )^n.

By direct calculation we find, for t ≥ 0,

    lim_{n→∞} P ( n^{1/6}(2√n − λ̂) > t ) = e^{−(2/(3π)) t^{3/2}}.             (4.4.6)

(Replacing t with (3π/2)^{2/3} t^{2/3} reduces the right hand side to e^{−t}.)

From
√ this we see a (trivial) scaling limit of the density of states after rescaling
by 1/ n, gaps on the order of 1/n after this rescaling and a largest “eigenvalue”

that satisfies λ̂ ∼ 2 n+ξn1/6 for an appropriate random variable ξ. All of these
statements carry over to the random matrix setting, but the actual limits are
very different for local statistics.
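The Poisson gap limit (4.4.5) is easy to test by Monte Carlo. A sketch (rejection sampling from p_sc; the sample sizes and tolerance are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_semicircle(size):
    # rejection sampling from p_sc(x) = sqrt(4 - x^2)/(2 pi) on [-2, 2],
    # dominated by the constant 1/pi
    out = np.empty(0)
    while out.size < size:
        x = rng.uniform(-2, 2, 2 * size)
        u = rng.uniform(0, 1 / np.pi, 2 * size)
        out = np.concatenate([out, x[u < np.sqrt(4 - x**2) / (2 * np.pi)]])
    return out[:size]

# I_n = sqrt(n) (s + I/(n p_sc(s))) with s = 0 and I = (0, 1), so |I| = 1
n, trials = 400, 2000
psc0 = 1 / np.pi
lo, hi = 0.0, np.sqrt(n) / (n * psc0)
gaps = 0
for _ in range(trials):
    pts = np.sqrt(n) * sample_semicircle(n)
    gaps += np.all((pts <= lo) | (pts >= hi))
freq = gaps / trials
assert abs(freq - np.exp(-1)) < 0.05   # Poisson gap probability e^{-|I|}
```

Running the same experiment on GUE eigenvalues instead of iid points produces a visibly larger gap probability at the same |I|: eigenvalue repulsion suppresses small gaps, which is the point of the comparison.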

4.5 Scaling limits I: the semicircle law


The empirical measure of the eigenvalues of GUE(n),

    L_n(dx) = (1/n) Σ_{k=1}^{n} δ_{λ_k}(dx),                                   (4.5.1)

has the expected density

    E L_n(dx) = (1/n) K_n(x, x) dx.                                            (4.5.2)

This density is also referred to as the global eigenvalue density or the density of
states. The above expression is somewhat more transparent in its weak form,
using the unordered eigenvalues x₁, . . . , xₙ. For every f ∈ C₀(R), we have

    E⟨L_n, f⟩ = (1/n) ∫_R f(x) R₁^{(n)}(x) dx = (1/n) ∫_R f(x) K_n(x, x) dx,    (4.5.3)

by Theorem 35 and equation (4.1.3). The value of the kernel K_n on the diagonal
is determined by the Christoffel-Darboux relation (4.1.9) and L'Hôpital's rule:

    K_n(x, x) = √n ( ψ_n′(x)ψ_{n−1}(x) − ψ_n(x)ψ_{n−1}′(x) ).                  (4.5.4)
The scaling limit of E L_n is the semicircle law defined in (1.2.1).

Lemma 13.

    lim_{n→∞} (1/√n) K_n(x√n, x√n) = p_sc(x),   x ∈ R.                         (4.5.5)

Further, for any ε > 0, the convergence is uniform on the set { x : ||x| − 2| ≥ ε }.
Proof. The lemma follows from the Plancherel-Rotach asymptotics for the Hermite
wave functions (see Cases 1 and 2 and equations (B.5.1)–(B.5.4) in
Appendix B). Define the rescaled wave functions

    Ψ_{n+p}(x) = n^{1/4} ψ_{n+p}(x√n),   p = −2, −1, 0.                        (4.5.6)

We use the identity (B.2.4) to eliminate ψ_n′ and ψ_{n−1}′ from (4.5.4) and find
after a few computations that

    (1/√n) K_n(x√n, x√n) = Ψ_{n−1}(x)² − √((n−1)/n) Ψ_{n−2}(x) Ψ_n(x).         (4.5.7)

We now use the asymptotic relations (B.5.2) and (B.5.4) depending on whether
|x| < 2 or |x| > 2. Since the region |x| > 2 corresponds to exponential decay
with a rate proportional to n, we focus on the region |x| < 2. In order to simplify
notation, let

    θ = n ( ϕ − (1/2) sin 2ϕ ) − (1/2)ϕ − π/4.                                 (4.5.8)

(This is the argument of the cosine in (B.5.17) when p = −1.) Then (4.5.7) and
(B.5.2) yield

    (1/√n) K_n(x√n, x√n) ∼ (1/(π sin ϕ)) ( cos²θ − cos(θ + ϕ) cos(θ − ϕ) ) = (1/(2π)) √(4 − x²),

using x = 2 cos ϕ and the trigonometric formulae cos 2α = 2 cos²α − 1 and
2 cos(θ+ϕ) cos(θ−ϕ) = cos 2ϕ + cos 2θ. A similar calculation with (B.5.4) shows
that the limit vanishes on the set |x| > 2. The assertion of uniformity in the
convergence follows from the assertion of uniform convergence in the Plancherel-
Rotach asymptotics.
Using Exercise 4.9, Lemma 13 implies that E L_n(dx), after rescaling, converges
weakly:

    E ( (1/n) Σ_{k=1}^{n} δ_{x_k/√n}(dx) ) → p_sc(x) dx,   weakly.             (4.5.9)

It is also worth noting that if f = 1_S then

    E ( fraction of eigenvalues that lie in S ) = ∫ f(x) E L_n(dx) = (1/n) ∫_S K_n(x, x) dx.
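Lemma 13 can be checked numerically. Direct polynomial evaluation of ψ_k overflows for large n, so the sketch below uses the stable three-term recurrence ψ_{k+1}(x) = (x ψ_k(x) − √k ψ_{k−1}(x))/√(k+1), which follows from the monic Hermite recurrence under the normalization (4.2.3); the tolerance is an ad hoc choice.

```python
import numpy as np

def hermite_wave_all(n, x):
    # psi_0, ..., psi_n at the points x via the three-term recurrence
    # psi_{k+1} = (x psi_k - sqrt(k) psi_{k-1}) / sqrt(k+1).
    psi = np.zeros((n + 1, x.size))
    psi[0] = np.exp(-x**2 / 4) / (2 * np.pi) ** 0.25
    psi[1] = x * psi[0]
    for k in range(1, n):
        psi[k + 1] = (x * psi[k] - np.sqrt(k) * psi[k - 1]) / np.sqrt(k + 1)
    return psi

n = 400
x = np.linspace(-1.5, 1.5, 7)                  # bulk points, away from the edges
P = hermite_wave_all(n - 1, x * np.sqrt(n))    # rows psi_0, ..., psi_{n-1}
Kdiag_scaled = np.sum(P**2, axis=0) / np.sqrt(n)
psc = np.sqrt(4 - x**2) / (2 * np.pi)
assert np.max(np.abs(Kdiag_scaled - psc)) < 0.02   # (4.5.5), finite-n error
```

Repeating the comparison near |x| = 2 shows the slow, nonuniform convergence at the edge that motivates the Airy rescaling of Section 4.7.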

4.6 Scaling limits II: the sine kernel

Recall from Definition 5 that K_sine is the integral kernel on R × R given by

    K_sine(x, y) = sin π(x − y) / (π(x − y)),   x ≠ y,                         (4.6.1)

and K_sine(x, x) = 1. It defines an integral operator on L²(S) for every bounded,
measurable set S. We can now prove a stronger version of Theorem 6.
Theorem 38. For each integer m = 0, 1, 2, . . ., each bounded Borel set S, and
each r ∈ (−2, 2),

    lim_{n→∞} P ( M ∈ GUE(n) has m eigenvalues in √n ( r + S/(n p_sc(r)) ) )

      = (1/m!) (−d/dz)^m det (I − zK_sine 1_S) |_{z=1}.                        (4.6.2)

The proof of Theorem 38 is a consequence of the following

Lemma 14. Let S be a bounded measurable set. Then for r ∈ (−2, 2),

    lim_{n→∞} sup_{x,y∈S} | (1/(p_sc(r)√n)) K_n ( √n r + x/(p_sc(r)√n), √n r + y/(p_sc(r)√n) ) − K_sine(x, y) | = 0.   (4.6.3)

Proof. For r ∈ (−2, 2) define ϕ(s) by x = r + πs/(n sin ϕ(0)) = 2 cos ϕ(s). We
then note that sin ϕ(0)/π = p_sc(r). We expand, for sufficiently large n,

    ϕ(s) − (1/2) sin 2ϕ(s) = ϕ(0) − (1/2) sin 2ϕ(0) − πs/n + O(n^{−2}).        (4.6.4)

Define the new functions

    Ψ_{n,p}(s) = n^{1/4} ψ_{n+p}(x√n).                                         (4.6.5)

From (B.5.2),

    Ψ_{n,p}(s) ∼ (1/√(π sin ϕ(0))) cos ( n ( ϕ(0) − (1/2) sin 2ϕ(0) ) − πs + (p + 1/2) ϕ(0) − π/4 ).   (4.6.6)

For fixed r, this is uniform for s in a compact set. We then use (4.1.9) and
y = r + πt/(n sin ϕ(0)) to find, for s ≠ t,

    (π/(sin ϕ(0) √n)) K_n(x√n, y√n)                                            (4.6.7)

      = (π/(sin ϕ(0) √n)) · √n ( ψ_n(x√n)ψ_{n−1}(y√n) − ψ_n(y√n)ψ_{n−1}(x√n) ) / ( √n (x − y) )

      = ( Ψ_{n,0}(s)Ψ_{n,−1}(t) − Ψ_{n,0}(t)Ψ_{n,−1}(s) ) / (s − t)

      ∼ (1/(π sin ϕ(0))) ( cos(θ_n − πs) cos(θ_n − πt − ϕ(0)) − cos(θ_n − πt) cos(θ_n − πs − ϕ(0)) ) / (s − t)

      = sin π(s − t) / (π(s − t)).                                             (4.6.8)

Here we set θ_n = n ( ϕ(0) − (1/2) sin 2ϕ(0) ) + (1/2)ϕ(0) − π/4 and used the identity

    cos α cos(β + γ) − cos(α + γ) cos β = sin γ sin(α − β).                    (4.6.9)

This is uniform for |t − s| ≥ δ. For |t − s| < δ, it is convenient to write

    ( ψ_n(x)ψ_{n−1}(y) − ψ_n(y)ψ_{n−1}(x) ) / (x − y)
      = ∫₀¹ ( ψ_{n−1}(x) ψ_n′(ℓx + (1−ℓ)y) − ψ_n(x) ψ_{n−1}′(ℓx + (1−ℓ)y) ) dℓ,

and establish uniform convergence of this, after rescaling as above, to

    sin π(s − t) / (π(s − t)) = ∫₀¹ ( sin πs sin(πℓs + π(1−ℓ)t) + cos πs cos(πℓs + π(1−ℓ)t) ) dℓ.   (4.6.10)

Proof of Theorem 38. Let K̃_n(x, y) denote the rescaled kernel
(1/(p_sc(r)√n)) K_n(x√n, y√n), x = r + s/(n p_sc(r)), y = r + t/(n p_sc(r)). It
follows from Lemma 14, using Sections C.2.1 and C.2, that

    lim_{n→∞} det ( I − z K̃_n 1_S ) = det ( I − z K_sine 1_S ),   z ∈ C,      (4.6.11)

and that the convergence is uniform in z for z in a bounded set. In particular,
the derivatives at z = 1 converge for all m, that is,

    lim_{n→∞} (−d/dz)^m det ( I − z K̃_n 1_S ) |_{z=1} = (−d/dz)^m det (I − zK_sine 1_S) |_{z=1}.   (4.6.12)

By Theorem 37, this is equivalent to (4.6.2).
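The limiting determinant det(I − K_sine 1_{(−t,t)}) can be computed to near machine precision with a Gauss–Legendre discretization (Bornemann's quadrature method for Fredholm determinants; the node counts below are our choice):

```python
import numpy as np

def sine_kernel_det(t, m=50):
    # det(I - K_sine 1_{(-t,t)}) approximated by the m x m determinant
    # det(I - sqrt(w_i) K(x_i, x_j) sqrt(w_j)) on Gauss-Legendre nodes.
    z, w = np.polynomial.legendre.leggauss(m)
    x, w = t * z, t * w
    K = np.sinc(np.subtract.outer(x, x))   # np.sinc(u) = sin(pi u)/(pi u)
    sw = np.sqrt(w)
    return np.linalg.det(np.eye(m) - sw[:, None] * K * sw[None, :])

S = sine_kernel_det
assert abs(S(1e-9) - 1.0) < 1e-6               # vanishing interval: no point
assert 0 < S(1.0) < S(0.5) < 1                 # gap probability decreases in t
assert abs(S(0.5, 50) - S(0.5, 70)) < 1e-12    # spectral convergence in m
```

To obtain the counting probabilities in (4.6.2), scale the kernel by z and differentiate det(I − zK_sine 1_S) at z = 1, exactly as in Theorem 37.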

4.7 Scaling limits III: the Airy kernel

Recall from Definition 8 that K_Airy is the continuous integral kernel on R × R
given by

    K_Airy(x, y) = ( Ai(x)Ai′(y) − Ai′(x)Ai(y) ) / (x − y),   x ≠ y.           (4.7.1)

The fluctuations at the edge of the spectrum are described as follows. Let
(x₁, . . . , xₙ) denote the unordered eigenvalues of a matrix M ∈ GUE(n) and let
us consider the shifted and rescaled points

    s_k = n^{1/6} ( x_k − 2√n ),   k = 1, . . . , n.                           (4.7.2)

For each nonnegative integer m and bounded, measurable set S, let B_m^{(n)}(S)
denote the probability that exactly m of the points s₁, . . . , sₙ lie in S when
M ∈ GUE(n). The following theorem is a consequence of Lemma 15 and the
discussion in Section C.2.
Theorem 39.

    lim_{n→∞} B_m^{(n)}(S) = (1/m!) (−d/dz)^m det (I − zK_Airy 1_S) |_{z=1}.   (4.7.3)
Remark 40. The assumption that S is bounded is necessary for K_sine. The
sine kernel has a (weak) rate of decay |x|^{−1} as |x| → ∞, and the Fredholm
determinant det(I − zK_sine 1_S) is not finite unless S is bounded. However, the
Airy function, and thus the Airy kernel, has strong decay as x and y → ∞.
The Fredholm determinant det(I − zK_Airy 1_S) is well-defined in L²(S) for sets
S that are bounded below, but not above, such as S = (a, ∞) for any a ∈ R.
Such sets will be considered when we compute the Tracy-Widom distribution.
See Exercise 5.
The proof of Theorem 39 follows from the Plancherel-Rotach asymptotics for
the Hermite polynomials, in particular the Airy asymptotics in the transition
zone (see Case 3 and (B.5.5)–(B.5.6) in Appendix B). The following lemma plays
a role analogous to that of Lemma 14 in the proof of Theorem 38.
Lemma 15. For x ≠ y,

    lim_{n→∞} | n^{−1/6} K_n ( 2√n + x/n^{1/6}, 2√n + y/n^{1/6} ) − K_Airy(x, y) | = 0,   (4.7.4)

and there exists a function G(x, y) ∈ L²([C, ∞)²) for all C ∈ R such that

    n^{−1/6} K_n ( 2√n + x/n^{1/6}, 2√n + y/n^{1/6} ) ≤ G(x, y).               (4.7.5)

Proof. Convergence follows from (B.5.6). The function G(x, y) can be con-
structed using (B.5.45) and (B.5.46), see Exercise 4.3.

4.8 The eigenvalues and condition number of GUE

Let M ∼ GUE(n). Let λ₁ ≤ λ₂ ≤ · · · ≤ λₙ be the eigenvalues of M. A
consequence of Theorem 39 is the following, for all t ∈ R:

    lim_{n→∞} P ( n^{2/3} ( λₙ/√n − 2 ) < t ) = det(I − K_Airy 1_{(t,∞)}) =: F₂(t),

    lim_{n→∞} P ( −n^{2/3} ( 2 + λ₁/√n ) < t ) = F₂(t).

Then, Theorem 38 gives, for t ≥ 0,

    lim_{n→∞} P ( (√n |λ_j|)/π > t for all j ) = det(I − K_sine 1_{(−t,t)}) =: S(t).   (4.8.1)

The singular values σ₁ ≤ σ₂ ≤ . . . ≤ σₙ of a matrix M are the square roots
of the non-zero eigenvalues of M*M. One can rewrite (4.8.1) as

    lim_{n→∞} P ( (√n σ₁)/π > t ) = S(t).                                      (4.8.2)

The condition number is defined as κ(M) := σₙ/σ₁.

Lemma 16. If M ∼ GUE(n), then for all t > 0,

    lim_{n→∞} P ( (π/(2n)) κ(M) < t ) = S(t^{−1}).                             (4.8.3)
√ √
Proof. We first show that λn / n → 2, λ1 / n → −2 in probability. Fix  > 0,
and let L > 0. Then
     
λn 2/3 λn 2/3 2/3 λn
1≤P √ −2 ≤ =P n √ −2 ≤n  ≥P n √ −2 ≤L ,
n n n

provided n2/3  ≥ L. So we, find


 
λn
1 ≤ lim inf P √ − 2 ≤  ≥ F2 (L) − F2 (−L).
n→∞ n


Letting L → ∞ √ gives convergence in probability for λn / n. Similar arguments
follow for λ1 / n. Next, define
 
λn λ1
E,n = √ − 2 ≤ , √ + 2 ≤  .
n n

We know that P(E,n ) → 1 as n → ∞. Then


π  π  π 
c
P κ(M ) < t = P κ(M ) < t, E,n + P κ(M ) < t, E,n .
2n 2n 2n
Because the second term must√ vanish as n → √ ∞, we focus on the first term. On
E,n it follows that (2 − ) n ≤ σn ≤ (2 + ) n and
   
π(2 + ) π  π(2 − )
P < t, E,n ≤ P κ(M ) < t, E,n ≤ P < t, E,n .
2nσ1 2n 2nσ1

We find that for  > 0


 
π  π  2 −  −1
lim sup P κ(M ) < t = lim sup P κ(M ) < t, E,n ≤ S t ,
n→∞ 2n n→∞ 2n 2
π π  
  2 +  −1
lim inf P κ(M ) < t = lim sup P κ(M ) < t, E,n ≥ S t .
n→∞ 2n n→∞ 2n 2

If S is continuous at t, send  ↓ 0 to obtain convergence in distribution. Since


S(t) is continuous, the result follows.
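The distribution F₂ defined above can be evaluated with the same Gauss–Legendre discretization used for the sine kernel, truncating (t, ∞) to (t, t + L) since the Airy kernel decays superexponentially on the right. This sketch assumes SciPy is available for Ai; L, m, and the tolerances are ad hoc choices of ours.

```python
import numpy as np
from scipy.special import airy

def tracy_widom_F2(t, m=80, L=14.0):
    # F2(t) = det(I - K_Airy 1_{(t, oo)}), truncated to (t, t + L).
    z, w = np.polynomial.legendre.leggauss(m)
    x = t + (z + 1) * L / 2
    w = w * L / 2
    Ai, Aip, _, _ = airy(x)
    num = np.outer(Ai, Aip) - np.outer(Aip, Ai)
    den = np.subtract.outer(x, x)
    np.fill_diagonal(den, 1.0)
    K = num / den
    np.fill_diagonal(K, Aip**2 - x * Ai**2)   # K_Airy(x, x) = Ai'(x)^2 - x Ai(x)^2
    sw = np.sqrt(w)
    return np.linalg.det(np.eye(m) - sw[:, None] * K * sw[None, :])

F2 = tracy_widom_F2
assert abs(F2(4.0) - 1.0) < 1e-5              # right tail: F2 -> 1
assert F2(-4.0) < 0.05                         # left tail: F2 -> 0
assert F2(-2.0) < F2(-1.0) < F2(0.0) < 1       # F2 is increasing
```

The diagonal value follows from (4.7.1) by L'Hôpital's rule and Ai″(x) = x Ai(x). Sampling n^{2/3}(λₙ/√n − 2) over many GUE matrices and comparing the empirical CDF against this F₂ makes a satisfying end-to-end test of Theorem 39.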

4.9 Notes on universality and generalizations


4.9.1 Limit theorems for β = 1, 4
4.9.2 Universality theorems

Exercises
4.1. Prove the Christoffel-Darboux identity (B.1.7) for Hermite polynomials.
(This is a standard relation and it is easy to find a proof in many texts, but try
to do it on your own.)
4.2. Show that

    ∫_{R^k} det[K(x_p, x_q)]_{1≤p,q≤k} dx₁ · · · dx_k = 0,                     (4.9.1)

for k > n, if K is of the form

    K(x, y) = Σ_{j=0}^{n−1} g_j(y) f_j(x),   f_j, g_j ∈ L²(R),  j = 0, 1, . . . , n − 1.   (4.9.2)

4.3. Finish the proof of Lemma 15 by constructing a function G(x, y).


4.4. Establish (4.8.1).
4.5. Use the method of steepest descent to establish the asymptotic formula
(A.3.1) for the Airy function. This is an easy application of the method of
steepest descent.
4.6. In order to appreciate the power of the Plancherel-Rotach asymptotics,
some numerical calculations will help.
(a) Develop a numerical scheme to compute all the roots of the n-th Hermite
polynomial hn . Plot the empirical distribution of roots for n = 100. Can
you determine the limiting density of suitably rescaled roots?

(b) Numerically compute the Hermite wave functions for large n, say n =
100, and compare the rescaled wave function with the Plancherel-Rotach
asymptotic formulas in all three regions (oscillatory, decaying and transi-
tion).
4.7. Use the method of steepest descent to establish the Plancherel-Rotach
asymptotics in the region of exponential decay (equation (B.5.4)). This requires
more care than Exercise 4.5.
4.8. Establish the following a priori bound on the Airy kernel. For any a ∈ R,

    sup_{x,y ≥ a} e^{x+y} |K_Airy(x, y)| < ∞.                                  (4.9.3)

Let S be the semi-infinite interval (a, ∞). Use the above estimate to establish
that the Fredholm determinant det(I − zK_Airy 1_S) is an entire function of z.
4.9. Let ρn (x), n = 1, 2, . . . be probability densities on R that converge al-
most uniformly to ρ(x) with respect to Lebesgue measure on R. Assume ρ has
compact support. Show that
Z Z
lim f (x)ρn (x)dx = f (x)ρ(x)dx
n→∞ R R

for every continuous function f with compact support.

4.10 Notes
To include in improved version.

1. Moment estimates to strengthen convergence to semicircle law.


2. Definition of determinantal processes.
3. Pair correlation function for the sine kernel.
Chapter 5

The equilibrium measure

In this section we establish properties of the equilibrium measure for general


invariant ensembles. We also relate the equilibrium measure to the classical
theory of orthogonal polynomials and Fekete points.

5.1 The log-gas


Let V : R → R denote a potential such that V(x) → ∞ sufficiently rapidly as
|x| → ∞. The log-gas with size n and potential nV is a system of n identical
charged particles constrained to the line, interacting via pairwise Coulomb
repulsion and the potential nV (we have scaled the potential V by n in order
to ensure a scaling limit). The total energy of the system in any configuration
x ∈ R^n is given by

    E(x) = n Σ_{j=1}^{n} V(x_j) + (1/2) Σ_{j≠k} log ( 1/|x_j − x_k| ).          (5.1.1)

A fundamental postulate of equilibrium statistical mechanics is that the
probability density of finding the system in a state x at inverse temperature
β > 0 is

    (1/Z_{n,V}(β)) e^{−βE(x)},                                                 (5.1.2)

where Z_{n,V} is the partition function

    Z_{n,V}(β) = ∫_{R^n} e^{−βE(x)} Dx.                                        (5.1.3)

The log-gas provides us with a physical caricature of eigenvalue repulsion. On


one hand, we see that the energy E(x) has two complementary terms: the
logarithmic potential drives charges apart, but the potential V confines them
in space. On the other hand, let V define an invariant probability measure of
the form (1.1.3) on Symm(n), Her(n) or Quart(n). As a consequence of Weyl’s


formula (Theorem 15), the equilibrium density (5.1.2) is precisely the joint law
of the eigenvalues for these ensembles at β = 1, 2 and 4 respectively. It is in
this sense that the ‘eigenvalues repel’.
We have scaled the energy V with n in (5.1.1) in order to obtain a simple
description of the scaling limit when n → ∞. In order to study this limit, we
view the energy function as a functional of the empirical measure, L_n, rather
than a configuration x ∈ R^n. For (r, s) ∈ R², let

    e(r, s) = (1/2) V(r) + (1/2) V(s) + (1/2) log ( 1/|r − s| ),               (5.1.4)

and given a probability measure µ on the line, define the functional

    I[µ] = ∫_R ∫_R e(r, s) µ(dr) µ(ds).                                        (5.1.5)

Observe that if L_n is the empirical measure associated to x ∈ R^n, then

    E(x) = n² ( (1/n) Σ_{j=1}^{n} V(x_j) + (1/(2n²)) Σ_{j≠k} log ( 1/|x_j − x_k| ) ) = n² Ĩ[L_n],   (5.1.6)

and we may rewrite the partition function in the form

    Z_{n,V}(β) = ∫_{R^n} e^{−n²β Ĩ[L_n]} Dx.                                   (5.1.7)

Here Ĩ[L_n] denotes the renormalized functional

    Ĩ[µ] = ∫_R ∫_R 1_{r≠s} e(r, s) µ(dr) µ(ds),                                (5.1.8)

that takes into account all interaction terms in I[µ], except the singular self-
interaction term from I[µ]. The logarithmic singularity in e(r, s) is integrable if
µ(ds) has an absolutely continuous density. Thus, if the particles in the log-gas
spread out sufficiently as n → ∞, we expect that µ has a smooth density, and

    lim_{n→∞} (1/n²) log Z_{n,V}(β) = −β min_µ I[µ].                           (5.1.9)

In order to establish this relation, it is first necessary to obtain a precise ana-


lytical understanding of this minimization problem. We first prove such results
under the formal assumption that there exists an R > 0 such that V (x) = +∞
for |x| > R. This simply means that we first restrict attention to measures with
support within the interval [−R, R]. Once the ideas are clear in this setting, we
turn to measures with support on the line.
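For a quadratic potential the critical points of the discrete log-gas energy are known explicitly: the roots r_j of the Hermite polynomial He_n satisfy Σ_{k≠j} 1/(r_j − r_k) = r_j/2, i.e. they are stationary for Σ_j r_j²/4 − Σ_{j<k} log|r_j − r_k|, and after rescaling by √n their empirical measure approaches the semicircle law. A numerical check (NumPy's hermegauss returns exactly these roots; normalization and tolerances are our choices):

```python
import numpy as np

n = 200
r, _ = np.polynomial.hermite_e.hermegauss(n)   # roots of He_n

# Stationarity of the log-gas energy: sum_{k != j} 1/(r_j - r_k) = r_j / 2.
diff = np.subtract.outer(r, r)
np.fill_diagonal(diff, np.inf)
force = np.sum(1.0 / diff, axis=1)
assert np.max(np.abs(force - r / 2)) < 1e-6

# Empirical measure of r/sqrt(n) versus the semicircle CDF on [-2, 2].
x = np.sort(r) / np.sqrt(n)
ecdf = (np.arange(1, n + 1) - 0.5) / n
Fsc = 0.5 + (x * np.sqrt(4 - x**2) / 2 + 2 * np.arcsin(x / 2)) / (2 * np.pi)
assert np.max(np.abs(ecdf - Fsc)) < 0.02
```

The stationarity identity follows from the differential equation He_n″ − x He_n′ + n He_n = 0 evaluated at a root; the CDF comparison is a discrete stand-in for the Euler–Lagrange characterization of the minimizer developed below.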

5.2 Energy minimization for the log-gas


5.2.1 Case 1: bounded support
Let P_R denote the set of probability measures on the interval [−R, R]. Recall
that the natural topology on P_R is the weak topology (we adopt the probabilists'
convention for what is conventionally termed the weak-∗ topology). A sequence
of measures {µ_k}_{k=1}^{∞} ⊂ P_R converges weakly to µ ∈ P_R if

    lim_{k→∞} ⟨µ_k, f⟩ = ⟨µ, f⟩,                                               (5.2.1)

for every function f ∈ C(R). This topology is natural, because it yields
compactness by Helly's theorem: each sequence {µ_k}_{k=1}^{∞} ⊂ P_R has a
subsequence that converges weakly to a measure in P_R.
Theorem 41. Assume V is a continuous function on [−R, R]. There exists a
unique probability measure µ∗ ∈ P_R such that

    I[µ∗] = min_{µ∈P_R} I[µ].                                                  (5.2.2)

The proof of Theorem 41 is a demonstration of the classical method of the
calculus of variations. There are two distinct ideas at work: existence follows
from the fact that the functional I[µ] is weakly lower semicontinuous; uniqueness
follows from the fact that I[µ] is a strictly convex function on P_R.

Lemma 17. Suppose the sequence {µ_n}_{n=1}^{∞} ⊂ P_R converges weakly to µ ∈ P_R.
Then

    I[µ] ≤ lim inf_{n→∞} I[µ_n].                                               (5.2.3)

Lemma 18. Let µ₀ ≠ µ₁ be two measures in P_R and let µ_θ = (1 − θ)µ₀ + θµ₁
denote their convex combination for each θ ∈ (0, 1). Then

    I[µ_θ] < (1 − θ)I[µ₀] + θI[µ₁].                                            (5.2.4)

Proof of Theorem 41. Existence. Since V is bounded, the function e(r, s) is
bounded below on [−R, R]². Therefore, inf_{µ∈P_R} I[µ] > −∞. Further, since the
logarithmic singularity is integrable, I[µ] < ∞ for any measure µ that is absolutely
continuous. Thus, we may assume that there is a sequence of measures {µ_k}_{k=1}^{∞}
such that

    lim_{k→∞} I[µ_k] = inf_{µ∈P_R} I[µ] < ∞.                                   (5.2.5)

Since P_R is compact in the weak topology, we may extract a convergent
subsequence, also labeled {µ_k}_{k=1}^{∞} for simplicity. Let µ∗ denote the weak limit
of this subsequence. We then use Lemma 17 to obtain the chain of inequalities

    inf_{µ∈P_R} I[µ] ≤ I[µ∗] ≤ lim inf_{k→∞} I[µ_k] = inf_{µ∈P_R} I[µ].        (5.2.6)

Thus, µ∗ is a minimizer.
72 CHAPTER 5. THE EQUILIBRIUM MEASURE

Uniqueness. Assume µ_* and ν_* are two distinct minimizers. We apply
Lemma 18 to their convex combination with θ = 1/2 to obtain the contradiction

    inf_{µ∈P_R} I[µ] ≤ I[(1/2)µ_* + (1/2)ν_*] < (1/2)(I[µ_*] + I[ν_*]) = inf_{µ∈P_R} I[µ].    (5.2.7)

5.2.2 Weak lower semicontinuity


We now turn to the proof of Lemma 17. We first observe that for each monomial
r^j s^k in the variables r and s, the quadratic functional

    µ ↦ ∫_{−R}^{R} ∫_{−R}^{R} r^j s^k µ(dr) µ(ds) = ( ∫_{−R}^{R} r^j µ(dr) ) ( ∫_{−R}^{R} s^k µ(ds) )

is weakly continuous, since it is the product of two bounded linear functionals
on P_R. Since each polynomial p(r, s) in the variables (r, s) is a finite sum of
monomials, the functional

    µ ↦ ∫_{−R}^{R} ∫_{−R}^{R} p(r, s) µ(dr) µ(ds)

is also weakly continuous. Finally, since each continuous function f ∈ C([−R, R]²)
may be uniformly approximated by polynomials, the quadratic functional

    µ ↦ ∫_{−R}^{R} ∫_{−R}^{R} f(r, s) µ(dr) µ(ds)

is weakly continuous.
The function e(s, t) defined in (5.1.4) is not continuous on [−R, R]² since the
logarithmic term is unbounded on the diagonal s = t. However, for any M > 0,
the truncated function e_M(r, s) = min(e(r, s), M) is continuous. Thus, given a
weakly convergent sequence of measures {µ_k}_{k=1}^∞ with limit µ ∈ P_R, we find

    ∫_{−R}^{R} ∫_{−R}^{R} e_M(r, s) µ(dr) µ(ds) = lim_{k→∞} ∫_{−R}^{R} ∫_{−R}^{R} e_M(r, s) µ_k(dr) µ_k(ds)
      ≤ lim inf_{k→∞} ∫_{−R}^{R} ∫_{−R}^{R} e(r, s) µ_k(dr) µ_k(ds) = lim inf_{k→∞} I[µ_k].

We let M → ∞ on the left-hand side and use the monotone convergence theorem
to obtain (5.2.3).

5.2.3 Strict convexity


Lemma 18 is a particular consequence of a general fact in potential theory. The
essential idea is to recognize that the function z ↦ − log |z| is the fundamental
solution to Laplace's equation in C ≅ R². More precisely, given a signed measure
µ with a smooth density ρ(z), supported in the ball B_R ⊂ C, the unique solution
to Poisson's equation with Dirichlet boundary condition

    −Δψ = µ,    z ∈ B_R,    ψ(z) = 0,    |z| = R,    (5.2.8)
is given by the integral formula

    ψ(z) = ∫_{B_R} G(z, w) ρ(w) Dw,    z ∈ B_R,    (5.2.9)

where Dw denotes the two-dimensional area element in C and G(z, w) is the
Green's function for Poisson's equation in the ball B_R with Dirichlet boundary
conditions,

    G(z, w) = (1/2π) log( (|w|/R) · |z − w_R|/|z − w| ),    w_R = R²w/|w|²,    z, w ∈ B_R.    (5.2.10)
The function G(z, w) is obtained by the method of images: the image point w_R
is the reflection of the point w ∈ B_R in the circle ∂B_R [20, §4.1]. What matters
here is that the dominant term in the Green's function is the logarithmic term
− log |z − w|, just as in equation (5.1.5), and the positivity of

    ∫_{B_R} ∫_{B_R} G(z, w) µ(dz) µ(dw) = − ∫_{B_R} ψ(w) Δψ(w) Dw = ∫_{B_R} |∇ψ(w)|² Dw > 0.    (5.2.11)

However, in contrast with (5.1.5), here we have assumed that µ(dw) has a smooth
density ρ(w), whereas the measures of interest in (5.1.5) are concentrated on an
interval, and may have no regularity. Thus, some care is needed in formulating
and proving a theorem on positivity analogous to (5.2.11).
Recall that a signed Borel measure µ on the line may be uniquely decomposed
into a difference of two positive measures, µ = µ₊ − µ₋. The Fourier
transform of a measure is defined by

    µ̂(u) = ∫_R e^{−ius} µ(ds),    u ∈ R.    (5.2.12)

The Fourier transform is a well-defined distribution. If µ± are finite measures
on [−R, R], the Fourier transform is a bounded continuous function of u; if µ is
absolutely continuous, it also decays to zero as |u| → ∞ by the Riemann–Lebesgue
lemma.
Lemma 19. Assume µ = µ₊ − µ₋ is a signed measure on [−R, R] such that

    ∫_{−R}^{R} µ₊(dr) = ∫_{−R}^{R} µ₋(dr) < ∞.    (5.2.13)

Then we have the identity

    ∫_{−R}^{R} ∫_{−R}^{R} log(1/|r − s|) (µ₊(dr)µ₊(ds) + µ₋(dr)µ₋(ds))    (5.2.14)
      = ∫_{−R}^{R} ∫_{−R}^{R} log(1/|r − s|) (µ₊(dr)µ₋(ds) + µ₋(dr)µ₊(ds)) + ∫_0^∞ (|µ̂(u)|²/u) du.

In particular, I[µ] > 0 if µ is non-zero and satisfies (5.2.13).

Remark 42. Equation (5.2.14) simply says that

    ∫_{−R}^{R} ∫_{−R}^{R} log(1/|r − s|) µ(dr) µ(ds) = ∫_0^∞ (|µ̂(u)|²/u) du    (5.2.15)

for a signed measure µ with ∫_{−R}^{R} µ(ds) = 0. This identity has been written in
the form (5.2.14) in order to ensure that there are no ill-defined terms of the
form ∞ − ∞. It is now clear from (5.1.4) and (5.1.5) that I[µ] > 0 for such
measures.

Proof. This proof is from [7, p.142]. We first regularize the logarithm at 0 and
use the following integral representation: for any real s and ε > 0,

    log(s² + ε²) = log ε² + 2 Im ∫_0^∞ ((e^{isu} − 1)/(iu)) e^{−εu} du.    (5.2.16)

We apply this integral representation to the following regularization of I[µ], and
use the fact that ∫_{−R}^{R} µ(dr) = 0, to obtain

    ∫_{−R}^{R} ∫_{−R}^{R} log((r − s)² + ε²) µ(dr) µ(ds)
      = 2 Im ∫_0^∞ ∫_{−R}^{R} ∫_{−R}^{R} ((e^{i(r−s)u} − 1)/(iu)) e^{−εu} µ(dr) µ(ds) du
      = 2 Im ∫_0^∞ (|µ̂(u)|²/(iu)) e^{−εu} du = −2 ∫_0^∞ (|µ̂(u)|²/u) e^{−εu} du.

We may rewrite this identity in terms of µ± as follows:

    ∫_{−R}^{R} ∫_{−R}^{R} log(1/√((r − s)² + ε²)) (µ₊(dr)µ₊(ds) + µ₋(dr)µ₋(ds))    (5.2.17)
      = ∫_{−R}^{R} ∫_{−R}^{R} log(1/√((r − s)² + ε²)) (µ₊(dr)µ₋(ds) + µ₋(dr)µ₊(ds)) + ∫_0^∞ e^{−εu} (|µ̂(u)|²/u) du.

We now let ε ↓ 0 and use the monotone convergence theorem to obtain (5.2.14).
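The integral representation (5.2.16) can be checked numerically. A minimal sketch, assuming only (5.2.16) itself; the test values of s and ε are arbitrary choices for this illustration. The imaginary part of the integrand reduces to (1 − cos(su)) e^{−εu}/u.

```python
import numpy as np

def trapezoid(y, x):
    # simple trapezoid rule, to avoid depending on a particular NumPy version
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def log_via_representation(s, eps):
    # Right-hand side of (5.2.16):
    # log eps^2 + 2 Im int_0^inf (e^{isu} - 1)/(iu) e^{-eps u} du,
    # whose imaginary part is (1 - cos(su)) e^{-eps u} / u.
    u = np.linspace(1e-8, 80.0, 1_500_001)
    integrand = (1.0 - np.cos(s * u)) * np.exp(-eps * u) / u
    return np.log(eps**2) + 2.0 * trapezoid(integrand, u)

for s, eps in [(1.3, 0.5), (-2.0, 1.0)]:
    assert abs(log_via_representation(s, eps) - np.log(s**2 + eps**2)) < 1e-3
```

The truncation of the integral at u = 80 is harmless because of the e^{−εu} damping.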

Finally, let us prove Lemma 18. Let µ₀ and µ₁ be two measures in P_R
as in (5.2.4). The difference is

    (1 − θ) I[µ₀] + θ I[µ₁] − I[µ_θ] = θ(1 − θ) ∫∫ log(1/|r − s|) (µ₀ − µ₁)(dr) (µ₀ − µ₁)(ds),

in the sense of signed measures. Thus, it is strictly positive when µ₀ ≠ µ₁ by
Lemma 19.

5.2.4 Case 2: Measures on the line


Having explained the main ideas behind Theorem 41 for measures supported on
[−R, R], let us turn to measures on the line; write P(R) for the set of probability
measures on R. The proof of uniqueness requires no change, since it is easily
verified that Lemma 19 holds for measures in P(R). However, it is necessary to
modify the proof of existence to account for a possible loss of compactness: a
sequence of measures in P(R) may drift off to infinity (e.g. µ_k = δ_k, k ∈ Z).
The appropriate condition required for compactness here is the following.

Definition 43. A sequence of measures {µ_k}_{k=1}^∞ ⊂ P(R) is tight if for every
ε > 0 there exists M_ε > 0 such that

    sup_{k≥1} µ_k(R\[−M_ε, M_ε]) < ε.    (5.2.18)

Compactness in P(R) is provided by the Prokhorov-Varadarajan
criterion: the sequence {µ_k}_{k=1}^∞ ⊂ P(R) has a subsequence that converges to a
measure µ ∈ P(R) if and only if the sequence {µ_k}_{k=1}^∞ is tight [33]. In practice,
application of this criterion requires a uniform estimate on the tails of the mea-
sures {µ_k}_{k=1}^∞. Such a bound is possible only if the growth of the confining
potential V(x) as |x| → ∞ is faster than the divergence of log |x| as |x| → ∞.
We formalize this requirement as follows. Observe that

    |r − s| = |(r − i) − (s − i)| ≤ |r − i| |s − i| = √(r² + 1) √(s² + 1),    (5.2.19)

since (r² + 1)(s² + 1) − (r − s)² = (rs + 1)² ≥ 0.

Therefore, we have the lower bound

    log(1/|r − s|) ≥ (1/2) log(1/(r² + 1)) + (1/2) log(1/(s² + 1)).    (5.2.20)

Let us define the function

    l(s) = (1/2) log(1/(s² + 1)) + (1/2) V(s).    (5.2.21)

If l(s) is bounded below, then by adding a constant to V if necessary, we can
ensure that l(s) ≥ 0 for all s. Clearly, this does not change the nature of the
minimization problem.

Theorem 44. Assume V(s) is a continuous function such that l(s) is bounded
below and l(s) → ∞ as |s| → ∞.

(a) There exists a unique probability measure µ_* ∈ P(R) such that

    I[µ_*] = min_{µ∈P(R)} I[µ].    (5.2.22)

(b) The support of the measure µ_* is contained within a finite interval.



Proof. (a) Since l is bounded below and the addition of a constant to V does
not change the minimization problem, we may assume that l(s) ≥ 0. Then

    e(r, s) = log(1/|r − s|) + (1/2)V(r) + (1/2)V(s) ≥ l(r) + l(s) ≥ 0,    (5.2.23)

and c := inf_{µ∈P(R)} I[µ] ≥ 0. Suppose {µ_k}_{k=1}^∞ is an infimizing sequence, i.e.
lim_{k→∞} I[µ_k] = c. Without loss of generality, we may assume that I[µ_k] ≤ c + 1
for all k. Tightness of the sequence {µ_k}_{k=1}^∞ follows from the following (Cheby-
shev) inequality. For any M > 0,

    c + 1 ≥ I[µ_k] = ∫_R ∫_R e(r, s) µ_k(dr) µ_k(ds)    (5.2.24)
      ≥ 2 ∫_R l(s) µ_k(ds) ≥ 2 ∫_{|s|>M} l(s) µ_k(ds) ≥ 2 l_M µ_k(R\[−M, M]),

where l_M = inf_{|s|≥M} l(s). Since lim_{|s|→∞} l(s) = ∞, l_M → ∞ as M → ∞. Thus,
for any ε > 0, we may choose M = M_ε large enough so that (5.2.18) holds. The
rest of the proof of part (a) follows that of Theorem 41.
(b) For any M > 0, let S_M denote the set (−∞, −M) ∪ (M, ∞). We will show
that µ_*(S_M) = 0 if M is large enough. The proof relies on varying the measure
µ_* by adding more mass proportional to µ_* in the set S_M. More precisely, let
ν denote the restriction of µ_* to the set S_M, and for any t ∈ (−1, 1), define the
measures
    µ_t = (µ_* + tν)/(1 + t ν(S_M)).    (5.2.25)

We then find that I[µ_t] is a differentiable function of t, with

    0 = dI[µ_t]/dt |_{t=0} = 2 ∫_{S_M} ∫_R e(r, s) ν(ds) µ_*(dr) − 2 ν(S_M) I[µ_*].    (5.2.26)

The estimate (5.2.23) and positivity of l yield the lower bound

    2 ∫_{S_M} ∫_R e(r, s) ν(ds) µ_*(dr)    (5.2.27)
      ≥ 2 ∫_{S_M} l(s) ν(ds) + 2 ν(S_M) ∫_R l(r) µ_*(dr) ≥ 2 ∫_{S_M} l(s) ν(ds) ≥ 2 l_M ν(S_M).

As in part (a), l_M → ∞ as M → ∞. Thus, for M sufficiently large, we have
l_M − I[µ_*] > 0, and since ν is a positive measure, we have the (trivial) estimate

    2(l_M − I[µ_*]) ν(S_M) ≥ 0.    (5.2.28)

On the other hand, the inequalities (5.2.26) and (5.2.27) yield the opposite
inequality
    2(l_M − I[µ_*]) ν(S_M) ≤ 0.    (5.2.29)

Thus, µ_*(S_M) = ν(S_M) = 0 for all M such that l_M > I[µ_*].

5.3 Fekete points


A second approach to the energy minimization problem relies on a study of the
minimizers of the function E(x) defined in (5.1.1) for x ∈ R^n, and a potential
V that satisfies the assumptions of Theorem 44. For any such potential, 0 ≤
E(x) < ∞ for any x ∈ R^n such that x_j ≠ x_k, j ≠ k. Thus, for each n, there
exists a set of points F_n ⊂ R^n such that

    E(x_*) = min_{x∈R^n} E(x),    x_* ∈ F_n.    (5.3.1)

The set F_n is called the set of n-Fekete points. The Fekete points are natu-
rally connected to the minimization problem for the functional I[µ] through the
modified functional H[L_n], where L_n(x) is the empirical measure associated to
a point x ∈ R^n. Let δ_n denote the rescaled energy of Fekete points,

    δ_n = (1/(n(n − 1))) E(x^{(n)}),    x^{(n)} ∈ F_n.    (5.3.2)
The main result is then the following.

Theorem 45. Assume V satisfies the assumptions of Theorem 44. Let {x^{(n)}}_{n=1}^∞
be a sequence of points x^{(n)} ∈ F_n. Then:

(a) The rescaled energy of the Fekete points increases monotonically to I[µ_*]:

    0 ≤ δ_n ≤ δ_{n+1} ≤ I[µ_*].    (5.3.3)

(b) The empirical measures L(x^{(n)}) converge weakly to µ_*.


Proof of (a). We first prove the estimates (5.3.3). The uniform upper bound on
E(x^{(n)}) is obtained as follows. Fix a positive integer n and a point x^{(n)} ∈ F_n.
By definition, for any s = (s_1, . . . , s_n) ∈ R^n,

    E(x^{(n)}) ≤ E(s) = (1/2) Σ_{j≠k} (V(s_j) + V(s_k)) + Σ_{j≠k} log(1/|s_j − s_k|),    (5.3.4)

where both sums run over ordered pairs 1 ≤ j ≠ k ≤ n. Let µ(ds) be any
probability measure on the line. We integrate (5.3.4) with respect to the n-fold
tensorized probability measure µ ⊗ µ ⊗ · · · ⊗ µ on R^n to obtain

    E(x^{(n)}) ≤ ∫_{R^n} [ (1/2) Σ_{j≠k} (V(s_j) + V(s_k)) + Σ_{j≠k} log(1/|s_j − s_k|) ] µ(ds_1) µ(ds_2) · · · µ(ds_n)    (5.3.5)
      = n(n − 1) ∫_R ∫_R e(r, s) µ(dr) µ(ds) = n(n − 1) I[µ],

since for each pair of indices j and k only the integrals over µ(ds_j) and
µ(ds_k) give contributions that are not unity, and there are n(n − 1) ordered
pairs (j, k) with j ≠ k. In particular, E(x^{(n)}) ≤ n(n − 1) I[µ_*].

The monotonicity of δ_n follows from the following argument. Suppose x^{(n+1)} =
(x_1, . . . , x_{n+1}) is a point in the Fekete set F_{n+1}. We fix an index m, 1 ≤ m ≤ n + 1,
and use the definition of E in (5.1.1) to obtain

    e^{−E(x^{(n+1)})/(n(n+1))} = ( Π_{1≤j≠k≤n+1} |x_j − x_k| e^{−V(x_j)/2} e^{−V(x_k)/2} )^{1/(n(n+1))}    (5.3.6)
      = ( Π_{j≠m} |x_j − x_m| e^{−(V(x_j)+V(x_m))/2} )^{2/(n(n+1))} ( Π_{j,k≠m, j≠k} |x_j − x_k| e^{−V(x_j)/2} e^{−V(x_k)/2} )^{1/(n(n+1))}
      ≤ ( Π_{j≠m} |x_j − x_m| e^{−(V(x_j)+V(x_m))/2} )^{2/(n(n+1))} e^{−δ_n (n−1)/(n+1)},

since the second factor is e^{−E(x̂)/(n(n+1))}, where x̂ ∈ R^n is obtained from
x^{(n+1)} by projecting out the coordinate x_m, and E(x̂) ≥ n(n − 1)δ_n by the
definition of the Fekete points.
Since m is arbitrary, we take the product over 1 ≤ m ≤ n + 1 to obtain

    e^{−E(x^{(n+1)})/n} ≤ e^{−(n−1)δ_n} Π_{1≤m≤n+1} ( Π_{j≠m} |x_j − x_m| e^{−(V(x_j)+V(x_m))/2} )^{2/(n(n+1))}
      = e^{−(n−1)δ_n} e^{−2E(x^{(n+1)})/(n(n+1))}.    (5.3.7)

This inequality simplifies to δ_n ≤ δ_{n+1}.


Proof of (b). While the self-energy of the Fekete points is infinite, inequality
(5.3.3) shows that a suitably renormalized energy is finite, and bounded above
by I[µ_*]. This inequality, in combination with an easy modification of the
Chebyshev inequality (5.2.24), also shows that the empirical measures L(x^{(n)})
are tight. Thus, there exists a convergent subsequence and a limiting probability
measure ν ∈ P(R) such that the empirical measures L^{(n)} defined by the Fekete
points x^{(n)} converge weakly to ν as n → ∞.

For any M > 0, we introduce the cut-off energy e_M(r, s) = min(M, e(r, s))
and observe that

    δ_n = (1/(n(n−1))) E(x^{(n)}) = (n²/(n(n−1))) ∫_R ∫_R 1_{r≠s} e(r, s) L^{(n)}(dr) L^{(n)}(ds)
      ≥ (n²/(n(n−1))) ∫_R ∫_R e_M(r, s) L^{(n)}(dr) L^{(n)}(ds) − M/(n − 1).

Since the function e_M(r, s) is continuous and 0 ≤ e_M(r, s) ≤ M, we may inter-
change limits as n → ∞ along this subsequence, and use Theorem 45(a) to obtain

    I[µ_*] ≥ lim inf_{n→∞} δ_n ≥ ∫_R ∫_R e_M(r, s) ν(dr) ν(ds).    (5.3.8)

We now let M → ∞ and use the monotone convergence theorem and the fact
that µ∗ is a minimizer to obtain
I[µ∗ ] ≥ I[µ] ≥ I[µ∗ ]. (5.3.9)
Since µ∗ is unique, it follows that µ∗ = ν.
This argument proves that every subsequential limit of L(n) is µ∗ . Thus, the
entire sequence converges to µ∗ .

5.4 Exercises
The first three questions are related. The goal is to formulate and analyze the
equation for the equilibrium measure µ∗ associated to the potential V (x). In
order to simplify your calculations, assume that µ∗ has a continuous density ψ,
in all the problems below. The last two questions discuss enumeration problems
related to the Catalan numbers.
1. Basics of the Hilbert transform. Let G(z) denote the Stieltjes transform

    G(z) = ∫_{−∞}^∞ (1/(s − z)) µ_*(ds) = ∫_{−∞}^∞ (ψ(s)/(s − z)) ds,    z ∈ C\supp(µ_*).    (5.4.1)

The Hilbert transform of ψ is the limit of the Stieltjes transform as z → x ∈ R.
The Hilbert transform also differs from the Stieltjes transform by the inclusion
of a factor of π (since this makes the Fourier transform of the operator H
particularly simple). That is, given µ_* as above, we set

    Hψ(x) = (1/π) p.v. ∫_{−∞}^∞ (ψ(s)/(x − s)) ds := lim_{ε→0} (1/π) ∫_{−∞}^∞ ψ(s) (x − s)/((x − s)² + ε²) ds.    (5.4.2)
(a) Show that Hψ is a bounded function when ψ(x) is continuous.

(b) Show that µ_* may be recovered from G by evaluating the jump in the
imaginary part of G across the support of µ_*:

    lim_{ε→0} (1/(2πi)) (G(x + iε) − G(x − iε)) = ψ(x).    (5.4.3)

(c) Compute the Hilbert transform of the following functions to obtain a feel
for it (answers are on wikipedia):

    e^{ix},    δ_0(x),    1_{[a,b]}(x).
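For the indicator function in part (c), the Hilbert transform in the convention (5.4.2) is (1/π) log|(x − a)/(x − b)| (a standard computation: the principal-value integral of 1/(x − s) over [a, b]). A sketch checking this against the ε-regularized integral in (5.4.2); the endpoints and evaluation points below are arbitrary.

```python
import numpy as np

def trapezoid(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def hilbert_indicator(x, a, b, eps=1e-3):
    # epsilon-regularized form of (5.4.2) for psi = 1_{[a,b]}:
    # (1/pi) int_a^b (x - s)/((x - s)^2 + eps^2) ds.
    s = np.linspace(a, b, 200_001)
    return trapezoid((x - s) / ((x - s)**2 + eps**2), s) / np.pi

a, b = -1.0, 2.0
for x in [-3.0, 0.5, 4.0]:
    exact = np.log(np.abs((x - a) / (x - b))) / np.pi
    assert abs(hilbert_indicator(x, a, b) - exact) < 1e-3
```

Note the logarithmic growth near the endpoints a and b, which shows why part (a) needs ψ continuous.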

2. Integral equation for ψ. Assume V is differentiable and satisfies the assump-
tions of Theorem 44, so that µ_* has compact support. Show that if µ_* has a
density ψ as above, then it satisfies the integral equation

    Hψ(x) = (1/(2π)) V'(x) on supp(µ_*).    (5.4.4)

3. Fixed point equation for the resolvent. One solution to (5.4.4) uses the
Stieltjes transform G(z). Assume that V(x) is a polynomial of degree d ≥ 2.

(a) Show that G satisfies the quadratic equation

    G²(z) + V'(z) G(z) + P(z) = 0,    (5.4.5)

where P(z) is a polynomial of degree d − 2 whose coefficients are deter-
mined by the moments of µ_* of degree lower than d. The solution branch
is determined by the requirement that G(z) ∼ −1/z as z → ∞, which is
immediate from (5.4.1).
(b) Equation (5.4.5) may be solved by making further assumptions on the
form of µ_*. In particular, assume that V(z) is even and that the support of
µ_* is a single interval [−2a, 2a], and show that (5.4.5) simplifies to

    G(z) = Q(z) √(z² − 4a²) − (1/2) V'(z),    (5.4.6)

where Q(z) is a polynomial of degree d − 2 whose coefficients are deter-
mined by the condition that G(z) ∼ −1/z as z → ∞.

(c) Apply these ideas to compute the equilibrium measure for the quartic
potential
    V(x) = (1/2) x² + (g/4) x⁴.    (5.4.7)

Show that

    G(z) = (1/2 + g a² + (g/2) z²) √(z² − 4a²) − (1/2)(z + g z³),    (5.4.8)

where a² solves the quadratic equation

    3g a⁴ + a² − 1 = 0.    (5.4.9)

(d) Compute the associated density ψ(x) and plot it as g varies.
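Parts (c) and (d) can be checked numerically. By (5.4.3) and (5.4.8), the density is ψ(x) = (1/π)(1/2 + ga² + (g/2)x²)√(4a² − x²) on [−2a, 2a]; the value g = 1 below is an arbitrary choice for this sketch.

```python
import numpy as np

def trapezoid(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

g = 1.0
# a^2 is the positive root of the quadratic (5.4.9): 3g a^4 + a^2 - 1 = 0.
a2 = (-1 + np.sqrt(1 + 12 * g)) / (6 * g)
a = np.sqrt(a2)

# psi(x) = (1/pi) Q(x) sqrt(4a^2 - x^2) with Q(x) = 1/2 + g a^2 + (g/2) x^2,
# read off from (5.4.8) via the jump formula (5.4.3).
x = np.linspace(-2 * a, 2 * a, 400_001)
psi = (0.5 + g * a2 + 0.5 * g * x**2) * np.sqrt(np.maximum(4 * a2 - x**2, 0)) / np.pi

assert abs(3 * g * a2**2 + a2 - 1) < 1e-12   # (5.4.9) holds
assert np.all(psi >= 0)                       # psi is non-negative
assert abs(trapezoid(psi, x) - 1.0) < 1e-4    # psi is a probability density
```

Analytically, the normalization integral evaluates to a² + 3ga⁴, which equals 1 precisely because of (5.4.9).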

4. Establish the identity (1.3.11).


5. Show that the Catalan numbers enumerate the number of Dyck paths as
discussed below equation (1.3.12).

5.5 Notes
To include in improved version.
1. Fixed point equation for equilibrium measure.
2. Properties of Hilbert transform.
3. Convergence of k-point distribution to tensor product of equilibrium mea-
sure.
Chapter 6

Other random matrix ensembles

In this chapter we discuss other random matrix ensembles that differ funda-
mentally from GUE, GOE and GSE. For this discussion we concentrate on real
and complex matrices. The first ensembles we consider are the real and com-
plex Ginibre ensembles¹, GinR(m, n) on R^{m×n} and GinC(m, n) on C^{m×n}. These
are ensembles of real and complex matrices of size m × n, without symmetry
conditions. Their densities are given by

    p_{Gin,R}(Y) DY = (1/Z_{R,m,n}) e^{−(1/4) Tr Y^T Y} DY,    p_{Gin,C}(X) DX = (1/Z_{C,m,n}) e^{−(1/2) Tr X*X} DX.

Thus, the entries are distributed as independent (real or complex) normal ran-
dom variables. The definitions of DY and DX in each case follow directly from the
volume forms associated to the length elements Tr(dY^T dY) and Tr(dX*dX).
When m = n we use the notation GinR(n) and GinC(n), and Z_{R,n} and Z_{C,n}.
The Ginibre ensembles allow us to define the Laguerre ensembles as transfor-
mations of GinC (m, n) and GinR (m, n). These are ensembles of positive (semi-)
definite matrices defined by X ∗ X where X ∼ GinC (m, n), GinR (m, n). The La-
guerre ensembles are often referred to as Wishart matrices and they get their
name from the close connection to Laguerre polynomials.
We end this chapter with a discussion of the so-called Jacobi ensembles. It
is important to note that these ensembles are not ensembles of Jacobi matrices;
rather, they get their name from their close connection to Jacobi polynomials.
Jacobi polynomials are orthogonal on the interval [−1, 1], and the Jacobi
ensembles have eigenvalues that all lie in the same interval [−1, 1].

1 Often, the term Ginibre ensemble is reserved for square matrices, but we find it convenient

to keep it for all rectangular matrices.


6.1 The Ginibre ensembles


Our first task is to generalize Weyl’s formula to the Ginibre ensembles GinR (n)
and GinC (n). To compute this, we use the Schur decomposition. The Schur
decomposition is often seen as a numerical tool to perform a spectral decom-
position of non-normal matrices. The eigenvalue decomposition is unstable to
compute: matrices with distinct eigenvalues are dense and so, computing a Jor-
dan block of a non-normal matrix is a precarious task when round-off errors are
present. An arbitrarily small perturbation will lead to an O(1) change in the
eigenvalue matrix.
Theorem 46. All matrices Y ∈ R^{n×n} and X ∈ C^{n×n} have decompositions

    Y = O S O^T,    X = U T U^*,

where O ∈ O(n), U ∈ U(n). Here T ∈ C^{n×n} is upper-triangular and S ∈ R^{n×n}
is block-upper triangular with blocks of size 1 or 2. These 2 × 2 blocks have the
form
    ( α  −γ )
    ( δ   α ),    α ∈ R,  δ, γ > 0.    (6.1.1)

Furthermore, if the eigenvalues are distinct with a given ordering, and the eigen-
vectors are normalized (say, first non-zero component is positive), the decompo-
sition is unique.
This can be proved by first performing an eigenvalue decomposition and,
second, performing a QR factorization of the eigenvector matrix. We now de-
scribe the QR decomposition algorithm, using Householder reflections, for real
matrices. Another numerically viable algorithm is the modified Gram–Schmidt
procedure. Both algorithms extend to complex matrices in a straightforward
way. Given a matrix Y ∈ R^{m×n}, Y = ( y_1  y_2  · · ·  y_n ), define a transforma-
tion Y ↦ P(Y)Y by

    P(Y)Y = ( |y_1| e_1  P_v y_2  · · ·  P_v y_n ),    P_v = I − 2vv^T,
    v = ṽ/|ṽ|,    ṽ = |y_1| e_1 − y_1.    (6.1.2)

If y_1 = 0, we use P = I. Let I_j be the j × j identity matrix, and let [Y]_{j,k} be
the lower-right j × k sub-block of Y. The QR factorization of a matrix Y is
then given via
    Y_0 = Y,
    Y_1 = Q_1 Y_0 := P(Y_0) Y_0,
    Y_2 = Q_2 Y_1 := ( I_1  0
                       0    P([Y_1]_{m−1,n−1}) ) Y_1,    (6.1.3)
    ⋮
    Y_j = Q_j Y_{j−1} := ( I_{j−1}  0
                           0        P([Y_{j−1}]_{m−j+1,n−j+1}) ) Y_{j−1}.
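The steps (6.1.2)-(6.1.3) can be sketched directly; a minimal, unoptimized implementation for real matrices:

```python
import numpy as np

def householder_qr(Y):
    # QR factorization by Householder reflections, following (6.1.2)-(6.1.3):
    # at step j, reflect the first column of the trailing sub-block onto e_1.
    m, n = Y.shape
    R = Y.astype(float).copy()
    Q = np.eye(m)
    for j in range(min(m, n)):
        y = R[j:, j]
        v = np.linalg.norm(y) * np.eye(len(y))[0] - y   # v-tilde of (6.1.2)
        if np.linalg.norm(v) > 0:
            v = v / np.linalg.norm(v)
            P = np.eye(len(y)) - 2 * np.outer(v, v)     # reflection P_v
            R[j:, j:] = P @ R[j:, j:]
            Q[:, j:] = Q[:, j:] @ P                     # accumulate Q
    return Q, R

rng = np.random.default_rng(2)
Y = rng.normal(size=(6, 4))
Q, R = householder_qr(Y)
assert np.allclose(Q @ Q.T, np.eye(6))         # Q in O(6)
assert np.allclose(np.tril(R[:4, :4], -1), 0)  # R upper-triangular
assert np.all(np.diag(R)[:4] >= 0)             # non-negative diagonal
assert np.allclose(Q @ R, Y)                   # Y = QR
```

Each reflection is symmetric and involutive, so accumulating Q column-block by column-block preserves the invariant QR = Y.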

Figure 6.1.1: The full QR decomposition in the case m > n. The shaded
columns and rows are removed to create the reduced QR decomposition.

It follows that R = Y_{min{m,n}} is upper-triangular and Y = QR where Q =
(Q_{min{m,n}} · · · Q_2 Q_1)^T. We arrive at the following.

Theorem 47. Every matrix Y ∈ R^{m×n}, X ∈ C^{m×n} has a factorization Y =
QR, X = UT such that Q ∈ O(m), U ∈ U(m), where R, T are upper-triangular
with non-negative diagonal entries. The factorization is unique if X (resp. Y) is
invertible. This is called the QR factorization, or decomposition, of the matrix.

This theorem gives the full QR decomposition. If m > n, then m − n
columns of Q, U are redundant, and m − n rows of R, T are as well; see
Figure 6.1.1. After dropping these columns and rows, one obtains the reduced
QR decomposition.
If m > n, one can count the number of degrees of freedom to see that neither
Q nor U could ever be distributed according to Haar measure on O(m) or U(m)
for Y ∼ GinR(m, n) or X ∼ GinC(m, n), respectively. So, we instead consider
the QR factorization of the augmented matrices

    ( X  X' )  and  ( Y  Y' ),    X' ∼ GinC(m, m − n),  Y' ∼ GinR(m, m − n),    (6.1.4)

for X' and Y' independent of X and Y, respectively. This can be performed
even if X and Y are deterministic matrices. So, in the real case, and similarly
in the complex case,

    Y ↦ ( Y  Y' ) = QR ↦ Q R̃ = Y,    R̃ := R ( I_n
                                                0 ),

so that Q times the first n columns of R recovers Y.
The following smoothness result for the Schur decomposition is non-classical,
so we state it explicitly.

Theorem 48. Let X(t), X : (−a, a) → F^{n×n}, a > 0, be a C^k matrix func-
tion. Assume X(0) has distinct eigenvalues. Then the induced factors X(t) ↦
(T(t), U(t)) or X(t) ↦ (S(t), O(t)) obtained by the Schur decomposition for
F = C or R are also C^k in a neighborhood of t = 0.

Finally, before we proceed to pushing forward measures via these decom-
positions, we prove an elementary result for the Ginibre ensembles using the QR
factorization.

Theorem 49. If X ∼ GinC(m, n), Y ∼ GinR(m, n), m ≥ n, then

    P(rank X < n) = 0  and  P(rank Y < n) = 0.

Proof. We use induction on n for the real case. The complex case is similar. If
n = 1, then a Gaussian vector in R^m is non-zero with probability one. If n > 1,
n ≤ m − 1, assume

    P(rank Y < n) = 0,    Y ∼ GinR(m, n).

Let b ∈ R^m be an independent Gaussian vector (b ∼ GinR(m, 1)). Then

    P( rank ( Y  b ) < n + 1 ) = E[ P( rank ( Y  b ) < n + 1 | Y ) ].

On a set of full probability, rank Y = n. For such a matrix, consider

    P( rank ( Y  b ) < n + 1 | Y ).

The rank is deficient only if the equation Y x = b has a solution x. Writing
Y = QR, this becomes Rx = Q^T b =: b̃, and b̃ ∼ GinR(m, 1). Since R ∈ R^{m×n}
is triangular and n < m, for Rx = b̃ to have a solution the last entry of b̃ must
vanish. Thus
    P( rank ( Y  b ) < n + 1 | Y ) = 0

almost surely. This proves the claim.
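A quick numerical illustration of Theorem 49 and of the induction step; the sizes m = 6, n = 4 and the standard-normal entries are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 6, 4
# A GinR(6,4)-like sample: iid real Gaussian entries (the variance does not
# matter for the rank statement).
Y = rng.normal(size=(m, n))
assert np.linalg.matrix_rank(Y) == n  # full column rank, almost surely

# Appending an independent Gaussian column increases the rank, almost surely,
# mirroring the induction step in the proof.
b = rng.normal(size=(m, 1))
assert np.linalg.matrix_rank(np.hstack([Y, b])) == n + 1
```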


Finally, we want to know that the probability of finding a Ginibre matrix
with an eigenvector that has a zero first component is zero.
Theorem 50. Assume X ∼ GinC (n), Y ∼ GinR (n). Then

P (∃λ ∈ C, v ∈ Cn , Xv = λv and v1 = 0) = 0,
P (∃λ ∈ C, v ∈ Rn , Y v = λv and v1 = 0) = 0.

Proof. We prove this for Y. The proof for X is similar. First, we write

    Y = ( y_0  y_1^T
          y_2  Y' ),
    y_0 ∼ GinR(1),  y_1, y_2 ∼ GinR(n − 1, 1),  Y' ∼ GinR(n − 1, n − 1),

all independent. Let

    E = { ∃λ ∈ C, v ∈ R^{n−1}, Y'v = λv and Y (0, v)^T = λ (0, v)^T }.

It then follows that

    P(∃λ ∈ C, v ∈ R^n, Y v = λv and v_1 = 0) = P(E) = E[P(E|Y')].

Then

    P(E|Y') = P( ∃v ∈ R^{n−1}, y_1^T v = 0, v is an eigenvector of Y' | Y' ).

For an eigenvalue λ_j of Y', let V_j = ( v^{(1)}, . . . , v^{(ℓ)} ), ℓ ≤ n − 1, be a basis of
eigenvectors for this eigenvalue. Then

    P( ∃{c_j} so that y_1^T ( Σ_{j=1}^ℓ c_j v^{(j)} ) = 0 | Y' ) = 0,  a.s.

Indeed, given Y', perform a QR factorization V_j = QR, and consider
y_1^T QRc = 0, c = (c_1, . . . , c_ℓ)^T. But as R has rank ℓ, this amounts to the con-
dition that (at least) one component of the Gaussian vector x^T = y_1^T Q has to
vanish, a probability zero event. A union bound over all the distinct eigenvalues
proves the result.
This theorem has an interesting implication. If a matrix Y has a repeated
eigenvalue and two linearly independent eigenvectors, then an eigenvector can
be constructed that has a zero first component. By the theorem, this event
occurs with probability zero for GinR (n), GinC (n). And so, if one shows that
Y is diagonalizable with probability one, then Y has distinct eigenvalues with
probability one. Nevertheless, it is actually easier to directly show this.
Theorem 51. Assume X ∼ GinC(n), Y ∼ GinR(n). Then

    P(X has distinct eigenvalues) = 1,
    P(Y has distinct eigenvalues) = 1.

Proof. We show that the Vandermonde determinant squared, Δ(Λ)², is a polynomial
in the entries of the matrix. Let λ_1, . . . , λ_n be the eigenvalues of Y and consider
V = (V_{jk}), V_{jk} = λ_j^{k−1}. Then

    Δ(Λ)² = det(V)² = det(V^T V),    (V^T V)_{jk} = Σ_{ℓ=1}^n λ_ℓ^{j+k−2} = Tr Y^{j+k−2}.

Now consider a rectangle R = [a, b]^{n²} ⊂ R^{n²}, and assume that

    ∫_R 1_{{Y ∈ R^{n×n} : |Δ(Λ)| = 0}} DY > 0.

Since the set of matrices with distinct eigenvalues is dense, Δ(Λ) ≠ 0 for some Y.
But the only way for the zero locus of a polynomial in n² variables to have positive
n²-dimensional Lebesgue measure is for the polynomial to vanish identically. The
theorem follows.

6.1.1 Schur decomposition of GinC (n)


Theorems 46 and 48 allow us to compute the distribution induced on U and T
in the Schur decomposition. We first identify the tangent space.

Theorem 52. Assume X ∈ C^{n×n} has distinct eigenvalues. Then

    T_X C^{n×n} ≅ R^{n(n−1)} ⊕ P T_I U(n).

Proof. A straightforward computation, using the differentiability of the Schur
decomposition, gives

    Ẋ = U(Ṫ + [U*U̇, T])U*,    (6.1.5)

after using X(t), t ∈ (−a, a), a > 0, differentiating and evaluating at t = 0. It
follows that S := U*U̇ is skew-Hermitian. We then decompose T = Λ + T₊ and
S = S₀ + S₋ + S₊, where the ± refers to strict upper- and lower-triangular parts.
We can first solve for S₋ in the following way. Define S₋ ↦ ζ ∈ C^{n(n−1)/2}
by ordering the entries of S₋ using the following relations:

    (i, j) < (i', j')  if  i − j < i' − j',
    (i, j) < (i', j')  if  i − j = i' − j' and i < i'.    (6.1.6)

The first inequality orders entries by which diagonal they lie on. The second
orders within the diagonal. Then, taking strictly lower-triangular parts,

    (U* Ẋ U)₋ = [S₋, Λ] + ([S₋, T₊])₋.

With the chosen ordering, the matrix M₋ representing

    ζ ↦ ([S₋, T₊])₋ =: M₋ ζ    (6.1.7)

is strictly lower triangular. Thus, provided λ_i ≠ λ_j for i ≠ j, we can solve this
for S₋. If we then make the choice that S₀ = 0, we can clearly solve for Ṫ once
S is known. Finally, by adjusting Ṫ accordingly, it is clear that any Ẋ can be
achieved with S₀ = 0.
Now, we give the analogue of Weyl's formula for C^{n×n}.

Theorem 53. For X ∈ C^{n×n},

    DX = |Δ(Λ)|² DT DU,    (6.1.8)

where DT = Π_{j=1}^n dReλ_j dImλ_j Π_{j<k} dReT_{jk} dImT_{jk} and DU refers to the same
distribution as that of the eigenvectors of GUE(n).
Proof. We first map X to C^{n²} in a consistent way. We order X₋ using (6.2.2),
giving ζ^{X₋}. We then order diagonal(X) in the usual way. Then, finally, we order
X₊ using

    (i, j) ≺ (i', j') if and only if (j, i) < (j', i'),

giving ζ^{X₊}, and X ↦ [ζ^{X₋}, η, ζ^{X₊}]^T. We use ζ^{S₋} and ζ^{T₊} in the same way for S₋
and T₊, respectively. It then follows that, after ordering U*dXU,

    U* dX U = ( Λ̃ + M₋  0  0 ) ( dζ^{S₋} )
              ( D         I  0 ) ( dΛ      )
              ( M₊        0  I ) ( dζ^{T₊} ),

where Λ̃ is defined through ζ^{S₋} ↦ [S₋, Λ], which is diagonal, S_{jk} ↦ (λ_k −
λ_j)S_{jk}. M₋ and D are matrices whose exact form is irrelevant. Decomposing
all differentials into real and imaginary parts and computing the metric tensor

    Tr dX* dX,

we find (6.1.8) by using det(Λ̃ + M₋) = Π_{j<k}(λ_k − λ_j) and computing the
associated volume form. Here one has to use that if A : C^n → C^n induces B :
R^{2n} → R^{2n} (by separating real and imaginary parts), then det B = |det A|².
Theorem 54. The Schur decomposition of GinC(n) is given by

    p_{Gin,C}(X) DX = (1/Z_{C,n}) e^{−(1/2) Tr T*T} |Δ(Λ)|² DT DU.    (6.1.9)

Note that this implies that the strict upper-triangular entries of T are all iid
complex normal random variables.

6.1.2 QR decomposition of GinC (m, n)


We now consider the distribution induced on U and T by GinC(m, n). Following
the discussion around (6.1.4), we assume n ≥ m. We follow the push forward of
the distributions under the algorithm in (6.1.3). If X ∼ GinC(m, n), and we replace
Q_j with U_j and Y_j with X_j in (6.1.3), then X_j and U_j are independent for every
j: the length of a Gaussian vector is independent of its angle, and UX is
independent of U ∈ U(m) if U is independent of X. Therefore, for X = UT,
U is independent of T.

From the discussion in Section 3.2 it follows that the induced volume form
on T is

    ∝ e^{−(β/4) Tr T*T} Π_{j=1}^m T_{jj}^{2m−2j+1} DT,    β = 2,

where DT refers to standard Lebesgue measure on R₊^m × C^{m(m−1)/2 + m(n−m)}.
Note that all the strictly upper-triangular entries are standard complex normal
random variables and the entries on the diagonal are all chi-distributed. To
understand the distribution of U, all we need to use is that, for O ∈ U(m),
OX ∼ GinC(m, n) if X ∼ GinC(m, n). Then factorize

    X = UT,    OX = U'T'.

From the uniqueness of the QR factorization (on the set of full probability where X
is invertible), T = T' and U = O*U'. But U and U' have the same distribution,
and this distribution must therefore be invariant under left multiplication by
any element of U(m). We conclude that U is distributed according to Haar measure
on U(m) [27], and, up to proportionality constants, under the QR factorization,

    e^{−(β/4) Tr X*X} DX  →  e^{−(β/4) Tr T*T} Π_{j=1}^{ñ} T_{jj}^{2m−2j+1} DT DŨ,    ñ = min{m, n},

where DŨ is defined in (2.5.8). The normalization constant is easily computed
in terms of Γ-functions. This can be seen as an equality when m ≤ n. For
m ≥ n, we add additional degrees of freedom to find DŨ, and so this is the
push-forward under a random transformation.

6.1.3 Eigenvalues and eigenvectors of GinR (n)


Computing the analogue of Weyl's formula for GinR(n) is much more compli-
cated. This comes from the fact that complex eigenvalues must arise in complex
conjugate pairs. Furthermore, for finite n there is a non-zero probability that
the matrix will have k real eigenvalues. Thus the distribution of the eigenval-
ues is not absolutely continuous with respect to Lebesgue measure on C. We
first compute the tangent space, under the assumption of k real eigenvalues.

Theorem 55. Assume that Y has exactly k real eigenvalues. Assume further
that the real parts of all the eigenvalues of Y(0) = Y in the closed upper-half plane
are distinct. Finally, assume that each 2 × 2 block in the real Schur factorization
has γ ≠ δ in (6.1.1). Then

    T_Y R^{n×n} ≅ R^{n(n−1)/2} ⊕ o(n).

Proof. Assume Y(t) is a smooth curve in R^{n×n} such that Y(t) has k real eigen-
values for all t. As before, we have the relation

    Ẏ = O(Ṡ + [O^T Ȯ, S])O^T.

We need to show that the entries of Ṡ and Ȯ are uniquely determined by this
relation. We assume that S is block upper-triangular: its diagonal carries the
2 × 2 blocks R_1, . . . , R_ℓ followed by the real eigenvalues λ_1, . . . , λ_k, with
arbitrary entries above the (block) diagonal and zeros below it, where

    R_j = ( α_j  −γ_j
            δ_j   α_j ).

where ` = (n − k)/2 and n − k is assumed to be even. The ordering is fixed by


αj < αj+1 and λj < λj+1 . We also refer to the location of all the imposed zeros
in S as the generalized lower-triangular part of S, denoted LG (S). Similarly,
UG (S) = (LG (S T ))T and DG (S) = S − UG (S) − LG (S). So, we have

LG (OT Ẏ O) = LG ([A, S]) , AT = −A.

After careful consideration, we find

LG ([A, S]) = LG ([LG (A), UG (S)] + [LG (A), DG (S)])

by noting that

[A, S] = [LG (A), LG (S)] + [DG (A), LG (S)] + [UG (A), LG (S)]
+ [LG (A), DG (S)] + [DG (A), DG (S)] + [UG (A), DG (S)]
+ [LG (A), UG (S)] + [DG (A), UG (S)] + [UG (A), UG (S)],

LG (S) = 0, and any term involving only DG and UG or only UG does not con-
tribute to LG ([A, S]). Then, it is a non-trivial but straightforward calculation
to find that LG ([DG (A), DG (S)]) = 0. This gives a linear system of equations
for LG (A). Since it will be of use in computing the metric tensor below, we
compute the determinant of this matrix in the following lemma.
Lemma 20. There exists a trivial mapping L_G(A) → ξ ∈ R^{n(n−1)/2−ℓ} defined by ordering the elements of L_G(A) so that when M is the matrix representation for ξ ↦ L_G([A, S]) we have
\[
\det M = \Delta_k(\Lambda) := \Big(\prod_{1\le i<j\le k} (\lambda_j - \lambda_i)\Big)\Big(\prod_{1\le i<j\le \ell} \Delta^{(1)}_{ij}\Big)\Big(\prod_{1\le i\le k,\,1\le j\le \ell} \Delta^{(2)}_{ij}\Big),
\]
where λ_1, …, λ_k are the real eigenvalues, μ_j = α_j + iβ_j, β_j > 0, are the complex eigenvalues (in the upper half plane) and
\[
\Delta^{(1)}_{ij} = |\mu_j - \mu_i|^2\, |\mu_j - \bar\mu_i|^2, \qquad \Delta^{(2)}_{ij} = |\mu_j - \lambda_i|^2.
\]

Proof of Lemma 20. The important aspect of this is to choose the ordering. First split
\[
L_G(A) = \begin{pmatrix} A^{(1,1)} & 0 \\ A^{(2,1)} & A^{(2,2)} \end{pmatrix}.
\]
We order the 2 × 2 blocks of A^{(1,1)} according to (6.2.2). Within each block we use this same ordering. We then order the entries of A^{(2,2)} according to (6.2.2). Finally, we order the 1 × 2 blocks of A^{(2,1)} according to (6.2.2) and within each block we use this same ordering. This defines L_G(A) ↦ ξ ∈ R^{n(n−1)/2−ℓ}. Define L = L_G([L_G(A), U_G(S)]) and decompose L into L^{(i,j)}, i = 1, 2, j = 1, 2, in the same way as for L_G(A). From the reasoning² that went into (6.1.7), we have that the (i, j) block of L^{(1,1)} depends only on blocks (i′, j′) of A^{(1,1)} for (i′, j′) > (i, j) and entries in A^{(2,1)}. Similarly, the (i, j) entry of L^{(2,2)} depends only on entries (i′, j′) of A^{(2,2)} for (i′, j′) > (i, j) and entries in A^{(2,1)}. Lastly, one checks that block (i, j) of L^{(2,1)} depends only on blocks (i′, j′) of A^{(2,1)} for (i′, j′) > (i, j). This gives a strong form of strict lower-triangularity for ξ ↦ L.
We now show that ξ ↦ K := L_G([L_G(A), D_G(S)]) is block-diagonal in a way that does not overlap with this strict lower-triangularity. First, decompose K into K^{(i,j)}, i = 1, 2, j = 1, 2, in the same way as for L_G(A) and L. We obtain
the following relations for blocks of size 2 × 2, 1 × 1 and 1 × 2, respectively:
\[
K^{(1,1)}_{ij} = A^{(1,1)}_{ij} R_j - R_i A^{(1,1)}_{ij}, \qquad
K^{(2,2)}_{ij} = A^{(2,2)}_{ij} (\lambda_j - \lambda_i), \qquad
K^{(2,1)}_{ij} = A^{(2,1)}_{ij} R_j - \lambda_i A^{(2,1)}_{ij}.
\]

The determinants of each of these linear transformations are

(αj − αi )4 + (δj γj − δi γi )2 + 2(αj − αi )2 (δj γj + δi γi ),


(λj − λi ),
(αj − λi )2 + δj γj ,

respectively. For the non-real eigenvalues in the upper-half plane, we have μ_j = α_j + i\sqrt{\gamma_j \delta_j}. This proves the lemma.

From this lemma, with our assumptions, we can uniquely find LG (A). But
as A is skew-symmetric, we have ` entries left undetermined. So, we consider

\[
(O^T \dot Y O)_{2j,2j} = (\dot S + [A, S])_{2j,2j} = \dot\alpha_j + (\gamma_j - \delta_j)\,\dot s_{2j+1,2j} + f_{2j}(L_G(A)),
\]
\[
(O^T \dot Y O)_{2j+1,2j+1} = (\dot S + [A, S])_{2j+1,2j+1} = \dot\alpha_j + (\delta_j - \gamma_j)\,\dot s_{2j+1,2j} + f_{2j+1}(L_G(A)), \tag{6.1.10}
\]
for some functions f_j. As L_G(A) is known, this gives a solvable system for \dot\alpha_j and \dot s_{2j+1,2j}, with determinant 2^\ell \prod_{j=1}^{\ell} (\gamma_j - \delta_j). The remaining entries of \dot S are
given through the relation

Ṡ = OT Ẏ O − [A, S].

We now can compute the volume form.


2 The commutator of lower-triangular and upper triangular matrices at entry (i, j) only

depends on entries (i0 , j 0 ) of the lower-triangular matrix for j 0 ≤ j with i = i0 and i0 ≥ i with
j = j 0 . With strict triangularity, fewer dependencies occur.
6.2. SVD AND THE LAGUERRE (WISHART) ENSEMBLES 91

Theorem 56. For Y ∈ R^{n×n} with k real eigenvalues,
\[
DY = 2^\ell\, |\Delta_k(\Lambda)| \Big(\prod_{j=1}^{\ell} |\gamma_j - \delta_j|\Big)\, DS\, DO, \tag{6.1.11}
\]
where
\[
DS = \prod_{j=1}^{\ell} d\alpha_j\, d\gamma_j\, d\delta_j\, \prod_{j=1}^{k} d\lambda_j \prod_{s \in U_G(S)} ds, \tag{6.1.12}
\]

and DO refers to the same distribution as that of the eigenvectors of GOE(n),


i.e., Haar measure on O(n).
When we restrict to k real eigenvalues we use the notation
\[
p_{\mathrm{Gin},R,k}(Y)\, DY = \frac{1}{Z^{(k)}_{R,n}}\, e^{-\frac14 \operatorname{Tr} Y^T Y}\, \mathbf{1}_{\{Y \text{ has } k \text{ real eigenvalues}\}}\, DY. \tag{6.1.13}
\]

Theorem 57. The real Schur decomposition of GinR (n) given k real eigenvalues is
\[
p_{\mathrm{Gin},R,k}(Y)\, DY = \frac{2^\ell}{Z^{(k)}_{R,n}}\, e^{-\frac14 \operatorname{Tr} S^T S}\, |\Delta_k(\Lambda)| \Big(\prod_{j=1}^{\ell} |\gamma_j - \delta_j|\Big)\, DS\, DO. \tag{6.1.14}
\]

Note that this implies that the generalized upper-triangular entries of S are
all iid normal random variables.
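The parity constraint behind this section (complex eigenvalues occur in conjugate pairs, so n − k is always even) is easy to observe numerically. A minimal sketch, assuming a NumPy sampler for GinR(n); the trial count and tolerance are illustrative choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def real_eigenvalue_counts(n, trials=200, tol=1e-8):
    """Sample GinR(n) (iid real Gaussian entries) and count real eigenvalues.

    Complex eigenvalues of a real matrix come in conjugate pairs, so the
    count k always has the same parity as n."""
    counts = []
    for _ in range(trials):
        Y = rng.standard_normal((n, n))
        lam = np.linalg.eigvals(Y)
        counts.append(int(np.sum(np.abs(lam.imag) < tol)))
    return counts

counts = real_eigenvalue_counts(4)
assert all((4 - k) % 2 == 0 for k in counts)   # n - k is even
assert all(0 <= k <= 4 for k in counts)
```

Different values of k occur with positive probability at finite n, which is exactly why the eigenvalue distribution is not absolutely continuous with respect to Lebesgue measure on C.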

6.1.4 QR decomposition of GinR (m, n)


It follows from the discussion in Section 6.1.2 that up to proportionality con-
stants

\[
e^{-\frac{\beta}{4}\operatorname{Tr} Y^T Y}\, DY \ \xrightarrow{\ QR\ }\ e^{-\frac{\beta}{4}\operatorname{Tr} R^T R}\, \prod_{j=1}^{\tilde n} R_{jj}^{m-j}\, DR\, DQ, \qquad \beta = 1,\ \ \tilde n = \min\{m,n\},
\]
j=1

where DR refers to standard Lebesgue measure on \mathbb R_+^{m} \times \mathbb R^{m(m-1)/2 + m(n-m)}, and DQ is Haar measure on O(m).
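Uniqueness of the QR factorization used in this change of variables requires fixing the sign convention R_jj > 0. Numerical libraries do not enforce it, but it is a one-line normalization; a sketch (`qr_positive` is our own helper name):

```python
import numpy as np

def qr_positive(Y):
    """Reduced QR factorization Y = QR, normalized so that diag(R) > 0."""
    Q, R = np.linalg.qr(Y)
    s = np.sign(np.diag(R))
    s[s == 0] = 1.0                 # guard against exact zeros
    return Q * s, s[:, None] * R    # Q diag(s), diag(s) R; s^2 = 1

rng = np.random.default_rng(1)
Y = rng.standard_normal((5, 3))
Q, R = qr_positive(Y)
assert np.allclose(Q @ R, Y)
assert np.all(np.diag(R) > 0)
assert np.allclose(Q.T @ Q, np.eye(3))
```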

6.2 Singular value decomposition and the Laguerre (Wishart) ensembles
Next, we turn to understanding the singular value decomposition of GinC (m, n)
and GinR (m, n). This is done by means of describing the eigenvalue and eigen-
vector distributions of the so-called Laguerre ensembles. The following gives the
singular value decomposition.
92 CHAPTER 6. OTHER RANDOM MATRIX ENSEMBLES

Theorem 58. Every matrix Y ∈ R^{m×n} and X ∈ C^{m×n} has a decomposition
\[
Y = Q \Sigma O^T, \qquad X = U \Sigma V^*,
\]
where Q ∈ O(m), O ∈ O(n), U ∈ U(m), V ∈ U(n) and Σ ∈ R^{m×n} is a diagonal matrix with non-negative diagonal entries.
The diagonal entries Σ_j of Σ are called the singular values of the matrix in question.
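Numerically, the decomposition of Theorem 58 is available directly; a quick check of its stated properties (the dimensions below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
Y = rng.standard_normal((5, 3))
U, sig, Vt = np.linalg.svd(Y, full_matrices=True)   # Y = U Sigma V^T
Sigma = np.zeros((5, 3))
Sigma[:3, :3] = np.diag(sig)

assert np.allclose(U @ Sigma @ Vt, Y)
assert np.all(sig >= 0) and np.all(np.diff(sig) <= 0)  # non-negative, sorted
assert np.allclose(U.T @ U, np.eye(5)) and np.allclose(Vt @ Vt.T, np.eye(3))
```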

6.2.1 The Cholesky decomposition


To compute the singular value decomposition of GinR (m, n) and GinC (m, n) we
follow the approach of Edelman [10] and first compute the Cholesky decompo-
sition.
Theorem 59. Every strictly positive definite matrix A ∈ R^{n×n} (or C^{n×n}) has a unique decomposition
\[
A = LL^T \quad (A = LL^* \text{ in the complex case}),
\]
where L ∈ R^{n×n} (or C^{n×n}) is a lower-triangular matrix with positive diagonal entries.
Proof. We concentrate on the real case and we first show uniqueness. Assume A = LL^T = L_1 L_1^T for two different factorizations. Then
\[
L_1^{-1} L = L_1^T L^{-T}, \qquad L^{-T} := (L^{-1})^T.
\]
Since the non-singular upper- and lower-triangular matrices form groups, the left-hand (right-hand) side is lower-triangular (upper-triangular). Therefore L_1^{-1} L is a diagonal matrix that is equal to its own transpose-inverse: e_j^T L_1^{-1} L e_j = ±1. Positivity of the diagonal entries gives L_1 = L. Now, by Gaussian elimination without pivoting,³ A = L̃U where L̃ is lower-triangular and U is upper-triangular. Here L̃ has ones on the diagonal. We know that e_j^T A e_j > 0 and therefore e_j^T L̃ U e_j = U_{jj} > 0. Let U_d = diag(U)^{1/2} and A = L̃ U_d U_d^{-1} U. It follows from the symmetry of A that L = L̃ U_d gives the Cholesky factorization. Similar considerations apply for A ∈ C^{n×n}.
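Theorem 59 can be exercised numerically: `numpy.linalg.cholesky` returns exactly the lower-triangular factor with positive diagonal (the construction of A below is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((6, 4))
A = X.T @ X                        # strictly positive definite (X full rank a.s.)
L = np.linalg.cholesky(A)

assert np.allclose(L @ L.T, A)
assert np.all(np.diag(L) > 0)
assert np.allclose(L, np.tril(L))  # lower-triangular
```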

Change of variables for GinC (m, n)


We now consider the change of variables that closely resembles the singular value decomposition, but differs in a fundamental way. For X ∈ C^{m×n}, full rank, define
\[
X = UT \ \overset{QR}{\mapsto}\ (U, T) \ \overset{\text{Inv.\ Cholesky}}{\mapsto}\ (U, A = T^*T) = (U, V \Lambda V^*) \ \overset{\text{Spectral map}}{\mapsto}\ (U, \Lambda, V). \tag{6.2.1}
\]
3 Pivoting is not required for strictly positive definite matrices because the upper left ` × `

blocks are non-singular for every `.



This is a well-defined, invertible mapping, provided that the first row of V con-
tains non-vanishing entries. It will follow from Section 6.2.2 that the probability
of this is one. But we emphasize that for this decomposition X 6= U ΛV ∗ , gen-
erally. We now show that if X ∼ GinC (m, n) then U, Λ, V are independent and
we then characterize the distribution of Λ and V .
Lemma 21 (Spectral variables for Her+ (n)). If A ∈ Her+ (n) is non-singular
with distinct eigenvalues then
TA Her+ (n) ∼
= Rn ⊕ P TI U(n).
Proof. The proof is essentially the same as Lemma 6, just using that the set of
strictly positive definite matrices is open.
We define DA in the natural way as the volume form induced by the metric
tensor Tr dA2 . We then have the analogous formula to Theorem 15:
DA = |4(Λ)|2 DΛ DU.
Next, we compute the volume form associated with the change Cholesky change
of variables.
Lemma 22. Let A = LL^* be the Cholesky decomposition for a non-singular A ∈ Her_+(n). Let DL be the natural volume form induced by Tr(dL^* dL). Then
\[
DA = 2^n \prod_{j=1}^{n} L_{jj}^{2(n-j)+1}\, DL.
\]

Proof. We prove this by identifying that the Jacobian of the transformation is


triangular, and computing the diagonal entries. We first compute, for j ≥ k,
\[
\frac{\partial A}{\partial \operatorname{Re} L_{jk}} = e_j e_k^T L^* + L e_k e_j^T, \qquad
\frac{\partial A}{\partial \operatorname{Im} L_{jk}} = i\big(e_j e_k^T L^* - L e_k e_j^T\big).
\]

Examine the structure of these matrices. Since e_j e_k^T L^* is the matrix that contains the kth row of L^* in its jth row, with all other rows being zero, only the jth row and jth column of ∂A/∂Re L_{jk} are non-zero: the jth column contains the entries L_{kk}, L_{k+1,k}, … of the kth column of L (starting in row k), the jth row contains their conjugates, and the (j, j) entry is 2Re L_{jk}. A similar picture holds for ∂A/∂Im L_{jk}, with 2Im L_{jk} in the (j, j) entry. We define a mapping Re L ↦ ξ ∈ R^{n(n+1)/2} and Im L ↦ η ∈ R^{n(n−1)/2} by the ordering of the non-zero elements of L:
\[
(j,k) < (j',k') \ \text{if } j < j', \qquad (j,k) < (j,k') \ \text{if } k < k'. \tag{6.2.2}
\]

This orders first by row, and then by columns within each row. Assume (i, ℓ) < (j, k), j ≥ k, i ≥ ℓ. Then
\[
\frac{\partial A_{i\ell}}{\partial \operatorname{Re}L_{jk}} = 0, \qquad \frac{\partial A_{i\ell}}{\partial \operatorname{Im}L_{jk}} = 0,
\]
because either i < j, or ℓ < k if i = j. And, it is clear that
\[
\frac{\partial A_{jk}}{\partial \operatorname{Re}L_{jk}} = L_{kk},\ j > k; \qquad \frac{\partial A_{jk}}{\partial \operatorname{Re}L_{jk}} = 2L_{kk},\ j = k; \qquad \frac{\partial A_{jk}}{\partial \operatorname{Im}L_{jk}} = L_{kk},\ j > k.
\]

Then, if we define L ↦ ζ where ζ = (ξ_1, η_1, ξ_2, η_2, …)^T, we find that the Jacobian is triangular and
\[
\Big|\frac{\partial A}{\partial L}\Big| = 2^n \prod_{j=1}^n L_{jj}^{2(n-j)+1}.
\]

This lemma allows one to understand transformations of GinC (m, n). Follow the transformation (6.2.1), with X ∈ C^{m×n}, m ≥ n, using T = L^*, and note that
\[
T = \begin{pmatrix} \tilde T \\ 0 \end{pmatrix},
\]
where T̃ is an upper-triangular matrix with positive diagonal entries. Then
\[
DX \ \xrightarrow{\ QR\ }\ \prod_{j=1}^n T_{jj}^{2(m-j)+1}\, DT\, D\tilde U = 2^{-n} \prod_{j=1}^n T_{jj}^{2(m-n)}\, DA\, D\tilde U \tag{6.2.3}
\]
\[
= \prod_{j=1}^n \sigma_j^{2(m-n)+1}\, |\Delta(\Sigma^2)|^2\, D\Sigma\, D\tilde U\, DV. \tag{6.2.4}
\]

Here DŨ is Haar measure on U(n) and DV represents the same distribution as
the eigenvectors of GUE(n). Also, DΣ is Lebesgue measure on Rn+ . As noted

below (6.2.1), this is not the singular value decomposition for X but, we claim, it is in a distributional sense. Let X ∼ GinC (m, n), m ≥ n, and consider
\[
X = U_1 \Sigma V, \qquad \tilde X := U \Sigma V,
\]
where (U, V, Σ) are independent with joint distribution (6.2.4), U_1 is the matrix of left singular vectors for X, and U is independent of U_1. Then X̃ = U U_1^* X, but then by the invariance of U, for measurable sets S_1 ⊂ U(m), S_2 ⊂ C^{m×n},
\[
\mathbb P(U U_1^* \in S_1) = \mathbb P(U \in S_1 U_1) = \mathbb P(U \in S_1),
\]
\[
\mathbb P(U U_1^* \in S_1, X \in S_2) = \mathbb P(U \in S_1 U_1, X \in S_2) = \int_{S_2} \Big( \int_{S_1 U_1} DU \Big)\, p_{\mathrm{Gin},\mathbb C}(X)\, DX = \mathbb P(U \in S_1)\, \mathbb P(X \in S_2).
\]

So, U U1∗ is independent of X and therefore X̃ must have the same distribution
as X. This implies the singular value decomposition of GinC (m, n) is given by
(6.2.4).
Remark 60. If one wants to match dimensions, then DU should be replaced
by the push-forward of uniform measure on SCm−1 × SCm−2 × · · · × SCm−n−1 onto
U(m) via Householder reflections.

Change of variables for GinR (m, n)


Similar considerations show, for Y = QΣO^T ∼ GinR (m, n), that the singular value distribution is given (up to a normalization constant) by
\[
DY \ \xrightarrow{\ QR\ }\ \prod_{j=1}^n \Sigma_j^{m-n}\, |\Delta(\Sigma^2)|\, D\Sigma\, DQ\, DO,
\]
where DO is Haar measure on O(n), DQ is Haar measure on O(m) and DΣ is as before.
In both cases, GinR (m, n) or GinC (m, n), if m < n, then the same distributional description holds with the addition of n − m point masses at zero for Σ_1, …, Σ_{n−m} (depending on one's ordering convention) to indicate the rank deficiency of the matrix.

6.2.2 Bidiagonalization of Ginibre


Consider X ∼ GinC (m, n) or Y ∼ GinR (m, n), m ≥ n, and consider the sample
covariance matrices X ∗ X/m and Y T Y /m.
Theorem 61. Let x_1, …, x_n be the unordered eigenvalues of X^*X/m (β = 2) or Y^T Y/m (β = 1). The following gives their joint marginal distribution
\[
\frac{1}{Z_n(\beta)} \prod_{j=1}^n x_j^{\frac{\beta}{2}(m-n+1)-1}\, \prod_{j<k} |x_j - x_k|^{\beta}\ e^{-\frac{\beta m}{4} \sum_{j=1}^n x_j}\ \mathbf 1_{\{x_j \ge 0 \text{ for all } j\}}\ dx_1 \cdots dx_n. \tag{6.2.5}
\]

We now consider the reduction of GinC (m, n) and GinR (m, n) to bidiagonal
matrices and, in the process, find a generalization of (6.2.5) to general β. This
is sometimes called Golub–Kahan bidiagonalization. The aim here is not to
preserve eigenvalues, but to preserve singular values as transformations are per-
formed. So, we can perform independent Householder reflections from the left
and the right. Recall the definition of P (Y ) from (6.1.2). Let Y ∼ GinR (m, n)
for m ≥ n. Consider the iterative method
\[
\begin{aligned}
Y_0 &= Y, \\
\tilde Y_1 &= Q_1 Y_0 := P(Y_0)\, Y_0, \\
Y_1^T &= \tilde Q_1 \tilde Y_1^T := \begin{pmatrix} 1 & 0 \\ 0 & P\big([\tilde Y_1^T]_{n-1,m-1}\big) \end{pmatrix} \tilde Y_1^T, \\
\tilde Y_2 &= Q_2 Y_1 := \begin{pmatrix} 1 & 0 \\ 0 & P\big([Y_1]_{m-1,n-1}\big) \end{pmatrix} Y_1, \\
&\ \ \vdots \\
\tilde Y_j &= Q_j Y_{j-1} := \begin{pmatrix} I_{j-1} & 0 \\ 0 & P\big([Y_{j-1}]_{m-j+1,n-j+1}\big) \end{pmatrix} Y_{j-1}, \\
Y_j^T &= \tilde Q_j \tilde Y_j^T := \begin{pmatrix} I_j & 0 \\ 0 & P\big([\tilde Y_j^T]_{n-j,m-j}\big) \end{pmatrix} \tilde Y_j^T.
\end{aligned} \tag{6.2.6}
\]

The algorithm terminates when j = n − 1, returning Yn−1 which is a bidiagonal


matrix. Let (Y_{n−1})_{jj} = c_j and (Y_{n−1})_{j,j+1} = d_j for j = 1, 2, …. We find that (Q_j, Q̃_j, c_j, d_j)_{j≥1} is an independent set of random variables, with Q_j being defined by v_j ∈ S_R^{n−j} and Q̃_j being defined by ṽ_j ∈ S_R^{n−j−1} (Q_{n−1} gives a sign flip of one entry). Under this change of variables, following the arguments for (3.3.3), we have
\[
DY \propto \prod_{j=1}^{n} c_j^{m-j}\, dc_j \prod_{k=1}^{n-1} d_k^{n-k-1}\, dd_k \prod_{l=1}^{n-2} D\tilde\omega_l \prod_{p=1}^{n-1} D\omega_p,
\]
where D\tilde\omega_l and D\omega_p denote uniform measure on S_R^l and S_R^p, respectively. Similarly, by applying this algorithm to X ∼ GinC (m, n) we find
\[
DX \propto \prod_{j=1}^{n} c_j^{2(m-j)+1}\, dc_j \prod_{k=1}^{n-1} d_k^{2(n-k)-1}\, dd_k \prod_{l=1}^{n-2} D\tilde\omega_l \prod_{p=1}^{n-1} D\omega_p,
\]

where Dω̃l and Dωp denote uniform measure on SCl and SCp , respectively.
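The Golub–Kahan iteration (6.2.6) is easy to implement with explicit Householder reflectors. The sketch below is our own minimal implementation (with `householder(x)` playing the role of P, sending a vector to a multiple of e₁); it checks that the singular values are preserved and the output is bidiagonal:

```python
import numpy as np

def householder(x):
    """Orthogonal, symmetric H with H x = -sign(x[0]) ||x|| e1."""
    e1 = np.zeros_like(x)
    e1[0] = 1.0
    s = 1.0 if x[0] >= 0 else -1.0
    u = x + s * np.linalg.norm(x) * e1
    u /= np.linalg.norm(u)
    return np.eye(len(x)) - 2.0 * np.outer(u, u)

def bidiagonalize(Y):
    """Alternate left/right Householder reflections as in (6.2.6)."""
    B = Y.astype(float).copy()
    m, n = B.shape
    for j in range(n):
        H = np.eye(m)
        H[j:, j:] = householder(B[j:, j])
        B = H @ B                      # zero column j below the diagonal
        if j < n - 2:
            G = np.eye(n)
            G[j + 1:, j + 1:] = householder(B[j, j + 1:])
            B = B @ G                  # zero row j beyond the superdiagonal
    return B

rng = np.random.default_rng(4)
Y = rng.standard_normal((6, 4))
B = bidiagonalize(Y)
assert np.allclose(np.linalg.svd(B, compute_uv=False),
                   np.linalg.svd(Y, compute_uv=False))
assert np.allclose(np.tril(B, -1), 0, atol=1e-12)   # bidiagonal shape
assert np.allclose(np.triu(B, 2), 0, atol=1e-12)
```

The orthogonal multiplications preserve singular values while destroying eigenvalues, which is exactly the point of the construction above.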

6.2.3 Limit theorems


Circular law
We now describe the global eigenvalue distribution for GinC (n) as n → ∞. We
have the following distribution on the (unordered) eigenvalues Z = (z1 , z2 , . . . , zn )

from (6.1.8)
\[
\hat P^{(n)}(z_1, \ldots, z_n)\, Dz = \frac{1}{Z_n}\, |\Delta(Z)|^2\, e^{-\frac12 \sum_{j=1}^n |z_j|^2} \prod_{j=1}^n d\operatorname{Re} z_j\, d\operatorname{Im} z_j.
\]

Owing to the calculations that result in Theorem 35 we have


\[
\hat P^{(n)}(z_1,\ldots,z_n) = \frac{1}{n!} \det\big(\hat K_n(z_j, z_k)\big)_{1\le j,k\le n},
\]
\[
\hat R^{(n)}_m(z_1,\ldots,z_m) = \det\big(\hat K_n(z_j, z_k)\big)_{1\le j,k\le m}, \qquad 1 \le m \le n,
\]
\[
\hat K_n(z,w) = \sum_{j=0}^{n-1} \Phi_j(z)\,\overline{\Phi_j(w)}, \qquad \Phi_j(z) = c_j\, z^j\, e^{-\frac14 |z|^2},
\]

where \hat R^{(n)}_m is the m-point correlation function defined by (4.1.4) with \hat P^{(n)} instead of P^{(n)} and d\operatorname{Re}z_j\, d\operatorname{Im}z_j instead of dx_j. To show that this is the correct choice for \hat K_n and to determine c_j we need to show that \{\Phi_j\}_{j=0}^{n-1} are orthogonal and choose c_j > 0 to normalize the functions. Consider, for j < k,
\[
\int_{\mathbb C} \Phi_j(z)\,\overline{\Phi_k(z)}\, d\operatorname{Re}z\, d\operatorname{Im}z = c_j \bar c_k \int_{\mathbb C} \bar z^{\,k-j}\, |z|^{2j}\, e^{-\frac12|z|^2}\, d\operatorname{Re}z\, d\operatorname{Im}z
= c_j \bar c_k \int_0^\infty \left( \int_0^{2\pi} (\cos\theta - i \sin\theta)^{k-j}\, d\theta \right) r^{k+j+1}\, e^{-\frac12 r^2}\, dr = 0.
\]

If j = k we find
\[
\int_{\mathbb C} |\Phi_j(z)|^2\, d\operatorname{Re}z\, d\operatorname{Im}z = |c_j|^2 \int_{\mathbb C} |z|^{2j}\, e^{-\frac12 |z|^2}\, d\operatorname{Re}z\, d\operatorname{Im}z,
\]
and, using the substitution r^2 = 2s,
\[
\int_{\mathbb C} |z|^{2j} e^{-\frac12|z|^2}\, d\operatorname{Re}z\, d\operatorname{Im}z = 2\pi \int_0^\infty r^{2j+1} e^{-\frac12 r^2}\, dr = 2^{j+1}\pi \int_0^\infty s^j e^{-s}\, ds = 2^{j+1}\pi\,\Gamma(j+1) = 2^{j+1} j!\,\pi,
\]
so
\[
c_j = \frac{1}{2^{j/2+1/2}\sqrt{\pi\, j!}}, \qquad c_j\, \frac{1}{\sqrt{2}\sqrt{j+1}} = c_{j+1}.
\]
So, we find a simple two-term recurrence formula
\[
\Phi_{j+1}(z) = \frac{z}{\sqrt 2 \sqrt{j+1}}\, \Phi_j(z), \qquad \Phi_0(z) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac14|z|^2}.
\]
The corresponding Christoffel–Darboux-type formula is
\[
\hat K_n(z,w) = \frac{1}{2\pi}\, \frac{e^{z\bar w/2}\, \Gamma(n, z\bar w/2)}{(n-1)!}\; e^{-\frac14(|z|^2+|w|^2)},
\]

where \Gamma(n,z) = \int_z^\infty t^{n-1} e^{-t}\, dt is the incomplete Gamma function. To see this, let f_n(z) = e^z \Gamma(n,z); then
\[
f_n^{(j)}(0) = (n-1)!, \quad j = 0, 1, \ldots, n-1, \qquad f_n^{(j)}(0) = 0, \quad j \ge n,
\]
so that
\[
f_n(z) = \sum_{j=0}^{n-1} \frac{(n-1)!}{j!}\, z^j.
\]

Define the rescaled empirical spectral measure
\[
\hat L_n(Dz) = \frac1n \sum_{k=1}^n \delta_{\lambda_k/\sqrt{2n}}(Dz), \qquad Dz = d\operatorname{Re}z\, d\operatorname{Im}z.
\]
It then follows, for f ∈ C_0(\mathbb C), by (4.1.3), that
\[
\mathbb E\left[\int f(z)\, \hat L_n(Dz)\right] =: \int f(z)\, \mathbb E \hat L_n(Dz) = 2 \int f(z)\, \hat K_n\big(z\sqrt{2n}, z\sqrt{2n}\big)\, Dz.
\]

We then perform the asymptotic analysis of this density. Consider
\[
\Gamma(n, z\bar z/2)\ \xrightarrow{\,z \mapsto z\sqrt{2n}\,}\ \Gamma\big(n, n|z|^2\big) = \int_{n|z|^2}^\infty t^{n-1} e^{-t}\, dt.
\]
Then
\[
\int_{n|z|^2}^\infty t^{n-1} e^{-t}\, dt = n^n \int_{|z|^2}^\infty t^{n-1} e^{-nt}\, dt = n^n \int_{|z|^2}^\infty t^{-1} e^{-ng(t)}\, dt,
\]
where g(t) = t − log t. The stationary point here is t = 1: g′(1) = 0 and g″(1) = 1. So, if |z| ≤ 1 − ε, the stationary point is in the interval of integration and
\[
n^n \int_{|z|^2}^\infty t^{-1} e^{-ng(t)}\, dt = e^{-n}\, n^{n-1/2}\sqrt{2\pi}\,\big(1 + O(n^{-1})\big) = (n-1)!\,\big(1 + O(n^{-1})\big)
\]
uniformly as n → ∞ by Stirling's approximation. Then, for |z| ≥ 1 + ε, integrating by parts gives
\[
I_n(z) := \int_{|z|^2}^\infty t^{n-1} e^{-nt}\, dt = \frac1n |z|^{2n-2} e^{-n|z|^2} + \frac{n-1}{n}\int_{|z|^2}^\infty t^{n-2} e^{-nt}\, dt \le \frac1n |z|^{2n-2} e^{-n|z|^2} + \frac{n-1}{n|z|^2}\, I_n(z).
\]
Therefore, for n sufficiently large,
\[
I_n(z) \le |z|^{2n-2}\, e^{-n|z|^2}.
\]
From these estimates, the following lemma follows.
From these estimates, the following lemma follows.

Lemma 23. Fix 0 < ε < 1. As n → ∞, uniformly for |z| ≤ 1 − ε,
\[
2\hat K_n\big(z\sqrt{2n}, z\sqrt{2n}\big) = \frac1\pi + O(n^{-1}).
\]
As n → ∞, uniformly for |z| ≥ 1 + ε,
\[
2\hat K_n\big(z\sqrt{2n}, z\sqrt{2n}\big) = O(n^{-1}).
\]
This shows that (see Exercise 4.9)
\[
\mathbb E \hat L_n(Dz) \to \frac1\pi\, \mathbf 1_{\{|z| \le 1\}}\, Dz
\]
weakly. This is the averaged circular law.
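The circular law is easy to observe at moderate n. A Monte Carlo sketch, following the rescaling by √(2n) used above (the matrix size and tolerances are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
# GinC(n): density proportional to exp(-Tr X*X / 2), i.e. Re and Im entries iid N(0,1)
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
z = np.linalg.eigvals(X) / np.sqrt(2 * n)   # rescaled eigenvalues

frac_inside = np.mean(np.abs(z) <= 1.0)
mean_sq = np.mean(np.abs(z) ** 2)           # uniform measure on the disk gives 1/2
assert frac_inside > 0.95
assert abs(mean_sq - 0.5) < 0.05
```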

Marchenko–Pastur law
Again, consider X ∼ GinC (m, n) or Y ∼ GinR (m, n), m ≥ n, and consider the sample covariance matrices X^*X/m and Y^T Y/m, and let x_1, …, x_n be their unordered eigenvalues. Define the empirical spectral measure
\[
L_n(dx) = \frac1n \sum_{j=1}^n \delta_{x_j}(dx).
\]
Assume further that n/m → d ∈ (0, 1]. The Marchenko–Pastur law states that
\[
\mathbb E L_n(dx) \to p_{\mathrm{MP}}(x; d)\, dx := \frac{\sqrt{|(\lambda_+ - x)(x - \lambda_-)|}}{2\pi d\, x}\, \mathbf 1_{[\lambda_-, \lambda_+]}(x)\, dx, \qquad \lambda_\pm = \big(1 \pm \sqrt d\big)^2,
\]
weakly as n → ∞. If d = 0, the limiting law is δ_1(dx), by the Law of Large Numbers.
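A numerical sanity check of the Marchenko–Pastur support and first moment (dimensions and tolerances below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 800, 400                      # d = n/m = 1/2
Y = rng.standard_normal((m, n))
x = np.linalg.eigvalsh(Y.T @ Y / m)  # eigenvalues of the sample covariance
d = n / m
lam_minus, lam_plus = (1 - np.sqrt(d)) ** 2, (1 + np.sqrt(d)) ** 2

# eigenvalues concentrate on [lam_-, lam_+]; the first moment of MP(d) is 1
assert np.all(x > lam_minus - 0.2) and np.all(x < lam_plus + 0.2)
assert abs(x.mean() - 1.0) < 0.05
```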
Chapter 7

Sampling random matrices

7.1 Sampling determinantal point processes


7.2 Sampling unitary and orthogonal ensembles
7.3 Brownian bridges and non-intersecting Brownian paths

Chapter 8

Additional topics

8.1 Joint distributions at the edge


8.2 Estimates of U(n)
8.3 Iterations of the power method

Appendix A

The Airy function

A.1 Integral representation

There are several different conventions for the definition of the Airy function.
The standardization adopted here follows [1]. The Airy function, Ai(x) is defined
as the oscillatory integral

\[
\mathrm{Ai}(x) = \frac1\pi \int_0^\infty \cos\Big(\frac{t^3}{3} + xt\Big)\, dt = \frac1\pi \lim_{b\to\infty} \int_0^b \cos\Big(\frac{t^3}{3} + xt\Big)\, dt. \tag{A.1.1}
\]

This is an improper integral, that is, the integral converges conditionally, not
absolutely. In order to obtain an absolutely convergent integral, it is necessary
to work in the complex plane. Let C denote a contour in the complex plane
that starts and ends at the point at infinity, and is asymptotically tangent to
the rays e−iπ/3 and e+iπ/3 respectively. Then first setting t = −iz and then
deforming the contour, we have

\[
\mathrm{Ai}(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i\left(\frac{t^3}{3} + xt\right)}\, dt = \frac{1}{2\pi i} \int_C e^{\frac{z^3}{3} - xz}\, dz. \tag{A.1.2}
\]

The integral is absolutely convergent for every x ∈ C on the contour C. Indeed, with z = re^{iθ},
\[
\Big| e^{\frac{z^3}{3} - xz} \Big| \le e^{|x| r}\, e^{\frac{r^3}{3}\cos(3\theta)} \sim e^{-\frac{r^3}{3}}\, e^{|x| r} \tag{A.1.3}
\]
as z → ∞ along the rays θ = ±π/3. Thus, Ai(x) is an entire function.
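The absolutely convergent representation (A.1.2) also gives a practical way to evaluate Ai(x) for real x: parameterize C by the two rays arg z = ±π/3, on which the two contributions are complex conjugates. A sketch (the contour truncation R and step count are our own choices):

```python
import numpy as np

def airy_ai(x, R=8.0, N=40001):
    """Ai(x) for real x via the contour integral (A.1.2) along arg z = +/- pi/3.

    On the upper ray z = s e^{i pi/3} one has z^3/3 = -s^3/3, so the integrand
    decays like exp(-s^3/3); the lower ray contributes the complex conjugate,
    leaving (1/pi) times the imaginary part of the upper-ray integral."""
    s = np.linspace(0.0, R, N)
    w = np.exp(1j * np.pi / 3)
    f = np.imag(w * np.exp((s * w) ** 3 / 3.0 - x * s * w))
    h = s[1] - s[0]
    return h * (f.sum() - 0.5 * (f[0] + f[-1])) / np.pi   # trapezoid rule

assert abs(airy_ai(0.0) - 0.3550280538878172) < 1e-6   # Ai(0) = 3^{-2/3}/Gamma(2/3)
assert abs(airy_ai(1.0) - 0.1352924163128814) < 1e-6
```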


A.2 Differential equation


We differentiate under the integral sign (justified by (A.1.3)) and integrate by parts to obtain
\[
\mathrm{Ai}''(x) = \frac{1}{2\pi i} \int_C z^2\, e^{\frac{z^3}{3} - xz}\, dz = \frac{1}{2\pi i}\int_C \Big(\frac{d}{dz} e^{\frac{z^3}{3}}\Big) e^{-xz}\, dz = -\frac{1}{2\pi i}\int_C e^{\frac{z^3}{3}}\, \frac{d}{dz} e^{-xz}\, dz = x\,\mathrm{Ai}(x). \tag{A.2.1}
\]

Thus, Ai(x) satisfies the Airy differential equation

y 00 = xy, x ∈ C. (A.2.2)

This differential equation has a scaling invariance: if y(x) is a solution, so are


y(ωx) and y(ω 2 x) where ω = e2πi/3 is a cube root of unity. Thus, both Ai(ωx)
and Ai(ω 2 x) solve (A.2.2). Each of these solutions is linearly independent of
Ai(x). A solution to (A.2.2) that is real when x is real, and is linearly indepen-
dent from Ai(x), is obtained from the linear combination

\[
\mathrm{Bi}(x) = e^{\pi i/6}\,\mathrm{Ai}(\omega x) + e^{-\pi i/6}\,\mathrm{Ai}(\omega^2 x). \tag{A.2.3}
\]

A.3 Asymptotics
The functions Ai(x) and Bi(x) have the following asymptotic properties.
Asymptotics as x → ∞:
\[
\zeta = \frac23\, x^{3/2}, \qquad \mathrm{Ai}(x) \sim \frac{e^{-\zeta}}{2 x^{1/4}\sqrt\pi}, \qquad \mathrm{Bi}(x) \sim \frac{e^{\zeta}}{x^{1/4}\sqrt\pi}. \tag{A.3.1}
\]
Asymptotics as x → −∞:
\[
\zeta = \frac23\, (-x)^{3/2}, \qquad \mathrm{Ai}(x) \sim \frac{1}{(-x)^{1/4}\sqrt\pi}\, \sin\Big(\zeta + \frac\pi4\Big), \qquad \mathrm{Bi}(x) \sim \frac{1}{(-x)^{1/4}\sqrt\pi}\, \cos\Big(\zeta + \frac\pi4\Big). \tag{A.3.2}
\]
Appendix B

Hermite polynomials

In this chapter, µ denotes the weight function


\[
\mu(dx) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2}{2}}\, dx. \tag{B.0.1}
\]
The (probabilists') Hermite polynomials \{h_k\}_{k=0}^\infty are the monic family of polynomials of degree k orthogonal with respect to the weight µ.

B.1 Basic formulas


\[
h_k(x) = e^{\frac{x^2}{2}} \Big(-\frac{d}{dx}\Big)^{k} e^{-\frac{x^2}{2}}. \tag{B.1.1}
\]
\[
h_k(x) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb R} (-i\xi)^k\, e^{-\frac12 (\xi - ix)^2}\, d\xi. \tag{B.1.2}
\]
\[
\frac{1}{\sqrt{2\pi}} \int_{\mathbb R} h_k(x)\, h_l(x)\, e^{-\frac{x^2}{2}}\, dx = k!\,\delta_{kl}. \tag{B.1.3}
\]
\[
x h_k(x) = h_{k+1}(x) + k\, h_{k-1}(x), \qquad k \ge 1. \tag{B.1.4}
\]
\[
h_k'(x) = k\, h_{k-1}(x). \tag{B.1.5}
\]
\[
h_k''(x) - x h_k'(x) + k\, h_k(x) = 0. \tag{B.1.6}
\]
\[
\sum_{j=0}^{k-1} \frac{1}{j!}\, h_j(x) h_j(y) = \frac{h_k(x) h_{k-1}(y) - h_{k-1}(x) h_k(y)}{(k-1)!\,(x-y)}, \qquad x \ne y. \tag{B.1.7}
\]


Relation (B.1.1) may be treated as an alternate definition of the Hermite polynomials. On the other hand, since we have defined the Hermite polynomials as the monic orthogonal polynomials obtained by applying the Gram–Schmidt procedure to the set {1, x, x², …} in L²(R, µ), equation (B.1.1) may be verified as follows. First, it is clear from (B.1.1) that h_k(x) is a monic polynomial of degree k and that h_0(x) = 1, h_1(x) = x. By induction, if it has been established that property (B.1.1) defines the Hermite polynomials for j ≤ k − 1, then it is only necessary to show that the monic polynomial
\[
P_k(x) = e^{\frac{x^2}{2}} \Big(-\frac{d}{dx}\Big)^{k} e^{-\frac{x^2}{2}},
\]
is the same as h_k. The polynomial P_k is orthogonal to h_j, 0 ≤ j ≤ k − 1, because, using integration by parts,
\[
\int_{\mathbb R} P_k(x)\, h_j(x)\, \mu(dx) = \int_{\mathbb R} \Big(\frac{d}{dx}\Big)^{k} h_j(x)\, \mu(dx) = 0,
\]
since h_j has degree less than k. Since P_k is monic, it must be h_k. The same calculation serves to establish (B.1.3).
The integral representation (B.1.2) follows from the formula for the Fourier
transform of a Gaussian
Z
x2 1 ξ2
e− 2 = √ eiξx e− 2 dξ, (B.1.8)
2π R
and the identity (B.1.1).
The two-term recurrence relation follows from (3.4.18) and (B.1.3) (see also Remark 29). The coefficient a_k vanishes because equation (B.1.1) shows that h_k^2 is an even polynomial for all k. The coefficient b_k^2 may be rewritten
\[
b_k^2 = \frac{\int_{\mathbb R} x\, h_{k-1}(x) h_k(x)\, \mu(dx)}{\int_{\mathbb R} h_{k-1}^2\, \mu(dx)} = \frac{\int_{\mathbb R} x\, h_{k-1}(x) h_k(x)\,\mu(dx)}{\int_{\mathbb R} h_k^2\,\mu(dx)} \cdot \frac{\int_{\mathbb R} h_k^2\,\mu(dx)}{\int_{\mathbb R} h_{k-1}^2\,\mu(dx)} = 1 \cdot k, \tag{B.1.9}
\]
by (B.1.3).
Relation (B.1.5) is obtained by rewriting (B.1.1) in the form
\[
e^{-\frac{x^2}{2}}\, h_k(x) = (-1)^k \Big(\frac{d}{dx}\Big)^{k} e^{-\frac{x^2}{2}},
\]
differentiating both sides, and then multiplying by e^{x²/2}. Equation (B.1.6) is obtained by differentiating (B.1.5) and using (B.1.4). The proof of the Christoffel–Darboux identity is left as an exercise to the reader.
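The recurrence (B.1.4) gives a stable way to evaluate the monic polynomials, and the orthogonality (B.1.3) can be checked with Gauss–Hermite quadrature after rescaling the weight. A sketch (the node count is an arbitrary choice, large enough for exactness on these degrees):

```python
import math
import numpy as np

def hermite_h(k, x):
    """Monic (probabilists') Hermite h_k via x h_k = h_{k+1} + k h_{k-1}."""
    h_prev, h = np.ones_like(x), x.copy()
    if k == 0:
        return h_prev
    for j in range(1, k):
        h_prev, h = h, x * h - j * h_prev
    return h

# Gauss-Hermite targets the weight e^{-u^2}; substituting x = sqrt(2) u turns
# the integral against mu(dx) into (1/sqrt(pi)) * sum_i w_i f(sqrt(2) u_i).
nodes, weights = np.polynomial.hermite.hermgauss(60)
x = np.sqrt(2.0) * nodes
w = weights / np.sqrt(np.pi)
for k in range(5):
    for l in range(5):
        val = np.sum(w * hermite_h(k, x) * hermite_h(l, x))
        target = math.factorial(k) if k == l else 0.0
        assert abs(val - target) < 1e-8
```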

B.2 Hermite wave functions


The Hermite wave functions \{\psi_k\}_{k=0}^\infty are defined by
\[
\psi_k(x) = \frac{1}{\sqrt{k!}}\, \frac{e^{-x^2/4}}{(2\pi)^{1/4}}\, h_k(x), \qquad k = 0, 1, 2, \ldots \tag{B.2.1}
\]

The following properties of the Hermite wave-functions follow immediately from


the corresponding properties of the Hermite polynomials.

\[
\int_{\mathbb R} \psi_k(x)\,\psi_l(x)\, dx = \delta_{kl}. \tag{B.2.2}
\]
\[
x\,\psi_k(x) = \sqrt{k+1}\,\psi_{k+1}(x) + \sqrt k\,\psi_{k-1}(x). \tag{B.2.3}
\]
\[
\psi_k'(x) = -\frac x2\,\psi_k(x) + \sqrt k\,\psi_{k-1}(x). \tag{B.2.4}
\]
\[
\psi_k''(x) + \Big(k + \frac12 - \frac{x^2}{4}\Big)\,\psi_k(x) = 0. \tag{B.2.5}
\]

\[
\sum_{k=0}^{n-1} \psi_k(x)\,\psi_k(y) = \sqrt n\; \frac{\psi_n(x)\psi_{n-1}(y) - \psi_{n-1}(x)\psi_n(y)}{x - y}. \tag{B.2.6}
\]

B.3 Small x asymptotics


The following classical formulas capture the asymptotics of the Hermite poly-
nomials near the origin [1, §22.15].


\[
\lim_{n\to\infty} \frac{(-1)^n \sqrt n}{2^n\, n!}\; h_{2n}\!\Big(\frac{x}{\sqrt{2n}}\Big) = \frac{1}{\sqrt\pi}\,\cos x. \tag{B.3.1}
\]
\[
\lim_{n\to\infty} \frac{(-1)^n}{2^n\, n!}\; h_{2n+1}\!\Big(\frac{x}{\sqrt{2n}}\Big) = \sqrt{\frac2\pi}\,\sin x. \tag{B.3.2}
\]
Further, the convergence to the limit is uniform over x in a bounded interval.


In comparing equations (B.3.1) and (B.3.2) with a standard reference such as [1], the reader should note that there are two conventions in the definition of Hermite polynomials. The exponential weight in earlier sources was chosen to be e^{−x²}, which differs from our choice (B.0.1). The relation between the Hermite polynomials {H_n(x)} in [1] and those used here is:
\[
H_n(x) = 2^{n/2}\, h_n\big(x\sqrt2\big), \qquad h_n(x) = 2^{-n/2}\, H_n\!\Big(\frac{x}{\sqrt2}\Big). \tag{B.3.3}
\]

These formulas may be immediately translated into asymptotic formulas for the Hermite wave functions, using Stirling's approximation for the factorial:
\[
n! = \sqrt{2\pi n}\, \Big(\frac ne\Big)^{n} \big(1 + O(n^{-1})\big) \quad \text{as } n \to \infty. \tag{B.3.4}
\]

 
\[
\lim_{n\to\infty}\, (2n)^{1/4} (-1)^n\, \psi_{2n}\!\Big(\frac{x}{\sqrt{2n}}\Big) = \frac{1}{\sqrt\pi}\,\cos x. \tag{B.3.5}
\]
\[
\lim_{n\to\infty}\, (2n)^{1/4} (-1)^n\, \psi_{2n+1}\!\Big(\frac{x}{\sqrt{2n}}\Big) = \frac{1}{\sqrt\pi}\,\sin x. \tag{B.3.6}
\]

The asymptotic formulas (B.3.1) and (B.3.2) are proved by applying Laplace's method to the integral formula (B.1.2). We only explain how to prove (B.3.1) since equation (B.3.2) is similar. Since (−i)^{2n} = (−1)^n, we take the real part of (B.1.2) to find
\[
(-1)^n\, h_{2n}\!\Big(\frac{x}{\sqrt{2n}}\Big) = \sqrt{\frac2\pi}\; e^{\frac{x^2}{4n}} \int_0^\infty \xi^{2n}\, e^{-\frac{\xi^2}{2}}\, \cos\Big(\frac{x\xi}{\sqrt{2n}}\Big)\, d\xi = \frac{2^{n+1}\, n^{n+\frac12}}{\sqrt\pi}\; e^{\frac{x^2}{4n}} \int_0^\infty e^{-n(t^2 - 2\log t)}\, \cos xt\; dt, \tag{B.3.7}
\]
by rescaling ξ = \sqrt{2n}\, t. We now apply Laplace's method to the integral above.
The function g(t) = t2 − 2 log t has a single minimum on the interval (0, ∞) at
t = 1. At this point

g(1) = 1, g 0 (1) = 0, g 00 (1) = 4. (B.3.8)

Laplace’s approximation now yields


Z ∞ r
−ng(t) −n π
e cos xt dx ∼ e cos x, (B.3.9)
0 2n

which when combined with (B.3.7) implies
\[
(-1)^n\, h_{2n}\!\Big(\frac{x}{\sqrt{2n}}\Big) \sim 2^{n+\frac12}\, n^n\, e^{-n}\, \cos x. \tag{B.3.10}
\]
Equation (B.3.10) is equivalent to (B.3.1) by Stirling’s approximation (B.3.4).
Further, it is easy to check that the error is uniformly small for x in a bounded
set.
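The limit (B.3.5) can be tested directly by evaluating ψ_{2n} through the stable three-term recurrence (B.2.3); the value of n and the tolerance below are illustrative choices:

```python
import numpy as np

def psi(k, x):
    """Hermite wave functions via x psi_j = sqrt(j+1) psi_{j+1} + sqrt(j) psi_{j-1}."""
    p_prev = np.exp(-x ** 2 / 4) / (2 * np.pi) ** 0.25   # psi_0 from (B.2.1)
    if k == 0:
        return p_prev
    p = x * p_prev                                       # psi_1
    for j in range(1, k):
        p_prev, p = p, (x * p - np.sqrt(j) * p_prev) / np.sqrt(j + 1)
    return p

n = 200
x = np.linspace(0.0, 3.0, 7)
lhs = (2 * n) ** 0.25 * (-1) ** n * psi(2 * n, x / np.sqrt(2 * n))
assert np.max(np.abs(lhs - np.cos(x) / np.sqrt(np.pi))) < 0.05
```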

B.4 Steepest descent for integrals


Consider the integral
Z
f (t)e−nΦ(t) dt (B.4.1)
Γ

where f and Φ are entire functions. Assume Φ(t∗ ) = 0, Φ0 (t∗ ) = 0, Φ00 (t∗ ) 6= 0,
ImΦ(t) = 0 for t ∈ Γ. Further assume Γ is the path of steepest ascent for Φ,
i.e. the path of steepest descent for −Φ(t). Having ImΦ(t) = 0 is enough to

ensure that Γ is either the path of steepest ascent (locally) or steepest descent: let t = x(s) + iy(s) be a smooth local parameterization of Γ; then, by the Cauchy–Riemann equations,
\[
0 = \frac{d}{ds}\operatorname{Im}\Phi(t) = \operatorname{Im}\Phi_x(t)\,\frac{dx}{ds} + \operatorname{Im}\Phi_y(t)\,\frac{dy}{ds} = \operatorname{Im}\Phi_x(t)\,\frac{dx}{ds} + \operatorname{Re}\Phi_x(t)\,\frac{dy}{ds}.
\]
This shows that ∇Re Φ = (Re Φ_x, −Im Φ_x) is orthogonal to the normal vector (−y′(s), x′(s)), implying that Γ is in the direction of greatest increase/decrease for Re Φ.
Performing a Taylor expansion, we have
\[
\Phi(t) = \frac{\Phi''(t^*)}{2}\,(t - t^*)^2\,\big(1 + O(|t - t^*|)\big). \tag{B.4.2}
\]
The point is that Φ is locally quadratic at t^* and we use this to inform the change of variables. But if we naïvely looked to solve
\[
\Phi(t^* + v) = s^2,
\]
for v as a function of s, v(0) = 0, we would fail. The implicit function theorem fails because we have two solution branches! Instead we consider
\[
\frac{\Phi(t^* + sv)}{s^2} - 1 = 0 = \frac{\Phi''(t^*)}{2}\, v^2 - 1 + O(|s v^3|). \tag{B.4.3}
\]
We can choose v = ±R^{-1/2} e^{-i\phi/2} where \frac{\Phi''(t^*)}{2} = R e^{i\phi}. For either choice, we can apply the implicit function theorem (the derivative with respect to v, evaluated at (s, v) = (0, ±R^{-1/2}e^{-i\phi/2}), does not vanish). We use v = R^{-1/2}e^{-i\phi/2} to obtain v(s), and our local parameterization of Γ: t(s) = t^* + s\,v(s). We use this as a change of variables, within a neighborhood B(t^*, \epsilon) on which the implicit function theorem applies (here we assume the orientation of Γ is the same as the induced orientation on t((-\delta_1, \delta_2))):
\[
\int_{\Gamma \cap B(t^*,\epsilon)} f(t)\, e^{-n\Phi(t)}\, dt = \int_{-\delta_1}^{\delta_2} f(t^* + s v(s))\, e^{-ns^2}\, \big(v(s) + s v'(s)\big)\, ds, \qquad \delta_1, \delta_2 > 0. \tag{B.4.4}
\]

Now let δ = min{δ_1, δ_2}. It follows that on Γ_δ = Γ \ t((−δ, δ)), Φ(t) ≥ δ². Then
\[
\Big| \int_{\Gamma_\delta} f(t)\, e^{-n\Phi(t)}\, dt \Big| \le e^{-n\delta^2} \int_{\Gamma_\delta} |f(t)|\, e^{-n(\Phi(t) - \delta^2)}\, dt. \tag{B.4.5}
\]
For n ≥ 1, we have
\[
\int_{\Gamma_\delta} |f(t)|\, e^{-n(\Phi(t)-\delta^2)}\, dt \le \int_{\Gamma_\delta} |f(t)|\, e^{-(\Phi(t)-\delta^2)}\, dt := M. \tag{B.4.6}
\]
And therefore (B.4.5) is exponentially small in n, bounded by M e^{-n\delta^2}. Now, consider
\[
\int_{-\delta}^{\delta} f(t^* + s v(s))\, e^{-ns^2}\, \big(v(s) + s v'(s)\big)\, ds \tag{B.4.7}
\]
and we can directly apply Laplace's method. Taylor expand the function f(t^* + sv(s))(v(s) + sv'(s)) at s = 0; term-by-term integration gives an expansion in powers of n^{-1/2} with the leading-order term being
\[
\int_\Gamma f(t)\, e^{-n\Phi(t)}\, dt = \int_{-\delta}^{\delta} f(t^* + sv(s))\, e^{-ns^2}\, (v(s)+sv'(s))\, ds + O(n^{-\alpha})
= \int_{-\delta}^{\delta} f(t^*)\, v(0)\, (1 + O(s))\, e^{-ns^2}\, ds + O(n^{-\alpha}), \quad \text{for all } \alpha > 0. \tag{B.4.8}
\]

Performing the change of variables s = y/\sqrt{2n} we have
\[
\int_{-\delta}^{\delta} e^{-ns^2}\, ds = \frac{1}{\sqrt{2n}}\int_{-\sqrt{2n}\,\delta}^{\sqrt{2n}\,\delta} e^{-y^2/2}\, dy = \sqrt{\frac\pi n} + O(n^{-\alpha}), \quad \text{for all } \alpha > 0,
\]
\[
\int_{-\delta}^{\delta} |s|\, e^{-ns^2}\, ds = \frac{1}{2n}\int_{-\sqrt{2n}\,\delta}^{\sqrt{2n}\,\delta} |y|\, e^{-y^2/2}\, dy = \frac{C}{n} + O(n^{-\alpha}), \quad \text{for all } \alpha > 0.
\]
So, we have
\[
\int_\Gamma f(t)\, e^{-n\Phi(t)}\, dt = \sqrt{\frac{2\pi}{n}}\; f(t^*)\, |\Phi''(t^*)|^{-1/2}\, e^{-i\phi/2} + O(n^{-1}) \quad \text{as } n \to \infty. \tag{B.4.9}
\]
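Formula (B.4.9) is easily tested on a simple example with Γ = R, Φ(t) = t²/2 (so t* = 0, Φ″ = 1, φ = 0) and f(t) = cos t, where the integral is known exactly; the quadrature grid below is an arbitrary choice:

```python
import numpy as np

def integral(n, R=10.0, N=200001):
    """Trapezoid approximation of the integral of cos(t) exp(-n t^2 / 2) over R."""
    t = np.linspace(-R, R, N)
    y = np.cos(t) * np.exp(-n * t ** 2 / 2)
    h = t[1] - t[0]
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

for n in (50, 200):
    leading = np.sqrt(2 * np.pi / n)                  # f(t*) |Phi''|^{-1/2} sqrt(2 pi / n)
    exact = np.sqrt(2 * np.pi / n) * np.exp(-1 / (2 * n))
    assert abs(integral(n) - exact) < 1e-10           # quadrature is essentially exact
    assert abs(integral(n) - leading) / leading < 2 / n   # O(1/n) error in (B.4.9)
```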

B.5 Plancherel–Rotach asymptotics



Another asymptotic regime is obtained when we consider x = O(\sqrt n) and let n → ∞. Plancherel–Rotach asymptotics refer to the asymptotics of polynomials scaled by their largest zero. The limit is oscillatory or exponential depending on the range of x. This is to be expected: for each n, the polynomial h_n(x), and thus the wave function ψ_n(x), has n zeros. The largest and smallest of the zeros are approximately ±2\sqrt{n + 1/2}. The oscillatory regime is obtained when x(n + 1/2)^{-1/2} lies well within the interval (−2, 2). Outside this interval, the Hermite wave function decays exponentially fast. A more delicate calculation, using the Airy function, is required to understand the transition from oscillatory to exponential behavior.
We will prove a weaker version of the Plancherel–Rotach formulas, that suffices for our needs. These formulas are as follows.
Case 1. Oscillatory behavior.
\[
x = 2\cos\varphi, \qquad 0 < \varphi < \pi. \tag{B.5.1}
\]
\[
n^{\frac14}\, \psi_{n+p}\big(x\sqrt n\big) \sim \frac{1}{\sqrt{\pi\sin\varphi}}\, \cos\Big( n\Big(\varphi - \frac12 \sin 2\varphi\Big) + \Big(p + \frac12\Big)\varphi - \frac\pi4 \Big). \tag{B.5.2}
\]
The convergence is uniform for ϕ in a closed subset of (0, π).

Case 2. Exponential decay.
\[
|x| = 2\cosh\varphi, \qquad \varphi > 0. \tag{B.5.3}
\]
\[
n^{\frac14}\, \psi_{n+p}\big(x\sqrt n\big) \sim \frac{e^{(p+\frac12)\varphi}}{\sqrt{2\pi\sinh\varphi}}\; e^{-\frac n2 (\sinh 2\varphi - 2\varphi)}. \tag{B.5.4}
\]
The convergence is uniform for ϕ in a compact subset of (0, ∞). Observe that \sinh 2\varphi - 2\varphi > 0 when ϕ > 0, ensuring exponential decay.
Case 3. The transition region.
\[
x = 2 + \frac{s}{n^{2/3}}, \qquad s \in \mathbb C, \tag{B.5.5}
\]
\[
n^{\frac1{12} - \frac p2}\; \psi_{n+p}\big(x \sqrt n\big) = \sqrt{\frac{n!}{(n+p)!}} \left( \mathrm{Ai}(s) + n^{-1/3}\Big(\frac12 - p\Big)\mathrm{Ai}'(s) + O(n^{-2/3}) \right). \tag{B.5.6}
\]
The convergence is uniform for s in a compact subset of C.


All three asymptotic relations are obtained by the method of steepest descent for integrals. Assume x ∈ R. We fix an integer p, use the integral identity (B.1.2) with k = n + p, and rescale ξ = \sqrt n\, t to obtain
\[
h_{n+p}\big(x\sqrt n\big) = \big(-i\sqrt n\big)^{n+p} \sqrt{\frac{n}{2\pi}} \int_{-\infty}^\infty t^{n+p}\, e^{-\frac n2 (t - ix)^2}\, dt \tag{B.5.8}
\]
\[
= \big(-i\sqrt n\big)^{n+p}\sqrt{\frac n{2\pi}} \left( \int_0^\infty t^{n+p} e^{-\frac n2 (t-ix)^2}\, dt + (-1)^{n+p}\int_0^\infty t^{n+p} e^{-\frac n2 (t+ix)^2}\, dt \right)
\]
\[
=: \big(-i\sqrt n\big)^{n+p}\sqrt{\frac n{2\pi}}\, \big( I_{n,p}(x) + (-1)^{n+p}\, I_{n,p}(-x) \big). \tag{B.5.9}
\]

The integral I_{n,p}(x) may be rewritten in the form
\[
I_{n,p}(x) = \int_0^\infty t^p\, e^{-n g(t)}\, dt, \qquad g(t) = \frac12 (t - ix)^2 - \log t. \tag{B.5.10}
\]
As is usual, the first step is to determine the critical points where g 0 (t) = 0.
This reduces to the quadratic equation t2 − ixt − 1 = 0. The three distinct
asymptotic limits arise from the three distinct possibilities for the roots.
(a) |x| < 2. The function g has two critical points on the unit circle, given by
\[
t_\pm = \frac{ix \pm \sqrt{4 - x^2}}{2} = i\, e^{\mp i\varphi}, \tag{B.5.11}
\]
where x and ϕ are related through (B.5.1).

[Figure B.5.1: the deformation of (0, ∞) to the steepest descent contour Γ through the critical point t_+; the contour leaves t_+ at angle π/4 − ϕ/2.]

(b) |x| > 2. The two critical points lie on the imaginary axis, and may be written in the form
\[
t_\pm = i\, \frac{x \pm \sqrt{x^2 - 4}}{2} = i\, \operatorname{sgn}(x)\, e^{\pm\varphi}, \tag{B.5.12}
\]
where each branch of ϕ is defined through the relation (B.5.3).


(c) |x| = 2. The two critical points coalesce into a single value t = i. A further
blow-up is necessary to obtain the Airy asymptotics (B.5.6).
Let us first consider the integral $I_{n,p}(x)$ in case (a), and assume that $x > 0$ to be concrete. We deform the integral over $(0,\infty)$ to a contour $\Gamma$, the path of steepest descent that passes through the critical point $t_{+}$, as shown in Figure B.5.1. The existence of such a contour may be deduced by continuity, beginning with the observation that when $x = 0$, $\Gamma$ is simply the segment $(0,\infty)$ along the real line; in general, $\Gamma$ is given by the equation $\mathrm{Im}(g(t)) = \mathrm{Im}(g(t_{+}))$. It is not important for us to solve for the contour explicitly: all that is required is to determine the phase of $g''(t_{+})$, and to check that $0 \in \Gamma$ and that the integral over $(0,\infty)$ can be deformed to an integral over $\Gamma$.
It is easy to check that when $|x| < 2$,
$$g''(t_{+}) = 1 + \frac{1}{t_{+}^{2}} = 1 - e^{2i\varphi} = -i e^{i\varphi}\left(2\sin\varphi\right). \qquad (B.5.13)$$
Thus, we have
$$I_{n,p}(x) = \int_{0}^{\infty} t^{p}\, e^{-ng(t)}\,dt = e^{-ng(t_{+})} \int_{\Gamma} t^{p}\, e^{-n(g(t) - g(t_{+}))}\,dt$$
$$= e^{-ng(t_{+})}\, t_{+}^{p}\, \left.\frac{dt}{ds}\right|_{t_{+}} \int_{-\infty}^{\infty} e^{-\frac{n}{2}|g''(t_{+})|\,s^{2}}\,ds\; \left(1 + O(n^{-1})\right). \qquad (B.5.14)$$

Figure B.5.2: The contour used in case (b), consisting of two straight lines through the critical points $t_{\pm}$ on the imaginary axis; the dominant contribution comes from $t_{-}$.

In the second line, we have used the fact that $\mathrm{Im}(g(t) - g(t_{+})) = 0$ on $\Gamma$, and we have further approximated the integral over $\Gamma$ by an integral over the tangent to $\Gamma$ at $t_{+}$. More precisely, the approximation here is
$$g''(t_{+})(t - t_{+})^{2} = |g''(t_{+})|\, s^{2},$$
which implies
$$\left.\frac{dt}{ds}\right|_{t_{+}} = e^{i\left(\frac{\pi}{4} - \frac{\varphi}{2}\right)}. \qquad (B.5.15)$$

We now combine the values
$$t_{+} = i e^{-i\varphi}, \qquad g(t_{+}) = -\frac{e^{2i\varphi}}{2} + i\left(\varphi - \frac{\pi}{2}\right),$$
with (B.5.14) and (B.5.15) to obtain
$$I_{n,p}(x) \sim \sqrt{\frac{\pi}{n\sin\varphi}}\; e^{\frac{n}{2}\cos 2\varphi}\; e^{i\left(\frac{n}{2}\sin 2\varphi + \left(n + p + \frac{1}{2}\right)\left(\frac{\pi}{2} - \varphi\right)\right)}. \qquad (B.5.16)$$
Finally, since $x$ is real, we have $I_{n,p}(-x) = \overline{I_{n,p}(x)}$. We combine (B.5.9) with (B.5.16) to obtain
$$h_{n+p}(x\sqrt{n}) \sim n^{\frac{n+p}{2}}\, \sqrt{\frac{2}{\sin\varphi}}\; e^{\frac{n}{2}\cos 2\varphi}\, \cos\left(n\left(\varphi - \frac{1}{2}\sin 2\varphi\right) + \left(p + \frac{1}{2}\right)\varphi - \frac{\pi}{4}\right), \qquad (B.5.17)$$
where $x$ and $\varphi$ are related via (B.5.1). We now use (B.2.1) and Stirling's approximation (B.3.4) to obtain (B.5.2).
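The bulk asymptotic (B.5.17) can be tested directly. The sketch below (not part of the original text) assumes that the $h_{n}$ here coincide with the monic probabilists' Hermite polynomials, i.e., NumPy's `HermiteE` basis, and checks the case $p = 0$:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e

n, x = 100, 0.5                       # a bulk point: |x| < 2
phi = math.acos(x / 2)                # x = 2*cos(phi), the relation (B.5.1)

# Left side: h_n(x*sqrt(n)), evaluating the degree-n HermiteE polynomial.
coeffs = np.zeros(n + 1)
coeffs[n] = 1.0
lhs = hermite_e.hermeval(x * math.sqrt(n), coeffs)

# Right side: the asymptotic formula (B.5.17) with p = 0.
amplitude = n**(n / 2) * math.sqrt(2 / math.sin(phi)) * math.exp(n / 2 * math.cos(2 * phi))
argument = n * (phi - 0.5 * math.sin(2 * phi)) + 0.5 * phi - math.pi / 4
rhs = amplitude * math.cos(argument)

print(lhs / rhs)                      # approaches 1 as n grows
```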
The asymptotics in case (b) are obtained as follows. We make the change of variables (B.5.3), and deform the domain of integration for $I_{n,p}$ to the contour consisting of two straight lines shown in Figure B.5.2. To see that this is enough, note that $g''(t_{+}) > 0$ while $g''(t_{-}) < 0$. Thus the path of steepest ascent (descent for $-g$) through $t_{-}$ is the imaginary axis, while the steepest ascent path through $t_{+}$ makes a right angle with the imaginary axis, and one can check that the real part of $g(t)$ is strictly increasing along the contour $t_{+} + i[0,\infty)$. The dominant contribution comes from $t_{-}$. The remaining calculations are left to the reader. The final asymptotic relation is
$$h_{n+p}(x\sqrt{n}) = n^{\frac{n+p}{2}}\, \frac{e^{-\frac{n}{2}}}{\sqrt{\sinh\varphi}}\; e^{\left(p + \frac{1}{2}\right)\varphi - \frac{n}{2}\left(\sinh(2\varphi) - 2\varphi\right)}\, \left(1 + o(1)\right), \qquad (B.5.18)$$

which combines with (B.2.1) and Stirling’s approximation (B.3.4) to yield (B.5.4).
We now turn to case (c). We begin with the integral representation (B.5.8) and substitute
$$t = i + \frac{r}{n^{1/3}}, \qquad x\sqrt{n} = 2\sqrt{n} + \frac{s}{n^{1/6}} \quad \left(\text{i.e., } x = 2 + \frac{s}{n^{2/3}}\right), \qquad (B.5.19)$$
moving the integral over $\mathbb{R}$ to an integral over the line $i + \mathbb{R}$, to obtain
$$h_{n}(x\sqrt{n}) = \left(-i\sqrt{n}\right)^{n}\, \frac{n^{1/6}}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{nh(r)}\,dr, \qquad (B.5.20)$$

where
$$h(r) = \log\left(i + \frac{r}{n^{1/3}}\right) - \frac{1}{2}\left(i + \frac{r}{n^{1/3}} - i\left(2 + \frac{s}{n^{2/3}}\right)\right)^{2}$$
$$= \frac{1}{2} + \log i + \frac{s}{n^{2/3}} + \frac{1}{n}\left(isr + \frac{i}{3}r^{3}\right) + \frac{s^{2}}{2n^{4/3}} + O\!\left(n^{-4/3} r^{4}\right), \qquad (B.5.21)$$

using the Taylor expansion of the logarithm. The terms that depend on $s$ may be pulled out of the integral and we are left with
$$h_{n}(x\sqrt{n}) \approx \frac{n^{\frac{n}{2} + \frac{1}{6}}}{\sqrt{2\pi}}\; e^{\frac{n}{2}}\, e^{sn^{1/3}} \int_{-\infty}^{\infty} e^{isr + \frac{i}{3}r^{3}}\,dr, \qquad (B.5.22)$$
so that, using $\int_{-\infty}^{\infty} e^{i\left(sr + r^{3}/3\right)}\,dr = 2\pi\,\mathrm{Ai}(s)$,
$$h_{n}(x\sqrt{n}) = \sqrt{2\pi}\; n^{\frac{n}{2} + \frac{1}{6}}\; e^{\frac{n}{2}}\, e^{sn^{1/3}}\left(\mathrm{Ai}(s) + O(n^{-1/3})\right).$$

To make this rigorous, and to obtain the next term in the expansion, we take the integral
$$h_{n+p}(x\sqrt{n}) = \left(-i\sqrt{n}\right)^{n+p}\sqrt{\frac{n}{2\pi}} \int_{\mathbb{R}} t^{p}\, e^{-n\left(\frac{1}{2}(t - ix)^{2} - \log t\right)}\,dt \qquad (B.5.23)$$
and deform to $i + \mathbb{R}$. Then, setting $t = i + r$, we arrive at
$$h_{n+p}(x\sqrt{n}) = \left(-i\sqrt{n}\right)^{n+p}\sqrt{\frac{n}{2\pi}} \int_{\mathbb{R}} (i + r)^{p}\, e^{-n\left(\frac{1}{2}(r + i(1 - x))^{2} - \log(i + r)\right)}\,dr. \qquad (B.5.24)$$

This can in turn be deformed to the contour $\Gamma = e^{-i\pi/6}(-\infty, 0] \cup e^{i\pi/6}[0,\infty)$.

Then, we perform a Taylor expansion of the logarithm to find, for $H(r) = \frac{1}{2}(r + i(1 - x))^{2} - \log(i + r)$ and $x > 1$,
$$e^{-\frac{n}{2}(1 - x)^{2}} \int_{\Gamma} (i + r)^{p}\, e^{-nH(r)}\,dr \qquad (B.5.25)$$
$$= e^{\frac{in\pi}{2}} \int_{\Gamma \cap B(0,\delta)} e^{in(x - 2)r + in\frac{r^{3}}{3}} \left(i^{p} + rp\,i^{p-1} + O(r^{2}) - ni^{p}\frac{r^{4}}{4} + nO(r^{5})\right) dr + O\!\left(e^{-n(x - 1)\delta}\right). \qquad (B.5.26\text{--}B.5.27)$$
We compute
$$\int_{0}^{\delta} e^{-n\frac{y^{3}}{3}}\, y^{\alpha}\,dy = O\!\left(n^{-(\alpha + 1)/3}\right),$$

so that
$$e^{-\frac{n}{2}(1 - x)^{2}} \int_{\Gamma} (i + r)^{p}\, e^{-nH(r)}\,dr = e^{\frac{in\pi}{2}} \int_{\Gamma \cap B(0,\delta)} \left(i^{p} + p\,i^{p-1} r - ni^{p}\frac{r^{4}}{4}\right) e^{in(x - 2)r + in\frac{r^{3}}{3}}\,dr + O(n^{-1}).$$

Finally, it follows that if $x = 2 + sn^{-2/3}$, then setting $r = kn^{-1/3}$,
$$\int_{\Gamma \cap B(0,\delta)} e^{in(x - 2)r + in\frac{r^{3}}{3}}\, r^{\gamma}\,dr = 2\pi\, n^{-(\gamma + 1)/3}\, (-i)^{\gamma}\, \mathrm{Ai}^{(\gamma)}(s) + O(n^{-\alpha}) \quad \text{for all } \alpha > 0. \qquad (B.5.28)$$
This gives
$$\psi_{n+p}(x\sqrt{n}) = (2\pi)^{1/4}\, \frac{e^{-\frac{nx^{2}}{4}}\, e^{\frac{n}{2}(1 - x)^{2}}}{\sqrt{(n + p)!}}\; n^{\frac{n+p}{2} + \frac{1}{6}} \left[\mathrm{Ai}(s) + n^{-1/3}\left(-p\,\mathrm{Ai}'(s) - \frac{1}{4}\mathrm{Ai}^{(4)}(s)\right) + O(n^{-2/3})\right]. \qquad (B.5.29\text{--}B.5.30)$$
We compute
$$e^{-\frac{nx^{2}}{4}}\, e^{\frac{n}{2}(1 - x)^{2}} = e^{-\frac{n}{4}\left(4 + \frac{4s}{n^{2/3}} + \frac{s^{2}}{n^{4/3}}\right)}\; e^{\frac{n}{2}\left(1 + \frac{2s}{n^{2/3}} + \frac{s^{2}}{n^{4/3}}\right)} = e^{-\frac{n}{2}}\, e^{\frac{s^{2} n^{-1/3}}{4}}, \qquad (B.5.31)$$
and use Stirling's approximation to write
$$\frac{(2\pi)^{1/4}}{\sqrt{(n + p)!}}\; e^{-\frac{n}{2}}\, n^{\frac{n}{2}} = \frac{(2\pi n)^{1/4}\, e^{-\frac{n}{2}}\, n^{\frac{n}{2}}}{\sqrt{n!}}\; n^{-1/4}\, \sqrt{\frac{n!}{(n + p)!}} \qquad (B.5.32)$$
$$= n^{-1/4}\, \sqrt{\frac{n!}{(n + p)!}}\, \left(1 + O(n^{-1})\right). \qquad (B.5.33)$$

Continuing, we obtain
$$\psi_{n+p}(x\sqrt{n}) = n^{-1/12 + p/2}\, \sqrt{\frac{n!}{(n + p)!}}\; e^{\frac{s^{2} n^{-1/3}}{4}} \left[\mathrm{Ai}(s) + n^{-1/3}\left(-p\,\mathrm{Ai}'(s) - \frac{1}{4}\mathrm{Ai}^{(4)}(s)\right) + O(n^{-2/3})\right] \qquad (B.5.34\text{--}B.5.35)$$
$$= n^{-1/12 + p/2}\, \sqrt{\frac{n!}{(n + p)!}} \left[\mathrm{Ai}(s) + n^{-1/3}\left(-p\,\mathrm{Ai}'(s) + \frac{1}{4}\left(s^{2}\mathrm{Ai}(s) - \mathrm{Ai}^{(4)}(s)\right)\right) + O(n^{-2/3})\right] \qquad (B.5.36\text{--}B.5.37)$$
$$= n^{-1/12 + p/2}\, \sqrt{\frac{n!}{(n + p)!}} \left[\mathrm{Ai}(s) - n^{-1/3}\left(p + \frac{1}{2}\right)\mathrm{Ai}'(s) + O(n^{-2/3})\right], \qquad (B.5.38\text{--}B.5.39)$$
where we used $\mathrm{Ai}^{(4)}(s) = s^{2}\mathrm{Ai}(s) + 2\,\mathrm{Ai}'(s)$ in the last line.
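The identity $\mathrm{Ai}^{(4)}(s) = s^{2}\mathrm{Ai}(s) + 2\,\mathrm{Ai}'(s)$ follows by differentiating the Airy equation $\mathrm{Ai}''(s) = s\,\mathrm{Ai}(s)$ twice. It can also be checked from the Maclaurin series of $\mathrm{Ai}$; the following sketch (not part of the original text) does exactly that:

```python
import math

# Maclaurin coefficients of Ai: the Airy equation y'' = s*y forces
# c_m = c_{m-3} / (m*(m-1)), with c_0 = Ai(0), c_1 = Ai'(0), c_2 = 0.
N = 60
c = [0.0] * N
c[0] = 1.0 / (3 ** (2 / 3) * math.gamma(2 / 3))   # Ai(0)
c[1] = -1.0 / (3 ** (1 / 3) * math.gamma(1 / 3))  # Ai'(0)
for m in range(3, N):
    c[m] = c[m - 3] / (m * (m - 1))

def ai_deriv(s, d=0):
    """d-th derivative of Ai at s from the truncated series (moderate |s| only)."""
    total = 0.0
    for n in range(d, N):
        falling = math.prod(range(n - d + 1, n + 1))  # n! / (n - d)!
        total += c[n] * falling * s ** (n - d)
    return total

for s in (-1.0, 0.5, 1.5):
    print(abs(ai_deriv(s, 4) - (s**2 * ai_deriv(s) + 2 * ai_deriv(s, 1))))  # ~ 0
```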

B.5.1 Uniform bounds


We need uniform estimates when $x = 2 + sn^{-2/3}$ and $0 \le s \le n^{2/3}$, to allow us to transition into case (b), (B.5.12). We again use $\Gamma = e^{-i\pi/6}(-\infty, 0] \cup e^{i\pi/6}[0,\infty)$, so that
$$e^{-\frac{n}{2}(1 - x)^{2}} \int_{\Gamma} e^{-nH(r)}\,dr = e^{-\frac{n}{2}(1 - x)^{2}} \int_{\Gamma \cap B(0,\delta)} e^{-nH(r)}\,dr + O\!\left(e^{-n(x - 1)\delta}\right).$$

Then, we deform
$$e^{-\frac{n}{2}(1 - x)^{2}} \int_{\Gamma \cap B(0,\delta)} e^{-nH(r)}\,dr = e^{-\frac{n}{2}(1 - x)^{2}} \int_{C} e^{-nH(r)}\,dr, \qquad (B.5.40)$$
where $C$ is a horizontal contour connecting the endpoints of $\Gamma \cap B(0,\delta)$. Then on this contour,
$$\int_{C} e^{-\frac{n}{2}(1 - x)^{2}} \left(e^{-nH(r)} - e^{\frac{n}{2}(1 - x)^{2}}\, e^{in(x - 2)r + in\frac{r^{3}}{3}}\right) dr \qquad (B.5.41)$$
$$= \int_{C} e^{inr(x - 2)} \left[e^{-n\frac{r^{2}}{2} + inr + n\log(i + r)} - e^{in\frac{r^{3}}{3}}\right] dr. \qquad (B.5.42)$$

For $r \in C$, $\left|e^{inr(x - 2)}\right| \le e^{-n\delta(x - 2)/\sqrt{2}}$, and we find
$$\left|\int_{C} e^{-\frac{n}{2}(1 - x)^{2}} \left(e^{-nH(r)} - e^{\frac{n}{2}(1 - x)^{2}}\, e^{in(x - 2)r + in\frac{r^{3}}{3}}\right) dr\right| \le M n^{-2/3}\, e^{-n\delta(x - 2)/\sqrt{2}}. \qquad (B.5.43)$$

Then, define
$$f_{n}(s) = \frac{n^{1/3}}{2\pi} \int_{C} e^{in(x - 2)r + in\frac{r^{3}}{3}}\,dr,$$
and we have
$$\left|n^{1/12 - p/2}\, \sqrt{\frac{(n + p)!}{n!}}\; \psi_{n+p}(x\sqrt{n}) - e^{\frac{s^{2} n^{-1/3}}{4}}\, f_{n}(s)\right| \le M n^{-2/3}\, e^{\frac{s^{2} n^{-1/3}}{4} - n\delta(x - 2)/\sqrt{2}}. \qquad (B.5.44)$$
Choosing $\delta = \sqrt{2}$, we find $e^{\frac{s^{2} n^{-1/3}}{4} - n\delta(x - 2)/\sqrt{2}} \le e^{-\frac{3}{4}sn^{1/3}}$ for $0 \le s \le n^{2/3}$. Applying this with $p = 0$ and $p = -1$, we obtain that there exists a constant $M > 0$ such that for $0 \le s \le n^{2/3}$,
$$\left|n^{1/12}\, \psi_{n}(x\sqrt{n}) - e^{\frac{s^{2} n^{-1/3}}{4}}\, f_{n}(s)\right| \le M n^{-2/3}\, e^{-\frac{3}{4}sn^{1/3}}, \qquad \left|n^{1/12}\, \psi_{n-1}(x\sqrt{n}) - e^{\frac{s^{2} n^{-1/3}}{4}}\, f_{n}(s)\right| \le M n^{-2/3}\, e^{-\frac{3}{4}sn^{1/3}}. \qquad (B.5.45)$$

For $s \ge n^{2/3}$, we can use (B.5.4) to find
$$n^{1/4}\left(\left|\psi_{n}(x\sqrt{n})\right| + \left|\psi_{n-1}(x\sqrt{n})\right|\right) \le M e^{-\frac{3}{4}n}. \qquad (B.5.46)$$
Appendix C

Fredholm determinants

C.1 Definitions
Our purpose in this section is to explain the notion of a Fredholm determinant and resolvent in a simple and concrete setting. The ideas presented here originated in Fredholm's attempt to find a solution formula akin to Cramer's rule for linear integral equations. The notion of a determinant for an infinite-dimensional linear operator is, of course, of independent interest and has attracted the interest of many mathematicians. Simon's book provides an excellent overview of current knowledge [31].
Assume we are given a continuous kernel $K : [0,1] \times [0,1] \to \mathbb{R}$ and a continuous function $h : [0,1] \to \mathbb{R}$. Fix a spectral parameter $z \in \mathbb{C}$ and consider the linear integral equation
$$\varphi(x) - z \int_{0}^{1} K(x, y)\,\varphi(y)\,dy = h(x), \qquad x \in [0,1]. \qquad (C.1.1)$$

The integral equation (C.1.1) may be written in the more compact form
$$(I - zK)\varphi = h, \qquad (C.1.2)$$
where $I - zK$ denotes the bounded linear operator on $L^{2}([0,1])$ defined by
$$\varphi \mapsto (I - zK)\varphi, \qquad (I - zK)\varphi(x) = \varphi(x) - z \int_{0}^{1} K(x, y)\,\varphi(y)\,dy, \quad x \in [0,1]. \qquad (C.1.3)$$
Integral equations such as (C.1.1) may naturally be viewed as continuum limits of finite systems of linear equations. More precisely, we fix a positive integer $n$, consider a uniform grid $x_{j} = j/n$ with uniform weights $w_{j} = 1/n$, define the vector $h^{(n)}_{j} = h(x_{j})$ and matrix $K^{(n)}_{j,k} = w_{j} K(x_{j}, x_{k})$, $1 \le j, k \le n$, and discretize (C.1.1) by the linear system
$$\varphi^{(n)}_{j} - z \sum_{k=1}^{n} K^{(n)}_{j,k}\, \varphi^{(n)}_{k} = h^{(n)}_{j}, \qquad 1 \le j \le n. \qquad (C.1.4)$$


Equation (C.1.4) has a unique solution if and only if $\det(I_{n} - zK^{(n)}) \neq 0$. By linearity, the solution for arbitrary $h^{(n)}$ is determined by the resolvent $R^{(n)} = (I_{n} - zK^{(n)})^{-1}$, which is given by Cramer's rule:
$$R^{(n)}_{j,k} = (-1)^{j+k}\, \frac{\det(M_{jk})}{\det(I_{n} - zK^{(n)})}, \qquad (C.1.5)$$
where $M_{jk}$ denotes the matrix obtained from $I_{n} - zK^{(n)}$ by removing the $j$-th row and $k$-th column. Further, if $z_{j}$, $j = 1, \dots, n$, denote the zeros of the polynomial $\det(I_{n} - zK^{(n)})$, then the eigenvalues of $K^{(n)}$ are given by $1/z_{j}$. Both these notions may be extended to (C.1.1) via the Fredholm determinant.

Remark 62. If one wants to compute a Fredholm determinant numerically and $K$ is a smooth function, quadrature rules (such as Gaussian quadrature or Clenshaw–Curtis quadrature) can be used to choose the $x_{j}$ and $w_{j}$. See, for example, [?].

The basic observation that allows passage to the limit is the identity
basic observation that allows passage to the limit is the identity

$$\det(I_{n} - zK^{(n)}) = 1 - \frac{z}{n} \sum_{j_{1}=1}^{n} K(x_{j_{1}}, x_{j_{1}}) + \frac{z^{2}}{2!}\, \frac{1}{n^{2}} \sum_{j_{1}, j_{2}=1}^{n} \begin{vmatrix} K(x_{j_{1}}, x_{j_{1}}) & K(x_{j_{1}}, x_{j_{2}}) \\ K(x_{j_{2}}, x_{j_{1}}) & K(x_{j_{2}}, x_{j_{2}}) \end{vmatrix} + \cdots \qquad (C.1.6)$$

The coefficient of $z^{k}$ in the expansion above may be computed by differentiating the left-hand side $k$ times with respect to $z$ and setting $z = 0$. Since $K$ is continuous, as $n \to \infty$ the $k$-th term in the sum above converges to the integral
$$\frac{(-z)^{k}}{k!} \int_{[0,1]^{k}} \det\left(K(x_{p}, x_{q})\right)_{1 \le p, q \le k}\, dx_{1} \cdots dx_{k}. \qquad (C.1.7)$$

Definition-Theorem 63. The Fredholm determinant of the operator $I - zK$ is the entire function of $z$ defined by the convergent series
$$D(z) := \det(I - zK) = 1 + \sum_{k=1}^{\infty} \frac{(-z)^{k}}{k!} \int_{[0,1]^{k}} \det\left(K(x_{p}, x_{q})\right)_{1 \le p, q \le k}\, dx_{1} \cdots dx_{k}. \qquad (C.1.8)$$
Proof. It is only necessary to show that the series (C.1.8) converges for all $z \in \mathbb{C}$. The determinant of a $k \times k$ matrix $A$ with columns $a_{1}, \dots, a_{k}$ is the (signed) volume of the parallelepiped spanned by the vectors $a_{1}, \dots, a_{k}$. Therefore,
$$|\det(A)| \le |a_{1}||a_{2}| \cdots |a_{k}| \le \left(\max_{1 \le j \le k} |a_{j}|\right)^{k}. \qquad (C.1.9)$$
We have assumed that $K$ is bounded on $[0,1] \times [0,1]$, say $\max |K| \le M < \infty$. By the inequality above,
$$\left|\det\left(K(x_{p}, x_{q})\right)_{1 \le p, q \le k}\right| \le k^{k/2} M^{k}. \qquad (C.1.10)$$



Thus, the $k$-th term in the series (C.1.8) is dominated by
$$\left|\frac{(-z)^{k}}{k!} \int_{[0,1]^{k}} \det\left[K(x_{p}, x_{q})\right]_{1 \le p, q \le k}\, dx_{1} \cdots dx_{k}\right| \le \frac{k^{k/2}}{k!}\, (|z| M)^{k} \sim \frac{1}{\sqrt{2\pi}}\, \frac{(|z| M e)^{k}}{k^{\frac{k+1}{2}}},$$
where we have used Stirling's approximation in the last step. Since this bound decays super-exponentially in $k$, the series converges for every $z$.
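The discretization (C.1.4) and the convergence $\det(I_{n} - zK^{(n)}) \to D(z)$ are easy to see numerically. The sketch below (an illustration, not from the text) uses the rank-one kernel $K(x, y) = xy$, whose only nonzero eigenvalue is $\int_{0}^{1} x^{2}\,dx = 1/3$, so that $D(z) = 1 - z/3$; with $h \equiv 1$ the integral equation has the explicit solution $\varphi(x) = 1 + \frac{zx/2}{1 - z/3}$.

```python
import numpy as np

z, n = 0.3, 1000
x = np.arange(1, n + 1) / n          # uniform grid x_j = j/n, weights w_j = 1/n
K = np.outer(x, x) / n               # K^{(n)}_{jk} = w_j * K(x_j, x_k) for K(x,y) = x*y
A = np.eye(n) - z * K

# Fredholm determinant: det(I_n - z K^{(n)}) -> D(z) = 1 - z/3.
print(np.linalg.det(A), 1 - z / 3)

# Solve the discretized equation (C.1.4) with h = 1 and compare with the
# exact solution phi(x) = 1 + (z*x/2) / (1 - z/3).
phi = np.linalg.solve(A, np.ones(n))
exact = 1 + (z * x / 2) / (1 - z / 3)
print(np.max(np.abs(phi - exact)))   # tends to 0 as n -> infinity
```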
Remark 64. If $[0,1]$ is replaced by a general Borel set $S$, we assume
$$|K(x, y)| \le M(x), \qquad M \in L^{1}(S).$$
The same statements about the determinant follow.


Since $D(z)$ is entire, we may differentiate term-by-term to obtain
$$\left(-\frac{d}{dz}\right)^{m} \det(I - zK) = \sum_{k=0}^{\infty} \frac{(-z)^{k}}{k!} \int_{[0,1]^{m+k}} \det\left[K(x_{p}, x_{q})\right]_{1 \le p, q \le m+k}\, dx_{1} \cdots dx_{m+k} \qquad (C.1.11)$$

for $m \ge 1$. Recall that the zeros of a non-zero entire function form a discrete, countable set. The entire function $\det(I - \lambda^{-1}K)$ is an infinite-dimensional generalization of the characteristic polynomial of the matrix $K^{(n)}$ in the following sense:

Theorem 65 (Eigenvalues of K). Assume that $K$ is a continuous kernel. The complex number $\lambda \neq 0$ is an eigenvalue of $K$ if and only if $D(\lambda^{-1}) = 0$.

For more on Fredholm determinants, see [23, Ch. 24].

C.2 Convergence

Suppose a sequence of kernels $K_{n}(x, y) \to K_{\infty}(x, y)$ pointwise for $(x, y) \in S^{2}$. One needs additional convergence criteria to conclude that
$$\det(I - K_{n}) \to \det(I - K_{\infty}). \qquad (C.2.1)$$
The following results are from [31]. Let $\mathcal{K}_{n}$ and $\mathcal{K}_{\infty}$ be the operators on $L^{2}(S)$ with kernels $K_{n}$ and $K_{\infty}$, respectively. The trace norm of an operator $\mathcal{K}$ is given by
$$\|\mathcal{K}\|_{\mathrm{Tr}} = \mathrm{Tr}\, \sqrt{\mathcal{K}^{*}\mathcal{K}}, \qquad (C.2.2)$$

where $\mathcal{K}^{*}$ is the adjoint of $\mathcal{K}$. The general definition of $\sqrt{\mathcal{K}^{*}\mathcal{K}}$ for general operators is unimportant for us; an operator with finite trace norm is said to be trace class. For example, if $\mathcal{K}$ is a self-adjoint, positive semidefinite operator with continuous kernel $K$, then
$$\|\mathcal{K}\|_{\mathrm{Tr}} = \int_{S} K(x, x)\,dx. \qquad (C.2.3)$$

Theorem 66. The map $\mathcal{K} \mapsto \det(I + \mathcal{K})$ is a continuous function on the space of trace-class operators (i.e., operators with $\|\mathcal{K}\|_{\mathrm{Tr}} < \infty$) and
$$\left|\det(I + \mathcal{K}) - \det(I + \mathcal{L})\right| \le \|\mathcal{K} - \mathcal{L}\|_{\mathrm{Tr}}\, \exp\left(\|\mathcal{K}\|_{\mathrm{Tr}} + \|\mathcal{L}\|_{\mathrm{Tr}} + 1\right). \qquad (C.2.4)$$

Theorem 67. Suppose $\mathcal{K}_{n}$, $\mathcal{K}$ are trace class. If $\mathcal{K}_{n} \to \mathcal{K}$, $|\mathcal{K}_{n}| \to |\mathcal{K}|$ and $|\mathcal{K}_{n}^{*}| \to |\mathcal{K}^{*}|$, all weakly, then $\|\mathcal{K}_{n} - \mathcal{K}\|_{\mathrm{Tr}} \to 0$.

In our cases, $|\mathcal{K}_{n}| = \mathcal{K}_{n} = |\mathcal{K}_{n}^{*}|$, so to show that $\det(I - \mathcal{K}_{n}) \to \det(I - \mathcal{K})$ it suffices to show, for each $f, g \in L^{2}(S)$, that
$$\int_{S}\int_{S} K_{n}(x, y) f(x) g(y)\,dx\,dy \to \int_{S}\int_{S} K(x, y) f(x) g(y)\,dx\,dy. \qquad (C.2.5)$$

Two conditions under which this occurs are:

1. If $S$ is bounded, then it suffices that
$$\sup_{x, y \in S} |K_{n}(x, y) - K(x, y)| \to 0. \qquad (C.2.6)$$

2. If $S$ is unbounded, then we require
$$K_{n}(x, y) \to K(x, y) \qquad (C.2.7)$$
for each $x, y \in S$, together with a function $G(x, y) \in L^{2}(S^{2})$ such that $|K_{n}(x, y)| \le G(x, y)$. This allows one to use the dominated convergence theorem.
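For finite matrices, such as the discretizations of Section C.1, the trace norm is simply the sum of the singular values, and the bound of Theorem 66 can be observed directly. A sketch (not from the text; the matrices are arbitrary choices standing in for discretized kernels):

```python
import numpy as np

rng = np.random.default_rng(0)

def trace_norm(A):
    # For a matrix, Tr sqrt(A* A) is the sum of the singular values.
    return np.linalg.svd(A, compute_uv=False).sum()

n = 50
K = rng.standard_normal((n, n)) / n        # a small "kernel" matrix
L = K + rng.standard_normal((n, n)) / n**2 # a nearby perturbation

lhs = abs(np.linalg.det(np.eye(n) + K) - np.linalg.det(np.eye(n) + L))
rhs = trace_norm(K - L) * np.exp(trace_norm(K) + trace_norm(L) + 1)
print(lhs <= rhs)   # the inequality (C.2.4) of Theorem 66
```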

C.2.1 Change of variables and kernel extension

Let $K : S^{2} \to \mathbb{R}$ be a kernel. Let $x = r(t)$ and $y = r(s)$ for $(s, t) \in T^{2}$, where $r'$ exists, is continuous and does not vanish. Define
$$\hat{K}(s, t) = r'(s)\, K\left(r(s), r(t)\right), \qquad (s, t) \in T^{2}. \qquad (C.2.8)$$
Then
$$\det(I - K) = \det(I - \hat{K}). \qquad (C.2.9)$$

C.3 Computing Fredholm determinants
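In the spirit of Remark 62, a minimal quadrature sketch (an illustration, not the authors' method): replace $[0,1]$ by Gauss–Legendre nodes and weights, form the symmetrized matrix $\sqrt{w_{i}}\, K(x_{i}, x_{j})\, \sqrt{w_{j}}$, and take an ordinary determinant. For the rank-one kernel $K(x, y) = e^{x+y}$, the exact answer is $D(z) = 1 - z(e^{2} - 1)/2$, and the quadrature approximation converges spectrally fast.

```python
import numpy as np

def fredholm_det(K, z, m=30):
    """Approximate det(I - zK) on [0,1] by m-point Gauss-Legendre quadrature."""
    t, w = np.polynomial.legendre.leggauss(m)
    x = (t + 1) / 2                      # map nodes from [-1, 1] to [0, 1]
    w = w / 2
    sw = np.sqrt(w)
    A = sw[:, None] * K(x[:, None], x[None, :]) * sw[None, :]
    return np.linalg.det(np.eye(m) - z * A)

z = 0.3
approx = fredholm_det(lambda x, y: np.exp(x + y), z)
exact = 1 - z * (np.e**2 - 1) / 2        # rank-one kernel: eigenvalue (e^2 - 1)/2
print(approx, exact)
```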


C.4 Separable kernels
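For a separable (finite-rank) kernel, the Fredholm determinant reduces to an ordinary $m \times m$ determinant. The standard computation runs as follows:

```latex
% Separable kernel: K(x,y) = \sum_{j=1}^m f_j(x) g_j(y).
% Any eigenfunction of K with nonzero eigenvalue lies in span{f_1, ..., f_m},
% so the spectral problem reduces to an m x m matrix and the series (C.1.8)
% truncates after the m-th term:
\[
K(x,y) = \sum_{j=1}^{m} f_j(x)\, g_j(y)
\quad\Longrightarrow\quad
\det(I - zK) = \det\left(I_m - zA\right),
\qquad A_{jk} = \int_0^1 g_j(x)\, f_k(x)\, dx .
\]
```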
Appendix D

Notation

D.1 Indices
The integers $m$ and $n$ are reserved for the number of rows and columns of a matrix. For square matrices, we use $n$. The letters $j$ and $k$ are used to denote indices. The letter $i$ is reserved for $\sqrt{-1}$.

D.2 Fields
R real numbers.

C complex numbers with imaginary unit $i = \sqrt{-1}$.

H quaternions with imaginary units $e_{1}, e_{2}, e_{3}$.

F general notation for one of the above fields.

Tn the $n$-dimensional real torus.

Σn the $n$-dimensional simplex, $\{x \in \mathbb{R}^{n} : \sum_{k=1}^{n} x_{k} = 1,\ x_{k} \ge 0,\ 1 \le k \le n\}$.

For $x \in \mathbb{C}$ and $x \in \mathbb{H}$ we use $\bar{x}$ to denote the complex and quaternion conjugate respectively. The absolute value of a number $x \in \mathbb{F}$ is always denoted $|x|$. The same notation is used for the Euclidean length of a vector in $\mathbb{F}^{n}$, but the distinction between the two uses of $|\cdot|$ will be clear from the context. For example, for $x \in \mathbb{F}^{n}$, $x = (x_{1}, \dots, x_{n})$, $x_{j} \in \mathbb{F}$, $1 \le j \le n$, we write
$$|x|^{2} = \sum_{j=1}^{n} |x_{j}|^{2}. \qquad (D.2.1)$$


D.3 Matrices
The fundamental spaces of matrices are denoted as follows:

F^{m×n} $m \times n$ matrices with entries from a field F.

F^{n×n} $n \times n$ matrices with entries from a field F.

Symm(n) real, symmetric $n \times n$ matrices.

Her(n) complex, Hermitian $n \times n$ matrices.

Quart(n) real, self-dual quaternion $n \times n$ matrices.

Jac(n) real, symmetric, tridiagonal $n \times n$ matrices with strictly positive off-diagonal entries (Jacobi matrices).

We write $M^{\dagger}$ for the adjoint of a matrix $M \in \mathbb{F}^{m \times n}$. Matrices in Symm(n), Her(n) and Quart(n) are self-adjoint: $M = M^{\dagger}$, but the notion of duality is distinct in each setting, since the underlying field is different. For $M \in$ Symm(n), $M^{\dagger} = M^{T}$; if $M \in$ Her(n), then $M^{\dagger} = \bar{M}^{T} = M^{*}$; and if $M \in$ Quart(n), then $M^{\dagger} = \bar{M}^{T}$. All these matrices have real eigenvalues, and such a matrix is said to be positive definite if all its eigenvalues are strictly positive. We denote the subsets of positive definite matrices by Symm$_{+}$(n), Her$_{+}$(n) and Quart$_{+}$(n) respectively. The Hilbert–Schmidt norm of a matrix $M \in \mathbb{F}^{m \times n}$ is denoted
$$\|M\|^{2} = \mathrm{Tr}\, M^{\dagger}M = \sum_{j=1}^{m}\sum_{k=1}^{n} |M_{j,k}|^{2}. \qquad (D.3.1)$$

$B_{r}(M)$ denotes the ball of radius $r$ centered at $M$ in the norm $\|\cdot\|$.

D.4 Lie groups


The classical groups we consider are also Lie groups. For a Lie group, the associated Lie algebra is the tangent space at the identity. We use the following notation for the classical (Lie) groups and their associated Lie algebras, respectively.

O(n), o(n) the real, orthogonal group.

SO(n), so(n) the special orthogonal group.

U(n), u(n) the unitary group.

USp(n), usp(n) the group of unitary symplectic matrices, or the compact symplectic group.

D.5 Banach spaces


The following notation is used for standard Banach spaces.
C(J) The space of continuous functions on an interval J equipped with
the supremum norm.
PJ The space of probability measures on an interval J equipped with the
weak topology.

⟨µ, f⟩ The duality pairing between measures and continuous functions on the interval J, given by
$$\langle \mu, f \rangle = \int_{J} f(x)\, \mu(dx). \qquad (D.5.1)$$

C0(R) The space of continuous functions on R that vanish at infinity, equipped with the supremum norm.

Cb(R) The space of bounded continuous functions on R, equipped with the supremum norm.
Bibliography

[1] M. Abramowitz and I. A. Stegun, Handbook of mathematical functions: with formulas, graphs, and mathematical tables, Courier Dover Publications, 1972.

[2] J. Baik, P. Deift, and K. Johansson, On the distribution of the length of the longest increasing subsequence of random permutations, J. Amer. Math. Soc., 12 (1999), pp. 1119–1178.

[3] J. Baik, P. Deift, and T. Suidan, Combinatorics and Random Matrix Theory, Amer. Math. Soc., Providence, RI, 2017.

[4] P. Bourgade, On random matrices and L-functions, PhD thesis, 2009.

[5] F. Bullo and A. D. Lewis, Geometric Control of Mechanical Systems, Springer, New York, NY, 2005.

[6] P. Deift, Orthogonal Polynomials and Random Matrices: A Riemann-Hilbert Approach, Courant Lecture Notes in Mathematics, Courant Institute of Mathematical Sciences, New York, first ed., 1999.

[7] P. Deift, C. D. Levermore, and C. E. Wayne, eds., Dynamical Systems and Probabilistic Methods in Partial Differential Equations, vol. 31 of Lecture Notes in Applied Mathematics, American Mathematical Society, Providence, first ed., 1996, ch. Integrable Hamiltonian Systems, pp. 103–138.

[8] I. Dumitriu and A. Edelman, Matrix models for beta ensembles, Journal of Mathematical Physics, 43 (2002), pp. 5830–5847.

[9] F. J. Dyson, Statistical theory of the energy levels of complex systems. I, Journal of Mathematical Physics, 3 (1962), pp. 140–156.

[10] A. Edelman, Eigenvalues and Condition Numbers of Random Matrices, PhD thesis, Massachusetts Institute of Technology, 1989.

[11] A. Edelman and B. Sutton, From random matrices to stochastic operators, Journal of Statistical Physics, 127 (2007), pp. 1121–1165.

[12] H. M. Edwards, Riemann's zeta function, vol. 58, Academic Press, 1974.

[13] L. Erdős, H.-T. Yau, and J. Yin, Rigidity of eigenvalues of generalized Wigner matrices, Adv. Math., 229 (2012), pp. 1435–1515.

[14] J. Harer and D. Zagier, The Euler characteristic of the moduli space of curves, Inventiones Mathematicae, 85 (1986), pp. 457–485.

[15] S. Hastings and J. McLeod, A boundary value problem associated with the second Painlevé transcendent and the Korteweg–de Vries equation, Archive for Rational Mechanics and Analysis, 73 (1980), pp. 31–51.

[16] M. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Stand., 49 (1952), pp. 409–436.

[17] E. Hille, Ordinary differential equations in the complex domain, Courier Dover Publications, 1997.

[18] M. Jimbo, T. Miwa, Y. Môri, and M. Sato, Density matrix of an impenetrable Bose gas and the fifth Painlevé transcendent, Physica D, 1 (1980), pp. 80–158.

[19] K. Johansson, Shape fluctuations and random matrices, Commun. Math. Phys., 209 (2000), pp. 437–476.

[20] F. John, Partial differential equations, vol. 1 of Applied Mathematical Sciences, Springer-Verlag, New York, fourth ed., 1991.

[21] T. Kato, Perturbation theory for linear operators, Springer, New York, NY, 1995.

[22] S. V. Kerov, Asymptotic representation theory of the symmetric group and its applications in analysis, American Mathematical Society, Providence, RI, 2003.

[23] P. Lax, Functional Analysis, John Wiley and Sons, 2002.

[24] B. M. McCoy and T. T. Wu, The two-dimensional Ising model, vol. 22, Harvard University Press, 1973.

[25] M. L. Mehta, Random Matrices, Elsevier, San Diego and London, third ed., 2004.

[26] M. L. Mehta and M. Gaudin, On the density of eigenvalues of a random matrix, Nuclear Physics, 18 (1960), pp. 420–427.

[27] L. Nachbin, The Haar integral, R. E. Krieger Pub. Co., 1976.

[28] A. M. Odlyzko and E. M. Rains, On longest increasing subsequences in random permutations, Amer. Math. Soc., Contemp. Math., 251 (2000), pp. 439–451.

[29] J. Ramírez, B. Rider, and B. Virág, Beta ensembles, stochastic Airy spectrum, and a diffusion, Journal of the American Mathematical Society, 24 (2011), pp. 919–944.

[30] B. Riemann, Collected Works of Bernhard Riemann, Dover Publications, 1953.

[31] B. Simon, Trace ideals and their applications, vol. 120 of Mathematical Surveys and Monographs, American Mathematical Society, Providence, RI, second ed., 2005.

[32] R. P. Stanley, Enumerative combinatorics, vol. 2, Cambridge University Press, 2011.

[33] D. W. Stroock, Probability theory: an analytic view, Cambridge University Press, 2010.

[34] G. 't Hooft, A planar diagram theory for strong interactions, Nuclear Physics B, 72 (1974), pp. 461–473.

[35] T. Tao, Topics in random matrix theory, AMS, Providence, RI, 2011.

[36] C. A. Tracy and H. Widom, Level-spacing distributions and the Airy kernel, Communications in Mathematical Physics, 159 (1994), pp. 151–174.

[37] S. M. Ulam, Monte Carlo calculations in problems of mathematical physics, in Mod. Math. Eng., E. F. Bechenbach, ed., McGraw-Hill, Inc., 1961, pp. 261–281.

[38] A. M. Vershik and S. V. Kerov, Asymptotics of the Plancherel measure of the symmetric group and the limiting form of Young tableaux, Dokl. AN SSSR, 233 (1977), pp. 1024–1027.

[39] J.-B. Zuber, Introduction to random matrices. Lectures at ICTS, Bangalore, 2012.
