Cours Phys Stat v2004 Vapart1
Introduction
The goal of this course is to cover the basic knowledge of classical statistical
physics and kinetic theory that any physicist needs to have encountered. The
links to continuum models of matter, such as solids or fluid mechanics, will not be
addressed in detail (this topic is covered in another course of this master program),
nor will we recall in great depth the basic laws of thermodynamics (covered in
the first two years of a typical physics course in any university). We shall assume
these to be already known by the reader.
The foundations of statistical mechanics go back to the works of Boltzmann
and Maxwell at the end of the nineteenth century, whose goal was to bridge the
gap between the reversible microscopic world and the irreversibility that rules any
thermodynamical transformation. The main idea behind this approach is that
macroscopic systems are composed of a large number of microscopic components
(particles), and since we have this large number of constituents, we can use statis-
tics and probabilities to describe their state. The course is organized
as follows: in the first part we discuss the theory and ideas that gave rise
to statistical physics, and then move on to the microcanonical ensemble.
Then we review the other statistical ensembles. It is essentially from this stage that
we really begin to put theory into practice; some useful problems and guided exer-
cises must be fully understood and mastered. Finally, we discuss the quantum
aspects of statistical physics and briefly review some "out of equilibrium" charac-
teristics: the problem of thermal transport and some elements of the classical kinetic
theory of gases are discussed from the point of view of statistical physics.
CHAPTER 2
In this chapter we really start to consider statistical physics per se. The main
goal of this chapter is to give some intuitions and concepts that are important to
the foundations of statistical physics. As mentioned in the introduction, statis-
tical physics emerged as a way to explain a major discrepancy that had arisen in
the XIXth century between mechanics and thermodynamics. On one side stand
Newton's laws, which emerged from the observation of the sky and appeared
extremely powerful and true, especially when considering celestial mechanics. These
laws, which gave rise to the Lagrangian and Hamiltonian points of view, are assumed
to be microscopically conservative: they conserve energy and are time
reversible. This reversibility is mostly what caused the problem with thermodynam-
ics. Indeed, the equivalence principle between heat and energy allowed one to keep
a conservative vision of the laws governing physical phenomena; however
the Clausius principle, and the Carnot statement concerning the second principle of
thermodynamics, which states that entropy is necessarily a constant or growing func-
tion of time, implied irreversible phenomena. Indeed, should a physical
transformation create some entropy, it would then be impossible to return
to the original state, because entropy can only grow or remain constant. And this
breaks the original mechanical vision.
Once we remark that thermodynamics is concerned with transformations at
the macroscopic level, that is, transformations of systems composed of
a large number of small constituents (for instance particles like atoms, or smaller
than that), we may hope to perform some statistical treatment of the underlying
mechanical system, using for that the law of large numbers. This is more or less
what was done by Boltzmann. In what follows, we stress a bit more how,
starting from a perfectly reversible mechanical system, we may devise the laws that
govern classical statistical physics. We shall start with the notion of phase space,
which will be the first useful ingredient of our theory.
from which the ordinary differential equations governing the motion are obtained:
(2.1.2)    \dot{q}_{k,i} = \frac{\partial H}{\partial p_{k,i}} ,

(2.1.3)    \dot{p}_{k,i} = -\frac{\partial H}{\partial q_{k,i}} ,
where ẋ corresponds to the temporal derivative of x, and k ∈ {1, · · · , D}. The
coordinates q = (q1 , q2 , · · · , qN ) and p = (p1 , p2 , · · · , pN ) are the canonical vari-
ables of the Hamiltonian (2.1.1) and the pair (qk,i , pk,i ) is said to be canonically
conjugated.
We notice that even though particles individually evolve in a space of
dimension D, the global dynamics actually evolves in a space of dimension 2N D
corresponding to the couple (p, q). This space is what is called the phase space,
and the product N D, corresponding to the number of canonically conjugated pairs,
is the number of degrees of freedom of the system. This space is often denoted Γ, a
symbol which is also used to denote its volume; we write the infinitesimal
volume element of phase space as
(2.1.4)    d\Gamma = \prod_{i=1}^{N} \prod_{k=1}^{D} dp_{k,i}\, dq_{k,i} .
We shall see later that the notion of phase space is very useful, espe-
cially to define in a simple way the so-called micro-canonical and canonical
ensembles. This notion of phase space is also essential in the study of dynamical
systems, for instance the study of chaotic systems.
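To make the phase-space picture concrete, here is a minimal numerical sketch (our addition, not part of the original course, for an assumed one-dimensional harmonic oscillator H = p²/2m + kq²/2 with m = k = 1): Hamilton's equations (2.1.2)-(2.1.3) are integrated with a symplectic leapfrog step, and the trajectory traces a closed curve of constant H in the (q, p) phase plane.

```python
# Hamilton's equations q' = dH/dp, p' = -dH/dq for H = p^2/(2m) + k q^2/2,
# integrated with the (symplectic) leapfrog scheme; the energy stays bounded
# close to its initial value, reflecting the conservative dynamics.
m, k = 1.0, 1.0

def H(q, p):
    return p * p / (2 * m) + 0.5 * k * q * q

q, p = 1.0, 0.0
dt, E0 = 1e-3, H(1.0, 0.0)
for _ in range(100000):
    p -= 0.5 * dt * k * q      # half kick: p' = -dH/dq
    q += dt * p / m            # drift:     q' =  dH/dp
    p -= 0.5 * dt * k * q      # half kick
print(abs(H(q, p) - E0))       # energy error remains tiny
```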
(2.2.2) ∇ · (ρV ) = ρ∇ · V + V · ∇ρ ,
8 2. INTRODUCTION TO STATISTICAL PHYSICS
now using the equations of motion (2.1.2) and (2.1.3) we may rewrite equation (2.2.3):
(2.2.4)    \nabla\cdot V = -\sum_{i,k} \frac{\partial^2 H}{\partial q_{k,i}\,\partial p_{k,i}} + \sum_{i,k} \frac{\partial^2 H}{\partial p_{k,i}\,\partial q_{k,i}} = 0 .
We notice that equation (2.2.4) expresses the fact that the volume of
phase space is conserved by the dynamics (like an incompressible fluid, in some
sense). This property allows us to deduce the Liouville equation
(2.2.5)    \frac{d\rho}{dt} = \frac{\partial \rho}{\partial t} + V\cdot\nabla\rho = 0
and using again the equations of motion we can rewrite this equation in its
classical form

(2.2.6)    \frac{\partial \rho}{\partial t} = \{H, \rho\}
where we introduce the notion of Poisson brackets \{\cdot, \cdot\}. These brackets apply to
functions defined on even-dimensional spaces like the phase space; for instance,
in two dimensions it reads

\{f, g\} = \frac{\partial f}{\partial x}\frac{\partial g}{\partial y} - \frac{\partial f}{\partial y}\frac{\partial g}{\partial x} ,

and this is simply generalized to higher dimensions by associating x and
y with canonically conjugated variables and adding the successive terms.
(1) Express in terms of {f, g} the following expressions: {g, f }, {λf, µg},
{f (λx, µy), g(λx, µy)}, where λ and µ are constants.
(2) Compute {f, f } and {f, g(f )}.
(3) Prove equation (2.2.6) starting from (2.2.5), and deduce from it that energy
is conserved.
(4) Evaluate the sum {a, {b, c}} + {b, {c, a}} + {c, {a, b}}.
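The bracket manipulations in this exercise can be checked symbolically; the sketch below (our addition, using SymPy, with arbitrary test functions of our choosing) verifies antisymmetry, {f, f} = 0, and the Jacobi identity of item (4) for one canonically conjugated pair:

```python
import sympy as sp

# Poisson bracket for a single canonically conjugated pair (x, y)
x, y = sp.symbols('x y')

def pb(f, g):
    return sp.diff(f, x) * sp.diff(g, y) - sp.diff(f, y) * sp.diff(g, x)

# arbitrary smooth test functions (hypothetical examples, not from the course)
f = x**2 * y
g = sp.sin(x) + y**3
h = x * y

print(sp.simplify(pb(g, f) + pb(f, g)))   # antisymmetry: {g,f} = -{f,g}
print(sp.simplify(pb(f, f)))              # {f,f} = 0
# Jacobi identity: {f,{g,h}} + {g,{h,f}} + {h,{f,g}} = 0
print(sp.simplify(pb(f, pb(g, h)) + pb(g, pb(h, f)) + pb(h, pb(f, g))))
```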
2.3. Notion of equilibrium
2.3.1. Remarks.
(1) We recover the fact that at thermodynamical equilibrium (in the limit N → ∞),
the macroscopic intensive variables of the system are constant.
(2) When the system is at equilibrium, the probability density ρ is independent
of time: ∂ρ/∂t = 0. N.B.: this is a necessary condition, but not a sufficient
one, for thermodynamical equilibrium.
Figure 2.4.1. Classical phase portrait of a low-dimensional
Hamiltonian chaotic flow (axes X and Y). Example taken from an ABC flow.
On the other hand, the presence of so-called molecular chaos can help. Indeed,
even in low-dimensional systems Hamiltonian chaos exists, as illustrated in the phase
portrait of Fig. 2.4.1. In this system, which describes the evolution of the field lines
of a three-dimensional flow, we observe a so-called mixed system where regions of
chaos (gray areas) and regular trajectories (closed lines) coexist. When the initial
condition lies in the chaotic region, the trajectory will cover more or less uniformly the
whole “chaotic sea”; then, provided that we know sufficiently well its extension and
borders, we should be able to replace the average over time (or over the trajectory)
by an average over space, and this for any initial condition in this region. Thus, let
A_0 be the macroscopic intensive variable of interest; the temporal average

(2.4.1)    A_0 = \overline{A(t)} = \lim_{t\to\infty} \frac{1}{t} \int_0^t A\left(Q(t'), P(t'), Q(0), P(0)\right) dt' ,

where (Q(0), P(0)) stands for the initial condition at t = 0, can be replaced by the
average over phase space

(2.4.2)    A_0 = \overline{A(t)} = \langle A \rangle = \int A(p, q)\, \mu(p, q)\, dp\, dq .
In this last equation, we note the presence of a function µ; in fact the
notation µ(p, q)dpdq corresponds to the considered ergodic measure in phase space
of the area dpdq. In less barbaric terms, most of the time µ can be associated
with a probability density, which weights phase space (puts more weight on some
regions) according to the time spent by the trajectory in a given region of
phase space. For instance, looking at the chaotic sea in Fig. 2.4.1, we can notice
that the region 0 < Y < π is slightly darker than the region above it. This means
that the trajectory spends more time there (we plot points equidistant in time);
therefore, when we replace the time average by the spatial one, we take
this phenomenon into account through the function µ.
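As a toy illustration of this time-average/space-average exchange (our example, not the ABC flow of the figure), the logistic map x → 4x(1 − x) has the exactly known invariant density µ(x) = 1/(π√(x(1 − x))); since this density is symmetric about x = 1/2, the phase-space average of A(x) = x is 1/2, and a time average along a typical chaotic orbit reproduces this value:

```python
# Time average of A(x) = x along one chaotic orbit of the logistic map
# x -> 4x(1-x); ergodicity with respect to the invariant density
# mu(x) = 1/(pi*sqrt(x(1-x))) predicts the ensemble average <x> = 1/2.
x = 0.1234                    # arbitrary typical initial condition
total, n_steps = 0.0, 1000000
for _ in range(n_steps):
    x = 4.0 * x * (1.0 - x)
    total += x
time_average = total / n_steps
print(time_average)           # close to the ensemble average 1/2
```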
The founding idea of statistical physics relies on the hypothesis of molecular
chaos, and on this possibility to exchange time and space averages. One should
notice that µ does not depend on time, so, assuming a microscopically conservative
system, µ should be a stationary solution of the Liouville equation (the reader may check this).
In fact, in general the possibility to replace a temporal average by a spatial
one using an ergodic measure µ is not enough; indeed, it is possible that more than
one measure µ satisfies these conditions. For instance, let us consider the system
displayed in Fig. 2.4.1. In this system, let us pick a closed regular trajectory and
consider a function µ which is more or less constant on this line in phase space
and zero elsewhere. This is an ergodic measure, in the sense that the temporal
average of a typical trajectory (with an initial condition in an area of non-zero
measure, meaning an initial condition on the line) will be identical to the spatial
average. However, such a situation may induce problems for the second principle
of thermodynamics: the trajectory being regular, no “disorder” is created as time
grows. We therefore need an extra condition on the dynamics for the foundation
of statistical mechanics. A sufficient condition (but usually not a necessary one)
is that the measure is mixing. This notion of mixing is expressed mathematically in the
following way:
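The mathematical statement announced here is missing from this copy of the notes; the standard definition of mixing (presumably the one intended) is, for any measurable subsets A and B of phase space,

```latex
% Mixing: correlations between any two measurable sets A, B decay in time,
\lim_{t \to \infty} \mu\!\left(\phi_t(A) \cap B\right) = \mu(A)\,\mu(B),
% where \phi_t denotes the flow and \mu is normalized so that \mu(\Gamma) = 1.
```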
2.4.2. The order of the limits. Finally, before moving to statistical physics
per se, it is important to consider one last important detail. Indeed, in order to
replace temporal averages with spatial ones, a given trajectory starting from a given
initial condition must have the time to cover densely the accessible phase space.
Hence, in order to exchange the averages in equation (2.4.1), we must take the
limit t → ∞. This limit is therefore necessary to perform equilibrium statistical
physics.
On the other hand, statistical physics aims at describing the
thermodynamical properties of a given system. This thermodynamic limit entails
a second limit, namely the limit N → ∞ of the number of constituents.
Therefore, given a classical mechanical system, the thermodynamic limit entails
the succession of two limits:
(1) First we take the limit in time t → ∞, which allows us to perform
statistical physics;
(2) then the limit N → ∞, for fluctuations to disappear and intensive
variables of state to become constant.
N.B.: The order in which these limits are performed is important, as it is not at
all clear that these two limits commute. In fact, for some systems it is possible
to show that the results obtained differ depending on the order in which the limits
are taken.
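A purely mathematical toy example (our addition, not a physical system) shows how two iterated limits can fail to commute; with f(t, N) = t/(t + N), taking N → ∞ first gives 0, while taking t → ∞ first gives 1:

```python
# Iterated limits of f(t, N) = t/(t + N) do not commute:
#   lim_{t->inf} ( lim_{N->inf} f(t, N) ) = 0
#   lim_{N->inf} ( lim_{t->inf} f(t, N) ) = 1
def f(t, N):
    return t / (t + N)

print(f(10.0, 1e12))   # N large at fixed t: essentially 0
print(f(1e12, 10.0))   # t large at fixed N: essentially 1
```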
Remark: These are mathematical results; in all physical situations the limits
t → ∞ and N → ∞ are idealizations: we are not going to wait for the “end of
times”, and moreover cosmology currently asserts that our Universe is expanding
and therefore finite, so the limit N → ∞ is not realistic either. It is
then the job of the physicist to take these “imperfections” into account and
to decide, given the considered system, what can
be assumed to be finite or infinite.
Figure 2.4.2. A real trajectory in phase space (axes q and p), sampled
with a finite resolution h = δp δq.
Given this fact, we can partition the phase space into cells, as illustrated for
instance in Fig. 2.4.2, where we have also represented a possible trajectory in this
phase space. Now let us consider again
the Liouville equation (2.2.6), which expresses the conservation of phase space
volume by the microscopic dynamics. Thus, should we consider an ensemble of
initial conditions contained in a ball whose radius is, for instance, smaller
than our resolution, the ball would then occupy one cell of our partition (one “tile”
in Fig. 2.4.2). Let us now let the dynamics evolve. If we assume molecular chaos, the
ball is going to deform, stretch, fold, etc. Thus, even though the enclosed volume
is conserved, some pieces of the deformed ball will find themselves on other “tiles”.
If we now want to localize the deformed ball in phase space, given our imprecise
measurements, we shall find that it is localized on a given number n(t) of
tiles. With the molecular chaos hypothesis, this number will necessarily grow until
more or less all of phase space is occupied. We thus start with an initial volume of
less than δΓ (one tile) and end up with a final volume ΓE , even though the real volume of the
ball is constant.
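This stretching-and-folding scenario can be sketched numerically. In the toy example below (our stand-in: Arnold's cat map on the unit torus, an area-preserving chaotic map, not the actual Hamiltonian flow of the text), a small ball of initial conditions ends up occupying a growing number n(t) of coarse-graining cells even though its true area is conserved by the map:

```python
import random

# Arnold's cat map: area-preserving, chaotic map on the unit torus
def cat_map(x, y):
    return (x + y) % 1.0, (x + 2.0 * y) % 1.0

random.seed(0)
# small "ball" of initial conditions, much smaller than one cell
pts = [(0.3 + 0.002 * random.random(), 0.3 + 0.002 * random.random())
       for _ in range(5000)]

G = 50  # coarse-graining resolution: G x G cells of size 1/G

def n_occupied(points):
    return len({(int(p[0] * G), int(p[1] * G)) for p in points})

counts = [n_occupied(pts)]
for _ in range(12):
    pts = [cat_map(x, y) for x, y in pts]
    counts.append(n_occupied(pts))
print(counts[0], counts[-1])  # n(t) grows from 1 tile toward the whole grid
```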
However, since the dimension of phase space is 2N D, we can expect the volume
of the hypersurface to scale like

Ω(E) ∼ γ^{N−1} ,
Remark 1. Going back to the limit N → ∞, we can notice that a few moles
are sufficient, and thus that N_A ∼ ∞ (at least for the simplest situations).
(2) Let us consider a given total energy E; compute the volume of accessible
phase space Ω(E, N, V ) and deduce from it the entropy. (You may recall
that you computed earlier in this course the volume of a hypersphere.)
(3) Compute the temperature of the system. Recall the definition of the free
energy F from thermodynamics, and compute it. Using either the entropy
S or the free energy F , compute the pressure P of the gas. Recover the
ideal gas law (Boyle-Mariotte law).
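As a hint for question (2), the hypersphere volume formula mentioned there can be recalled, and sanity-checked in low dimensions, as follows:

```python
import math

# Volume of an n-dimensional ball of radius R (standard formula):
#   V_n(R) = pi^(n/2) * R^n / Gamma(n/2 + 1)
def hypersphere_volume(n, R):
    return math.pi**(n / 2) * R**n / math.gamma(n / 2 + 1)

print(hypersphere_volume(2, 1.0))  # pi: area of the unit disk
print(hypersphere_volume(3, 1.0))  # 4*pi/3: volume of the unit ball
```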
where h is a positive constant (this can represent the energy levels of spin-1 particles
in an external magnetic field).
(1) In what states can one spin be, two spins, etc.? Deduce from this the
number of possible configurations. Given these considerations, what is the
phase space?
(2) We want to find the ratio n−1 /n1 in the microcanonical ensemble as a
function of temperature when N → ∞. Recall the definition of the en-
tropy and temperature in this ensemble. Assuming we know these, what
then is the free energy F ?
\Omega(N, E) = \sum_{n_0 = n_{0,\min}} \omega(n_0, N, E) .
(6) This sum is not easy to compute exactly. Let us try to estimate it. We
define x = n0 /N , y = n1 /N , z = n2 /N . Show that using the approximate
Stirling formula n! ∼ n^n e^{−n} we get

\omega(n_0, N, E) \approx \left(\frac{1}{x^x y^y z^z}\right)^N = f(x)^N ,

and that, as a consequence,

\Omega(N, E) \approx N \int_0^{1-\varepsilon} f(x)^N \, dx ,
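The quality of the rough Stirling formula n! ∼ nⁿe⁻ⁿ used in item (6) can be checked quickly; the relative error on log n! decays with n, which is what makes the estimate legitimate for large N:

```python
import math

# Compare log(n!) (via lgamma) with the rough Stirling estimate n*log(n) - n;
# the relative error on the logarithm shrinks as n grows.
for n in (10, 100, 10000):
    exact = math.lgamma(n + 1)          # log(n!)
    approx = n * math.log(n) - n
    print(n, (exact - approx) / exact)  # decreasing relative error
```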
(8) To finalize our analysis we now just have to compute x̃. It is easier
to compute the maximum of log f . Write

f(x) = \frac{1}{x^x \, y(x)^{y(x)} \, z(x)^{z(x)}}