
Statistical Physics

Pierre Devillard and Xavier Leoncini


CHAPTER 1

Introduction

The goal of this course is to cover the basic knowledge of classical statistical
physics and kinetic theory that any physicist needs to have encountered. The
links to continuous models of matter, such as solids or fluid mechanics, will not be
addressed in detail (this topic is covered in another course of this master program),
neither will we recall in great depth the basic laws of thermodynamics (covered in
the first two years of a typical physics curriculum in any university). We shall
assume these to be already known by the reader.
The foundations of statistical mechanics go back to the works of Boltzmann
and Maxwell at the end of the nineteenth century, whose goal was to bridge the
gap between the microscopic reversible world and the irreversibility that rules any
thermodynamical transformation. The main idea behind this approach is that
macroscopic systems are composed of a large number of microscopic components
(particles), and since we have this large number of constituents, we can use statistics
and probabilities to describe their state. The course is organized as follows: in the
first part we discuss the theory and ideas that gave rise to statistical physics, and
then move on to the microcanonical ensemble. Then we will review the statistical
ensembles. It is essentially from this stage that we really begin to put theory into
practice; some useful problems and guided exercises must be fully understood and
mastered. Finally, we will discuss the quantum aspects of statistical physics and
briefly review some "out of equilibrium" features; the problem of thermal transport
and some elements of the classical kinetic theory of gases are discussed from the
point of view of statistical physics.

CHAPTER 2

Introduction to statistical physics

In this chapter we really start to consider statistical physics per se. The main
goal of this chapter is to give some intuition and concepts that are important to
the foundations of statistical physics. As mentioned in the introduction, statistical
physics emerged as a way to explain a major discrepancy that had arisen in the
XIXth century between mechanics and thermodynamics. On one side, Newton's
laws, which emerged from the observation of the sky and appeared extremely
powerful and true especially when considering celestial mechanics, and which gave
rise to the Lagrangian and Hamiltonian points of view, are assumed to be
microscopically conservative laws, that thus conserve energy and are time
reversible. This reversibility is mostly what caused the problem with thermodynamics.
Indeed, the equivalence principle between heat and energy made it possible to keep
a conservative vision of the laws governing physical phenomena; however the
Clausius principle, and the Carnot statement concerning the second principle of
thermodynamics, which states that entropy is necessarily a constant or growing
function of time, implied irreversible phenomena. Indeed, should we have a physical
transformation that creates some entropy, it would then be impossible to return
to the original state, because entropy can only grow or remain constant. And this
breaks the original mechanical vision.
Once we remark that thermodynamics is concerned with transformations at
the macroscopic level, meaning transformations of systems which are composed of
a large number of small constituents (for instance particles like atoms, or smaller
than that), we can perhaps perform some statistical treatment of our underlying
mechanical system, and use the law of large numbers for that. This is more or less
what was done by Boltzmann. In what follows, we stress a bit more how, starting
from a perfectly reversible mechanical system, we may devise the laws that
govern classical statistical physics. We shall start with the notion of phase space,
which will be the first useful ingredient of our theory.

2.1. Phase space

In order to introduce the notion of phase space, it is convenient to consider a
classical mechanical system, composed of N identical elementary constituents, for
instance particles of mass m. We shall denote by Q = (q_1, q_2, \cdots, q_N) and
P = (p_1, p_2, \cdots, p_N) the position and momentum vectors of the global system,
where for each particle q_i = (q_{1,i}, \cdots, q_{D,i}) and p_i = (p_{1,i}, \cdots, p_{D,i}),
D being the dimension of the space in which the particles move. For instance, for
D = 3, we can consider q_i = (x_i, y_i, z_i), p_i = m(v_{x_i}, v_{y_i}, v_{z_i}).
Considering an isolated system, the constituents interact through a potential
V (q). In such a case, the dynamics is governed by the following Hamiltonian:
(2.1.1)    H(p, q) = \sum_{i=1}^{N} \frac{p_i^2}{2m} + V(q_1, q_2, \cdots, q_N) = \frac{p^2}{2m} + V(q)

from which the ordinary differential equations governing the motion are obtained:
(2.1.2)    \dot{q}_{k,i} = \frac{\partial H}{\partial p_{k,i}}

(2.1.3)    \dot{p}_{k,i} = -\frac{\partial H}{\partial q_{k,i}} ,
where ẋ corresponds to the temporal derivative of x, and k ∈ {1, · · · , D}. The
coordinates q = (q1 , q2 , · · · , qN ) and p = (p1 , p2 , · · · , pN ) are the canonical vari-
ables of the Hamiltonian (2.1.1) and the pair (qk,i , pk,i ) is said to be canonically
conjugated.
We notice that even though particles are individually evolving in a space of
dimension D, the global dynamics actually evolves in a space of dimension 2ND
corresponding to the couple (p, q). This space is what is called the phase space,
and the product ND, corresponding to the number of canonically conjugated pairs,
is the number of degrees of freedom of the system. This space is often denoted Γ,
a symbol which is also used to denote its volume; we write the infinitesimal
volume element of phase space as
(2.1.4)    d\Gamma = \prod_{i=1,\,k=1}^{N,\,D} dp_{k,i}\, dq_{k,i} .

We shall see in the future that the notion of phase space is very useful espe-
cially in order to define in a simple way the so-called micro-canonical and canonical
ensembles. This notion of phase space is also essential in dynamical systems studies
such as for instance the study of chaotic systems.
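As a small numerical illustration of these notions (a sketch added here, not part of the original course), one can integrate Hamilton's equations (2.1.2)-(2.1.3) for the one-dimensional harmonic oscillator H = p^2/2 + q^2/2 with a symplectic Euler scheme, and check that the energy stays close to its initial value and that the phase-space trajectory closes on itself after one period. The step size and initial condition are arbitrary choices.

```python
import math

def symplectic_euler(q, p, dt, n_steps):
    """Integrate dq/dt = dH/dp = p, dp/dt = -dH/dq = -q (harmonic oscillator)."""
    traj = [(q, p)]
    for _ in range(n_steps):
        p = p - dt * q      # kick: Hamilton's equation for p
        q = q + dt * p      # drift: uses the updated momentum (symplectic)
        traj.append((q, p))
    return traj

# one period T = 2*pi, starting on the energy surface H = 1/2
traj = symplectic_euler(q=1.0, p=0.0, dt=0.01, n_steps=int(2 * math.pi / 0.01))

# the energy H = (p^2 + q^2)/2 stays near 0.5 for all times,
# and the trajectory in the (q, p) plane is a closed curve:
energies = [(p * p + q * q) / 2 for q, p in traj]
assert max(abs(e - 0.5) for e in energies) < 1e-2
qf, pf = traj[-1]
assert abs(qf - 1.0) < 0.05 and abs(pf) < 0.05
```

For a large system the same picture holds, except that the closed curve becomes a line winding through a 2ND-dimensional space.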

2.1.1. Some classical exercises.


(1) As a first contact with the notion of phase space, it is convenient to use it
to draw trajectories of simple mechanical systems (drawing a trajectory
in the (q, p) plane). For this purpose, you may for instance draw the
phase portrait of a harmonic oscillator H = p^2/2 + q^2/2 or the one of a
simple pendulum: H = p^2/2 + (1 − cos(q)); remember that in these
systems the energy is conserved. Once this is done, we can now imagine
how trajectories of a large system are actually lines evolving in a high-
dimensional space. Note that trajectories in phase space do not cross (at
least in a finite time).
(2) As a second exercise, which is quite technical but, as we shall see later on,
quite useful for the perfect gas, we can be interested in the volume of the
hypersphere. For this purpose we need to compute the volume enclosed
by the surface defined by T = \sum_i p_i^2/2, with i ∈ {1, · · · , N} (volume
to be considered in R^N); the reasoning and results are well described on
Wikipedia.
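A hedged numerical companion to exercise (2): the closed-form result the derivation leads to is that the ball of radius R in R^N has volume V_N(R) = π^(N/2) R^N / Γ(N/2 + 1); for the kinetic energy T the enclosed volume is V_N(√(2T)). The sketch below (function name and test radii are arbitrary) checks the formula against the familiar low-dimensional cases.

```python
import math

def ball_volume(n, r):
    """Volume of the ball of radius r in R^n: pi^(n/2) r^n / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) * r ** n / math.gamma(n / 2 + 1)

# sanity checks against the familiar formulas:
assert abs(ball_volume(1, 1.0) - 2.0) < 1e-12                  # segment: 2R
assert abs(ball_volume(2, 1.0) - math.pi) < 1e-12              # disk: pi R^2
assert abs(ball_volume(3, 2.0) - 4 / 3 * math.pi * 8) < 1e-10  # ball: 4/3 pi R^3
```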

2.2. The Liouville Equation

Starting from a given initial condition, our mechanical system moves on a
trajectory (a line in phase space), but we may have some errors on the initial
condition, or we may only be able to measure it approximately (up to some digits).
We are then more interested in the evolution of an ensemble of trajectories that
can emerge from our same approximate point. This is all the more true when the
number of particles is large.
So let us assume that the number of particles N considered in the Hamiltonian
(2.1.1) is very large, and that instead of considering in detail the evolution of each
particle we are more interested in the evolution of the probability density of phase
space ρ(p, q, t) that N ρdΓ particles are in a phase space volume dΓ at time t around
the point (p, q).
The Liouville equation describes the conservation of ρ during the evolution of
the system, in other words the conservation of the number of its constituents (i.e.
the number of particles):

(2.2.1)    \frac{d\rho}{dt} = \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho V) = 0 ,
where V is to be understood as the velocity vector in phase space, meaning (ṗ, q̇),
and the operator ∇· is the divergence in this space. This equation (2.2.1) is a typical
conservation law, and it is formally identical to the equation of the conservation
of mass in a flow in fluid mechanics (you may check how this last equation
is obtained to understand how it easily generalizes to Eq. (2.2.1)). We can rewrite
the last term of equation (2.2.1):

(2.2.2) ∇ · (ρV ) = ρ∇ · V + V · ∇ρ ,

where the operator ∇ is the gradient. Let us compute the divergence of V. We
obtain

(2.2.3)    \nabla \cdot V = \sum_{i,k} \frac{\partial \dot{p}_{k,i}}{\partial p_{k,i}} + \sum_{i,k} \frac{\partial \dot{q}_{k,i}}{\partial q_{k,i}} ,

now, using the equations of motion (2.1.2) and (2.1.3), we may rewrite equation (2.2.3):

(2.2.4)    \nabla \cdot V = \sum_{i,k} \left(-\frac{\partial^2 H}{\partial q_{k,i}\,\partial p_{k,i}}\right) + \sum_{i,k} \left(\frac{\partial^2 H}{\partial p_{k,i}\,\partial q_{k,i}}\right) = 0 .

We shall notice that the equation (2.2.4) expresses the fact that the volume of
phase space is conserved by the dynamics (like an incompressible fluid in some
sense). This last property allows us to deduce the Liouville equation

(2.2.5)    \frac{d\rho}{dt} = \frac{\partial \rho}{\partial t} + V \cdot \nabla \rho = 0
and using again the equation of the motion we can rewrite this equation in its
classical form
(2.2.6)    \frac{\partial \rho}{\partial t} = \{H, \rho\}
where we introduce the notion of Poisson brackets {· , ·}. These brackets apply to
functions defined on spaces with even dimensions like the phase space; for instance,
in two dimensions it writes:

\{f, g\} = \frac{\partial f}{\partial x}\frac{\partial g}{\partial y} - \frac{\partial f}{\partial y}\frac{\partial g}{\partial x} ,

and this can be simply generalized to higher dimensions by associating x and
y with canonically conjugated variables and adding the successive terms.
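The incompressibility expressed by (2.2.4) can be illustrated numerically (a sketch added here, not part of the original text): for the pendulum H = p^2/2 + (1 − cos q), one step of a symplectic Euler scheme has Jacobian determinant equal to 1, so the discrete-time map, like the exact flow, conserves phase-space area. Step size and test points below are arbitrary choices.

```python
import math

DT = 0.1  # arbitrary step size

def step(q, p):
    """One symplectic Euler step of the pendulum H = p^2/2 + (1 - cos q)."""
    p2 = p - DT * math.sin(q)   # dp/dt = -dH/dq = -sin(q)
    q2 = q + DT * p2            # dq/dt =  dH/dp = p
    return q2, p2

def jacobian_det(q, p, h=1e-6):
    """Numerical Jacobian determinant of the map (q, p) -> step(q, p)."""
    qp1, qm1 = step(q + h, p), step(q - h, p)
    pp1, pm1 = step(q, p + h), step(q, p - h)
    dq_dq = (qp1[0] - qm1[0]) / (2 * h)
    dp_dq = (qp1[1] - qm1[1]) / (2 * h)
    dq_dp = (pp1[0] - pm1[0]) / (2 * h)
    dp_dp = (pp1[1] - pm1[1]) / (2 * h)
    return dq_dq * dp_dp - dq_dp * dp_dq

# Liouville: phase-space area is conserved (det = 1) everywhere
for q, p in [(0.3, 0.0), (1.5, 0.7), (2.9, -1.2)]:
    assert abs(jacobian_det(q, p) - 1.0) < 1e-6
```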

2.2.1. Some exercises.

Properties of the Poisson brackets. You may already have seen these operators,
as they appear often in different areas of physics; if you haven't, doing the
computations below once in your life is not a bad thing, as it will help you remember
their properties.

(1) Compute as a function of {f, g} the following expressions: {g, f }, {λf, µg},
{f (λx, µy), g(λx, µy)} where λ and µ are constants.
(2) Compute {f, f }, {f, g(f )}
(3) Prove equation (2.2.6) starting from (2.2.5), deduce from this that energy
is conserved.
(4) Estimate the following sum {a, {b, c}} + {b, {c, a}} + {c, {a, b}}
(5) Compute d{g, f }/dt
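A quick numerical companion to exercises (1)-(2) (a sketch added here, not part of the original text): a finite-difference Poisson bracket in a single pair of conjugate variables (x, y), used to check the antisymmetry {g, f} = −{f, g} and {f, f} = 0. The test functions and evaluation point are arbitrary choices.

```python
def poisson(f, g, x, y, h=1e-5):
    """{f, g} = df/dx dg/dy - df/dy dg/dx, via central finite differences."""
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    dgdx = (g(x + h, y) - g(x - h, y)) / (2 * h)
    dgdy = (g(x, y + h) - g(x, y - h)) / (2 * h)
    return dfdx * dgdy - dfdy * dgdx

f = lambda x, y: x * x * y   # arbitrary test functions
g = lambda x, y: x + y * y

# antisymmetry {g, f} = -{f, g}, and {f, f} = 0:
assert abs(poisson(g, f, 0.7, 1.3) + poisson(f, g, 0.7, 1.3)) < 1e-8
assert abs(poisson(f, f, 0.7, 1.3)) < 1e-8
```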

2.2.2. Remarks on the Liouville equation. We shall insist on a few important
properties of the Liouville equation (2.2.6).
(1) The equation is linear in ρ.
(2) When considering a macroscopic variable ⟨K⟩ which describes the state
of a system, such as the energy, this variable will necessarily correspond
to an average over the microscopic states of a function K. We will
therefore be able to write, up to a normalization factor,

(2.2.7)    \langle K(t) \rangle = \int_\Gamma \rho(p, q, t)\, K(p, q)\, d\Gamma .

We can then formally consider the equation governing the evolution of ⟨K⟩:

(2.2.8)    \frac{\partial \langle K \rangle}{\partial t} = \{H, \langle K \rangle\} .

We shall notice that there is a strong analogy between statistical
physics and quantum mechanics described using the Heisenberg formalism.
In both cases the Hamiltonian acts as the operator of time translation.
When the analogy is made, the notion of imaginary time appears, through
the transformation t → it.
(3) An immediate remark can be made. Since the energy of the system is
defined by E = ⟨H⟩, by using the expression (2.2.8) we obtain immediately
that E = const and is independent of time.

2.3. Notion of equilibrium

When considering a system with a large number of interacting microscopic
constituents (particles), we are usually not concerned with the individual
particle motion, i.e. knowing that a particular particle is for instance at a given
location with a given speed; we are interested in global variables that are able
to describe the macroscopic state, like temperature or pressure. The system may
therefore be at some macroscopic equilibrium, in the sense that these variables are
more or less constant with time, while microscopic motion still exists.
Because of this, it becomes clear that when considering a macroscopic system
the notion of equilibrium differs from the notion of equilibrium emerging in classical
mechanics. Indeed, in the latter case, for a system at equilibrium either nothing
is moving or everything is in uniform translation (in a Galilean reference frame). But for a
macroscopic system, as we shall revisit later in this course, the kinetic approach to
the temperature of a perfect gas tells us immediately that, even when the temperature
is constant, the microscopic constituents of the system are not at mechanical equilibrium.

A macroscopic system will therefore be at equilibrium in the same spirit as a
thermodynamical one, meaning that the macroscopic variables describing the
system (temperature, energy, volume, magnetization etc...) are constant on average.
For instance, for a typical intensive variable noted A we have:

(2.3.1)    A = A_0 + a(t) , \qquad a(t) = O\!\left(\frac{1}{\sqrt{N}}\right) , \qquad \overline{a(t)} = 0 ,
where N represents the number of constituents of the system and ā denotes the
temporal average of a. In the equation above (2.3.1), we considered an intensive
variable, something that in the thermodynamical language means a variable that
does not depend on the size of the system. In our statistical approach, when
considering a system with a finite (but large) number of particles, we notice that
the fluctuations around the average are small, and vanish with the system
size. Should we have considered an extensive variable, which changes with system
size (very often the volume in thermodynamics), like for instance the total heat
Q stored in the system, we can associate a heat density that would be its related
intensive variable and that will scale like (2.3.1); to get the total heat, we just need
to multiply by N, so the fluctuations around the average will scale like √N, meaning
that they diverge with N. But if we recall the shape of the √x function, when
x is large we have √x ≪ x, so the fluctuations will be negligible when compared to
the average total value.
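The 1/√N scaling of Eq. (2.3.1) can be observed in a toy experiment with independent ±1 "spins" (a hypothetical minimal model added here for illustration, not tied to a specific system of the text): the empirical fluctuation of the per-particle average shrinks like 1/√N, so multiplying N by 25 should shrink it roughly 5-fold.

```python
import math
import random

random.seed(0)  # arbitrary seed, for reproducibility

def fluctuation(n_particles, n_trials=500):
    """Empirical std deviation of the per-particle mean of n_particles spins."""
    means = []
    for _ in range(n_trials):
        s = sum(random.choice((-1, 1)) for _ in range(n_particles))
        means.append(s / n_particles)
    mu = sum(means) / n_trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / n_trials)

f_small, f_large = fluctuation(100), fluctuation(2500)

# expect f(N) ~ 1/sqrt(N): ratio close to sqrt(2500/100) = 5
assert 4.0 < f_small / f_large < 6.5
```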

2.3.1. Remarks.
(1) We recover the fact that at thermodynamical equilibrium (limit N → ∞),
macroscopic intensive variables of the system are constants.
(2) When the system is at equilibrium the probability density ρ is independent
of time: ∂ρ/∂t = 0. N.B. This is a necessary condition but not a sufficient
one to be at thermodynamical equilibrium.

2.4. The Thermodynamic Limit

In the preceding section we just mentioned the notion of thermodynamic limit.
Before going further and considering equilibrium systems, we shall examine this
notion a bit.

2.4.1. The ergodic hypothesis. Let us consider an isolated mechanical
system defined by a Hamiltonian H(P, Q) and composed of N constituents, with N a
large number. We are then not really interested in the details of the microscopic
dynamics, but rather in the macroscopic variables of the system, assuming it
is at equilibrium. Given the equation (2.3.1), we can naively infer that we only need
to perform a temporal average of the system to obtain the values of the macroscopic

Figure 2.4.1. Classical phase portrait of a low-dimensional
Hamiltonian chaotic flow. Example taken from an ABC flow.

variables. However, for such a temporal average to be performed we need to compute
the individual microscopic trajectories given an initial condition, but this
computation is usually intractable. Moreover, the likely presence of chaotic phenomena
at the microscopic level implies that we would need to know the initial condition
exactly to know the outcome exactly. This last point brings up a problem linked to the
microscopic world, for which we expect quantum effects to be at play, and thus linked
to the uncertainty principle of quantum mechanics.

On the other hand, the presence of so-called molecular chaos can help. Indeed,
even in low-dimensional systems Hamiltonian chaos exists, as illustrated in the phase
portrait in Fig. 2.4.1. In this system, which describes the evolution of field lines
of a three-dimensional flow, we observe a so-called mixed system where regions of
chaos (gray areas) and regular trajectories (closed lines) co-exist. When the initial
condition is in the chaotic region the trajectory will cover more or less uniformly the
whole “chaotic sea”; then, admitting that we know sufficiently well its extension and
borders, we should be able to replace the average over time (or over the trajectory)
by an average over space, and this for any initial condition in this region. Thus, let
A_0 be the macroscopic intensive variable of interest; the temporal average

(2.4.1)    A_0 = \overline{A(t)} = \lim_{t \to \infty} \frac{1}{t} \int_0^t A\left(Q(t'), P(t'), Q(0), P(0)\right) dt' ,

where (Q(0), P(0)) stands for the initial condition at t = 0, can be replaced by the
average over phase space

(2.4.2)    A_0 = \overline{A(t)} = \langle A \rangle = \int A(P, Q)\, \mu(P, Q)\, dP\, dQ .

In this last equation, we shall note the presence of a function µ; in fact the
notation µ(p, q)dpdq corresponds to the considered ergodic measure in phase space
of the area dpdq. In less barbaric terms, most of the time µ can be associated
to a probability density, which weights phase space (puts more weight on some
regions) depending on the time spent by the trajectory in a specific region of
phase space. For instance, looking at the chaotic sea in Fig. 2.4.1, we can notice
that the region 0 < Y < π is slightly darker than the region above it. This means
that the trajectory spends more time in this region (points are plotted at equal
time intervals); therefore, when we replace the time average by the spatial one, we
take this phenomenon into account through the function µ.
The founding idea of statistical physics relies on the hypothesis of molecular
chaos, and on this possibility to exchange time and space averages. One should
notice that µ does not depend on time, so assuming a microscopically conservative
system, µ should be a stationary solution of the Liouville equation (this is left to
be checked).
In fact, in general the possibility to replace a temporal average by a spatial
one using an ergodic measure µ is not enough; indeed, it is possible that more than
one measure µ satisfies these conditions. For instance, let us consider the system
displayed in Fig. 2.4.1. In this system, let us pick a closed regular trajectory and let
us consider a function µ which is more or less constant on the line in phase space
and zero elsewhere. This is an ergodic measure, in the sense that the temporal
average of a typical trajectory (with an initial condition in an area of non-zero
measure, meaning an initial condition on the line) will be identical to the spatial

average. However, such a situation may induce problems for the second principle
of thermodynamics: the trajectory being regular, no “disorder” is created as time
grows. We therefore need an extra condition on the dynamics for the foundation
of statistical mechanics. A sufficient condition (but usually not a necessary one)
is that the measure is mixing. This notion of mixing writes mathematically in the
following way:

(2.4.3)    \lim_{\tau \to \infty} \langle f(p, q, t)\, g(p, q, t + \tau) \rangle = \langle f \rangle \langle g \rangle ,

its meaningful content being that it implies a condition on how time correlations
decrease with time. We may expect that this condition is realized in the chaotic
zone of Fig. 2.4.1.
At the heart of statistical physics lies thus the hypothesis of molecular chaos
which allows us to avoid computing microscopic trajectories and replace temporal
averages with spatial ones.
This exchange of averages has the consequence that equilibrium statistical
physics, in the end, does not care about the underlying microscopic dynamics. A
priori, we can then imagine that for a given system, we can impose an ad hoc
microscopic dynamics, as long as this one is compatible with the hypothesis of
molecular chaos, a feature that can be very useful when performing numerical
simulations.
Remark: A perfect gas is composed of point particles which do not interact.
If we consider such an isolated system, the integration of the equations of motion
is trivial and there is no chaos. However, the ergodic and mixing properties have
been rigorously demonstrated in a simple system, the so-called Sinai billiard (a
billiard within a square domain with a disk at the center; a particle bounces
elastically on the rim of the disk and on the walls of the square). We can then
reasonably expect that this is indeed the case for a real gas: non-pointlike, more or
less spherical particles interacting weakly in a given container. Note also that, as
already mentioned, mixing is a sufficient condition, but not a necessary one.
Indeed, in a gas we have a lot of identical particles, which also allows us to use
the law of large numbers and envision potential permutations between the particles'
identities that could circumvent problems with the mixing property. In this sense
the notion of molecular chaos has to be understood in this larger context, and
not necessarily confused with the notion of chaos, which is typically a dynamical-
systems notion.
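The exchange of time and space averages can be illustrated on a standard toy model, the logistic map x → 4x(1 − x) (chosen here for convenience; it is not the flow of Fig. 2.4.1 and not part of the original text). Its invariant density on [0, 1] is μ(x) = 1/(π√(x(1 − x))), for which the space average of the observable A(x) = x is 1/2; a long time average from a generic initial condition should match.

```python
def time_average(x0, n_iter=200_000, transient=1000):
    """Time average of A(x) = x along an orbit of the logistic map x -> 4x(1-x)."""
    x, total = x0, 0.0
    for i in range(n_iter + transient):
        x = 4.0 * x * (1.0 - x)
        if i >= transient:   # discard an initial transient
            total += x
    return total / n_iter

# space average of x over mu(x) = 1/(pi*sqrt(x(1-x))) is exactly 1/2;
# the time average from a generic initial condition agrees:
assert abs(time_average(0.1234) - 0.5) < 5e-3
```

The design choice here mirrors the text: we never solve for the orbit analytically; ergodicity lets a single long trajectory stand in for the whole invariant measure.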

2.4.2. The order of the limits. Finally, before moving to statistical physics
per se, it is important to consider one last important detail. Indeed, in order to
replace temporal averages with spatial ones, a given trajectory starting from a given
initial condition must have the time to cover densely the accessible phase space.

Hence in order to exchange the averages in the equation (2.4.1), we must take the
limit t → ∞. This limit is therefore necessary to perform equilibrium statistical
physics.
On the other hand, the study of statistical physics has the goal of describing the
thermodynamical properties of a given system. This underlies a second limit, the
limit N → ∞ of the number of constituents.
Therefore given a classical mechanical system, the thermodynamic limit entails
the succession of two limits:
(1) First we need to take the limit in time t → ∞, which allows us to perform
statistical physics;
(2) then the limit N → ∞, for fluctuations to disappear and intensive state
variables to be constant.
N.B: The order in which these limits are performed is important, as it is not at
all clear that these two limits commute. In fact, for some systems it is possible
to show that the results obtained differ depending on which order is performed.
Remark: These are mathematical results; in all physical situations the limits
t → ∞ and N → ∞ are idealizations: we are not going to wait for the “end of
times”, and moreover cosmology currently asserts that our Universe is in expansion
and therefore finite, so the limit N → ∞ is likewise not realistic. It is then the
job of the physicist to take those “imperfections” into account and to decide, given
the considered system, what can be assumed to be finite or infinite.

2.4.3. Notion of coarse graining. To conclude on the foundations of statistical
physics, we shall revisit the notion of “coarse graining”. It is indeed this notion which
explains how a microscopically reversible system (t → −t) can become irreversible
at the macroscopic level, with a growth of entropy.
With the discovery of quantum mechanics and the Heisenberg principle, and
later on the development of measure theory, physicists realized that it is impossible
to know exactly the position in phase space of a given mechanical system. In most
cases the constraints imposed by the uncertainty principle are beyond the
reach of any experimental device or numerical computation, but during a numerical
computation we are limited by the precision of the computer, and experimental
measurements are made up to a given resolution. In the same spirit, the decimal
system representing numbers often implies that we measure something with a finite
number of figures. Hence we shall know the position (p, q) of a microscopic state
only up to a (δp, δq) precision. In other words, we have an uncertainty δΓ = δp δq
on a measurement in phase space.

Figure 2.4.2. A real trajectory in phase space, sampled with a
finite resolution h = δp δq.

Given this fact, we can partition the phase space into cells, as illustrated for
instance in Fig. 2.4.2. In this figure, we have also represented a possible trajectory
in this phase space. Now let us consider again the Liouville equation (2.2.6), which
translates the conservation of phase-space volume by the microscopic dynamics.
Thus, should we consider an ensemble of initial conditions contained in a ball whose
radius would for instance be smaller than our resolution, the ball would then occupy
one cell of our partition (one “tile” in Fig. 2.4.2). Let us now let the dynamics
evolve. If we assume molecular chaos, the ball is going to deform, stretch, fold
etc... Thus, even though the inscribed volume is conserved, some pieces of the
deformed ball will find themselves on other “tiles”. If we now want to localize the
deformed ball in phase space, given our imprecise measurements, we shall find out
that it is localized on a given number n(t) of tiles. With the molecular chaos
hypothesis, this number will necessarily grow until more or less all of the accessible
phase space is occupied. We thus start with an initial volume of less than δΓ (one
tile) and end up with a final volume Γ_E, even though the real volume of the ball
is constant.

This phenomenon of smoothing, related to the finiteness of our measurements,
is called a coarse graining procedure. As such, the growth of the occupied phase
space corresponds in fact to a growth of disorder due to the accumulation of our
errors, and can be simply translated as an irreversible evolution towards a stationary
state which, at equilibrium, maximizes the entropy of the system.
In what follows we will consider only equilibrium states for which the afore-
mentioned hypotheses are valid. We can therefore exchange the averages.
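The growth of the number of occupied tiles n(t) can be illustrated with a standard chaotic, area-preserving toy map, Arnold's cat map (x, y) → (x + y, x + 2y) mod 1 (chosen here for convenience; it is not taken from the text). We start from a cloud of points inside one cell of a 50 × 50 partition and count how many cells the cloud touches as time goes on; the cloud stretches and folds until it is smeared over much of the partition, even though its true area is conserved.

```python
GRID = 50  # arbitrary partition resolution

def cat_map(x, y):
    """Arnold's cat map on the torus: area-preserving and mixing."""
    return (x + y) % 1.0, (x + 2 * y) % 1.0

def occupied_cells(points):
    """Number of tiles of the GRID x GRID partition touched by the cloud."""
    return len({(int(x * GRID), int(y * GRID)) for x, y in points})

# cloud of 20 x 20 points filling part of one tile of the partition
points = [(0.30 + i * 1e-3, 0.30 + j * 1e-3) for i in range(20) for j in range(20)]

counts = []
for _ in range(12):
    counts.append(occupied_cells(points))
    points = [cat_map(x, y) for x, y in points]

# initially one tile; n(t) then grows as the cloud stretches and folds
assert counts[0] == 1
assert counts[3] > counts[0]
assert counts[-1] > 100
```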

2.5. The micro-canonical ensemble

To start, we shall continue on what we did before and consider an isolated
mechanical system described by a given Hamiltonian H. We shall not care anymore
about the microscopic dynamics, and assume molecular chaos; however, we have to
discuss on which part of phase space we shall perform our averages.
Since the system is isolated, the energy E of the system will necessarily be
conserved by the dynamics. We therefore have to perform our averages in phase space
on the constant-energy hypersurface. Note also that, depending on the symmetries
present in the system, other quantities may be conserved, like for instance the total
angular momentum or the total momentum. In the simple case for which only the
energy is conserved, the microcanonical ensemble assumes that all accessible
configurations in phase space have the same probability to be realized (same level
of gray in phase space everywhere if we think of Fig. 2.4.1); given this hypothesis,
averaging over phase space implies a probability density in phase space of the type:

(2.5.1) ρ (p, q) = δ (H (p, q) − E) .

Macroscopic variables are then computed following the equation (2.2.7).


Thus, when we consider an isolated system with this kind of probability density,
we are directly referring to the micro-canonical ensemble.

2.5.1. The entropy. When we discussed the notion of coarse graining, we


envisioned the notion of growing disorder. This notion of disorder is intimately
linked to the notion of entropy, that you have heard about in thermodynamics.
Once the stationary state is reached, a priori all accessible phase space has been
occupied, a measure of disorder could then be the volume of this accessible phase
space:
(2.5.2)    \Omega(E) = \int_\Gamma \delta\left(H(p, q) - E\right) d\Gamma .

However, the dimension of phase space is 2ND; we can then expect the volume
of the hypersurface to scale like

\Omega(E) \sim \gamma^{N-1} ,

where γ is an elementary volume for each constituent. In other words, Ω(E) is an
exponential function of N. Using this type of reasoning, we can quite simply
create an extensive variable measuring disorder by taking the logarithm of Ω(E).
It is there that the genius of Boltzmann appears, and he writes:

(2.5.3)    S = -k \log \frac{1}{\Omega(E)} = k \log \Omega(E) ,
by reasoning with probabilities and saying that all states occupying the volume
Ω(E) are equally probable. He notices as well that, up to a constant (useless
in traditional thermodynamics), this quantity corresponds to the actual thermody-
namic entropy: the state function that for a gas writes

S = \int \frac{\delta Q}{T} ,

where T is the temperature in Kelvin and Q the quantity of heat, provided k (the
Boltzmann constant) is such that R = k N_A, where R is the perfect gas constant
and N_A the Avogadro number.

Remark 1. Going back to the limit N → ∞, we can notice that a few moles
are sufficient and thus that NA ∼ ∞ (at least for the simplest situations).

Remark 2. In the microcanonical ensemble (also sometimes noted µ-canonical),


the quantity to determine is thus Ω(E); it is the one that allows us to compute the
entropy for a given E. Once we have it, we can directly obtain the other variables,
for instance the temperature

(2.5.4)    \frac{1}{T} = \frac{\partial S}{\partial E} .

Remark 3. Sometimes the total energy is not sufficient to determine a state.
In the above equation it should be noted that we consider a fixed number of
constituents N (that will always be the case in the microcanonical ensemble of
statistical physics before the thermodynamic limit). We may also sometimes
consider situations where, for a given N, the total volume V is fixed; the partial
derivative in Eq. (2.5.4) is then to be understood at constant volume (it is also
understood that it is always taken at constant N). In this ensemble the extensive
variables are considered constants (isolated system).

Remark 4. Logically, the number in the logarithm should be dimensionless,
so sometimes we define Ω(E) as Ω(E)/h^{Nd}, where h has the dimension of an
action (p × q) and d is the dimension of the space in which the particles evolve. This
actually has no consequence since it is a constant and log(a × b) = log(a) + log(b).

2.5.2. Application problem 1: The perfect gas. We consider a perfect
gas, composed of N particles located in a volume V.
(1) List all the approximations/assumptions that characterize a perfect gas
and deduce from them that the Hamiltonian of the system writes

(2.5.5)    H(p, q) = \sum_{i=1}^{N} \frac{p_i^2}{2m}

(2) Let us consider a given total energy E; compute the volume of accessible
phase space Ω(E, N, V) and deduce from it the entropy. (You may recall
that you computed earlier in this course the volume of a hypersphere.)
(3) Compute the temperature of the system. Recall the definition of the free
energy F from thermodynamics, and compute it. Using either the entropy
S or the free energy F, compute the pressure P of the gas. Recover the
perfect gas law (Boyle-Mariotte law).
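As a hedged numerical check of the result this problem leads to (a sketch added here, not a solution from the text): up to E-independent constants and to leading order in N, the entropy of the perfect gas is S(E, N, V) = k [N log V + (3N/2) log E] + const, so 1/T = ∂S/∂E yields E = (3/2) N k T. Units and parameter values below are arbitrary choices.

```python
import math

K = 1.0    # Boltzmann constant in arbitrary units
N = 1000   # number of particles (arbitrary)
V = 2.0    # volume (arbitrary)

def entropy(E):
    """S(E) up to E-independent constants (they do not affect dS/dE)."""
    return K * (N * math.log(V) + 1.5 * N * math.log(E))

def temperature(E, h=1e-6):
    """1/T = dS/dE, evaluated by a central finite difference."""
    dS_dE = (entropy(E + h) - entropy(E - h)) / (2 * h)
    return 1.0 / dS_dE

E = 123.0
T = temperature(E)
# recover the equipartition result E = (3/2) N k T
assert abs(E - 1.5 * N * K * T) < 1e-3 * E
```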

2.5.3. Application problem 2: A discrete system of spins. We consider
a system of N three-level particles (they can be in three states) whose Hamiltonian
is of the form

H = -h \sum_i S_i , \qquad S_i = -1, 0, 1 ,

where h is a positive constant (this can represent the energy levels of spin-1 particles
in an external magnetic field).
(1) In what states can one spin be, two spins, etc.? Deduce from this the
number of possible configurations. Given these considerations, what is the
phase space?

(2) We want to find the ratio n_{-1}/n_1 in the microcanonical ensemble as a
function of temperature when N → ∞. Recall the definition of the en-
tropy and temperature in this ensemble. Assuming we know these, what
is then the free energy F?

(3) We consider a fixed energy E; compute E as a function of the n_S.



(4) The total number of spins is N. Let us assume a fixed number n_0 of
spins in the state S = 0. What is the maximum value n_{0max} of n_0 for a
given energy E? The minimum value n_{0min}? How many possibilities do
we have to realize a system with n_0 spins in the state S = 0?
(5) With E, N and n_0 fixed, show that n_1 and n_{-1} are then fixed. How many
possibilities are there to choose n_1 spins in state S = 1, once the spins
with S = 0 have been chosen? Is there then any freedom left to choose which
spins are in state S = -1? Deduce from this an expression of the number
of accessible states Ω(N, E) as a sum over n_0 in the form

\Omega(N, E) = \sum_{n_0 = n_{0min}}^{n_{0max}} \omega(n_0, N, E) .

(6) This sum is not easy to compute exactly. Let us try to estimate it. We
define x = n_0/N, y = n_1/N, z = n_{-1}/N. Show that using the approximate
Stirling formula n! ∼ n^n e^{-n} we get

\omega(n_0, N, E) \approx \left(\frac{1}{x^x y^y z^z}\right)^N = f(x)^N ,

and that as a consequence

\Omega(N, E) \approx N \int_0^{1-\varepsilon} f(x)^N\, dx ,

where ε will be specified.


(7) Let us now assume that f has a maximum at a value x̃ in the interval
]0, 1 − ε[. Let us write x = x̃ + ξ/√N and expand f around x̃. As a
preliminary, show that

\lim_{n \to \infty} \left(1 + \frac{a}{n}\right)^n = e^a .

Deduce from this that

f(x)^N \approx f(\tilde{x})^N e^{-\sigma \xi^2} ,

where σ is a constant that will be specified. Deduce from this that

\Omega(N, E) \approx \sqrt{N}\, f(\tilde{x})^N A ,

where A is a constant integral. Deduce from this a simple expression for
the entropy S(E, N) of the system as a function of f(x̃).

(8) To finalize our analysis we now just have to compute x̃. It is easier to
compute the maximum of log f. Write

f(x) = \frac{1}{x^x\, y(x)^{y(x)}\, z(x)^{z(x)}}

and find an equation for x̃; show that it implies

\frac{1}{2} \log \frac{yz}{x^2} = 0 .

Deduce from this that

\tilde{x} = \frac{1}{3}\left(\sqrt{4 - 3\varepsilon^2} - 1\right) ,

where ε = E/(Nh).
(9) Show then that the temperature satisfies the relation

\beta = -\frac{1}{2h} \log \frac{z(\tilde{x})}{y(\tilde{x})} .

Deduce from this that

r^2 = \frac{n_{-1}}{n_1} = e^{-2\beta h} , \qquad \text{with } r = e^{-\beta h} .

(10) Compute x̃, y(x̃), z(x̃) as functions of r. Compute then the entropy, the
energy and the temperature as functions of r and possibly N. Deduce
from this the free energy.
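Question (8) can be checked by brute force (a sketch added here, not a solution from the text): maximize f(x) numerically under the constraints y = (1 − x − ε)/2 and z = (1 − x + ε)/2 (which encode y + z = 1 − x and z − y = ε), and compare the location of the maximum with the closed-form x̃. The value of ε below is an arbitrary choice.

```python
import math

EPS = 0.4  # arbitrary value of eps = E/(N h), with |eps| < 1

def log_f(x):
    """log f(x) = -(x log x + y log y + z log z) under the two constraints."""
    y = (1 - x - EPS) / 2
    z = (1 - x + EPS) / 2
    return -(x * math.log(x) + y * math.log(y) + z * math.log(z))

# scan x on a fine grid inside (0, 1 - |eps|), where y, z > 0
xs = [i / 100000 for i in range(1, int((1 - EPS) * 100000))]
x_num = max(xs, key=log_f)

# closed-form maximizer from question (8)
x_formula = (math.sqrt(4 - 3 * EPS ** 2) - 1) / 3
assert abs(x_num - x_formula) < 1e-4
```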
