Statistical Physics
Clément Leloup
These lecture notes have been written as support material for the physics classes in
the first year of the master's program at the University of Science and Technology of Hanoi.
There is a vast literature on statistical physics available to complement this introductory
course. Here are a few references to start with:
These are all standard, internationally recognized textbooks on the topic. The first is
more introductory: it reviews the fundamental aspects of the question as well as the
main results of macroscopic thermodynamics before diving into the topic of statistical
mechanics. The other two are more advanced and, although they are self-contained and
include all the material presented here, they go into more detail and well beyond the
content of this course.
Contents

1 Physics 1
1.1 Preliminary considerations
1.1.1 From the microscopic to the macroscopic description
1.1.2 Statistical ensembles
1.1.3 Ergodic hypothesis
1.1.4 The quantum description
1.2 Microcanonical ensemble
1.2.1 The microcanonical distribution
1.2.2 Statistical entropy
1.2.3 Sub-systems at equilibrium
1.2.4 Examples
1.3 Canonical ensemble
1.3.1 General description of the canonical system
1.3.2 The thermostat and canonical distribution
1.3.3 The partition function
1.3.4 The thermodynamic potential
1.3.5 Examples

2 Physics 2
2.1 Grand canonical ensemble
2.1.1 The grand canonical distribution
2.1.2 The grand potential
2.1.3 Examples
Physics 1
– its volume V ,
– its pressure P ,
– its temperature T ,
and that some of these quantities may be related by an equation of state, e.g. the well-known ideal gas law P V = n R T, with n the number of moles and R the ideal gas constant,
further reducing the number of required macroscopic quantities. In principle, one could also
use the momentum and angular momentum of the system, which are likewise constant at
equilibrium; but since there is always a change of reference frame that makes these vanish,
we will consider in the remainder of these notes that the system is globally at rest, even
though its microscopic particles are moving.
Although the laws of thermodynamics that rule the evolution of macroscopic systems
were established without any consideration of their microscopic constituents, the goal of
statistical physics, which started in the second half of the 19th century and whose develop-
ment is still active to this day, is to relate the thermodynamic macroscopic quantities
to the microscopic dynamics using a statistical description of the latter.
where we have defined dq = dq1 . . . dqN and dp = dp1 . . . dpN . Equivalently, in quantum
statistical physics we can define the probability for the system to be in a given state of
the state space.
From these associated probabilities in the Gibbs ensemble, macroscopic quantities f can
be computed as statistical averages:
f̄ = ∫ f(q, p) ρ(q, p) dq dp   (1.2)
These are ensemble averages, i.e. averages over distinct micro-states, and are not in general
related to a physical averaging process. They will be denoted by barred quantities in the
remainder of these notes.
The probabilistic view used in statistical physics is very different from the one followed in
quantum theory. There, the evolution of the probability amplitudes is deterministic
and follows the Schrödinger equation; the probabilistic description only intervenes when
a physical observable is measured, via a fundamentally probabilistic process. On the
other hand, the probabilistic description of statistical physics is only necessary due to our
lack of information, and of computational power, regarding the system and its evolution.
Were these limitations to be overcome, the statistical description would prove
unnecessary.
(Ergodic hypothesis)   ∫ f(q, p) ρ(q, p) dq dp = lim_{T→+∞} (1/T) ∫_0^T f(q(t), p(t)) dt   (1.3)
Such systems are called ergodic systems, and this constitutes the so-called ergodic
hypothesis introduced by Boltzmann in 1871. To this day, there exists no proof that general
physical systems are ergodic, and such a statement has only been proven for very simple
systems. Therefore, this remains a fundamental hypothesis of the theory, and as such it can
be seen as a weak point of the current theory of statistical physics.
where the cα are complex numbers whose squared moduli give the probabilities to measure |Ψ⟩
in the states |α⟩.
On the other hand, the probabilities Pα of quantum statistical physics that the micro-state
is in the state |α⟩ are of a very different nature. They are real numbers, with the property
that Σ_α Pα = 1, which characterize the mixing of the physical macroscopic state in
terms of pure quantum states |α⟩ of definite energy Eα. The mean value of an observable
A in statistical physics is given by:

Ā = Σ_α Pα ⟨α | A | α⟩   (1.5)

i.e. it is built from the average of A in each state |α⟩ entering the mixture, given by ⟨α | A | α⟩.
(Microcanonical distribution)   Pα = 1 / Ω(E0, δE, N0, V0)   (1.7)
As we can see, the microcanonical distribution assumes equal probabilities for all
accessible states. This microcanonical distribution can be derived from more fundamental
principles, but so far such derivations always require the introduction of an assumption that
formalizes some notion of "maximal disorder" which cannot be further justified. It is therefore
a matter of choice to build our argument on the microcanonical distribution instead. Note
that this expression for the microcanonical distribution is only valid at equilibrium, so
our choice limits us to the treatment of systems at equilibrium, and the study of
out-of-equilibrium evolution is beyond our scope.
(Statistical entropy)   S ≡ − Σ_α Pα ln Pα   (1.8)
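As a quick numerical illustration of this definition (a minimal Python sketch, not part of the derivation; the state count Ω = 1024 is an arbitrary choice), the entropy of a uniform distribution over Ω states is ln Ω, and any non-uniform distribution over the same states has a lower entropy:

```python
import math

def statistical_entropy(probs):
    # S = -sum_a P_a ln P_a (dimensionless, i.e. in units of k_B)
    return -sum(p * math.log(p) for p in probs if p > 0)

omega = 1024
uniform = [1.0 / omega] * omega                      # microcanonical case
peaked = [0.5] + [0.5 / (omega - 1)] * (omega - 1)   # one state is favored

print(statistical_entropy(uniform))   # ln(1024) ≈ 6.931
print(statistical_entropy(peaked) < statistical_entropy(uniform))   # True
```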
This definition is also valid outside of the equilibrium, but it takes a very simple form at
equilibrium when the probabilities follow the microcanonical distribution:
S_eq = − Σ_α [1/Ω(E0, δE, N0, V0)] ln[1/Ω(E0, δE, N0, V0)] = ln Ω   (1.9)
Since the |α⟩ states span all the accessible states in Ω, the number of terms in the
sum exactly compensates the denominator. We can now introduce the historical notion of
entropy in statistical physics, introduced by Boltzmann and therefore called the Boltzmann
entropy, obtained by multiplying the statistical entropy by the Boltzmann constant kB.
We see that at equilibrium for the microcanonical ensemble it is given by:

(Boltzmann entropy)   S = kB ln Ω   (1.10)
This expression for the entropy renders manifest some of its important properties. First
of all, assume that the complete system is made of two sub-systems A and B that interact
weakly with each other; by weakly, we mean that the micro-state of one sub-system has
no sensible impact on the other. Then we have:

Ω ≃ ΩA × ΩB ⇒ S ≃ SA + SB   (1.11)

In other words, the entropy is an extensive quantity. The deviation from this equality is
extremely small in reality, given the enormous number of possible micro-states for any
realistic macro-state. This result implies that the entropy is proportional to the size of
the system, with S(E, N, V) = N · s(E/N, V/N) where s is the entropy per particle.
Finally, the most important property of entropy is that it is always increasing and is
maximal at equilibrium. This can be demonstrated from more fundamental assumptions
than the microcanonical distribution, but here we will only give a qualitative justification
of why this is the case. Assume that the macroscopic system can be in two distinct
macro-states X and Y, with SX > SY. Because the numbers of micro-states that
correspond to these macro-states are exponentials of the entropies, they are vastly
different between the two states. Since each micro-state is equiprobable, it is far
more probable to find the macro-state in the state X than in Y, and the macro-state
will evolve towards its most probable state. At equilibrium, the entropy does not increase
anymore, so it has reached a maximum.
From these properties, which characterize or even define the thermodynamic entropy, we can
identify the Boltzmann entropy with the thermodynamic entropy. In addition to giving
a systematic way of computing the entropy of a system, this identification leads to a new
interpretation of the physical meaning of entropy. While it was understood as a measure of
the microscopic disorder of a system, it now corresponds more specifically to the amount of
missing information about the system. If the entropy vanishes, S = 0, only
one micro-state corresponds to our description of the system; in other words, we know
all there is to know about it. The bigger the entropy, the more information we lack to
describe the system. From this point of view, the entropy becomes a subjective
quantity that depends on the amount of information possessed by a specific observer, and
two different observers can attribute different amounts of entropy to the same system.
(Thermodynamic temperature)   β ≡ 1/(kB T) = (1/kB) ∂S/∂E   (1.15)
In other words, when the two sub-systems are in interaction, an irreversible exchange of
energy in the form of heat takes place until they reach equilibrium, where their temperatures
are equal. In the process, the sub-system with the highest temperature loses energy
to the other sub-system. This line of reasoning remains valid when considering a
large number of sub-systems in interaction, whose temperatures will homogenize.
From these three definitions, we recover the formula that relates the variation of entropy
and the variation of E, N and V , familiar from macroscopic thermodynamics:
dS = kB β (dE + P dV − µ dN ) ⇒ dE = T dS − P dV + µ dN (1.18)
1.2.4 Examples
We end this section with practical examples of how calculation of entropy in terms of
micro-states of a same macro-state can tell us about the state at equilibrium.
Turning now towards the computation of Ωp(E0, δE, N0), we introduce the cumulative
function Op(E, N0) that gives the number of ways to arrange the N0 momenta such that the
energy is lower than E:

Op(E, N0) = ∫_0^E ωp(E′, N0) dE′ ⇒ Ωp(E0, δE, N0) = (∂Op/∂E)(E0) δE   (1.22)

where ωp = ∂Op/∂E is the density of states in energy.
This corresponds to the volume of the momentum-space region that satisfies the condition:

Σ_i (p²_{x,i} + p²_{y,i} + p²_{z,i}) ≤ 2mE   (1.23)
which is the volume of a 3N0-dimensional sphere of radius √(2mE). Using the same
discretization scheme as for the number of positions, by dividing the hyper-sphere into small
3N0-dimensional hyper-cubes of side ∆p, the number of ways to arrange the momenta in
the hyper-sphere is:

Op(E, N0) = [π^{3N0/2}/(3N0/2)!] (2mE/(∆p)²)^{3N0/2} ≃ (1/√(3πN0)) (4πemE/(3N0(∆p)²))^{3N0/2}   (1.24)
where we used the Stirling approximation to get the second expression, which is a very
good approximation as N0 is typically extremely large. From this formula, we can
determine the momentum contribution to the entropy:

Sp = kB [ln((∂Op/∂E)(E0)) + ln δE] = kB [ln Op + ln(3N0/(2E0)) + ln δE]   (1.25)
Because the number of particles N0 is very large, and ln Op scales faster than N0 while
the other terms scale much slower, the first term largely dominates the momentum
contribution to the entropy, which becomes:

Sp ≃ kB N0 [ (3/2) ln(4πE0) − (3/2) ln(3N0(∆p)²/m) + 3/2 ]   (1.26)
From equations (1.21) and (1.26), we can finally express the entropy of our gas model.
Before we do just that, we need to address a subtle question. So far, we assumed
that we can label and follow each particle, such that two micro-states that differ only by
the exchange of two particles are different. In other words, the particles of our system
are distinguishable. However, since our particles are all identical, it makes more sense
to consider them as indistinguishable, so that the actual number of micro-states that
correspond to the macro-state {E0, δE, N0, V0} needs to be divided by a factor N0!,
which, again using the Stirling approximation, leads to the total entropy of our simple
gas model:
" #
3 E0 V0 3 4πm 5
S ≃ kB N0 ln + ln − ln 2 + (1.27)
2 N0 N0 2 3 (∆x∆p) 2
From this expression of the entropy, we can determine the temperature, chemical potential
and pressure at equilibrium:

T = (2/3) E0/(kB N0)   (1.28)

µ = − (E0/N0) [ ln(4πmE0/(3(∆x∆p)² N0)) + (2/3) ln(V0/N0) ]   (1.29)

P = (2/3) E0/V0 = N0 kB T/V0   (1.30)
where the last equation gives the famous equation of state of the ideal gas.
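The chain from (1.27) to (1.30) can be checked numerically; the sketch below (in simplified units kB = m = ∆x∆p = 1, with arbitrary illustrative values for E0, V0, N0) differentiates the entropy and recovers both the temperature and the equation of state:

```python
import math

# Entropy (1.27) in units k_B = m = Δx·Δp = 1 (illustrative choice)
def S(E, V, N):
    return N * (1.5 * math.log(E / N) + math.log(V / N)
                + 1.5 * math.log(4 * math.pi / 3) + 2.5)

E0, V0, N0, h = 3.0e4, 2.0e3, 1.0e4, 1e-5
inv_T = (S(E0 + h, V0, N0) - S(E0 - h, V0, N0)) / (2 * h)     # 1/T = ∂S/∂E
P_over_T = (S(E0, V0 + h, N0) - S(E0, V0 - h, N0)) / (2 * h)  # P/T = ∂S/∂V
T = 1.0 / inv_T
P = P_over_T * T

print(abs(T - 2 * E0 / (3 * N0)) < 1e-6)   # True: eq. (1.28)
print(abs(P * V0 - N0 * T) < 1e-3)         # True: ideal gas law (1.30)
```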
The quantum treatment is similar. Because the particles do not interact, the Hamiltonian
is that of a free particle and its eigenstates are plane waves:
Ψ(x, y, z) = e^{(i/ℏ)(px x + py y + pz z)}   (1.31)
If we model the box as a cubic infinite potential well of side L = V0^{1/3}, we know from
quantum mechanics that the momentum is quantized and follows p_{x,y,z} = (ℏπ/L) n_{x,y,z},
with n_{x,y,z} three positive integers. It is however often more convenient to use periodic
boundary conditions, known as Born-von Karman (BVK) conditions, which amount to neglecting
surface effects. These are formally expressed as:

Ψ(x + L, y, z) = Ψ(x, y + L, z) = Ψ(x, y, z + L) = Ψ(x, y, z)   (1.32)

and have the effect of removing the odd integers, which reduces the allowed momentum
values to:

px = (h/L) nx, py = (h/L) ny, and pz = (h/L) nz   (1.33)
The correspondence principle implies that the energy is given, as in the classical case, by:
E = (1/(2m)) Σ_i p_i² = (h²/(2m V0^{2/3})) Σ_i (n²_{x,i} + n²_{y,i} + n²_{z,i})   (1.34)
Using the cumulative function as in the classical case, we relate the number of states to
the number of points with integer coordinates in the 3N0-dimensional sphere of radius
V0^{1/3} √(2mE0)/h. Given the orders of magnitude of V0^{1/3} E0^{1/2} as compared to h, this radius
is enormous, and we can consider the number of points with integer coordinates to be
very close to the actual volume of the sphere. The calculation being the same as in the
classical case, we only state the result:
(Ideal gas entropy)   S ≃ kB N0 [ (3/2) ln(E0/N0) + ln(V0/N0) − (3/2) ln(3h²/(4πm)) + 5/2 ]   (1.36)
We see that it is the same result as for the classical gas if we take ∆x∆p = h, which gives
the correct order of magnitude, though not quite the value suggested by the Heisenberg
uncertainty relation ∆x∆p ∼ h/(4π).
Frenkel defects
We consider an ideal crystal made of N0 atoms regularly arranged on the N0 sites of the
crystal lattice. We say that there is a Frenkel defect in the crystal lattice when a lattice
site is unoccupied and the corresponding atom is displaced to an interstice. We assume that
there are N0′ possible interstice positions in the lattice, and that the energy needed to create
a Frenkel defect is constant, equal to ε > 0. Let us estimate the entropy of a crystal
with n ≪ N0′, N0 Frenkel defects.
Since there are n defects and each defect requires an energy ε, the total energy E of
the system is given, taking the origin of energy at the state where all sites of the
lattice are occupied, by:

E = nε   (1.37)
The estimation of the number of micro-states corresponding to this situation can be
decomposed into two problems:

1. The number of ways of arranging n indistinguishable particles in the N0′ interstices,
given by the binomial coefficient (N0′ choose n).

2. The number of ways of choosing the n unoccupied sites among the N0 lattice sites,
given by the binomial coefficient (N0 choose n).

The total number of micro-states is the product of the two, which gives the entropy:

S = kB [ln(N0′!) + ln(N0!) − ln((N0′ − n)!) − ln((N0 − n)!) − 2 ln(n!)]   (1.38)
Using the Stirling formula for the logarithms of factorials, and making the energy of the
system appear, we find:

S ≃ kB [ N0′ ln N0′ + N0 ln N0 − (N0′ − E/ε) ln(N0′ − E/ε) − (N0 − E/ε) ln(N0 − E/ε) − 2 (E/ε) ln(E/ε) ]   (1.39)
The temperature of the crystal with n Frenkel defects is obtained from this expression of
the entropy as:

T = (∂S/∂E)^{−1} = (ε/kB) [ ln(N0′ε/E − 1) + ln(N0ε/E − 1) ]^{−1} ≃ (ε/kB) [ ln(N0N0′ε²/E²) ]^{−1}   (1.40)
Re-introducing the number of defects n = E/ε, we find a relation between n and the
temperature of the crystal:
n ≃ √(N0 N0′) exp(−ε/(2kB T))   (1.41)
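As a consistency check (a Python sketch with illustrative values for N0 and N0′, in units kB = ε = 1), plugging n(T) from (1.41) back into the approximate temperature of (1.40) recovers T:

```python
import math

N0, N0p, eps = 1e22, 4e21, 1.0   # illustrative values, k_B = 1

def n_of_T(T):                    # eq. (1.41)
    return math.sqrt(N0 * N0p) * math.exp(-eps / (2 * T))

def T_of_E(E):                    # approximate form of eq. (1.40)
    return eps / math.log(N0 * N0p * eps**2 / E**2)

T = 0.02
n = n_of_T(T)
print(n / math.sqrt(N0 * N0p))          # tiny defect fraction at low T
print(abs(T_of_E(n * eps) - T) < 1e-9)  # True: the two relations agree
```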
Because the oscillators are weakly coupled, the Hamiltonian of the system can be decomposed
as the sum of the Hamiltonians of the individual oscillators:

H = Σ_{i=1}^{N0} Hi = Σ_{i=1}^{N0} (Ni + 1/2) ℏω ⇒ E = (Q + N0/2) ℏω   (1.43)

where Ni is the number operator of oscillator i, with eigenvalues ni, and Q = Σ_i ni.
The macro-state is therefore determined by Q and we have to count the number of micro-states
{ni} such that Σ_i ni = Q. This corresponds to the number of ways of distributing Q
indistinguishable quanta among the N0 oscillators.
Assuming that N0, Q ≫ 1, we use the Stirling formula and find the entropy of the system:

S ≃ kB [(N0 + Q) ln(N0 + Q) − Q ln Q − N0 ln N0]
  ≃ kB [ (N0/2 + E/ℏω) ln(N0/2 + E/ℏω) − (E/ℏω − N0/2) ln(E/ℏω − N0/2) − N0 ln N0 ]   (1.45)
From this expression of the entropy, we get the temperature of the chain of oscillators:
" !#−1
ℏω N0
T ≃ ln 1 + E (1.46)
kB ℏω
− N20
From the expression for the temperature, we find that (N0 + Q) /Q = exp (ℏω/kB T ),
which gives the final expression for the probability Pi (n) at temperature T :
Pi(n) = (1 − e^{−ℏω/(kB T)}) e^{−nℏω/(kB T)}   (1.49)
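The distribution (1.49) is geometric in n; as a small numerical check (a Python sketch with an arbitrary value of x = ℏω/kBT), it is normalized and its mean reproduces the occupation Q/N0 = 1/(e^{ℏω/kBT} − 1) found above:

```python
import math

x = 0.8  # x = ħω / (k_B T), arbitrary illustrative value

def P(n):  # eq. (1.49)
    return (1 - math.exp(-x)) * math.exp(-n * x)

norm = sum(P(n) for n in range(2000))
mean = sum(n * P(n) for n in range(2000))

print(abs(norm - 1.0) < 1e-12)                    # True: normalized
print(abs(mean - 1 / (math.exp(x) - 1)) < 1e-12)  # True: mean occupation
```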
Pσρ = Pσ · Pρ (1.51)
Finally, we use the relation between the thermodynamic entropy Sρ at equilibrium and
the number of micro-states Ωρ to express the probability of finding the thermostat in the
micro-state |ρ⟩:

Pρ = exp(βEσ − Sρ(E)/kB) ⇒ Pσρ = Pσ e^{βEσ} · e^{−Sρ(E)/kB}   (1.56)
Since the global system is isolated, its probability distribution Pσρ must follow the
microcanonical distribution and depend only on its total energy; therefore, the probability
distribution of the system Σ must follow the canonical distribution:
(Canonical distribution)   Pσ = e^{−βEσ}/Z with Z = Σ_σ e^{−βEσ}   (1.57)
To summarize, we can calculate the moments of the energy distribution from the derivatives
of the partition function, the first two being:

(Mean) Ē = − ∂ln Z/∂β and (Uncertainty) (∆E)² = ∂²ln Z/∂β²   (1.61)
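As an illustration, consider a hypothetical two-level system with energies 0 and ε, for which Z = 1 + e^{−βε}. The sketch below compares numerical derivatives of ln Z with the direct averages over the canonical distribution:

```python
import math

eps, beta, h = 1.0, 0.7, 1e-4   # illustrative two-level system, k_B = 1

def lnZ(b):
    return math.log(1.0 + math.exp(-b * eps))

p1 = math.exp(-beta * eps) / (1.0 + math.exp(-beta * eps))  # canonical P(E = eps)

E_mean = -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h)             # -∂lnZ/∂β
var = (lnZ(beta + h) - 2 * lnZ(beta) + lnZ(beta - h)) / h**2    # ∂²lnZ/∂β²

print(abs(E_mean - eps * p1) < 1e-6)             # True: matches Σ Pσ Eσ
print(abs(var - eps**2 * p1 * (1 - p1)) < 1e-6)  # True: matches ⟨E²⟩ − ⟨E⟩²
```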
Assume that the energy of the macro-state of the system depends on a set of external
parameters Λn . These can be for instance: the volume of the container, the components
of an electric field, etc. The variation of Ē produced by an infinitesimal variation of those
parameters is:
dĒ = Σ_n (∂Ē/∂Λn) dΛn = δW   (1.62)
where we assume that the variation of the Λn generates only work δW and does not
generate heat, which is intuitive in the canonical ensemble situation where the thermostat
imposes its temperature on the system and any heat is instantly dissipated.
We introduce the conjugate variables Xn to the parameters Λn as:

δW = Σ_n Xn dΛn ⇒ Xn = ∂Ē/∂Λn = Σ_σ Pσ ∂Eσ/∂Λn = −(1/β) ∂ln Z/∂Λn   (1.63)
Z = Σ_σ e^{−βEσ} = Σ_{σ1} Σ_{σ2} . . . Σ_{σn} exp[−β(Eσ1 + . . . + Eσn)]
  = (Σ_{σ1} e^{−βEσ1}) . . . (Σ_{σn} e^{−βEσn}) ≡ Π_{i=1}^{n} (Σ_{σi} e^{−βEσi})
  = Π_{i=1}^{n} zi   (1.65)

where we defined the reduced partition functions:

zi = Σ_{σi} e^{−βEσi}   (1.66)
at the equilibrium, and instead a function F ∝ ln Z will be. From the expression of
entropy, we find that:

S = kB Σ_σ (e^{−βEσ}/Z) [ln Z + βEσ] = kB ln Z + Ē/T   (1.67)
Therefore, a quantity F ∝ S − Ē/T will be maximal at equilibrium; by convention, we
instead define the Helmholtz free energy:

(Helmholtz free energy)   F = Ē − T S = −(1/β) ln Z   (1.68)
The free energy is instead minimal at equilibrium, and is therefore referred to
as a thermodynamic potential. Similarly to the case of the microcanonical ensemble, F
is the fundamental thermodynamic quantity: it depends on (T, N, V) and the other
thermodynamic quantities can be derived from it at equilibrium:

S = − ∂F/∂T, µ = ∂F/∂N and P = − ∂F/∂V   (1.69)
From these, we recover the fundamental relation from macroscopic thermodynamics:
dF = −S dT − P dV + µ dN (1.70)
1.3.5 Examples
Einstein crystal
We model the crystal as a 3-dimensional system of N0 oscillators as illustrated in Figure
1.1.
In the quantum description, each oscillator i follows the Hamiltonian of a harmonic
oscillator:
Hi = P²/(2m) + (1/2) m ωi² (R − R0)²   (1.71)
If the oscillators were coupled, the system would behave as a sum of harmonic oscillators
with different frequencies (the normal modes). However, here we neglect the couplings
between oscillators, so ωi = ω0 ∀i, and we are left with a system of 3N0 independent
harmonic oscillators. The energy of each harmonic oscillator is known from quantum
mechanics to be Ei = (ni + 1/2) ℏω0, with ni a non-negative integer, which leads to the
reduced partition function for oscillator i:
zi = Σ_{ni=0}^{+∞} e^{−β(ni + 1/2)ℏω0} = e^{−βℏω0/2} Σ_{ni=0}^{+∞} e^{−βℏω0 ni} = e^{−βℏω0/2}/(1 − e^{−βℏω0}) = 1/(2 sh(βℏω0/2))   (1.72)
Therefore, we see that the reduced partition function does not depend on the given
harmonic oscillator i, which is expected from the symmetry of the problem, and the total
partition function is simply:
Z = Π_{i=1}^{3N0} zi = [2 sh(βℏω0/2)]^{−3N0}   (1.73)
Figure 1.1: Three-dimensional lattice of coupled springs with two layers in the y direction
and three layers in the x and z directions, illustrated by springs of a different color for
each direction. The actual system continues to infinity following the same structure in
the three directions.
The mean value of the energy is extracted from the partition function by using:
Ē = − ∂ln Z/∂β = (3N0ℏω0/2) · ch(βℏω0/2)/sh(βℏω0/2) = (3/2) N0ℏω0 + 3N0ℏω0/(e^{βℏω0} − 1)   (1.74)
The entropy of the crystal can also be computed from the partition function and the
mean energy:
" !! #
βℏω0 1 βℏω0
S = kB ln Z + β Ē = −3N0 kB ln 2 sh − βℏω0 − βℏω0 (1.75)
2 2 e −1
Finally, from the partition function, we estimate the heat capacity CV of the crystal from
its definition as the variation of energy under a variation of temperature at constant
volume:
CV = ∂Ē/∂T = −kB β² ∂Ē/∂β = 3N0 kB (βℏω0/(e^{βℏω0} − 1))² e^{βℏω0} = 3N0 kB [βℏω0/(2 sh(βℏω0/2))]²   (1.76)
The Einstein model of solids has the right asymptotic behavior as T → 0 and for
T ≫ ℏω0/kB; however, its predictions do not match experimental observations at
intermediate temperatures. This is expected, since we neglected the interactions between
oscillators.
Figure 1.2: The heat capacity of diamond compared to the Einstein model of solids.
The figure is taken from [1].

Taking the interactions into account, for instance in the Debye model of solids, allows us
to recover agreement with experiment. A comparison between the results of the Einstein
model and experimental results for diamond is illustrated in Figure 1.2.
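Both asymptotic limits can be checked from (1.76); the sketch below evaluates the heat capacity per oscillator (in units of 3N0kB) as a function of x = βℏω0:

```python
import math

def cv_per_oscillator(x):
    # C_V / (3 N0 k_B) from eq. (1.76), with x = βħω0
    return (x / (2 * math.sinh(x / 2))) ** 2

print(cv_per_oscillator(0.01))   # ≈ 1: Dulong-Petit limit, C_V → 3 N0 k_B at high T
print(cv_per_oscillator(20.0))   # ≈ 0: C_V vanishes exponentially as T → 0
```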
Physics 2
– The Taylor expansion of the entropy of the reservoir stops at the linear order:
Finally, we introduce the grand canonical distribution and the grand canonical partition
function Ξ:
(Grand canonical distribution)   Pσ = e^{−β(Eσ − µNσ)}/Ξ with Ξ = Σ_σ e^{−β(Eσ − µNσ)}   (2.5)
Ē − µN̄ = − ∂ln Ξ/∂β and β N̄ = ∂ln Ξ/∂µ   (2.6)
Similarly to the canonical partition function, the grand partition function can be
factorized for independent degrees of freedom by introducing the reduced grand partition
functions:

Ξ = Π_{i=1}^{n} ξi with ξi = Σ_{σi} e^{−β(Eσi − µNσi)}   (2.7)
Therefore, the grand canonical partition function Ξ plays the same role as the partition
function Z in the canonical ensemble and the number of states Ω in the microcanonical
ensemble. From the definition of the thermodynamic entropy in the grand canonical
ensemble,
S = −kB Σ_σ Pσ ln Pσ = kB ln Ξ + Ē/T − µN̄/T,   (2.8)
we define the grand potential J, expressed as:

J ≡ −kB T ln Ξ = Ē − T S − µN̄   (2.9)

from which the other thermodynamic quantities follow:

S = − ∂J/∂T, N̄ = − ∂J/∂µ and P = − ∂J/∂V   (2.10)
From these, we recover the fundamental relation from macroscopic thermodynamics:
dJ = −S dT − P dV − N dµ (2.11)
2.1.3 Examples
Quantum ideal gases
When collective quantum effects become important, which happens at very low temperature,
we have to describe the system of N0 particles by a single wave function Ψ(x1, . . . , xN0),
where xi denotes all the degrees of freedom of particle i. In order to account for
experimental results, quantum mechanics needs to be supplemented with an additional
postulate for the multi-particle wave function, called the symmetrization postulate. It
states that the multi-particle wave function of identical particles must be either:
– Totally symmetric: in this case we say that the particles are bosons and we have
Ψ(xπ1, . . . , xπN0) = Ψ(x1, . . . , xN0), where π is a permutation of the xi;
– Totally anti-symmetric: in this case we say that the particles are fermions and we
have Ψ(xπ1, . . . , xπN0) = sign(π) Ψ(x1, . . . , xN0), where sign(π) gives the sign of
the permutation (+ if the permutation is even and − if it is odd).
One immediate consequence of the anti-symmetry of the wave function for fermions is
that two particles cannot be in the same quantum state; this is the Pauli exclusion
principle. This property has major consequences and leads to different reduced grand
partition functions, depending on the bosonic or fermionic nature of the particles. If the
particles are bosons, the number of particles ni in a given state i can take all values from
0 to +∞, therefore we have:
ξi^{BE} = Σ_{ni=0}^{+∞} e^{−β(εi − µ)ni} = 1/(1 − e^{−β(εi − µ)}) ⇒ n̄i^{BE} = 1/(e^{β(εi − µ)} − 1)   (2.12)
where we assumed that the energy Eσi of micro-state σi is Eσi = ni εi , i.e. is the number
of particles in state i times the energy of state i. The superscript BE refers to the
Bose-Einstein statistic followed by bosons.
Alternatively, because the number of particles that can be in state i is restricted to be
either 0 or 1, the reduced grand partition function for fermions is:

ξi^{FD} = Σ_{ni=0}^{1} e^{−β(εi − µ)ni} = 1 + e^{−β(εi − µ)} ⇒ n̄i^{FD} = 1/(e^{β(εi − µ)} + 1)   (2.13)
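The qualitative difference between the two statistics can be seen numerically; the sketch below evaluates both occupation numbers as functions of y = β(εi − µ), for a few arbitrary sample values:

```python
import math

n_BE = lambda y: 1.0 / (math.exp(y) - 1.0)   # Bose-Einstein, eq. (2.12)
n_FD = lambda y: 1.0 / (math.exp(y) + 1.0)   # Fermi-Dirac, eq. (2.13)

for y in (0.5, 1.0, 3.0):
    print(y, n_BE(y), n_FD(y))

# Pauli exclusion: the Fermi-Dirac occupation always stays between 0 and 1,
# while the Bose-Einstein occupation diverges as y -> 0+.
print(all(0 < n_FD(y) < 1 for y in (0.01, 1.0, 10.0)))   # True
print(n_BE(0.01) > 1)                                    # True
```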
We start by treating the ideal gas of bosons. The mean number of particles is obtained
from the occupation number in each state i:
N̄ = g Σ_i n̄i^{BE} = g Σ_i 1/(e^{β(εi − µ)} − 1)   (2.14)
with g the degree of degeneracy of each energy eigenstate. Degeneracies are due to the
spin degree of freedom, so for massive bosons g = 2s + 1 with s the spin of the particles.
At high temperature, the gas behaves classically, so we must have exp(−βµ) ≫ 1, which
means that the chemical potential is negative. For bosons, this remains true even at lower
temperatures, because otherwise the occupation number of the ground state, which has
zero energy, would be negative, which is impossible. To evaluate the mean number of
particles, we go to the continuous limit and express the energy of the free state i in terms
of its momentum:

N̄ −→ (gV0/h³) ∫ d³p⃗ / (e^{β(p²/2m − µ)} − 1) = (4πgV0/h³) ∫_0^{+∞} p² dp / (e^{β(p²/2m − µ)} − 1)   (2.15)
We introduce the dimensionless parameter

x² = β p²/(2m) ⇔ p = x √(2m kB T) = x h/(√π λth)   (2.16)
where we introduced the thermal de Broglie wavelength λth. The previous equation now
reads, in terms of the dimensionless variable:

ρ̄ λth³ ≡ (N̄/V0) λth³ = (4g/√π) ∫_0^{+∞} x² dx / (e^{x²} e^{−βµ} − 1)   (2.17)
Therefore, we see that ρ̄λth³ diverges when the chemical potential becomes positive,
although it still has a definite value when µ = 0. Assuming the density of particles ρ̄ is
fixed, this value is reached when the thermal de Broglie wavelength attains its critical
value λBE:

λBE³ = (4g/(ρ̄√π)) ∫_0^{+∞} x² dx / (e^{x²} − 1) ≃ (g/ρ̄) × 2.612   (2.18)
To this thermal de Broglie wavelength corresponds a critical temperature TBE such that
λBE = h/√(2πm kB TBE). When the temperature goes below the critical temperature TBE,
the occupation number of the ground state becomes virtually infinite and the system enters
a new state that we call a Bose-Einstein condensate. Of course, the occupation number
actually becomes large but remains finite: the divergence comes from the continuous limit,
which is not valid anymore at such low temperatures, where the energy states are well
separated.
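The numerical constant 2.612 appearing in (2.18) is ζ(3/2) = (4/√π) ∫₀^∞ x² dx/(e^{x²} − 1); the sketch below checks it by direct numerical integration (Simpson's rule, standard library only):

```python
import math

def integrand(x):
    # x² / (e^{x²} - 1), with the x → 0 limit equal to 1
    return 1.0 if x == 0.0 else x * x / math.expm1(x * x)

# Simpson's rule on [0, 10]; the integrand decays like e^{-x²},
# so the truncated tail is utterly negligible.
a, b, n = 0.0, 10.0, 100_000
step = (b - a) / n
acc = integrand(a) + integrand(b)
acc += 4 * sum(integrand(a + (2 * i - 1) * step) for i in range(1, n // 2 + 1))
acc += 2 * sum(integrand(a + 2 * i * step) for i in range(1, n // 2))
integral = acc * step / 3

print(4 / math.sqrt(math.pi) * integral)   # ≈ 2.612
```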
We now treat the case of an ideal gas of fermions at zero temperature, which is much simpler
than the case we just treated. Indeed, for fermions the occupation number can only be
either 0 or 1, and the mean number of particles expressed in the continuous limit gives:

N̄ = (4πgV0/h³) ∫_0^{+∞} n_i^{FD} p² dp   (2.19)
where the degree of degeneracy for a fermion of spin s is g = 2s + 1 (g = 2 for spin 1/2).
When T = 0, the particles occupy only the lowest energy levels, while all the others are
unoccupied. This defines the Fermi momentum pF, above which all occupation numbers
vanish, and the mean number of particles is simply:

N̄ = (4πgV0/h³) ∫_0^{pF} p² dp = (4πgV0/(3h³)) pF³   (2.20)
This means that, even at zero temperature, the particles in an ideal gas of fermions have
a non-vanishing momentum and energy. The Fermi momentum, Fermi energy and Fermi
temperature are obtained from the previous result:

pF = h (3N̄/(4πgV0))^{1/3}, εF = (h²/(2m)) (3N̄/(4πgV0))^{2/3} and kB TF = εF   (2.21)
For any temperature well below the Fermi temperature, the gas of fermions behaves
approximately as if it were at T = 0.
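As an order-of-magnitude illustration (a sketch; the conduction-electron density of copper, n ≈ 8.5 × 10^28 m⁻³, is an assumed textbook value, not from these notes), the Fermi energy of electrons in a metal is a few eV, so TF ≈ 10^4–10^5 K and metals at room temperature are deep in the T ≪ TF regime:

```python
import math

h = 6.626e-34     # Planck constant (J s)
m_e = 9.109e-31   # electron mass (kg)
kB = 1.381e-23    # Boltzmann constant (J/K)
eV = 1.602e-19    # J per eV

g = 2             # spin-1/2 electrons
n = 8.5e28        # assumed electron density of copper, N̄/V0 (m⁻³)

p_F = h * (3 * n / (4 * math.pi * g)) ** (1 / 3)   # eq. (2.21)
eps_F = p_F ** 2 / (2 * m_e)

print(eps_F / eV)   # ≈ 7 eV
print(eps_F / kB)   # Fermi temperature ≈ 8 × 10^4 K
```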
The black-body
We now consider the peculiar case of an ideal gas of photons, that is, of massless bosons.
Because photons can be emitted and absorbed by atoms and molecules, their number
cannot be imposed by a reservoir, and the chemical potential has to vanish, i.e. µ = 0.
Ē = (8πV0 c/h³) ∫_0^{+∞} p³ dp/(e^{βpc} − 1) = (8πhV0/c³) ∫_0^{+∞} ν³ dν/(e^{βhν} − 1)   (2.24)
where we have used the relation ν = pc/h to express the energy as a function of the
frequency instead of the momentum. The energy density per unit of volume u (ν) trans-
ported by photons with frequency included in [ν, ν + dν] is therefore:
(Black-body spectrum)   u(ν) = (8πh/c³) ν³/(e^{βhν} − 1)   (2.25)
This is the famous Planck law that describes the spectrum of a black-body. By performing
a last change of variable to the dimensionless variable x = βhν, we find the total
energy density:

Ē/V0 = [(kB T)⁴/(π²(ℏc)³)] ∫_0^{+∞} x³ dx/(e^x − 1) = (π²/15) (kB T)⁴/(ℏc)³   (2.26)
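The value of the integral in (2.26) can be checked with the series expansion 1/(eˣ − 1) = Σ_{k≥1} e^{−kx} and ∫₀^∞ x³ e^{−kx} dx = 6/k⁴, which gives 6 ζ(4) = π⁴/15 (a small Python sketch):

```python
import math

# ∫₀^∞ x³ dx / (eˣ - 1) = Σ_{k≥1} 6/k⁴ = 6 ζ(4) = π⁴/15
series = sum(6.0 / k**4 for k in range(1, 10_000))

print(series)             # ≈ 6.4939
print(math.pi**4 / 15)    # ≈ 6.4939
```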
Bibliography
[1] Anton Akhmerov and Toeno van der Sar. Einstein model. Open Solid State Notes.
Last accessed: 2024-12-22.