
Lectures on statistical physics

Clément Leloup

December 22, 2024



These lecture notes have been written as support material for the physics classes in the first year of master's at the University of Science and Technology of Hanoi. There is a tremendous literature on statistical physics available to complement this introductory course. Here are a few references to start with:

– An introduction to Thermal Physics - Daniel V. Schroeder.

– Statistical mechanics - Franz Schwabl.

– Fundamentals of statistical and thermal physics - Frederick Reif.

These are all standard internationally recognized textbooks on the topic. The first is
more introductory and reviews the fundamental aspects of the question as well as the
main results from macroscopic thermodynamics before diving into the topic of statistical
mechanics. The other two are more advanced and, although they are self-contained and include all the material presented here, they go into more detail and well beyond the content of this course.
Contents

1 Physics 1 3
1.1 Preliminary considerations . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.1 From the microscopic to the macroscopic description . . . . . . . 3
1.1.2 Statistical ensembles . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.3 Ergodic hypothesis . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.4 The quantum description . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Microcanonical ensemble . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.1 The microcanonical distribution . . . . . . . . . . . . . . . . . . . 6
1.2.2 Statistical entropy . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.3 Sub-systems at equilibrium . . . . . . . . . . . . . . . . . . . . . . 8
1.2.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3 Canonical ensemble . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.3.1 General description of the canonical system . . . . . . . . . . . . 14
1.3.2 The thermostat and canonical distribution . . . . . . . . . . . . . 14
1.3.3 The partition function . . . . . . . . . . . . . . . . . . . . . . . . 15
1.3.4 The thermodynamic potential . . . . . . . . . . . . . . . . . . . . 17
1.3.5 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2 Physics 2 20
2.1 Grand canonical ensemble . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.1.1 The grand canonical distribution . . . . . . . . . . . . . . . . . . 20
2.1.2 The grand potential . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.1.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

1 Physics 1

1.1 Preliminary considerations


1.1.1 From the microscopic to the macroscopic description
In standard classical and quantum mechanics, the dynamics of a system of N particles is described as a trajectory in phase space, (qi(t), pi(t)) for i ∈ 1...N, constructed from the positions and momenta of the particles. However, in typical systems studied in the context of statistical mechanics, the number of particles is of the order of the Avogadro number NA ∼ 10^23, and it is completely impossible to have a precise description of all of the 6N degrees of freedom. The situation is even worse if the system is to be described with quantum mechanics, as it then becomes necessary to evolve the continuous wave-function on the 3N-dimensional configuration space built from those degrees of freedom.
The computational difficulty of solving the equations of evolution of each of those degrees
of freedom is one aspect that makes this task hard although potentially manageable, but
what really makes it impossible is that the initial state of all of these degrees of freedom
needs to be precisely measured simultaneously at some time t0 .
As the microscopic state of the system (micro-state) is impossible to describe in its entirety, we have to lower our standards: we change scale and describe only the macroscopic state of the system (macro-state) via a collection of macroscopic quantities whose properties characterize the large-scale evolution of the system. The evolution of these macroscopic quantities is determined by the laws of thermodynamics, which were found from purely macroscopic considerations in the first half of the 19th century. The theory of thermodynamics tells us that a macroscopic system can be described by a very small number of macroscopic quantities which have the very interesting property of being approximately constant once the system has reached its equilibrium, such as:

– its energy written as U in thermodynamics but that we will write as E in statistical


physics,

– its number of particles N ,

– its volume V ,

– its pressure P ,

– its temperature T ,

and that some of these quantities may be related by an equation of state, e.g. the well-known equation of state of the ideal gas P V = N kB T with kB the Boltzmann constant,

further reducing the number of required macroscopic quantities. In principle, one can also use the momentum and angular momentum of the system, which can also be considered constant at equilibrium, but since there is always a change of reference frame that makes these vanish, we will consider the system to be globally at rest in the remainder of these notes, even though the microscopic particles are moving.
Although the laws of thermodynamics that rule the evolution of macroscopic systems were established without any consideration for their microscopic constituents, the goal of statistical physics, which started in the second half of the 19th century and whose development is still active to this day, is to relate the thermodynamic macroscopic quantities to the microscopic dynamics using a statistical description of the latter.

1.1.2 Statistical ensembles


Because we decided to discard a lot of information regarding the system, focusing on the
macroscopic quantities to describe it, a potentially large number of microstates correspond
to the same macrostate defined by values of (E, N, V, . . .). All these microstates constitute
the so-called Gibbs ensemble associated to the macrostate, and a large part of the following will be dedicated to the study of particular classes of Gibbs ensembles that are applicable to various realistic situations.
The statistical physics approach is to assign a probability to each microstate in the Gibbs ensemble, which quantifies how likely the macrostate is to be the macroscopic manifestation of that microstate. At time t, this can be expressed through the probability of finding each particle i in a small volume ∆qi ∆pi around (qi, pi) in phase space. Taking the infinitesimal limit ∆qi → dqi and ∆pi → dpi, we can introduce a probability density to express the probability P of finding the system in a given configuration:

P(..., qi ∈ [qi, qi + dqi], ..., pi ∈ [pi, pi + dpi], ...) = ρ(q1, ..., qN, p1, ..., pN) dq dp (1.1)

where we have defined dq = dq1 ... dqN and dp = dp1 ... dpN. Equivalently, in quantum statistical physics we can define the probability of the system to be in a given state of the state space.
From these associated probabilities in the Gibbs ensemble, macroscopic quantities f can be computed as statistical averages:

f̄ = ∫ f(q, p) ρ(q, p) dq dp (1.2)

These are ensemble averages, i.e. averages over distinct microstates, and are not in general related to a physical process of averaging. They will be denoted by barred quantities in the remainder of these notes.
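As an illustration, the ensemble average (1.2) can be estimated numerically by sampling micro-states directly from the probability density ρ. This is a minimal sketch, not part of the course material: the Gaussian density and the oscillator-like observable are arbitrary illustrative choices.

```python
import random

def ensemble_average(f, sampler, n_samples=200_000, seed=0):
    """Estimate f-bar = integral of f(q, p) rho(q, p) dq dp by sampling micro-states from rho."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        q, p = sampler(rng)
        total += f(q, p)
    return total / n_samples

# Toy density rho: independent standard Gaussians in q and p (illustrative choice).
gaussian = lambda rng: (rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))

# Observable: energy of a unit-mass, unit-frequency oscillator, f = (p^2 + q^2)/2,
# whose exact ensemble average under this rho is 1.
energy_bar = ensemble_average(lambda q, p: 0.5 * (p**2 + q**2), gaussian)
```

For this simple density the average can be computed analytically, which makes it a useful check of the sampling procedure.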
The probabilistic view used in statistical physics is very different from the one followed in quantum theory. There, the evolution of the probability amplitudes is deterministic and follows the Schrödinger equation; the probabilistic description only intervenes when a physical observable is measured, via a fundamentally probabilistic process. On the other hand, the probabilistic description of statistical physics is only necessary due to our lack of information, and of computational power, regarding the system and its evolution. Were these limitations to be overcome, the statistical description would prove unnecessary.
5

1.1.3 Ergodic hypothesis


In order for the statistical physics approach to compare to the outcome of actual physics experiments, physical measurements need to correspond to this procedure of ensemble averaging over the Gibbs ensemble.
Since the characteristic time of the dynamics at the microscopic scale is much smaller than the typical measurement times of physical experiments, we can approximate measurements as time averages of the same system going from microstate to microstate as time passes. The fundamental assumption of statistical physics is that the trajectory of physical systems in phase space during the measurement is regular enough that the time averages are equal to the ensemble averages over the corresponding Gibbs ensemble. More formally, it is assumed that, for every physical system and macroscopic quantity f, over a sufficiently long time T we have:

(Ergodic hypothesis) ∫ f(q, p) ρ(q, p) dq dp = (1/T) ∫₀ᵀ f(q(t), p(t)) dt (1.3)

Such systems are called ergodic systems, and this constitutes the so-called ergodic hypothesis introduced by Boltzmann in 1871. To this day, there exists no proof that general physical systems are ergodic; such a statement has only been proven for very simple systems. It therefore remains a fundamental hypothesis of the theory and, as such, can be seen as a weak point of the current theory of statistical physics.
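The content of the hypothesis can be illustrated on one of the rare systems where ergodicity is actually provable: the irrational rotation of the circle. The sketch below (the observable is an arbitrary choice) compares a long time average along one orbit to the ensemble average over the uniform invariant measure.

```python
import math

def time_average(f, x0=0.1, alpha=(math.sqrt(5) - 1) / 2, steps=100_000):
    """Time average of f along the orbit of the circle rotation x -> x + alpha (mod 1).

    With alpha irrational this map is ergodic with respect to the
    uniform measure on [0, 1)."""
    x, total = x0, 0.0
    for _ in range(steps):
        total += f(x)
        x = (x + alpha) % 1.0
    return total / steps

# Observable f(x) = cos^2(2 pi x); its ensemble average over the uniform
# invariant measure on [0, 1) is exactly 1/2.
t_avg = time_average(lambda x: math.cos(2 * math.pi * x) ** 2)
```

For this system the time average converges to the ensemble average 1/2 regardless of the initial condition x0, which is precisely the statement of (1.3).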

1.1.4 The quantum description


In many respects, the quantum description of physical systems greatly clarifies the computation of the probabilities needed in statistical physics. Several obscure assumptions that had to be made for the predictions of statistical physics to agree with observations can be justified in the light of quantum mechanics.
In particular, the fact that energy levels are discrete makes the computation of the probability to find a microstate with a given energy much more straightforward than for a continuous spectrum. In fact, this is exactly what first led Planck to introduce his quanta of energy. Other physical observables having a quantized spectrum also considerably simplify and clarify the interpretation of some computations in statistical physics, as will be illustrated for instance when studying the ideal gas.
Another important aspect of quantum mechanics for statistical physics is the indistinguishability of particles: the observed probabilities are left unchanged by the interchange of the variables of identical particles. From this property, it is possible to resolve the Gibbs paradox, according to which the classical entropy (i.e. computed with distinguishable particles) increases when mixing two volumes of the same gas at the same temperature and pressure. The indistinguishability of quantum particles leads to many more consequences, such as the spectacular collective behavior of bosons (particles with integer spin) at low temperature, or the Pauli exclusion principle that forbids fermions (particles with half-integer spin) from being in the same state. All of these have been observed experimentally.
In the quantum description, the physical state is described as a vector in the state space
that we can express as a linear combination of elements of a basis. It is customary to use
a basis of eigenstates |α⟩ of the Hamiltonian H with energy Eα . The microstate |Ψ⟩ is
thus expressed as:

|Ψ⟩ = ∑α cα |α⟩ (1.4)

where the cα are complex numbers whose squared modulus gives the probability of measuring |Ψ⟩ in the state |α⟩.
On the other hand, the probabilities Pα of quantum statistical physics that the microstate is in the state |α⟩ are of a very different nature. They are real numbers with the property that ∑α Pα = 1, and they characterize the mixing of the physical macroscopic state in terms of pure quantum states |α⟩ of definite energy Eα. The mean value of an observable A in statistical physics is given by:

Ā = ∑α Pα ⟨α | A | α⟩ (1.5)

from the average of A in each state |α⟩ entering the mixture, given by ⟨α | A | α⟩.

1.2 Microcanonical ensemble


1.2.1 The microcanonical distribution
We consider the case of an isolated system, i.e. a system with fixed volume V0 , fixed
number of particles N0 , and an energy that fluctuates in a thin shell E ∈ [E0 , E0 + δE].
The energy is not exactly constant because the system is generally contained in a box
that sets its volume, real or virtual such as a potential well, and the system can exchange
small amounts of energy with the box itself. This can describe for instance a classical gas in a confined physical box where the particles exchange kinetic energy with the walls,
or a quantum state in a square potential well exchanging energy with the source of
the potential. Even in the absence of a box to contain the system, e.g. for the whole
Universe itself, quantum fluctuations make it impossible to have a rigorously constant
energy. The microstates characterized by (E ∈ [E0 , E0 + δE] , N0 , V0 ) compose the so-
called microcanonical ensemble.
We define Ω(E0, δE, N0, V0) as the number of quantum states of the system with energy lying in the energy shell [E0, E0 + δE]. Assuming that δE ≪ E0, which is a very good
assumption in general, and that δE is large compared to the spacing between energy
levels of the system, which breaks down in some known cases, we can assume that Ω is a
continuous function and that:

Ω (E0 , δE, N0 , V0 ) = ω (E0 , N0 , V0 ) δE (1.6)

with ω the density of states per unit of energy.


We will start our statistical study of the microcanonical ensemble from the fundamental hypothesis that, at equilibrium, the probability of each state in the mixture follows the microcanonical distribution:

(Microcanonical distribution) Pα = 1 / Ω(E0, δE, N0, V0) (1.7)

As we can see, the microcanonical distribution assumes equal probabilities for all accessible states. This microcanonical distribution can be derived from more fundamental principles, but so far this always requires the introduction of an assumption that formalizes some notion of “maximal disorder” which cannot be further justified. It is therefore a matter of choice to build our argument on the microcanonical distribution instead. Note that this expression for the microcanonical distribution is only valid at equilibrium, so our choice limits our capabilities to the treatment of systems at equilibrium only; the study of out-of-equilibrium evolution is beyond our scope.

1.2.2 Statistical entropy


From the probabilities defined on the Gibbs ensemble, we introduce the statistical entropy 𝒮, defined by:

(Statistical entropy) 𝒮 ≡ − ∑α Pα ln Pα (1.8)

This definition is also valid out of equilibrium, but it takes a very simple form at equilibrium, when the probabilities follow the microcanonical distribution:

𝒮eq = − ∑α (1/Ω) ln(1/Ω) = ln Ω (1.9)

Since the |α⟩ states span all the Ω accessible states, the number of terms in the sum exactly compensates the denominator. We can now introduce the historical notion of entropy in statistical physics, introduced by Boltzmann and therefore called the Boltzmann entropy, S = kB 𝒮 with kB the Boltzmann constant. We see that at equilibrium, for the microcanonical ensemble, it is given by:

(Boltzmann entropy) S = kB ln Ω (1.10)

This expression for the entropy renders manifest some of its important properties. First of all, assume that the complete system is made of two sub-systems A and B that interact weakly with each other, in the sense that the micro-state of one sub-system has no sensible impact on the other. Then we have:

Ω ≃ ΩA × ΩB ⇒ S ≃ SA + SB (1.11)

In other words, the entropy is an extensive quantity. The deviation from this equality is extremely small in practice, given the enormous number of possible micro-states for any realistic macro-state. This result implies that the entropy is proportional to the size of the system, S(E, N, V) = N s(E/N, V/N) with s the entropy per particle.
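The extensivity of S = kB ln Ω can be checked numerically on a toy model, for instance N two-level spins with exactly half of them up (an illustrative choice, in units where kB = 1):

```python
import math

def boltzmann_entropy(n_spins):
    """S/kB = ln Omega for n_spins two-level spins with exactly half of them up."""
    omega = math.comb(n_spins, n_spins // 2)   # number of micro-states Omega
    return math.log(omega)

s_small = boltzmann_entropy(10_000)
s_large = boltzmann_entropy(20_000)

# Extensivity: doubling the size of the system (almost) doubles the entropy;
# the deviation is only a logarithmic correction, tiny compared to S itself.
ratio = s_large / (2 * s_small)
```

The ratio differs from 1 only by terms of order ln N / N, which illustrates why the deviation from strict additivity is negligible for macroscopic systems.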
Finally, the most important property of the entropy is that it always increases and is maximal at equilibrium. This can be demonstrated from assumptions more fundamental than the microcanonical distribution, but here we will only try to give a qualitative justification of why this is the case. Assume that the macroscopic system can be in two distinct macro-states X and Y, with SX > SY. Because the numbers of micro-states corresponding to these macro-states are exponentials of the entropies, they are vastly different between the two states. Since each micro-state is equiprobable, it is far more probable to find the macro-state in the state X than in Y, and the macro-state

will change towards its most probable state. At equilibrium, the entropy does not increase
anymore, so it has reached a maximum.
From these properties, which characterize or even define the thermodynamic entropy, we can identify the Boltzmann entropy with the thermodynamic entropy. In addition to giving a systematic way of computing the entropy of a system, this identification leads to a new interpretation
of the physical meaning of entropy. Although it was understood as a measure of the
microscopic disorder of a system, it now corresponds more specifically to the amount of
missing information on a system. If the entropy vanishes, S = 0, it means that only
one micro-state corresponds to our description of the system, in other words we know
all there is to know about it. The bigger the entropy, the more information we lack to
describe the system. Therefore, from this description, the entropy becomes a subjective
quantity that depends on the amount of information possessed by a specific observer, and
two different observers can measure different amounts of entropy for the same system.

1.2.3 Sub-systems at equilibrium


Temperature
One straightforward application of this definition of entropy is the study of the equilibrium between two subsystems A and B of an isolated system with macro-state (E, N, V), the subsystems being defined by their respective macro-states (EA, NA, VA) and (EB, NB, VB). For our purpose, we will first assume that the macro-state is characterized by its energy E. If the two sub-systems are isolated from each other, the number of micro-states corresponding to E can be separated into the individual contributions of each subsystem:
Ω (E = EA + EB ) = ΩA (EA ) × ΩB (EB ) ⇒ S = SA + SB (1.12)
We now assume that the two subsystems can exchange energy. However, since the total system is isolated, its total energy E is fixed, so we have the relation EB = E − EA, which, using the additivity of entropy, leads to:

S(EA) = SA(EA) + SB(E − EA) (1.13)
The equilibrium is characterized as the maximum of the entropy S, so at equilibrium we
have:
∂S/∂EA = ∂SA/∂EA − ∂SB/∂EB = 0 (1.14)
Thus, when reaching equilibrium, the quantity ∂Si /∂Ei will tend to homogenize between
the subsystems, and eventually reach a common value that can therefore be defined for the
whole system itself. It corresponds to the quantity that becomes homogeneous when the
subsystems can exchange energy, so is intuitively related to temperature. We therefore
define:

(Thermodynamic temperature) β = 1/(kB T) = (1/kB) ∂S/∂E (1.15)

In other words, when the two sub-systems are in interaction, an irreversible exchange of energy in the form of heat takes place until they reach equilibrium, where their temperature is homogenized. In the process, the sub-system with the highest temperature loses its excess energy to the other sub-system. This line of reasoning remains valid when considering a large number of subsystems in interaction, whose temperatures will homogenize.
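This equilibration can be made concrete on a toy model of the same kind as the chain of oscillators studied later in the examples: two collections of oscillators sharing a fixed total number of energy quanta (the sizes below are arbitrary). The entropy of the composite system is maximal exactly where ∂SA/∂EA = ∂SB/∂EB.

```python
import math

def log_omega(n_osc, q):
    """ln Omega for q quanta distributed among n_osc oscillators (stars and bars)."""
    return math.log(math.comb(n_osc + q - 1, q))

NA, NB, Q = 300, 200, 500   # two sub-systems sharing Q quanta in total

# Scan every possible energy split and keep the one of maximal total entropy.
q_star = max(range(Q + 1),
             key=lambda q: log_omega(NA, q) + log_omega(NB, Q - q))

# The maximum sits where the quanta per oscillator (a proxy for temperature)
# is the same in both sub-systems: q*/NA = (Q - q*)/NB, i.e. q* = 300 here.
```

The sharpness of this maximum grows with system size, which is why the most probable macro-state is, in practice, the only one ever observed.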

Pressure and chemical potential


When the number of particles N and volume V of the sub-systems can change, the
analogous relation N = NA + NB and V = VA + VB leads to the same line of reasoning.
The entropy at equilibrium has to satisfy the three relations:
∂SA/∂EA = ∂SB/∂EB , ∂SA/∂NA = ∂SB/∂NB , ∂SA/∂VA = ∂SB/∂VB (1.16)
Similarly, we introduce the chemical potential µ and the pressure P, which tend to homogenize towards a constant value as the global system reaches equilibrium, and which are defined as:

(Chemical potential) −β µ = (1/kB) ∂S/∂N and (Pressure) β P = (1/kB) ∂S/∂V (1.17)

From these three definitions, we recover the formula that relates the variation of entropy
and the variation of E, N and V , familiar from macroscopic thermodynamics:

dS = kB β (dE + P dV − µ dN ) ⇒ dE = T dS − P dV + µ dN (1.18)

1.2.4 Examples
We end this section with practical examples of how counting the micro-states corresponding to a given macro-state informs us about the state at equilibrium.

The ideal gas


We study the case of a collection of rigid spheres in a cubic box of volume V0 that interact with one another, as well as with the walls of the box, only through elastic collisions (i.e.
without loss of energy). We are interested in determining the number of micro-states
of the gas that correspond to a macro-state of energy E0 , number of particles N0 and
volume V0. Since the particles do not interact outside of collisions, the energy is entirely kinetic:

E = (1/2m) ∑i pi² (1.19)
where the sum runs over the N0 particles. The micro-state is determined by the position and momentum of each of the N0 particles, and we thus have to determine the number of ways of placing N0 moving particles in a volume V0 such that the total energy E is in [E0, E0 + δE]. Because each of the particles can have any of the available positions
Ωx (N0 , V0 ) and any of the allowed momenta Ωp (E0 , δE, N0 ) independently, the total
number of micro-states Ω (E0 , δE, N0 , V0 ) can be factorized as:

Ω (E0 , δE, N0 , V0 ) = Ωx (N0 , V0 ) × Ωp (E0 , δE, N0 ) ⇒ S = Sx + Sp (1.20)

To compute the number of ways to arrange N0 particles in a volume V0, we start by coarse-graining the volume into small cubes of side ∆x. Ωx then corresponds to the number of ways of independently placing N0 particles in V0/(∆x)³ cubes:

Ωx(N0, V0) = (V0/(∆x)³)^N0 ⇒ Sx = kB N0 (ln V0 − 3 ln ∆x) (1.21)

Turning now to the computation of Ωp(E0, δE, N0), we introduce the cumulative function Op(E, N0) that gives the number of ways to arrange the N0 momenta such that the energy is lower than E:

Ωp(E0, δE, N0) = Op(E0 + δE, N0) − Op(E0, N0) ≃ (∂Op/∂E)(E0) δE (1.22)
Op corresponds to the size of the region of momentum space satisfying the condition:

∑i (p²x,i + p²y,i + p²z,i) ≤ 2mE (1.23)

which is the volume of a 3N0 -dimensional sphere of radius 2mE. Using the same
discretization scheme as for the number of position by dividing the hyper-sphere in small
3N0 -dimensional hyper-cubes of side ∆p, the number of ways to arrange the velocities in
the hyper-sphere is:
Op(E, N0) = (π^(3N0/2)/(3N0/2)!) (2mE/(∆p)²)^(3N0/2) ≃ (1/√(3πN0)) (4πemE/(3N0(∆p)²))^(3N0/2) (1.24)
where we used the Stirling approximation to get the second expression, which is a very good approximation as N0 is typically extremely large. From this formula, we can determine the momentum contribution to the entropy:

Sp = kB [ln (∂Op/∂E)(E0) + ln δE] = kB [ln Op + ln(3N0/(2E0)) + ln δE] (1.25)
Because the number of particles N0 is very large, ln Op scales like N0 while the other terms grow much more slowly, so the first term largely dominates and the momentum contribution to the entropy becomes:

Sp ≃ kB N0 [(3/2) ln(4πE0) − (3/2) ln(3N0(∆p)²/m) + 3/2] (1.26)
From equations (1.21) and (1.26), we can finally express the entropy of our gas model. Before we do just that, we need to address a subtle question. So far, we assumed that we can label and follow each particle, such that two micro-states that differ only by the exchange of two particles are different. In other words, the particles of our system are distinguishable. However, since our particles are all the same, it makes more sense to consider them as indistinguishable, so that the actual number of micro-states corresponding to the macro-state {E0, δE, N0, V0} must be divided by a factor N0!, which, again using the Stirling approximation, leads to the total entropy of our simple gas model:

S ≃ kB N0 [(3/2) ln(E0/N0) + ln(V0/N0) − (3/2) ln(3(∆x∆p)²/(4πm)) + 5/2] (1.27)
From this expression of the entropy, we can determine the temperature, chemical potential and pressure at equilibrium:

T = (2/3) E0/(kB N0) (1.28)

µ = −(E0/N0) [ln(4πm E0/(3(∆x∆p)² N0)) + (2/3) ln(V0/N0)] (1.29)

P = (2/3) E0/V0 = N0 kB T/V0 (1.30)

where the last equation gives the famous equation of state of the ideal gas.
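Equations (1.28) and (1.30) can be checked numerically by differentiating the entropy (1.27) with finite differences. The sketch below uses arbitrary numbers and units chosen so that kB = m = ∆x∆p = 1.

```python
import math

def entropy(E, N, V, m=1.0, dxdp=1.0, kB=1.0):
    """Ideal-gas entropy, eq. (1.27), in units where kB = m = dx*dp = 1."""
    return kB * N * (1.5 * math.log(E / N) + math.log(V / N)
                     - 1.5 * math.log(3 * dxdp**2 / (4 * math.pi * m)) + 2.5)

E0, N0, V0, d = 50.0, 10.0, 20.0, 1e-5   # arbitrary macro-state; d = step size

# Central finite differences approximate the partial derivatives of S.
dS_dE = (entropy(E0 + d, N0, V0) - entropy(E0 - d, N0, V0)) / (2 * d)
dS_dV = (entropy(E0, N0, V0 + d) - entropy(E0, N0, V0 - d)) / (2 * d)

T = 1.0 / dS_dE   # 1/T = dS/dE  ->  T = (2/3) E0/N0, eq. (1.28)
P = T * dS_dV     # P/T = dS/dV  ->  P V0 = N0 T, eq. (1.30)
```

The numerical derivatives reproduce T = (2/3) E0/N0 and the ideal-gas equation of state P V0 = N0 kB T to the accuracy of the finite-difference step.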

The quantum treatment is similar. Because the particles do not interact, the Hamiltonian is that of a free particle and its eigenstates are plane waves:

Ψ(x, y, z) = e^((i/ℏ)(px x + py y + pz z)) (1.31)
If we model the box as a cubic infinite potential well of side L = V0^(1/3), we know from quantum mechanics that the momentum is quantized and follows px,y,z = (πℏ/L) nx,y,z with nx,y,z three integers. It is however often more convenient to use periodic boundary conditions, known as Born-von Karman (BVK) conditions, which amounts to neglecting surface effects. These are formally expressed as:

Ψ (x, y, z) = Ψ (x + L, y, z) = Ψ (x, y + L, z) = Ψ (x, y, z + L) (1.32)

and have the effect of removing the odd integers, which reduces the allowed momentum values to:

px = (h/L) nx , py = (h/L) ny , and pz = (h/L) nz (1.33)
The correspondence principle implies that the energy is given, as in the classical case, by:

E = (1/2m) ∑i pi² = (h²/(2m V0^(2/3))) ∑i (n²x,i + n²y,i + n²z,i) (1.34)

Using the cumulative function as in the classical case, we relate the number of states to the number of points with integer coordinates in the 3N0-dimensional sphere of radius V0^(1/3) √(2mE0)/h. Given the orders of magnitude of V0^(1/3) E0^(1/2) as compared to h, this radius is enormous, and we can consider the number of points with integer coordinates to be very close to the actual volume of the sphere. The calculation being the same as in the classical case, we only state the result:

Ω(E0, δE, N0, V0) ≃ (1/N0!) V0^N0 (π^(3N0/2)/(3N0/2)!) ((2mE0)^(3N0/2)/h^(3N0)) (1.35)
Using the Stirling approximation for the factorials and taking the logarithm gives the entropy:

(Ideal gas entropy) S ≃ kB N0 [(3/2) ln(E0/N0) + ln(V0/N0) − (3/2) ln(3h²/(4πm)) + 5/2] (1.36)

We see that it is the same result as for the classical gas if we take ∆x∆p = h, which gives
the correct order of magnitude but not quite the result we obtain in quantum mechanics
from the Heisenberg uncertainty ∆x∆p ∼ h/4π.

Frenkel defects
We consider an ideal crystal made of N0 atoms regularly arranged on the N0 sites of the
crystal lattice. We say that there is a Frenkel defect in the crystal lattice when a vertex
is unoccupied and the corresponding atom is displaced to an interstice. We assume that

there are N0′ possible interstice positions in the lattice, and that the energy needed to create a Frenkel defect is constant and equal to ε > 0. Let us estimate the entropy of a crystal with n ≪ N0′, N0 Frenkel defects.
Since there are n defects and each defect requires an energy ε, the total energy E of the system, taking the origin of energy at the state where all vertices of the lattice are occupied, is given by:
E = nε (1.37)
The estimation of the number of micro-states corresponding to this situation can be decomposed into two problems:

1. The number of ways of arranging n indistinguishable particles in the N0′ interstices, given by the binomial coefficient (N0′ choose n).

2. The number of ways of arranging the N0 − n remaining particles in the N0 vertices of the lattice, given by the binomial coefficient (N0 choose N0 − n).
Combining these, the total number of states is the product of the two binomial coefficients,
and the entropy is given by:

S = kB [ln (N0′ !) + ln (N0 !) − ln ((N0′ − n)!) − ln ((N0 − n)!) − 2ln (n!)] (1.38)

Using the Stirling formula for the logarithms of factorials, and making the energy of the system appear, we find:

S ≃ kB [N0′ ln N0′ + N0 ln N0 − (N0′ − E/ε) ln(N0′ − E/ε) − (N0 − E/ε) ln(N0 − E/ε) − 2 (E/ε) ln(E/ε)] (1.39)
The temperature of the crystal with n Frenkel defects is obtained from this expression of the entropy as:

T = (∂S/∂E)^(−1) = (ε/kB) [ln(N0′ε/E − 1) + ln(N0 ε/E − 1)]^(−1) ≃ (ε/kB) [ln(N0 N0′ ε²/E²)]^(−1) (1.40)

Re-introducing the number of defects n = E/ε, we find a relation between n and the
temperature of the crystal:
n ≃ √(N0 N0′) exp(−ε/(2 kB T)) (1.41)
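A quick numerical sanity check, in units where kB = ε = 1 and with an arbitrary lattice size, confirms that (1.40) and (1.41) are inverses of each other:

```python
import math

kB = eps = 1.0                 # units where kB = epsilon = 1
N0 = N0p = 10**6               # lattice sites and interstice positions (arbitrary)

def n_defects(T):
    """Equilibrium number of Frenkel defects, eq. (1.41)."""
    return math.sqrt(N0 * N0p) * math.exp(-eps / (2 * kB * T))

def temperature(n):
    """Temperature of a crystal with n defects, eq. (1.40) with E = n*eps."""
    return eps / (kB * math.log(N0 * N0p * eps**2 / (n * eps)**2))

n = n_defects(0.05)            # roughly 45 defects out of a million sites
T_back = temperature(n)        # inverting eq. (1.41) recovers T = 0.05
```

The tiny value of n/N0 at low temperature illustrates why the assumption n ≪ N0, N0′ made above is self-consistent.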

Chain of harmonic oscillators


We consider a chain of N0 harmonic oscillators in one dimension. These are assumed to be distinguishable, with the same characteristic frequency ω, and weakly coupled, such that we can neglect the coupling between the oscillators. Recall from quantum mechanics that the state of a quantum harmonic oscillator is described by the eigenvalue n of the number operator N = a†a, so the state of our chain of oscillators is fully determined by the values of the ni, and we define the total number of quanta in the chain as:

Q ≡ ∑_(i=1)^(N0) ni (1.42)

Because the oscillators are weakly coupled, the Hamiltonian of the system can be decomposed as the sum of the Hamiltonians of the individual oscillators:

H = ∑_(i=1)^(N0) Hi = ∑_(i=1)^(N0) (Ni + 1/2) ℏω ⇒ E = (Q + N0/2) ℏω (1.43)

The macro-state is therefore determined by Q, and we have to count the number of micro-states {ni} such that ∑i ni = Q. This corresponds to the number of ways of distributing Q quanta among N0 oscillators or, more pragmatically, of putting Q balls in N0 distinguishable boxes. This result is well known from combinatorics, and we find:

Ω(E, N0) = (N0 + Q − 1 choose Q) = (N0/2 + E/ℏω − 1)! / [(E/ℏω − N0/2)! (N0 − 1)!] (1.44)
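The stars-and-bars count in (1.44) can be verified by brute-force enumeration for a small chain; the values N0 = 4 and Q = 6 below are arbitrary.

```python
import math
from itertools import product

def count_microstates_brute(n_osc, q_total):
    """Enumerate all occupation tuples (n_1, ..., n_N) summing to q_total."""
    return sum(1 for ns in product(range(q_total + 1), repeat=n_osc)
               if sum(ns) == q_total)

N0, Q = 4, 6
brute = count_microstates_brute(N0, Q)      # scans 7^4 = 2401 candidate tuples
formula = math.comb(N0 + Q - 1, Q)          # stars-and-bars count, eq. (1.44)
```

Exhaustive enumeration is of course hopeless for macroscopic N0 and Q, which is precisely why the combinatorial formula matters.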

Assuming that N0, Q ≫ 1, we use the Stirling formula and find the entropy of the system:

S ≃ kB [(N0 + Q) ln(N0 + Q) − Q ln Q − N0 ln N0]
  ≃ kB [(N0/2 + E/ℏω) ln(N0/2 + E/ℏω) − (E/ℏω − N0/2) ln(E/ℏω − N0/2) − N0 ln N0] (1.45)
From this expression of the entropy, we get the temperature of the chain of oscillators:
T ≃ (ℏω/kB) [ln(1 + N0/(E/ℏω − N0/2))]^(−1) (1.46)

The probability for a given oscillator i to be in a state with ni = n quanta can be expressed simply as:

Pi(n) = (number of states such that ni = n) / Ω(E, N0)
      = Ω(E − (n + 1/2) ℏω, N0 − 1) / Ω(E, N0) = (N0 − 1) Q! (N0 + Q − n − 2)! / [(Q − n)! (N0 + Q − 1)!] (1.47)

Assuming that N0, Q ≫ n and using the Stirling formula, we find that:

Pi(n) ≃ (N0/Q) (Q/(N0 + Q))^(n+1) (1.48)

From the expression for the temperature, we find that (N₀ + Q)/Q = exp(ℏω/k_B T),
which gives the final expression for the probability P_i(n) at temperature T:

    P_i(n) = (1 − e^(−ℏω/k_B T)) e^(−nℏω/k_B T)    (1.49)

This allows us to determine the mean energy ε̄_i of a given oscillator i:

    ε̄_i = Σ_{n=0}^{+∞} (n + 1/2) ℏω P_i(n)    (1.50)
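The sum (1.50) can be evaluated numerically from the distribution (1.49). A short sketch in units where ℏω = 1 (the temperature below is an arbitrary example value) recovers the closed form ε̄ = ℏω/2 + ℏω/(e^(βℏω) − 1) that follows from the geometric series:

```python
import math

hw = 1.0     # energy quantum, in units where ħω = 1
kT = 0.7     # kB T, an arbitrary example temperature
x = hw / kT  # βħω

def P(n):
    """Probability (1.49) for an oscillator to hold n quanta."""
    return (1 - math.exp(-x)) * math.exp(-n * x)

# Numerical evaluation of the sum (1.50), truncated at large n
E_mean = sum((n + 0.5) * hw * P(n) for n in range(200))

# Closed form that follows from summing the geometric series
E_closed = hw / 2 + hw / (math.exp(x) - 1)
assert abs(E_mean - E_closed) < 1e-9
assert abs(sum(P(n) for n in range(200)) - 1) < 1e-9  # normalization check
```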

1.3 Canonical ensemble


1.3.1 General description of the canonical system
We now turn to the study of a system Σ that is closed but not isolated: it is in
contact with a reservoir R with which it exchanges energy. The reservoir is supposed to
be much larger than the system Σ that focuses our interest, so it is assumed to behave as
a thermostat whose properties will be made explicit in the following. The union of the two
systems Σ ∪ R is itself assumed to be isolated and its total energy E is supposed to be
conserved, so it can be studied with the tools of the microcanonical ensemble. In particular,
although they have vastly different energies, the system and the reservoir, assumed to be
at equilibrium, share the same temperature T (and parameter β) defined by (1.15).
We assume that the coupling between the system and the reservoir is weak, i.e. that the
energy exchanged is small compared to the energy of both the system and the reservoir.
That means in particular that we can use eigenstates of the Hamiltonian of the system
|σ⟩ and of the reservoir |ρ⟩, which are independent states, to build a basis of the state
space of the full system as |σ, ρ⟩. Therefore, the probability that a macro-state is in the
micro-state |σ, ρ⟩ can be factorized as:

Pσρ = Pσ · Pρ (1.51)

1.3.2 The thermostat and canonical distribution


The first property of the thermostat is that its interactions with the system do not spoil
its internal equilibrium. In other words, the thermostat is big enough compared to the
system that it is not affected by its interaction with the latter. Formally, this is enforced
by using the microcanonical distribution with energy Eρ = E − Eσ for the probability of
the micro-state of the reservoir:
    P_ρ = 1 / Ω_ρ(E − E_σ, N_ρ, V_ρ)    (1.52)
The state of the reservoir is affected by the presence of the system Σ only through the
constraint that its energy follows E_ρ = E − E_σ.
In addition to this characteristic, we assume that the presence or absence of the system
and its evolution do not affect the value of the temperature T. Concretely, as expected
from a thermostat, we expect its temperature to be absolutely stable. Quantitatively,
in the absence (Eρ = E) or presence (Eρ = E − Eσ ) of the system, the temperature is
defined by:
    (1/k_B) ∂S_ρ/∂E_ρ (E) = β   and   (1/k_B) ∂S_ρ/∂E_ρ (E − E_σ) = β    (1.53)
By Taylor expanding the entropy of the reservoir around E,

    S_ρ(E − E_σ) = S_ρ(E) − E_σ ∂S_ρ/∂E_ρ (E) + O(E_σ²),    (1.54)

we find that the previous condition means that all terms of order two or higher in E_σ
must strictly vanish, and we are left with:

(Thermostat entropy) Sρ (E − Eσ ) = Sρ (E) − kB βEσ (1.55)



Finally, we use the relation between the thermodynamic entropy S_ρ at equilibrium and
the number of micro-states Ω_ρ to express the probability of finding the thermostat in the
micro-state |ρ⟩:

    P_ρ = exp(βE_σ − S_ρ(E)/k_B)  ⇒  P_σρ = P_σ e^(βE_σ) · e^(−S_ρ(E)/k_B)    (1.56)
Since the global system is isolated, its probability distribution P_σρ must follow the
microcanonical distribution and depend only on its total energy; therefore, the probability
distribution of the system Σ must follow the canonical distribution:

    (Canonical distribution)    P_σ = e^(−βE_σ) / Z   with   Z = Σ_σ e^(−βE_σ)    (1.57)

At low temperature, where β ≫ 1, the probability of a given micro-state decreases as
its energy gets bigger. If the temperature is low enough, the fundamental state has
a probability close to 1 while any other micro-state has a probability that is almost
vanishing.
On the other hand, when the temperature is high, where β ≪ 1, all states whose energy
gap with the fundamental state is small compared to k_B T will be equiprobable.
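These two limits can be illustrated with a minimal sketch for a two-level system (the energy gap and the two temperatures below are arbitrary example values):

```python
import math

def canonical_probs(energies, beta):
    """Canonical distribution (1.57) over a finite list of micro-state energies."""
    weights = [math.exp(-beta * E) for E in energies]
    Z = sum(weights)
    return [w / Z for w in weights]

levels = [0.0, 1.0]  # fundamental state and one excited state, gap ε = 1

p_cold = canonical_probs(levels, beta=50.0)  # kB T ≪ ε: fundamental state dominates
p_hot = canonical_probs(levels, beta=0.01)   # kB T ≫ ε: nearly equiprobable
assert p_cold[0] > 0.999
assert abs(p_hot[0] - 0.5) < 0.01
```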

1.3.3 The partition function


In the definition of the canonical distribution (1.57), we introduced the partition function
Z. Note that the sum runs over the micro-states and not over the energy levels, which can
be different in the presence of degenerate energy levels. The expression of the partition
function can be rewritten to make it invariant under change of basis:
    Z = Σ_σ e^(−βE_σ) = Σ_σ ⟨σ| e^(−βH) |σ⟩ = Tr(e^(−βH))    (1.58)
The significance of the partition function lies, among other things, in its convenience to
compute moments of the energy distribution of the system. We start with the mean value
of the energy of the system:
    Ē = Σ_σ E_σ P_σ = (1/Z) Σ_σ E_σ e^(−βE_σ)
      = −(1/Z) Σ_σ ∂/∂β [e^(−βE_σ)] = −(1/Z) ∂Z/∂β
      = −∂ln Z/∂β    (1.59)
Similarly, we can calculate the statistical uncertainty on the energy as its variance
(ΔE)² = E̅² − Ē², where E̅² denotes the mean of E². We use (1.59) for the second term,
and the first term is obtained from:

    E̅² = Σ_σ E_σ² P_σ = (1/Z) Σ_σ E_σ² e^(−βE_σ)
       = (1/Z) Σ_σ ∂²/∂β² [e^(−βE_σ)] = (1/Z) ∂²Z/∂β²
       = ∂²ln Z/∂β² + (∂ln Z/∂β)²  ⇒  (ΔE)² = ∂²ln Z/∂β²    (1.60)

To summarize, we can calculate the moments of the energy distribution from the derivatives
of the partition function, the first two moments being:

    (Mean)  Ē = −∂ln Z/∂β   and   (Uncertainty)  (ΔE)² = ∂²ln Z/∂β²    (1.61)

Assume that the energy of the macro-state of the system depends on a set of external
parameters Λn . These can be for instance: the volume of the container, the components
of an electric field, etc. The variation of Ē produced by an infinitesimal variation of those
parameters is:
    dĒ = Σ_n (∂Ē/∂Λ_n) dΛ_n = δW    (1.62)

where we assume that the variation of Λ_n generates only a variation of work δW and
does not generate heat, which is intuitive in the canonical ensemble situation where the
thermostat imposes its temperature on the system, and any heat is instantly dissipated.
We introduce the conjugate variables X_n to the parameters Λ_n as:

    δW = Σ_n X_n dΛ_n  ⇒  X_n = ∂Ē/∂Λ_n = Σ_σ P_σ ∂E_σ/∂Λ_n = −(1/β) ∂ln Z/∂Λ_n    (1.63)

The X_n can therefore be considered as generalized forces that produce a variation of
work associated to a change of Λ_n. For instance, for the work δW = −P dV associated
with a change of volume dV, the conjugate variable is the pressure (X_V = −P) and we
find that:

    P = (1/β) ∂ln Z/∂V    (1.64)
This expression is to be compared with equation (1.17) that gives the thermodynamic
pressure in the microcanonical ensemble. We see that in the canonical ensemble, the
partition function Z plays the same role as the number of micro-states Ω in the micro-
canonical ensemble.
Finally, the partition function is convenient because the contribution from independent
degrees of freedom, either from the same system or from different weakly coupled sub-
systems, can be factorized in terms of reduced partition functions. More precisely, we
decompose the state σ into n independent degrees of freedom σ = (σ1 , . . . , σn ). Because
these are independent, the total energy is just the sum of the energies of each degree of
freedom E_σ = E_σ₁ + . . . + E_σₙ. Therefore, the partition function is written as:

    Z = Σ_σ e^(−βE_σ) = Σ_{σ₁} Σ_{σ₂} . . . Σ_{σₙ} exp[−β(E_σ₁ + . . . + E_σₙ)]
      = (Σ_{σ₁} e^(−βE_σ₁)) . . . (Σ_{σₙ} e^(−βE_σₙ)) ≡ Π_{i=1}^{n} (Σ_{σᵢ} e^(−βE_σᵢ))
      = Π_{i=1}^{n} z_i    (1.65)
i=1

where we have defined the reduced partition functions:

    z_i = Σ_{σᵢ} e^(−βE_σᵢ)    (1.66)
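The factorization (1.65) can be checked directly on a small example, here three independent two-level degrees of freedom with arbitrary example energies:

```python
import math
from itertools import product

beta, eps = 0.8, [0.2, 0.5, 1.0]  # three independent two-level degrees of freedom

# Direct sum over all 2^3 micro-states sigma = (s1, s2, s3), with E = sum(s_i * eps_i)
Z_direct = sum(math.exp(-beta * sum(s * e for s, e in zip(sigma, eps)))
               for sigma in product([0, 1], repeat=3))

# Product of reduced partition functions (1.66): z_i = 1 + exp(-beta * eps_i)
Z_factored = math.prod(1 + math.exp(-beta * e) for e in eps)
assert abs(Z_direct - Z_factored) < 1e-9
```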

1.3.4 The thermodynamic potential


The above analogy between Ω in the microcanonical ensemble and Z in the canonical
ensemble suggests that in the system Σ, the entropy S = −k_B Σ_σ P_σ ln P_σ is not maximal
at the equilibrium, and instead a function F ∝ ln Z will be. From the expression of
entropy, we find that:

    S = k_B Σ_σ (e^(−βE_σ)/Z) [ln Z + βE_σ] = k_B ln Z + Ē/T    (1.67)
Therefore, a quantity F ∝ S − Ē/T will be maximum at the equilibrium; by convention
we instead define the Helmholtz free energy:

    (Helmholtz free energy)    F = Ē − T S = −(1/β) ln Z    (1.68)

The free energy will instead be minimum at the equilibrium, and is therefore referred to
as a thermodynamic potential. Similarly to the case of the microcanonical ensemble, F
is the fundamental thermodynamic quantity, that depends on (T, N, V) and from which
the other thermodynamic quantities can be derived at the equilibrium:

    S = −∂F/∂T,   µ = ∂F/∂N   and   P = −∂F/∂V    (1.69)

From these, we recover the fundamental relation from macroscopic thermodynamics:

    dF = −S dT − P dV + µ dN    (1.70)

1.3.5 Examples
Einstein crystal
We model the crystal as a 3-dimensional system of N0 oscillators as illustrated in Figure
1.1.
In the quantum description, each oscillator i follows the Hamiltonian of a harmonic
oscillator:

    H_i = P²/(2m) + (1/2) m ω_i² (R − R₀)²    (1.71)
If the oscillators are coupled, the crystal will behave as a sum of harmonic oscillators
with different frequencies. However, here we neglect the couplings between oscillators,
so ω_i = ω₀, ∀i, and we are left with a system of 3N₀ independent harmonic oscillators.
The energy of each harmonic oscillator is known from quantum mechanics to be
E_i = (n_i + 1/2) ℏω₀, with n_i a positive integer, which leads to the reduced partition
function for oscillator i:
    z_i = Σ_{nᵢ=0}^{+∞} e^(−β(nᵢ+1/2)ℏω₀) = e^(−βℏω₀/2) Σ_{nᵢ=0}^{+∞} e^(−βℏω₀nᵢ)
        = e^(−βℏω₀/2) / (1 − e^(−βℏω₀)) = 1 / (2 sinh(βℏω₀/2))    (1.72)

Therefore, we see that the reduced partition function does not depend on the given
harmonic oscillator i, which is expected from the symmetry of the problem, and the total
partition function is simply:

    Z = Π_{i=1}^{3N₀} z_i = (2 sinh(βℏω₀/2))^(−3N₀)    (1.73)

Figure 1.1: Three-dimensional lattice of coupled springs with two layers in the y direction
and three layers in the x and z directions, illustrated by springs of a different color for
each direction. The actual system continues to infinity following the same structure in
the three directions.

The mean value of the energy is extracted from the partition function by using:

    Ē = −∂ln Z/∂β = (3N₀ℏω₀/2) cosh(βℏω₀/2)/sinh(βℏω₀/2) = (3/2) N₀ℏω₀ + 3N₀ℏω₀/(e^(βℏω₀) − 1)    (1.74)

The entropy of the crystal can also be computed from the partition function and the
mean energy:

    S = k_B [ln Z + βĒ] = −3N₀k_B [ln(2 sinh(βℏω₀/2)) − (1/2)βℏω₀ − βℏω₀/(e^(βℏω₀) − 1)]    (1.75)

Finally, from the partition function, we estimate the heat capacity C_V of the crystal from
its definition as the variation of energy under a variation of temperature at constant
volume:

    C_V = ∂Ē/∂T = −k_B β² ∂Ē/∂β = 3N₀k_B (βℏω₀)² e^(βℏω₀)/(e^(βℏω₀) − 1)² = 3N₀k_B [βℏω₀/(2 sinh(βℏω₀/2))]²    (1.76)

We distinguish two asymptotic regimes in temperature:

– At high temperature (classical regime): T → +∞ so β → 0 and the heat
  capacity tends to a constant value C_V → 3N₀k_B.

– At low temperature (quantum regime): T → 0 so β → +∞ and the hyperbolic
  sine diverges and dominates over the numerator, so C_V → 0. One can also
  compute the derivative of C_V as a function of temperature as T → 0 and show that
  dC_V/dT → 0 as well, so there is a horizontal tangent at the origin.

The Einstein model of solids has the right asymptotic behavior as T → 0 and T ≫
ℏω₀/k_B; however, its predictions do not match experimental observations at intermediate
temperatures. This is expected since we neglected the interactions between oscillators.

Figure 1.2: The heat capacity of diamond compared to the Einstein model of solids.
The figure is taken from [1].

Taking them into account, for instance in the Debye model of solids, allows us to recover
the agreement with experiment. A comparison between the results from the Einstein
model and experimental results for diamond is illustrated in figure 1.2.
Physics 2

2.1 Grand canonical ensemble


The grand canonical ensemble generalizes the canonical ensemble to the case where the
system can exchange particles with the reservoir, in addition to being able to exchange
energy. Therefore, we will follow the same steps in the development and will not justify
everything as the generalization from section 1.3 is straightforward.

2.1.1 The grand canonical distribution


The probability distribution of the global system is split into the probability of the sys-
tem and that of the reservoir since we assume that the energy and number of particles
exchanged are small:
Pσρ = Pσ · Pρ (2.1)
Because the state of the reservoir is not perturbed by the presence (or absence) of the
system, we have the two analogue properties to the case of the canonical ensemble:

– The probability distribution of the reservoir follows that of the microcanonical
  ensemble:

      P_ρ = 1 / Ω(E − E_σ, N − N_σ, V_ρ)    (2.2)

– The Taylor expansion of the entropy of the reservoir stops at the linear order:

(Reservoir entropy) Sρ (E − Eσ , N − Nσ ) = Sρ (E, N ) − kB β (Eσ − µNσ ) (2.3)

Therefore, the probability distribution of the reservoir is found to be:

    P_ρ = exp[β(E_σ − µN_σ) − S_ρ(E, N)/k_B]  ⇒  P_σρ = P_σ e^(β(E_σ−µN_σ)) · e^(−S_ρ(E,N)/k_B)    (2.4)

Finally, we introduce the grand canonical distribution and the grand canonical partition
function Ξ:

    (Grand canonical distribution)    P_σ = e^(−β(E_σ−µN_σ)) / Ξ   with   Ξ = Σ_σ e^(−β(E_σ−µN_σ))    (2.5)


2.1.2 The grand potential


The mean values and higher moments of the energy and number of particles distributions
can also be found from the derivatives of the grand canonical partition function. In
particular, one can verify following the same steps that:

    Ē − µN̄ = −∂ln Ξ/∂β   and   βN̄ = ∂ln Ξ/∂µ    (2.6)

Similarly to the canonical partition function, the grand partition function can be
factorized for independent degrees of freedom by introducing the reduced grand partition
functions:

    Ξ = Π_{i=1}^{n} ξ_i   with   ξ_i = Σ_{σᵢ} e^(−β(E_σᵢ − µN_σᵢ))    (2.7)

Therefore, the grand canonical partition function Ξ plays the same role as the partition
function Z in the canonical ensemble and the number of states Ω in the microcanonical
ensemble. From the definition of the thermodynamic entropy in the grand canonical
ensemble,

    S = −k_B Σ_σ P_σ ln P_σ = k_B ln Ξ + Ē/T − µN̄/T,    (2.8)

we define the grand potential J = −k_B T ln Ξ expressed as:

    (Grand potential)    J = Ē − T S − µN̄    (2.9)

The grand potential J is the fundamental thermodynamic quantity, that depends on
(T, µ, V) and which defines the other quantities at the equilibrium:

    S = −∂J/∂T,   N = −∂J/∂µ   and   P = −∂J/∂V    (2.10)

From these, we recover the fundamental relation from macroscopic thermodynamics:

    dJ = −S dT − P dV − N dµ    (2.11)

2.1.3 Examples
Quantum ideal gases
When collective quantum effects become important, which happens at very low temperature,
we have to describe the system of N₀ particles as a single wave function Ψ(x₁, . . . , x_N₀),
where xᵢ denotes all the degrees of freedom of particle i. In order to account for
experimental results, quantum mechanics needs to be supplemented with an additional
postulate for the multi-particle wave function that we call the symmetrization postulate.
It states that the multi-particle wave function of identical particles must be either:

– Totally symmetric: in this case we say that the particles are bosons and we have
  Ψ(x_π(1), . . . , x_π(N₀)) = Ψ(x₁, . . . , x_N₀) where π is a permutation of the xᵢ,

– Totally anti-symmetric: in this case we say that the particles are fermions and
  we have Ψ(x_π(1), . . . , x_π(N₀)) = sign(π) Ψ(x₁, . . . , x_N₀) where sign(π) gives the sign
  of the permutation (+ if the permutation is even and − if it is odd).
One immediate consequence of the anti-symmetry of the wave function for fermions is
that two particles cannot be in the same quantum state: this is the Pauli exclusion
principle. This property has major consequences and leads to different reduced grand
partition functions, depending on the bosonic or fermionic nature of the particles. If the
particles are bosons, the number of particles nᵢ in a given state i can take all values from
0 to +∞, therefore we have:

    ξᵢᴮᴱ = Σ_{nᵢ=0}^{+∞} e^(−β(εᵢ−µ)nᵢ) = 1 / (1 − e^(−β(εᵢ−µ)))  ⇒  n̄ᵢᴮᴱ = 1 / (e^(β(εᵢ−µ)) − 1)    (2.12)

where we assumed that the energy E_σᵢ of micro-state σᵢ is E_σᵢ = nᵢεᵢ, i.e. the number
of particles in state i times the energy of state i. The superscript BE refers to the
Bose-Einstein statistics followed by bosons.
Alternatively, because the number of particles that can be in state i is restricted to be
either 0 or 1, the reduced grand partition function for fermions is:

    ξᵢᶠᴰ = Σ_{nᵢ=0}^{1} e^(−β(εᵢ−µ)nᵢ) = 1 + e^(−β(εᵢ−µ))  ⇒  n̄ᵢᶠᴰ = 1 / (e^(β(εᵢ−µ)) + 1)    (2.13)

The superscript FD refers to the Fermi-Dirac statistics followed by fermions.


In the classical limit, where exp(−βµ) ≫ 1, the distinction between bosons and fermions
disappears, and we recover the results for the ideal gas as in section 1.2.4.
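The three occupation numbers can be compared numerically; in the sketch below x stands for β(ε − µ), and the classical (Maxwell-Boltzmann) value e^(−x) is recovered when x ≫ 1:

```python
import math

def n_BE(x):
    """Bose-Einstein occupation (2.12), with x = beta * (eps - mu)."""
    return 1 / (math.exp(x) - 1)

def n_FD(x):
    """Fermi-Dirac occupation (2.13)."""
    return 1 / (math.exp(x) + 1)

def n_MB(x):
    """Classical Maxwell-Boltzmann occupation."""
    return math.exp(-x)

x = 12.0  # classical regime: exp(-beta*mu) >> 1 at fixed energy
assert abs(n_BE(x) - n_MB(x)) / n_MB(x) < 1e-4
assert abs(n_FD(x) - n_MB(x)) / n_MB(x) < 1e-4

# Quantum regime: bosons lie above, fermions below the classical curve
assert n_BE(0.5) > n_MB(0.5) > n_FD(0.5)
```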

We start by treating the ideal gas of bosons. The mean number of particles is obtained
from the occupation number in each state i:

    N̄ = g Σᵢ n̄ᵢᴮᴱ = g Σᵢ 1 / (e^(β(εᵢ−µ)) − 1)    (2.14)
with g the degree of degeneracy of each energy eigenstate. Degeneracies are due to the
spin degree of freedom, so for massive bosons g = 2s + 1 with s the spin of the particles.
At high temperature, the gas behaves classically, so we must have exp(−βµ) ≫ 1, which
means that the chemical potential is negative. For bosons, this remains true even at lower
temperatures, because otherwise the occupation number of the fundamental state, which
has zero energy, would be negative, which is impossible. To evaluate the mean number of
particles, we go to the continuous limit and express the energy of the free state i from its
momentum:

    N̄ → (gV₀/h³) ∫ d³p⃗ / (e^(β(p²/2m−µ)) − 1) = (4πgV₀/h³) ∫₀^{+∞} p² dp / (e^(β(p²/2m−µ)) − 1)    (2.15)
We introduce the dimensionless parameter

    x² = β p²/(2m)  ⇔  p = x √(2m k_B T) = x h/(λ_th √π)    (2.16)

where we introduced the thermal de Broglie wavelength λ_th. The previous equation now
reads in terms of the dimensionless variable:

    ρ̄ λ_th³ ≡ (N̄/V₀) λ_th³ = (4g/√π) ∫₀^{+∞} x² dx / (e^(x²) e^(−βµ) − 1)    (2.17)

Therefore, we see that ρ̄λ_th³ diverges when the chemical potential becomes positive,
although it has a definite value when µ = 0. Assuming the density of particles ρ̄ is fixed,
this is achieved when the thermal de Broglie wavelength is at its critical value λ_BE:

    λ_BE³ = (4g/(ρ̄√π)) ∫₀^{+∞} x² dx / (e^(x²) − 1) ≃ 2.612 g/ρ̄    (2.18)

To this thermal de Broglie wavelength corresponds a critical temperature T_BE such that
λ_BE = h/√(2πm k_B T_BE). When the temperature goes below the critical temperature T_BE,
the occupation number becomes virtually infinite and the system enters a new state that
we call a Bose-Einstein condensate. Of course, the occupation number becomes large
but remains finite: the divergence comes from the continuous limit, which is no longer
valid at such low temperatures since the energy states are well separated.
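The numerical constant in (2.18) can be recovered by expanding the integrand in a geometric series in e^(−x²): each term integrates to √π/(4k^(3/2)), so (4/√π)∫₀^∞ x² dx/(e^(x²) − 1) = Σ_k k^(−3/2) = ζ(3/2) ≈ 2.612. A quick check:

```python
import math

# Each term of the geometric series integrates to sqrt(pi)/(4 k^{3/2}), so the
# dimensionless integral equals zeta(3/2) = sum over k of k^{-3/2}
zeta_32 = sum(k ** -1.5 for k in range(1, 100_000))
zeta_32 += 2 / math.sqrt(100_000)  # integral estimate of the series tail
assert abs(zeta_32 - 2.612) < 1e-3
```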

We now treat the case of an ideal gas of fermions at zero temperature, which is much
simpler than the case we just treated. Indeed, for fermions the occupation number can
only be either 0 or 1, and the mean number of particles expressed in the continuous limit
gives:

    N̄ = (4πgV₀/h³) ∫₀^{+∞} nᵢᶠᴰ p² dp    (2.19)

where the degree of degeneracy for a fermion of spin s is g = 2s + 1 (g = 2 for electrons).
When T = 0, the particles occupy only the lowest energy levels, while all the others are
unoccupied, which defines the Fermi momentum p_F above which all occupation numbers
vanish and the mean number of particles is simply:

    N̄ = (4πgV₀/h³) ∫₀^{p_F} p² dp = (4πgV₀/(3h³)) p_F³    (2.20)
This means that, even at zero temperature, the particles in an ideal gas of fermions have
a non-vanishing momentum and energy. The Fermi momentum, Fermi energy and Fermi
temperature are obtained from the previous result:

    p_F = h (3N̄/(4πgV₀))^(1/3),   ε_F = (h²/(2m)) (3N̄/(4πgV₀))^(2/3)   and   k_B T_F = ε_F    (2.21)

For any temperature below the Fermi temperature, the gas of fermions acts as if it were
at T = 0.
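As an order-of-magnitude application of (2.21), the sketch below evaluates the Fermi energy and temperature for conduction electrons, taking the electron density of copper (≈ 8.5 × 10²⁸ m⁻³) as an example value; the result, ε_F ≈ 7 eV and T_F ≈ 8 × 10⁴ K, shows that a metal at room temperature is deep in this degenerate regime:

```python
import math

# Physical constants (SI)
h = 6.626e-34    # Planck constant [J s]
m_e = 9.109e-31  # electron mass [kg]
k_B = 1.381e-23  # Boltzmann constant [J/K]
eV = 1.602e-19   # electron-volt [J]

g = 2            # spin-1/2 electrons: two spin states per level
n = 8.5e28       # example conduction-electron density of copper [m^-3]

# Fermi momentum, energy and temperature from (2.21)
p_F = h * (3 * n / (4 * math.pi * g)) ** (1 / 3)
eps_F = p_F ** 2 / (2 * m_e)
T_F = eps_F / k_B

assert 6.5 < eps_F / eV < 7.5  # about 7 eV for copper
assert 7e4 < T_F < 9e4         # far above room temperature
```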

The black-body
We now consider the peculiar case of an ideal gas of photons, that is, of massless bosons.
Because photons can be emitted and absorbed by atoms and molecules, the number of
photons is not fixed and cannot be imposed by a reservoir, so the chemical potential has
to vanish, i.e. µ = 0. In addition, because there is no photon-photon interaction, the gas
of photons is truly ideal.
We can therefore use the results from the previous case of the ideal gas of bosons, including
the additional constraint that µ = 0. We find the occupation number of the state i of
energy εᵢ to be:

    n̄ᵢ^γ = 2 / (e^(βεᵢ) − 1)    (2.22)
The factor 2 in the above expression comes from the two polarizations of photons that
have degenerate energy levels. Because the chemical potential vanishes, the mean energy
is easily expressed from the grand canonical partition function as:

    Ē = −2 Σᵢ ∂ln ξᵢ/∂β = Σᵢ 2εᵢ / (e^(βεᵢ) − 1)    (2.23)

If we go to the continuous limit, noting that the special relativity energy-momentum
relation for the massless photons reduces simply to εᵢ = pᵢc with c the speed of light in
the vacuum, we find the mean value of the energy:

    Ē = (8πV₀c/h³) ∫₀^{+∞} p³ dp / (e^(βpc) − 1) = (8πhV₀/c³) ∫₀^{+∞} ν³ dν / (e^(βhν) − 1)    (2.24)
where we have used the relation ν = pc/h to express the energy as a function of the
frequency instead of the momentum. The energy density per unit of volume u(ν)
transported by photons with frequency included in [ν, ν + dν] is therefore:

    (Black-body spectrum)    u(ν) = (8πh/c³) ν³ / (e^(βhν) − 1)    (2.25)

This is the famous Planck law that describes the spectrum of a black-body. By performing
a last change of variable to highlight the dimensionless variable x = βhν, we find the total
energy density:

    Ē/V₀ = ((k_B T)⁴/(π²(ℏc)³)) ∫₀^{+∞} x³ dx / (e^x − 1) = π²(k_B T)⁴ / (15 (ℏc)³)    (2.26)
Bibliography

[1] Anton Akhmerov and Toeno van der Sar. Einstein model. Open Solid State Notes.
Last accessed: 2024-12-22.
