Unit 2-1 Slides
The counting of the number of states available to a particle amounts to determining the
available volume in phase space. One might conclude that for a continuous phase space,
any finite volume would contain an infinite number of states. But the uncertainty principle
tells us that we cannot simultaneously know both the position and momentum, so we
cannot really say that a particle is at a mathematical point in phase space. So when we
contemplate an element of "volume" in phase space, we should think of it not as a point but as a
small, finite cell, of size h for each conjugate pair (q, p), as discussed further below.
We now want to address the question of how to relate thermodynamics to mechanics more generally. Kinetic theory
is one approach, but one has to be able to go beyond the simple model of non-interacting particles discussed in
the previous section, and be able to incorporate interactions between the particles – the Boltzmann equation is one
formalism for doing that.
Here we will adopt a more “modern” approach and use the method of statistical ensembles developed by Gibbs in the
early 1900’s (so 120 years old is still “modern”!). The method of statistical ensembles has at its conceptual core the
ergodic hypothesis.
Consider a system of N particles, each with three spatial degrees of freedom x, y, z. The system is described in classical
mechanics by the Hamiltonian H, which is a function of 6N canonical variables, q1 , q2 , . . . , q3N , p1 , p2 , . . . , p3N . The
3N variables qi give the spatial coordinates of the N particles, and the 3N variables pi are the corresponding canonical
momenta. These 6N variables denote the phase space of the system. At any moment in time, the state of the system
is specified by its position in phase space, i.e. by specifying the value of each of these 6N coordinates.
The time evolution of the system as it moves in phase space is given by Hamilton’s equations,
\dot{p}_i = -\frac{\partial H}{\partial q_i}, \qquad \dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad i = 1, 2, \ldots, 3N   (2.2.1)
Solving Hamilton’s equations gives the phase space trajectory of the system {qi (t), pi (t)}.
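As a small illustration (my own sketch, not part of the notes), here is a minimal Python example that integrates Hamilton's equations for a single 1D harmonic oscillator, H = p²/2m + ½mω²q², with a leapfrog step; the numerical trajectory stays (to integrator accuracy) on the curve H(q, p) = E. All parameter values are arbitrary choices for the demonstration.

import numpy as np

# Minimal sketch: Hamilton's equations for one 1D harmonic oscillator,
# H = p^2/(2m) + (1/2) m w^2 q^2, integrated with a leapfrog (symplectic) step.
m, w, dt = 1.0, 2.0, 1e-3
q, p = 1.0, 0.0                           # initial condition (q0, p0)
E0 = p**2/(2*m) + 0.5*m*w**2*q**2         # initial energy

traj = []                                 # phase space trajectory (q(t), p(t)), e.g. for plotting
for _ in range(20000):
    p -= 0.5*dt * m*w**2*q                # half kick:  p_dot = -dH/dq
    q += dt * p/m                         # drift:      q_dot = +dH/dp
    p -= 0.5*dt * m*w**2*q                # half kick
    traj.append((q, p))

E_final = p**2/(2*m) + 0.5*m*w**2*q**2
print(f"relative energy drift: {abs(E_final - E0)/E0:.2e}")   # stays tiny: motion stays on the H = E curve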
In general, assuming the system is isolated from all external degrees of freedom, the total energy of the system will be
conserved as the system moves on its phase space trajectory. If the total energy of the system is E, then the condition
H[qi , pi ] = E defines a 6N − 1 dimensional surface in phase space to which the system’s trajectory is confined (when
I write [qi , pi ], I will mean the set of all 3N of the qi and all 3N of the pi ).
If one wanted to compute the measured value of some physical quantity X, averaged over an interval of time τ , it
would be given by,
\langle X \rangle = \frac{1}{\tau} \int_{t_0}^{t_0 + \tau} dt\, X[q_i(t), p_i(t)]   (2.2.2)
where X[qi (t), pi (t)] is the value that the quantity X takes when the system is at coordinates {qi (t), pi (t)} in phase
space.
[Sketch: a phase space trajectory, generated by Hamilton's equations of motion, winding over the surface of constant energy E.]

In practice, however, this requires knowing the system's trajectory on the surface
of constant energy E, which we have no hope of computing directly
(note, since Hamilton's equations give a unique trajectory for each
initial condition {q_i0, p_i0}, such a phase space trajectory may never
intersect itself).

[Even if we had the best computers in the universe and beyond,
we could not do the calculation, because most classical mechanical
systems with many interacting degrees of freedom develop "deter-
ministic chaos" after a sufficiently long time. Although the equa-
tions of motion are deterministic, small changes in the initial conditions lead to exponentially growing changes in the
final state after a long time period of evolution. So any uncertainty in the initial conditions, no matter how small
(and the finite bit size of words in computer memory always results in a finite accuracy), will ultimately lead to
unpredictability of the state of the system after sufficiently long time.]
To compute ⟨X⟩ we therefore need to make an assumption. The ergodic hypothesis says that, for a system in equi-
librium, during any time interval τ sufficiently long, the location of the system in phase space {qi (t), pi (t)} is equally
likely to be anywhere on the surface of constant energy E. If this is so, then we can write,
(i)\;\; \langle X \rangle = \int dq_i\, dp_i\, X[q_i, p_i]\, \rho(q_i, p_i) \qquad \text{where} \qquad (ii)\;\; \rho(q_i, p_i) = C\, \delta\big(H[q_i, p_i] - E\big)   (2.2.3)

where C is a normalizing factor such that \int dq_i\, dp_i\, \rho(q_i, p_i) = 1.
The distribution ρ(qi , pi ) is called the density matrix, and it is the probability density for the system, in equilibrium,
to be found at a particular point {qi , pi } in phase space. According to the ergodic hypothesis, for a system with fixed
energy E, that probability distribution is a uniform constant on the surface of constant energy E and zero elsewhere.
In other words, in the absence of any further information, we assume that in equilibrium all microscopic states {qi , pi }
consistent with a given set of macroscopic thermodynamic variables, E, V , N , are equally likely.
Computing averages as in Eq. (2.2.3)(i) is called ensemble theory. Using the density matrix ρ of the form in
Eq. (2.2.3)(ii) is called the microcanonical ensemble.
In ensemble theory one abandons any effort to compute thermodynamic properties from the explicit time dependent
trajectory of the system in phase space as in Eq. (2.2.2). Rather one describes the thermodynamic state as represented
by a particular ensemble given by a density matrix ρ(qi , pi ).
One can interpret the ensemble average, as in Eq. (2.2.3)(i), as the value one would find, not for a single isolated
system moving on its trajectory, but for the average of a collection of systems distributed in phase space according
to the density ρ. The ergodic hypothesis asserts that the time average of Eq. (2.2.2) and the ensemble average of
Eq. (2.2.3) are equal.
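To make this equality concrete, here is a small illustrative Python sketch (my own toy example, not part of the notes): for a single 1D harmonic oscillator, the time average of q² along the exact trajectory is compared with a microcanonical average of q² over a thin energy shell in the (q, p) plane, sampled uniformly. The oscillator is of course a trivial case, but the two averages agree, as the ergodic hypothesis asserts. The energy, shell width, and sample sizes are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
m, w, E = 1.0, 1.0, 1.0
A = np.sqrt(2*E/(m*w**2))                 # amplitude for total energy E

# Time average of q^2 along the exact trajectory q(t) = A cos(w t)
t = np.linspace(0, 200*2*np.pi/w, 200001)
time_avg = np.mean((A*np.cos(w*t))**2)

# Microcanonical average: sample (q, p) uniformly, keep only a thin shell |H - E| < dE
dE = 0.01*E
q = rng.uniform(-1.2*A, 1.2*A, 2_000_000)
p = rng.uniform(-1.2*A*m*w, 1.2*A*m*w, 2_000_000)
H = p**2/(2*m) + 0.5*m*w**2*q**2
shell = np.abs(H - E) < dE
ens_avg = np.mean(q[shell]**2)

print(time_avg, ens_avg)                  # both close to A^2/2 = E/(m w^2)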
Equilibrium is described by a density matrix ρ that does not vary in time. We will soon see other ensembles besides
the microcanonical ensemble of Eq. (2.2.3)(ii).
The ergodic hypothesis cannot in general be proven (it has been proven only for some very special simple systems –
see the work of Sinai). But the existence of thermodynamics as an empirically consistent theory suggests why the
ergodic hypothesis may be true. We can consider the following two points.
1) Liouville’s Theorem of classical mechanics, which we will discuss in the next set of notes, shows that any ρ(qi , pi )
that is independent of time, and so may describe equilibrium, must be constant on constant energy surfaces.
2) By positing a thermodynamic description we assume that the macroscopic properties of a system are completely
described by a set of only a few macroscopic variables, such as E, V , and N . If the ergodic hypothesis were not true,
then there would be parts of phase space with the same value of E that never “saw” each other – i.e. a trajectory
in one part would not enter the other part, and vice versa. One could imagine, therefore, that systems in these two
disjoint regions of phase space might have different properties, i.e. have different time averages of some particular
property X[qi , pi ]. One therefore might expect them to represent thermodynamically distinguishable states. But this
would contradict the assumption that E alone is the important thermodynamic variable.
Alternatively, if ergodicity fails, there might be some other important macroscopic variable (for example magne-
tization) which one overlooked. The disjoint regions of the constant energy surface could correspond to different
values of this new macroscopic variable. This reflects back to a comment made early in Notes 1-1, that in making
a thermodynamic description of a system one must first correctly identify all the relevant macroscopic variables. A
globally conserved quantity is always a candidate for such a macroscopic variable. Once one has identified all the
macroscopically relevant variables, the ergodic hypothesis implies that further, more microscopic, information about
the state of the system will not affect the thermodynamic behavior, i.e. all microscopic states consistent with a set of
macroscopic variables are equally likely.
The concept of the density matrix will soon be expanded beyond the particular example of the microcanonical ensemble
discussed in the previous section. It can also be generalized to non-equilibrium situations, where the density matrix
varies with time, ρ(q_i, p_i, t). We therefore want to see what general condition ρ must satisfy in order that ∂ρ/∂t = 0,
and so ρ is describing a steady, time-independent state.
[Sketch: a cloud of phase space points at time t = t_0, and the same points at a later time t, after each has evolved under Hamilton's equations.]

Consider a distribution of points in phase space at time t_0, described by a density ρ(q_i, p_i, t_0). As the trajectories traced out
by these initial points evolve in time, they give the density ρ(t) at later
times. Think of the points in ρ like particles in a fluid. The probability density ρ
must obey a local conservation equation (think of the charge conservation equation
of E&M),

\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0   (2.3.1)

where u is the "velocity" vector of the probability current ρu, that tells how the
points in ρ flow in the 6N dimensional phase space.
The vector u is the 6N dimensional vector u ≡ (q̇_1, …, q̇_3N, ṗ_1, …, ṗ_3N), and ∇ ≡ (∂/∂q_1, …, ∂/∂q_3N, ∂/∂p_1, …, ∂/∂p_3N), so

\nabla \cdot (\rho \mathbf{u}) \equiv \sum_{i=1}^{3N} \left[ \frac{\partial (\rho \dot{q}_i)}{\partial q_i} + \frac{\partial (\rho \dot{p}_i)}{\partial p_i} \right]
= \sum_{i=1}^{3N} \left[ \dot{q}_i \frac{\partial \rho}{\partial q_i} + \rho \frac{\partial \dot{q}_i}{\partial q_i} + \dot{p}_i \frac{\partial \rho}{\partial p_i} + \rho \frac{\partial \dot{p}_i}{\partial p_i} \right]   (2.3.2)

= \sum_{i=1}^{3N} \left[ \dot{q}_i \frac{\partial \rho}{\partial q_i} + \dot{p}_i \frac{\partial \rho}{\partial p_i} + \rho \left( \frac{\partial \dot{q}_i}{\partial q_i} + \frac{\partial \dot{p}_i}{\partial p_i} \right) \right]   (2.3.3)
Using Hamilton's equations (2.2.1), ∂q̇_i/∂q_i = ∂²H/∂q_i∂p_i = −∂ṗ_i/∂p_i, so the last term in Eq. (2.3.3) vanishes. Substituting q̇_i = ∂H/∂p_i and ṗ_i = −∂H/∂q_i, the conservation law Eq. (2.3.1) becomes

\frac{\partial \rho}{\partial t} + \sum_{i=1}^{3N} \left[ \frac{\partial \rho}{\partial q_i} \frac{\partial H}{\partial p_i} - \frac{\partial \rho}{\partial p_i} \frac{\partial H}{\partial q_i} \right] = 0

where the sum, written [ρ, H], defines the Poisson bracket of the two observables ρ and H (in the correspondence of classical to quantum
mechanics, the Poisson bracket becomes the commutator). Thus,

\frac{\partial \rho}{\partial t} + [\rho, H] = 0 \qquad \text{or} \qquad \frac{\partial \rho}{\partial t} + \sum_{i=1}^{3N} \left[ \frac{\partial \rho}{\partial q_i} \frac{dq_i}{dt} + \frac{\partial \rho}{\partial p_i} \frac{dp_i}{dt} \right] \equiv \frac{d\rho}{dt} = 0   (2.3.6)

This is Liouville's theorem. Here the total derivative dρ/dt, sometimes called the convective derivative, is just the total
derivative of ρ(q_i(t), p_i(t), t) with respect to t; dρ/dt tells how the value of ρ changes in time as seen by an observer
who travels along with the system on its trajectory {q_i(t), p_i(t)}.
[Sketch: a drop of fluid at time t = t_0 and at a later time t; as the drop deforms, the density of fluid and the volume of the drop stay constant.]

Liouville's theorem, that dρ/dt = 0, therefore says that the probability
density in phase space ρ stays constant in time as one flows along with
the density, just like the behavior of an incompressible fluid. This is a
consequence of the probability conservation law of Eq. (2.3.1).

However, for ρ to describe equilibrium, the probability density must obey
the stronger condition that ∂ρ/∂t = 0, i.e. the probability for the system
to be at any fixed point {q_i, p_i} in phase space stays constant in time.
Only when ∂ρ/∂t = 0 will ensemble averages be independent of time.
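The incompressible-flow picture can be checked numerically. Here is an illustrative Python sketch (not from the notes): the vertices of a small "drop" of initial conditions for a 1D harmonic oscillator are pushed forward with the exact Hamiltonian flow, and the enclosed area (shoelace formula) stays constant while the drop moves and shears. The oscillator parameters and the drop size are arbitrary choices.

import numpy as np

def polygon_area(q, p):
    # Shoelace formula for the area enclosed by the polygon with vertices (q_i, p_i)
    return 0.5*abs(np.dot(q, np.roll(p, -1)) - np.dot(p, np.roll(q, -1)))

m, w = 1.0, 2.0
theta = np.linspace(0, 2*np.pi, 400, endpoint=False)
q0 = 1.0 + 0.1*np.cos(theta)              # a small circular "drop" of initial conditions
p0 = 0.0 + 0.1*np.sin(theta)              # centered at (q, p) = (1, 0)

for t in [0.0, 1.0, 3.0, 10.0]:
    qt = q0*np.cos(w*t) + (p0/(m*w))*np.sin(w*t)    # exact Hamiltonian flow of the oscillator
    pt = p0*np.cos(w*t) - m*w*q0*np.sin(w*t)
    print(f"t = {t:5.1f}   area of the drop = {polygon_area(qt, pt):.6f}")
# The drop stretches and rotates, but its area stays fixed at ~ pi * 0.1^2 = 0.0314.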
To have ∂ρ/∂t = 0 ⇒ [ρ, H] = 0, and so for equilibrium ρ must satisfy,

[\rho, H] = \sum_{i=1}^{3N} \left[ \frac{\partial \rho}{\partial q_i} \frac{\partial H}{\partial p_i} - \frac{\partial \rho}{\partial p_i} \frac{\partial H}{\partial q_i} \right] = 0   (2.3.7)
We will have [ρ, H] = 0 provided ρ(qi , pi ) depends on the {qi , pi } only via the function H[qi , pi ], i.e. if ρ = ρ(H[qi , pi ]).
Then we have,
\frac{\partial \rho}{\partial q_i} = \frac{\partial \rho}{\partial H} \frac{\partial H}{\partial q_i} \qquad \text{and} \qquad \frac{\partial \rho}{\partial p_i} = \frac{\partial \rho}{\partial H} \frac{\partial H}{\partial p_i}   (2.3.8)

so that,

[\rho, H] = \sum_{i=1}^{3N} \left[ \frac{\partial \rho}{\partial q_i} \frac{\partial H}{\partial p_i} - \frac{\partial \rho}{\partial p_i} \frac{\partial H}{\partial q_i} \right]
= \sum_{i=1}^{3N} \frac{\partial \rho}{\partial H} \left[ \frac{\partial H}{\partial q_i} \frac{\partial H}{\partial p_i} - \frac{\partial H}{\partial p_i} \frac{\partial H}{\partial q_i} \right] = 0   (2.3.9)
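A quick symbolic check of this statement (an illustrative sketch, not part of the notes): for a single particle in one dimension with an arbitrary potential V(q), and an arbitrary density ρ = f(H), sympy confirms that the Poisson bracket vanishes identically.

import sympy as sp

q, p, m = sp.symbols('q p m', positive=True)
H = p**2/(2*m) + sp.Function('V')(q)      # one particle in an arbitrary potential V(q)
rho = sp.Function('f')(H)                 # any density that depends on (q, p) only through H

# Poisson bracket [rho, H] = d(rho)/dq * dH/dp - d(rho)/dp * dH/dq
pb = sp.diff(rho, q)*sp.diff(H, p) - sp.diff(rho, p)*sp.diff(H, q)
print(sp.simplify(pb))                    # prints 0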
We saw that the microcanonical ensemble, at constant total energy E, assigned equal weight to all systems on the
surface of constant energy in phase space, defined by H[qi , pi ] = E.
To count the number of such states on the constant energy surface, we define the density of states g(E), which is the
number of states of total energy E per unit energy,
g(E) \equiv \int \frac{dq_i\, dp_i}{h^{3N}}\, \delta\big(H[q_i, p_i] - E\big)   (2.4.1)

Here dq_i dp_i stands for \prod_{i=1}^{3N} dq_i\, dp_i, i.e. we integrate over all 6N of the phase space coordinates. The delta function
means we count only coordinates that lie on the surface of constant energy E.
The constant h has units of qi pi and is introduced so that g(E) will have the units of 1/energy. You can think of h
as the grid width in discretizing the continuous phase space into a grid of discrete cells, each with phase space volume
h3N . Such a discretization is useful since it will allow us to count states, something that is conceptually more difficult
when states are specified by continuous coordinates. Classically, h should be small but is otherwise totally arbitrary,
and so we expect our thermodynamic results should not depend on its value (though later we will see that this is not
always so!). Quantum mechanically we will see that h turns out to be Planck’s constant (recall, Planck’s constant
has the units of energy · time which is the same as the units of qi pi ).
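As an illustration of this counting (my own example, not part of the notes), consider a single 1D harmonic oscillator with H = p²/2m + ½mω²q². The region H ≤ E is an ellipse with semi-axes √(2mE) and √(2E/mω²), so its phase-space area is

\oint p\, dq = \pi \sqrt{2mE}\, \sqrt{\frac{2E}{m\omega^2}} = \frac{2\pi E}{\omega},

and dividing by h gives roughly 2πE/(ωh) = E/(ℏω) cells with energy below E, in agreement with the ℏω spacing of the quantum oscillator levels.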
We can now define the number of states Ω in a shell of thickness ∆E about the surface of constant energy E,
\Omega(E, V, N) = \int_{E - \Delta E/2}^{E + \Delta E/2} dE'\, g(E')   (2.4.2)

Since g(E) has units of 1/energy, Ω is a dimensionless pure number.
As with h, the energy width ∆E is arbitrary, and so our thermodynamic results should not depend on the value of
∆E. We will assume it to be in the range E/N < ∆E ≪ E, so it is larger than the energy per particle but much
smaller than the total energy. One can think of ∆E as representing the finite accuracy with which one knows the
total energy E. Both h and ∆E are introduced so that Ω is a dimensionless pure number that we can think of as
being the number of microscopic states that are occupied in the microcanonical ensemble at total energy E.
We will now compute Ω(E, N, V ) for the ideal gas of non-interacting point particles. For this system the energy is
entirely the kinetic energy of the particles, and so the Hamiltonian is,
H = \sum_{i=1}^{3N} \frac{p_i^2}{2m}   (2.4.3)
We see from H that the surface of constant energy E is just the surface on
which the momenta {p_i} satisfy the constraint \sum_{i=1}^{3N} p_i^2 = 2mE. This con-
straint defines the surface of a 3N dimensional sphere centered at the origin
in momentum space. The radius of that sphere is \sqrt{\sum_{i=1}^{3N} p_i^2} = \sqrt{2mE}. We
therefore have for the density of states,

g(E) = \frac{1}{h^{3N}} \prod_{i=1}^{3N} \int_0^{L} dq_i \int_{-\infty}^{\infty} dp_i\; \delta\!\left( \sum_{j=1}^{3N} \frac{p_j^2}{2m} - E \right)   (2.4.4)

where we assume the system is confined to a box of length L. Since the box volume is V = L³, doing the integration
over the {q_i} gives a factor of V^N.
Let us now convert the integral over the {pi } to spherical coordinates. This gives,
\prod_{i=1}^{3N} \int_{-\infty}^{\infty} dp_i = \int d\Omega_{3N} \int_0^{\infty} dP\, P^{3N-1}   (2.4.6)

where P = \sqrt{\sum_{i=1}^{3N} p_i^2} is the magnitude of the 3N dimensional vector P = (p_1, p_2, …, p_{3N}), and Ω_{3N} is the 3N dimen-
sional solid angle giving the orientation of the vector P (don't confuse Ω_{3N} with the number of states Ω(E, V, N)!).
We then get,
g(E) = \frac{V^N}{h^{3N}} \int d\Omega_{3N} \int_0^{\infty} dP\, P^{3N-1}\, \delta\!\left( \frac{P^2}{2m} - E \right)   (2.4.7)

= \frac{V^N}{h^{3N}}\, S_{3N} \int_0^{\infty} dP\, P^{3N-1}\, \frac{\delta\big(P - \sqrt{2mE}\big)}{(P/m)}   (2.4.8)

= \frac{V^N}{h^{3N}}\, S_{3N}\, m\, (2mE)^{\frac{3N-2}{2}}   (2.4.9)
Here S3N is the area of the surface of a sphere with unit radius in 3N -dimensional space, and we converted the delta
function in the integrand using the result that for a monotonic increasing function f (x),
\delta(f(x)) = \frac{\delta(x - x_0)}{f'(x_0)} \qquad \text{where } f(x_0) = 0 \text{ and } f'(x) = df/dx.   (2.4.10)

[Note: for the more general case where f(x) is any continuous function: \delta(f(x)) = \sum_i \frac{\delta(x - x_i)}{|f'(x_i)|}, where the x_i are
the zeros of f(x), i.e. f(x_i) = 0.]
From appendix C of Pathria and Beale we have S_d = \frac{2\pi^{d/2}}{\Gamma(d/2)}, where Γ(x) is the gamma function that obeys Γ(n) =
(n − 1)! for integer n. So, S_{3N} = \frac{2\pi^{3N/2}}{\left(\frac{3N}{2} - 1\right)!}, and we have,

g(E) = \frac{V^N}{h^{3N}}\, \frac{2\pi^{3N/2}}{\left(\frac{3N}{2} - 1\right)!}\, m\, \frac{(2mE)^{3N/2}}{2mE}   (2.4.11)

and so

g(E) = \frac{V^N (2\pi m E)^{3N/2}}{h^{3N} \left(\frac{3N}{2} - 1\right)!}\, \frac{1}{E}   (2.4.12)
Integrating g(E) over a shell of energy thickness ∆E then gives for the number of states,
\Omega(E) = \int_{E - \Delta E/2}^{E + \Delta E/2} dE'\, g(E') \approx g(E)\, \Delta E   (2.4.13)

\Omega(E) = \frac{V^N (2\pi m E)^{3N/2}}{h^{3N} \left(\frac{3N}{2} - 1\right)!}\, \frac{\Delta E}{E}   (2.4.14)
Note, for convenience we write the number of states as Ω(E) rather than the more complete Ω(E, V, N ).
For large N ∼ 1023 , Ω(E) ∼ E (3N/2)−1 is a very rapidly increasing function of the total energy E!
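To get a feel for how fast Ω(E) grows, here is a small illustrative Python sketch (my own, not part of the notes) that evaluates ln Ω from Eq. (2.4.14) using math.lgamma for the factorial, with a modest N = 100 and arbitrary units m = h = V = 1: each doubling of E multiplies Ω by roughly 2^(3N/2 − 1).

import math

def ln_Omega(E, V, N, m=1.0, h=1.0, dE_over_E=1e-3):
    """ln of Eq. (2.4.14): Omega = V^N (2 pi m E)^{3N/2} / [h^{3N} (3N/2 - 1)!] * (dE/E)."""
    return (N*math.log(V) + 1.5*N*math.log(2*math.pi*m*E)
            - 3*N*math.log(h) - math.lgamma(1.5*N) + math.log(dE_over_E))

N, V = 100, 1.0
for E in [1.0, 2.0, 4.0]:
    print(f"E = {E:3.1f}   ln Omega = {ln_Omega(E, V, N):10.2f}")
# ln Omega increases by about (3N/2 - 1) * ln 2 ~ 103 each time E doubles,
# i.e. Omega itself grows by a factor of roughly 2^(3N/2 - 1).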
We will now argue that Ω(E) is related to the entropy S(E) of the system.
Consider two systems, 1 and 2, with energies E_1 and E_2, separated by a wall that allows no exchange of energy or particles. The number of states available to the combined system is then the product Ω_T = Ω_1(E_1) Ω_2(E_2) (2.4.15), since system 1 can be in any one of Ω_1(E_1) states, and system 2 can be in any one of Ω_2(E_2) states.
Now suppose the wall is thermally conducting so that energy can be transferred between the two systems. Then E1
can vary but ET = E1 + E2 stays fixed. What will be the value of E1 when the system comes into equlibrium?
In this case the number of states available to the total system is obtained just by adding up the terms as in Eq. (2.4.15),
but now considering all possible divisions of the energy between the two systems,
\Omega_T(E_T) = \int_0^{E_T} \frac{dE_1}{\Delta E}\, \Omega_1(E_1)\, \Omega_2(E_T - E_1)   (2.4.16)
Consider the behavior of the integrand: Ω_1(E_1) ∼ E_1^{(3N_1/2)−1} is a very rapidly increasing function of E_1, while Ω_2(E_T − E_1) ∼ (E_T − E_1)^{(3N_2/2)−1} is a very rapidly decreasing function of E_1.
The product Ω1 (E1 )Ω2 (ET − E1 ) therefore has a sharp maximum at some particular value of E1 , as illustrated in the
sketches below where I took N1 = N2 = 20. As N1 and N2 increase, Ω1 and Ω2 become ever more sharply varying,
and the product Ω1 Ω2 becomes ever more sharply peaked.
[Sketches: Ω_1(E_1), Ω_2(E_T − E_1), and the product Ω_1Ω_2, plotted vs E_1 for N_1 = N_2 = 20; the product is sharply peaked.]
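A numerical version of these sketches (an illustration of my own, not part of the notes): taking Ω(E) ∝ E^(3N/2 − 1) for each ideal-gas subsystem and N_1 = N_2 = 20, the product Ω_1(E_1)Ω_2(E_T − E_1) is already quite sharply peaked at E_1 = E_T/2.

import numpy as np

N1 = N2 = 20
ET = 1.0
E1 = np.linspace(1e-6, ET - 1e-6, 100001)

# ln of the product Omega1(E1)*Omega2(ET - E1), with Omega(E) ~ E^{3N/2 - 1}
lnprod = (1.5*N1 - 1)*np.log(E1) + (1.5*N2 - 1)*np.log(ET - E1)
weight = np.exp(lnprod - lnprod.max())            # normalized to 1 at the peak

E1_peak = E1[np.argmax(weight)]
half_max = E1[weight > 0.5]                       # region where the product exceeds half its maximum
print("peak at E1 =", E1_peak)                                        # ~ ET/2
print("relative full width at half max:", (half_max[-1] - half_max[0])/ET)   # ~ 0.15 for N = 20, shrinking as N grows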
In the microcanonical ensemble all states with total energy ET are equally likely. So the probability that the total
system has its energy divided with E1 in system 1 and ET − E1 in system 2 is just proportional to the number of
states that have this particular division of energy, i.e. Ω1 (E1 )Ω2 (ET − E1 ). The most likely value for the energy E1
is therefore the value Ē1 that maximizes Ω1 (E1 )Ω2 (ET − E1 ). Since this quantity, as argued above, has a very sharp
maximum at Ē1 as N gets large, then one is almost certain to find E1 = Ē1 and the probability to find any other
value of E1 will vanish as the size of the system N gets infinitely large.
What condition determines this maximizing value of E1 ? As usual, it is given by the value where the derivative of
Ω1 (E1 )Ω2 (ET − E1 ) vanishes,
\frac{\partial}{\partial E_1}\big[ \Omega_1(E_1)\, \Omega_2(E_T - E_1) \big] = 0   (2.4.17)

\Rightarrow \quad \frac{\partial \Omega_1(E_1)}{\partial E_1}\, \Omega_2(E_T - E_1) + \Omega_1(E_1)\, \frac{\partial \Omega_2(E_T - E_1)}{\partial E_1} = 0   (2.4.18)

\Rightarrow \quad \frac{\partial \Omega_1(E_1)}{\partial E_1}\, \Omega_2(E_T - E_1) - \Omega_1(E_1)\, \frac{\partial \Omega_2(E_T - E_1)}{\partial E_2} = 0   (2.4.19)
Dividing through by Ω_1(E_1) Ω_2(E_T − E_1), this condition can be written as

\frac{\partial \ln \Omega_1(E_1)}{\partial E_1} = \frac{\partial \ln \Omega_2(E_2)}{\partial E_2}   (2.4.20)

In thermodynamics, the condition for two systems in thermal contact to be in equilibrium is the equality of their temperatures, 1/T_1 = 1/T_2, i.e.

\frac{\partial S_1(E_1)}{\partial E_1} = \frac{\partial S_2(E_2)}{\partial E_2}   (2.4.21)

Comparing Eq. (2.4.20) with Eq. (2.4.21), and following Boltzmann, we therefore identify

S \propto \ln \Omega

as the entropy.
Since the relation between thermodynamics and mechanics should be fundamental, Boltzmann postulated that the
proportionality constant in the above equation should be a universal constant of nature, and not depend on the
particular system being considered. That constant is Boltzmann's constant k_B. So finally we have for the entropy,

S(E, V, N) = k_B \ln \Omega(E, V, N)
Looking at Eq. (2.4.14) giving Ω(E) for the ideal gas, we see as expected that S(E) is a monotonic increasing function
of E.
Stat Mech and Thermodynamics

Postulates of Equilibrium Statistical Mechanics:
● The system evolves over time to explore "all accessible" states: all those with the same energy and composition.
● Principle of indifference: In the absence of any further information, we can only assign equal probabilities to each compatible situation.
Stat Mech and Thermodynamics

Statistical Mechanics Postulates | Thermodynamics Laws
Equilibrium exists | Equilibrium states follow Euclid's first axiom (two states in mutual equilibrium have the same temperature)
(Trivial) | (Trivial)
Equilibrium exists | Entropy S exists and is maximized
(Not trivial, but already proved) | (Trivial. Essentially the definition of heat)
Ergodicity is attained | Entropy is additive
(Not trivial) | (Not trivial)
When entropy vanishes, so does temperature | When entropy vanishes, so does temperature
(Quantum origins: Will discuss later)
Stat Mech and Thermodynamics

[Diagram: Statistical Mechanics → Thermodynamics (postulates) → Thermodynamics (laws)]
Stat Mech and Thermodynamics
● Additive property of Entropy

[Diagram: Statistical Mechanics → Thermodynamics (postulates) → Thermodynamics (laws)]
Stat Mech and Thermodynamics
● Let a state have energy … [equation on slide not extracted]
● Let there be a generalized coordinate such that … [equation on slide not extracted]
Generalized Force
Suppose … [derivation on slides not extracted; the correction term vanishes in the thermodynamic limit]
Statistical Mechanics

[Diagram: Statistical Mechanics → Thermodynamics (postulates) → Thermodynamics (laws); Third Law?]
Additional Results
Observation: [slide equations not extracted]
From the last section we had for the number of states for the ideal gas,
\Omega(E, V, N) = \frac{V^N (2\pi m E)^{3N/2}}{h^{3N} \left(\frac{3N}{2} - 1\right)!}\, \frac{\Delta E}{E}   (2.5.1)

Taking the entropy to be S = k_B \ln \Omega and applying Stirling's approximation, \ln M! \approx M \ln M - M, to the factorial gives

S(E, V, N) = k_B \left\{ N \ln\!\left[ \frac{V (2\pi m E)^{3/2}}{h^3} \right] - \frac{3N}{2} \ln\frac{3N}{2} + \frac{3N}{2} + \ln\frac{3N}{2} - \frac{2}{3N} + \ln\frac{\Delta E}{E} \right\}   (2.5.5)

= k_B \left\{ N \ln\!\left[ \frac{V}{h^3} \left( \frac{2\pi m E}{3N/2} \right)^{3/2} \right] + \frac{3N}{2} + \ln\frac{3N}{2} + O\!\left(\frac{1}{N}\right) + \ln\frac{\Delta E}{E} \right\}   (2.5.6)
Recall, we took ∆E ∼ E/N, so ln(∆E/E) ∼ −ln N. Since we are interested in the thermodynamic limit of N → ∞,
we can drop all but the leading terms that scale proportional to N, and we then get,

S(E, V, N) = N k_B \left\{ \frac{3}{2} + \ln\!\left[ \frac{V}{h^3} \left( \frac{4\pi m E}{3N} \right)^{3/2} \right] \right\}   (2.5.7)
With the above expression for the entropy, we can recover some familiar results,
\frac{1}{T} = \left( \frac{\partial S}{\partial E} \right)_{V,N} = \frac{\partial}{\partial E} \left[ \frac{3}{2} N k_B \ln E \right] = \frac{3}{2} N k_B \frac{1}{E} \quad \Rightarrow \quad E = \frac{3}{2} N k_B T   (2.5.8)

\frac{p}{T} = \left( \frac{\partial S}{\partial V} \right)_{E,N} = \frac{\partial}{\partial V} \big[ N k_B \ln V \big] = \frac{N k_B}{V} \quad \Rightarrow \quad pV = N k_B T   (2.5.9)
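As a quick symbolic check (an illustrative sketch of my own, not part of the notes), differentiating the entropy of Eq. (2.5.7) with sympy reproduces exactly these two relations:

import sympy as sp

N, kB, V, E, h, m = sp.symbols('N k_B V E h m', positive=True)
S = N*kB*(sp.Rational(3, 2) + sp.log(V/h**3 * (4*sp.pi*m*E/(3*N))**sp.Rational(3, 2)))  # Eq. (2.5.7)

print(sp.simplify(sp.diff(S, E)))   # 3*N*k_B/(2*E)  ->  1/T = 3 N kB/(2E),  i.e. E = (3/2) N kB T
print(sp.simplify(sp.diff(S, V)))   # N*k_B/V        ->  p/T = N kB/V,       i.e. p V = N kB T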
So far, so good!
But there is a problem. S, as given by Eq. (2.5.7) above, is not extensive. If we take E → 2E, V → 2V , and N → 2N ,
we do not get S → 2S. It is the ln V term in Eq. (2.5.7) that spoils the desired extensivity. When we double E, V ,
and N , the argument of the logarithm doubles, whereas we would need it to stay constant if S is to double.
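One can see this failure numerically (an illustrative sketch, not part of the notes): evaluating Eq. (2.5.7) in arbitrary units where m = h = k_B = 1, S(2E, 2V, 2N) exceeds 2S(E, V, N) by exactly 2N k_B ln 2, coming from the ln V term.

import math

def S_257(E, V, N, m=1.0, h=1.0, kB=1.0):
    """Entropy of Eq. (2.5.7), in units where m = h = kB = 1."""
    return N*kB*(1.5 + math.log(V/h**3 * (4*math.pi*m*E/(3*N))**1.5))

E, V, N = 1.0, 1.0, 1000
print(S_257(2*E, 2*V, 2*N) - 2*S_257(E, V, N))   # = 2*N*ln(2) ~ 1386.3, not 0
print(2*N*math.log(2))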
We can compare Eq. (2.5.7) to the entropy for the ideal gas that we found in Notes 1-3, where we used the experimental
pV = N k_B T and E = (3/2) N k_B T together with the Gibbs-Duhem relation to get,

S(E, V, N) = S_0 \frac{N}{N_0} + k_B N \ln\!\left[ \frac{V}{V_0} \left( \frac{E}{E_0} \right)^{3/2} \left( \frac{N}{N_0} \right)^{-5/2} \right]   (2.5.10)
where N0 , S0 , V0 and E0 are all constants. The result above in Eq. (2.5.10) is properly extensive. Compared to
Eq. (2.5.7), the entropy of Eq. (2.5.10) has an extra factor of N −1 inside the logarithm, which makes the argument
of the logarithm stay constant as E, V , and N are all doubled, as is needed if S is to double and so be extensive.
Note: The Gibbs-Duhem relation was derived assuming the entropy S was extensive. Hence it should not be surprising
that the entropy of Eq. (2.5.10), derived using the Gibbs-Duhem relation, is indeed extensive.
What is the physical reason that the entropy of Eq. (2.5.7), as obtained in the microcanonical ensemble, fails to be
extensive?
Note, the entropy of Eq. (2.5.7) is not properly additive over subsystems, as is the entropy of Eq. (2.5.10). That is,
the entropy of Eq. (2.5.7) does not obey the desired condition λS(E, V, N) = S(λE, λV, λN) that it must. Rather Eq. (2.5.7) obeys the relation λS(E, V, N) = S(λE, V, λN).
Extensivity of the Ideal Gas

Entropy of an ideal gas:
● Derived from Microcanonical Ensemble: Not extensive
● Derived from classical thermodynamics (check any textbook) assuming ideal gas law: Extensive
● Derived from Microcanonical Ensemble + Correction (using Stirling's Approx.): Extensive!!!
● So, by the postulates of thermodynamics, this MUST be correct!
● But why?
Extensivity of the Ideal Gas

Identical particles are (apparently) indistinguishable
● Out of the total configurations, you can get configurations with these values [counting shown on slide]
● The indistinguishability of identical particles has its origins in quantum mechanics
● We will circle back to this topic in subsequent classes
● It turns out, even this result is wrong, and we will need to modify it to account for quantum degeneracy
● This doesn't matter at high temperatures.
● So, for now, let us just go with this correction, and divide all phase space counts by N!
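To see what the division by N! accomplishes, here is an illustrative Python sketch (my own, assuming only the N! correction just described): subtracting k_B ln N! from Eq. (2.5.7) effectively replaces V by V/N inside the logarithm, and the corrected entropy is extensive up to terms of order ln N.

import math

def S_corrected(E, V, N, m=1.0, h=1.0, kB=1.0):
    """Eq. (2.5.7) minus kB*ln(N!): the Gibbs-corrected entropy, in units m = h = kB = 1."""
    S_uncorrected = N*kB*(1.5 + math.log(V/h**3 * (4*math.pi*m*E/(3*N))**1.5))
    return S_uncorrected - kB*math.lgamma(N + 1)      # lgamma(N+1) = ln(N!)

E, V, N = 1.0, 1.0, 1000
print(S_corrected(2*E, 2*V, 2*N) - 2*S_corrected(E, V, N))
# ~ 0.5*ln(pi*N) ~ 4: only O(ln N), negligible compared to S itself (~ 10^3 here),
# in contrast to the 2*N*ln(2) mismatch of the uncorrected entropy.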
Equipartition Theorem
● There is one more topic involving the microcanonical ensemble: the Generalized Equipartition Theorem
● Although not essential towards understanding statistical mechanics, it does help solve problems
● Often required for aptitude tests (UGC-NET, GATE, JEST, GRE etc.)
● Not in our syllabus, but I suggest studying it anyway
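For reference (my addition, stating the standard textbook result rather than anything derived above), the generalized equipartition theorem says that for any pair of phase space coordinates x_i, x_j (each an individual q or p),

\left\langle x_i \frac{\partial H}{\partial x_j} \right\rangle = \delta_{ij}\, k_B T,

so each quadratic degree of freedom in H contributes ½ k_B T to the average energy; for example ⟨p_i²/2m⟩ = ½ k_B T, recovering E = (3/2) N k_B T for the ideal gas of Eq. (2.5.8).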