Statistical Physics II - Lecture Note
Department of Physics
After completion of this course, students should be able to:
Understand the ways of incorporating the interaction term while studying the dynamics of
interacting particles.
UNIT-ONE
REVIEW OF THERMODYNAMICS
(P + an²/V²)(V − nb) = nRT is the van der Waals equation, where a and b are the van der Waals constants.
The zeroth law of thermodynamics states that: If two systems are separately in equilibrium
with a third, then they must also be in equilibrium with each other. The temperature of a
system in equilibrium is constant throughout the system.
First Law of Thermodynamics
The first law of thermodynamics represents an adaptation of the law of conservation of
energy to thermodynamics, where energy can be stored in internal degrees of freedom.
The first law of thermodynamics states that energy is conserved. The change in the internal
energy E of a system is equal to the sum of the heat added to the system and the work done on
the system:
dE = dQ + dW
Second law of thermodynamics
The second law of thermodynamics tells us that life is not free. According to the first law
we can change heat into work, apparently without limits. The second law, however, puts
restrictions on this exchange. There are two versions of the second law, due to Kelvin and
Clausius.
The second law can be formulated in many equivalent ways. The two statements generally
considered to be the clearest are those due to Clausius and to Kelvin.
Clausius statement: There exists no thermodynamic transformation whose sole effect is
to transfer heat from a colder reservoir to a warmer reservoir.
Kelvin statement: There exists no thermodynamic transformation whose sole effect is to
extract heat from a reservoir and to convert that heat entirely into work.
Paraphrasing, the Clausius statement expresses the common experience that heat
naturally flows downhill from hot to cold, whereas the Kelvin statement says that no heat
engine can be perfectly efficient. Both statements merely express facts of common
experience in thermodynamic language.
It is relatively easy to demonstrate that these alternative statements are, in fact, equivalent
to each other. Although these postulates may appear somewhat mundane, their
consequences are quite profound; most notably, the second law provides the basis for the
thermodynamic concept of entropy.
There exists a state function of the extensive parameters of any thermodynamic system,
called entropy S, with the following properties:
1. The values assumed by the extensive variables are those which maximize S consistent
with the external constraints; and
2. The entropy of a composite system is the sum of the entropies of its constituent
subsystems.
Third Law of Thermodynamics:
The third law of thermodynamics states that: The change in entropy that results from any
isothermal reversible transformation of a condensed system approaches zero as the
temperature approaches zero:
lim_{T→0} ΔS = 0
1.3 Thermodynamic potentials
The thermodynamic potentials include the Helmholtz free energy (F), the enthalpy (H) and the
Gibbs free energy (G), where H = E + PV, F = E − TS, and G = E − TS + PV = F + PV = H − TS.
dE = TdS − pdV = d(TS) − SdT − pdV
d(E − TS) = −SdT − pdV, where F = E − TS
dF = −SdT − pdV, F = F(T, V): the Helmholtz free energy
Exercise: starting from dE = TdS − pdV, show that dG = −SdT + VdP.
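One way to carry out this exercise (a worked sketch added here; it is the standard manipulation, not the only possible route) is to add and subtract the exact differentials d(TS) and d(PV):
dG = d(E − TS + PV) = dE − TdS − SdT + PdV + VdP
   = (TdS − pdV) − TdS − SdT + pdV + VdP
   = −SdT + VdP, with G = G(T, P).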
1.4 Gibbs-Duhem and Maxwell's relations
Maxwell’s relations
dE = TdS − pdV ---------------------------------------(1)
E = E(S,V)
dE = (∂E/∂S)_V dS + (∂E/∂V)_S dV -------------------------------(2)
T = (∂E/∂S)_V, P = −(∂E/∂V)_S
Since the mixed second derivatives are equal, ∂²E/∂V∂S = ∂²E/∂S∂V, we have (∂T/∂V)_S = −(∂P/∂S)_V.
The four Maxwell relations below are easily derived (verify!) for simple compressible
systems:
from E = E(S,V): (∂T/∂V)_S = −(∂P/∂S)_V
from F = F(T,V): (∂S/∂V)_T = (∂P/∂T)_V
from H = H(S,P): (∂T/∂P)_S = (∂V/∂S)_P
from G = G(T,P): (∂S/∂P)_T = −(∂V/∂T)_P
1) Isolated system: no interaction (no exchange of energy or matter with the environment); the energy is constant.
2) Closed system: no exchange of particles (matter); N, mass and charge are constant.
3) Open system: the system can exchange energy and matter with the environment; N varies (is not constant).
S = S(E, V, N₁, N₂, …, N_m)
dS = (∂S/∂E)_{V,N} dE + (∂S/∂V)_{E,N} dV + Σ_i (∂S/∂N_i)_{E,V} dN_i
With (∂S/∂E)_{V,N} = 1/T, (∂S/∂V)_{E,N} = P/T and (∂S/∂N_i)_{E,V} = −μ_i/T,
dS = (1/T) dE + (P/T) dV − (1/T) Σ_i μ_i dN_i
T dS = dE + P dV − Σ_i μ_i dN_i, i.e. dE = T dS − P dV + Σ_i μ_i dN_i
dF = −S dT − P dV + Σ_i μ_i dN_i
dE = d(TS) − S dT − d(PV) + V dP + Σ_i μ_i dN_i, so dG = −S dT + V dP + Σ_i μ_i dN_i
Therefore the chemical potential may be obtained from dE, dF or dG:
μ_i = (∂E/∂N_i)_{S,V} = (∂F/∂N_i)_{T,V} = (∂G/∂N_i)_{T,P}
G = G(T, P, N); since T and P are intensive while G is extensive, G(T, P, N) = N g(T, P), so
g(T, P) = G/N = μ
More generally, all the extensive variables scale together: S → λS, V → λV and N → λN imply E → λE.
Suppose E = E(S, V, N); then E(λS, λV, λN) = λE. Assume λ = 1 + γ with γ ≪ 1; expanding to first order in γ,
E(λS, λV, λN) = E + γ[S(∂E/∂S) + V(∂E/∂V) + Σ_i N_i(∂E/∂N_i)] = E(1 + γ)
E = S(∂E/∂S) + V(∂E/∂V) + Σ_i N_i(∂E/∂N_i) = TS − PV + Σ_i μ_i N_i
dE = T dS + S dT − P dV − V dP + Σ_i μ_i dN_i + Σ_i N_i dμ_i
Comparing with dE = T dS − P dV + Σ_i μ_i dN_i gives the Gibbs-Duhem relation
S dT − V dP + Σ_i N_i dμ_i = 0
The experimentally important response functions are obtained from second-order derivatives of
the thermodynamic potentials. Consider the entropy S expressed as a function of T, V, and N:
dS = (∂S/∂T)_{V,N} dT + (∂S/∂V)_{T,N} dV + (∂S/∂N)_{T,V} dN
Dividing by dT, multiplying by T, and assuming dN = 0 throughout, we have
C_p = C_V + T (∂S/∂V)_T (∂V/∂T)_p
Appealing to a Maxwell relation derived from F(T, V, N), namely (∂S/∂V)_T = (∂p/∂T)_V, we then get:
C_p − C_V = T (∂p/∂T)_V (∂V/∂T)_p
The isothermal and adiabatic compressibilities are defined as
κ_T = −(1/V)(∂V/∂p)_T   (isothermal compressibility)
κ_S = −(1/V)(∂V/∂p)_S   (adiabatic compressibility)
In molar terms the heat-capacity difference can be written
c_p − c_V = v T α_p²/κ_T, with α_p = (1/V)(∂V/∂T)_p,
Where, as always, v = V/ν is the molar volume.
This relation generalizes to any conjugate force-displacement pair (−p, V) → (y, X):
C_y − C_X = T (∂y/∂T)_X (∂X/∂T)_y
A similar relationship can be derived between the compressibilities κ_T and κ_S. We then clearly
must start with the volume, writing
dV = (∂V/∂p)_{S,N} dp + (∂V/∂S)_{p,N} dS + (∂V/∂N)_{S,p} dN
We get,
κ_T − κ_S = v T α_p²/c_p
Comparing the above two relations, we obtain
c_p/c_V = κ_T/κ_S
For a magnetic system the analogous response functions are written in terms of m = M/ν, the magnetization per mole of substance, with
isothermal susceptibility: χ_T = (∂m/∂H)_T
adiabatic susceptibility: χ_S = (∂m/∂H)_S
Remark: The previous discussion has assumed an isotropic magnetic system where M and H are
collinear, hence H · M = HM.
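As a quick check of these relations (a worked example added here, using only the ideal gas law), consider ν moles of ideal gas with pV = νRT. Then (∂p/∂T)_V = νR/V and (∂V/∂T)_p = νR/p, so
C_p − C_V = T (νR/V)(νR/p) = νR, i.e. c_p − c_V = R,
and with α_p = 1/T and κ_T = 1/p one verifies c_p − c_V = vTα_p²/κ_T = pv/T = R.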
In this case, the enthalpy and Gibbs free energy are H = E − HM and G = E − TS − HM.
Consider next an isolated composite system made up of two subsystems (1) and (2) that can exchange energy, volume and particles. The total extensive quantities are
E = E₁ + E₂
V = V₁ + V₂
N = N₁ + N₂
If the system is isolated, these extensive quantities are conserved, such that
dE₂ = −dE₁, dV₂ = −dV₁, dN₂ = −dN₁
Thermal equilibrium between the two subsystems requires that dS = dS₁ + dS₂ be stationary with respect to
first-order variations of each variable independently, such that
1/T₁ = 1/T₂, p₁/T₁ = p₂/T₂, μ₁/T₁ = μ₂/T₂.
The maximum entropy principle can also be applied to systems that are in thermal, mechanical,
or chemical contact with an environment that constrains one or more of its intensive parameters.
For example, we often seek to determine the equilibrium state for a system with fixed
temperature and pressure instead of fixed energy and volume. Under these conditions it is useful
to consider a small subsystem with energy and volume in thermal and mechanical contact
with a much larger reservoir. If the initial configuration is not in equilibrium there will be
exchanges of energy and volume between the subsystem and the reservoir. We assume that the
reservoir is sufficiently large that small transfers of energy or volume do not change the
temperature T₀ or pressure p₀ of the reservoir. The net change in entropy for the combined
system can be expressed as
dS_total = dS_s + dS_res ≥ 0
Where dS_s is the entropy change of the subsystem and dS_res = −dQ/T₀ is the entropy change of the reservoir, dQ being the heat transferred from the reservoir to the subsystem.
The increase in the entropy of the subsystem is at least as large as the decrease of the entropy of
the reservoir that occurs when heat is transferred from the reservoir to the subsystem. The change
in the internal energy of the subsystem can be expressed as
dE_s = dQ + dW = dQ − p₀ dV_s + dW*
where it is convenient to divide the work into two contributions, dW = −p₀ dV_s + dW*, where dW*
is non-mechanical work performed upon the subsystem in some form other than mechanical work
against the constant external pressure p₀. The requirement that the total entropy increases
can now be expressed as
T₀ dS_total = T₀ dS_s − dQ = T₀ dS_s − dE_s − p₀ dV_s + dW* ≥ 0
In terms of the availability
A = E_s − T₀ S_s + p₀ V_s,   dA = dE_s − T₀ dS_s + p₀ dV_s
this condition reads dA ≤ dW*; in the absence of non-mechanical work, dA ≤ 0, so the availability can only decrease and is a minimum at equilibrium.
Although this availability function strongly resembles the Gibbs free energy for the
subsystem, it is important to recognize that we do not require the temperature and pressure of the
subsystem to match those of the environment; hence, T₀ and p₀ appear as fixed parameters in
A. At equilibrium, the first derivatives of the availability with respect to any of the system
parameters must vanish. Furthermore, if the equilibrium is to be stable, the second derivatives
must all be positive. Applying these observations to temperature and pressure, we require T = T₀ and p = p₀ at equilibrium.
Chapter Two
P_r = exp(−βE_r)/Σ_r exp(−βE_r) --------------------------------------------(2.1)
The mean energy is written as
E̅ = Σ_r P_r E_r = Σ_r E_r exp(−βE_r)/Σ_r exp(−βE_r) ------------------------------------------(2.2)
where the sum is taken over all states of the system, irrespective of their energy.
Note that
Σ_r E_r exp(−βE_r) = −∂/∂β [Σ_r exp(−βE_r)] = −∂Z/∂β ------------(2.3)
Where,
Z = Σ_r exp(−βE_r) --------------------------------------------------------(2.4)
It follows that
E̅ = −(1/Z) ∂Z/∂β = −∂ ln Z/∂β ------------------------------------------------(2.5)
The quantity Z, which is defined as the sum of the Boltzmann factor over all states, irrespective
of their energy, is called the partition function.
It is clear that all important macroscopic quantities associated with a system can be expressed in
terms of its partition function Z. Let us investigate how the partition function is related to
thermodynamic quantities. The partition function is the basic quantity in statistical physics.
Z = Σ_r exp(−βE_r), β = 1/kT ----------------------------------------------------------------(2.6)
Recall that Z is a function of both β and x (where x is the single external parameter). Hence, Z =
Z(β, x), and we can write
d ln Z = (∂ ln Z/∂x) dx + (∂ ln Z/∂β) dβ ----------------------------------------(2.7)
Consider a quasi-static change by which x and β change so slowly that the system stays close to
equilibrium, and, thus, remains distributed according to the Boltzmann distribution. The mean work done by the system when x changes by dx is
dW = (1/β)(∂ ln Z/∂x) dx -------------------------------------------(2.8)
Then, using E̅ = −∂ ln Z/∂β, we get
d ln Z = β dW − E̅ dβ = β dW − d(βE̅) + β dE̅ ---------------------------------------(2.9)
d(ln Z + βE̅) = β(dW + dE̅) = β dQ --------------------------------------------------------(2.10)
Comparing with the thermodynamic relation dS = dQ/T gives
S = k(ln Z + βE̅) --------------------------------------------------------(2.11)
This expression enables us to calculate the entropy of a system from its partition function.
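As a small numerical illustration of these formulas (added here; the two-level system and the parameter values are assumptions chosen for the example, not part of the notes), the following Python code evaluates Z, E̅ = −∂lnZ/∂β, and S = k(lnZ + βE̅) for a particle with two states of energy 0 and ε:

import numpy as np

k = 1.380649e-23        # Boltzmann constant (J/K)
eps = 1.0e-21           # assumed level spacing (J)
T = 300.0               # assumed temperature (K)
beta = 1.0 / (k * T)

# Partition function of the two-level system: Z = sum over states of exp(-beta*E_r)
Z = 1.0 + np.exp(-beta * eps)

# Mean energy from E = -d(lnZ)/d(beta), worked out analytically for this Z
E_mean = eps * np.exp(-beta * eps) / Z

# Entropy from S = k (lnZ + beta*E_mean), eq. (2.11)
S = k * (np.log(Z) + beta * E_mean)

print(f"Z = {Z:.4f}, mean energy = {E_mean:.3e} J, entropy = {S:.3e} J/K")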
Suppose that we are dealing with a system A⁽⁰⁾ consisting of two systems A and A′ which only
interact weakly with one another. Let each state of A be denoted by an index r and have a
corresponding energy E_r. Likewise, let each state of A′ be denoted by an index s and have a
corresponding energy E′_s. A state of the combined system is then denoted by two indices r
and s. Since A and A′ only interact weakly their energies are additive, and the energy of state rs
is
E⁽⁰⁾_rs = E_r + E′_s ------------------------------------------------------(2.12)
The partition function of the combined system is
Z⁽⁰⁾ = Σ_{r,s} exp[−β(E_r + E′_s)] = Σ_r exp(−βE_r) Σ_s exp(−βE′_s) ----------------------(2.13)
Hence, Z⁽⁰⁾ = Z Z′ --------------------------------------------------------(2.14)
Yields ln Z⁽⁰⁾ = ln Z + ln Z′ ----------------------------------------------------(2.15)
Where Z and Z′ are the partition functions of A and A′, respectively. It follows that the mean
energies of A⁽⁰⁾, A, and A′ are related by
E̅⁽⁰⁾ = E̅ + E̅′ -----------------------------------------------(2.16)
and the entropies by
S⁽⁰⁾ = S + S′ -------------------------------------------(2.17)
Hence, the partition function tells us that the extensive thermodynamic functions of two weakly
interacting systems are simply additive.
Consider now a monatomic ideal gas of N identical molecules of mass m enclosed in a container of volume V. The energy of the gas is
E = Σ_{i=1}^{N} p_i²/2m ---------------------------------------------(2.18)
Let us treat the problem classically. In this approach, we divide up phase-space into cells of
equal volume h₀^f. Here, f is the number of degrees of freedom, and h₀ is a small constant with
dimensions of angular momentum which parameterizes the precision to which the positions and
momenta of molecules are determined. Each cell in phase-space corresponds to a different state.
The partition function is the sum of the Boltzmann factor exp(−βE_r) over all possible states,
where E_r is the energy of state r. Classically, we can approximate the summation over cells in
phase-space as an integration over all phase-space. Thus,
Z = Σ_r exp(−βE_r) ≃ ∫⋯∫ exp[−βE(q₁,…,q_f,p₁,…,p_f)] dq₁⋯dq_f dp₁⋯dp_f / h₀^f -----------------------(2.19)
Z = ∫⋯∫ exp[−β Σ_{i=1}^{N} p_i²/2m] d³r₁⋯d³r_N d³p₁⋯d³p_N / h₀^{3N} -----------(2.20)
Note that the integral over the coordinates of a given molecule simply yields the volume of the
container, V, since the energy E is independent of the locations of the molecules in an ideal gas.
There are N such integrals, so we obtain the factor V^N in the above expression. The partition
function Z of the gas is made up of the product of N identical factors: i.e.,
Z = ζ^N --------------------------------------------------(2.21)
Where
ζ = (V/h₀³) ∫ exp(−βp²/2m) d³p -----------------------------------------------(2.22)
This equation is the partition function for a single molecule. Of course, this result is obvious,
since we have already shown that the partition function for a system made up of a number of
weakly interacting subsystems is just the product of the partition functions of the subsystems.
The integral in Eq. (2.22) is easily evaluated:
∫_{−∞}^{∞} exp(−βp_x²/2m) dp_x = (2πm/β)^{1/2} ------------(2.23)
so that
ζ = V (2πm/h₀²β)^{3/2} ---------------------------------(2.24)
Thus,
ln Z = N[ln V − (3/2) ln β + (3/2) ln(2πm/h₀²)] ------------------------------------------------(2.25)
And,
p̅ = (1/β) ∂ ln Z/∂V = N/(βV) --------------------------------(2.26)
p̅V = NkT -----------------------------------------------(2.27)
p̅V = νRT ---------------------------------------------------(2.28)
Where, N = νN_A and R = N_A k, then the mean energy of the gas is obtained as:
E̅ = −∂ ln Z/∂β = (3/2)(N/β) = (3/2)NkT = (3/2)νRT ------------------------------------------(2.29)
Note that the internal energy is a function of temperature alone, with no dependence on volume.
The molar heat capacity at constant volume of the gas is given by
c_V = (1/ν)(∂E̅/∂T)_V ----------------------------------------------(2.30)
c_V = (3/2) R ---------------------------------------------------(2.31)
Now let us use the partition function to calculate a new result. The entropy of the gas can be
calculated quite simply from the expression
S = k(ln Z + βE̅) -----------------------------------------------------(2.32)
Thus,
S = νR[ln V − (3/2) ln β + (3/2) ln(2πm/h₀²)] + (3/2)νR -------------------------------(2.33)
Or S = νR[ln V + (3/2) ln T + σ] ----------------------------------(2.34)
Where, σ = (3/2) ln(2πmk/h₀²) + 3/2 ------------------------------------------(2.35)
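A short numerical sketch of these results (added here; helium at room temperature, one mole at roughly one atmosphere, and h₀ = h are assumptions chosen purely for illustration):

import numpy as np

k  = 1.380649e-23      # Boltzmann constant (J/K)
NA = 6.02214076e23     # Avogadro's number (1/mol)
h  = 6.62607015e-34    # Planck constant, used here as the cell size h0 (assumption)
R  = NA * k

m  = 4.0026e-3 / NA    # mass of a helium atom (kg), assumed example gas
T  = 300.0             # temperature (K)
V  = 0.0248            # volume of 1 mol of ideal gas near 1 atm, 300 K (m^3)
nu = 1.0               # number of moles

E_mean = 1.5 * nu * R * T                                   # eq. (2.29)
sigma  = 1.5 * np.log(2 * np.pi * m * k / h**2) + 1.5       # eq. (2.35)
S      = nu * R * (np.log(V) + 1.5 * np.log(T) + sigma)     # eq. (2.34)
# note: eq. (2.34) does not yet include the N! (indistinguishability) correction
# discussed in Sec. 2.3 and Chapter Three, so S is not extensive at this stage.

print(f"mean energy = {E_mean:.1f} J, entropy (eq. 2.34) = {S:.1f} J/K")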
2.3) Gibbs paradox
Thermodynamic quantities can be divided into two groups, extensive and intensive. Extensive
quantities increase by a factor α when the size of the system under consideration is increased by
the same factor. Intensive quantities stay the same. Energy and volume are typical extensive
quantities. Pressure and temperature are typical intensive quantities. Entropy is very definitely an
extensive quantity. Suppose that we have a system of volume V containing ν moles of ideal gas
at temperature T. Doubling the size of the system is like joining two identical systems together to
form a new system of volume 2 V containing 2 ν moles of gas at temperature T. Let
S = νR[ln V + (3/2) ln T + σ] ------------------------------(2.36)
Denote the entropy of the original system, and let
S₂ = 2νR[ln 2V + (3/2) ln T + σ] ---------------------------------------(2.37)
Denote the entropy of the double-sized system. Clearly, if entropy is an extensive quantity
(which it is!) then we should have
S₂ = 2S ---------------------------------------------(2.38)
But, in fact, we find that
S₂ = 2S + 2νR ln 2 ---------------------------------------------(2.39)
So, the entropy of the double-sized system is more than double the entropy of the original
system. Where does this extra entropy come from?
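To see where the figure 2νR ln 2 comes from (a short added check), subtract twice eq. (2.36) from eq. (2.37): S₂ − 2S = 2νR[ln 2V − ln V] = 2νR ln 2. This is exactly the amount removed by the −ln N! (indistinguishability) correction introduced later in the notes, since by Stirling's formula ln(2N)! − 2 ln N! ≈ 2N ln 2.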
The classical description is valid provided the typical de Broglie wavelength of a molecule is much smaller than the mean intermolecular separation,
(V/N)^{1/3} ≫ λ ≈ h/p̅ ------------------------------------------------(2.40)
The mean momentum p̅ can be estimated from the known mean energy of a molecule in the
gas at temperature T:
p̅²/2m ≈ (3/2) kT, so p̅ ≈ (3mkT)^{1/2} ------------------------------------(2.41)
Hence the condition becomes
(V/N)^{1/3} ≫ h/(3mkT)^{1/2} ----------------------------(2.42)
The internal energy of a monatomic ideal gas containing N particles is (3/2)NkT. This means
that each particle possesses, on average, (3/2)kT units of energy. Monatomic particles have only
three translational degrees of freedom, corresponding to their motion in three dimensions. They
possess no internal rotational or vibrational degrees of freedom. Thus, the mean energy per
degree of freedom in a monatomic ideal gas is (1/2)kT. In fact, this is a special case of a rather
general result.
Suppose that the energy of a system is determined by some f generalized coordinates q_k and
corresponding f generalized momenta p_k, so that
E = E(q₁, …, q_f, p₁, …, p_f) ---------------------------------------------(2.43)
Suppose further that:
a) The total energy splits additively into the form
E = ε_i(p_i) + E′(q₁, …, p_f) -----------------------------------------------(2.44)
Where ε_i involves only the one variable p_i, and the remaining part E′ does not depend on p_i.
b) The function ε_i is quadratic in p_i, so that
ε_i(p_i) = b p_i² -----------------------------------------------------(2.45)
Where b is a constant.
The most common situation in which the above assumptions are valid is where p_i is a momentum.
This is because the kinetic energy is usually a quadratic function of each momentum
component, whereas the potential energy does not involve the momenta at all. However, if a
coordinate q_i were to satisfy assumptions a) and b) then the theorem we are about to establish
would hold just as well. In the classical approximation, the mean value of ε_i is expressed in
terms of integrals over all phase-space:
ε̅_i = ∫ ε_i e^{−βE(q₁,…,p_f)} dq₁⋯dp_f / ∫ e^{−βE(q₁,…,p_f)} dq₁⋯dp_f -----------------------------------(2.46)
Condition a) gives
ε̅_i = [∫ ε_i e^{−βε_i} dp_i ∫ e^{−βE′} dq₁⋯dp_f] / [∫ e^{−βε_i} dp_i ∫ e^{−βE′} dq₁⋯dp_f] ---------------------(2.47)
where use has been made of the multiplicative property of the exponential function, and where
the last integrals in both the numerator and denominator extend over all variables q_k and p_k
except p_i. These integrals are equal and, thus, cancel. Hence
ε̅_i = ∫ ε_i e^{−βε_i} dp_i / ∫ e^{−βε_i} dp_i --------------------------(2.48)
This expression can be simplified further since
∫ ε_i e^{−βε_i} dp_i = −∂/∂β ∫ e^{−βε_i} dp_i -----------------(2.49)
so that
ε̅_i = −∂/∂β ln[∫ e^{−βε_i} dp_i] -------------------------(2.50)
According to condition b)
∫_{−∞}^{∞} e^{−βε_i} dp_i = ∫_{−∞}^{∞} e^{−βb p_i²} dp_i = β^{−1/2} ∫_{−∞}^{∞} e^{−b y²} dy ------(2.51)
Where y = β^{1/2} p_i. Thus,
ln ∫ e^{−βε_i} dp_i = −(1/2) ln β + ln ∫ e^{−b y²} dy ----------------------(2.52)
Note that the integral on the right-hand side does not depend on β at all. Then it follows from
eq. (2.50) that
ε̅_i = −∂/∂β[−(1/2) ln β] = 1/(2β) -----------------------------------(2.53)
It gives
ε̅_i = (1/2) kT -------------------------------------(2.54)
This is the famous equipartition theorem of classical physics. It states that the mean value of
every independent quadratic term in the energy is equal to (1/2) k T .
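A quick numerical check of this statement (a sketch added here; the values of b and β are arbitrary choices made only for the example), evaluating ⟨bp²⟩ = ∫ bp² e^{−βbp²}dp / ∫ e^{−βbp²}dp by direct quadrature and comparing with 1/(2β):

import numpy as np

b = 2.7        # arbitrary positive constant in eps(p) = b*p**2 (assumption)
beta = 1.3     # arbitrary inverse temperature (assumption)

# grid wide enough that the Gaussian weight is negligible at the endpoints
p = np.linspace(-50.0, 50.0, 200001)
w = np.exp(-beta * b * p**2)

mean_eps = np.trapz(b * p**2 * w, p) / np.trapz(w, p)
print(mean_eps, 1.0 / (2.0 * beta))   # the two numbers agree to high accuracy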
Suppose a gas consists of N molecules of mass m. The kinetic energy of one molecule is
K = p²/2m = (1/2m)(p_x² + p_y² + p_z²), and each quadratic term contributes (1/2)kT, so
⟨p_x²/2m⟩ = ⟨p_y²/2m⟩ = ⟨p_z²/2m⟩ = (1/2) kT
K̅ = ⟨p²/2m⟩ = (3/2) kT ----------------------------------------------------------(2.55)
Consider next the vibrations of a solid. In terms of normal-mode coordinates the energy of each normal mode γ has the harmonic-oscillator form
ε_γ = (1/2)(q̇_γ² + ω_γ² q_γ²) ---------------------------------------------(2.56)
Where ω_γ is the (angular) oscillation frequency of the normal mode. It is clear that in normal
mode coordinates, the linearized lattice vibrations are equivalent to 3 N independent harmonic
oscillators (of course, each oscillator corresponds to a different normal mode).
If the lattice vibrations behave classically then, according to the equipartition theorem, each
normal mode of oscillation has an associated mean energy k T in equilibrium at temperature T
[(1/2) k T resides in the kinetic energy of the oscillation, and (1/2) k T resides in the potential
energy]. Thus, the mean internal energy per mole of the solid is
E̅ = 3N_A kT = 3RT ------------------------------------------------(2.57)
It follows that the molar heat capacity at constant volume is
c_V = (∂E̅/∂T)_V = 3R --------------------------------------------------(2.58)
We can use the quantum mechanical result for a single oscillator to write the mean energy of the
solid in the form
E̅ = 3N_A[(1/2)ħω + ħω/(e^{βħω} − 1)] ------------------------------------(2.59)
The molar heat capacity is defined as:
c_V = (∂E̅/∂T)_V = −kβ² (∂E̅/∂β)_V ----------------------------(2.60)
Then it gives
c_V = 3R (βħω)² e^{βħω}/(e^{βħω} − 1)² -----------------------------(2.61)
Which reduces to:
c_V = 3R (θ_E/T)² e^{θ_E/T}/(e^{θ_E/T} − 1)² -----------------------------------(2.62)
Where θ_E = ħω/k is the Einstein temperature of the solid.
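A short numerical sketch of eq. (2.62) (added here; the Einstein temperature value is an assumption for the example, roughly appropriate for diamond):

import numpy as np

R = 8.314           # gas constant (J / mol K)
theta_E = 1320.0    # assumed Einstein temperature (K)

def c_v_einstein(T):
    # molar heat capacity of the Einstein solid, eq. (2.62)
    x = theta_E / np.asarray(T, dtype=float)
    return 3.0 * R * x**2 * np.exp(x) / (np.exp(x) - 1.0)**2

for T in (100.0, 300.0, 1000.0, 5000.0):
    print(T, c_v_einstein(T))   # approaches 3R ~ 24.9 J/(mol K) at high temperature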
ε = −μ⃗ · H⃗ -------------------------------------(2.63)
Here μ⃗ is the magnetic moment of the atom. It is proportional to the total angular momentum ħJ⃗
of the atom and is conventionally written in the form:
μ⃗ = g μ₀ J⃗ -----------------------------(2.64)
Where μ₀ is the standard unit of magnetic moment (usually the Bohr magneton μ₀ = eħ/2mc, m is the
electron mass) and where g is a number of the order of unity, the so-called g factor of the atom.
In a field H⃗ pointing along the z-axis the possible energies of the atom are then
ε_m = −g μ₀ H m, with m = −J, −J+1, …, J ------------------------------(2.65)
The distribution function for momenta is given by
g(p) = ⟨(1/N) Σ_i δ(p_i − p)⟩ -------------------------------(2.66)
Note that g(p) = ⟨δ(p_i − p)⟩ is the same for every particle, independent of its label i. We
compute the average
⟨δ(p₁ − p)⟩ = ∫ d³p₁⋯d³p_N δ(p₁ − p) e^{−β Σ_j p_j²/2m} / ∫ d³p₁⋯d³p_N e^{−β Σ_j p_j²/2m}
Setting i = 1, all the integrals other than that over p₁ divide out between the numerator and the denominator, leaving
g(p) = (2πmkT)^{−3/2} exp(−p²/2mkT) -------------------(2.67)
The notes commonly refer to the velocity distribution f(v), which is related to g(p) by
f(v) d³v = g(p) d³p, with p = mv ---------------------------(2.68)
Hence
f(v) = (m/2πkT)^{3/2} exp(−mv²/2kT) ---------------- (2.69)
This is known as the Maxwell velocity distribution. Note that the distributions are normalized,
viz.
∫ g(p) d³p = ∫ f(v) d³v = 1 -------------------- (2.70)
If we are only interested in averaging functions of v = |v| which are isotropic, then we can define
the Maxwell speed distribution, f̃(v), as
f̃(v) = 4πv² f(v) = 4π (m/2πkT)^{3/2} v² exp(−mv²/2kT) ------ (2.71)
Note that f̃(v) is normalized according to
∫₀^∞ f̃(v) dv = 1 ------------------ (2.72)
The mean speed, mean-square speed and root-mean-square speed follow from this distribution:
v̅ = ∫₀^∞ v f̃(v) dv = (8kT/πm)^{1/2} ------------ (2.73)
⟨v²⟩ = ∫₀^∞ v² f̃(v) dv = 3kT/m ----------------(2.74)
v_rms = ⟨v²⟩^{1/2} = (3kT/m)^{1/2} ----------------------- (2.75)
Figure 2.1: Maxwell distribution of speeds
The most probable speed v_max is obtained from the condition
d f̃/dv = 0 --------------(2.76)
which gives
(2/v − mv/kT) f̃(v) = 0 ---------- (2.77)
Hence v_max is obtained as
v_max = (2kT/m)^{1/2} ------------------------------------- (2.78)
All these various speeds are proportional to (kT/m)^{1/2}. Thus the molecular speed increases as the temperature increases and, at a given temperature, is smaller for heavier molecules.
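A numerical illustration of the three characteristic speeds (added here; nitrogen at 300 K is an assumed example):

import numpy as np

k = 1.380649e-23              # Boltzmann constant (J/K)
m = 28.0e-3 / 6.02214076e23   # mass of an N2 molecule (kg), assumed example gas
T = 300.0                     # temperature (K)

v_max  = np.sqrt(2.0 * k * T / m)             # most probable speed, eq. (2.78)
v_mean = np.sqrt(8.0 * k * T / (np.pi * m))   # mean speed, eq. (2.73)
v_rms  = np.sqrt(3.0 * k * T / m)             # root-mean-square speed, eq. (2.75)

print(f"v_max = {v_max:.0f} m/s, v_mean = {v_mean:.0f} m/s, v_rms = {v_rms:.0f} m/s")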
2.11 Number of molecules striking a surface
Consider that the container is a box in the form of a parallelepiped, the area of one end wall being
A. How many molecules per unit time strike this end wall? Suppose that there are n molecules
per unit volume in this gas. Since they move in random directions, assume that one third of them (n/3
molecules per unit volume) move along the z-axis. Half of these molecules, i.e., n/6 molecules per unit volume, have
velocity in the +z-direction so that they will strike the end wall under consideration. If the mean
speed of the molecules is v̅, these molecules cover in an infinitesimal
time dt a mean distance v̅ dt. Thus the number of molecules which strike the end wall of area A
in time dt is equal to the number of molecules having velocity v̅ in the +z-direction and
contained in the cylinder of volume A v̅ dt. It is given by:
dN = (n/6) A v̅ dt ------------------------ (2.79)
The total number of molecules which strike unit area of the wall per unit time (i.e., the total
molecular flux) is then estimated as:
Φ₀ ≈ (1/6) n v̅
-------------------------- (2.80)
---------------------------------------- (2.81)
Thus the above equation implies that :
---------------------------------- (2.82)
The volume of this cylinder is dA v_z dt, while the number of molecules per unit volume in
the velocity range d³v about v is f(v) d³v. Hence the number of molecules of this type which strike the area
dA of the wall in time dt is equal to [f(v) d³v][dA v_z dt], so that
Φ₀(v) d³v = f(v) v_z d³v ------------------------ (2.83)
Where Φ₀(v) d³v is the number of molecules, with velocity between v and v + dv, which strike a unit
area of the wall per unit time.
To obtain the total flux we have to sum over all possible velocities v subject to the restriction that the velocity
component v_z > 0, since molecules with v_z < 0 will not collide with the element of area:
Φ₀ = ∫_{v_z>0} f(v) v_z d³v ------------------------------------------- (2.84)
Carrying out the integration using the Maxwell distribution, the equation becomes
Φ₀ = (1/4) n v̅ ----------------------------(2.85)
2.12 Effusion
When a small hole or slit is made in the wall of the container, the equilibrium of the gas inside the
container is disturbed to a negligible extent. In this case the number of molecules which emerge
through the small hole is the same as the number of molecules which would strike the area occupied by
the hole if the latter were closed off. The process whereby molecules emerge through such a
small hole is called effusion. The effusion rate per unit area is therefore
Φ₀ = (1/4) n v̅ = p/(2πmkT)^{1/2} ------------ (2.86)
If two gases (or the same gas held at two different temperatures) are separated by a wall containing a small hole, then in the steady state the number of molecules which pass per second through the hole from left to right equals the
number of molecules which pass per second through the hole from right to left. This leads to the
simple equality:
n₁ v̅₁ = n₂ v̅₂, i.e. p₁/(T₁)^{1/2} = p₂/(T₂)^{1/2} ------------------------------------- (2.87)
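As a worked example of eq. (2.87) (added here with assumed numbers): if the two sides of the wall are held at T₁ = 300 K and T₂ = 1200 K, then in the effusion steady state p₂/p₁ = (T₂/T₁)^{1/2} = 2, so the hot side sustains twice the pressure of the cold side even though no net particle flux crosses the hole.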
We now consider, from a detailed kinetic point of view, how a gas exerts a pressure. The mean force exerted on
the wall of the container is due to collisions of molecules with the wall. When a molecule collides with
the wall, there is a change of its momentum.
Let us denote by G⁽⁺⁾ the mean molecular momentum crossing a surface dA per unit time from
left to right, and by G⁽⁻⁾ the mean molecular momentum crossing this surface dA per unit time
from right to left.
---------------------------- (2.88)
Where F =
------------ (2.89)
And
------------ (2.90)
Then
--------- (2.91)
Chapter Three
The basic postulate of statistical mechanics says that every microstate of an ensemble of isolated
systems occurs with equal probability. Thus, if the ensemble has W different microstates, the
probability that any particular member of the ensemble is in a particular microstate is
P_r = 1/W --------------------------(3.1)
since the sum of all the probabilities has to add to 1 and they all must be equal. Another
way to think about this equation is that the microcanonical partition function is just W.
The entropy of a microcanonical ensemble is calculated using the Boltzmann equation:
S = k ln W ------------------------------------- (3.2)
To use this equation, we first note that the number of microstates available to the system depends on the
energy, so W = W(E). This means that S depends on E. We can therefore write
1/T = (∂S/∂E)_V = k (∂ ln W/∂E)_V ------------------------------(3.3)
Once we have T, we can define other quantities that depend on it, like the Helmholtz free
energy:
F = E − TS = E − kT ln W ---------------------------------------- (3.4)
The canonical ensemble is used to describe a system in contact with a heat reservoir. The total
number of particles is constant.
Both the microcanonical and canonical ensembles deal with systems with a fixed number of
particles N (or, in general, fixed values of the numbers of particles of each type, N₁, N₂, . . . ).
However, the number of molecules of a given type in a chemical system at equilibrium is not
generally fixed. Due to chemical reactions, the number of molecules of each type fluctuates
with time.
A grand canonical ensemble is an ensemble of fixed T, V, and µ. Suppose that the energies of
the two subsystems are E₁ and E₂, and that the numbers of molecules in each subsystem are N₁
and N₂. Since the system is isolated, the total energy and number of molecules are conserved:
E = E₁ + E₂ and N = N₁ + N₂
Note: a state R of the whole gas can be specified by the set of occupation numbers
{n₁, n₂, n₃, …} ----------------------------------(3.5)
where n_r is the number of particles in the single-particle state r. The system of gas in this state can be described by the following wave function
Ψ_R = Ψ_{n₁,n₂,n₃,…}(r⃗₁, r⃗₂, …) ----------------------------(3.6)
If the particles are distinguishable and any number of particles are allowed to be in the same state
S, then the particles are said to obey Maxwell-Boltzmann statistics. This is called the classical
description of a system and it doesn't impose symmetry requirements on the wave function when
two particles are interchanged.
However, the quantum mechanical description, where identical particles are considered to be
indistinguishable imposes symmetry requirements on the wave function during interchanging
two particles, i-e. Interchanging two identical particles doesn’t lead the whole sytem to a new
state.
If each particle in the system has integral spin, then the wave function of the system must be
symmetric under the interchange of two particles.
Particles satisfying this symmetry condition are said to obey Bose- Einstein statistics and they
are called bosons.
On the other hand, if each particle in the system has half-integral spin, then the wave function of
the system must be antisymmetric under the interchange of two particles.
Particles satisfying this anti symmetric condition are said to obey Fermi- Dirac Statistics & they
are called fermions.
If two particles, Say i & j of the same state S are interchanged, the wave function of the system
should remain the same. At the same time, if the particles have half-integral spin condition (4)
must be satisfied. This leads to the conclusion that for a system containing two particles in the
37 | P a g e
same state the wave function should vanish.
Thus in the Fermi-Dirac case there exists no state of the whole gas for which two or more
particles are in the same single-particle state. This is called the Pauli exclusion principle.
The energy of the whole system, which is supposed to be in state R, is then given by
E_R = n₁ε₁ + n₂ε₂ + … = Σ_r n_r ε_r -----------------------------------------(3.9)
Z = Σ_R exp(−βE_R) = Σ_R exp(−β Σ_r n_r ε_r) ---------------------------------------(3.10)
The mean number of particles in the single-particle state s is
n̅_s = Σ_R n_s exp(−β Σ_r n_r ε_r) / Σ_R exp(−β Σ_r n_r ε_r)
    = −(1/β) ∂ ln Z/∂ε_s ---------------------------------(3.11)
Similarly, the mean square occupation number is
n̅²_s = Σ_R n_s² exp(−β Σ_r n_r ε_r) / Σ_R exp(−β Σ_r n_r ε_r) = (1/β²)(1/Z) ∂²Z/∂ε_s²
so that the dispersion of n_s is
(Δn_s)²‾ = n̅²_s − (n̅_s)² = (1/β²) ∂² ln Z/∂ε_s² = −(1/β) ∂n̅_s/∂ε_s --------------------------------------(3.12)
If the particles under consideration obey Maxwell-Boltzmann statistics, the partition function of the
system can be obtained from eq. (3.10) after summing over all possible values of n_r (= 0, 1, 2, 3, …)
for each r, provided that the total number of particles in the system is constant, Σ_r n_r = N.
In the case of Bose-Einstein statistics, the summation in equation (3.10) should also be taken over all
possible values of n_r for each r, and the total number of particles must be fixed. However, since
the particles are considered to be indistinguishable, specifying the number of particles in each
state is sufficient to specify a state of the gas. A special case of Bose-Einstein statistics where the total number of particles in the
system is not constant, i.e., where the restriction Σ_r n_r = N is lifted, is called photon statistics.
In Fermi-Dirac statistics there are only two possible values of n_r (n_r = 0, 1) for each r, since more than one particle in a particular state is
not allowed, and the total number of particles must be fixed.
Consider a system of gas containing N particles. Let the lowest energy level of a single
particle be denoted by ε₁. For a system of particles that obeys BE statistics, where there is no
restriction on the number of particles in any state, the lowest energy level of the whole
system can be obtained by placing all the particles in the lowest single-particle level, and hence for
the lowest energy level of the whole system we can write E₀ = Nε₁.
In the case of FD statistics, however, where we are not allowed to have more than one particle
in any state, the lowest energy level of the whole system can only be obtained by placing one
particle in each consecutive state of increasing energy, starting from the lowest energy level
ε₁. For a system maintained at absolute temperature T, the mean number of particles in a
particular state s is
n̅_s = Σ_{n₁,n₂,…} n_s exp[−β(n₁ε₁ + n₂ε₂ + …)] / Σ_{n₁,n₂,…} exp[−β(n₁ε₁ + n₂ε₂ + …)] ---------------------(3.14)
Photon statistics: The summation in eq. (3.14) should be over all possible values of n_s for each s.
Since there is no restriction on the total number of particles, the sums over the occupation numbers of all states other than s are the same in the numerator and
denominator of eq. (3.14) and cancel. Hence we can write
n̅_s = Σ_{n_s} n_s e^{−βn_sε_s} / Σ_{n_s} e^{−βn_sε_s} ---------------------------- (3.15)
However, the above expression can be rewritten
n̅_s = −(1/β) ∂/∂ε_s ln[Σ_{n_s} e^{−βn_sε_s}] ---------------------------- (3.16)
Now, the sum on the right-hand side of the above equation is an infinite geometric series, which
can easily be evaluated. In fact,
Σ_{n_s=0}^{∞} e^{−βn_sε_s} = 1/(1 − e^{−βε_s}) ----------- (3.17)
Thus, eqs. (3.16) and (3.17) give
n̅_s = (1/β) ∂/∂ε_s ln(1 − e^{−βε_s}) = e^{−βε_s}/(1 − e^{−βε_s}), i.e.
n̅_s = 1/(e^{βε_s} − 1) ----------------------(3.18)
Which is called the Planck distribution.
Fermi-Dirac (FD) statistics: In this case n_s has only two values, n_s = 0, 1 for each s, and there
is a restriction on the total number of particles, which is supposed to be fixed, Σ_r n_r = N.
Let us introduce the function
Z_s(N) = Σ^{(s)} exp[−β(n₁ε₁ + n₂ε₂ + …)] -------------------------------- (3.19)
Which is defined as the partition function for N particles distributed over all quantum states,
excluding state s, according to Fermi-Dirac statistics. By explicitly performing the sum over n_s =
0 and 1, the expression (3.14) reduces to
n̅_s = [0 + e^{−βε_s} Z_s(N − 1)] / [Z_s(N) + e^{−βε_s} Z_s(N − 1)] --------------------------- (3.20)
Which yields
n̅_s = 1/{[Z_s(N)/Z_s(N − 1)] e^{βε_s} + 1} ---------------------- (3.21)
In order to make further progress, we must somehow relate Z_s(N − 1) to Z_s(N).
Suppose that ∆N ≪ N. It follows that ln Z_s(N − ∆N) can be Taylor expanded to give
ln Z_s(N − ∆N) ≃ ln Z_s(N) − (∂ ln Z_s/∂N) ∆N = ln Z_s(N) − α_s ∆N ----------------- (3.22)
Where
α_s ≡ ∂ ln Z_s/∂N ------------------------------------- (3.23)
We Taylor expand the slowly varying function ln Z_s(N), rather than the rapidly varying function
Z_s(N), because the radius of convergence of the latter Taylor series is too small for the series to
be of any practical use. Equation (3.22) can be rearranged to give
Z_s(N − ∆N) = Z_s(N) e^{−α_s ∆N} ------------------------------(3.24)
Since Z_s(N) is a sum over very many different quantum states, we would not expect the
logarithm of this function to be sensitive to which particular state s is excluded from
consideration. Let us, therefore, introduce the approximation that α_s is independent of s, so that we can
write
α_s ≃ α ------------------------ (3.25)
For all s. It follows that the derivative (3.23) can be expressed approximately in terms of the
derivative of the full partition function Z(N) (in which the N particles are distributed over all
quantum states). In fact,
α ≃ ∂ ln Z/∂N --------------------- (3.26)
Making use of Eq. (3.24), with ∆N = 1, plus the approximation (3.25), the expression (3.21)
reduces to
n̅_s = 1/(e^{α + βε_s} + 1) -------------------------- (3.27)
This is called the Fermi-Dirac distribution. The parameter α is determined by the constraint that
Σ_r n̅_r = N: i.e.,
Σ_r 1/(e^{α + βε_r} + 1) = N ---------------------- (3.28)
Note that n̅_s → 0 if ε_s becomes sufficiently large. On the other hand, since the denominator in
Eq. (3.27) can never become less than unity, no matter how small ε_s becomes, it follows that n̅_s ≤
1. Thus,
0 ≤ n̅_s ≤ 1 ----------------------------------- (3.29)
in accordance with the Pauli exclusion principle.
Bose-Einstein statistics: The summation in eq. (3.14) now ranges over all possible values of n_s
(n_s = 0, 1, 2, 3, …) for each s, and the total number of particles is restricted to be constant.
Performing the sum over n_s in eq. (3.14), we can write
n̅_s = Σ_{n_s} n_s e^{−βn_sε_s} Z_s(N − n_s) / Σ_{n_s} e^{−βn_sε_s} Z_s(N − n_s) --------------------- (3.30)
Where Z_s(N) is the partition function for N particles distributed over all quantum states,
excluding state s, according to Bose-Einstein statistics [cf., Eq. (3.19)]. Using Eq. (3.24), and the
approximation (3.25), the above equation reduces to
n̅_s = Σ_{n_s} n_s e^{−n_s(α + βε_s)} / Σ_{n_s} e^{−n_s(α + βε_s)} ------------------------------------------- (3.31)
Note that this expression is identical to (3.15), except that βε_s is replaced by α + βε_s. Hence, an
analogous calculation to that outlined in the previous subsection yields
n̅_s = 1/(e^{α + βε_s} − 1) -------------------------------------- (3.32)
This is called the Bose-Einstein distribution.
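A compact numerical sketch comparing the three mean occupation numbers (added here; the values of α and β are arbitrary choices made only for this illustration):

import numpy as np

alpha, beta = 1.0, 1.0          # arbitrary example values of the parameters

def n_mb(eps):  # Maxwell-Boltzmann (classical-limit) form, exp(-alpha - beta*eps)
    return np.exp(-alpha - beta * eps)

def n_fd(eps):  # Fermi-Dirac distribution, eq. (3.27)
    return 1.0 / (np.exp(alpha + beta * eps) + 1.0)

def n_be(eps):  # Bose-Einstein distribution, eq. (3.32)
    return 1.0 / (np.exp(alpha + beta * eps) - 1.0)

for eps in (0.1, 1.0, 5.0):
    print(eps, n_mb(eps), n_fd(eps), n_be(eps))
# for alpha + beta*eps >> 1 all three occupation numbers agree (classical limit)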
Maxwell-Boltzmann statistics: the partition function is
Z = Σ_R exp[−β(n₁ε₁ + n₂ε₂ + …)] ----------------------------- (3.33)
Where the sum is over all distinct states R of the gas, and the particles are treated as
distinguishable. For given values of n₁, n₂, … there are N!/(n₁! n₂! ⋯) possible ways of
distributing the particles among the single-particle states, and each of these arrangements yields a distinct state for the system. Hence the partition function of the system
becomes
Z = Σ [N!/(n₁! n₂! ⋯)] exp[−β(n₁ε₁ + n₂ε₂ + …)] -------------------------- (3.34)
Where the sum is over all values of n_r = 0, 1, 2, ⋯ for each r, subject to the constraint that
Σ_r n_r = N ------------------------------------------ (3.35)
Now, Eq. (3.34) can be written
Z = Σ [N!/(n₁! n₂! ⋯)] (e^{−βε₁})^{n₁} (e^{−βε₂})^{n₂} ⋯ -------------------------- (3.36)
By the result for expanding a polynomial (the multinomial theorem) this can be written as
Z = (e^{−βε₁} + e^{−βε₂} + ⋯)^N ----------------------- (3.37)
In other words
ln Z = N ln(Σ_r e^{−βε_r}) --------------------- (3.38)
Note that the argument of the logarithm is simply the partition function for a single particle.
Equations (3.11) and (3.38) can be combined to give
For photon statistics the partition function is easily evaluated, since there is no restriction on the total number of particles:
Z = Σ_{n₁,n₂,…} exp[−β(n₁ε₁ + n₂ε₂ + …)]
  = Σ_{n₁} e^{−βn₁ε₁} Σ_{n₂} e^{−βn₂ε₂} ⋯
  = (1 − e^{−βε₁})^{−1} (1 − e^{−βε₂})^{−1} ⋯
ln Z = −Σ_r ln(1 − e^{−βε_r}) -----------------------------------(3.40)
The mean number of particles in a particular state r with the corresponding energy ε_r
is given by
n̅_r = −(1/β) ∂ ln Z/∂ε_r = e^{−βε_r}/(1 − e^{−βε_r}) = 1/(e^{βε_r} − 1) ------------------------- (3.41)
Let us now consider Bose-Einstein statistics. The particles in the system are assumed to be
massive, so the total number of particles N is a fixed number. Consider the expression (3.14). For
the case of massive bosons, the numbers n₁, n₂, ⋯ assume all values n_r = 0, 1, 2, ⋯ for each
r, subject to the constraint that Σ_r n_r = N. Performing explicitly the sum over n_s, this expression
reduces to
n̅_s = Σ_{n_s} n_s e^{−βn_sε_s} Z_s(N − n_s) / Σ_{n_s} e^{−βn_sε_s} Z_s(N − n_s) ------------(3.42)
Where Z_s(N) is the partition function for N particles distributed over all quantum states,
excluding state s, according to Bose-Einstein statistics.
Let Z = Z(N); for a system of N′ particles the partition function is Z(N′). This function is a rapidly
increasing function of N′, and hence multiplying it by the rapidly decreasing function e^{−αN′}
produces a function Z(N′)e^{−αN′} which usually has a very sharp maximum. The proper choice
of the positive parameter α will make the sharp maximum occur at N′ = N. Thus we can write
𝒵 ≡ Σ_{N′} Z(N′) e^{−αN′} ≈ Z(N) e^{−αN} ∆*N′ --------------------(3.44)
where ∆*N′ is the width of the maximum. Taking the logarithm of eq. (3.44), we then obtain the excellent approximation
ln Z(N) ≈ αN + ln 𝒵 ----------------------- (3.45)
Where we have neglected the term ln(∆*N′), which is utterly negligible compared to the other terms,
which are of order N. Here the sum in eq. (3.44) is easily performed, since it extends over all
possible numbers n_r without any restriction. The quantity 𝒵 is called the grand partition function.
Now let us evaluate 𝒵:
𝒵 = Σ_{N′} e^{−αN′} Σ^{(Σn_r = N′)} e^{−β(n₁ε₁ + n₂ε₂ + …)} = Σ_{n₁,n₂,…} e^{−Σ_r (α + βε_r) n_r} --------------- (3.46)
Where the last sum is over all possible numbers n_r without restriction. By regrouping terms one
obtains
𝒵 = Π_r Σ_{n_r=0}^{∞} e^{−(α + βε_r) n_r} = Π_r [1 − e^{−(α + βε_r)}]^{−1} ------------ (3.47)
ln 𝒵 = −Σ_r ln[1 − e^{−(α + βε_r)}] ------------------ (3.48)
Then eq. (3.45) yields
ln Z(N) = αN − Σ_r ln[1 − e^{−(α + βε_r)}] ---------------- (3.49)
Our argument assumed that the parameter α is to be chosen so that the function Z(N′)e^{−αN′} has
its maximum for N′ = N, i.e., so that
∂/∂N′ [ln Z(N′) − αN′]_{N′=N} = 0 ----------------------- (3.50)
Since this condition involves the particular value N′ = N, α itself must be a function of N. By
virtue of eq. (3.45), the condition eq. (3.50) is equivalent to
∂ ln 𝒵/∂α = −N -------------------- (3.51)
Using the expression of eq. (3.48), the relation eq. (3.51) which determines α is then
Σ_r 1/(e^{α + βε_r} − 1) = N ------------------- (3.52)
By applying n̅_s = −(1/β) ∂ ln Z/∂ε_s to eq. (3.49), one obtains
n̅_s = 1/(e^{α + βε_s} − 1) − (1/β)[N + ∂ ln 𝒵/∂α] ∂α/∂ε_s ---------------- (3.53)
The last term takes into account the fact that α is a function of ε_s through the relation eq. (3.52).
But this term vanishes by virtue of eq. (3.51). Hence we simply have
n̅_s = 1/(e^{α + βε_s} − 1) --------------------------- (3.54)
together with the obvious requirement Σ_s n̅_s = N needed to satisfy the conservation of particles. The chemical potential
of the gas is related to α by
α = −βμ = −μ/kT ---------------- (3.55)
Hence
n̅_s = 1/(e^{β(ε_s − μ)} − 1) ----------------- (3.56)
Thus the relative dispersion does not become arbitrarily small even when n̅_s ≫ 1.
------------------------ (3.57)
Since the summation is over the two values, then it becomes
----------------------- (3.58)
Then
-------------------------- (3.60)
By using other equation, we can obtain
--------------------------- (3.61)
This expression simplified as
(Δn_s)²‾/n̅_s² = 1/n̅_s + 1 ------------------------------- (3.62)
Collecting the results for the two quantum statistics, the mean occupation numbers can be written
n̅_s = 1/(e^{α + βε_s} ± 1) ---------------------------------(3.63)
Where the +/− sign in the denominator represents FD/BE statistics and α is supposed to be obtained
from the relation Σ_s n̅_s = Σ_s 1/(e^{α + βε_s} ± 1) = N ---------------------------------------------(3.64)
Consider a system of gas where the concentration is sufficiently low. The condition Σ_s n̅_s = N can then be
satisfied only when each term in the sum over all states is sufficiently small, n̅_r ≪ 1, or
e^{α + βε_r} ≫ 1 for all r.
Similarly, for a system consisting of a fixed number N of particles, if the temperature is sufficiently
large (β sufficiently small), the parameter α must be large enough to prevent the sum from exceeding
N; then again e^{α + βε_r} ≫ 1, or n̅_r ≪ 1. In either case eq. (3.63) reduces to
n̅_r = e^{−α} e^{−βε_r} ---------------------------------------------------------(3.65)
Σ_r n̅_r = e^{−α} Σ_r e^{−βε_r} = N
e^{−α} = N/Σ_r e^{−βε_r} ---------------------------------------------------------(3.66)
n̅_r = N e^{−βε_r}/Σ_r e^{−βε_r} ---------------------------------------------(3.67)
In the classical limit of sufficiently low concentration or sufficiently high temperature the
quantum statistics, FD and BE statistics, reduce to MB statistics.
ln Z = αN ± Σ_r ln(1 ± e^{−α−βε_r}) ---------------------------------------------(3.68)
For x ≪ 1,
ln(1 + x) ≈ x − x²/2 + …
so that, keeping only the first-order term,
ln Z ≈ αN + Σ_r e^{−α−βε_r} = αN + e^{−α} Σ_r e^{−βε_r}
     = αN + N ------------------------------------------------------------------------(3.69)
Using eq. (3.66),
−α = ln N − ln Σ_r e^{−βε_r}
α = −ln N + ln Σ_r e^{−βε_r}
ln Z = N ln ζ − N ln N + N ≈ ln ζ^N − ln N!, where ζ = Σ_r e^{−βε_r}
Z = ζ^N/N!
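The extra 1/N! is exactly the correction needed to resolve the Gibbs paradox of Sec. 2.3 (a short added remark): with Z = ζ^N/N! and ζ ∝ V, the entropy becomes S = k(lnZ + βE̅) = Nk[ln(ζ/N) + 1] + kβE̅, which depends on V and N only through the ratio V/N and is therefore properly extensive; doubling V and N now simply doubles S.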
3.12 Evaluation of the Partition Function
Consider a system of monatomic ideal gas. Suppose the system is in the classical limit, i.e., it has
sufficiently low concentration or it is at sufficiently high temperature. The logarithm of the partition function of
the system is then given by eq. (3.69):
ln Z = N(−ln N + ln ζ + 1) ----------------------------------------------------------(3.71)
Where ζ = Σ_r e^{−βε_r} and the sum is over all possible states of a single particle. To evaluate this
sum we need to know the energy of a single particle corresponding to the possible states.
A single non-interacting particle of mass m, with position vector r⃗ and momentum p⃗,
confined in a container of volume V can be described by a wave function ψ(r⃗, t) of the plane
wave form
ψ(r⃗, t) = A exp[i(k⃗·r⃗ − ωt)] ------------------------------------ (3.72)
Which propagates in the direction of the wave vector k⃗. The energy of the particle is then
given by
ε = ħ²k²/2m = (ħ²/2m)(k_x² + k_y² + k_z²) ----------------------(3.74)
The wave function in eq. (3.72) is assumed to satisfy periodic boundary conditions, provided that the
dimensions of the container are large compared with the de Broglie wavelength of the particle:
ψ(x + L_x, y, z) = ψ(x, y, z), ψ(x, y + L_y, z) = ψ(x, y, z), ψ(x, y, z + L_z) = ψ(x, y, z) ------------------------------------------------ (3.75)
Where L_x, L_y and L_z are the dimensions of the container. These conditions quantize the allowed wave vectors:
k_x = 2πn_x/L_x, k_y = 2πn_y/L_y, k_z = 2πn_z/L_z, with n_x, n_y, n_z = 0, ±1, ±2, … --------------------------------------------------(3.76)
so that the single-particle energies are
ε = (ħ²/2m)(2π)²(n_x²/L_x² + n_y²/L_y² + n_z²/L_z²) --------- (3.77)
ζ = Σ_r e^{−βε_r} = Σ_{n_x,n_y,n_z} exp[−(βħ²/2m)(k_x² + k_y² + k_z²)] ---------------------------------------(3.78)
Successive terms in the sum in eq. (3.78) correspond to a very small change in k_x, ∆k_x = 2π/L_x; hence a
small range between k_x and k_x + dk_x contains ∆n_x = (L_x/2π) dk_x terms which have nearly the same
magnitude of the exponent.
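Carrying this sum-to-integral conversion to its end (an added completion of the step set up above, using the standard Gaussian integral):
ζ ≈ (L_x L_y L_z/(2π)³) ∫ e^{−βħ²k²/2m} d³k = V (m/2πβħ²)^{3/2} = V (2πmkT/h²)^{3/2},
which, inserted in eq. (3.71), reproduces the ideal-gas results of Chapter Two with the cell size h₀ replaced by Planck's constant h.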
Chapter Four
Consider a solid consisting of N atoms. Let the mass and position vector of the i-th atom be
denoted by m_i and r⃗_i, and let the equilibrium position of this atom be r⃗_i^{(0)}. Since the
atoms can vibrate about their equilibrium positions, the kinetic energy of vibration of the system
is given by
K = (1/2) Σ_i m_i (dr⃗_i/dt)² = (1/2) Σ_i Σ_α m_i ẋ_{iα}²
Where α stands for the x, y and z components of the vector and x_{iα} = r_{iα} − r_{iα}^{(0)} is the displacement of the
atom from its equilibrium position.
Expanding the potential energy V of the atoms about the equilibrium configuration,
V = V₀ + Σ_{iα} (∂V/∂x_{iα})₀ x_{iα} + (1/2) Σ_{iα,jγ} (∂²V/∂x_{iα}∂x_{jγ})₀ x_{iα} x_{jγ} + …
V₀ is the potential energy in the equilibrium configuration of the atoms. At equilibrium V must be a minimum, so the first derivatives vanish and
V ≈ V₀ + (1/2) Σ_{iα,jγ} A_{iα,jγ} x_{iα} x_{jγ}
Where, A_{iα,jγ} = (∂²V/∂x_{iα}∂x_{jγ})₀.
The total energy associated with vibrations of the atoms in the solid is then
H = V₀ + (1/2) Σ_{iα} m_i ẋ_{iα}² + (1/2) Σ_{iα,jγ} A_{iα,jγ} x_{iα} x_{jγ}
This equation can be simplified by eliminating the cross-product terms in the potential energy,
and this can be done by changing the 3N old coordinates x_{iα} to a new set of 3N generalized (normal)
coordinates q_γ by a linear transformation
x_{iα} = Σ_{γ=1}^{3N} B_{iα,γ} q_γ
Thus the total energy can be rewritten as
H = V₀ + Σ_{γ=1}^{3N} (1/2)(q̇_γ² + ω_γ² q_γ²)
This expression is similar to that of the total energy of 3N independent 1D harmonic oscillators.
The total energy of a 1D harmonic oscillator is given by
H_γ = (1/2)(q̇_γ² + ω_γ² q_γ²)
The corresponding energies for the possible quantum states of this oscillator are then
ε_{n_γ} = (n_γ + 1/2) ħω_γ, n_γ = 0, 1, 2, …
The total energy of a system of 3N independent harmonic oscillators, where the state of the
whole system is specified by {n₁, n₂, …, n_{3N}}, becomes
E_{n₁…n_{3N}} = V₀ + Σ_γ (n_γ + 1/2) ħω_γ
  = V₀ + (1/2) Σ_γ ħω_γ + Σ_γ n_γ ħω_γ
  = Nη + Σ_γ n_γ ħω_γ, where Nη ≡ V₀ + (1/2) Σ_γ ħω_γ
Z = Σ_{n₁,…,n_{3N}} e^{−βE_{n₁…n_{3N}}} = Σ e^{−β(Nη + Σ_γ n_γ ħω_γ)}
  = e^{−βNη} Σ_{n₁} e^{−βn₁ħω₁} Σ_{n₂} e^{−βn₂ħω₂} ⋯ Σ_{n_{3N}} e^{−βn_{3N}ħω_{3N}}
  = e^{−βNη} (1 − e^{−βħω₁})^{−1} (1 − e^{−βħω₂})^{−1} ⋯ (1 − e^{−βħω_{3N}})^{−1}
ln Z = −βNη + ln(1 − e^{−βħω₁})^{−1} + ⋯ + ln(1 − e^{−βħω_{3N}})^{−1}
     = −βNη − Σ_{γ=1}^{3N} ln(1 − e^{−βħω_γ})
The angular frequencies ω_γ, which are also called normal-mode frequencies, are closely spaced. Hence,
if σ(ω)dω represents the number of normal modes with angular frequency in the range between ω
and ω + dω, the expression for ln Z becomes,
ln Z = −βNη − ∫₀^∞ ln(1 − e^{−βħω}) σ(ω) dω
E̅ = −∂ ln Z/∂β = Nη + ∫₀^∞ [ħω/(e^{βħω} − 1)] σ(ω) dω
C_V = (∂E̅/∂T)_V = (∂E̅/∂β)(∂β/∂T)
∂β/∂T = ∂(1/kT)/∂T = −1/(kT²) = −kβ², so ∂/∂T = −kβ² ∂/∂β
C_V = −kβ² ∂E̅/∂β = k ∫₀^∞ (βħω)² e^{βħω}/(e^{βħω} − 1)² σ(ω) dω
In the high-temperature (classical) limit βħω ≪ 1 the integrand reduces to σ(ω), so
C_V = k ∫₀^∞ σ(ω) dω = 3Nk = 3νN_A k = 3νR
The approximation of treating solid as continuous elastic medium is valid when λ>> a, where λ
is the wavelength of the vibration of the elastic medium while “a” is mean interatomic separation
in the solid.
Consider a solid which can be treated as a continuous elastic medium of volume V. Let u⃗(r⃗, t)
denote the displacement of a point in this medium from its equilibrium position. This
displacement is expected to satisfy a wave equation which describes the propagation through the
medium of sound waves travelling with some effective velocity c_s.
From the previous chapter we have, for a given wave vector k⃗, that the number ∆n_x of possible
integers n_x for which k_x lies in the range between k_x and k_x + dk_x is
∆n_x = (L_x/2π) dk_x
The number of states ρ(k⃗) d³k for which k⃗ lies in the range between k⃗ and k⃗ + dk⃗ is then
ρ(k⃗) d³k = ∆n_x ∆n_y ∆n_z = [V/(2π)³] d³k
The number of states for which the magnitude of k⃗, |k⃗| = k, lies in the range between k and k +
dk is obtained by summing the above relation over the volume in k⃗-space of a spherical shell of inner radius
k and outer radius k + dk,
ρ_k dk = [V/(2π)³] 4πk² dk
       = (V/2π²) k² dk
Since a sound wave of wave vector k⃗ corresponds to an angular frequency ω = c_s k, and there are three
independent polarizations for each k⃗, the number of possible wave modes with frequency between ω and ω + dω is then,
σ_c(ω) dω = 3 (V/2π²) (ω²/c_s²)(dω/c_s)
          = (3V/2π²c_s³) ω² dω
According to the Debye approximation, for low frequencies ω the density of modes σ_c(ω) for the
continuous elastic medium is nearly the same as the mode density σ(ω) for the actual solid.
The Debye approximation, σ(ω) ≈ σ_c(ω), is taken to hold not only for low frequencies but for all 3N
lowest frequency modes of the elastic continuum, and it is defined by
σ_D(ω) = { (3V/2π²c_s³) ω²  for ω < ω_D;  0  for ω > ω_D }
Where ω_D is called the Debye frequency and it is chosen so that σ_D(ω) yields the correct total
number of 3N normal modes:
∫₀^∞ σ_D(ω) dω = ∫₀^{ω_D} (3V/2π²c_s³) ω² dω = (V/2π²c_s³) ω_D³ = 3N
ω_D = c_s (6π² N/V)^{1/3}
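Combining the heat-capacity integral above with σ_D(ω) and substituting x = βħω gives, per mole, C_V = 9R (T/θ_D)³ ∫₀^{θ_D/T} x⁴eˣ/(eˣ − 1)² dx with θ_D = ħω_D/k. A numerical sketch of this result (added here; the Debye temperature value is an assumed example, close to that of copper):

import numpy as np

R = 8.314          # gas constant (J / mol K)
theta_D = 343.0    # assumed Debye temperature (K)

def c_v_debye(T, npts=2000):
    # molar Debye heat capacity via numerical integration
    xD = theta_D / float(T)
    x = np.linspace(1e-6, xD, npts)            # avoid the removable singularity at x = 0
    integrand = x**4 * np.exp(x) / (np.exp(x) - 1.0)**2
    return 9.0 * R * (T / theta_D)**3 * np.trapz(integrand, x)

for T in (10.0, 50.0, 150.0, 300.0, 1000.0):
    print(T, c_v_debye(T))    # tends to 3R at high T and falls off as T**3 at low T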
4.3 Calculation of the Partition Function for Low Densities
H = K + U
U = Σ_{i<j} u(R_{ij}), R_{ij} = |r⃗_i − r⃗_j|
U = Σ_i Σ_{j>i} u_{ij} = (1/2) Σ_i Σ_{j≠i} u_{ij}
A commonly used form of the intermolecular potential is the Lennard-Jones type
u(R) = u₀[(R₀/R)^{12} − 2(R₀/R)^{6}]
[Figure: sketch of the intermolecular potential u(R) versus R, with minimum value −u₀ at R = R₀.]
Z = (1/N! h^{3N}) ∫ e^{−βH(r⃗₁,…,r⃗_N, p⃗₁,…,p⃗_N)} d³r₁ ⋯ d³r_N d³p₁ ⋯ d³p_N
Z = (1/N! h^{3N}) ∫ e^{−β[Σ_i p_i²/2m + U(r⃗₁,…,r⃗_N)]} d³r₁ ⋯ d³r_N d³p₁ ⋯ d³p_N
  = (1/N! h^{3N}) (∫ e^{−βp²/2m} d³p)^N ∫ e^{−βU(r⃗₁,…,r⃗_N)} d³r₁ ⋯ d³r_N
  = (1/N!) (2πm/βh²)^{3N/2} Z_U
Where Z_U = ∫ e^{−βU(r⃗₁,…,r⃗_N)} d³r₁ ⋯ d³r_N
For a system of gas whose density is not too large one can calculate the approximate value of Z_U.
The mean potential energy of the gas is
U̅ = ∫ U e^{−βU} d³r₁ ⋯ d³r_N / ∫ e^{−βU} d³r₁ ⋯ d³r_N = −∂ ln Z_U/∂β
Integrating with respect to β, and noting that Z_U(β = 0) = V^N,
ln Z_U(β) = N ln V − ∫₀^β U̅(β′) dβ′
The number of distinct pairs of molecules is (1/2)N(N − 1) ≈ (1/2)N² for N ≫ 1, so
U̅ = (1/2) N² u̅
where u̅ is the mean potential energy of interaction between a given pair of molecules.
By approximating that the motion of any pair of molecules is not correlated appreciably with the
motion of the remaining molecules, we can consider a given pair of molecules as a small system in thermal
contact with a heat reservoir consisting of all the other molecules. Then
u̅ = ∫ u e^{−βu} d³R / ∫ e^{−βu} d³R = −∂/∂β ln ∫ e^{−βu(R)} d³R
∫ e^{−βu(R)} d³R = ∫ [1 + (e^{−βu(R)} − 1)] d³R = V + I(β) = V(1 + I/V)
Where I(β) = ∫ (e^{−βu(R)} − 1) d³R = ∫₀^∞ (e^{−βu(R)} − 1) 4πR² dR
u̅ = −∂/∂β ln[V(1 + I/V)] = −(∂I/∂β)/(V + I) ≈ −(1/V) ∂I/∂β
U̅ = (1/2) N² u̅ = −(N²/2V) ∂I/∂β
ln Z_U = N ln V − ∫₀^β U̅ dβ′ = N ln V + (N²/2V) I(β)
Z_U(β) = V^N e^{N²I/2V},  so  Z = (1/N!) (2πm/βh²)^{3N/2} V^N e^{N²I/2V}
The mean pressure of a system can be obtained from the partition function of the system as
p̅ = (1/β) ∂ ln Z/∂V
Since the first (kinetic) factor of Z is independent of V,
p̅ = (1/β) ∂/∂V [N ln V + (N²/2V) I(β)]
  = (1/β) [N/V − (N²/2V²) I]
β p̅ = n − (1/2) I n², where n = N/V is the number density
p̅/kT = n − (1/2) I n²
The deviation of a real gas from the ideal gas equation can be described by introducing correction
terms in the above equation. This was discussed by Kamerlingh Onnes, who introduced a virial
expansion to generalize the ideal gas law:
p̅/kT = n + B₂(T) n² + B₃(T) n³ + …
Comparing the two expressions for p̅/kT,
B₂ = −(1/2) I = −(1/2) ∫₀^∞ (e^{−βu(R)} − 1) 4πR² dR
   = 2π ∫₀^∞ (1 − e^{−βu(R)}) R² dR
To evaluate B₂ we use the simplified model potential
u(R) = { ∞ for R < R₀; −u₀ (R₀/R)^s for R > R₀ }, with s > 3
For R < R₀, u(R) = ∞, so 1 − e^{−βu} = 1. For R > R₀, if βu₀ ≪ 1, then e^{−βu} ≈ 1 − βu, so 1 − e^{−βu} ≈ βu = −βu₀(R₀/R)^s. Hence
B₂ = 2π ∫₀^{R₀} R² dR + 2πβ ∫_{R₀}^∞ u(R) R² dR
   = (2π/3) R₀³ − 2πβu₀ ∫_{R₀}^∞ (R₀/R)^s R² dR
   = (2π/3) R₀³ − 2πβu₀ R₀³/(s − 3)
B₂ = b′ − a′β
Where
b′ = (2π/3) R₀³ and a′ = 2πu₀R₀³/(s − 3)
Thus the equation of state becomes
p̅/kT = n + B₂ n²
     = n + (b′ − a′β) n²
     = n + b′n² − a′n²/kT
p̅ = nkT(1 + b′n) − a′n²
p̅ + a′n² = nkT(1 + b′n) ≈ nkT/(1 − b′n)
since b′n ≪ 1, so that 1 + b′n ≈ (1 − b′n)^{−1}
(p̅ + a′n²)(1 − b′n) = nkT
(p̅ + a′n²)(1/n − b′) = kT
With n = N/V = νN_A/V this gives
(p̅ + ν²a/V²)(V − νb) = νRT,
the van der Waals equation of state, where a = a′N_A² and b = b′N_A are called the van der Waals constants.
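A numerical sketch (added here; the potential parameters and temperatures are assumed example values) evaluating B₂(T) = 2π∫₀^∞(1 − e^{−βu(R)})R²dR for the model potential above and comparing with the approximation b′ − a′/kT:

import numpy as np

k = 1.380649e-23      # Boltzmann constant (J/K)
R0 = 3.0e-10          # assumed hard-core radius (m)
u0 = 1.0e-21          # assumed well depth (J)
s = 6                 # assumed exponent of the attractive tail

def u(R):
    return np.where(R < R0, np.inf, -u0 * (R0 / R)**s)

def B2_numeric(T, Rmax=50e-10, npts=200000):
    beta = 1.0 / (k * T)
    R = np.linspace(1e-12, Rmax, npts)
    integrand = (1.0 - np.exp(-beta * u(R))) * R**2
    return 2.0 * np.pi * np.trapz(integrand, R)

def B2_approx(T):
    b_prime = 2.0 * np.pi * R0**3 / 3.0
    a_prime = 2.0 * np.pi * u0 * R0**3 / (s - 3)
    return b_prime - a_prime / (k * T)

for T in (200.0, 400.0, 800.0):
    print(T, B2_numeric(T), B2_approx(T))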
Waals’ constants.
65 | P a g e