
Jimma University

College of Natural Sciences

Department of Physics

Notes on course Statistical Physics II (Phys 3092)

Academic Year: 2020GC / Semester: II

Instructor’s Name: Tesema Kebede Tufa (MSc)

After completion of this course students should be able to:

 identify simple applications of classical and quantum statistics,

 apply statistical approaches in studying different properties of a system,

 derive and apply the equipartition theorem,

 explain the applications of the laws of thermodynamics,

 employ Maxwell-Boltzmann, Bose-Einstein and Fermi-Dirac statistics in describing a given system,

 explain magnetic properties of substances at low temperature,

 discuss transport-related properties of substances using the kinetic theory of transport processes,

 understand how the interaction term is incorporated when studying the dynamics of interacting particles.
UNIT-ONE
REVIEW OF THERMODYNAMICS

 After completion of this unit the learner will be able to:


 Explain the laws of thermodynamics
 Obtain the thermodynamics relations
 Determine the expressions of the thermodynamic potential
 Describe the Maxwell’s thermodynamic relations
INTRODUCTION
 What is Thermodynamics?
 Thermodynamics is the study of relations among the state variables describing a
thermodynamic system, and of transformations of heat into work and vice versa.
 Thermodynamic systems contain large numbers of constituent particles, and are
described by a set of state variables which describe the system’s properties in an average
sense.
1.1 State variables and equation of state
 State variables are classified as being either extensive or intensive.
 Intensive variables are independent of the system size and, in equilibrium, have the same value
everywhere in the system. Examples: pressure p, temperature T, chemical potential µ, density, electric
field E, etc. Out of equilibrium, intensive variables may be inhomogeneous.
 Extensive variables, such as volume V , particle number N , total internal energy E,
magnetization M , etc., scale linearly with the system size.
 Extensive state variables correspond to quantities which can be determined, measured, or
prescribed directly.
 An equation of state is a thermodynamic equation relating the state variables which describe
the state of matter under a given set of physical conditions, such as pressure, volume and
temperature (P, V, T), or internal energy.
PV = nRT – ideal gas law

(P + an²/V²)(V – nb) = nRT – van der Waals equation, where a and b are the van der Waals constants.

1.2 Laws of thermodynamics

 Zeroth law of thermodynamics states that: If two systems are separately in equilibrium
with a third, then they must also be in equilibrium with each other. The temperature of a
system in equilibrium is constant throughout the system.
First Law of Thermodynamics
 The first law of thermodynamics represents an adaptation of the law of conservation of
energy to thermodynamics, where energy can be stored in internal degrees of freedom.
 The first law of thermodynamics states that energy is conserved. The change in internal
energy E of a system is equal to the sum of the amount of heat energy added to the
system & the amount of work done on the system.
dE =dQ + dW
Second law of thermodynamics
 The second law of thermodynamics tells us that life is not free. According to the first law
we can change heat into work, apparently without limits. The second law, however, puts
restrictions on this exchange. There are two versions of the second law, due to Kelvin and
Clausius.
The Second law of thermodynamics-The second law can be formulated in many
equivalent ways. The two statements generally considered to be the clearest are those
due to Clausius and to Kelvin.
 Clausius statement: There exists no thermodynamic transformation whose sole effect is
to transfer heat from a colder reservoir to a warmer reservoir.
 Kelvin statement: There exists no thermodynamic transformation whose sole effect is to
extract heat from a reservoir and to convert that heat entirely into work.
 Paraphrasing, the Clausius statement expresses the common experience that heat
naturally flows downhill from hot to cold, whereas the Kelvin statement says that no heat
engine can be perfectly efficient. Both statements merely express facts of common
experience in thermodynamic language.
 It is relatively easy to demonstrate that these alternative statements are, in fact, equivalent
to each other. Although these postulates may appear somewhat mundane, their
consequences are quite profound; most notably, the second law provides the basis for the
thermodynamic concept of entropy.

 There exists a state function of the extensive parameters of any thermodynamic system,
called entropy S, with the following properties:
1. The values assumed by the extensive variables are those which maximize S consistent
with the external constraints; and
2. The entropy of a composite system is the sum of the entropies of its constituent
subsystems.
 Third Law of Thermodynamics:
 The Third Law of Thermodynamics states that: The change in entropy that results from any
isothermal reversible transformation of a condensed system approaches zero as the
temperature approaches zero,

lim_{T→0} ΔS = 0
1.3 Thermodynamic potential

Thermodynamic potentials include the Helmholtz free energy (F), the enthalpy (H) and the Gibbs
free energy (G), where H = E + PV, F = E − TS, and G = E − TS + PV = F + PV.

Entropy is defined through dQ = TdS, i.e. dS = dQ/T (for a reversible process).

dE = TdS − dW, where dW = PdV. Then

dE = TdS − PdV – the fundamental relation of thermodynamics

 From dE= TdS– pdV , E =E(S,V)


 d(PV) =PdV +VdP  PdV = d(PV) – VdP
 dE= TdS– pdV
 dE= TdS– d(PV) + VdP
 dE +d(PV) =TdS +VdP
 d(E +PV) = TdS +VdP where, H= E +PV then
 dH = TdS +VdP  H= H(S,P)-Enthalpy
 dE= TdS– pdV
 Let d(TS) =TdS + SdT TdS = d(TS) – SdT,then

 dE =d(TS) – SdT – pdV
 d(E –TS) = – SdT – pdV, where F=E-TS
 dF = – SdT – pdV, F=F(T,V)- Helmholtz free energy
Exercise: starting from dE = TdS − PdV, show that dG = −SdT + VdP.
1.4 Gibbs-Duhem and Maxwell’s relations
Maxwell’s relations
 dE= TdS– pdV ---------------------------------------(1)
E =E(S,V)

 dE = (∂E/∂S)_V dS + (∂E/∂V)_S dV -------------------------------(2)

Comparing equation 1 & 2

 T = (∂E/∂S)_V , P = −(∂E/∂V)_S

Since the mixed second derivatives of E are equal, ∂²E/∂V∂S = ∂²E/∂S∂V, it follows that (∂T/∂V)_S = −(∂P/∂S)_V

 The four Maxwell relations below are easily derived (verify!) for simple compressible
systems.

E = E(S,V): (∂T/∂V)_S = −(∂P/∂S)_V
H = H(S,P): (∂T/∂P)_S = (∂V/∂S)_P
F = F(T,V): (∂S/∂V)_T = (∂P/∂T)_V
G = G(T,P): (∂S/∂P)_T = −(∂V/∂T)_P

Exercise: Derive all four Maxwell relations above.
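As a quick cross-check (not part of the original note), the short SymPy sketch below verifies the Maxwell relation (∂S/∂V)_T = (∂P/∂T)_V for a monatomic ideal gas, assuming the Helmholtz free energy F = −NkT[ln(V/N) + (3/2)lnT + c]; the constant c is an arbitrary illustrative symbol.

import sympy as sp

T, V, N, k, c = sp.symbols('T V N k c', positive=True)

# Helmholtz free energy of a monatomic ideal gas, up to an additive constant c
F = -N*k*T*(sp.log(V/N) + sp.Rational(3, 2)*sp.log(T) + c)

S = -sp.diff(F, T)            # S = -(dF/dT)_V
P = -sp.diff(F, V)            # P = -(dF/dV)_T  -> gives P = N k T / V

lhs = sp.diff(S, V)           # (dS/dV)_T
rhs = sp.diff(P, T)           # (dP/dT)_V
print(sp.simplify(lhs - rhs)) # prints 0, so the Maxwell relation holds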


Gibbs-Duhem Equations

 Consider a system with energy E and total particle number N composed of several molecular species N_i.

 Systems in nature are classified as:

1) Isolated system – no interaction (no exchange of energy or matter with the environment); the energy is constant.

2) Closed system – no exchange of particles (matter); N, mass and charge are constant.

3) Open system – the system can exchange energy and particles with the environment; N varies (is not constant).

E = E(S, V, N_1, N_2, …, N_m), with m types of molecules

S = S(E, V, N_1, N_2, …, N_m)

dS = (∂S/∂E)_{V,N} dE + (∂S/∂V)_{E,N} dV + ∑_i (∂S/∂N_i)_{E,V} dN_i

With the identifications

(∂S/∂E)_{V,N} = 1/T, (∂S/∂V)_{E,N} = P/T, (∂S/∂N_i)_{E,V} = −μ_i/T,

this becomes

dS = (1/T) dE + (P/T) dV − (1/T) ∑_i μ_i dN_i

TdS = dE + PdV − ∑_i μ_i dN_i

dE = TdS − PdV + ∑_i μ_i dN_i. Writing TdS = d(TS) − SdT gives dE = d(TS) − SdT − PdV + ∑_i μ_i dN_i, and therefore

dF = −SdT − PdV + ∑_i μ_i dN_i

Again, for the Gibbs free energy one derives as follows:

dE = d(TS) − SdT − d(PV) + VdP + ∑_i μ_i dN_i,

dG = −SdT + VdP + ∑_i μ_i dN_i. If S and V are constant,

dE = TdS − PdV + ∑_i μ_i dN_i reduces to dE = ∑_i μ_i dN_i, so μ_i = (∂E/∂N_i)_{S,V}

Similarly, dF = ∑_i μ_i dN_i at constant T and V, so μ_i = (∂F/∂N_i)_{T,V},

and dG = ∑_i μ_i dN_i at constant T and P, so μ_i = (∂G/∂N_i)_{T,P}

Therefore, μ_i = (∂E/∂N_i)_{S,V} = (∂F/∂N_i)_{T,V} = (∂G/∂N_i)_{T,P}

G = G(T, P, N) – for one molecular species

For a single species, G is extensive in N at fixed T and P, so G = N g(T, P), and

g(T, P) = G/N = μ

Consider a system whose S, V and N are scaled as

S → λS, V → λV and N → λN

Suppose E = E(S, V, N). Since E is extensive,

E(λS, λV, λN) = λE. Assume λ = 1 + ε, with ε ≪ 1, so that

λE = E + εE, i.e. E(λS, λV, λN) = E(1 + ε)

Using Taylor’s theorem, f(x + h) = f(x) + h f′(x) + (h²/2!) f″(x) + …,

E(λS, λV, λN) = E(S, V, N) + (∂E/∂S)_{V,N} εS + (∂E/∂V)_{S,N} εV + ∑_i (∂E/∂N_i)_{S,V} εN_i

= E(S, V, N) + ε[TS − PV + ∑_i μ_i N_i]

Comparing with E(1 + ε) gives the Euler relation

E = TS − PV + ∑_i μ_i N_i

Taking the differential,

dE = TdS + SdT − PdV − VdP + ∑_i μ_i dN_i + ∑_i N_i dμ_i

but also dE = TdS − PdV + ∑_i μ_i dN_i, hence

SdT − VdP + ∑_i N_i dμ_i = 0 – the Gibbs-Duhem relation

1.5) Response functions

The experimentally important response functions are obtained from second-order derivatives of
the thermodynamic potentials. Consider the entropy S expressed as a function of T, V, and N:

dS = (∂S/∂T)_{V,N} dT + (∂S/∂V)_{T,N} dV + (∂S/∂N)_{T,V} dN

Dividing by dT, multiplying by T, and assuming dN = 0 throughout, we have

T(∂S/∂T)_p = T(∂S/∂T)_V + T(∂S/∂V)_T (∂V/∂T)_p, i.e. C_p = C_V + T(∂S/∂V)_T (∂V/∂T)_p

Appealing to a Maxwell relation derived from F(T, V, N), namely (∂S/∂V)_T = (∂P/∂T)_V, we then get:

C_p = C_V + T(∂P/∂T)_V (∂V/∂T)_p

This allows us to write the result in terms of the response functions, which we define as:

κ_T = −(1/V)(∂V/∂P)_T   isothermal compressibility

κ_S = −(1/V)(∂V/∂P)_S   adiabatic compressibility

α_p = (1/V)(∂V/∂T)_p   thermal expansivity, thus

C_p − C_V = T V α_p²/κ_T

In terms of intensive (molar) quantities it is expressed as follows:

c_p − c_v = T v α_p²/κ_T,

where, as always, v = V/ν is the molar volume.

The above relation generalizes to any conjugate force-displacement pair (−p, V) → (y, X):

For example, we could have (y, X) = (H, M) for a magnetic system.

A similar relationship can be derived between the compressibilities κ_T and κ_S. We then clearly
must start with the volume, writing

dV = (∂V/∂p)_{S,N} dp + (∂V/∂S)_{p,N} dS + (∂V/∂N)_{S,p} dN

Dividing by dp, multiplying by −1/V, and keeping N constant, we have

κ_T = κ_S − (1/V)(∂V/∂S)_p (∂S/∂p)_T

Again we appeal to a Maxwell relation, writing (∂S/∂p)_T = −(∂V/∂T)_p.

Then, invoking the chain rule, (∂V/∂S)_p = (∂V/∂T)_p (∂T/∂S)_p = (T/C_p)(∂V/∂T)_p,

we get

κ_T − κ_S = T v α_p²/c_p

Comparing the above two equations, we obtain

c_p − c_v = T v α_p²/κ_T and κ_T − κ_S = T v α_p²/c_p

This result entails

c_p/c_v = κ_T/κ_S

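As a small numerical illustration (not part of the original note; the temperature and pressure below are assumed values), the ideal gas has α_p = 1/T, κ_T = 1/p and v = RT/p, so the combination T v α_p²/κ_T reduces to the gas constant R, i.e. c_p − c_v = R:

R = 8.314          # gas constant, J/(mol K)
T = 300.0          # temperature (K), assumed value
p = 1.013e5        # pressure (Pa), assumed value

v = R * T / p              # molar volume (m^3/mol)
alpha_p = 1.0 / T          # thermal expansivity (1/K)
kappa_T = 1.0 / p          # isothermal compressibility (1/Pa)

cp_minus_cv = T * v * alpha_p**2 / kappa_T
print(cp_minus_cv, R)      # both ~ 8.314 J/(mol K)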
The corresponding result for magnetic systems is

c_H − c_M = (T/χ_T) (∂m/∂T)_H²,

where m = M/ν is the magnetization per mole of substance, the isothermal susceptibility is

χ_T = (∂m/∂H)_T,

and the adiabatic susceptibility is

χ_S = (∂m/∂H)_S

Here the enthalpy and Gibbs free energy are

H = E − HM, G = E − TS − HM

Remark: The previous discussion has assumed an isotropic magnetic system where M and H are
collinear, hence H · M = HM. In the general case, the enthalpy and Gibbs free energy involve the full scalar product:

H = E − H · M, G = E − TS − H · M

1.6) Condition for equilibrium


The principle of maximum entropy requires that any spontaneous transformation of an isolated
system increases its entropy. Thus, if a system begins in some arbitrary non equilibrium
condition, internal changes tend to accumulate until the entropy reaches the maximum possible
value compatible with the external constraints, which finally becomes the state of
thermodynamic equilibrium. For the present purposes, we can consider a system to be isolated if
all of its extensive quantities (such as energy, volume, and particle number) are fixed. However,
the distribution of these extensive quantities is generally non uniform in some arbitrary initial
configuration. Suppose that an isolated system is divided into two subsystems which share the
total internal energy, volume, particle number, and any other extensive quantities needed to
characterize its state, such that
E = E_1 + E_2

V = V_1 + V_2

N = N_1 + N_2

If the system is isolated, these extensive quantities are conserved, such that

dE=0 d =- , dV=0 d = - , and dN=0 d =-

Thus, variations of the total entropy S = S_1 + S_2 can then be expressed as

Thermal equilibrium between the two subsystems requires that dS be stationary with respect to
first-order variations of each variable independently, such that

dS = (1/T_1 − 1/T_2) dE_1 + (p_1/T_1 − p_2/T_2) dV_1 − (μ_1/T_1 − μ_2/T_2) dN_1 = 0
⇒ T_1 = T_2, p_1 = p_2, μ_1 = μ_2,

where the fundamental relations

dS_i = (1/T_i) dE_i + (p_i/T_i) dV_i − (μ_i/T_i) dN_i

are used to identify the intensive parameters {T, p, μ} conjugate to the extensive variables {E, V, N, …} for each subsystem.
Therefore, thermal equilibrium between two systems requires equality between their intensive
parameters. These intensive parameters govern the sharing of a conserved extensive quantity
between interacting subsystems. Furthermore, by choosing one of the systems to be a small but
macroscopic portion of a larger system, we conclude that equilibrium requires the intensive
parameters like temperature, pressure, and chemical potential to be uniform throughout the
system. Any local fluctuations in these parameters would induce unbalanced forces that tend to
restore equilibrium by eliminating gradients in the intensive parameters. Obviously, temperature
or pressure gradients would induce heat or mass flows that tend to homogenize the system.

The maximum entropy principle can also be applied to systems that are in thermal, mechanical,
or chemical contact with an environment that constrains one or more of its intensive parameters.
For example, we often seek to determine the equilibrium state for a system with fixed
temperature and pressure instead of fixed energy and volume. Under these conditions it is useful
to consider a small subsystem with energy E_s and volume V_s in thermal and mechanical contact
with a much larger reservoir. If the initial configuration is not in equilibrium there will be
exchanges of energy and volume between the subsystem and the reservoir. We assume that the
reservoir is sufficiently large that small transfers of energy or volume do not change the
temperature T_0 or pressure p_0 of the reservoir. The net change in entropy for the combined
system can be expressed as

Where

The principle of maximum entropy then requires

The increase in the entropy of the subsystem is at least as large as the decrease of the entropy of
the reservoir that occurs when heat is transferred from the reservoir to the subsystem. The change
in the internal energy of the subsystem can be expressed as

dE_s = dQ + dW,

where it is convenient to divide the work into two contributions, dW = −p_0 dV_s + dW′, where dW′
is non-mechanical work performed upon the subsystem in some form other than mechanical
work against the constant external pressure p_0. The requirement that the total entropy increases
can now be expressed as

where the availability A is defined as

A = E_s + p_0 V_s − T_0 S_s  ⇒  dA = dE_s + p_0 dV_s − T_0 dS_s

Although this availability function strongly resembles the Gibbs free energy for the
subsystem, it is important to recognize that we do not require the temperature and pressure of the
subsystem to match those of the environment; hence, T_0 and p_0 appear as fixed parameters in

A. At equilibrium, the first derivatives of the availability with respect to any of the system
parameters must vanish. Furthermore, if the equilibrium is to be stable, the second derivatives
must all be positive. Applying these observations to temperature and pressure, we require T = T_0 and p = p_0 at equilibrium.

1.7 Thermodynamics of phase transitions


A typical phase diagram of a p-V -T system is shown in the Fig. below. The solid lines delineate
boundaries between distinct thermodynamic phases. These lines are called coexistence curves.
Along these curves, we can have coexistence of two phases, and the thermodynamic potentials
are singular. The order of the singularity is often taken as a classification of the phase transition,
i.e. if the lowest discontinuous or divergent derivative of the thermodynamic potentials E, F, G, and H is the m-th
derivative, the transition between the respective phases is said to be of order m. Modern
theories of phase transitions generally only recognize two possibilities: first order transitions,
where the order parameter changes discontinuously through the transition, and second order
transitions , where the order parameter vanishes continuously at the boundary from ordered to
disordered phases.

Chapter -Two

Simple Applications of Statistical Mechanics

2.1) Partition function and its properties; the ideal monatomic gas


Consider a system in contact with a heat reservoir. The systems in the representative ensemble
are distributed over their accessible states in accordance with the Boltzmann distribution. Thus,
the probability of occurrence of some state r with energy E_r is given by

P_r = e^{−βE_r} / ∑_r e^{−βE_r} --------------------------------------------(2.1)
The mean energy is written as

Ē = ∑_r P_r E_r = ∑_r e^{−βE_r} E_r / ∑_r e^{−βE_r}, ------------------------------------------(2.2)
where the sum is taken over all states of the system, irrespective of their energy.
Note that

∑_r e^{−βE_r} E_r = −∂/∂β (∑_r e^{−βE_r}) = −∂Z/∂β, ------------(2.3)

where

Z = ∑_r e^{−βE_r} --------------------------------------------------------(2.4)

It follows that

Ē = −(1/Z) ∂Z/∂β = −∂lnZ/∂β ------------------------------------------------(2.5)
The quantity Z, which is defined as the sum of the Boltzmann factor over all states, irrespective
of their energy, is called the partition function.
It is clear that all important macroscopic quantities associated with a system can be expressed in
terms of its partition function Z. Let us investigate how the partition function is related to
thermodynamic quantities. The partition function is the basic parameter in statistical physics.
Z = ∑_r e^{−βE_r}, where β = 1/(kT) ----------------------------------------------------------------(2.6)

Recall that Z is a function of both β and x (where x is the single external parameter). Hence, Z =
Z(β, x), and we can write

d lnZ = (∂lnZ/∂x) dx + (∂lnZ/∂β) dβ ----------------------------------------(2.7)

Consider a quasi-static change by which x and β change so slowly that the system stays close to
equilibrium, and, thus, remains distributed according to the Boltzmann distribution.

d lnZ = β dW − Ē dβ -------------------------------------------(2.8)

Then, we get

d(lnZ + β Ē) = β (dW + dĒ) = β dQ ---------------------------------------(2.9)

Hence, dS = k d(lnZ + β Ē), or --------------------------------------------------------(2.10)

S = k (lnZ + β Ē) --------------------------------------------------------(2.11)

This expression enables us to calculate the entropy of a system from its partition function.
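The following short numerical sketch (not part of the original note; the level spacing and temperature are assumed values) illustrates eqs. (2.4), (2.5) and (2.11) for a two-level system with energies 0 and eps:

import numpy as np

k = 1.380649e-23          # Boltzmann constant (J/K)
eps = 1.0e-21             # level spacing in joules (assumed value)
T = 300.0                 # temperature in kelvin (assumed value)
beta = 1.0 / (k * T)

E = np.array([0.0, eps])            # energies of the two states
Z = np.sum(np.exp(-beta * E))       # partition function, eq. (2.4)
P = np.exp(-beta * E) / Z           # Boltzmann probabilities, eq. (2.1)
Ebar = np.sum(P * E)                # mean energy, eq. (2.2)
S = k * (np.log(Z) + beta * Ebar)   # entropy from eq. (2.11)

print(Z, Ebar, S)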

Suppose that we are dealing with a system A^(0) consisting of two systems A and A′ which only
interact weakly with one another. Let each state of A be denoted by an index r and have a
corresponding energy E_r. Likewise, let each state of A′ be denoted by an index s and have a
corresponding energy E′_s. A state of the combined system A^(0) is then denoted by two indices r
and s. Since A and A′ only interact weakly their energies are additive, and the energy of state rs
is

E^(0)_{rs} = E_r + E′_s ------------------------------------------------------(2.12)

The partition function of A^(0) takes the form

Z^(0) = ∑_{r,s} e^{−β(E_r + E′_s)} = (∑_r e^{−βE_r})(∑_s e^{−βE′_s}) ----------------------(2.13)

Hence, Z^(0) = Z Z′, --------------------------------------------------------(2.14)

which yields lnZ^(0) = lnZ + lnZ′, ----------------------------------------------------(2.15)

where Z and Z′ are the partition functions of A and A′, respectively. It follows that the mean
energies of A^(0), A, and A′ are related by

Ē^(0) = Ē + Ē′ -----------------------------------------------(2.16)

The respective entropies of these systems are also related:

S^(0) = S + S′ -------------------------------------------(2.17)

Hence, the partition function tells us that the extensive thermodynamic functions of two weakly
interacting systems are simply additive.

2.2) Calculations of thermodynamic quantities

Consider a gas consisting of N identical monatomic molecules of mass m enclosed in a container


of volume V. Let us denote the position and momentum vectors of the i-th molecule by r_i and p_i,
respectively. Since the gas is ideal, there are no interatomic forces, and the total energy is simply
the sum of the individual kinetic energies of the molecules:

E = ∑_{i=1}^{N} p_i²/(2m) ---------------------------------------------(2.18)

Let us treat the problem classically. In this approach, we divide up phase-space into cells of
equal volume h_0^f. Here, f is the number of degrees of freedom, and h_0 is a small constant with
dimensions of angular momentum which parameterizes the precision to which the positions and
momenta of molecules are determined. Each cell in phase-space corresponds to a different state.
The partition function is the sum of the Boltzmann factor exp(−βE_r) over all possible states,
where E_r is the energy of state r. Classically, we can approximate the summation over cells in
phase-space as an integration over all phase-space. Thus,

Z = ∫…∫ exp(−βE) d³r_1 … d³r_N d³p_1 … d³p_N / h_0^{3N}, -----------------------(2.19)

where 3N is the number of degrees of freedom of a monatomic gas containing N molecules.


Then, the above equation reduces to:

Z = (1/h_0^{3N}) ∫…∫ exp[−(β/2m)(p_1² + ⋯ + p_N²)] d³r_1 ⋯ d³r_N d³p_1 ⋯ d³p_N -----------(2.20)

Note that the integral over the coordinates of a given molecule simply yields the volume of the
container, V, since the energy E is independent of the locations of the molecules in an ideal gas.
There are N such integrals, so we obtain the factor V^N in the above expression. The partition
function Z of the gas is thus made up of the product of N identical factors: i.e.,

Z = ζ^N, --------------------------------------------------(2.21)

where

ζ = (V/h_0³) ∫ exp(−βp²/2m) d³p -----------------------------------------------(2.22)

This equation is the partition function for a single molecule. Of course, this result is obvious,
since we have already shown that the partition function for a system made up of a number of
weakly interacting subsystems is just the product of the partition functions of the subsystems.
The integral in Eq. (2.22) is easily evaluated:

∫ exp(−βp²/2m) d³p = [∫_{−∞}^{+∞} exp(−βp_x²/2m) dp_x]³ ------------(2.23)

= (2πm/β)^{3/2} ---------------------------------(2.24)

Thus,

ζ = V (2πm/(h_0²β))^{3/2} ------------------------------------------------(2.25)

And,

lnZ = N [lnV − (3/2) lnβ + (3/2) ln(2πm/h_0²)] --------------------------------(2.26)

Then the mean pressure follows as:

p̄ = (1/β)(∂lnZ/∂V) = N/(βV), -----------------------------------------------(2.27)

which reduces to the ideal gas equation of state

p̄ V = N k T = ν R T, ---------------------------------------------------(2.28)

where N = ν N_A and R = N_A k. The mean energy of the gas is obtained as:

Ē = −∂lnZ/∂β = (3/2) N k T = (3/2) ν R T ------------------------------------------(2.29)

Note that the internal energy is a function of temperature alone, with no dependence on volume.
The molar heat capacity at constant volume of the gas is given by

c_V = (1/ν)(∂Ē/∂T)_V = (3/2) R ----------------------------------------------(2.30)

Therefore, the mean energy can be written as

Ē = (3/2) ν R T ---------------------------------------------------(2.31)

Now let us use the partition function to calculate a new result. The entropy of the gas can be
calculated quite simply from the expression

S = k (lnZ + β Ē) -----------------------------------------------------(2.32)

Thus,

S = ν R [lnV − (3/2) lnβ + (3/2) ln(2πm/h_0²)] + (3/2) ν R -------------------------------(2.33)

or S = ν R [lnV + (3/2) lnT + σ], ----------------------------------(2.34)

where σ = (3/2) ln(2πmk/h_0²) + 3/2 ------------------------------------------(2.35)
2.3) Gibbs paradox
Thermodynamic quantities can be divided into two groups, extensive and intensive. Extensive
quantities increase by a factor α when the size of the system under consideration is increased by
the same factor. Intensive quantities stay the same. Energy and volume are typical extensive
quantities. Pressure and temperature are typical intensive quantities. Entropy is very definitely an
extensive quantity. Suppose that we have a system of volume V containing ν moles of ideal gas
at temperature T. Doubling the size of the system is like joining two identical systems together to
form a new system of volume 2 V containing 2 ν moles of gas at temperature T. Let

S = ν R [lnV + (3/2) lnT + σ] ------------------------------(2.36)

Denote the entropy of the original system, and let

S′ = 2 ν R [ln 2V + (3/2) lnT + σ] ---------------------------------------(2.37)
Denote the entropy of the double-sized system. Clearly, if entropy is an extensive quantity
(which it is!) then we should have
S′ = 2S ---------------------------------------------(2.38)
But, in fact, we find that

S′ = 2S + 2 ν R ln 2 ---------------------------------------------(2.39)
So, the entropy of the double-sized system is more than double the entropy of the original
system. Where does this extra entropy come from?
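The short numerical illustration below (not part of the original note; volume, temperature and the constant σ are assumed values, and σ cancels in the comparison) evaluates eqs. (2.36) and (2.37) and shows that the excess entropy is exactly 2νR ln2, as in eq. (2.39):

import numpy as np

R = 8.314        # J/(mol K)
nu = 1.0         # moles (assumed)
V = 0.025        # m^3 (assumed)
T = 300.0        # K (assumed)
sigma = 0.0      # arbitrary constant; it drops out of the comparison

S  = nu * R * (np.log(V) + 1.5 * np.log(T) + sigma)            # eq. (2.36)
S2 = 2 * nu * R * (np.log(2 * V) + 1.5 * np.log(T) + sigma)    # eq. (2.37)

print(S2 - 2 * S)              # extra entropy of the double-sized system
print(2 * nu * R * np.log(2))  # equals 2*nu*R*ln2, as in eq. (2.39)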

2.4) Validity of the classical approximation


Suppose the mean intermolecular separation R̄ is estimated by imagining each molecule at the
center of a little cube of side R̄, these cubes filling the available volume V. Then

R̄³ = V/N, or R̄ = (V/N)^{1/3} ------------------------------------------------(2.40)

The mean momentum p̄ can be estimated from the known mean energy of a molecule in the
gas at temperature T:

p̄²/(2m) ≈ (3/2) k T, so p̄ ≈ (3mkT)^{1/2} ------------------------------------(2.41)

Hence the condition for the validity of the classical approximation, R̄ ≫ h/p̄, becomes

(V/N)^{1/3} ≫ h/(3mkT)^{1/2} ----------------------------(2.42)
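As an illustrative check of eq. (2.42) (the gas, temperature and pressure below are assumed values, not from the note), one can compare the mean separation with the de Broglie length for helium at room conditions:

import numpy as np

k = 1.380649e-23      # J/K
h = 6.62607015e-34    # J s
m = 6.6465e-27        # mass of a helium atom (kg)
T = 300.0             # K (assumed)
p = 1.013e5           # Pa (assumed)

n = p / (k * T)                    # number density from the ideal gas law
separation = n ** (-1.0 / 3.0)     # mean intermolecular separation (m)
lam = h / np.sqrt(3 * m * k * T)   # de Broglie length scale h/p_bar (m)

print(separation, lam, separation / lam)   # ratio >> 1: classical limit valid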

2.5) Proof of equipartition

The internal energy of a monatomic ideal gas containing N particles is (3/2) N k T. This means
that each particle possesses, on average, (3/2) k T units of energy. Monatomic particles have only
three translational degrees of freedom, corresponding to their motion in three dimensions. They
possess no internal rotational or vibrational degrees of freedom. Thus, the mean energy per
degree of freedom in a monatomic ideal gas is (1/2) k T. In fact, this is a special case of a rather
general result.
Suppose that the energy of a system is determined by f generalized coordinates q_k and f
corresponding generalized momenta p_k, so that

E = E(q_1, …, q_f, p_1, …, p_f) ---------------------------------------------(2.43)
1) The total energy splits additively into the form

E = ε_i(p_i) + E′(q_1, …, p_f), -----------------------------------------------(2.44)

where ε_i involves only the one variable p_i, and the remaining part E′ does not depend on p_i.
2) The function ε_i is quadratic in p_i, so that

ε_i(p_i) = b p_i², -----------------------------------------------------(2.45)

where b is a constant.

The most common situation in which the above assumptions are valid is where p_i is a momentum.
This is because the kinetic energy is usually a quadratic function of each momentum
component, whereas the potential energy does not involve the momenta at all. However, if a
coordinate q_i were to satisfy assumptions 1 and 2 then the theorem we are about to establish
would hold just as well. In the classical approximation, the mean value of ε_i is expressed in
terms of integrals over all phase-space:

-----------------------------------(2.46)
Condition 1 gives

---------------------(2.47)

where use has been made of the multiplicative property of the exponential function, and where
the last integrals in both the numerator and denominator extend over all variables q_k and p_k
except p_i. These integrals are equal and, thus, cancel. Hence

--------------------------(2.48)
This expression can be simplified further since

-----------------(2.49)

-------------------------(2.50)
According to condition 2

------(2.51)

where y = β^{1/2} p_i. Thus,

----------------------(2.52)
Note that the integral on the right-hand side does not depend on β at all. Then it follows from
eq.(2.49)

-----------------------------------(2.53)
It gives

⟨ε_i⟩ = 1/(2β) = (1/2) k T -------------------------------------(2.54)
This is the famous equipartition theorem of classical physics. It states that the mean value of
every independent quadratic term in the energy is equal to (1/2) k T .
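The following Monte Carlo sketch (not part of the original note; the molecular mass and temperature are assumed values) checks the theorem for ε = p²/2m: the Boltzmann weight exp(−βp²/2m) is a Gaussian in p with variance mkT, so the sampled average of p²/2m should equal (1/2) k T:

import numpy as np

rng = np.random.default_rng(0)
k = 1.380649e-23     # J/K
T = 300.0            # K (assumed)
m = 4.65e-26         # mass of an N2 molecule (kg), assumed example

p = rng.normal(0.0, np.sqrt(m * k * T), size=1_000_000)  # momenta ~ exp(-p^2/2mkT)
mean_eps = np.mean(p**2 / (2 * m))

print(mean_eps, 0.5 * k * T)   # the two values agree to sampling accuracy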

2.6) Simple applications

Suppose the molecules of a gas have translational kinetic energy

K = (1/2m)(p_x² + p_y² + p_z²)

Each quadratic term contributes (1/2) k T on average, so

⟨p_x²/2m⟩ = ⟨p_y²/2m⟩ = ⟨p_z²/2m⟩ = (1/2) k T -----------------------------------------------------------(2.52)

⟨K⟩ = (3/2) k T ----------------------------------------------------------(2.55)

2.7 Specific heat of solids


Consider a simple solid containing N atoms. Now, atoms in solids cannot translate (unlike those in
gases), but are free to vibrate about their equilibrium positions. Such vibrations are called lattice
vibrations, and can be thought of as sound waves propagating through the crystal lattice. In
normal mode coordinates, the total energy of the lattice vibrations takes the particularly
simple form

E = (1/2) ∑_{i=1}^{3N} (q̇_i² + ω_i² q_i²) ---------------------------------------------(2.56)
where ω_i is the (angular) oscillation frequency of the i-th normal mode and the q_i are the normal-mode
coordinates. It is clear that in normal mode coordinates, the linearized lattice vibrations are equivalent to 3N
independent harmonic oscillators (of course, each oscillator corresponds to a different normal mode).

If the lattice vibrations behave classically then, according to the equipartition theorem, each
normal mode of oscillation has an associated mean energy k T in equilibrium at temperature T

[(1/2) k T resides in the kinetic energy of the oscillation, and (1/2) k T resides in the potential
energy]. Thus, the mean internal energy per mole of the solid is

------------------------------------------------(2.57)
It follows that the molar heat capacity at constant volume is

--------------------------------------------------(2.58)

We can use the quantum mechanical result for a single oscillator to write the mean energy of the
solid in the form

Ē = 3N [(1/2) ħω + ħω/(e^{βħω} − 1)] ------------------------------------(2.59)

The molar heat capacity is defined as:

c_V = (1/ν)(∂Ē/∂T)_V ----------------------------(2.60)

Then it gives

c_V = 3R (βħω)² e^{βħω}/(e^{βħω} − 1)² -----------------------------(2.61)

which reduces to:

c_V = 3R (θ_E/T)² e^{θ_E/T}/(e^{θ_E/T} − 1)² -----------------------------------(2.62)

where θ_E = ħω/k is the Einstein temperature.
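A short numerical sketch of eq. (2.62) (not part of the original note; the Einstein temperature below is an assumed illustrative value) shows the heat capacity approaching the classical value 3R at high temperature:

import numpy as np

R = 8.314
theta_E = 240.0                    # Einstein temperature in K (assumed value)

def c_v(T):
    x = theta_E / T
    return 3 * R * x**2 * np.exp(x) / (np.exp(x) - 1.0)**2

for T in (30, 100, 300, 1000):
    print(T, c_v(T))               # tends to 3R ~ 24.9 J/(mol K) at high T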

2.8) General calculation of magnetism


Consider a system consisting of N non-interacting atoms in a substance at absolute
temperature T and placed in an external magnetic field H pointing along the z-direction. Then the
magnetic energy of an atom can be written as:

ε = −μ_z H -------------------------------------(2.63)

Here μ_z is the z-component of the magnetic moment of the atom. The magnetic moment is proportional
to the total angular momentum J of the atom and is conventionally written in the form:

μ = g μ_B J -----------------------------(2.64)

where μ_B is the standard unit of magnetic moment (usually the Bohr magneton μ_B = eħ/2mc, m is the
electron mass) and where g is a number of order unity, the so-called g-factor of the atom.
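As a minimal illustration (not part of the original note), the simplest case J = 1/2 has only two states μ_z = ±μ with energies ∓μH, and the Boltzmann average gives ⟨μ_z⟩ = μ tanh(βμH); the field and temperature below are assumed values, with the field expressed in tesla for this SI sketch:

import numpy as np

k = 1.380649e-23       # J/K
mu_B = 9.274e-24       # Bohr magneton (J/T)
g = 2.0                # g-factor
mu = g * mu_B / 2      # z-moment of a spin-1/2 atom
H = 1.0                # field (T), assumed
T = 1.0                # temperature (K), assumed
beta = 1.0 / (k * T)

# direct Boltzmann average over the two states mu_z = +mu and -mu
weights = np.exp(beta * mu * np.array([+1.0, -1.0]) * H)
mean_mu = np.sum(np.array([+mu, -mu]) * weights) / np.sum(weights)

print(mean_mu, mu * np.tanh(beta * mu * H))   # the two expressions agree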

2.9) Maxwell’s velocity distribution


Consider a molecule of mass m in a dilute gas. The gas may consist of several different kinds
of molecules, and the molecule under consideration may also be polyatomic. If external force
fields are neglected, the energy of this molecule is equal to:

ε = p²/2m ------------------------------(2.65)

The distribution function for momenta is given by

g(p) = ⟨(1/N) ∑_{i=1}^{N} δ(p_i − p)⟩ -------------------------------(2.66)

Note that g(p) = ⟨δ(p_i − p)⟩ is the same for every particle, independent of its label i. We
compute the average

⟨δ(p_1 − p)⟩ = ∫ e^{−βH} δ(p_1 − p) / ∫ e^{−βH}. Setting i = 1, all the integrals other than that over p_1 divide out between

numerator and denominator. We then have
-------------------(2.67)
g(p) = e^{−βp²/2m} / ∫ d³p′ e^{−βp′²/2m} = (2πmkT)^{−3/2} e^{−p²/2mkT} -------------------(2.67)

One more commonly refers to the velocity distribution f(v), which is related to g(p) by

f(v) d³v = g(p) d³p ---------------------------(2.68)

Hence

f(v) = m³ g(mv) = (m/2πkT)^{3/2} e^{−mv²/2kT} ---------------- (2.69)

This is known as the Maxwell velocity distribution. Note that the distributions are normalized,
viz.

∫ d³p g(p) = ∫ d³v f(v) = 1 -------------------- (2.70)

If we are only interested in averaging functions of v = |v| which are isotropic, then we can define
the Maxwell speed distribution, f̃(v), as

f̃(v) = 4π v² f(v) ------ (2.71)

Note that f̃(v) is normalized according to

∫_0^∞ dv f̃(v) = 1 ------------------ (2.72)

It is convenient to represent v in units of v_0 = (kT/m)^{1/2}, writing v = s v_0, in which case

φ(s) = (2/π)^{1/2} s² e^{−s²/2} ------------ (2.73)

The distribution φ(s) is shown in the figure below. Computing averages, we have

⟨s^k⟩ = ∫_0^∞ ds s^k φ(s) = 2^{(k+2)/2} Γ((k+3)/2)/√π ----------------(2.74)

Thus, ⟨s⁰⟩ = 1, ⟨s⟩ = (8/π)^{1/2}, ⟨s²⟩ = 3, etc. The speed averages are

⟨v^k⟩ = ⟨s^k⟩ (kT/m)^{k/2} ----------------------- (2.75)

Note that the average velocity is ⟨v⟩ = 0, but the average speed is ⟨v⟩ = (8kT/πm)^{1/2}.

The speed distribution is plotted as follows

Figure 2.1: Maxwell distribution of speeds

The root mean square speed is given as:

v_rms = ⟨v²⟩^{1/2} = (3kT/m)^{1/2} --------------(2.76)
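The following sampling sketch (not part of the original note; gas species and temperature are assumed values) draws velocity components from the Maxwell velocity distribution, where each component is Gaussian with variance kT/m, and compares the sampled mean and rms speeds with eqs. (2.75) and (2.76):

import numpy as np

rng = np.random.default_rng(1)
k = 1.380649e-23
T = 300.0                         # K (assumed)
m = 4.65e-26                      # N2 molecule mass (kg), assumed example

vel = rng.normal(0.0, np.sqrt(k * T / m), size=(1_000_000, 3))
speed = np.linalg.norm(vel, axis=1)

print(speed.mean(), np.sqrt(8 * k * T / (np.pi * m)))      # mean speed
print(np.sqrt((speed**2).mean()), np.sqrt(3 * k * T / m))  # rms speed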

2.10) Related velocity distribution


The most probable speed ṽ is obtained from the condition

df̃(v)/dv = 0 ---------- (2.77)

Hence ṽ is obtained as

ṽ = (2kT/m)^{1/2} ------------------------------------- (2.78)

All these various speeds are proportional to (kT/m)^{1/2}. Thus the molecular speed increases when
the temperature is raised.

2.11 Number of molecules striking a surface
Consider that the container is a box in the form of a parallelepiped, the area of one end wall being
A. How many molecules per unit time strike this end wall? Suppose that there are n molecules
per unit volume in this gas. Since they move in random directions, assume one third of them (n/3
molecules per unit volume) move predominantly along the z-axis. Half of these, i.e. n/6 molecules
per unit volume, have velocity in the +z-direction so that they will strike the end wall under
consideration. If the mean speed of the molecules is v̄, these molecules cover in an infinitesimal
time dt a mean distance v̄ dt. Thus the number of molecules which strike the end wall of area A
in time dt is equal to the number of molecules moving with speed v̄ in the +z-direction and
contained in the cylinder of volume A v̄ dt. It is given by:

dN = (1/6) n A v̄ dt ------------------------ (2.79)

The total number of molecules which strike unit area of the wall per unit time (i.e. the total
molecular flux) is given as:

Φ_0 = (1/6) n v̄

Figure 2.2 molecules colliding with walls


The dependence of Φ_0 on the temperature T and mean pressure p̄ of the gas follows immediately:

p̄ = n k T  ⇒  n = p̄/(kT) -------------------------- (2.80)

Furthermore, by the equipartition theorem:

(1/2) m v̄² ≈ (3/2) k T  ⇒  v̄ ∝ (kT/m)^{1/2} ---------------------------------------- (2.81)

Thus the above equation implies that:

Φ_0 ∝ p̄/(mkT)^{1/2} ---------------------------------- (2.82)
For a more careful calculation, consider molecules with velocity between v and v + dv striking an
element of area dA of the wall in time dt. Such molecules lie in a cylinder of volume dA v dt cosθ,
while the number of molecules per unit volume in this velocity range is f(v) d³v. Hence the number
of molecules of this type which strike the area dA of the wall in time dt is equal to [f(v) d³v][dA v cosθ dt].

Let Φ(v) d³v denote the number of molecules, with velocity between v and v + dv, which strike a unit
area of the wall per unit time. Then

Φ(v) d³v = f(v) v cosθ d³v ------------------------ (2.83)

To obtain the total flux Φ_0 we have to sum over all possible velocities v subject to the restriction that the
velocity component v_z > 0, since molecules with v_z < 0 will not collide with the element of area:

Φ_0 = ∫_{v_z>0} f(v) v cosθ d³v ------------------------------------------- (2.84)

Carrying out the integration, the equation becomes

Φ_0 = (1/4) n v̄ ----------------------------(2.85)

2.12 Effusions

Consider a small hole or slit made in the wall of a container; the equilibrium of the gas inside the
container is disturbed to a negligible extent. In this case the number of molecules which emerge
through the small hole is the same as the number of molecules which would strike the area occupied by
the hole if the latter were closed off. The process whereby molecules emerge through such a
small hole is called effusion.

Figure 2.3: Formation of a molecular beam by effusing molecules.


The number of molecules which have speed in the range between v and v + dv and which emerge per
second from a small area A into a solid-angle range dΩ about the forward direction θ is given
as:

------------ (2.86)

The number of molecules which pass per second through the hole from left to right equals the
number of molecules which pass per second through the hole from right to left. Since Φ_0 ∝ p̄/(mkT)^{1/2},
this leads to the simple equality:

p̄_1/√T_1 = p̄_2/√T_2 ------------------------------------- (2.87)

2.13 Pressure and momentum

We now consider, from a detailed kinetic point of view, how a gas exerts a pressure. The mean force exerted on
the wall of the container is due to the collisions of molecules with the wall. In each collision with
the wall there is a change in the momentum of the molecule.

Let us denote by G_+ the mean molecular momentum crossing this surface dA per unit time from
left to right, and by G_- the mean molecular momentum crossing this surface dA per unit time
from right to left.

---------------------------- (2.88)

Where F =

------------ (2.89)

And

------------ (2.90)

Then

--------- (2.91)

Chapter Three

Quantum Statistics of Ideal Gases

3.1 Isolated systems: micro canonical ensembles


In statistical mechanics, there are three basic types of ensembles.
i. Micro canonical ensemble
ii. Canonical ensemble
iii. Grand canonical ensemble

The micro canonical ensemble


An isolated system is both thermally insulated and mechanically undisturbed. According to
the first law of thermodynamics, the internal energy of such a system is constant. We define the
following ensemble to describe isolated systems:
A micro canonical ensemble is an ensemble of fixed E, V and N

The basic postulate of statistical mechanics says that every microstate of an ensemble of isolated
systems occurs with equal probability. Thus, if the ensemble has W different microstates, the
probability that any particular member of the ensemble is in a particular microstate is
P_r = 1/W --------------------------(3.1)

since the sum of all the probabilities have to add to 1 and they all must be equal. Another
way to think about this equation is that the micro canonical partition function is just W .
The entropy of a micro canonical ensemble is calculated using the Boltzmann equation:
S = k lnW ------------------------------------- (3.2)
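A small illustration of eq. (3.2) (not part of the original note; the spin numbers below are assumed): for an isolated set of N two-state spins with n_up spins up, W is the binomial coefficient C(N, n_up):

from math import comb, log

k = 1.380649e-23      # J/K
N = 100               # number of spins (assumed)
n_up = 50             # number of up spins (assumed)

W = comb(N, n_up)     # number of accessible microstates
S = k * log(W)        # entropy of the microcanonical ensemble, eq. (3.2)

print(W, S)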

3.2 Systems at fixed temperature


Temperature can be defined for a micro canonical ensemble using the equation 1/T = (∂S/∂E)_{V,N}. To apply
this equation, we first note that the number of microstates available to the system depends on the
energy, so W = W(E). This means that S depends on E. We can therefore write

1/T = (∂S/∂E)_{V,N} = k (∂lnW/∂E)_{V,N} ------------------------------(3.3)

Assume we have T; we can define other quantities that depend on it, like the Helmholtz free
energy:
F = E − TS = E − kT lnW ---------------------------------------- (3.4)
Canonical ensemble – is used to describe a system in contact with a heat reservoir. The total
number of particles is constant.

3.3 Grand canonical ensembles

Both the micro canonical and canonical ensembles deal with systems with a fixed number of
particles N (or, in general, fixed values of the numbers of particles of each type, , , . . . ).
However, the number of molecules of a given type in a chemical system at equilibrium is not
generally fixed. Due to chemical reactions, the number of molecules of each type fluctuates
with time.
A grand canonical ensemble is an ensemble of fixed T, V, and µ. Suppose that the energies of
the two subsystems are E_1 and E_2, and that the numbers of molecules in each subsystem are N_1
and N_2. Since the system is isolated, the total energy and number of molecules are conserved:
E = E_1 + E_2 and N = N_1 + N_2

Note:

i. micro-canonical ensemble have constant N, V, E number of particles, volume, and


energy
ii. canonical ensemble have constant N, V, T number of particles, volume, temperature
iii. grand-canonical ensemble have constant µ, V, T chemical potential, volume,
temperature
3.4 Identical Particles & Symmetry Requirements

Consider a system of gas consisting of N identical particles enclosed in a container of volume V.


Suppose the collective coordinates of the i-th particle are represented by Q_i and its quantum state by
S_i. The state of the system of gas is then described by the set of quantum numbers

{S_1, S_2, …, S_N} ----------------------------------(3.5)

The system of gas in this state can be described by the following wave function:

Ψ = Ψ_{S_1…S_N}(Q_1, …, Q_N) ----------------------------(3.6)

If the particles are distinguishable and any number of particles are allowed to be in the same state
S, then the particles are said to Obey Maxwell-Boltzmann statistics. This is called classical
description of a system and it doesn’t impose symmetry requirements on the wave function when
two particles are interchanged.

However, the quantum mechanical description, where identical particles are considered to be
indistinguishable, imposes symmetry requirements on the wave function when two particles are
interchanged, i.e. interchanging two identical particles does not lead the whole system to a new
state.

If each particle in the system has integral spin, then the wave function of the system Ψ must be
symmetric under interchange of two particles:

Ψ(…Q_i…Q_j…) = +Ψ(…Q_j…Q_i…) ------------------------------- (3.7)

Particles satisfying this symmetry condition are said to obey Bose- Einstein statistics and they
are called bosons.

On the other hand, if each particle in the system has half-integral spin, then the wave function of
the system satisfies the antisymmetry condition when two particles are interchanged:

Ψ(…Q_i…Q_j…) = −Ψ(…Q_j…Q_i…) ------------------------------- (3.8)

Particles satisfying this anti symmetric condition are said to obey Fermi- Dirac Statistics & they
are called fermions.

If two particles, say i and j, occupying the same state s are interchanged, the wave function of the system
should remain the same. At the same time, if the particles have half-integral spin, condition (3.8)
must be satisfied. This leads to the conclusion that for a system containing two particles in the
same state the wave function must vanish, Ψ = 0.

Thus in the Fermi-Dirac Case there exists no state of the whole gas for which two or more
particles are in the same single particle state. This is called Pauli exclusion principle.

3.5 Formulation of the statistical problem

Consider a system of gas consisting of N weakly interacting identical particles enclosed in a


container of volume V. Suppose the system is in equilibrium at temperature T. Let n_r particles
be in the single-particle state r characterized by energy ε_r.

The energy of the whole system, which is supposed to be in state R, is then given by

E_R = n_1 ε_1 + n_2 ε_2 + … = ∑_r n_r ε_r -----------------------------------------(3.9)

The partition function of the system can be given by

Z = ∑_R e^{−βE_R} = ∑_R e^{−β(n_1ε_1 + n_2ε_2 + …)} ---------------------------------------(3.10)

The mean number of particles in a particular state r is then

n̄_r = (1/Z) ∑_R n_r e^{−β(n_1ε_1 + n_2ε_2 + …)}

= −(1/βZ) ∂Z/∂ε_r

= −(1/β) ∂lnZ/∂ε_r ---------------------------------(3.11)

Similarly, for the mean square number,

n̄_r² = (1/Z) ∑_R n_r² e^{−β(n_1ε_1 + n_2ε_2 + …)} = (1/β²Z) ∂²Z/∂ε_r², --------------------------------------(3.12)

so that the dispersion is

(Δn_r)² ≡ (n_r − n̄_r)²̄ = n̄_r² − (n̄_r)² = (1/β²) ∂²lnZ/∂ε_r² = −(1/β) ∂n̄_r/∂ε_r --------------------(3.13)
If the particles under consideration obey Maxwell-Boltzmann statistics, the partition function of the
system can be obtained from eq. (3.10) after summing over all possible values of n_r (n_r = 0, 1, 2, 3, …)
for each r, provided that the total number of particles in the system is constant, ∑_r n_r = N.

Since the particles are considered to be distinguishable, in addition to specifying the no of


particles in each state, it is necessary to specify which particular particles are in which state.

In the case of Bose-Einstein statistics, the summation in equation (3.10) should be taken over all
possible values of n_r for each r, and the total number of particles must be fixed. However, since
the particles are considered to be indistinguishable, specifying the number of particles in each
state is sufficient. A special case of Bose-Einstein statistics where the total number of particles in the
system is not constant, i.e. where the restriction ∑_r n_r = N is lifted, is called photon statistics.

In the case of Fermi-Dirac statistics there are only two possible values of n_r (n_r = 0, 1) for each r, since
more than one particle in a particular state is not allowed, and the total number of particles must be fixed.

3.6 The quantum distribution functions

Consider a system of gas containing N particles. Let the lowest energy level of a single
particle be denoted by ε_1. For a system of particles that obeys BE statistics, where there is no
restriction on the number of particles in any state, the lowest energy level of the whole
system can be obtained by placing all the particles in the lowest energy level, and hence for
the lowest energy level of the whole system we can write E_0 = N ε_1.

In the case of FD statistics, however, where we are not allowed to have more than one particle
in any state, the lowest energy level of the whole system can only be obtained by placing one
particle in each consecutive state of increasing energy, starting from the lowest energy level
ε_1. For a system maintained at absolute temperature T, the mean number of particles in a
particular state s is

n̄_s = ∑_R n_s e^{−β(n_1ε_1 + n_2ε_2 + …)} / ∑_R e^{−β(n_1ε_1 + n_2ε_2 + …)} ---------------------(3.14)
Photon statistics: The summation in eq. (3.14) should be over all possible values of n_s for each s.
Since there is no restriction on the total number of particles, the sums in the numerator and
denominator of eq. (3.14) are the same. Hence we can write

n̄_s = ∑_{n_s} n_s e^{−β n_s ε_s} / ∑_{n_s} e^{−β n_s ε_s} ---------------------------- (3.15)

However, the above expression can be rewritten as

n̄_s = −(1/β) ∂/∂ε_s ln(∑_{n_s} e^{−β n_s ε_s}) ---------------------------- (3.16)

Now, the sum on the right-hand side of the above equation is an infinite geometric series, which
can easily be evaluated. In fact,

∑_{n_s=0}^{∞} e^{−β n_s ε_s} = 1/(1 − e^{−β ε_s}) ----------- (3.17)

Thus, eqs. (3.16) and (3.17) give

n̄_s = 1/(e^{β ε_s} − 1), --------------------(3.18)
which is called the Planck distribution.

Fermi-Dirac (FD) statistics: In this case n_r has only two values, n_r = 0, 1 for each r, and there
is a restriction on the total number of particles, which is supposed to be fixed: ∑_r n_r = N.
Let us introduce the function

Z_s(N) = ∑_R^{(s)} e^{−β(n_1ε_1 + n_2ε_2 + …)}, -------------------------------- (3.19)
which is defined as the partition function for N particles distributed over all quantum states,
excluding state s, according to Fermi-Dirac statistics. By explicitly performing the sum over n_s =
0 and 1, the expression (3.14) reduces to

n̄_s = [0 + e^{−βε_s} Z_s(N−1)] / [Z_s(N) + e^{−βε_s} Z_s(N−1)], --------------------------- (3.20)

which yields

n̄_s = 1/[(Z_s(N)/Z_s(N−1)) e^{βε_s} + 1] ---------------------- (3.21)

In order to make further progress, we must somehow relate Z_s(N−1) to Z_s(N).
Suppose that ∆N ≪ N. It follows that lnZ_s(N − ∆N) can be Taylor expanded to give

lnZ_s(N − ∆N) ≈ lnZ_s(N) − α_s ∆N, ----------------- (3.22)

where

α_s ≡ ∂lnZ_s/∂N ------------------------------------- (3.23)
We Taylor expand the slowly varying function lnZ_s(N), rather than the rapidly varying function
Z_s(N), because the radius of convergence of the latter Taylor series is too small for the series to
be of any practical use. Equation (3.22) can be rearranged to give

Z_s(N − ∆N) = Z_s(N) e^{−α_s ∆N} ------------------------------(3.24)

Since Z_s(N) is a sum over very many different quantum states, we would not expect the
logarithm of this function to be sensitive to which particular state s is excluded from
consideration. Let us, therefore, introduce the approximation that α_s is independent of s, so that we can
write

α_s ≈ α ------------------------ (3.25)

for all s. It follows that the derivative (3.23) can be expressed approximately in terms of the
derivative of the full partition function Z(N) (in which the N particles are distributed over all
quantum states). In fact,

α ≈ ∂lnZ/∂N --------------------- (3.26)
Making use of Eq. (3.24), with ∆N = 1, plus the approximation (3.25), the expression (3.21)
reduces to

n̄_s = 1/(e^{α + βε_s} + 1) -------------------------- (3.27)

This is called the Fermi-Dirac distribution. The parameter α is determined by the constraint that
∑_r n̄_r = N: i.e.,

∑_r 1/(e^{α + βε_r} + 1) = N ---------------------- (3.28)
Note that n̄_s → 0 if ε_s becomes sufficiently large. On the other hand, since the denominator in
Eq. (3.27) can never become less than unity, no matter how small ε_s becomes, it follows that n̄_s ≤
1. Thus,

0 ≤ n̄_s ≤ 1, ----------------------------------- (3.29)
in accordance with the Pauli exclusion principle.
Bose-Einstein statistics: The summation in eq. (3.14) ranges over all possible values of n_r
(n_r = 0, 1, 2, 3, …) for each r, and the total number of particles is restricted to be constant.
Performing the sum over n_s explicitly in eq. (3.14), we can write

n̄_s = [∑_{n_s} n_s e^{−β n_s ε_s} Z_s(N − n_s)] / [∑_{n_s} e^{−β n_s ε_s} Z_s(N − n_s)], --------------------- (3.30)

where Z_s(N) is the partition function for N particles distributed over all quantum states,
excluding state s, according to Bose-Einstein statistics [cf., Eq. (3.19)]. Using Eq. (3.24), and the
approximation (3.25), the above equation reduces to

n̄_s = ∑_{n_s} n_s e^{−n_s(α + βε_s)} / ∑_{n_s} e^{−n_s(α + βε_s)} ------------------------------------------- (3.31)

Note that this expression is identical to (3.15), except that βε_s is replaced by α + βε_s. Hence, an
analogous calculation to that outlined in the previous subsection yields

n̄_s = 1/(e^{α + βε_s} − 1) -------------------------------------- (3.32)

This is called the Bose-Einstein distribution.
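The short numerical comparison below (not part of the original note; the sample values of x are assumed) evaluates the three occupation numbers as functions of x = α + βε_s and shows that they coincide when x ≫ 1, anticipating the classical limit discussed later:

import numpy as np

x = np.array([0.1, 1.0, 3.0, 6.0])     # x = alpha + beta*eps, assumed sample values

n_FD = 1.0 / (np.exp(x) + 1.0)         # Fermi-Dirac, eq. (3.27)
n_BE = 1.0 / (np.exp(x) - 1.0)         # Bose-Einstein, eq. (3.32)
n_MB = np.exp(-x)                      # Maxwell-Boltzmann / classical limit

for row in zip(x, n_FD, n_BE, n_MB):
    print(row)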

3.7 Maxwell-Boltzmann statistics

MB statistics- is a classical approach. The partition function is

Z = ∑_R e^{−β(n_1ε_1 + n_2ε_2 + …)}, ----------------------------- (3.33)

where the sum is over all distinct states R of the gas, and the particles are treated as
distinguishable. For given values of n_1, n_2, ···, there are

N!/(n_1! n_2! ···)

possible ways of placing the particles in the given states, and each of these arrangements yields a distinct state
for the system. Hence the partition function of the system becomes

Z = ∑ [N!/(n_1! n_2! ···)] e^{−β(n_1ε_1 + n_2ε_2 + …)}, -------------------------- (3.34)
where the sum is over all values of n_r = 0, 1, 2, ··· for each r, subject to the constraint that

∑_r n_r = N ------------------------------------------ (3.35)

Now, Eq. (3.34) can be written

Z = ∑ [N!/(n_1! n_2! ···)] (e^{−βε_1})^{n_1} (e^{−βε_2})^{n_2} ···, -------------------------- (3.36)

which, by the multinomial theorem for expanding a polynomial, can be summed to give

Z = (e^{−βε_1} + e^{−βε_2} + ···)^N ----------------------- (3.37)

In other words,

lnZ = N ln(∑_r e^{−βε_r}) --------------------- (3.38)
Note that the argument of the logarithm is simply the partition function for a single particle.
Equations (3.11) and (3.38) can be combined to give

n̄_r = −(1/β) ∂lnZ/∂ε_r = N e^{−βε_r} / ∑_r e^{−βε_r} ----------------------- (3.39)

This is known as the Maxwell-Boltzmann distribution.

3.8 Photon statistics

We have the following relation for the partition function: Z = ∑_R e^{−βE_R},

where the summation is over all the possible states R of the whole system, or equivalently over
all values of n_r = 0, 1, 2, … for each r. We know that n̄_r = −(1/β) ∂lnZ/∂ε_r.

Z = ∑_{n_1,n_2,…} e^{−β(n_1ε_1 + n_2ε_2 + …)}

= (∑_{n_1} e^{−βn_1ε_1})(∑_{n_2} e^{−βn_2ε_2}) ···

= (1 − e^{−βε_1})^{−1} (1 − e^{−βε_2})^{−1} ···

⇒ lnZ = −∑_r ln(1 − e^{−βε_r}) -----------------------------------(3.40)

The mean number of particles in a particular state r with the corresponding energy ε_r
is given by

n̄_r = −(1/β) ∂lnZ/∂ε_r = e^{−βε_r}/(1 − e^{−βε_r}) = 1/(e^{βε_r} − 1) ------------------------- (3.41)

3.9 Bose-Einstein statistics

Let us now consider Bose-Einstein statistics. The particles in the system are assumed to be
massive, so the total number of particles N is a fixed number. Consider the expression (3.14). For
the case of massive bosons, the numbers , , · · · assume all values = 0, 1, 2, · · · for each

45 | P a g e
r, subject to the constraint that ∑ =N Performing explicitly the sum over , this expression
reduces to

------------(3.42)
where Z_s(N) is the partition function for N particles distributed over all quantum states,
excluding state s, according to Bose-Einstein statistics
Let Z = Z(N); for a system of N′ particles the partition function is Z(N′). This function is a rapidly
increasing function of N′, and hence multiplying it by the rapidly decreasing function e^{−αN′}
produces a function e^{−αN′} Z(N′) which usually has a very sharp maximum. The proper choice
of the positive parameter α will make the sharp maximum occur at N′ = N. Thus we can write

∑_{N′} e^{−αN′} Z(N′) = e^{−αN} Z(N) Δ*N′, -------------------------------------- (3.43)

where e^{−αN} Z(N) is the maximum value of the summand while Δ*N′ is the width of the
maximum. We define

𝒵 ≡ ∑_{N′} e^{−αN′} Z(N′) --------------------(3.44)

Taking the logarithm of eq. (3.43), we then obtain the excellent approximation

ln𝒵 = −αN + lnZ(N), ----------------------- (3.45)

where we have neglected the term ln(Δ*N′), which is utterly negligible compared to the other terms,
which are of order N. Here the sum in eq. (3.44) is easily performed, since it extends over all
possible numbers n_r without any restriction. The quantity 𝒵 is called the grand partition function.
Now let us evaluate 𝒵:

𝒵 = ∑_{n_1,n_2,…} e^{−α(n_1 + n_2 + …)} e^{−β(n_1ε_1 + n_2ε_2 + …)}, ------------ (3.46)

where the sum is over all possible numbers n_r without restriction. By regrouping terms one
obtains

𝒵 = ∏_r [∑_{n_r=0}^{∞} e^{−(α + βε_r) n_r}] ------------ (3.47)

This is just a product of simple geometric series. Hence

ln𝒵 = −∑_r ln(1 − e^{−α−βε_r}) ------------------ (3.48)

Then eq. (3.45) yields

lnZ(N) = αN − ∑_r ln(1 − e^{−α−βε_r}) ---------------- (3.49)
Our argument assumed that the parameter α is to be chosen so that the function e^{−αN′} Z(N′) has
its maximum for N′ = N, i.e. so that

∂/∂N′ [−αN′ + lnZ(N′)] = 0 at N′ = N ----------------------- (3.50)

Since this condition involves the particular value N′ = N, α itself must be a function of N. By
virtue of eq. (3.45), the condition eq. (3.50) is equivalent to

α = ∂lnZ/∂N -------------------- (3.51)

Using the expression of eq. (3.48), the relation which determines α is then

N = ∑_r 1/(e^{α + βε_r} − 1) ------------------- (3.52)
Applying the general relation n̄_s = −(1/β) ∂lnZ/∂ε_s to eq. (3.49), one obtains

n̄_s = −(1/β)[∂ln𝒵/∂ε_s + (N + ∂ln𝒵/∂α) ∂α/∂ε_s] ---------------- (3.53)

The last term takes into account the fact that α is a function of ε_s through the relation eq. (3.52).
But this term vanishes by virtue of eq. (3.51), since N + ∂ln𝒵/∂α = 0. Hence it simplifies to

n̄_s = 1/(e^{α + βε_s} − 1), --------------------------- (3.54)

which is the obvious requirement needed to satisfy the conservation of particles, eq. (3.52). The chemical
potential of the gas is given by μ = −α/β = −kTα, so that α = −βμ.

Then the calculation of the dispersion gives

(Δn_s)² = −(1/β) ∂n̄_s/∂ε_s = e^{α + βε_s}/(e^{α + βε_s} − 1)² ---------------- (3.55)

Hence

(Δn_s)²/n̄_s² = 1/n̄_s + 1 ----------------- (3.56)

Thus the relative dispersion does not become arbitrarily small even when n̄_s ≫ 1.

3.10 Fermi-Dirac statistics


The grand partition function is given by

𝒵 = ∑_{n_1,n_2,…} e^{−(α + βε_1)n_1} e^{−(α + βε_2)n_2} ··· ------------------------ (3.57)

Since the summation is over the two values n_r = 0, 1 for each r, it becomes

ln𝒵 = ∑_r ln(1 + e^{−α−βε_r}) ----------------------- (3.58)

Then

lnZ(N) = αN + ∑_r ln(1 + e^{−α−βε_r}) ------------------------- (3.59)

Except for some important sign changes, this expression is of the same form as the BE case. The parameter α is
again determined from the condition analogous to eq. (3.51). Thus,

N = ∑_r 1/(e^{α + βε_r} + 1) -------------------------- (3.60)

By using n̄_s = −(1/β) ∂lnZ/∂ε_s, we can obtain

n̄_s = −(1/β) ∂/∂ε_s [αN + ∑_r ln(1 + e^{−α−βε_r})] --------------------------- (3.61)

This expression simplifies to

n̄_s = 1/(e^{α + βε_s} + 1) ------------------------------- (3.62)

3.11 Quantum statistics in the classical limit


The quantum-statistical description of a system of ideal gas can be summarized by

n̄_r = 1/(e^{α + βε_r} ± 1), ---------------------------------(3.63)

where the +/− sign in the denominator represents FD/BE statistics and α is supposed to be obtained
from the relation ∑_r n̄_r = ∑_r 1/(e^{α + βε_r} ± 1) = N ---------------------------------------------(3.64)

Now we want to evaluate n̄_r under some limiting cases.

Consider a system of gas where the concentration is sufficiently low. The condition (3.64) can then
be satisfied only when each term in the sum over all states is sufficiently small, n̄_r ≪ 1, or
e^{α + βε_r} ≫ 1 for all r.

Similarly, for a system consisting of a fixed number N of particles, if the temperature is sufficiently
large so that β is small, the parameter α must be large enough to prevent the sum from exceeding
N; then again e^{α + βε_r} ≫ 1, or n̄_r ≪ 1.

Under these conditions equation (3.63) becomes

n̄_r = e^{−α} e^{−βε_r} ---------------------------------------------------------(3.65)

Thus the parameter α can be determined from

∑_r n̄_r = e^{−α} ∑_r e^{−βε_r} = N

⇒ e^{−α} = N / ∑_r e^{−βε_r} ---------------------------------------------------------(3.66)

⇒ n̄_r = N e^{−βε_r} / ∑_r e^{−βε_r} ---------------------------------------------(3.67)

In the classical limit of sufficiently low concentration or sufficiently high temperature the
quantum statistics, FD and BE statistics, reduce to MB statistics.

In the quantum statistics the partition function satisfies the relation

lnZ = αN ± ∑_r ln(1 ± e^{−α−βε_r}) ---------------------------------------------(3.68)

For x ≪ 1,

ln(1 + x) ≈ x − x²/2 + …

Since e^{−α−βε_r} ≪ 1 in the classical limit,

lnZ ≈ αN + ∑_r e^{−α−βε_r}

= αN + e^{−α} ∑_r e^{−βε_r}

= αN + N ------------------------------------------------------------------------(3.69)

From equation (3.66),

−α = lnN − ln∑_r e^{−βε_r}, i.e. α = −lnN + ln∑_r e^{−βε_r}

Hence equation (3.69) becomes

lnZ = −NlnN + N ln∑_r e^{−βε_r} + N ---------------------------------------------(3.70)

= N ln∑_r e^{−βε_r} − lnN!  ;  using Stirling's approximation lnN! = NlnN − N

= ln ζ^N − lnN!  ;  ζ = ∑_r e^{−βε_r}

⇒ Z = ζ^N / N!

3.12 Evaluation of Partition Function

Consider a system of monatomic ideal gas. Suppose the system is in the classical limit, i.e., it has
sufficiently low concentration or it is at sufficiently high temperature. The partition function of
the system is then given by equation (3.70):

lnZ = N(−lnN + ln∑_r e^{−βε_r} + 1)

= N(−lnN + lnζ + 1), ----------------------------------------------------------(3.71)

where ζ = ∑_r e^{−βε_r} and the sum is over all possible states of a single particle. To evaluate this
sum we need to know the energy of a single particle corresponding to the possible states.

A single non-interacting particle of mass m, with position vector r and momentum p,
confined in a container of volume V can be described by a wave function ψ(r, t) of the plane
wave form

ψ(r, t) = A e^{i(k·r − ωt)}, ------------------------------------ (3.72)

which propagates in the direction of the wave vector k. The energy of the particle is then
given by ε = p²/2m = ħ²k²/2m,

where the momentum of the particle is given by the de Broglie relation

p = ħk ------------------(3.73)

⇒ ε = (ħ²/2m)(k_x² + k_y² + k_z²) ----------------------(3.74)

The wave function in eq. (3.72) is assumed to satisfy periodic boundary conditions, provided that the
dimensions of the container are large compared to the de Broglie wavelength of the particle:

ψ(x + L_x, y, z) = ψ(x, y, z), ψ(x, y + L_y, z) = ψ(x, y, z), ψ(x, y, z + L_z) = ψ(x, y, z), ------------------------------------------------ (3.75)

where L_x, L_y and L_z are the dimensions of the container.

ψ(x + L_x, y, z) = A e^{i(k_x(x + L_x) + k_y y + k_z z − ωt)} = ψ(x, y, z) e^{i k_x L_x} --------------------------------------------------(3.76)

Hence to satisfy the periodic boundary condition, eq. (3.75), we must have

k_x = 2π n_x/L_x, k_y = 2π n_y/L_y, k_z = 2π n_z/L_z,

where n_x, n_y and n_z are integers.

The eq. (3.74) can then be rewritten as:

ε = (2π²ħ²/m)(n_x²/L_x² + n_y²/L_y² + n_z²/L_z²) --------- (3.77)

ζ = ∑_r e^{−βε_r} = ∑_{n_x,n_y,n_z} exp[−(βħ²/2m)((2πn_x/L_x)² + (2πn_y/L_y)² + (2πn_z/L_z)²)] ---------------------------------------(3.78)

Successive terms in the sum in eq. (3.78) correspond to a very small change in k_x, Δk_x = 2π/L_x; hence a
small range between k_x and k_x + dk_x contains Δn_x = (L_x/2π) dk_x terms which have nearly the same
magnitude.
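The following numerical cross-check (not part of the original note; the gas species and box size are assumed values) evaluates the single-particle sum of eq. (3.78) for a small cubic box, using the fact that the energy separates in x, y and z, and compares it with the integral approximation ζ = V (2πmkT/h²)^{3/2}:

import numpy as np

h = 6.62607015e-34
hbar = h / (2 * np.pi)
k = 1.380649e-23
m = 6.6465e-27          # helium atom mass (kg), assumed example
T = 300.0
L = 1.0e-8              # box side (m), kept small so the sum converges quickly
beta = 1.0 / (k * T)

n = np.arange(-2000, 2001)                       # quantum numbers n_x
eps_1d = (2 * np.pi**2 * hbar**2 / m) * n**2 / L**2
zeta_1d = np.sum(np.exp(-beta * eps_1d))         # sum over n_x
zeta = zeta_1d**3                                # energy separates in x, y, z

zeta_integral = L**3 * (2 * np.pi * m * k * T / h**2) ** 1.5
print(zeta, zeta_integral)                       # agree to high accuracy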

Chapter Four

System of Interacting Particles

4.1. Lattice Vibrations and Normal Modes

Consider a system of solid consisting of N atoms. Let the mass and position vector of the i-th
atom be denoted by m_i and r_i, and let the equilibrium position of this atom be r_i^{(0)}. Since the
atoms can vibrate about their equilibrium positions, the kinetic energy of vibration of the system
is given by

K = (1/2) ∑_{i=1}^{N} ∑_{α=1}^{3} m_i ξ̇_{iα}²,

where α stands for the x, y and z components and ξ_{iα} = x_{iα} − x_{iα}^{(0)} is the displacement of the
i-th atom from its equilibrium position.

If we assume vibrations of relatively small amplitude, the potential energy can be expanded as

V = V_0 + ∑_{i,α} (∂V/∂ξ_{iα})_0 ξ_{iα} + (1/2) ∑_{i,α} ∑_{j,γ} (∂²V/∂ξ_{iα}∂ξ_{jγ})_0 ξ_{iα} ξ_{jγ} + …

V_0 is the potential energy in the equilibrium configuration of the atoms. At equilibrium V must
be a minimum, hence (∂V/∂ξ_{iα})_0 = 0.

⇒ V ≈ V_0 + (1/2) ∑_{i,α} ∑_{j,γ} A_{iα,jγ} ξ_{iα} ξ_{jγ},

where A_{iα,jγ} = (∂²V/∂ξ_{iα}∂ξ_{jγ})_0.

The total energy associated with vibrations of the atoms in the solid is then

H = V_0 + (1/2) ∑_{i,α} m_i ξ̇_{iα}² + (1/2) ∑_{i,α} ∑_{j,γ} A_{iα,jγ} ξ_{iα} ξ_{jγ}

This equation can be simplified by eliminating the cross-product terms in the potential energy,
and this can be done by changing from the 3N old coordinates ξ_{iα} to a new set of 3N generalized
coordinates q_γ by a linear transformation:

ξ_{iα} = ∑_{γ=1}^{3N} B_{iα,γ} q_γ

Thus the total energy can be rewritten as

H = V_0 + (1/2) ∑_{γ=1}^{3N} (q̇_γ² + ω_γ² q_γ²)

This expression is similar to that of the total energy of 3N independent 1D harmonic oscillators.
The total energy of one 1D harmonic oscillator is given by

H_γ = (1/2)(q̇_γ² + ω_γ² q_γ²)

The corresponding energies for the possible quantum states of this oscillator are then

E_γ = (n_γ + 1/2) ħω_γ,  n_γ = 0, 1, 2, 3, …

The total energy of a system of 3N independent harmonic oscillators, where the state of the
whole system is specified by {n_1, n_2, …, n_{3N}}, becomes

E = V_0 + ∑_{γ=1}^{3N} (n_γ + 1/2) ħω_γ

= V_0 + (1/2) ∑_γ ħω_γ + ∑_γ n_γ ħω_γ

= Nη + ∑_{γ=1}^{3N} n_γ ħω_γ,

where Nη ≡ V_0 + (1/2) ∑_γ ħω_γ is independent of the n_γ.

The partition function of the whole system is then

Z = ∑ e^(-βE) = ∑_{n_1,n_2,…,n_3N} exp[-βNη - β(n_1ħω_1 + n_2ħω_2 + ⋯ + n_3Nħω_3N)]

= e^(-βNη) ∑_{n_1,n_2,…,n_3N} e^(-βn_1ħω_1) e^(-βn_2ħω_2) ⋯ e^(-βn_3Nħω_3N)

= e^(-βNη) (∑_{n_1=0}^{∞} e^(-βn_1ħω_1)) (∑_{n_2=0}^{∞} e^(-βn_2ħω_2)) ⋯ (∑_{n_3N=0}^{∞} e^(-βn_3Nħω_3N))

= e^(-βNη) (1/(1 - e^(-βħω_1))) (1/(1 - e^(-βħω_2))) ⋯ (1/(1 - e^(-βħω_3N)))

lnZ = -βNη + ln(1/(1 - e^(-βħω_1))) + ⋯ + ln(1/(1 - e^(-βħω_3N)))

= -βNη - (ln(1 - e^(-βħω_1)) + ⋯ + ln(1 - e^(-βħω_3N)))

= -βNη - ∑_{γ=1}^{3N} ln(1 - e^(-βħω_γ))

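The factorization of Z into independent geometric series can be verified directly for a small set of oscillators. The sketch below (the three frequencies and the temperature are arbitrary assumptions, with V_0 taken as zero so that Nη is just the zero-point sum) compares a brute-force sum over occupation numbers, truncated at a large cutoff, with the closed-form result.

# Brute-force check of Z = exp(-beta*N*eta) * prod_g 1/(1 - exp(-beta*hbar*w_g))
# against a direct sum over the occupation numbers n_g.
import numpy as np
from itertools import product

hbar = 1.0                           # units with hbar = 1 (assumption)
beta = 0.7                           # inverse temperature (assumption)
omegas = np.array([1.0, 1.5, 2.3])   # three normal-mode frequencies (assumption)

N_eta = 0.5 * hbar * omegas.sum()    # N*eta = V0 + (1/2)*sum(hbar*w), with V0 = 0 here

# Closed form
lnZ_closed = -beta * N_eta - np.log(1 - np.exp(-beta * hbar * omegas)).sum()

# Brute force: sum exp(-beta*E) over n_g = 0 .. cutoff-1 for every oscillator
cutoff = 40
Z_brute = 0.0
for ns in product(range(cutoff), repeat=len(omegas)):
    E = N_eta + np.dot(ns, hbar * omegas)
    Z_brute += np.exp(-beta * E)

print(lnZ_closed, np.log(Z_brute))   # the two agree to many digits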
The angular frequencies ω_γ, which are also called normal-mode frequencies, are closely spaced. Hence,
if σ(ω)dω represents the number of normal modes with angular frequency in the range between ω
and ω + dω, the expression for lnZ becomes

lnZ = -βNη - ∫_0^∞ ln(1 - e^(-βħω)) σ(ω) dω

The mean energy of the system is then

Ē = -∂lnZ/∂β = Nη + ∫_0^∞ (ħω e^(-βħω)/(1 - e^(-βħω))) σ(ω) dω

= Nη + ∫_0^∞ (ħω/(e^(βħω) - 1)) σ(ω) dω

The heat capacity at constant volume is

C_V = (∂Ē/∂T)_V = (∂Ē/∂β)_V (∂β/∂T)

∂β/∂T = ∂(1/k_BT)/∂T = -1/(k_BT²) = -k_Bβ²

C_V = -k_Bβ² (∂Ē/∂β)_V

= (-k_Bβ²) [ -∫_0^∞ (ħω)² e^(βħω)/(e^(βħω) - 1)² σ(ω) dω ]

= k_Bβ² ∫_0^∞ (ħω)² e^(βħω)/(e^(βħω) - 1)² σ(ω) dω

If βħω << 1 (the high-temperature, classical limit):

e^(βħω) = 1 + βħω + (βħω)²/2 + ⋯ ≈ 1 + βħω

C_V = k_Bβ² ∫_0^∞ (ħω)² (1 + βħω)/(βħω)² σ(ω) dω

= k_B ∫_0^∞ (1 + βħω) σ(ω) dω

≈ k_B ∫_0^∞ σ(ω) dω ;  where we neglect βħω as compared to 1

= 3N k_B = 3n N_A k_B = 3nR

For 1 mole of the system, n = 1 and C_V = 3R.
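The approach of C_V to this classical value can be seen numerically. If all 3N modes are assigned a single frequency ω_E (an Einstein-like spectrum, used here purely for illustration), the general formula reduces to C_V = 3Nk_B (βħω_E)² e^(βħω_E)/(e^(βħω_E) - 1)². The frequency and temperatures in the sketch below are assumptions.

# Heat capacity with all 3N modes at one frequency (Einstein-like spectrum),
# showing the high-temperature limit C_V -> 3*N*kB.
import numpy as np

kB = 1.380649e-23        # J/K
hbar = 1.054571817e-34   # J s
N = 6.02214076e23        # one mole of atoms
omega_E = 4.0e13         # rad/s, illustrative phonon-scale frequency (assumption)

def Cv(T):
    x = hbar * omega_E / (kB * T)                 # x = beta*hbar*omega
    return 3 * N * kB * x**2 * np.exp(x) / (np.exp(x) - 1)**2

for T in [30, 100, 300, 1000, 3000]:
    print(T, Cv(T) / (3 * N * kB))                # ratio tends to 1 when beta*hbar*omega << 1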

4.2 Debye Approximation

By employing the Debye approximation it is possible to calculate the density of normal-mode
frequencies. In the Debye approximation the discreteness of the atoms in the solid is neglected: the solid is
treated as if it were a continuous elastic medium, and the atoms are assumed to have
approximately equal masses.

The approximation of treating the solid as a continuous elastic medium is valid when λ >> a, where λ
is the wavelength of the vibrations of the elastic medium while a is the mean interatomic separation
in the solid.

Consider a solid which can be treated as a continuous elastic medium of volume V. Let u⃗(r⃗, t)
denote the displacement of a point in this medium from its equilibrium position. This
displacement is expected to satisfy a wave equation which describes the propagation through the
medium of sound waves travelling with some effective velocity c_s.

From the previous chapter we have, for a given wave vector k⃗, that the number Δn_x of possible
integers n_x for which k_x lies in the range between k_x and k_x + dk_x is

Δn_x = (L_x/2π) dk_x

The number of states ρ(k⃗)d³k⃗ for which k⃗ lies in the range between k⃗ and k⃗ + dk⃗ is then

ρ(k⃗) d³k⃗ = Δn_x Δn_y Δn_z = (L_x L_y L_z/(2π)³) dk_x dk_y dk_z = (V/(2π)³) d³k⃗

The number of states for which the magnitude of k⃗, K = |k⃗|, lies in the range between K and K + dK
is obtained by summing the above relation over the volume in k⃗-space of a spherical shell of inner radius
K and outer radius K + dK:

ρ_K dK = (V/(2π)³) 4πK² dK

= (V/2π²) K² dK

Since a sound wave of wave vector k⃗ corresponds to an angular frequency ω = c_s K, the number
of possible wave modes with frequency between ω and ω + dω is then

σ_c(ω) dω = 3 (V/(2π)³) (4πK² dK)

= 3 (V/(2π)³) (4π) (ω²/c_s²) (dω/c_s)

= (3V/(2π²c_s³)) ω² dω

where the factor of 3 accounts for the three possible polarizations of each mode.

According to the Debye approximation, for low frequencies ω the density of modes σ_c(ω) for the
continuous elastic medium is nearly the same as the mode density σ(ω) for the actual solid.

The Debye approximation, σ(ω) ≈ σ_c(ω), is taken to be valid not only for low frequencies but for all 3N
lowest-frequency modes of the elastic continuum, and it is defined by

σ_D(ω) = { (3V/(2π²c_s³)) ω²   for ω < ω_D
         {  0                  for ω > ω_D

where ω_D is called the Debye frequency and it is chosen so that σ_D(ω) yields the correct total
number of 3N normal modes.

∫_0^∞ σ_D(ω) dω = ∫_0^{ω_D} σ_c(ω) dω = 3N

⟹ (3V/(2π²c_s³)) ∫_0^{ω_D} ω² dω = (V/(2π²c_s³)) ω_D³ = 3N

⟹ ω_D = (6π² c_s³ N/V)^(1/3) = c_s (6π² N/V)^(1/3)

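For orientation, ω_D and the corresponding Debye temperature θ_D = ħω_D/k_B can be estimated from this formula. The sound speed and number density below are rough, copper-like values chosen purely for illustration; they are assumptions, not data from these notes.

# Estimate the Debye frequency and Debye temperature from w_D = c_s*(6*pi^2*N/V)**(1/3).
import numpy as np

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J/K
c_s = 2600.0             # m/s, effective average sound speed (copper-like assumption)
n = 8.5e28               # atoms per m^3 (copper-like assumption)

omega_D = c_s * (6 * np.pi**2 * n) ** (1 / 3)
theta_D = hbar * omega_D / kB

print(f"omega_D = {omega_D:.2e} rad/s,  theta_D = {theta_D:.0f} K")

The result is a few hundred kelvin, which is the typical order of magnitude of tabulated Debye temperatures for simple metals.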
4.3. Calculation of the Partition Function for Low Densities

Consider a monatomic gas of N identical particles of mass M in a container of volume V at
temperature T. Assume the system is at sufficiently high temperature and has sufficiently low
density that the classical approximation is valid. The total energy of the system is then

H = K + U

where K is the kinetic energy

K = (1/2M) ∑_{i=1}^{N} p_i²

and U is the potential energy of intermolecular interaction, which is a function of the relative separation
between the interacting molecules:

U_{ij} = U(R_{ij}) ;  R_{ij} = |r⃗_i - r⃗_j|

Approximately, U is given by the sum of the interactions between all pairs of molecules

U = ∑∑_{i<j} U_{ij} = ½ ∑∑_{i≠j} U_{ij}

A semi-empirical potential called the Lennard-Jones potential is given by

U(R) = U_0 [ (R_0/R)^12 - 2 (R_0/R)^6 ]

[Figure: sketch of U(R) versus R for the Lennard-Jones potential, showing the minimum value -U_0 at the separation R = R_0.]
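A quick numerical check of this form of the potential (the values of U_0 and R_0 below are arbitrary illustrative units): the minimum of U(R) indeed lies at R = R_0 with depth -U_0.

# Verify numerically that U(R) = U0*((R0/R)**12 - 2*(R0/R)**6) has its minimum
# value -U0 at R = R0.
import numpy as np

U0, R0 = 1.0, 1.0          # illustrative units (assumption)

def U(R):
    return U0 * ((R0 / R) ** 12 - 2 * (R0 / R) ** 6)

R = np.linspace(0.8, 3.0, 100_001)
u = U(R)
i = np.argmin(u)
print(R[i], u[i])           # approximately (1.0, -1.0): minimum at R0 with depth -U0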

The classical partition function of the system is given by

Z = (1/(N! h^3N)) ∫ e^(-βH) d³r_1 ⋯ d³r_N d³p_1 ⋯ d³p_N

= (1/(N! h^3N)) ∫ exp[-(β/2M)(p_1² + p_2² + ⋯ + p_N²)] d³p_1 ⋯ d³p_N ∫ e^(-βU) d³r_1 ⋯ d³r_N

= (1/(N! h^3N)) (∫ e^(-βp²/2M) d³p)^N ∫ e^(-βU) d³r_1 ⋯ d³r_N

= (1/(N! h^3N)) (2πM/β)^(3N/2) ∫ e^(-βU) d³r_1 ⋯ d³r_N

= (1/N!) (2πM/(βh²))^(3N/2) Z_U

where Z_U = ∫ e^(-βU) d³r_1 ⋯ d³r_N

For an ideal gas U ≈ 0; for a system at high temperature β → 0.

In both of these limiting cases e^(-βU) → 1, hence Z_U → V^N.

For a system of gas whose density is not too large one can calculate the approximate value of Z_U.
The mean potential energy of the gas is

Ū = ∫ U e^(-βU) d³r_1 ⋯ d³r_N / ∫ e^(-βU) d³r_1 ⋯ d³r_N = -∂lnZ_U/∂β

Integrating over β,

∫_0^β (∂lnZ_U/∂β′) dβ′ = -∫_0^β Ū(β′) dβ′

lnZ_U(β) - lnZ_U(0) = -∫_0^β Ū(β′) dβ′

For β = 0, Z_U(0) = V^N, so lnZ_U(0) = N lnV, and

lnZ_U(β) = N lnV - ∫_0^β Ū(β′) dβ′

For a system of N molecules, the number of pairs of molecules in the system is

N(N - 1)/2 ≈ ½ N²  for N >> 1

⟹ Ū = ½ N² ū

where ū is the mean potential energy of interaction of one pair of molecules.

By approximating that the motion of any pair of molecules is not correlated appreciably with the
motion of the remaining molecules, we can consider a pair of molecules as a system in thermal
contact with a system containing the rest of the molecules (a heat reservoir).

Hence the mean pair potential ū is given by

ū = ∫ U e^(-βU) d³R / ∫ e^(-βU) d³R = -∂/∂β ln ∫ e^(-βU) d³R

∫ e^(-βU) d³R = ∫ [1 + (e^(-βU) - 1)] d³R = ∫ d³R + ∫ (e^(-βU) - 1) d³R = V + I = V(1 + I/V)

where I(β) ≡ ∫ (e^(-βU) - 1) d³R = ∫_0^∞ (e^(-βU(R)) - 1) 4πR² dR

⟹ ū = -∂/∂β ln[V(1 + I/V)]

= -∂/∂β [lnV + ln(1 + I/V)]

= -∂/∂β ln(1 + I/V) ;  since V is independent of β

For I << V,  ln(1 + I/V) ≈ I/V

ū = -∂/∂β (I/V) = -(1/V) ∂I/∂β

Ū = ½ N² ū = -(N²/2V) ∂I/∂β

lnZ_U(β) = N lnV - ∫_0^β Ū(β′) dβ′

= N lnV + (N²/2V) ∫_0^β (∂I/∂β′) dβ′

= N lnV + (N²/2V) I(β) ;  since I(0) = 0

⟹ Z_U(β) = V^N e^(N²I/2V)   and   Z = (1/N!) (2πM/(βh²))^(3N/2) V^N e^(N²I/2V)

4.4. Equation of State and Virial Coefficients

The mean pressure of a system can be obtained from the partition function of the system as

p̄ = (1/β) ∂lnZ/∂V

The partition function for a system of interacting particles is given by

Z = (1/N!) (2πM/(βh²))^(3N/2) Z_U

lnZ = ln[(1/N!) (2πM/(βh²))^(3N/2)] + lnZ_U

∂lnZ/∂V = ∂lnZ_U/∂V because the first term on the r.h.s. of the above equation is independent of V.

p̄ = (1/β) ∂lnZ_U/∂V = (1/β) ∂/∂V (N lnV + N²I/2V)

= (1/β) (N/V - N²I/2V²)

βp̄ = N/V - (I/2)(N/V)² = n - ½ I n² ,  where n = N/V is the number density.

For ideal gas particles, the equation of state is given by

p̄V = N k_B T

⟹ βp̄ = N/V = n

The deviation of the real-gas equation of state from that of the ideal gas can be taken into account by
introducing correction terms in the above equation. This case was discussed by Kamerlingh Onnes, who
introduced a virial expansion to generalize the ideal gas law:

βp̄ = n + B_2(T) n² + B_3(T) n³ + …

where B_2, B_3, … are called virial coefficients.

The second virial coefficient B_2 is then

B_2 = -½ I = -½ ∫_0^∞ (e^(-βU(R)) - 1) 4πR² dR

= -2π ∫_0^∞ (e^(-βU(R)) - 1) R² dR

Let the potential U(R) be approximated by

U(R) = {  ∞                 for R < R_0
       {  -U_0 (R_0/R)^6    for R > R_0

where R_0 is the minimum possible separation between particles.

Then B_2 = -2π ∫_0^{R_0} (e^(-βU) - 1) R² dR - 2π ∫_{R_0}^∞ (e^(-βU) - 1) R² dR

For R < R_0, U(R) = ∞ ⟹ e^(-βU) = 0

B_2 = 2π ∫_0^{R_0} R² dR - 2π ∫_{R_0}^∞ (e^(-βU) - 1) R² dR

= (2π/3) R_0³ - 2π ∫_{R_0}^∞ (e^(-βU) - 1) R² dR

If βU_0 << 1,  e^(-βU) - 1 ≈ -βU

B_2 = (2π/3) R_0³ + 2πβ ∫_{R_0}^∞ U(R) R² dR

= (2π/3) R_0³ + 2πβ ∫_{R_0}^∞ [-U_0 (R_0/R)^6] R² dR

= (2π/3) R_0³ - 2πβ U_0 R_0^6 ∫_{R_0}^∞ R^(-4) dR

= (2π/3) R_0³ - 2πβ U_0 R_0^6 (1/3R_0³)

= (2π/3) R_0³ - (2π/3) β U_0 R_0³

= (2π/3) R_0³ [1 - βU_0]

= b′ - βa′ ;  where b′ = (2π/3) R_0³ and a′ = (2π/3) U_0 R_0³

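The approximation B_2 ≈ b′ - βa′ can be checked against a direct numerical evaluation of B_2 = -2π∫(e^(-βU(R)) - 1)R²dR for the same hard-core-plus-tail potential. The parameter values and temperature in the sketch below are illustrative assumptions chosen so that βU_0 << 1.

# Compare the approximate second virial coefficient B2 = b' - beta*a' with a
# numerical evaluation of B2 = -2*pi * Int[(exp(-beta*U) - 1) * R^2, {R, 0, inf}]
# for U = infinity (R < R0) and U = -U0*(R0/R)**6 (R > R0).
import numpy as np
from scipy.integrate import quad

U0, R0 = 0.2, 1.0     # well depth and hard-core radius (illustrative units, assumption)
beta = 0.5            # chosen so that beta*U0 = 0.1 << 1 (assumption)

def integrand(R):
    # exp(-beta*U) - 1 for R > R0, where U = -U0*(R0/R)**6
    return (np.exp(beta * U0 * (R0 / R) ** 6) - 1.0) * R ** 2

# Hard core: exp(-beta*U) = 0, so -2*pi*Int[(0 - 1)*R^2, {R, 0, R0}] = +(2*pi/3)*R0**3
tail, _ = quad(integrand, R0, np.inf)
B2_numeric = (2 * np.pi / 3) * R0 ** 3 - 2 * np.pi * tail

b_prime = (2 * np.pi / 3) * R0 ** 3
a_prime = (2 * np.pi / 3) * U0 * R0 ** 3
B2_approx = b_prime - beta * a_prime

print(B2_numeric, B2_approx)   # the two agree to within a few tenths of a percent here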
In the virial expansion, neglecting terms of order higher than n² yields

βp̄ = n + B_2 n²

= n + (b′ - βa′) n²

= n + (b′ - a′/k_BT) n²

p̄ = n k_BT + (b′ k_BT - a′) n²

= n k_BT + n² b′ k_BT - a′ n²

⟹ p̄ + a′n² = n k_BT (1 + nb′)

For very low density such that nb′ << 1,

1/(1 - nb′) = 1 + nb′ + (nb′)² + ⋯ ≈ 1 + nb′

so that

p̄ + a′n² = n k_BT (1 + nb′) ≈ n k_BT / (1 - nb′)

⟹ (p̄ + a′n²)(1 - nb′) = n k_BT ,  i.e.  (p̄ + a′n²)(1/n - b′) = k_BT

n = N/V = νN_A/V = N_A/v ;  where ν = N/N_A is the number of moles

and v = V/ν is the molar volume.

(p̄ + a′N_A²/v²)(v/N_A - b′) = k_BT

(p̄ + a′N_A²/v²)(v - N_A b′) = N_A k_BT

(p̄ + a/v²)(v - b) = RT ,  where a = a′N_A² , b = b′N_A and R = N_A k_B

or (p̄ + aν²/V²)(V - νb) = νRT, the Van der Waals' equation of state; a and b are called Van der
Waals' constants.
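To put numbers to this result, the sketch below converts the microscopic parameters b′ and a′ into the molar Van der Waals constants a = a′N_A² and b = b′N_A, and compares the Van der Waals pressure with the ideal-gas pressure for one mole in one litre; the values of U_0, R_0, V and T are illustrative assumptions.

# Van der Waals constants from the microscopic parameters, and the pressure from
# (P + a*nu**2/V**2)*(V - nu*b) = nu*R*T compared with the ideal-gas law.
import numpy as np

kB = 1.380649e-23        # J/K
NA = 6.02214076e23       # 1/mol
R_gas = NA * kB          # J/(mol K)

U0 = 1.5e-21             # J, pair-potential well depth (assumption)
R0 = 3.5e-10             # m, hard-core separation (assumption)

b_prime = (2 * np.pi / 3) * R0 ** 3        # m^3 per molecule
a_prime = (2 * np.pi / 3) * U0 * R0 ** 3   # J m^3 per molecule

a = a_prime * NA ** 2    # J m^3 / mol^2
b = b_prime * NA         # m^3 / mol

nu, V, T = 1.0, 1.0e-3, 300.0              # 1 mol in 1 litre at 300 K (assumption)
P_vdw = nu * R_gas * T / (V - nu * b) - a * nu ** 2 / V ** 2
P_ideal = nu * R_gas * T / V

print(f"a = {a:.3e} J m^3/mol^2,  b = {b:.3e} m^3/mol")
print(f"P_vdw = {P_vdw:.3e} Pa,  P_ideal = {P_ideal:.3e} Pa")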
