
Introduction to Quantum Mechanics

Arnab Rudra

Department of Physics

Indian Institute of Science Education and Research Bhopal

Bhopal Bypass Road, Bhauri


Madhya Pradesh 462066
India.

Last update: 09:27 hours, Sunday 27th October, 2024.


Contents

1 Introduction to Wave mechanics 1


1.1 Dimensional analysis and Wilsonian paradigm . . . . . . . . . . . . . . . . . . . . 1
1.2 Formalisms of Quantum Mechanics . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 A necessary mathematical tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Vector space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5 Maps from vector spaces to itself . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.6 Inner product and inner product space . . . . . . . . . . . . . . . . . . . . . . . . 11
1.7 Unitary operators (and anti-unitary operators) . . . . . . . . . . . . . . . . . . . 14
1.8 Adjoint of an operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.9 Action of multiple operator and (anti-)commutator . . . . . . . . . . . . . . . . . 16
1.10 Postulates of Quantum Mechanics . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2 One-dimensional Quantum systems 22


2.1 Particle in a box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.2 Quantum Harmonic oscillator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.3 Discussion on general time-independent potential . . . . . . . . . . . . . . . . . . 31
2.4 Free particle and plane wave solutions . . . . . . . . . . . . . . . . . . . . . . . . 34
2.5 Probability current and unitarity . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.6 Plane wave states and S matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.7 Effect of multiple non-overlapping potentials . . . . . . . . . . . . . . . . . . . . . 42
2.8 Transfer matrix in terms of reflection and transmission coefficient . . . . . . . . 44
2.9 Example of non-overlapping potential: Two delta functions . . . . . . . . . . . . 45
2.10 Quantum tunneling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.11 Implications of Quantum tunneling . . . . . . . . . . . . . . . . . . . . . . . . . . 54

3 Some general features of One-dimensional Quantum systems 66


3.1 Consequence of symmetry of the S matrix . . . . . . . . . . . . . . . . . . . . . . 66
3.2 A few general properties of bound states in one-dimensional system . . . . . . . . 75
3.3 Consequence of symmetry on Bound states . . . . . . . . . . . . . . . . . . . . . 78
3.4 Bound states from Scattering matrix and Transfer matrix . . . . . . . . . . . . . 82
3.5 Levinson’s theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

4 A bit more on Framework of Quantum Mechanics 86


4.1 Composite systems and tensor product of Hilbert spaces . . . . . . . . . . . . . . 86
4.2 Review of Symmetry in classical mechanics . . . . . . . . . . . . . . . . . . . . . 90
4.3 Symmetries in a Quantum theory . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.4 Uncertainty relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
4.5 Two-dimensional isotropic oscillators . . . . . . . . . . . . . . . . . . . . . . . . . 105

“I think I can safely say that nobody understands quantum mechanics.”

- Richard Feynman

Preface Quantum Mechanics is one of the foundational pillars of Physics. This is a note
for the first course in quantum mechanics. We welcome questions, comments, criticisms and
suggestions. They can be emailed to rudra@iiserb.ac.in or can be submitted here.

Acknowledgment The author is grateful to Mahesh KNB, Manoj Mandal and especially
Rahul Shaw for various inputs on the content of the note. The author would like to thank all
the students of PHY303N(2022) and PHY303N(2024) for the questions during the class. This
provided a constant source of encouragement to improve the note.

Pre-requisites The course requires one course on mechanics and mathematical methods.

IISERB(2024) QMech I - 2
Resources: Textbooks There are many textbooks on this subject.

1. J. J. Sakurai, Modern Quantum Mechanics

2. P. A. M. Dirac, The Principles of Quantum Mechanics.

3. Steven Weinberg, Lectures on Quantum Mechanics

4. Feynman, Hibbs, Quantum Mechanics and Path-integrals

Resources: Other online resources

1. Two courses in Quantum Mechanics by David Tong: Course I, Course II

2. Principles of Quantum Mechanics by David Skinner

3. MIT OpenCourseWare: Quantum Physics I, Quantum Physics II, Quantum Physics III

4. Quantum Mechanics by JJ Binney


Chapter 1

Introduction to Wave mechanics

1.1 Dimensional analysis and Wilsonian paradigm


Physics is an attempt by human beings to explain diverse natural phenomena in terms of a few
general principles, which can also be presented in a mathematical form. The number of physical
phenomena is enormous, and hence, it is not practically useful if we come up with a large number
of physical laws. The success of physics lies in the fact that we have managed to explain many
things with a few mathematical laws. So before we start with Quantum Mechanics, let’s try to
understand how we analyze a physical problem ab initio.
Physics is an art of approximation based on dimensional consideration. Before going into
a general discussion on this point, we start with an example. Consider a container full of gas
molecules.

Figure 1.1: A box full of gas molecules

If we are given such a box, the most natural questions that one would ask are:
• What is the energy stored in the box?

• What is the temperature?

• What is the size of the box?

• What is the mass of the gas molecules and the box?

• What is the mean free path?

• ...
We now have a list of questions. Whenever we want to study a system, we have many questions.
In such a scenario, it is important to identify first the following things:
• Which questions are more important than others? If we can answer some of them, can the answers to the rest be derived from those? What is the smallest set of questions whose answers determine all the properties of the system?

• What tools should we use to study those questions?

In order to address the above issues, it is often useful to identify the relevant physical parameters
and how large those parameters are. In physics, it only makes sense to talk about the largeness
or smallness of a dimensionless quantity. The value of a dimension-full parameter depends on the
choice of units and is, hence, not very useful. For example, let’s say the box size is 100 millimetres, or equivalently 0.1 metre. The size appears “large” or “small” depending on the units. This is because the size of the box is a dimension-full parameter. But let’s say we measure the ratio of the length of the box to the length of the room it is kept in. That ratio is always the same,
irrespective of what scale we use. In physics, we are mostly interested in these dimensionless
quantities; they play an important role in identifying the relevant tool to analyze the system.
At this point, one can ask whether these relevant dimensionless quantities depend on the system
at hand or whether some general procedure determines them. The answer to this will become
clear soon. Let’s identify the dimensionless parameters of the system given above.

1. The number of the gas molecules N

2. The ratio of the mass of a gas molecule to the mass of the container, g1.

3. The length of the box in units of the Compton wavelength of the particle, g2.

We choose what tools we should use depending on how large these numbers are. For example,
if g1 ≃ 1, the box will recoil whenever a gas molecule hits it. For the moment, we focus on the
regime when g1 ≃ 0. Now, depending on the other two dimensionless quantities, we determine
which tools to use. For example:

1. N small (1,2,3, a few)

(a) g2 is large then we use classical mechanics.


(b) g2 is small, then we use quantum mechanics.

2. N large

(a) g2 is large, then we use classical statistical mechanics/classical thermodynamics.


(b) g2 is small, then we use quantum statistical mechanics/quantum thermodynamics.

              N small                N large
g2 large      Classical Mechanics    Classical Stat Mech
g2 small      Quantum Mechanics      Quantum Stat Mech

Figure 1.2: Range of values and our tool
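As a quick illustration of the regimes above, one can estimate g2 for a macroscopic box of gas. A minimal sketch (the molecular mass and box size below are illustrative assumptions, not values from the text):

```python
# Estimate of the dimensionless parameter g2 = (box size) / (Compton
# wavelength) for a nitrogen molecule in a 0.1 m box.  The molecular mass
# and the box size are illustrative assumptions.
h = 6.626e-34      # Planck constant, J s
c = 2.998e8        # speed of light, m/s
m_N2 = 4.65e-26    # mass of an N2 molecule, kg
L_box = 0.1        # box size, m

lambda_compton = h / (m_N2 * c)   # Compton wavelength of the molecule
g2 = L_box / lambda_compton

print(f"Compton wavelength = {lambda_compton:.2e} m")
print(f"g2 = {g2:.2e}")   # enormous, so a classical description is adequate
```

The point of the estimate is that g2 is of order 10^15, which is why classical mechanics works so well for everyday boxes of gas.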

Depending on the situation, some of these can even be irrelevant. For example, let’s say we are
interested in what happens to the box of gas molecules if it gets hit by a ball of similar mass.
In that context, N is irrelevant.
Depending on the question, we need to determine which parameters are the most relevant and
which to ignore. In the above, we mentioned that the parameters were large and small. These
are, again, not very precise statements. The number 100 can be large or small, depending on the person. So it is important to be precise about them. Let’s say g is a dimensionless parameter of the system (for example, N , g1 and g2 in the above example). We are mostly interested in three regimes
g → 0 , g → ∞ , g ≃ 1 (1.1)

Why are these three regimes important? We have mentioned before that physics is an art of
approximation. This means that we compute all observables as a Taylor expansion in terms of
dimensionless parameters (this technique is known as perturbation theory; we will learn it in
the context of quantum mechanics in this course). In the first case, we can Taylor expand in g
around the point g = 0; in the second case, we Taylor expand in 1/g around the point g = ∞. In
the third case, the technique of Taylor expansion breaks down. We need a different method to
study the system. For practical purposes, we restrict to the first few terms in the Taylor series
because the higher-order terms in the Taylor expansion become smaller than the experimental
sensitivity. Hence, we cannot measure the effect of those terms. However, the higher order terms
in the Taylor expansion are significant in determining the correctness of a theory¹. Experimental
advances measure a theory to higher accuracy and hence can experimentally determine the effect
of the higher order terms in the Taylor expansion. It often happens that the predictions of two
theories agree on the first one or two terms of the Taylor expansion but mismatch at higher
order.
We see that the dimensionless parameters can depend on the context/system or can be
determined from the fundamental constant. We can see that N and g1 follow from the system’s
consideration (i.e. N and g1 can be determined solely from the system’s information). In
contrast, the third one (g2 ) depends on fundamental constants of nature. In general, one should
first identify all the dimension-full parameters of the system and form dimensionless quantities
out of them and the fundamental constants of nature.
This approach to addressing a problem in physics depending on dimensionless parameters
is known as the Wilsonian approach . This started from the revolutionary work of Kenneth G.
Wilson, for which he was awarded the Nobel Prize in 1982. The Wilsonian approach is very different from the reductionist approach put forward by the work of Dalton, Rutherford, Bohr et al. In
the reductionist approach, one key motivation of physics was to find the fundamental description
of nature and to connect every phenomenon in terms of the fundamental constituents. In the
Wilsonian paradigm, we try to find an effective description of a phenomenon depending on
dimensionless quantities.

1.2 Formalisms of Quantum Mechanics


Quantum mechanics is relevant when any dimensionless quantity involving ℏ in the denominator is small (for example, consider a system with angular momentum L⃗; then we can consider the dimensionless quantity |L⃗|/ℏ). It is an extremely rich subject where almost all
¹ This is often known as precision measurement; you can read about the fascinating story of electroweak precision measurements.

key principles of physics can be learnt. Any physical theory relies on certain fundamental
assumptions or postulates; these postulates cannot be proven. They are like Euclidean postulates
in geometry. The only verification of these postulates comes from experiments. While studying
a theory, it is important to distinguish between the postulates of a theory and the mathematical
deduction from those postulates. Physics research is about questioning those postulates and
altering them to compose a wider range of phenomena. For example, Newton provided three
laws of motion describing all large objects we can see around us. But there is no mathematical
derivation of these laws. Their validity relies entirely on experimental verification.
Similarly, the discipline of quantum mechanics relies on some fundamental postulates. The
postulates of quantum mechanics depend on a particular choice of formulation. There are three commonly used formalisms of Quantum Mechanics. These three formulations are known as

1. Wave Mechanics: This approach relies on the Schrödinger equation. This is analogous to the Newtonian formulation of Classical mechanics.

2. Matrix Mechanics: This approach relies on the canonical commutation relations, which Heisenberg first wrote down. This is analogous to the Hamiltonian formulation of Classical mechanics.

3. Path integral: This approach was developed by Richard Feynman. It is analogous to the
Lagrangian formulation of classical mechanics.

There are also other formulations of quantum mechanics (see here). All these formulations have different assumptions/postulates, but they agree on all current experiments. For a wide class of systems, we can also derive the other two formulations starting from any one of them.

1.3 A necessary mathematical tool


The postulates of Quantum Mechanics need certain mathematical machinery. We discuss that
first.

1.3.1 Functions, operators


Let’s consider a function f from the real line R to the complex plane C.

f : R −→ C (1.2)

We also assume that the function can be differentiated any number of times; it is an infinitely
differentiable (“smooth”) function. Consider the set of all such smooth functions from R to C;
let’s call that set V. V has the following properties:

1. addition
If f1 (x) and f2 (x) belong to the set V then f1 (x) + f2 (x) also belongs to V

f1 (x), f2 (x) ∈ V =⇒ f1 (x) + f2 (x) ∈ V (1.3)

From the definition, it is clear that the addition operation satisfies the following properties

(a) Commutative
f1 (x) + f2 (x) = f2 (x) + f1 (x) (1.4)
The order in which elements appear does not matter.

(b) Associative
(f1 (x) + f2 (x)) + f3 (x) = f1 (x) + (f2 (x) + f3 (x)) (1.5)
(c) Identity
f1 (x) + 0 = f1 (x) (1.6)
The function 0 belongs to V.
(d) Inverse
f1 (x) + (−f1 (x)) = 0 (1.7)

2. scalar multiplication
If c is a complex number (independent of x) and f (x) belongs to V, then c f (x) also belongs
to V.

(a) Identity element of scalar multiplication

1 · f (x) = f (x) (1.8)

(b) Compatible with the addition of complex numbers

∀a1 , a2 ∈ C , (a1 + a2 )f (x) = a1 f (x) + a2 f (x) (1.9)

(c) Distributive
c(f1 (x) + f2 (x)) = cf1 (x) + cf2 (x) (1.10)

These are the same properties that are required to label a set as a vector space. For example, these properties are also satisfied by the coordinate vectors that you have studied in your classical mechanics course. V is called a vector space.

1.3.2 Space of finite sequence


Now, we give a more familiar example. Consider a collection of n complex numbers

(c1 , · · · , cn ) (1.11)

We can check that it also satisfies all the properties listed above. This collection is known as
n-dimensional Complex vector space Cn . We usually denote the elements of Cn as n-dimensional column matrices:

(c1 , · · · , cn )ᵀ (1.12)

1.3.3 Space of infinite sequence


Let’s now consider the space of infinite sequences²

(c1 , · · · , cn , · · · ) (1.13)

This is not a very familiar example. So, we give a few explicit examples first:

(1, −1, 1, −1, · · · ) , (1, 1/2, 1/3, · · · , 1/n, · · · ) (1.14)
This collection also forms a vector space. For future reference, we call it Vseq . This space has
no matrix representation.
² We do not require the sequences to be convergent.

1.3.4 Space of square-summable infinite sequence

Let’s consider the previous example with one extra condition:

∑_{n=1}^{∞} |cn |² = Finite (1.15)

This criterion is known as square summability. We denote the space as ℓ²(N).
One can show that ℓ²(N) is also a vector space. Note that every element of ℓ²(N) is an element of Vseq , but the converse is not true. Homework
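The square-summability criterion is easy to probe numerically. A minimal sketch (an illustrative aside) comparing the two example sequences of (1.14):

```python
# Numerical illustration: partial sums of |c_n|^2 for the two example
# sequences in (1.14).  For c_n = 1/n the partial sums approach pi^2/6, so
# the sequence lies in l^2(N); for c_n = (-1)^(n+1) they grow without
# bound, so that sequence does not.
import math

def partial_sum_sq(c, N):
    """Partial sum of |c(n)|^2 for n = 1 .. N."""
    return sum(abs(c(n)) ** 2 for n in range(1, N + 1))

harmonic = lambda n: 1.0 / n             # (1, 1/2, 1/3, ...)
alternating = lambda n: (-1) ** (n + 1)  # (1, -1, 1, -1, ...)

print(partial_sum_sq(harmonic, 100000))     # close to pi^2/6 = 1.6449...
print(partial_sum_sq(alternating, 100000))  # = 100000: diverges as N grows
```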

1.4 Vector space


We have already introduced the concept of a vector space. Now, we give a formal definition. A vector space is a set V (any element of this set is called a vector) with two binary operations, addition and scalar multiplication, which satisfy the properties listed below. We start with addition. It is denoted as

ψ + φ (1.16)

The addition + has the following properties:

1. addition
If ψ1 and ψ2 belong to the set V then ψ1 + ψ2 also belongs to V

ψ1 , ψ2 ∈ V =⇒ ψ1 + ψ2 ∈ V (1.17)

From the definition, it is clear that the addition operation satisfies the following properties

(a) Commutative
ψ1 + ψ2 = ψ2 + ψ1 (1.18)
The order in which elements appear does not matter.
(b) Associative
(ψ1 + ψ2 ) + ψ3 = ψ1 + (ψ2 + ψ3 ) (1.19)

(c) Identity
ψ+0=ψ (1.20)

The vector 0 belongs to V.


(d) Inverse
∀ψ ∈ V , ∃ (−ψ) ∈ V such that ψ + (−ψ) = 0 (1.21)

2. scalar multiplication
If c is a complex number and ψ belongs to V, then c ψ also belongs to V. The complex
number is usually known as a “scalar.”

(a) Identity element of scalar multiplication

1·ψ =ψ (1.22)

(b) Compatible with field addition

∀a1 , a2 ∈ C , (a1 + a2 )ψ = a1 ψ + a2 ψ (1.23)

(c) Compatibility between field multiplication and scalar multiplication

(a1 · a2 )ψ = a1 · (a2 ψ) (1.24)

These properties are also satisfied by the vectors that you have studied in your classical mechanics course. A set V with these properties is called a vector space; here, we are dealing with a complex vector space.
We have already given a few examples above. Let’s consider various examples here:

1. Consider a periodic variable θ


θ ≃ θ + 2π (1.25)

and consider all periodic functions of θ

f (θ + 2π) = f (θ) (1.26)

One can check that this forms a vector space. Homework

2. Consider the collection of all the solutions of a linear differential equation subjected to a
particular condition. This collection also forms a complex vector space. Homework

1.4.1 Dimension of a vector space


In the case of a column vector, we define the dimension of the vector space as the number of entries. We need a more abstract definition of dimension so that it can be uniformly applied to all the cases given above.
Consider a collection of n vectors ψ1 , · · · , ψn . These n vectors are called linearly independent if the only solution to the equation

∑_{i=1}^{n} ci ψi = 0 (1.27)

is that all the ci are zero.
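In finite dimensions, linear independence can be checked mechanically: the vectors are independent iff the matrix built from them has full column rank. A minimal sketch (an illustrative check, not part of the formal development):

```python
# Illustrative finite-dimensional check of linear independence: vectors
# are linearly independent iff the matrix with them as columns has full
# column rank, i.e. c1 v1 + ... + ck vk = 0 forces all ci = 0.
import numpy as np

def independent(vectors):
    """True iff the given finite-dimensional vectors are linearly independent."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

print(independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # True
print(independent([(1, 0, 0), (0, 1, 0), (1, 1, 0)]))  # False: v3 = v1 + v2
```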


The maximum number of linearly independent vectors in vector space is called the dimen-
sion of that vector space. If the maximum number of linearly independent vectors is greater
than any finite integer, then the space is called an infinite-dimensional space. Let’s go back to
the examples given above:

1. In the case of Cn , we can find at most n vectors which are linearly independent. So, the dimension of the space is n, and it is a finite-dimensional vector space.

2. Consider the collection of periodic functions on a circle. We consider a particular subcollection

χn (θ) = e^{i n θ} , n ∈ Z (1.28)

Z is the set of integers. Let’s now consider the following equation

∑_{n=M}^{N} cn χn (θ) = 0 (1.29)

The only solution to this equation is cn = 0 for all values of n. So, the number of linearly independent vectors is greater than any finite quantity. So, this is an infinite-dimensional vector space.

3. ℓ2 is also an infinite dimensional vector space.

4. Consider a vector Vn such that the only non-zero element is in the n-th position. All other entries are zero. In this case, we consider the equation

∑_{n=M}^{N} cn Vn = 0 (1.30)

Again, the only solution is cn = 0. So, this space is again an infinite-dimensional vector space.

Finite-dimensional vector spaces are easier to understand. One can show that n-dimensional
complex vector space is essentially the same as Cn ³. As a result, the whole machinery of
matrices can be used for any finite-dimensional vector space. For a physicist, n-dimensional
vector space and n-column matrix are the same. As a result, many of our intuitions of vector
spaces are based on matrices. In quantum mechanics, we often encounter vector spaces which are infinite-dimensional. For example, consider the collection of wave functions for a particle in a box. They form a vector space, but it is an infinite-dimensional vector space.
Infinite-dimensional vector spaces cannot be handled with the machinery of matrices; there is no such thing as an infinite-dimensional matrix. It is not even correct to think of an infinite-dimensional vector space as a limit⁴ of finite-dimensional vector spaces. Many features of finite-dimensional vector spaces become subtle in the case of infinite dimensions. A step toward understanding Quantum Mechanics is to understand those subtleties.
So, in our discussion, we will emphasize various points on how infinite-dimensional vector spaces are distinct from finite-dimensional ones. We will not go into a lot of mathematical detail; one can check, for example, the book by Brian Hall.

Span and Basis A set of vectors (ϕ1 , · · · , ϕn ) spans a vector space if every vector in the space can be written as a linear sum of the ϕi .
A set of linearly independent vectors that spans the vector space is called a basis.

Examples

1. In the case of Cn , there are many choices for the basis vector. A very convenient choice is

e1 = (1, 0, · · · , 0) , e2 = (0, 1, · · · , 0) , · · · , en = (0, 0, · · · , 1) (1.31)

All these can be written in a compressed way

ei = (δ1i , δ2i , · · · , δni ) , i = 1, · · · , n (1.32)

These n vectors are linearly independent. They form a basis.

2. Consider the collection of periodic functions. Any periodic function can be written as a linear sum of the χn (θ):

f (θ + 2π) = f (θ) =⇒ f (θ) = ∑_{n=−∞}^{∞} cn χn (θ) (1.33)

The χn (θ) form a basis for the space of periodic functions.
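The expansion above can be verified numerically by extracting the coefficients cn = (1/2π) ∫ f(θ) e^{−inθ} dθ. A minimal sketch (the test function below is an illustrative assumption, not from the text):

```python
# Sketch: recover the coefficients c_n in f(theta) = sum_n c_n e^{i n theta}
# via c_n = (1/2pi) * integral_0^{2pi} f(theta) e^{-i n theta} d(theta),
# approximated by a Riemann sum on a uniform grid.  The test function
# f = 3 + cos(2 theta) is an illustrative assumption.
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
f = 3.0 + np.cos(2.0 * theta)   # = 3 + (e^{2i theta} + e^{-2i theta}) / 2

def coeff(n):
    """Riemann-sum approximation of the n-th Fourier coefficient of f."""
    return np.mean(f * np.exp(-1j * n * theta))

print(round(coeff(0).real, 6))   # 3.0  (constant mode)
print(round(coeff(2).real, 6))   # 0.5  (coefficient of e^{2i theta})
print(round(abs(coeff(1)), 6))   # 0.0  (mode absent from f)
```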


³ If V is finite-dimensional, then V is isomorphic to Cn for some integer n.
⁴ In some cases, it is possible to think about infinite space as a limit. But one has to be very careful about the limit. Any random choice may not work.

Subtleties with infinite-dimensional vector spaces Here, we would like to elaborate on a subtle point about infinite-dimensional vector spaces. In a finite-dimensional vector space (say, of dimension n), if we find n linearly independent vectors, then we know those n vectors also form a basis. However, in an infinite-dimensional vector space, infinitely many linearly independent vectors may not form a complete basis. For example, consider the collection of periodic functions: the functions e^{2inθ} (n ∈ Z) are infinitely many linearly independent vectors. However, they do not form a complete basis.
Infinite sets have many other subtleties. We encourage the readers to see these two videos: video 1, video 2. There is also a very beautiful book on this theme.

1.5 Maps from vector spaces to itself


Now, we consider maps from a vector space to itself

f : V −→ V (1.34)

In general, we can consider a map from one vector space to a different vector space. But for
our purpose, we restrict to the functions from a vector space to itself. Let’s start with a few
examples

1. We start with the case of Cn

f : Cn −→ Cn (1.35)

We list a few examples

(a)
(c1 , · · · , cn ) −→ (0, · · · , 0) (1.36)

(b)
(c1 , · · · , cn ) −→ (c21 , · · · , c2n ) (1.37)

(c)
(c1 , · · · , cn ) −→ (c1 , 2c2 , · · · , n cn ) (1.38)

(d)
(c1 , · · · , cn ) −→ (cn , c1 , · · · , cn−1 ) (1.39)

2. Let’s consider the case of Vseq

(a) Right shift


(c1 , · · · , cn , · · · ) −→ (0, c1 , · · · , cn−1 , · · · ) (1.40)

(b) Left shift


(c1 , · · · , cn , · · · ) −→ (c2 , c3 · · · , cn+1 , · · · ) (1.41)

(c)
(c1 , · · · , cn , · · · ) −→ (c1 , 2c2 , · · · , n cn , · · · ) (1.42)

We leave it to the readers to check that the first two maps are also maps from ℓ2 (N) to
itself, whereas the third one is NOT a map from ℓ2 (N) to itself. Homework
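The shift maps (1.40) and (1.41) can be sketched on a finite truncation of a sequence (the padding and dropping of entries at the end are artifacts of the truncation, not of the maps themselves):

```python
# The right-shift and left-shift maps of (1.40) and (1.41), acting on a
# finite truncation of a sequence stored as a Python list.
def right_shift(c):
    """(c1, c2, ...) -> (0, c1, c2, ...); the truncation drops the last entry."""
    return [0] + c[:-1]

def left_shift(c):
    """(c1, c2, c3, ...) -> (c2, c3, ...); the truncation pads a trailing 0."""
    return c[1:] + [0]

c = [1, 2, 3, 4]
print(right_shift(c))  # [0, 1, 2, 3]
print(left_shift(c))   # [2, 3, 4, 0]
```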

1.5.1 Linear operators
We have already discussed that any element of Cn can be written as a column vector. Now we can check that the examples in (1.36), (1.38) and (1.39) can be written as matrices, whereas the example in (1.37) cannot. This brings us to the concept of a linear operator. Consider a map f
f : V −→ V (1.43)
such that

f (αv1 + βv2 ) = αf (v1 ) + βf (v2 ) , α, β ∈ C , v1 , v2 ∈ V (1.44)

Matrices are examples of linear operators on finite-dimensional vector spaces. One can check that (1.37) is not a linear operator. (1.40), (1.41) and (1.42) are examples of linear operators on infinite-dimensional vector spaces; however, they are not matrices!
Homework: Consider the collection of all linear operators from Cn to itself; show that it is a complex vector space.
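The linearity condition (1.44) can be spot-checked on random vectors. A minimal sketch (a numerical check, not a proof) contrasting the map (1.38), which is linear, with the squaring map (1.37), which is not:

```python
# Spot-check of the linearity condition (1.44) on random vectors in R^4:
# f(a v1 + b v2) == a f(v1) + b f(v2).
import numpy as np

rng = np.random.default_rng(0)

def scaling(v):
    """(c1, ..., cn) -> (c1, 2 c2, ..., n cn), cf. (1.38)."""
    return np.arange(1, len(v) + 1) * v

def squaring(v):
    """(c1, ..., cn) -> (c1^2, ..., cn^2), cf. (1.37)."""
    return v ** 2

def looks_linear(f, n=4, trials=100):
    """True if f passes the linearity check on `trials` random inputs."""
    for _ in range(trials):
        v1, v2 = rng.standard_normal((2, n))
        a, b = rng.standard_normal(2)
        if not np.allclose(f(a * v1 + b * v2), a * f(v1) + b * f(v2)):
            return False
    return True

print(looks_linear(scaling))   # True
print(looks_linear(squaring))  # False
```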

1.5.2 Eigenvector and eigenspectrum


Now, we discuss one more concept that is very familiar in terms of matrices: the eigenvector
and eigenspectrum of a linear operator. Operators are maps from the Hilbert space to itself

O : H −→ H (1.45)

We use the following convention: ket vectors are always placed on the right side of an operator,

O|v1 〉 (1.46)

Given a linear operator O, if we can find a vector |v〉 such that

O|v〉 = λ|v〉 , λ ∈ C (1.47)

then |v〉 is called an eigenvector with eigenvalue λ.

Examples: We go back to the examples in (1.36), (1.38) and (1.39).

• We begin with (1.36)


(c1 , · · · , cn ) −→ (0, · · · , 0) (1.48)
In this case, any vector is an eigenvector with eigenvalue 0.

• Then we consider (1.38)

(c1 , · · · , cn ) −→ (c1 , 2c2 , · · · , n cn ) (1.49)

In this case, the operator has n different eigenvectors with eigenvalues 1, 2, · · · , n. For example, the vector (1, 0, · · · , 0) is an eigenvector with eigenvalue 1, and the vector (0, 1, 0, · · · , 0) has eigenvalue 2.

• In a finite-dimensional (complex) vector space, for any linear operator, we can always find an eigenvector and an eigenvalue.

• In infinite dimensions, a linear operator may not have an eigenvector or an eigenvalue. For
example, consider the right shift operator in (1.40) or the left shift operator in (1.41).
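In finite dimensions, the eigenvectors and eigenvalues of the map (1.38) can be found with standard matrix routines. A minimal sketch (using n = 4, an illustrative choice):

```python
# Eigenvalues of the map (1.38) in finite dimension: the matrix is
# diag(1, 2, ..., n), so the standard basis vectors are eigenvectors with
# eigenvalues 1, ..., n.
import numpy as np

n = 4
D = np.diag(np.arange(1.0, n + 1))   # (c1, c2, c3, c4) -> (c1, 2c2, 3c3, 4c4)
eigenvalues, eigenvectors = np.linalg.eig(D)

print(np.sort(eigenvalues.real))   # ascending: 1, 2, 3, 4
e2 = np.array([0.0, 1.0, 0.0, 0.0])
print(np.allclose(D @ e2, 2.0 * e2))   # True: e2 has eigenvalue 2
```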

1.6 Inner product and inner product space
A complex inner product is a function from V × V to the complex plane C which satisfies certain properties. We denote the inner product of two vectors ψ and φ as

ψ, φ −→ (ψ, φ) (1.50)

The inner product satisfies the following properties

Positive-definiteness: (φ, φ) ≥ 0 ∀φ ∈ V, with (φ, φ) = 0 iff φ = 0
Conjugate symmetry: (ψ, φ) = (φ, ψ)* ∀φ, ψ ∈ V
Linearity: (ψ, c φ) = c (ψ, φ) ∀φ, ψ ∈ V, c ∈ C
Additivity: (ψ, φ + χ) = (ψ, φ) + (ψ, χ) ∀ψ, φ, χ ∈ V
(1.51)

Note that we demanded linearity only in the second argument. There are a few corollaries of
the four properties

• The inner product defines a real (positive-definite) norm⁵. From the second property, it follows that

(φ, φ) = (φ, φ)* (1.52)

• Two vectors φ, χ are called orthogonal if

(φ, χ) = 0 (1.53)

A Complex inner product space (or pre-Hilbert space) is a complex vector space V equipped
with a complex inner product.

Examples: Finite-dimensional vector spaces Let’s consider a finite-dimensional vector


space. Consider two vectors

U = (u1 , · · · , un ) , V = (v1 , · · · , vn ) (1.54)

Let’s try to write an inner product on this vector space. The inner product is some function

(U, V ) = f ({ui* }, {vi }) (1.55)

We start with the easiest function, the constant function

(U, V ) = Constant (1.56)

But this is not allowed since the inner product between any vector and 0 vector is zero. So, we
make another guess

(U, V ) = α1 u1* v1 + β1 (u1* )² (v1 )² , α1 , β1 ∈ C (1.57)

An inner product has to be linear in the second argument. So β1 = 0. In fact, the linearity
condition rules out any higher powers of vi s. So we are left with the choice

(U, V ) = α1 u1* v1 (1.58)


⁵ A norm allows us to assign a length to a vector; an inner product also allows us to define the angle between two vectors. Not every normed vector space has an inner product, but every inner product space does have a norm.

We consider U and V such that the first component is zero, but the other components are not.
In this case, the above inner product is zero even though the vectors are not 0. So again, this is
not an allowed inner product even though it obeys linearity in the second argument. We make
minimal modifications to the proposal so that it does not spoil the linearity.

(U, V ) = α1 u1* v1 + α2 u2* v2 + · · · + αn un* vn (1.59)

This relation obeys conjugate symmetry if the αi are real, and it is positive definite only if all the αi are positive. We arrive at a function which obeys all the properties of an inner product:

(U, V ) = ∑_{i=1}^{n} αi ui* vi , αi ∈ R , αi > 0 (1.60)
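The general inner product (1.60) is straightforward to implement and test. A minimal sketch (the weights and vectors below are illustrative assumptions):

```python
# The weighted inner product (1.60) on C^3 and a check of its defining
# properties.  The weights alpha and the vectors u, v are illustrative.
import numpy as np

alpha = np.array([1.0, 2.0, 3.0])   # positive real weights

def inner(u, v):
    """(U, V) = sum_i alpha_i u_i^* v_i; linear in the second argument."""
    return np.sum(alpha * np.conj(u) * v)

u = np.array([1.0 + 1.0j, 0.0, 2.0])
v = np.array([2.0, 1.0j, 1.0 - 1.0j])

print(np.isclose(inner(u, v), np.conj(inner(v, u))))        # conjugate symmetry
print(inner(u, u).real > 0, abs(inner(u, u).imag) < 1e-12)  # positive definiteness
print(np.isclose(inner(u, 2.0j * v), 2.0j * inner(u, v)))   # linearity in 2nd slot
```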

Examples: Infinite-dimensional vector spaces Consider the collection of periodic functions. In that case, we can define an inner product

(φ, ψ) = ∫₀^{2π} dθ φ* (θ) ψ(θ) (1.61)

We can check that it satisfies all the properties of an inner product. Homework

Exercise: Let’s modify the above definition slightly; we introduce a function f (θ):

(φ, ψ) = ∫₀^{2π} dθ f (θ) φ* (θ) ψ(θ) (1.62)

f (θ) is a periodic function. What are the conditions on f (θ) so that this is a good inner product?
Homework

Inner product and angle between two vectors In the case of 3-dimensional vectors, we
know the angle between two vectors is given by

cos θ = (A⃗ · B⃗) / (|A⃗||B⃗|) (1.63)

In an inner product space, this notion can be generalized for any two vectors (even if they are infinite-dimensional vectors):

(U, V ) / √((U, U )(V, V )) (1.64)

1.6.1 Cauchy-Schwarz inequality


For ordinary 3-dimensional vectors,

|A⃗ · B⃗| = |A⃗||B⃗|| cos θ| ≤ |A⃗||B⃗| (1.65)

θ is the angle between the two vectors. Equality holds only when θ = 0 or π, i.e. when A⃗ and B⃗ are along the same direction.
This is a special case of a more general inequality known as the Cauchy-Schwarz inequality.
Consider two vectors φ and χ in an inner product space. The Cauchy-Schwarz inequality states
that
(φ, φ)(χ, χ) ≥ |(φ, χ)|2 (1.66)

with equality if and only if φ and χ are linearly dependent.

Proof We start with two-dimensional real vector space to prove this identity

X = (x1 , x2 ) , Y = (y1 , y2 ) (1.67)

Then we can see that

(X, X)(Y, Y) - (X, Y)^2 = (x_1^2 + x_2^2)(y_1^2 + y_2^2) - (x_1 y_1 + x_2 y_2)^2 = (x_2 y_1 - x_1 y_2)^2 ≥ 0    (1.68)

It is easy to generalize it to n-dimensional real vector space

X = (x1 , x2 , · · · , xn ) , Y = (y1 , y2 , · · · , yn ) (1.69)

Let's compute the same quantity again in n-dimensional space

(X, X)(Y, Y) - (X, Y)^2 = \Big(\sum_{i=1}^{N} x_i^2\Big)\Big(\sum_{i=1}^{N} y_i^2\Big) - \Big(\sum_{i=1}^{N} x_i y_i\Big)^2    (1.70)

When we expand the expression, the terms of the form x_i^2 y_i^2 cancel. So we are left with

\sum_{i=1}^{N} \sum_{j=1, j≠i}^{N} x_i^2 y_j^2 - 2 \sum_{i=1}^{N} \sum_{j>i}^{N} (x_i y_i)(x_j y_j) = \sum_{i=1}^{N} \sum_{j>i}^{N} (x_i y_j - x_j y_i)^2 ≥ 0    (1.71)

This concludes our proof for any finite-dimensional real vector space. We leave it as an exercise to extend this proof to complex vector spaces. Homework.
Now, we generalize it to any inner product space. Consider two vectors φ and χ. In order
to prove (1.66), consider the vector φ + c χ (c is a complex number); from the property of the
inner product, it follows that the norm is non-negative

(φ + cχ, φ + cχ) ≥ 0  =⇒  (φ, φ) + |c|^2 (χ, χ) + c(φ, χ) + c^*(χ, φ) ≥ 0    (1.72)

The above inequality is true for any value c. Let’s choose c = −(χ, φ)/(χ, χ). Putting this in
the above identity, we get
(φ, φ)(χ, χ) - |(φ, χ)|^2 ≥ 0    (1.73)
Note that the equality will only be if there is a choice of c such that

(φ + c χ, φ + c χ) = 0 (1.74)

From the property of an inner product, it follows that

φ + cχ = 0 (1.75)

So the equality follows only if φ and χ are linearly dependent.


Later, we will see many interesting applications of this identity. For example, we will see that
the uncertainty relations in Quantum Mechanics follow from the Cauchy-Schwarz inequality.

Some notation Here, we introduce a notation, which is known as the Dirac Bra-Ket notation.
It is very widely used in Quantum Mechanics literature. In this notation, a vector is denoted as
follows
ψ −→ |ψ〉 (1.76)
|〉 is called a ket. The inner product of two vectors is written as follows

(ψ, φ) −→ 〈ψ|φ〉 (1.77)

〈| is called a bra.
For example, in the Bra-ket notation, the Cauchy-Schwarz inequality is written as

⟨ψ|ψ⟩⟨φ|φ⟩ ≥ |⟨ψ|φ⟩|^2    (1.78)

1.7 Unitary operators (and anti-unitary operators)
Let’s define the following two things

|α_U⟩ = U|α⟩ ,    (|α_U⟩)^† = ⟨α_U|    (1.79)

1. Unitary
An operator is called a unitary operator if

⟨β_U|α_U⟩ = ⟨β|α⟩    (1.80)

2. Anti-unitary
An operator is anti-unitary if

⟨β_U|α_U⟩ = (⟨β|α⟩)^* = ⟨α|β⟩    (1.81)

Properties of Linear, Unitary operators Let U be an operator and |α〉 be a ket vector,
and O be another operator. The action of U on |α〉 and O is defined as
|α⟩ \xrightarrow{U} |α_U⟩ = U|α⟩    (1.82)
O \xrightarrow{U} O_U = U O U^{-1}    (1.83)

Then U is a unitary operator if it satisfies

⟨β_U|α_U⟩ = ⟨β|α⟩    (1.84)

For a unitary operator,

U^† = U^{-1}    (1.85)

This can be inferred by noting the following:

⟨β|U^† U|α⟩ = ⟨β|α⟩    (1.86)

⟨β|U^† O U|α⟩ = ⟨β|(U^† O U)|α⟩ = ⟨β|O_{U^{-1}}|α⟩    (1.87)

Now, we will prove the following identity

⟨α|O|β⟩ = ⟨α_U|O_U|β_U⟩    (1.88)

1.8 Adjoint of an operator


Now we introduce the concept of the adjoint of an operator. Consider an operator O; we denote the adjoint of O by O^†, and it is defined as

(v1 , O† v2 ) = (Ov1 , v2 ) , ∀v1 , v2 ∈ V (1.89)

We will demonstrate this with a few examples. We begin with the example of C^n. Consider two vectors in C^n
v1 = (c1 , · · · , cn ) , v2 = (d1 , · · · , dn ) (1.90)
An operator from C^n to C^n is an n × n matrix with complex entries. Then

O v_1 = (O_{1i} c_i, O_{2j} c_j, \cdots, O_{nk} c_k)    (1.91)

i, j and k are dummy indices; we follow Einstein's summation convention: one needs to sum over all possible values of a repeated dummy index. Then

(O v_1, v_2) = (O_{ji} c_i)^* d_j = (O_{ji})^* (c_i)^* d_j = c_i^* (O_{ji}^* d_j)    (1.92)

We compare this equation with the definition of the adjoint ((1.89)), and we get

(O^†)_{ij} = O_{ji}^*    (1.93)

Thus, this reproduces the result of the Hermitian adjoint for complex-valued matrices

O^† = (O^*)^T    (1.94)
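A short numpy check (my own sketch, not from the notes) that the conjugate transpose indeed satisfies the defining relation (1.89) for the inner product (a, b) = Σ a_i^* b_i:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
O = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
v1 = rng.normal(size=n) + 1j * rng.normal(size=n)
v2 = rng.normal(size=n) + 1j * rng.normal(size=n)

O_dag = O.conj().T                               # eq. (1.94)
# defining relation (1.89): (v1, O† v2) = (O v1, v2); np.vdot conjugates
# its first argument, so it implements the inner product used in the notes
assert np.isclose(np.vdot(v1, O_dag @ v2), np.vdot(O @ v1, v2))
```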

Let's now consider a vector space which is not finite-dimensional. Consider the collection of all normalisable differentiable functions from R to C. Given two such functions, we can define the inner product to be

(f, g) = \int_{-\infty}^{\infty} dx\, f^*(x) g(x)    (1.95)

Let's now consider the operator O = d/dx. We want to compute its adjoint

(Of, g) = \int_{-\infty}^{\infty} dx \left(\frac{d}{dx} f(x)\right)^* g(x) = f^*(x) g(x)\Big|_{-\infty}^{\infty} - \int_{-\infty}^{\infty} dx\, f^*(x) \frac{d}{dx} g(x)    (1.96)

For normalisable functions f(±∞) = g(±∞) = 0. Then, comparing the above equation with the definition of the adjoint ((1.89)), we obtain

\left(\frac{d}{dx}\right)^† = -\frac{d}{dx}    (1.97)
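A discrete illustration of this result (my own sketch, assuming a central-difference approximation on a grid with the function implicitly vanishing at the endpoints, mimicking normalisable functions):

```python
import numpy as np

# Central-difference matrix for d/dx on a uniform grid of n points, spacing h;
# the implicit boundary condition is that the function vanishes outside the grid.
n, h = 50, 0.1
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * h)

# the matrix representing d/dx is antisymmetric: its adjoint is -D, eq. (1.97)
assert np.allclose(D.T, -D)
# hence i*D is Hermitian -- the discrete analogue of i d/dx being self-adjoint
assert np.allclose((1j * D).conj().T, 1j * D)
```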
Now, we define the concept of a self-adjoint operator. An operator is called self-adjoint if it is equal to its adjoint

self-adjoint :    O = O^†    (1.98)

In the case of finite-dimensional vector spaces (i.e. C^n), a Hermitian operator is a self-adjoint operator. In the infinite-dimensional case, an example of a self-adjoint operator is

i \frac{d}{dx}    (1.99)

In the infinite-dimensional case, there are many subtleties in defining a self-adjoint operator. We do not go into those subtleties here; we hope to come back to these issues.

1.8.1 Eigenvalues and eigenvectors of a Self-adjoint operator


Let’s now consider the eigenvalue and eigenvector of a self-adjoint operator.

Ov1 = λv1 (1.100)

Then we use the definition of the self-adjoint operator ((1.89))

(v_1, O^† v_1) = (O v_1, v_1)  =⇒  λ(v_1, v_1) = λ^*(v_1, v_1)  =⇒  λ^* = λ    (1.101)

This implies that the eigenvalues of a self-adjoint operator are real. Let's now consider two eigenvectors with different eigenvalues

O v_1 = λ v_1 ,    O v_2 = ξ v_2 ,    λ, ξ ∈ R ,    λ ≠ ξ    (1.102)

Again we use (1.89),

(v_1, O^† v_2) = (O v_1, v_2)  =⇒  ξ(v_1, v_2) = λ(v_1, v_2)  =⇒  (ξ - λ)(v_1, v_2) = 0    (1.103)

Since ξ ≠ λ, we obtain (v_1, v_2) = 0. This implies that two eigenvectors with different eigenvalues are necessarily orthogonal.
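These two facts can be illustrated numerically (my own sketch, not from the notes) with a random Hermitian matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = M + M.conj().T              # a random Hermitian (self-adjoint) matrix

vals, vecs = np.linalg.eigh(H)
# eigenvalues of a self-adjoint operator are real
assert np.allclose(np.imag(vals), 0)
# the eigenvectors form an orthonormal set: V† V = identity
assert np.allclose(vecs.conj().T @ vecs, np.eye(4))
```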
It is also possible to show that the collection of all eigenvectors of a self-adjoint operator forms a complete basis of the vector space.
1.9 Action of multiple operator and (anti-)commutator


Until now, we have considered the action of a single operator. Let’s now consider the action of
two different operators O1 and O2 . Before discussing it for a general vector space, let’s do it
for a particular case. We consider the space of normalizable functions from R to C. Then the
action of two operators is given by

(O1 ◦ O2 ) ◦ f (x) = O1 ◦ (O2 ◦ f (x)) (1.104)

Since each of the operators is a map from V to itself, if we consider their action, as given above,
then that is also a map from V to itself. So, the product of two operators is also an operator.
Let’s now see whether the final result depends on the order of operation or not. We start with
an example: O_1 = x and O_2 = x^2. Then

O_1 ◦ O_2 ◦ f(x) = O_1 ◦ (x^2 f(x)) = x^3 f(x)
O_2 ◦ O_1 ◦ f(x) = O_2 ◦ (x f(x)) = x^2 (x f(x)) = x^3 f(x)    (1.105)
So in this case, O_1 ◦ O_2 = O_2 ◦ O_1. Let's consider another example: O_1 = x and O_2 = d/dx. Then

O_1 ◦ O_2 ◦ f(x) = O_1 ◦ \left(\frac{d}{dx} f(x)\right) = x \frac{d}{dx} f(x)
O_2 ◦ O_1 ◦ f(x) = O_2 ◦ (x f(x)) = \frac{d}{dx}(x f(x)) = x \frac{d}{dx} f(x) + f(x)    (1.106)
Clearly, we can see that O_1 ◦ O_2 ≠ O_2 ◦ O_1.
If the actions of two operators are the same irrespective of the order of operation, then it
is said that those two operators commute. Otherwise, we say that those two operators do not
commute. In the case of linear operators, we compute the following quantity
[O_1, O_2] = O_1 O_2 - O_2 O_1    (1.107)

This is called a commutator, and whenever the action of two operators does not depend on the
order of operation, this quantity vanishes. In the example given above, we can see
\left[x, \frac{d}{dx}\right] = -1    (1.108)
In order to compute a commutator, it is a good idea to consider its action on a function. For
example, let's say we want to compute [d/dx, e^{ax}]. Then

\left[\frac{d}{dx}, e^{ax}\right] f(x) = \frac{d}{dx}\left(e^{ax} f(x)\right) - e^{ax} \frac{d}{dx} f(x) = a e^{ax} f(x)    (1.109)
So we conclude that [d/dx, e^{ax}] = a e^{ax}. For more than one operator, we can consider various
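The two commutators computed above can be verified symbolically (my own sketch, not from the notes), acting on an arbitrary test function f(x):

```python
import sympy as sp

x, a = sp.symbols('x a')
f = sp.Function('f')(x)

# [x, d/dx] acting on f:  x f' - (x f)'  =  -f,  i.e. [x, d/dx] = -1
comm_x_d = x * sp.diff(f, x) - sp.diff(x * f, x)
assert sp.simplify(comm_x_d + f) == 0

# [d/dx, e^{ax}] acting on f:  (e^{ax} f)' - e^{ax} f'  =  a e^{ax} f
comm_d_exp = sp.diff(sp.exp(a * x) * f, x) - sp.exp(a * x) * sp.diff(f, x)
assert sp.simplify(comm_d_exp - a * sp.exp(a * x) * f) == 0
```

Acting on a test function, as done here, is exactly the strategy the notes recommend for computing commutators.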
commutators, but all of them can be reduced to a commutator of two operators. For example,
consider the following quantity
[O_1, O_2 O_3] = [O_1, O_2] O_3 + O_2 [O_1, O_3]    (1.110)

We leave it to the reader to prove this. This is known as the Leibniz identity of the operators.
For a more general commutator, we can use the above identity repeatedly. For example, consider
the following commutator

[O_1, O_2 O_3 O_4]    (1.111)
Let’s first consider O3 O4 as a single operator (try to show that the product of two linear operators
is also a linear operator) and use the above identity
[O_1, O_2] O_3 O_4 + O_2 [O_1, O_3 O_4]    (1.112)
Now, for the second commutator, we can use the above identity again. Commutators satisfy
another very important property, which is known as the Jacobi identity
[O_1, [O_2, O_3]] + [O_2, [O_3, O_1]] + [O_3, [O_1, O_2]] = 0    (1.113)

1.10 Postulates of Quantum Mechanics


1. The state of a system at any instant of time corresponds to an element |α〉 of a Vector
space.
2. Any observable A of the system corresponds to a linear Hermitian operator \hat{A}.

3. In any measurement, the observed value for any observable A is one of the eigenvalues of the operator \hat{A}.

4. When we make a measurement on a general state |ψ⟩, the probability of observing the value a for the operator \hat{A} is

|⟨a|ψ⟩|^2    (1.114)

where |a⟩ is the eigenstate of \hat{A} with eigenvalue a.

5. The time evolution of the system is governed by the equation

i\hbar \frac{\partial}{\partial t} |ψ(t)⟩ = H|ψ(t)⟩    (1.115)

H is a self-adjoint operator, and it is called the Hamiltonian. In the case of the wave function, this reduces to the Schrödinger equation.

Take-away lessons:

1. A Quantum Mechanical system ≡ a Hilbert space along with a self-adjoint operator (commonly known as the Hamiltonian), which describes the dynamics. The dynamics of the system are captured by the Schrödinger equation.
   Usually, a Hilbert space has more than one self-adjoint operator; a Hamiltonian is an operator whose spectrum is bounded from below. The same Hilbert space can admit two different self-adjoint operators whose spectra are bounded from below. The same Hilbert space with two different choices of Hamiltonian describes two different quantum mechanical systems.

2. A state ≡ a vector in the Hilbert space

3. Energy ≡ Eigenvalue of the Hamiltonian

1.10.1 A simple example
We begin with an example of a simple quantum mechanical system. We consider the vector
space to be C2 . Any Hermitian operator is a Hamiltonian for the system.
H = \begin{pmatrix} B_z & B_x - i B_y \\ B_x + i B_y & -B_z \end{pmatrix} ,    B_x, B_y, B_z ∈ R    (1.116)

In physics, this system is known as a spin 1/2 particle in a magnetic field. The name is not
important. The purpose was to demonstrate how to define a quantum system. The Schrödinger equation for this system is

H ψ(t) = i\hbar \frac{\partial}{\partial t} ψ(t) ,    ψ(t) = \begin{pmatrix} c_1(t) \\ c_2(t) \end{pmatrix} ,    c_1(t), c_2(t) ∈ C    (1.117)

We leave it to the readers to find the eigenvalues of H and then to solve the Schrödinger equation. Homework.
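As a partial check (my own sketch, assuming the standard form H = B · σ built from the Pauli matrices; the field components are arbitrary sample values), one can diagonalize H numerically; for this form the two energy levels are ±|B|:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Bx, By, Bz = 0.3, -0.7, 1.2                # arbitrary sample field components
H = Bx * sx + By * sy + Bz * sz            # Hermitian by construction

vals = np.linalg.eigvalsh(H)               # real eigenvalues, ascending order
B = np.sqrt(Bx**2 + By**2 + Bz**2)
# the two energy levels are -|B| and +|B|
assert np.allclose(vals, [-B, B])
```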
In physics experiments, we often have more experimentally observed quantities apart from
the energy. In Quantum Mechanics, an experimentally observed quantity is an Eigenvalue of a
Self-adjoint operator. For example, momentum is an experimentally observed quantity, and it
is a self-adjoint operator. We will consider more examples in the next chapter.

More exercises from chapter I
1. Take the collection of all continuous real functions f(x) (f^* = f) of one real variable such that

\int dx\, f(x)^2 = \text{finite}    (1.118)

(a) Show that the collection of all such functions forms a vector space.
(b) Which of the following operators are well defined on the whole of V?

x ,    \frac{d}{dx} ,    e^{αx} ,    e^{α|x|}    (1.119)
dx
2. Unitary and anti-unitary operators

(a) Show that the product of any (finite) number of linear unitary operators is also linear,
unitary.
(b) Show that the product of an even (finite) number of anti-linear, anti-unitary operators
is linear, unitary.
(c) Show that the product of an odd (finite) number of anti-linear, anti-unitary operators
is anti-linear, anti-unitary.

3. Consider a periodic variable θ

θ ≃ θ + 2π    (1.120)

and complex-valued functions f(θ) of this variable. Show that all such functions form a vector space.
What are the conditions on f(θ) such that i∂_θ is a self-adjoint operator?
Is it a bounded operator?

4. Consider square integrable functions on the half-line

x ≥ 0 :    f(x) :    \int_0^{\infty} dx\, |f(x)|^2 = \text{finite}    (1.121)

All such functions obey the following property

\lim_{x \to \infty} f(x) = 0    (1.122)

Find the condition on the functions so that i∂_x is a self-adjoint operator.


Show that all such functions form a Hilbert space.

5. Consider functions on an interval x ∈ [a, b].
What are the conditions on the functions such that -∂_x^2 is a self-adjoint operator?
Show that all such functions form a Hilbert space.

6. Adjoint of two or more operators
In the class, we defined the adjoint of an operator. Let's consider a collection of operators O_i; their adjoints exist and are denoted by O_i^†. Then show that

(O_i O_j)^† = O_j^† O_i^† ,    (O_i O_j O_k)^† = O_k^† O_j^† O_i^†    (1.123)

Generalize this to an arbitrary number of operators.

7. Self-adjoint operators
Consider the following operator

P = i \frac{d}{dx}    (1.124)

An operator O is self-adjoint on functions defined in the interval [a, b] if

\int_a^b dx\, (O ◦ f)^* g = \int_a^b dx\, f^* (O ◦ g)    (1.125)

Find the conditions on the functions f and g such that P is self-adjoint on the space of functions defined on (a) the half-line [0, ∞), (b) the interval [a, b].
Check it also for P^2.

8. Eigenvectors of Linear Hermitian operator


In the class, we discussed the eigenvalues and eigenvectors of Linear Hermitian operators.
We showed that the eigenvalue is always real, and two eigenvectors with different eigen-
values are always orthogonal. This exercise is to complete the analysis when two or more
eigenvalues are the same.
Let the f_i be eigenvectors with the same eigenvalue, and define g_{ij} as

g_{ij} = \int_{-\infty}^{\infty} dx\, f_i^*(x) f_j(x)    (1.126)

Show that g_{ij} is a Hermitian matrix. We form linear combinations of the following form

\tilde{f}_i = c_{ij} f_j    (1.127)

where the c_{ij} are complex numbers. We define \tilde{g}_{ij} as

\tilde{g}_{ij} = \int_{-\infty}^{\infty} dx\, \tilde{f}_i^*(x) \tilde{f}_j(x)    (1.128)

Find the relation between g_{ij} and \tilde{g}_{ij}. Show that you can always choose the c_{ij} such that \tilde{g}_{ij} is a diagonal matrix with real entries.

9. Compute the following commutators

\left[x, \frac{d^2}{dx^2}\right] ,    \left[x^2, \frac{d^2}{dx^2}\right] ,    \left[x^n, \frac{d^m}{dx^m}\right] ,    \left[e^{g(x)}, \frac{d^2}{dx^2}\right]    (1.129)

m, n are non-negative integers; g(x) is some function of x which is infinitely differentiable.

10. Prove the following identities

[O_1, O_2] = -[O_2, O_1]
[O_1, O_2]^† = [O_2^†, O_1^†] = -[O_1^†, O_2^†]
[O_1 + O_2, O_3] = [O_1, O_3] + [O_2, O_3]
[[O_1, O_2], O_3] + [[O_2, O_3], O_1] + [[O_3, O_1], O_2] = 0
[O_1 O_2 \cdots O_n, O] = [O_1, O] O_2 \cdots O_n + O_1 [O_2, O] O_3 \cdots O_n + \cdots + O_1 \cdots O_{n-1} [O_n, O]    (1.130)

11. From the definition of a Linear operator, show that a linear operator commutes with a
complex number.

12. Let’s define three following operators

L_1 = -i(y∂_z - z∂_y) ,    L_2 = -i(z∂_x - x∂_z) ,    L_3 = -i(x∂_y - y∂_x)    (1.131)

In the class, we have computed [L_1, L_2]; compute the following commutators

[L_2, L_3] ,    [L_3, L_1] ,    [L_2, x^2 + y^2 + z^2] ,    [L_3, x^2 + y^2 + z^2]    (1.132)

Consider a central potential V(r) (r = \sqrt{x^2 + y^2 + z^2}). Compute the following commutators

[L_1, V(r)] ,    [L_2, V(r)] ,    [L_3, V(r)]    (1.133)

13. Consider two operators O_1 and O_2 such that their commutator is not zero. Let ψ_1, \cdots, ψ_n, \cdots be a complete set of eigenvectors for O_1. Are they also eigenvectors for O_2?
Explain your answer through examples.

14. Let O be an operator (not necessarily self-adjoint). Show that O^† O is a self-adjoint operator with non-negative eigenvalues.

Chapter 2

One-dimensional Quantum systems

In the last chapter, we saw that a quantum system is defined by a pair: a Hilbert space and a self-adjoint operator (which is also known as the Hamiltonian). In this chapter, we discuss a large class of quantum systems. Unlike the qubit system that we introduced in the previous chapter, these systems also have a classical description, and in that sense they are special, as they can be studied both in classical mechanics and in quantum mechanics. The reader should keep in mind that this is not true for a generic quantum/classical system. For all these systems, the Hilbert space is the collection of normalisable functions from the real line to the complex plane

Ψ : R → C ,    \int_{-\infty}^{\infty} |Ψ(x)|^2\, dx = \text{finite}    (2.1)

The inner product in this vector space is given by

(Ψ_1, Ψ_2) = \int_{-\infty}^{\infty} Ψ_1^*(x)\, Ψ_2(x)\, dx    (2.2)

and the self-adjoint operator is given by

H(x) = -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x) ,    V^*(x) = V(x)    (2.3)
These systems are broadly known as one-dimensional systems; in classical mechanics, they describe the motion of a particle in one dimension in the presence of a potential V(x). From the perspective of Quantum Mechanics, the name “one-dimensional” may sound misleading, as the corresponding Hilbert space is infinite-dimensional. The function Ψ is known as the wave function. The broad class of quantum systems where the Hilbert space consists of functions is also known as wave mechanics. We now rewrite the postulates of Quantum Mechanics in the language of wave functions.

1. The information of a closed, isolated quantum system is encoded in the wave function Ψ(\vec{x}, t), which is a function of space \vec{x} and time t.
The probability of finding the system in a differential volume dV at a point 󰂓x is given by

|Ψ(\vec{x}, t)|^2\, dV    (2.4)

At any point in time, the total probability over all space points is 1. This implies that the
wave function is normalized
\int |Ψ(\vec{x}, t)|^2\, dV = 1 ,    t ∈ (-∞, ∞)    (2.5)

2. Every observable of the system corresponds to a linear self-adjoint operator.
In the next section, we will explain what a linear self-adjoint operator is.

3. In any measurement, the observed value for the observables is one of the eigenvalues of
the corresponding operator.

4. Consider a system described by a normalized wave function Ψ. Then the expectation value of the observable corresponding to O(\vec{x}, t) is

\int Ψ^*(\vec{x}, t)\, O(\vec{x}, t)\, Ψ(\vec{x}, t)\, dV    (2.6)

5. The time evolution of the system is governed by the Schrödinger equation

H(\vec{x}, t)\, Ψ(\vec{x}, t) = i\hbar \frac{\partial}{\partial t} Ψ(\vec{x}, t)    (2.7)
H(󰂓x, t) is called the Hamiltonian. It is a linear self-adjoint operator, and for a system,
the Hamiltonian is determined from an experimental setup. It is analogous to the force
term in Newton’s law. The force acting on a system is an experimental input to those
equations.
These postulates are true for a wide class of systems but not for all systems1 .
We now discuss Quantum Mechanics in a one-dimensional space. In this case, we consider
the Hamiltonian of the following form
H(x, t) = -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x, t)    (2.8)

V(x, t) is a potential which depends on x and t. First, we consider a time-independent potential

\frac{\partial}{\partial t} V(x, t) = 0    (2.9)
In this case, the Schrödinger equation is given by
\left(-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x)\right) Ψ(x, t) = i\hbar \frac{\partial}{\partial t} Ψ(x, t)    (2.10)
For a time-independent potential, the Schrödinger equation admits separable solutions

Ψ(x, t) = f (t) ψ(x) (2.11)

For a solution of the above form, the dependence on x and t resides in two different functions. We put this back into the above equation, and after a few simple steps, we get

\frac{1}{ψ(x)} \left(-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x)\right) ψ(x) = i\hbar \frac{1}{f(t)} \frac{\partial}{\partial t} f(t)    (2.12)
The left-hand side of the equation does not depend on t, and the right-hand side does not depend on x. Hence both sides must be equal to some constant; let us call it E

\left(-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x)\right) ψ(x) = E ψ(x) ,    i\hbar \frac{\partial}{\partial t} f(t) = E f(t)    (2.13)
The first equation is known as the time-independent Schrödinger equation. E is an eigenvalue of the Hamiltonian operator; since the Hamiltonian is a self-adjoint operator, its eigenvalues are real and observable. E denotes the energy of the system.
Before we go into analyzing a particular system, we list down a few properties of the wave
function. These properties follow from the postulates of Wave mechanics.
^1 In technical terms, these are valid only for systems whose wave function can be written as a function of the spacetime coordinates. This is true for many systems but not for all quantum systems.

1. Two wave functions which are proportional to each other up to a complex number represent
the same states. This is because all the physical observables can be obtained from the
action of Linear self-adjoint operators. Since two wave functions are proportional to each
other, the action of any linear operator gives the same answer for both of them. Hence it
is not possible to distinguish them by experiment.
Two wave functions ψ1 (x) and ψ2 (x) represent two completely different states if the inner
product of their wave functions is zero
\int_{-\infty}^{\infty} dx\, ψ_1^*(x) ψ_2(x) = 0    (2.14)

2. The wave function is continuous at every point

\lim_{x \to a^+} ψ(x) = \lim_{x \to a^-} ψ(x) = ψ(a) ,    ∀a ∈ R    (2.15)

Continuity of the wave function is important for probability interpretation in postulate 1.

3. The wave function should be normalizable

\int_{-\infty}^{\infty} dx\, |ψ(x)|^2 = N^2    (2.16)

Here N^2 is a finite real number. If this is the case, we can always find a normalized wave function which is consistent with the probability interpretation.

4. Consider the time-independent Schrödinger equation and integrate it over a local neighbourhood (a - ε, a + ε) of x = a (ε > 0)

\int_{a-ε}^{a+ε} dx \left[-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} ψ(x) + V(x) ψ(x)\right] = E \int_{a-ε}^{a+ε} dx\, ψ(x)    (2.17)

ψ(x) is always continuous, so that the probability is unambiguous at every point. Let's take the limit ε → 0. Then the above integral implies that ψ'(x) can have a discontinuity at x = a, depending on the functional form of the potential. The discontinuity is given by

\frac{dψ}{dx}\bigg|_{x=a^+} - \frac{dψ}{dx}\bigg|_{x=a^-} = \frac{2m}{\hbar^2} \lim_{ε \to 0} \int_{a-ε}^{a+ε} dx\, V(x)\, ψ(x)    (2.18)

For example, consider potential that has a Dirac delta function at x = a.

V (x) = λ δ(x − a) (2.19)

From (2.18) we get

\frac{dψ}{dx}\bigg|_{x=a^+} - \frac{dψ}{dx}\bigg|_{x=a^-} = \frac{2m}{\hbar^2}\, λ\, ψ(a)    (2.20)

The delta function gives rise to a discontinuity in the first derivative of the wave function,
unless the wave function vanishes at that point.

To summarise, the wave function in one dimension is continuous and piece-wise differentiable.

2.1 Particle in a box
We consider a particle in a box

V(x) = 0 ,    |x| ≤ L/2
V(x) = ∞ ,    |x| > L/2    (2.21)
[Figure 2.1: Particle in a box — infinite walls at x = -L/2 and x = L/2]

Since the potential is time-independent, we consider the time-independent Schrödinger equation, which is given by


-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} ψ(x) + V(x) ψ(x) = E ψ(x)    (2.22)
Since the potential is infinite outside the box, the wave function should vanish there
ψ(x) = 0 ,    |x| > L/2    (2.23)
and the wave function should always be continuous (so that it has a well-defined first derivative).
As a result, we demand

ψ(−L/2) = 0 = ψ(L/2) (2.24)

This is also known as the Dirichlet boundary condition. The solution of (2.22) subjected to the
above boundary condition is given by
ψ_n(x) = \sqrt{\frac{2}{L}} \sin\left(nπ\left(\frac{x}{L} - \frac{1}{2}\right)\right) ,    n ∈ Z^+    (2.25)

n is called the quantum number. We can check that the solutions are orthonormal

\int_{-L/2}^{L/2} ψ_{n_1}^*(x)\, ψ_{n_2}(x)\, dx = δ_{n_1 n_2}    (2.26)
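A quick numerical check (my own sketch, with the arbitrary illustrative choice L = 1) of the orthonormality relation (2.26):

```python
import numpy as np

L = 1.0
x = np.linspace(-L / 2, L / 2, 20001)
dx = x[1] - x[0]

def psi(n):
    """Box wave function (2.25) on the grid."""
    return np.sqrt(2 / L) * np.sin(n * np.pi * (x / L - 0.5))

def overlap(n1, n2):
    # simple Riemann sum; accurate here since psi vanishes at the walls
    return np.sum(psi(n1) * psi(n2)) * dx

for n1 in range(1, 4):
    for n2 in range(1, 4):
        expected = 1.0 if n1 == n2 else 0.0
        assert abs(overlap(n1, n2) - expected) < 1e-6
```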

The energy is given by

E_n = \frac{n^2 π^2 \hbar^2}{2mL^2} ,    n ∈ Z^+    (2.27)
Z+ denotes the collection of all positive integers. Now we list a few properties of the system:

1. The energy level, in this case, is discrete; n in (2.27) is a positive integer.

2. In classical mechanics, the lowest possible value for the energy of the system is 0; this is not allowed in quantum mechanics. The lowest possible energy in quantum mechanics is attained for n = 1:

E_1 = \frac{π^2 \hbar^2}{2mL^2}    (2.28)
For n = 0, the wave function vanishes identically, and hence the total probability turns
out to be 0. So, n = 0 is not an allowed state.

3. The wave function of the system is a real function.

4. The wave functions are either an even function or an odd function

ψn (−x) = (−1)n−1 ψn (x) (2.29)

As we increase n, the even functions and the odd functions appear alternatively.

5. Let's plot some of the wave functions. The wave functions for the first few states can be found in fig. (2.2). One can take the help of Mathematica to plot the wave function of a general state.
   A node of a wave function is a point where the wave function vanishes, but its first derivative does not vanish. From the plot, we can see that the wave function of the nth state has n - 1 nodes.

6. If there are two or more mutually orthogonal wave functions that satisfy the time-independent Schrödinger equation with the same energy, then we call them degenerate energy eigenstates. In this case, we can see that the energy levels are non-degenerate, i.e. for any particular energy, there is only one state.

7. Let's now compute the gap between two energy levels

E_{n+1} - E_n = (2n + 1) \frac{π^2 \hbar^2}{2mL^2}    (2.30)

So the relative increment in energy is

\frac{E_{n+1} - E_n}{E_n} = \frac{2n + 1}{n^2}    (2.31)
For large values of n, this gap is smaller than the experimental sensitivity. In that case,
we cannot experimentally verify the discrete jump between two energy levels. The energy
levels will appear to be continuous. For this reason, the large quantum number limit is
often referred to as a quantum system’s classical limit. Note that a dimensionless quantity
controls the classical limit of the system.

8. In Classical mechanics, the particle is equally probable everywhere in the box. This is
not true in quantum mechanics. For example, in the ground state, the particle is more
probable at the centre of the box; in the first excited state, the probability of finding the
particle at the centre is zero.
However, for large quantum numbers, the wave function changes very rapidly. In that
limit, if we choose any region in the box, the probability of finding the particle becomes
uniform again.

[Figure 2.2: Wave functions for a particle in a box, for n = 1, 2, 3]

Students are encouraged to read the following two articles by V Balakrishnan to learn about
subtle aspects of Particle in a box.

• Particle in a Box: A Basic Paradigm in Quantum Mechanics - Part 1

• Particle in a Box: A Basic Paradigm in Quantum Mechanics - Part 2

Time-dependent solutions  Up to this point, we have been discussing solutions to the time-independent Schrödinger equation. Let's now discuss the solution to the time-dependent Schrödinger equation

Ψ_n(x, t) = e^{-i E_n t/\hbar}\, ψ_n(x)    (2.32)

E_n is given in (2.27). These wave functions satisfy the orthonormality condition at any point in time

\int_{-\infty}^{\infty} dx\, |Ψ_n(x, t)|^2 = 1 ,    ∀t ∈ (-∞, ∞)    (2.33)
Since the Schrödinger equation is a linear partial differential equation, any linear combination of solutions is also a solution. So the most general solution to the Schrödinger equation is

Ψ(x, t) = \sum_{n=1}^{\infty} c_n Ψ_n(x, t)    (2.34)

This is a normalizable wave function provided

\sum_n |c_n|^2 = \text{finite}    (2.35)

In such a case, we can simply divide the wave function by \sqrt{\sum_n |c_n|^2} to obtain a normalised wave function. To put it differently, a state of this system is an element of the space of sequences

(c_1, c_2, \cdots) ,    \sum_n |c_n|^2 = \text{finite}    (2.36)

So the Hilbert space for this system is simply ℓ^2(N).^2 Let's now consider the action of the Hamiltonian

H Ψ(x, t) = \frac{π^2 \hbar^2}{2mL^2} \sum_{n=1}^{\infty} n^2 c_n Ψ_n(x, t)    (2.37)

So the Hamiltonian corresponds to the following operator

H : (c_1, c_2, \cdots) \longrightarrow \frac{π^2 \hbar^2}{2mL^2} (c_1, 4c_2, \cdots, n^2 c_n, \cdots)    (2.38)

2.2 Quantum Harmonic oscillator


Let us consider a Quantum Harmonic oscillator in one dimension. The potential, in this case,
is given by
V(x) = \frac{1}{2} k x^2    (2.39)
So the Hamiltonian is given by

H(x) = -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + \frac{1}{2} k x^2    (2.40)
It is possible to find the wave functions and energy levels of this system by solving the differential equation. However, we solve it in a different way: the method of factorization.^3 We begin by defining the following quantity

ω = \sqrt{\frac{k}{m}}    (2.41)

In the classical harmonic oscillator, this is the frequency of the oscillator; in Quantum Mechanics, we keep referring to it as the frequency. Then we can rewrite the Hamiltonian as

H(x) = \frac{\hbar ω}{2} \left(-\frac{\hbar}{mω} \frac{d^2}{dx^2} + \frac{mω}{\hbar} x^2\right) = \frac{\hbar ω}{2} \left(-\frac{d^2}{d\tilde{x}^2} + \tilde{x}^2\right)    (2.42)

where we have defined

\tilde{x} = \sqrt{\frac{mω}{\hbar}}\, x    (2.43)

We can check that \tilde{x} is dimensionless, because \sqrt{mω/\hbar} has the dimension of inverse length. Now we explain the method of factorization, which was originally discovered by Erwin Schrödinger. Let's define the operator

A = \frac{1}{\sqrt{2}} \left(\frac{d}{d\tilde{x}} + \tilde{x}\right)  =⇒  A^† = \frac{1}{\sqrt{2}} \left(-\frac{d}{d\tilde{x}} + \tilde{x}\right)    (2.44)
Then we can see that
A A^† = -\frac{1}{2} \frac{d^2}{d\tilde{x}^2} + \frac{1}{2} \tilde{x}^2 + \frac{1}{2} ,    A^† A = -\frac{1}{2} \frac{d^2}{d\tilde{x}^2} + \frac{1}{2} \tilde{x}^2 - \frac{1}{2}    (2.45)

Comparing this with the Hamiltonian in (2.42), we get

[A, A^†] = 1 ,    H = \hbar ω \left(A^† A + \frac{1}{2}\right)    (2.46)
^2 It is possible to show that the Hilbert space for any quantum system with discrete energy levels is ℓ^2(N). This is a known result in mathematics: any Hilbert space with a countable basis is isomorphic to ℓ^2(N). Thus all quantum systems with discrete energy levels share the same Hilbert space; the only difference between them is the choice of Hamiltonian. This provides a unified view of a large class of quantum systems.
^3 Please see the book by Shi-Hai Dong, "Factorization Method in Quantum Mechanics", if you want to know more about this.

The method of factorization relies on identifying one operator (and its adjoint) such that the Hamiltonian can be written as a product of those operators (plus some real number). Now we compute two more things that will be useful later. We begin with

[H, A] = \left[\hbar ω \left(A^† A + \frac{1}{2}\right), A\right] = \hbar ω \left[A^† A, A\right] + \hbar ω \left[\frac{1}{2}, A\right]    (2.47)

The second term is a commutator of an operator with a complex number, and thus it is zero. For the first term,

[A^† A, A] = A^† [A, A] + [A^†, A] A = -A    (2.48)

thus we get

[H, A] = -\hbar ω A    (2.49)

Following similar steps one can show that

[H, A^†] = \hbar ω A^†    (2.50)
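The ladder-operator algebra can be verified symbolically (my own sketch, not from the notes, working in the dimensionless variable x̃ so that H/ℏω = A†A + 1/2 from (2.42)):

```python
import sympy as sp

xt = sp.symbols('xtilde')
f = sp.Function('f')(xt)

# ladder operators of eq. (2.44), acting on an arbitrary function of x-tilde
A = lambda g: (sp.diff(g, xt) + xt * g) / sp.sqrt(2)
Adag = lambda g: (-sp.diff(g, xt) + xt * g) / sp.sqrt(2)

# [A, A†] f = f, i.e. [A, A†] = 1, eq. (2.46)
comm = A(Adag(f)) - Adag(A(f))
assert sp.simplify(comm - f) == 0

# (A†A + 1/2) f reproduces H/(hbar*omega) acting on f: (1/2)(-f'' + x̃² f)
H_ladder = Adag(A(f)) + f / 2
H_direct = (-sp.diff(f, xt, 2) + xt**2 * f) / 2
assert sp.simplify(H_ladder - H_direct) == 0
```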

Now we want to find the energy eigenstates of this system. Before we begin the analysis, we make a few comments: H is a linear sum of two self-adjoint operators, ℏω A†A and ℏω/2. Moreover, these self-adjoint operators commute with each other, and thus it is possible to find simultaneous eigenvectors of the two.
In order to find the eigenvalues, we start by proving that the eigenvalues of A†A are always non-negative. Let's assume that ψ is an eigenfunction of A†A

A† Aψ = λψ , λ∈R (2.51)

Then consider the quantity


λ(ψ, ψ) = (ψ, A† Aψ) (2.52)
We define ψ̃ = Aψ and put it back in the above equation

(ψ, A† Aψ) = (ψ, A† ψ̃) = (Aψ, ψ̃) = (ψ̃, ψ̃) (2.53)

From (2.52) and (2.53)


λ(ψ, ψ) = (ψ̃, ψ̃) (2.54)
By assumption, ψ is a non-zero vector and hence (ψ, ψ) > 0 (again, this follows from the definition of the inner product). From the definition of the inner product we also know (ψ̃, ψ̃) ≥ 0, so

(ψ, ψ) > 0 ,    (ψ̃, ψ̃) ≥ 0  =⇒  λ ≥ 0    (2.55)

Moreover,
λ = 0  =⇒  ψ̃ = 0  =⇒  Aψ = 0    (2.56)

The expression for A is given in (2.44); putting it into the above equation, we can see that there is a solution (we denote the solution by ψ_0):

\left(\frac{d}{d\tilde{x}} + \tilde{x}\right) ψ_0(x) = 0  =⇒  ψ_0(x) ∝ e^{-\frac{α}{2} x^2} ,    α = \frac{mω}{\hbar}    (2.57)

ψ_0 has the lowest energy in the system; it is the ground state. This state has energy ℏω/2. This wave function is not normalised, but it is normalisable. We can evaluate the Gaussian integral easily

\int_{-\infty}^{\infty} e^{-α x^2}\, dx = \sqrt{\frac{π}{α}}    (2.58)

IISERB(2024) QMech I - 29
So the normalized wave function is

ψ_0(x) = \left(\frac{α}{π}\right)^{1/4} e^{-\frac{α}{2} x^2}    (2.59)

Just like in the case of the particle in a box, we see that the ground state is an even function.
Now we have to determine the other eigenvalues and eigenfunctions of H(x). To determine them, consider a state ψ with energy E

Hψ = Eψ (2.60)

Then let’s consider the state Aψ and compute its energy

H (Aψ) = ([H, A] + AH) ψ = (E − 󰄁ω)Aψ (2.61)

Here we have used [H, A] = −󰄁 ωA. This means that the action of A on a wave function lowers
its energy. Similarly, we can show that action A† increases its energy.
󰀓 󰀔
H A† ψ = (E + 󰄁ω)A† ψ (2.62)

A can act on a state ψ multiple times to lower the energy more and more; in this way, the
system could seemingly have arbitrarily low energy.

Figure 2.3: Ladder operators — A lowers the energy from E to E − ℏω, while A† raises it from
E to E + ℏω

This descent stops only if the action of A on some state gives 0 (A annihilates the state):

Aψ0 = 0    (2.63)

We already know such a wave function from (2.57).

Figure 2.4: Ground state wave functions of quantum harmonic oscillators for two frequencies
with ωblue > ωred

To find the other excited states, we consider the action of A†, because it increases the energy:

ψn(x) = (1/√n!) (A†)ⁿ ψ0(x) ,    H ψn(x) = ℏω (n + 1/2) ψn(x)    (2.64)

The factor of 1/√n! is for the purpose of normalisation. The wave function of the n-th state is

ψn(x) = (1/√(2ⁿ n!)) (α/π)^{1/4} e^{−α x²/2} Hn(x̃)    (2.65)

Hn(x̃) are Hermite polynomials; their definition is the following:

Hn(z) = (−1)ⁿ e^{z²} (dⁿ/dzⁿ) e^{−z²}    (2.66)
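The eigenfunctions (2.65) form an orthonormal set. A minimal numerical sketch (Python, using numpy's physicists' Hermite polynomials, with α = 1 as an illustrative choice) checks this:

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval  # physicists' Hermite polynomials H_n

def psi(n, x):
    # n-th oscillator eigenfunction, eq. (2.65), in the dimensionless variable (alpha = 1)
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0  # selects H_n in the Hermite series
    return ((1.0 / pi) ** 0.25 / sqrt(2.0 ** n * factorial(n))
            * np.exp(-x ** 2 / 2) * hermval(x, coeffs))

# Orthonormality check: (psi_m, psi_n) = delta_mn
x = np.linspace(-12.0, 12.0, 40001)
overlap_33 = np.trapz(psi(3, x) * psi(3, x), x)  # should be ~ 1
overlap_23 = np.trapz(psi(2, x) * psi(3, x), x)  # should be ~ 0
```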
For the particle in a box, we listed a number of properties of the system. We can check that
those properties are also satisfied by the states of the quantum harmonic oscillator. Homework:
check those statements!
The most general state of the system is

Ψ(x) = Σ_{n=0}^{∞} cn ψn(x) ,    Σ_n |cn|² = finite    (2.67)

We also leave it to the reader to check that the Hilbert space in this case is again ℓ²(ℕ). Let us
now consider the action of the creation and annihilation operators:

A†Ψ(x) = Σ_{n=0}^{∞} cn √(n+1) ψ_{n+1}(x) ,    AΨ(x) = Σ_{n=0}^{∞} cn √n ψ_{n−1}(x)    (2.68)

So these two operators act on ℓ²(ℕ) in the following way:

A† : (c0, c1, · · · , cn, · · · ) −→ (0, c0, √2 c1, · · · , √(n+1) cn, · · · )
A : (c0, c1, · · · , cn, · · · ) −→ (c1, √2 c2, · · · , √n cn, · · · )    (2.69)

We leave it as homework to plot the wave functions of the various energy eigenstates.
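The action of A and A† on the coefficient sequence (2.69) can be illustrated with small truncated matrices (a minimal numpy sketch; the truncation size N is an arbitrary illustrative choice):

```python
import numpy as np

N = 6  # truncate the number basis to the first N states (illustrative)

# A|n> = sqrt(n)|n-1> and A†|n> = sqrt(n+1)|n+1>, as matrices on (c0, ..., c_{N-1})
A = np.diag(np.sqrt(np.arange(1, N)), k=1)
Adag = A.conj().T

# H = A†A + 1/2 (in units hbar*omega = 1) is diagonal with entries n + 1/2
H = Adag @ A + 0.5 * np.eye(N)
energies = np.diag(H)

# [A, A†] = 1 holds on all but the last basis state (an artefact of the truncation)
comm = A @ Adag - Adag @ A
```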

2.3 Discussion on general time-independent potential


In classical mechanics, according to Newton's laws of motion, force dictates the dynamics.
In the case of the Schrödinger equation, the solutions are controlled by the potential V(x⃗, t):
in quantum mechanics, the fundamental role is played by the potential, not the force. There
are cases (for example, solenoids) where there is a non-zero potential even though there is no
force. In classical physics, a non-zero potential (with zero force) has no effect, but it leads to an
observable effect in quantum mechanics (read about the Aharonov–Bohm effect). At this point,
we restrict our attention to time-independent potentials in one space dimension (later in the
course, and in QMech II, we will discuss what happens when we lift these two assumptions).
Currently, our attention is on the potential V(x).
We focus on separable solutions of the Schrödinger equation, and hence we need to find
solutions of the time-independent Schrödinger equation. Any solution of the time-independent
Schrödinger equation is determined by two time-independent inputs: a) the energy of the
solution/state, and b) the potential appearing in the Schrödinger equation. At this point, one
can ask what the allowed form of the potential is. We want the wave function to be continuous,
and that puts a restriction on the potential. For example, consider the following function:

f(x) = 1 ,    x ∈ rational numbers
     = 0 ,    otherwise    (2.70)

Such a potential is too irregular for the Schrödinger equation to have continuous solutions, so
we exclude potentials of this kind.

We start with a list of “good” potentials:

1. λ δ(x − a)

2. the step function Θ(x − a)

3. sin(ax)

4. 1/(x² + a²)

In order to understand a generic potential, we first make a list of checks. Firstly, we check
whether lim_{x→±∞} V(x) exists or not. For example, if we take V(x) = sin(ax), then this limit
does not exist. We restrict to the case when both limits exist. Let us say

lim_{x→−∞} V(x) = V− ,    lim_{x→∞} V(x) = V+    (2.71)

We denote the maxima and the minima of the potential by Vmax and Vmin. Then

−∞ < Vmin ≤ V+, V− ≤ Vmax ≤ ∞    (2.72)

We restrict to the case where Vmin is a finite quantity (apart from the cases where the potential
has delta function contributions); otherwise, the system is not stable and hence is not physical.
The solution of the Schrödinger equation in a generic potential can be broadly classified into
two categories

1. Bound state
If the energy of a state is less than both V+ and V−, then such a state can never reach
x = ±∞. Such a state is known as a bound state. Bound states exist iff

V+, V− ≥ E > Vmin    (2.73)

2. Scattering state
If the energy of a state is greater than V+ (and/or V−), then such a state can travel to
x = ±∞. Such a state is known as a scattering state. Such states exist only if

E ≥ V+ , V−    (2.74)

This means that finite-energy scattering solutions exist iff

V+ , V− < ∞    (2.75)

Depending on the value of V+ , V− , Vmin , Vmax , we can classify potentials in four broad categories:

1. All four are equal.


V− = Vmin = V+ = Vmax (2.76)

In the literature, this is known as the free particle.

2. Three of them are the same
This can happen in two ways:

V− = Vmin = V+    or    V− = Vmax = V+    (2.77)

We refer to these as a potential well and a potential barrier, respectively (see fig. 2.5). Later we
will deal with two special cases of this: the rectangular potential well and the rectangular
potential barrier.

Figure 2.5: Potential barrier

3. Two of them are the same


This can happen in two ways

(a)
V− = Vmin , V+ = Vmax (2.78)

There are many examples of this. One is drawn in fig 2.6. Another example is

V0 tanh(x) (2.79)

Figure 2.6: (Right-)step potential

(b)
V+ = Vmin , V− = Vmax (2.80)

One such example can be found in fig. 2.7.

4. All four are different


All four of them can be different. See fig. 2.8 for an example.

We make two more simplifying assumptions

Figure 2.7: (Left-)step potential

Figure 2.8: All four are different

1. We assume V+ and V− are the same.

V+ = V− (2.81)

It is possible to relax this assumption. We will discuss later what happens when we lift
this assumption.

2. V (x) − V+ is non-zero only in a bounded interval of the real line; it is zero otherwise.
When this assumption is true, we can make a very special type of experiment. Moreover,
in nature, many potentials (but not all) satisfy these criteria. So this is an assumption
which is motivated by experiment.
V(x) = 1/(x² + a²) is an example of a potential for which this is not true.

Scattering experiments: When the potential is non-zero only in a finite interval, we can do a
special type of experiment.
In this case, there is no translation symmetry, and hence one does not expect momentum to
be conserved. But energy is conserved, and for scattering states the energy-momentum relation
implies that the magnitude of k cannot change; only the sign can change. So the incoming and
the outgoing states have the same momentum up to a sign. Please note that this is a special
feature of one dimension; it is not true in higher dimensions. There is another conservation law:
conservation of probability.
Now we discuss scattering in one dimension. For one-dimensional systems, it is often possible
to compute the exact S matrices, which also helps us to understand some general properties
of the S matrix in Quantum Mechanics. We start by discussing plane-wave solutions to the
Schrödinger equation in one dimension.

2.4 Free particle and plane wave solutions


Consider the case when the potential is zero

V (x) = 0 (2.82)

In this case, the Schrödinger equation takes the form

−(ℏ²/2m) d²ψ(x)/dx² = Eψ(x)    (2.83)

For E < 0, any solution of the above equation grows without bound as x → +∞ or x → −∞,
so the associated probability density is unbounded and such solutions do not represent physical
states. For E ≥ 0, the solutions are of the form

ψk(x) = e^{ikx} ,    k = ±√(2mE/ℏ²)    (2.84)

These are called plane-wave solutions, and the plane-wave solutions are NOT square-integrable⁴;
they are only delta-function normalizable, i.e. they satisfy

∫_{−∞}^{∞} ψ*_{k′}(x) ψk(x) dx = δ(k − k′)    (2.85)

Physically this means that these wave functions are not wave functions of any state that can be
observed, since the probability associated with them is not bounded. However, note that we can
always take a “collection” of plane-wave states,

ψ(x) = ∫_{−∞}^{∞} dk g(k) e^{ikx}    (2.86)

This state is normalizable if

∫_{−∞}^{∞} dk |g(k)|² = finite    (2.87)
So even though the plane-wave states are not normalizable, it is possible to construct linear
superpositions of plane-wave states that are normalizable and represent physical states. Not
every linear combination of plane-wave solutions represents a physical state; it depends on
whether the function g(k) in (2.86) satisfies the condition in (2.87) or not. Mathematically, the
plane-wave solutions do not form a Hilbert space; they form a rigged Hilbert space. Many
structures (for example, adjoints of operators) have to be appropriately extended in the case of
a rigged Hilbert space. At this point, we put aside those subtleties. The plane-wave solutions
are also valid for a constant potential V(x) = V0. In that case, the solutions in (2.84) remain
valid with k given by

k = ±√(2m(E − V0)/ℏ²)    (2.88)
These plane wave solutions play an important role in understanding scattering.
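To make the superposition (2.86)–(2.87) concrete, here is a minimal numerical sketch (Python; the Gaussian profile g(k) and the 1/√(2π) Fourier convention are illustrative choices, not fixed by the text):

```python
import numpy as np

# Gaussian profile g(k), normalized so that Int |g(k)|^2 dk = 1
k = np.linspace(-5.0, 15.0, 4001)
k0, sigma = 5.0, 1.0
g = (1.0 / (2 * np.pi * sigma ** 2)) ** 0.25 * np.exp(-(k - k0) ** 2 / (4 * sigma ** 2))

# psi(x) = (1/sqrt(2*pi)) Int dk g(k) e^{ikx}: a normalizable superposition of plane waves
x = np.linspace(-10.0, 10.0, 801)
psi = np.trapz(g[None, :] * np.exp(1j * k[None, :] * x[:, None]), k, axis=1) / np.sqrt(2 * np.pi)

norm_k = np.trapz(np.abs(g) ** 2, k)    # = 1 by construction
norm_x = np.trapz(np.abs(psi) ** 2, x)  # = 1 as well (Parseval): psi is a physical state
```

Each plane wave e^{ikx} is not square-integrable, yet the packet ψ(x) is; this is exactly the point of eqs. (2.86)–(2.87).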
The wave function given in (2.84) is also an eigenstate of the momentum operator:

p̂ ψk(x) = −iℏ ∂ψk(x)/∂x = ℏk ψk(x)    (2.89)

The state represented by this wave function has momentum ℏk.
⁴ A function is called square-integrable if

∫_{−∞}^{∞} |ψ(x)|² dx = finite

The wave function of any physical state should be square-integrable; this follows from the probability
interpretation of the wave function.

2.5 Probability current and unitarity
Let’s consider the time-dependent Schrödinger equation again

−(ℏ²/2m) ∂²Ψ(x,t)/∂x² + V(x,t)Ψ(x,t) = iℏ ∂Ψ(x,t)/∂t    (2.90)

We consider the complex conjugate of this equation, and we get

−(ℏ²/2m) ∂²Ψ*(x,t)/∂x² + V(x,t)Ψ*(x,t) = −iℏ ∂Ψ*(x,t)/∂t    (2.91)

Now we multiply (2.90) by Ψ*(x,t) and (2.91) by −Ψ(x,t) from the left and then add them to get

−(ℏ²/2m) [Ψ*(x,t) ∂²Ψ(x,t)/∂x² − Ψ(x,t) ∂²Ψ*(x,t)/∂x²] = iℏ ∂[Ψ*(x,t)Ψ(x,t)]/∂t    (2.92)

The above equation can be written as

∂ρ(x,t)/∂t + ∂jx(x,t)/∂x = 0    (2.93)

where ρ and jx are defined as

ρ(x,t) = Ψ*(x,t)Ψ(x,t) ,    jx(x,t) = (iℏ/2m)[Ψ(x,t) ∂Ψ*(x,t)/∂x − Ψ*(x,t) ∂Ψ(x,t)/∂x]    (2.94)
Eqn (2.93) takes the form of a continuity equation. One can also do this for the higher-
dimensional Schrödinger equation. In that case, (2.93) is replaced by

∂t ρ(x⃗,t) + ∇⃗ · j⃗(x⃗,t) = 0    (2.95)

The expression for ρ is given in (2.94). The expression for j⃗(x⃗,t) changes, and it is given by

j⃗(x⃗,t) = (iℏ/2m)[Ψ(x⃗,t) ∇⃗Ψ*(x⃗,t) − Ψ*(x⃗,t) ∇⃗Ψ(x⃗,t)]    (2.96)

The physical meaning of this continuity equation follows from the probability interpretation of
quantum mechanics: ρ(x⃗,t) is the probability density to find the particle at position x⃗ at time
t, and j⃗(x⃗,t) is the probability current. The equation tells us that if the probability at any point
changes, then the change can be accounted for by the probability current at that point.
A local continuity equation is always associated with some conservation law (though the
converse is not necessarily true). In this case, the continuity equation is associated with the
conservation of probability: the probability of finding the particle at a point can change with
time only if there is a non-zero flux of the probability current at that point. Local conservation
is a powerful tool to analyze a system; it implies that ρ(x,t) cannot suddenly decrease at one
point and increase at some other point (even though such a scenario would keep the total
probability the same).
For time-independent potentials, we can consider solutions of the time-independent Schrödinger
equation given in (2.11). If we substitute such a solution in (2.95), we get

∇⃗ · j⃗(x⃗) = 0 ,    j⃗(x⃗) = (iℏ/2m)[ψ(x⃗) ∇⃗ψ*(x⃗) − ψ*(x⃗) ∇⃗ψ(x⃗)]    (2.97)

j⃗(x⃗) is time-independent by definition, and it is divergence-less. Consider the special case of
one dimension; in this case, the equation reduces to

dj(x)/dx = 0 ,    j(x) = (iℏ/2m)[ψ(x) dψ*(x)/dx − ψ*(x) dψ(x)/dx]    (2.98)

We can integrate it over a range:

∫_{xa}^{xb} dx (dj(x)/dx) = 0    =⇒    j(xa) = j(xb)    (2.99)

So in one dimension, for a time-independent solution, there is a constant probability current at
every point. The current is coming from −∞ (+∞) and going towards +∞ (−∞).
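As a check, here is a minimal numerical sketch (ℏ = m = 1; the amplitudes A, B and the wavenumber k are illustrative values) showing that the current (2.98) of a superposition A e^{ikx} + B e^{−ikx} is the constant ℏk(|A|² − |B|²)/m at every point:

```python
import numpy as np

hbar, m, k = 1.0, 1.0, 2.0
A, B = 0.8, 0.3  # right- and left-moving amplitudes (illustrative)

x = np.linspace(-5.0, 5.0, 4001)
psi = A * np.exp(1j * k * x) + B * np.exp(-1j * k * x)

# j(x) = (i*hbar/2m)[psi dpsi*/dx - psi* dpsi/dx], eq. (2.98), via a numerical derivative
dpsi = np.gradient(psi, x)
j = ((1j * hbar / (2 * m)) * (psi * dpsi.conj() - psi.conj() * dpsi)).real

expected = hbar * k / m * (abs(A) ** 2 - abs(B) ** 2)  # the constant current
```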

2.6 Plane wave states and S matrix


Consider a one-dimensional quantum mechanical system with a potential V (x) in a localized
region. This means that V (x) = 0 outside an interval in the real line. We have drawn a
representative potential in fig 2.9. The region (interval) where the potential is different from the
asymptotic values (0 in this case) is II, and the region left (right) where the potential is 0 is I
(III).
For example, consider the case of a single delta function:

λ̃ δ(x − a)    (2.100)

In this case, Regions I, II and III are given by

I : x ∈ (−∞, a) ,    II : x = a ,    III : x ∈ (a, ∞)    (2.101)

Figure 2.9: Generic potential in one dimension (Regions I, II and III)

The solution to the Schrödinger equation with definite energy in regions I and III is just a
plane wave:

ψI(x) = AI e^{ikx} + BI e^{−ikx} ,    k ≥ 0    (2.102a)
ψIII(x) = AIII e^{ikx} + BIII e^{−ikx} ,    k ≥ 0    (2.102b)

We have already discussed that plane-wave solutions do not represent physical states. So why
consider them here? We consider them because they form a basis for writing down physical
states; we wrote down such an example in (2.86) and (2.87). Any state with energy E > 0 can
be written down by a suitable choice of g(k). The plane-wave states form a basis for the physical
states. Hence, if we understand what happens to a plane-wave state in the presence of the
potential, we can figure out what would happen to any physical state under the influence of the
potential.
In ψI(x), the first (second) term has positive (negative) momentum and hence represents a
wave travelling towards (away from) the potential. That is why the first (second) term is called
the incoming (outgoing) wave function. For ψIII(x), the second term represents the incoming
wave function, and the first term represents the outgoing wave function.
We define two different basis states; later we will see the utility of these states:

χin = (AI, BIII)ᵀ ,    χout = (BI, AIII)ᵀ    (2.103)

Often we also need the left and right states:

χI = (AI, BI)ᵀ ,    χIII = (AIII, BIII)ᵀ    (2.104)

The probability current (defined in (2.98)) in regions I and III is given by

jI = (ℏkI/m) [|AI|² − |BI|²]    (2.105a)
jIII = (ℏkIII/m) [|AIII|² − |BIII|²]    (2.105b)

Conservation of the probability current, given in (2.99), together with the fact that kI = kIII,
implies

|AI|² + |BIII|² = |BI|² + |AIII|²    (2.106)

This suggests that if the norms of the vectors in (2.103) are defined as

χ†in χin ,    χ†out χout    (2.107)

then both of them have the same norm:

χ†in χin = χ†out χout    (2.108)

Hence the two vectors in (2.103) must be related by a unitary transformation:

χout(k) = S χin(k) ,    S†S = 1    (2.109)

This unitary matrix is called the S matrix (or scattering matrix). It is a 2 × 2 matrix:

S = ( S11  S12 )
    ( S21  S22 )    (2.110)

In the absence of any potential, region II is nonexistent and hence

ψI(x) = ψIII(x)    (2.111)

So the S matrix simply becomes

( 0  1 )
( 1  0 )    (2.112)

We differ from the definition given in some places,

( AIII )   ( S11  S12 ) ( AI   )
( BI   ) = ( S21  S22 ) ( BIII )    (2.113)

The advantage of that definition is that the S-matrix, in the absence of any interaction,
simply becomes the identity matrix 1_{2×2}.

S depends on the potential and on the momentum of the plane-wave state. Often we denote
that functional dependence in the following way:

S(V, k)    (2.114)

Again we will go back to the case of delta functions. Now we are interested in the scattering
solutions

ψI(x) = AI e^{ikx} + BI e^{−ikx} ,    ψIII(x) = AIII e^{ikx} + BIII e^{−ikx}    (2.115)

For future purposes, it is useful to define

λ = 2mλ̃/ℏ²    =⇒    λ̃ = ℏ²λ/2m    (2.116)

The continuity of the wave function at x = a gives

AI e^{ika} + BI e^{−ika} = AIII e^{ika} + BIII e^{−ika}    (2.117)

From eqn (2.18), the condition on the derivative gives

ik [AIII e^{ika} − BIII e^{−ika} − AI e^{ika} + BI e^{−ika}] = λ (AI e^{ika} + BI e^{−ika})    (2.118)

The above two equations give

( e^{ika}      e^{−ika}     ) ( AIII )   ( e^{ika}           e^{−ika}          ) ( AI )
( ik e^{ika}   −ik e^{−ika} ) ( BIII ) = ( (λ+ik) e^{ika}    (λ−ik) e^{−ika}   ) ( BI )    (2.119)

which simplifies to

( AIII )        1   ( −ik e^{−ika}   −e^{−ika} ) ( e^{ika}          e^{−ika}         ) ( AI )
( BIII ) = − ─────  ( −ik e^{ika}     e^{ika}  ) ( (λ+ik) e^{ika}   (λ−ik) e^{−ika}  ) ( BI )
              2ik

               1   ( 2k − iλ        −iλ e^{−2ika} ) ( AI )
          = ─────  ( iλ e^{2ika}     2k + iλ      ) ( BI )    (2.120)
              2k

Here we define the following matrix:

M(k; λ, a) = (1/2k) ( 2k − iλ        −iλ e^{−2ika} )
                    ( iλ e^{2ika}     2k + iλ      )  ,    det M = 1    (2.121)

Note that this matrix relates the left basis and right basis as defined in (2.104). This definition
can be generalised to any potential.

IISERB(2024) QMech I - 39
2.6.1 Transfer matrix
The S-matrix relates in-states to out-states. We can define another matrix, which relates the
left states to the right states:

( AIII )   ( M11  M12 ) ( AI )
( BIII ) = ( M21  M22 ) ( BI )    (2.122)

We define the following two matrices:

M = ( M11  M12 )  ,    Ω = ( 1   0  )
    ( M21  M22 )           ( 0  −1 )    (2.123)

M is called the transfer matrix. Using (2.104), the above equation (2.122) can be written as

χIII = M χI    (2.124)

From (2.105a) and (2.105b), it follows that the natural norm on the left and the right vectors is
the Lorentzian norm (this is essentially a statement of unitarity):

jI = (ℏkI/m) χ†I Ω χI    (2.125a)
jIII = (ℏkIII/m) χ†III Ω χIII    (2.125b)

From conservation of probability and energy conservation, we know

jI = jIII ,    kI = kIII    (2.126)

This implies that the M matrix satisfies

M† Ω M = Ω    (2.127)

2.6.2 Relation to S matrix

From the transfer matrix (2.122) we get

AIII = M11 AI + M12 BI    (2.128a)
BIII = M21 AI + M22 BI    (2.128b)

From the second equation, we get

BI = −(M21/M22) AI + (1/M22) BIII    (2.129)

Putting this in the first equation, we get

AIII = (det M/M22) AI + (M12/M22) BIII    (2.130)

The relation between M and S is therefore

( S11  S12 )      1   ( −M21     1   )
( S21  S22 ) = ─────  ( det M   M12  )    (2.131)
                M22
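Relation (2.131) is easy to implement and test numerically. The sketch below (Python/numpy; the delta-function parameters are illustrative) builds the transfer matrix (2.121), converts it to an S matrix via (2.131), and checks det M = 1 and unitarity:

```python
import numpy as np

def M_delta(k, lam, a):
    # Transfer matrix of a single delta function, eq. (2.121)
    return (1.0 / (2 * k)) * np.array(
        [[2 * k - 1j * lam, -1j * lam * np.exp(-2j * k * a)],
         [1j * lam * np.exp(2j * k * a), 2 * k + 1j * lam]])

def S_from_M(M):
    # S matrix from a transfer matrix, eq. (2.131)
    det = M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]
    return (1.0 / M[1, 1]) * np.array([[-M[1, 0], 1.0],
                                       [det, M[0, 1]]])

M = M_delta(k=1.3, lam=0.7, a=0.4)  # illustrative values
S = S_from_M(M)
detM = np.linalg.det(M)   # should be 1
unit = S.conj().T @ S     # should be the 2x2 identity (unitarity)
```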

Later in section 2.7, we will see that the transfer matrix obeys the product rule; if we know the
transfer matrix for two potentials separately, then we know the transfer matrix for the combined
system. This, in turn, implies that if we know the S matrix of two potentials separately, then
we know the S matrix for the combined system.
Again, we go back to the case of the delta function potential. In that case, the transfer
matrix is given in (2.121). From this, we can determine the S-matrix:

S(k; λ, a) = (1/(2k + iλ)) ( −iλ e^{2ika}    2k            )
                           ( 2k              −iλ e^{−2ika} )    (2.132)
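A direct numerical check of (2.132) (a minimal Python sketch with illustrative parameter values) confirms both its unitarity and the inverse property stated later in (2.135):

```python
import numpy as np

def S_delta(k, lam, a):
    # S matrix of a single delta function, eq. (2.132); lam = 2*m*lambda~/hbar^2
    return (1.0 / (2 * k + 1j * lam)) * np.array(
        [[-1j * lam * np.exp(2j * k * a), 2 * k],
         [2 * k, -1j * lam * np.exp(-2j * k * a)]])

k, lam, a = 1.3, 0.7, 0.4  # illustrative values
S = S_delta(k, lam, a)

unitarity = S.conj().T @ S            # should be the identity
inverse = S_delta(k, -lam, -a) @ S    # should also be the identity, eq. (2.135)
```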

2.6.3 Physical meaning of S: Reflection and transmission coefficients

Now we write the S-matrix in terms of transmission and reflection amplitudes:

( S11  S12 )   ( R11  T12 )
( S21  S22 ) = ( T21  R22 )    (2.133)

Now we will explain the various terms.

1. Consider an incoming wave from Region I. It can either be reflected back into region I
or be transmitted to region III.

(a) If the wave is reflected back, we denote it as (I→I). The corresponding reflection
amplitude is R11 (the reflection probability is |R11|²).
(b) If the wave is transmitted to region III, we denote it as (I→III). The corresponding
transmission amplitude is T21.

2. Consider an incoming wave from Region III.

(a) (III→III): the reflection amplitude is R22.
(b) (III→I): the transmission amplitude is T12.

There are four different processes. In the presence of symmetries of the potential, some of these
processes are mapped into each other. We discuss this in section 3.1.
Again, we go back to the case of the delta-function potential; its S matrix is given in (2.132).
In this case, the reflection and transmission coefficients are

R = −iλ e^{2ika}/(2k + iλ) ,    T = 2k/(2k + iλ)    (2.134)

It is easy to check a very special property of the delta-function potential:

S(k; λ, a) S(k; −λ, −a) = S(k; −λ, −a) S(k; λ, a) = 1_{2×2}    (2.135)

2.6.4 Bound states in a single delta function

Bound states have normalizable wave functions. Motivated by equation (2.20), we consider a
wave function of the following form:

ψ(x) = N exp[−κ |x − a|]    (2.136)
At this point, κ is an unknown positive constant. The wave function is continuous at x = a, and
the discontinuity in the derivative is

dψ/dx|_{x=a+} − dψ/dx|_{x=a−} = −2κN    (2.137)

Comparing this with (2.20) we get

κ = −mλ̃/ℏ²    (2.138)

The solution is normalizable only if λ̃ < 0 (so that κ > 0). So there is only one bound state, and
that exists only for λ̃ < 0. The energy of the state is (bringing back ℏ and the mass)

E = −mλ̃²/2ℏ²    (2.139)

2.7 Effect of multiple non-overlapping potentials


Up to this point, we considered potentials which are zero outside a finite interval. Let us now
consider what happens when more than one such potential Vi(x) is present, and they are
non-overlapping. Let us say

Vi(x) ≠ 0 iff x ∈ [ai, bi]    (2.140)

Then there is no intersection between any two such intervals:

∀ i ≠ j ,    [ai, bi] ∩ [aj, bj] = ∅    (2.141)

Alternatively, we can write the non-overlapping condition as

∀ i ≠ j ,    Vi(x)Vj(x) = 0 ,    ∀ x ∈ (−∞, ∞)    (2.142)

In such a situation, it is possible to compute the S matrix for the combined potential

V(x) = Σᵢ Vᵢ(x)    (2.143)

from the knowledge of the S matrices of the individual potentials. Since the potentials are non-
overlapping, let us choose the labels i in an ordered fashion:

i < j    =⇒    bi < aj    (2.144)

i.e. if i < j, then the potential Vi(x) lies to the left of the potential Vj(x).


In order to find the S matrix, we proceed systematically.

2.7.1 Shifting the potential


Now we want to consider the action of translations on the S-matrix. More precisely, suppose
we know the S-matrix S[V] for a potential V(x). We want to know how it is related to the
S-matrix S[U] for a potential U(x) which is simply given by

U(x) = V(x − a)    (2.145)

We want to find the relation between S[U] and S[V] (and also between M[U] and M[V]). Let us
define a new coordinate y = x − a. Then our potential is simply V(y). From (2.102a) and
(2.102b) we get the wave functions

ψI(y) = AI e^{iky} + BI e^{−iky} ,    k ≥ 0    (2.146a)
ψIII(y) = AIII e^{iky} + BIII e^{−iky} ,    k ≥ 0    (2.146b)

In terms of these wave functions, the scattering matrix and the transfer matrix are simply S and
M; the effect of translating the potential has been nullified by the choice of coordinate. Let us
now write them in terms of the x coordinate:

ψI(x) = AI e^{ik(x−a)} + BI e^{−ik(x−a)} ,    k ≥ 0    (2.147a)
ψIII(x) = AIII e^{ik(x−a)} + BIII e^{−ik(x−a)} ,    k ≥ 0    (2.147b)

The in states and the out states transform as

χin −→ χ′in = U[−a, k] χin    (2.148a)
χout −→ χ′out = U[a, k] χout    (2.148b)

where U is given by

U[a, k] = ( e^{ika}   0        )
          ( 0         e^{−ika} )  ,    U⁻¹[a, k] = U[−a, k]    (2.149)
0 e

Using the definition of the S matrix,

χout = S χin ,    χ′out = S′ χ′in    (2.150)

the above transformation rules give the relation

S[U, k] = U[a, k] S[V, k] U[a, k]    (2.151)

Similarly, we can write down relations for the transfer matrix. Both the left and the right states
transform with U[−a, k],

χ′I = U[−a, k] χI ,    χ′III = U[−a, k] χIII    (2.152)

so the transfer matrix transforms as

M[U] = U[−a, k] M[V] U[a, k]    (2.153)

Again we go back to the case of the delta function. The transfer matrix for a delta function
located at x = 0 is M(k; λ, 0). From the expressions in (2.121) we can check that⁵

M(k; λ, a) = U[−a, k] M(k; λ, 0) U[a, k] ,    S(k; λ, a) = U[a, k] S(k; λ, 0) U[a, k]    (2.154)

which is consistent with (2.153) and (2.151). The expression for U[a, k] can be found in (2.149).
⁵ For any arbitrary 2 × 2 matrix,

( e^{−ika}  0       ) ( a  b ) ( e^{ika}  0        )   ( a             e^{−2ika} b )
( 0         e^{ika} ) ( c  d ) ( 0        e^{−ika} ) = ( e^{2ika} c    d           )

2.7.2 Two non-overlapping potentials
Let us now consider the case when there are two such potentials (see fig. 2.10). Say the
potential is given by
V(x) = V1(x) + V2(x)    (2.155)
V1(x) is non-zero only in region II, and V2(x) is non-zero only in region IV. In this case, we want
to find the transfer matrix that relates region V to region I:

χV = M[V] χI    (2.156)

Since the potential in region III is zero, from (2.124) it follows that

χIII = M[V1] χI ,    χV = M[V2] χIII    =⇒    χV = M[V2] M[V1] χI    (2.157)

Comparing (2.156) and the third equation in (2.157), we get

M[V] = M[V2] M[V1]    (2.158)

Please note that the transfer matrices appear in the product in a particular order. If we have
multiple potentials as given in (2.143) (with the convention given in (2.144)), the transfer matrix
for the full potential is given by

M[V] = M[Vn] · · · M[V2] M[V1]    (2.159)

Figure 2.10: Two non-overlapping potentials in one dimension (Regions I–V)
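The product rule (2.158) can be checked numerically. The sketch below (Python/numpy; the two delta functions and their parameters are illustrative) composes the transfer matrices of two non-overlapping deltas and verifies that the combined transfer matrix still satisfies det M = 1 and the unitarity relation M†ΩM = Ω of (2.127):

```python
import numpy as np

def M_delta(k, lam, a):
    # Transfer matrix of a single delta function, eq. (2.121)
    return (1.0 / (2 * k)) * np.array(
        [[2 * k - 1j * lam, -1j * lam * np.exp(-2j * k * a)],
         [1j * lam * np.exp(2j * k * a), 2 * k + 1j * lam]])

k = 1.1
M1 = M_delta(k, 0.5, -1.0)   # delta on the left (illustrative)
M2 = M_delta(k, -0.8, 1.0)   # delta on the right (illustrative)
M_total = M2 @ M1            # rightmost factor = leftmost potential, eq. (2.158)

Omega = np.diag([1.0, -1.0])
detM = np.linalg.det(M_total)                   # should be 1
lorentz = M_total.conj().T @ Omega @ M_total    # should equal Omega, eq. (2.127)
```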

2.8 Transfer matrix in terms of reflection and transmission coefficients

The S matrix can be written in terms of reflection and transmission coefficients, and this allows
us to write the transfer matrix in terms of reflection and transmission coefficients as well:

S = ( r′  t )            ( t′ − rr′/t    r/t )
    ( t′  r )    =⇒  M = ( −r′/t         1/t )    (2.160)

Let us now consider a series of potentials. Their S matrices, and hence their transfer matrices,
are given by

Mi = ( t′i − ri r′i/ti    ri/ti )              ( r′i  ti )
     ( −r′i/ti            1/ti  )   ⇐⇒  S[Mi] = ( t′i  ri )    (2.161)

Then consider a case when Vi and Vj are both present, with Vj to the left of Vi. Then the
transfer matrix is given by

Mi Mj = (1/titj) ( (ri r′i − ti t′i)(rj r′j − tj t′j) − ri r′j    rj(ti t′i − ri r′i) + ri )
                 ( r′j(r′i rj − 1) − r′i tj t′j                   1 − r′i rj             )    (2.162)
Mi Mj = 󰁃 rj′ (ri′ rj − 1) − ri′ tj rj′ 1 − ri′ rj 󰁄 (2.162)
ti tj

Then the S matrix of the combined potential is given by

S[Mi Mj] = ( r′j + r′i tj t′j/(1 − r′i rj)     ti tj/(1 − r′i rj)           )
           ( t′i t′j/(1 − r′i rj)              ri + ti t′i rj/(1 − r′i rj)  )    (2.163)
All these components have a physical meaning. Let us equate them to the transmission and
reflection coefficients of the combined system:

( r′j + r′i tj t′j/(1 − r′i rj)     ti tj/(1 − r′i rj)           )   ( r′ij  tij )
( t′i t′j/(1 − r′i rj)              ri + ti t′i rj/(1 − r′i rj)  ) = ( t′ij  rij )    (2.164)
We know that the transfer matrix satisfies a product relation (see (2.157)). However, this is not
true for S matrices. It is clear from the expressions in (2.163) and (2.161) that

S[Mi Mj] ≠ S[Mi] S[Mj]    (2.165)

2.8.1 Usefulness of the transfer matrix

The transfer matrix is extremely useful. We list its advantages here:

1. For non-overlapping potentials, we can construct the transfer matrix of the combined
system from the transfer matrices of the individual potentials: it is simply the product of the
individual transfer matrices, as shown in (2.159). The product relation is not true for S
matrices!

2. It is possible to find the bound states from the transfer matrix; we will discuss that in
section 3.4.

2.9 Example of non-overlapping potentials: Two delta functions

Now we consider the system which consists of two delta functions:

V(x) = λ1 δ(x − a1) + λ2 δ(x − a2) ,    a1 ≤ a2    (2.166)

2.9.1 Bound states
At first, we analyze the bound states. Bound states have normalizable solutions. Let us consider
a wave function of the form

ψ(x) = NI e^{κ(x−a1)}                    x < a1
     = NII+ e^{κx} + NII− e^{−κx}        a1 < x < a2    (2.167)
     = NIII e^{−κ(x−a2)}                 x > a2

Condition at x = a1: Continuity at x = a1 gives

NI = NII+ e^{κa1} + NII− e^{−κa1}    (2.168)

Now we want to find the condition on the derivative:

κ [NII+ e^{κa1} − NII− e^{−κa1} − NI] = λ1 NI
=⇒ κ [NII+ e^{κa1} − NII− e^{−κa1}] = (λ1 + κ) NI    (2.169)

Using (2.168) in the above equation, we get

NII+ = −[(2κ + λ1)/λ1] e^{−2κa1} NII−    (2.170)

Condition at x = a2: Continuity at x = a2 gives

NIII = NII+ e^{κa2} + NII− e^{−κa2}    (2.171)

The condition on the derivative gives

κ [−NIII − NII+ e^{κa2} + NII− e^{−κa2}] = λ2 NIII
=⇒ κ [NII+ e^{κa2} − NII− e^{−κa2}] = −(κ + λ2) NIII    (2.172)

Using (2.171) in the above equation, we get

NII− = −[(2κ + λ2)/λ2] e^{2κa2} NII+    (2.173)

From (2.170) and (2.173) we get

[(2κ + λ1)/λ1] e^{−2κa1} [(2κ + λ2)/λ2] e^{2κa2} = 1    (2.174)

Note that for λ2 = 0 we get back (2.138). The solutions of the above equation depend on the
following three dimensionless parameters:

λ2/λ1 ,    a2/a1 ,    λ1 a2    (2.175)

Then, in order to find the bound states, we need to solve

4κ² + 2κ(λ1 + λ2) + λ1λ2 [1 − e^{−2(a2−a1)κ}] = 0    (2.176)

We would like to study various limits first, before solving this equation.
1. Coincident limit: a2 − a1 = 0. Let the distance between the two delta functions go to
zero; the potential reduces to a single delta function of strength λ1 + λ2. In that case, the
equation reduces to

2κ [2κ + (λ1 + λ2)] = 0    =⇒    κ = −(λ1 + λ2)/2    (2.177)

So in the coincident limit, there is one bound state iff λ1 + λ2 < 0.

2. Infinite separation limit: We now consider the other extreme limit, a2 − a1 → ∞. In
this case, the equation reduces to

(2κ + λ1)(2κ + λ2) = 0    (2.178)

So in this case we have two bound states, provided λ1, λ2 < 0. If the two deltas have
opposite signs, then there is only one bound state.
From this analysis, it is clear that as we change a2 − a1 from zero to ∞, a new bound state
appears! We can show this phenomenon in Mathematica.
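As an alternative to Mathematica, here is a minimal Python sketch (the strengths and separations below are illustrative values) that counts the positive roots of (2.176) by looking for sign changes on a grid, and reproduces the appearance of the second bound state at larger separation:

```python
import numpy as np

def f(kappa, lam1, lam2, d):
    # Left-hand side of the bound-state condition, eq. (2.176), with d = a2 - a1
    return (4 * kappa ** 2 + 2 * kappa * (lam1 + lam2)
            + lam1 * lam2 * (1 - np.exp(-2 * d * kappa)))

def count_bound_states(lam1, lam2, d, kmax=4.0, n=200001):
    kappa = np.linspace(1e-4, kmax, n)
    vals = f(kappa, lam1, lam2, d)
    # each sign change of f on the grid corresponds to one root kappa > 0
    return int(np.sum(np.sign(vals[:-1]) != np.sign(vals[1:])))

# Two equal attractive deltas (lam = -2): one bound state when close, two when far apart
n_close = count_bound_states(-2.0, -2.0, d=0.1)
n_far = count_bound_states(-2.0, -2.0, d=2.0)
```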

3. Opposite strengths: Let us now consider the case when

λ1 = −λ2 = λ ,    a2 − a1 = a    (2.179)

Eqn (2.176) reduces to

4κ² = λ²(1 − e^{−2κa})    (2.180)

For every separation a > 0, this equation has exactly one solution with κ > 0 (for small
separation the root behaves as κ ≈ aλ²/2), so this potential supports at most one bound
state, and the binding becomes arbitrarily weak as the separation between the deltas
shrinks — consistent with the fact that the two deltas cancel in the coincident limit.

2.9.2 Scattering states

We can use Mathematica to compute the transfer matrix and the S matrix of this system, check
that (2.163) is valid in this case, and check that the S matrix is unitary.

2.10 Quantum tunneling

We have already analyzed the case of the double delta function. Consider the situation when
the strengths of both delta functions are negative. In that case, we found a peculiar feature:
the number of bound states changes as the separation between the delta functions changes. In
particular, there is one bound state at zero separation and two bound states at infinite
separation. A natural question that arises is why the number of bound states changes as a
function of separation. In this section, we will try to understand that question. The answer lies
in an intrinsically quantum phenomenon known as quantum tunnelling. In order to understand
tunnelling, we start with step potentials, and then we will consider potential barriers.

2.10.1 Step potentials
Consider a potential of the following form:

V(x) = 0    x < x⋆
     = V    x > x⋆    (2.182)

We call it a right-step potential because the step is towards the right. Similarly, one can
consider a left-step potential.

Figure 2.11: Particle in an interval with a step potential

This potential has no bound states; it only has scattering states. But there are two types of
scattering-state solutions:

• E ≥ V

• E < V
Scattering solutions: E > V. The wave function takes the following form:

ψI(x) = AI e^{ikI x} + BI e^{−ikI x} ,    x < x⋆    (2.183)
ψII(x) = AII e^{ikII x} + BII e^{−ikII x} ,    x > x⋆    (2.184)

The continuity of ψ(x) and ψ′(x) at x = x⋆ gives

AI e^{ikI x⋆} + BI e^{−ikI x⋆} = AII e^{ikII x⋆} + BII e^{−ikII x⋆}    (2.185a)
kI [AI e^{ikI x⋆} − BI e^{−ikI x⋆}] = kII [AII e^{ikII x⋆} − BII e^{−ikII x⋆}]    (2.185b)

So, in this case, the transfer matrix is given by

T_rightstep[kI, kII, x⋆] = (1/2kII) ( e^{ix⋆(kI−kII)} (kI + kII)     e^{−ix⋆(kI+kII)} (kII − kI) )
                                    ( e^{ix⋆(kI+kII)} (kII − kI)     e^{−ix⋆(kI−kII)} (kI + kII) )    (2.186)
From the definition, it is clear that

(T_rightstep[kI, kII, x⋆])⁻¹ = T_rightstep[kII, kI, x⋆]    (2.187)

Then we can compute the S matrix:

S_rightstep[kI, kII, x⋆] = (1/(kI + kII)) ( e^{2ix⋆kI} (kI − kII)       2 e^{ix⋆(kI−kII)} kII )
                                          ( 2 e^{ix⋆(kI−kII)} kI       e^{−2ix⋆kII} (kII − kI) )    (2.188)

Consider the special case when x⋆ = 0:

S_rightstep[kI, kII, 0] = (1/(kI + kII)) ( kI − kII    2kII      )
                                         ( 2kI         kII − kI )    (2.189)

From these equations, we get

RI ≡ BI = (kI − kII)/(kI + kII) ,    TII ≡ AII = 2kI/(kI + kII)    (2.190)

Probability currents   We choose the initial wave to come only from x = −∞. Then

A_I = 1 ,   B_{II} = 0   (2.191)

The incident, reflected and transmitted probability currents are given by

j_{incident} = 2ℏ k_I   (2.192a)
j_{reflected} = 2ℏ k_I |R_I|^2   (2.192b)
j_{transmitted} = 2ℏ k_{II} |T_{II}|^2   (2.192c)

It is easy to check that

j_{incident} = j_{reflected} + j_{transmitted}   (2.193)

Note that when E = V (i.e. k_{II} = 0), the transmitted current vanishes.
The limit V → ∞ gives quantum mechanics on a half-line with a Dirichlet boundary condition.
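The conservation law (2.193) can be confirmed with a few lines of Python (our own sketch; the chosen wavenumbers are arbitrary). The common factor 2ℏ drops out of the balance:

```python
def step_amplitudes(kI, kII):
    """Reflection and transmission amplitudes of eq. (2.190)."""
    R = (kI - kII) / (kI + kII)   # R_I = B_I
    T = 2 * kI / (kI + kII)       # T_II = A_II
    return R, T

kI, kII = 2.0, 1.3
R, T = step_amplitudes(kI, kII)
# currents of eq. (2.192), divided by the common factor 2*hbar
j_in, j_ref, j_tr = kI, kI * R**2, kII * T**2
assert abs(j_in - (j_ref + j_tr)) < 1e-12   # eq. (2.193)
```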

Scattering state solutions: E < V   We are interested in the solutions of the time-independent Schrodinger equation with

E < V   (2.194)

In this case we define the following variables

k = \sqrt{\frac{2mE}{ℏ^2}} ,   κ = \sqrt{\frac{2m(V-E)}{ℏ^2}}   (2.195)

The piecewise solution is given by

Ψ_L(x) = A_L e^{ikx} + B_L e^{-ikx} ,   Ψ_R(x) = A_R e^{-κx} + B_R e^{κx}

We can see that this is related to (2.184) by

k_{II} → iκ   (2.196)

Another important feature of this solution is that it is non-zero in region II; this implies that the probability of finding the particle in region II is non-zero. Again, this is a sharp departure from classical behaviour: in classical mechanics, if a particle has energy less than the height of the potential barrier, then it remains confined to region I.
At x = x_⋆, we impose continuity of the wave function and of its derivative to obtain

A_L e^{ikx_⋆} + B_L e^{-ikx_⋆} = A_R e^{-κx_⋆} + B_R e^{κx_⋆} ,   ik\left(A_L e^{ikx_⋆} - B_L e^{-ikx_⋆}\right) = -κ\left(A_R e^{-κx_⋆} - B_R e^{κx_⋆}\right)   (2.197)
Then from the above equation

\frac{A_L e^{ikx_⋆}}{B_L e^{-ikx_⋆}} = \frac{-(κ-ik) A_R e^{-κx_⋆} + (κ+ik) B_R e^{κx_⋆}}{(κ+ik) A_R e^{-κx_⋆} - (κ-ik) B_R e^{κx_⋆}}   (2.198)
Now we define

κ + ik = \sqrt{k^2+κ^2}\, e^{iθ(k,κ)} ,   \tan θ(k,κ) = \frac{k}{κ}   (2.199)
Putting this back into the above equation, we write A_L and B_L in terms of A_R and B_R:

\frac{A_L e^{ikx_⋆}}{B_L e^{-ikx_⋆}} = -\frac{A_R e^{-κx_⋆ - iθ(k,κ)} - B_R e^{κx_⋆ + iθ(k,κ)}}{A_R e^{-κx_⋆ + iθ(k,κ)} - B_R e^{κx_⋆ - iθ(k,κ)}}   (2.200)

We can also write A_R and B_R in terms of A_L and B_L:

\frac{A_R e^{-κx_⋆}}{B_R e^{κx_⋆}} = \frac{(κ-ik) A_L e^{ikx_⋆} + (κ+ik) B_L e^{-ikx_⋆}}{(κ+ik) A_L e^{ikx_⋆} + (κ-ik) B_L e^{-ikx_⋆}}   (2.201)
Now we substitute (2.199) to get

\frac{A_R e^{-κx_⋆}}{B_R e^{κx_⋆}} = \frac{A_L e^{ikx_⋆ - iθ(k,κ)} + B_L e^{-ikx_⋆ + iθ(k,κ)}}{A_L e^{ikx_⋆ + iθ(k,κ)} + B_L e^{-ikx_⋆ - iθ(k,κ)}}   (2.202)

A few cases for consideration:

1. Right region extended to infinity
In this case, B_R = 0. Hence, (2.200) reduces to

\frac{A_L e^{ikx_⋆}}{B_L e^{-ikx_⋆}} = -e^{-2iθ(k,κ)}   (2.203)

2. V → ∞: In this case, (2.200) reduces to

\frac{A_L e^{ikx_⋆}}{B_L e^{-ikx_⋆}} = -1   (2.204)

Property of the angle defined in (2.199)   Let's consider the angle θ(k,κ) defined in (2.199):

\tan θ(k,κ) = \frac{k}{κ} = \sqrt{\frac{E}{V-E}}   (2.205)

We can see that it is a strictly increasing function of E. We can also see that

0 ≤ E ≤ V   =⇒   0 ≤ θ(k,κ) ≤ \frac{π}{2}   (2.206)

We can also take the limit V → ∞ at constant E. In that case, we get

\lim_{V→∞} θ(k,κ) = 0   (2.207)
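These three properties can be checked numerically; the sketch below (ours; the sample values of E and V are arbitrary) uses tan θ = √(E/(V−E)) from (2.205):

```python
import math

def theta(E, V):
    # tan θ(k, κ) = k/κ = sqrt(E/(V-E)), eq. (2.205)
    return math.atan(math.sqrt(E / (V - E)))

V = 10.0
thetas = [theta(0.1 * V * n, V) for n in range(1, 10)]
assert all(a < b for a, b in zip(thetas, thetas[1:]))   # strictly increasing in E
assert all(0 < t < math.pi / 2 for t in thetas)          # bounded by π/2, eq. (2.206)
assert theta(1.0, 1e12) < 1e-5                           # θ → 0 as V → ∞, eq. (2.207)
```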

2.10.2 Left-step potential

We now consider a left-step potential, which is given by

V(x) = \begin{cases} V & x < x̂ \\ 0 & x > x̂ \end{cases}   (2.208)

This is similar to the previous potential with regions I and II interchanged. We are first interested in the solutions with

E > V   (2.209)
We can compute the transfer matrix and the S matrix from the previous formulae:

T_{leftstep}[k_I, k_{II}, x̂] = (T_{rightstep}[k_{II}, k_I, x̂])^{-1} = \frac{1}{2k_{II}} \begin{pmatrix} e^{ix̂(k_I-k_{II})}(k_I+k_{II}) & e^{-ix̂(k_I+k_{II})}(k_{II}-k_I) \\ e^{ix̂(k_I+k_{II})}(k_{II}-k_I) & e^{-ix̂(k_I-k_{II})}(k_I+k_{II}) \end{pmatrix}   (2.210)
Then we can compute the S matrix

S_{leftstep}[k_I, k_{II}, x̂] = \frac{1}{k_I+k_{II}} \begin{pmatrix} e^{2ix̂ k_I}(k_I-k_{II}) & 2e^{ix̂(k_I-k_{II})} k_{II} \\ 2e^{ix̂(k_I-k_{II})} k_I & e^{-2ix̂ k_{II}}(k_{II}-k_I) \end{pmatrix}   (2.211)

Let's now look for the solutions with

E < V   (2.212)

In this case we define the following variables

k = \sqrt{\frac{2mE}{ℏ^2}} ,   κ = \sqrt{\frac{2m(V-E)}{ℏ^2}}   (2.213)

The piecewise solution is given by

Ψ_L(x) = A_L e^{-κx} + B_L e^{κx} ,   Ψ_R(x) = A_R e^{ikx} + B_R e^{-ikx}   (2.214)
Then we get from (2.200), interchanging L and R:

\frac{A_R e^{ikx̂}}{B_R e^{-ikx̂}} = -\frac{A_L e^{-κx̂ - iθ(k,κ)} - B_L e^{κx̂ + iθ(k,κ)}}{A_L e^{-κx̂ + iθ(k,κ)} - B_L e^{κx̂ - iθ(k,κ)}}   (2.215)

Then from (2.202),

\frac{A_L e^{-κx̂}}{B_L e^{κx̂}} = \frac{A_R e^{ikx̂ - iθ(k,κ)} + B_R e^{-ikx̂ + iθ(k,κ)}}{A_R e^{ikx̂ + iθ(k,κ)} + B_R e^{-ikx̂ - iθ(k,κ)}}   (2.216)
A few cases for consideration:

1. Left region extended to infinity
In this case, A_L = 0. Hence, (2.215) reduces to

\frac{A_R e^{ikx̂}}{B_R e^{-ikx̂}} = -e^{2iθ(k,κ)}   (2.217)

2. V → ∞: In this case we use (2.207) in (2.215) to get

\frac{A_R e^{ikx̂}}{B_R e^{-ikx̂}} = -1   (2.218)
2.10.3 Potential barrier

Consider the following potential

V(x) = \begin{cases} 0 & |x| > ℓ/2 \\ V & |x| < ℓ/2 \end{cases}   (2.219)
Here V is a positive number. We have depicted the potential in fig. 2.12. The potential has
parity symmetry

V (−x) = V (x) (2.220)

The Schrodinger equation in the presence of this potential has two different types of solutions depending on the energy:

E < V ,   E > V   (2.221)

Figure 2.12: Particle on the real line with a potential barrier of height V between x = −ℓ/2 and x = ℓ/2
Scattering solution for E > V

We start by considering the case E > V. The momenta in regions I and II are given by

k_I = \sqrt{\frac{2mE}{ℏ^2}} ,   k_{II} = \sqrt{\frac{2m(E-V)}{ℏ^2}}   (2.222)
We start by writing down the transfer matrix; it is simply a product of two transfer matrices. Again we use Mathematica for the computation and write down the results here. The transfer matrix is given by

T_{barrier}[k_I, k_{II}, ℓ] = T_{leftstep}\left[k_{II}, k_I, \frac{ℓ}{2}\right] T_{rightstep}\left[k_I, k_{II}, -\frac{ℓ}{2}\right]   (2.223)
We can compute this using Mathematica. It is given by

\frac{1}{4k_I k_{II}} \begin{pmatrix} (k_I+k_{II})^2 e^{-iℓ(k_I-k_{II})} - (k_I-k_{II})^2 e^{-iℓ(k_I+k_{II})} & -2i(k_I-k_{II})(k_I+k_{II})\sin(k_{II}ℓ) \\ 2i(k_I-k_{II})(k_I+k_{II})\sin(k_{II}ℓ) & (k_I+k_{II})^2 e^{iℓ(k_I-k_{II})} - (k_I-k_{II})^2 e^{iℓ(k_I+k_{II})} \end{pmatrix}   (2.224)
From the transfer matrix, we can compute the S matrix:

\begin{pmatrix} \frac{(k_I-k_{II})(k_I+k_{II})}{k_I^2+2ik_Ik_{II}\cot(k_{II}ℓ)+k_{II}^2} & \frac{2ik_Ik_{II}}{(k_I^2+k_{II}^2)\sin(k_{II}ℓ)+2ik_Ik_{II}\cos(k_{II}ℓ)} \\ \frac{2ik_Ik_{II}}{(k_I^2+k_{II}^2)\sin(k_{II}ℓ)+2ik_Ik_{II}\cos(k_{II}ℓ)} & \frac{(k_I-k_{II})(k_I+k_{II})}{k_I^2+2ik_Ik_{II}\cot(k_{II}ℓ)+k_{II}^2} \end{pmatrix} e^{-ik_Iℓ}   (2.225)
The reflection probability is given by

\frac{(k_I^2-k_{II}^2)^2 \sin^2(k_{II}ℓ)}{4k_I^2k_{II}^2 + (k_I^2-k_{II}^2)^2 \sin^2(k_{II}ℓ)}   (2.226)

The transmission probability is given by

\frac{4k_I^2k_{II}^2}{4k_I^2k_{II}^2 + (k_I^2-k_{II}^2)^2 \sin^2(k_{II}ℓ)}   (2.227)
We can see that the transmission and reflection probabilities are functions of three quantities: ℓ, k_I and k_{II}; from eqn (2.222) we know that k_I and k_{II} can be written as functions of the energy E and the height V of the potential. Another important feature is that even if E > V, the reflection coefficient is generically non-zero. This is not true classically: in classical mechanics, a particle is never reflected back if its energy is more than the height of the potential.
Let's first consider the functional dependence on ℓ. If we vary ℓ (keeping E and V unchanged), then \sin^2(k_{II}ℓ) varies between 0 and 1. The transmission probability is maximum when

k_{II}ℓ = nπ   (2.228)

In this case the transmission probability is 1 and the reflection probability is 0: the complete wave is transmitted to the other side. Note that the condition also depends on the energy of the wave. For any potential, there are certain frequencies which are completely transmitted; but this is never true for all possible values of the frequency.
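A quick numerical check of the resonance condition (2.228), using the transmission probability (2.227) (a Python sketch of ours; the wavenumbers are arbitrary sample values):

```python
import math

def T_barrier(kI, kII, ell):
    """Transmission probability of eq. (2.227)."""
    s2 = math.sin(kII * ell) ** 2
    return 4 * kI**2 * kII**2 / (4 * kI**2 * kII**2 + (kI**2 - kII**2) ** 2 * s2)

kI, kII = 2.0, 1.25
for n in (1, 2, 3):
    # at k_II * ell = n*pi the barrier is perfectly transparent
    assert abs(T_barrier(kI, kII, n * math.pi / kII) - 1.0) < 1e-12
# for a generic width, some reflection always survives
assert T_barrier(kI, kII, 0.5 * math.pi / kII) < 1.0
```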
We already know that the minimum value of the reflection probability is zero. The maximum value of the reflection probability is

\frac{(k_I^2-k_{II}^2)^2}{(k_I^2+k_{II}^2)^2}   (2.229)
[Plot omitted: values 1 and (k_I^2-k_{II}^2)^2/(k_I^2+k_{II}^2)^2 on the vertical axis; ticks at ℓ = π/k_{II} and 2π/k_{II} on the horizontal axis.]

Figure 2.13: Transmission coefficient of a potential barrier
We want to analyze what happens when we keep ℓ unchanged but vary the dimensionless ratio E/V. We will do this after finding the solutions for E < V; this dependence can be found in the next section.
Parity-symmetric arrangement   This is a parity-symmetric potential; we can even compute the parity-even and parity-odd partial waves:

\begin{pmatrix} \frac{(k_I+k_{II})e^{ik_{II}ℓ}+k_I-k_{II}}{(k_I-k_{II})e^{ik_{II}ℓ}+k_I+k_{II}} & 0 \\ 0 & -\frac{-(k_I+k_{II})e^{ik_{II}ℓ}+k_I-k_{II}}{(k_{II}-k_I)e^{ik_{II}ℓ}+k_I+k_{II}} \end{pmatrix} e^{-ik_Iℓ}   (2.230)
Tunnelling solution for E < V

Let's first consider the case when E < V. In classical mechanics, if the particle starts in region I, then it remains in region I for all times: it does not have enough energy to go to region II, and hence also not to region III. To understand the behaviour of the quantum system, we need to solve the time-independent Schrodinger equation in this background. We define

k = \sqrt{\frac{2mE}{ℏ^2}} ,   κ = \sqrt{\frac{2m(V-E)}{ℏ^2}}   (2.231)
Then the transfer matrix is given by

\frac{1}{2κk} \begin{pmatrix} e^{-ikℓ}\left(2κk\cosh(κℓ) + i(k^2-κ^2)\sinh(κℓ)\right) & -i(κ^2+k^2)\sinh(κℓ) \\ i(κ^2+k^2)\sinh(κℓ) & e^{ikℓ}\left(2κk\cosh(κℓ) - i(k^2-κ^2)\sinh(κℓ)\right) \end{pmatrix}   (2.232)
the S matrix is given by

\begin{pmatrix} \frac{k^2+κ^2}{k^2+2ikκ\coth(κℓ)-κ^2} & \frac{2ikκ}{(k-κ)(k+κ)\sinh(κℓ)+2ikκ\cosh(κℓ)} \\ \frac{2ikκ}{(k-κ)(k+κ)\sinh(κℓ)+2ikκ\cosh(κℓ)} & \frac{k^2+κ^2}{k^2+2ikκ\coth(κℓ)-κ^2} \end{pmatrix} e^{-ikℓ}   (2.233)
Our convention is that

E → -|E|   =⇒   k → iκ ,   κ > 0   (2.234)
The transmission probability is

\frac{(2kκ)^2}{(2kκ)^2 + (k^2+κ^2)^2 \sinh^2(κℓ)}   (2.235)

Putting in the values of k and κ, we get

\frac{4E(V-E)}{4E(V-E) + V^2 \sinh^2\left(\frac{\sqrt{2m(V-E)}}{ℏ}\,ℓ\right)}   (2.236)
So the particle has a non-zero probability of being found in region III. Classically this is not possible. This is called tunnelling; it is a purely quantum-mechanical phenomenon, and it has many drastic consequences.
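One drastic feature is how quickly tunnelling is suppressed as the barrier widens. The sketch below evaluates (2.236) in illustrative units (ℏ = m = 1, our own choice of parameters):

```python
import math

hbar, m = 1.0, 1.0   # illustrative units

def T_tunnel(E, V, ell):
    """Tunnelling probability of eq. (2.236), valid for 0 < E < V."""
    kappa = math.sqrt(2 * m * (V - E)) / hbar
    return 4 * E * (V - E) / (4 * E * (V - E) + V**2 * math.sinh(kappa * ell) ** 2)

E, V = 1.0, 4.0
Ts = [T_tunnel(E, V, ell) for ell in (0.5, 1.0, 2.0, 4.0)]
assert all(t > 0 for t in Ts)                  # never exactly zero: the particle can tunnel
assert all(a > b for a, b in zip(Ts, Ts[1:]))  # monotonically suppressed with barrier width
```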

Figure 2.14: Tunnelling solution

Transmission probability as a function of energy   In this section we analyze the transmission probability as a function of energy (without changing ℓ). The expression for the transmission probability is given in (2.236) for E < V and in (2.227) for E > V:

|T(E)|^2 = \frac{4E(V-E)}{4E(V-E) + V^2 \sinh^2\left(\frac{\sqrt{2m(V-E)}}{ℏ}\,ℓ\right)} ,   E < V   (2.237)

|T(E)|^2 = \frac{4E(E-V)}{4E(E-V) + V^2 \sin^2\left(\frac{\sqrt{2m(E-V)}}{ℏ}\,ℓ\right)} ,   E > V   (2.238)
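As a consistency check, the two branches should agree at E = V: expanding sinh and sin for small arguments, both limits give 1/(1 + mVℓ²/(2ℏ²)). A short Python sketch (our own parameter choices):

```python
import math

hbar, m, ell, V = 1.0, 1.0, 1.3, 5.0

def T_prob(E):
    """|T(E)|^2 from eqs. (2.237)-(2.238)."""
    if E < V:
        x = math.sqrt(2 * m * (V - E)) / hbar * ell
        return 4 * E * (V - E) / (4 * E * (V - E) + V**2 * math.sinh(x) ** 2)
    x = math.sqrt(2 * m * (E - V)) / hbar * ell
    return 4 * E * (E - V) / (4 * E * (E - V) + V**2 * math.sin(x) ** 2)

limit = 1.0 / (1.0 + m * V * ell**2 / (2 * hbar**2))   # small-argument expansion at E = V
assert abs(T_prob(V * (1 - 1e-7)) - limit) < 1e-5      # approaching from below
assert abs(T_prob(V * (1 + 1e-7)) - limit) < 1e-5      # approaching from above
```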

2.11 Implications of Quantum tunnelling

Quantum tunnelling is the phenomenon in which a quantum particle can be found in a region where a classical particle cannot go. This is an inherently quantum phenomenon: there is no classical way to understand it, nor is it possible to understand it intuitively. The only possible understanding comes from calculations like the ones presented above and from their experimental verification.

This is the underlying reason why nuclei undergo alpha decay. It leads to many important phenomena. For example,

• there is formation of bands in solids due to Quantum Tunnelling.

• Nuclear alpha decay is possible due to Quantum Tunnelling.

In this section, we provide simple examples to convey these points.

2.11.1 Finite (asymmetric) potential well

Consider the following potential

V(x) = V_I ,   x ≤ a   (Region I)   (2.239)
     = 0 ,   a < x < b   (Region II)
     = V_{III} ,   b ≤ x   (Region III)

We have depicted it in fig. 2.15.
Figure 2.15: Particle in a one-dimensional asymmetric finite well (walls of heights V_I and V_{III}; the well lies between x = a and x = b)
We are assuming V_I, V_{III} > 0. In the limit V_I, V_{III} → ∞, we get back the situation of a quantum particle in a box.
Let's briefly discuss the situation in classical mechanics. In classical mechanics, the minimum energy configuration is the minimum of the potential (in this case, the bottom of the well). In particular, the energy of the minimum energy configuration does not depend on the height of the potential. In the following analysis, we will see that in quantum mechanics the situation changes drastically.
First we focus on a solution when

0 < E < V_I ≤ V_{III}   (2.240)

This corresponds to a bound state. In this case we define the following variables

κ_I = \sqrt{\frac{2m(V_I-E)}{ℏ^2}} ,   k = \sqrt{\frac{2mE}{ℏ^2}} ,   κ_{III} = \sqrt{\frac{2m(V_{III}-E)}{ℏ^2}}   (2.241)

From (2.240), we can see that

E = 0 =⇒ k = 0 , κ_I = \sqrt{\frac{2mV_I}{ℏ^2}} ;   E = V_I =⇒ k = \sqrt{\frac{2mV_I}{ℏ^2}} , κ_I = 0   (2.242)
Then the piecewise solution is given by

Ψ_I(x) = A_I e^{-κ_I x} + B_I e^{κ_I x}
Ψ_{II}(x) = A_{II} e^{ikx} + B_{II} e^{-ikx}   (2.243)
Ψ_{III}(x) = A_{III} e^{-κ_{III} x} + B_{III} e^{κ_{III} x}

Normalizability of the wave function Ψ_I(x) (Ψ_{III}(x)) demands that A_I = 0 (B_{III} = 0). Now we impose continuity of Ψ(x) and Ψ'(x) at x = a and at x = b.

1. At x = a, we use the result (2.217), with L = I, R = II and x̂ = a:

\frac{A_{II}}{B_{II}} = -e^{2iθ(k,κ_I)} e^{-2ika}   (2.244)

2. At x = b, we use the result (2.203), with L = II, R = III and x_⋆ = b:

\frac{A_{II}}{B_{II}} = -e^{-2iθ(k,κ_{III})} e^{-2ikb}   (2.245)
The length of the well is given by L = b − a. Combining these two equations, we get

e^{2ikL} = e^{-2i(θ_I + θ_{III})}   (2.246)

where

θ_I = θ(k, κ_I) ,   θ_{III} = θ(k, κ_{III})   (2.247)

k is an increasing function of E, while κ_I and κ_{III} are decreasing functions of E. So θ_I and θ_{III} are increasing functions of E:

E = 0 =⇒ θ_I = 0 ,   E = V_I =⇒ θ_I = \frac{π}{2}   (2.248)
The solution of this equation is

k = \frac{nπ}{L} - \frac{1}{L}(θ_I + θ_{III}) ,   n ∈ Z^+   (2.249)

The unknown variable in the above equation is E; k, θ_I and θ_{III} depend on the energy and other externally provided parameters. The restriction n ∈ Z^+ comes from the fact that k is a positive quantity.
Case I: Box   Let's consider the case when

V_I = V_{III} = ∞ =⇒ κ_I = κ_{III} = ∞ =⇒ θ_I = θ_{III} = 0   (2.250)

This is the situation of a quantum particle in a box. In this case, (2.246) reduces to

e^{2ikL} = 1   =⇒   k = \frac{nπ}{L} ,   n ∈ Z^+   (2.251)

We denote these solutions as

k^{box}(n) = \frac{nπ}{L}   (2.252)
Case II: Half well   Let's consider the case when

V_I = ∞ =⇒ κ_I = ∞ ,   V_{III} = V =⇒ κ_{III} = κ   (2.253)

For future purposes, we refer to this situation as the half well. In this case, (2.249) reduces to

k^{hwell}(n) = k^{box}(n) - \frac{1}{L} \tan^{-1}\left(\frac{k^{hwell}}{\sqrt{V-(k^{hwell})^2}}\right)   (2.254)

Let's consider the case n = 1; this corresponds to the minimum energy configuration (i.e. the quantum ground state). Since \tan^{-1}\left(k^{hwell}/\sqrt{V-(k^{hwell})^2}\right) is a positive quantity, it implies

k^{hwell}(1) < k^{box}(1)   (2.255)
So the ground state energy of the half well is less than that of the box. This is very different from what we know from classical mechanics: the minimum energy configuration does depend on the height of the potential.

Case III: We focus on the case when

V_I = V_{III} = V =⇒ κ_I = κ_{III} = κ   (2.256)

We refer to this situation as the symmetric well. In this case, (2.249) reduces to

k^{swell}(n) = k^{box}(n) - \frac{2}{L} \tan^{-1}\left(\frac{k^{swell}}{\sqrt{V-(k^{swell})^2}}\right)   (2.257)

From (2.252), (2.254) and (2.257), we can clearly see

k^{swell}(n) < k^{hwell}(n) < k^{box}(n)   ∀n   (2.258)

This provides a hierarchy for the ground state energy.
A few observations   We can see that the ground state depends on the height of the potential and the length of the region:

E_{gs}(V_I, V_{III}, L)   (2.259)

1. E_{gs} is a strictly increasing function of V_I and V_{III}. This is strikingly different from classical mechanics.

2. From the expression in (2.249), we can see that E_{gs} decreases as L increases.
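The hierarchy (2.258) can be verified numerically for the ground state by solving the three quantization conditions. The sketch below (ours) uses units with 2m/ℏ² = 1, so that κ = √(V − k²) as in (2.254); the well length and height are illustrative:

```python
import math

def bisect(g, lo, hi, tol=1e-12):
    # simple bisection, assuming g(lo) < 0 < g(hi) and g monotone increasing
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

L, V = 1.0, 100.0                                       # illustrative parameters
theta = lambda k: math.atan(k / math.sqrt(V - k**2))    # θ(k, κ) with κ = sqrt(V - k²)

k_box = math.pi / L                                                      # eq. (2.252), n = 1
k_half = bisect(lambda k: k * L - math.pi + theta(k), 1e-9, k_box)       # eq. (2.254)
k_sym = bisect(lambda k: k * L - math.pi + 2 * theta(k), 1e-9, k_box)    # eq. (2.257)
assert k_sym < k_half < k_box    # the hierarchy of eq. (2.258) for the ground state
```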

Figure 2.16: Wave function in symmetric double well

2.11.2 Double well

In order to understand this, we consider the situation depicted in fig. 2.17. The potential has the following form

V(x) = V ,   x ≤ a
     = 0 ,   a < x < b
     = Ṽ ,   b ≤ x ≤ c   (2.260)
     = 0 ,   c < x < d
     = V ,   d ≤ x
The situation is depicted in fig. 2.17. We are interested in solutions with

0 < E < Ṽ ≤ V   (2.261)
In this case we define the following variables

k = \sqrt{\frac{2mE}{ℏ^2}} ,   κ = \sqrt{\frac{2m(V-E)}{ℏ^2}} ,   κ̃ = \sqrt{\frac{2m(Ṽ-E)}{ℏ^2}}   (2.262)

Figure 2.17: Double well (regions 0, I, II, III, IV)
Then the piecewise solution is given by

Ψ_a(x) = A_a e^{-κx} + B_a e^{κx} ,   a ∈ {0, IV}
Ψ_a(x) = A_a e^{ikx} + B_a e^{-ikx} ,   a ∈ {I, III}   (2.263)
Ψ_a(x) = A_a e^{-κ̃x} + B_a e^{κ̃x} ,   a ∈ {II}
We start imposing various boundary conditions:

1. At x = a, we have a left-step potential. So, we use the result (2.217), with L = 0, R = I and x̂ = a:

\frac{A_I}{B_I} = -e^{2iθ(k,κ)} e^{-2ika}   (2.264)

We define

A_I e^{ika - iθ(k,κ)} = -B_I e^{-ika + iθ(k,κ)} = N_I   (2.265)

2. At x = d, we have a right-step potential. So, we use the result (2.203), with L = III, R = IV and x_⋆ = d:

\frac{A_{III}}{B_{III}} = -e^{-2iθ(k,κ)} e^{-2ikd}   (2.266)

We define

A_{III} e^{ikd + iθ(k,κ)} = -B_{III} e^{-ikd - iθ(k,κ)} = N_{III}   (2.267)
3. Let's now focus on x = b:

\lim_{x→b^-} Ψ_I(x) = \lim_{x→b^+} Ψ_{II}(x) ,   \lim_{x→b^-} Ψ'_I(x) = \lim_{x→b^+} Ψ'_{II}(x)   (2.268)

From this set of equations, we get

A_I e^{ikb} + B_I e^{-ikb} = A_{II} e^{-κ̃b} + B_{II} e^{κ̃b} ,   ik\left(A_I e^{ikb} - B_I e^{-ikb}\right) = -κ̃\left(A_{II} e^{-κ̃b} - B_{II} e^{κ̃b}\right)   (2.269)

Taking the ratio,

\frac{e^{ik(b-a)+iθ} - e^{-ik(b-a)-iθ}}{ik\left(e^{ik(b-a)+iθ} + e^{-ik(b-a)-iθ}\right)} = \frac{A_{II} e^{-κ̃b} + B_{II} e^{κ̃b}}{-κ̃\left(A_{II} e^{-κ̃b} - B_{II} e^{κ̃b}\right)}   (2.270)

Let's define b − a = L_I. Then

\frac{A_{II} e^{-κ̃b} + B_{II} e^{κ̃b}}{A_{II} e^{-κ̃b} - B_{II} e^{κ̃b}} = -\frac{κ̃ \sin(kL_I+θ)}{k \cos(kL_I+θ)}   =⇒   \frac{A_{II}}{B_{II}} e^{-2κ̃b} = \frac{κ̃ \sin(kL_I+θ) - k \cos(kL_I+θ)}{κ̃ \sin(kL_I+θ) + k \cos(kL_I+θ)}   (2.271)
4. Similarly, at x = c, we get

\lim_{x→c^-} Ψ_{II}(x) = \lim_{x→c^+} Ψ_{III}(x) ,   \lim_{x→c^-} Ψ'_{II}(x) = \lim_{x→c^+} Ψ'_{III}(x)   (2.272)

From this set of equations, we get

A_{III} e^{ikc} + B_{III} e^{-ikc} = A_{II} e^{-κ̃c} + B_{II} e^{κ̃c} ,   ik\left(A_{III} e^{ikc} - B_{III} e^{-ikc}\right) = -κ̃\left(A_{II} e^{-κ̃c} - B_{II} e^{κ̃c}\right)   (2.273)

We take the ratio of the above two equations and use the definition d − c = L_{III} to get

\frac{A_{II} e^{-κ̃c} + B_{II} e^{κ̃c}}{A_{II} e^{-κ̃c} - B_{II} e^{κ̃c}} = \frac{κ̃ \sin(kL_{III}+θ)}{k \cos(kL_{III}+θ)}   =⇒   \frac{A_{II}}{B_{II}} e^{-2κ̃c} = \frac{κ̃ \sin(kL_{III}+θ) + k \cos(kL_{III}+θ)}{κ̃ \sin(kL_{III}+θ) - k \cos(kL_{III}+θ)}   (2.274)

We define c − b = L_{II}. Then from (2.271) and (2.274) we get

\left(\frac{κ̃ \sin(kL_{III}+θ) + k \cos(kL_{III}+θ)}{κ̃ \sin(kL_{III}+θ) - k \cos(kL_{III}+θ)}\right)\left(\frac{κ̃ \sin(kL_I+θ) + k \cos(kL_I+θ)}{κ̃ \sin(kL_I+θ) - k \cos(kL_I+θ)}\right) = e^{-κ̃L_{II}}   (2.275)
Case I   Let's consider the case when

L_I = L_{III} = L   (2.276)

In this case, (2.275) reduces to

\left(\frac{κ̃ \sin(kL+θ) + k \cos(kL+θ)}{κ̃ \sin(kL+θ) - k \cos(kL+θ)}\right)^2 = e^{-κ̃L_{II}}   (2.277)
Case IA   Let's consider the simple case L_{II} = ∞. In this case, we define θ:

\tan θ = k/κ   =⇒   \tan(kL) = -\tan θ = \tan(nπ - θ)   =⇒   k = \frac{nπ}{L} - \frac{θ}{L}   (2.278)

In this limit, the problem reduces to the case of the asymmetric well (Case III of sec 3.3.2). Consider the RHS of (2.277). We define

f(κ̃) = e^{-\frac{κ̃}{2}L_{II}}   (2.279)

This factor essentially comes from the tunnelling wave function in region II. It decreases as we increase L_{II}; it also decreases if we increase the height of the potential in region II. Then (2.277) becomes

\frac{κ̃ \sin(kL+θ) + k \cos(kL+θ)}{κ̃ \sin(kL+θ) - k \cos(kL+θ)} = ±f(κ̃)   (2.280)
Let's first focus on the solution with the plus sign:

(1 - f(κ̃))\,κ̃ \sin(kL+θ) + (1 + f(κ̃))\,k \cos(kL+θ) = 0   (2.281)

We can re-arrange the equation to get

\tan(kL+θ) = -\frac{(1+f(κ̃))k}{(1-f(κ̃))κ̃}   (2.282)

Let's define

\tan θ̃ = \frac{k}{κ̃} ,   \tan θ̄ = \frac{(1+f(κ̃))k}{(1-f(κ̃))κ̃}   (2.283)

From the definition of \tan θ in (2.278) we can see that

0 < f(κ̃) < 1   =⇒   \tan θ̄ > \tan θ̃   =⇒   θ̄ > θ̃   (2.284)

We denote the solution to the above equation as k(n,+); the + denotes the choice that we have made in (2.282):

k(n,+) = \frac{nπ}{L} - \frac{1}{L}(θ̄ + θ)   (2.285)

In the case of infinite separation L_{II} → ∞, f(κ̃) goes to zero:

L_{II} → ∞   =⇒   f(κ̃) → 0   =⇒   θ̄ → θ̃   (2.286)

In that limit, the problem reduces to the asymmetric well.

Since θ̄ is in general greater than θ̃, the value of the ground state k is lower for the double well compared to a single well. We denote the solution to (2.282) as k^{(dw)}. Then it is clear that

k^{(dw)}(n,+) < k^{(asw)}(n) < k^{hwell}(n) < k^{box}(n)   ∀n   (2.287)

where k^{(asw)} is the solution for the asymmetric well.
In (2.280), there were two solutions; we focused on the solution with the + sign in (2.281). We can also consider the solution with the minus sign and carry out a similar analysis. We denote the solution in this case as k^{(dw)}(n,−). It is possible to show that

k^{(dw)}(n,-) > k^{(asw)}(n)   ∀n   (2.288)

Moreover, in the case of infinite separation,

L_{II} → ∞   =⇒   f(κ̃) → 0   =⇒   k^{(dw)}(n,-) → k^{(dw)}(n,+) = k^{(asw)}(n)   (2.289)
Role of tunnelling   We saw that there are two sets of solutions: k^{(dw)}(n,+) and k^{(dw)}(n,−). In the limit of infinite separation, both of these solutions become the same, and they are equal to the solution of the asymmetric well. However, for finite separation, k^{(dw)}(n,+) is less than k^{(asw)}(n) and k^{(dw)}(n,−) is greater than k^{(asw)}(n). This energy splitting is due to f(κ̃) = e^{-κ̃L_{II}/2}. This factor comes from the tunnelling solution in region II⁶.
Let's try to understand this better. In the limit of infinite separation, the situation reduces to two independent potential wells. For finite separation, one would naively expect doubly degenerate bound states. But due to tunnelling, the energy splits: one of the states has lower energy than that of a single well, and the other one has more energy. One way to understand the energy splitting is the following. We can take symmetric and anti-symmetric combinations of the ground states of the individual wells. The symmetric combination has no node, but the anti-symmetric combination has one node; hence, the anti-symmetric combination has higher energy. To summarise, the ground state energy of a double well is lower than the ground state energy of a single well due to quantum tunnelling. The effect of tunnelling is often known as the hopping term in condensed matter physics.
This is the origin of the formation of bands in many-body systems. A band is a collection of many energy levels in which the gap between levels is very small; the gap is so small that the spectrum appears continuous.
6
This effect is also known as the instanton correction to energy levels, or the “non-perturbative” correction to energy levels. The basic reason is the same; these jargons are used more in the path integral formalism or in Quantum Field Theory.
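The splitting can also be seen directly by diagonalizing a discretized double-well Hamiltonian. The sketch below (our own grid and potential parameters, in units with ℏ²/2m = 1) exhibits a closely spaced doublet lying far below the next level:

```python
import numpy as np

# Finite-difference Hamiltonian H = -d²/dx² + V(x) with Dirichlet boundaries.
N = 800
x = np.linspace(-6.0, 6.0, N)
dx = x[1] - x[0]
Vb = 30.0
# two wells of zero potential on (-2.5, -0.5) and (0.5, 2.5), barrier/walls of height Vb
V = np.where(np.abs(x) < 0.5, Vb, np.where(np.abs(x) < 2.5, 0.0, Vb))

off = -np.ones(N - 1) / dx**2
H = np.diag(2.0 / dx**2 + V) + np.diag(off, 1) + np.diag(off, -1)
E = np.linalg.eigvalsh(H)   # sorted eigenvalues

gap01, gap12 = E[1] - E[0], E[2] - E[1]
assert gap01 > 0               # tunnelling splits the would-be degenerate doublet...
assert gap01 < 0.2 * gap12     # ...but the splitting is small compared to the next gap
```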

Figure 2.18: Ground state wave function in symmetric double well

Figure 2.19: Wave function of first excited state in symmetric double well

Connection to Quantum Chemistry   The phenomenon that we explained above is the essential reasoning behind chemical bonding. We now explain the connection, relating the various words that appear in the chemistry literature to the phenomenon explained above.

• “Atom” stands for a quantum particle in a potential that has only one minimum.
The location of the minimum is known as the “Nucleus”.

• “Molecule” stands for the situation when a Quantum particle is in a potential with two or
more minima.

• “Orbital” means the wave function of an energy eigenstate.
We can also define atomic (molecular) orbitals as wave functions of energy eigenstates of an atom (molecule).
The phrase “overlap of orbitals” simply means overlap of wave functions.

• In the case of the double-well potential, we found that the ground state is a new state which resembles the shape of a linear superposition of the ground states of each well. It is a new state of the double well, and it has lower energy than the ground state energy of an individual well. This situation is known as “Hybridisation”. It essentially means that molecular orbitals have lower energy than atomic orbitals.

• From the wave functions of the double well (in fig. 2.18 and fig. 2.19) we can see that the modulus of the wave function has maxima in multiple places. This means that the quantum particle can be found in any of those wells. Unlike a classical particle, it is not restricted to a single well. This situation is often denoted by the phrase “Delocalised electron”.

• The example of the double well also helps us understand bonding and anti-bonding molecular orbitals. In this case, the energy eigenstates of the (left/right) potential wells are the atomic orbitals. We have plotted two molecular orbitals (i.e. the energy eigenstates of the double well) in fig. 2.18 and fig. 2.19. From the graphs, the molecular orbital in fig. 2.18 (fig. 2.19) resembles the symmetric (anti-symmetric) linear superposition of the two atomic orbitals (i.e. the ground state wave functions of the individual wells). The molecular orbital in fig. 2.18 has lower energy (eigenvalue) than the atomic orbitals; it is known as the bonding molecular orbital. The molecular orbital in fig. 2.19 has more energy than the atomic orbitals; it is known as the anti-bonding molecular orbital in the chemistry literature.
From the above discussion, we can see the role of tunnelling in chemical bonding. We have already mentioned that the effect of tunnelling is known as “hopping” in the condensed matter literature; bands form in solids due to hopping. We are not elaborating on that here; the reader is requested to consult any standard condensed matter textbook.

2.11.3 A model of alpha decay

In this part, we consider a simple model to explain alpha decay and the role of quantum mechanics. Consider the potential depicted in fig 2.20. It has the following functional form

V(x) = ∞ ,   x ≤ 0   (2.290)
     = V ,   a ≤ x ≤ b   (2.291)
     = 0 ,   0 ≤ x ≤ a  ∪  b ≤ x ≤ ∞   (2.292)
Let's consider a particle in region I with energy E < V. First we discuss the situation in classical mechanics: the particle will be stuck there and can never come out into region III. However, as we will derive below, the situation is different in quantum mechanics. We begin by defining the variables

k = \sqrt{\frac{2mE}{ℏ^2}} ,   κ = \sqrt{\frac{2m(V-E)}{ℏ^2}}   (2.293)
The solution of the Schrodinger equation in region I (subject to the boundary condition at the left end) is

Ψ_I(x) = A_I \sin kx
Ψ_{II}(x) = A_{II} e^{-κx} + B_{II} e^{κx}   (2.294)
Ψ_{III}(x) = A_{III} e^{ikx} + B_{III} e^{-ikx}
Now we impose continuity of the wave function and its derivative at x = a and x = b. At x = a we get

\frac{\sin ka}{k \cos ka} = -\frac{A_{II} e^{-κa} + B_{II} e^{κa}}{κ\left(A_{II} e^{-κa} - B_{II} e^{κa}\right)}   (2.295)

We start by defining \tan θ = k/κ. Then

\frac{A_{II} e^{-κa} + B_{II} e^{κa}}{A_{II} e^{-κa} - B_{II} e^{κa}} = -\frac{\sin ka \cos θ}{\cos ka \sin θ}   =⇒   \frac{A_{II} e^{-κa}}{B_{II} e^{κa}} = -\frac{\sin(ka-θ)}{\sin(ka+θ)}   (2.296)
Let's now consider the point x = b. In this case,

-\frac{A_{II} e^{-κb} + B_{II} e^{κb}}{κ\left(A_{II} e^{-κb} - B_{II} e^{κb}\right)} = -\frac{i}{k}   =⇒   \frac{A_{II} e^{-κb} + B_{II} e^{κb}}{A_{II} e^{-κb} - B_{II} e^{κb}} = \frac{κ}{ik}   =⇒   \frac{A_{II} e^{-κb}}{B_{II} e^{κb}} = e^{2iθ}   (2.297)
We can combine (2.296) and (2.297) to look for a solution for k (i.e. E). From the structure of the equations, it is clear that there are no real solutions for k. Thus there is no energy eigenstate of this form; such a configuration is not possible in quantum mechanics.

Figure 2.20: Particle in an interval with step potential (regions I, II, III; barrier of height V)
More exercises from chapter II


1. Quantum Harmonic oscillator

(a) In the class, we listed the various properties of a quantum particle in a box. They
are listed in sec 1.1.
Check whether they are true for Quantum Harmonic Oscillator or not.
(b) Plot the wave function of the first few excited states of Quantum Harmonic Oscillator
using Mathematica.
(c) Consider the large quantum number limit of the system and compare its behaviour with that of a classical harmonic oscillator.

2. Filling up the gaps for the Quantum Harmonic oscillator

For the quantum harmonic oscillator, we defined three operators

H = -\frac{ℏ^2}{2m}\frac{d^2}{dx^2} + \frac{1}{2}kx^2 ,   A = \sqrt{\frac{ℏ}{2m}}\frac{d}{dx} + \sqrt{\frac{k}{2ℏ}}\,x ,   A^† = -\sqrt{\frac{ℏ}{2m}}\frac{d}{dx} + \sqrt{\frac{k}{2ℏ}}\,x   (2.298)

Show that A^† is the adjoint of A. Show that

[A, A^†] = ω ,   H = ℏ\left(A^†A + \frac{ω}{2}\right) ,   [H, A] = -ℏωA ,   [H, A^†] = ℏωA^†

[H, A^n] = -nℏωA^n ,   [H, (A^†)^n] = nℏω(A^†)^n   (2.299)

Find the form of the wave function as a function of x for the energy eigenstates.
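Before doing the algebra, it can be reassuring to check numerically that A annihilates the Gaussian ground state ψ₀ ∝ exp(−√(km) x²/(2ℏ)). This is a sketch of ours, with arbitrary grid parameters:

```python
import numpy as np

hbar, m, k = 1.0, 1.0, 1.0
x = np.linspace(-5.0, 5.0, 4001)
dx = x[1] - x[0]
psi0 = np.exp(-np.sqrt(k * m) * x**2 / (2 * hbar))   # (unnormalized) ground state

dpsi = np.gradient(psi0, dx)                         # d/dx via central differences
A_psi0 = np.sqrt(hbar / (2 * m)) * dpsi + np.sqrt(k / (2 * hbar)) * x * psi0
assert np.max(np.abs(A_psi0[100:-100])) < 1e-4       # A ψ0 ≈ 0 away from the grid edges
```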

3. Radial part of the hydrogen atom and factorization method


The Hamiltonian for the radial part of the hydrogen atom is given by

H_ℓ = -∂_r^2 + \frac{ℓ(ℓ+1)}{r^2} - \frac{C}{r}   (2.300)

The spectrum of this Hamiltonian can be derived using the factorization method. Let us define two operators

A_ℓ = ∂_r - \frac{ℓ+1}{r} + c_ℓ ,   A_ℓ^† = -∂_r - \frac{ℓ+1}{r} + c_ℓ   (2.301)

Compute

A_ℓ A_ℓ^† ,   A_ℓ^† A_ℓ   (2.302)

Show that if we choose 2c_ℓ(ℓ+1) = C, then H_ℓ - A_ℓ^†A_ℓ and H_{ℓ+1} - A_ℓ A_ℓ^† are just real numbers; determine those numbers.
Can you derive the spectrum of the hydrogen atom from this?

4. Spherical harmonic oscillator

The Hamiltonian of a spherical harmonic oscillator is given by

H_ℓ = -∂_r^2 + \frac{ℓ(ℓ+1)}{r^2} + ω^2r^2   (2.303)

The spectrum of this Hamiltonian can be found using the factorization method. To do that, we define two operators

A_ℓ = ∂_r - \frac{ℓ+1}{r} + ωr ,   A_ℓ^† = -∂_r - \frac{ℓ+1}{r} + ωr   (2.304)

Show that

A_ℓ^† A_ℓ = H_ℓ - (2ℓ+3)ω ,   A_ℓ A_ℓ^† = H_{ℓ+1} - (2ℓ+1)ω   (2.305)

Can you derive the spectrum of the theory from this algebra?

5. Laplace-Runge-Lenz and commutators

Let's consider Kepler's potential in 2 space dimensions:

H = \frac{1}{2}(p_1^2 + p_2^2) - \frac{Z}{r} ,   r = \sqrt{q_1^2 + q_2^2}   (2.306)

Here

p_i = -i∂_{q_i}   (2.307)

(a) Compute the commutator of H with the following operators

L_z = q_1 p_2 - q_2 p_1   (2.308)
A_1 = -\frac{1}{2}(L_z p_2 + p_2 L_z) + \frac{Zq_1}{r}   (2.309)
A_2 = \frac{1}{2}(L_z p_1 + p_1 L_z) + \frac{Zq_2}{r}   (2.310)

(b) Show that

A_1^2 + A_2^2 = 2H\left(L_z^2 + \frac{1}{4}\right) + Z^2   (2.311)

(c) Compute the following commutators

[L_z, A_1] ,   [L_z, A_2] ,   [A_1, A_2]   (2.312)

6. In (2.164), we computed the reflection and transmission coefficients for two potentials. Derive the same expressions from the multiple reflections and transmissions in the region confined between the potentials.

7. In sec 3.2, we have analyzed the case of two delta function potentials. We also computed
the S matrix using Mathematica. Repeat the analysis for three delta function potentials.

8. Consider the following Hamiltonian

H = -\frac{ℏ^2}{2m}\left(\frac{∂^2}{∂x^2} + \frac{∂^2}{∂y^2}\right) + \frac{1}{2}(k_1x^2 + k_2y^2) ,   k_1, k_2 > 0   (2.313)

x and y are independent variables.
Find the energy eigenstates of this theory, the corresponding wave functions, and the energy eigenvalues.

9. Consider a one-dimensional system subjected to the following potential

V(x) = \frac{1}{2}kx^2 ,   x ≥ 0   (2.314)
     = ∞ ,   x < 0   (2.315)

Find the energy eigenvalues of this system and the wave functions of the energy eigenstates.

10. Consider a one-dimensional system subjected to the following potential

V(x) = λδ(x) ,   |x| ≤ ℓ   (2.316)
     = ∞ ,   |x| > ℓ   (2.317)

Is there any energy eigenstate with a negative energy eigenvalue? If yes, find the wave function.

11. Three-dimensional box

Consider a three-dimensional box; the potential is of the following form

V(x, y, z) = 0 ,   |x| < L_1 , |y| < L_2 , |z| < L_3   (2.318)

and it is infinite otherwise.
Find the energy eigenstates and energy eigenvalues, and the wave functions of the energy eigenstates.
Consider the case L_1 = L_2 = L_3. Does something special happen?

12. In sec 3.3.2, we considered a potential well; we found the S matrix and the bound states. Take the limit in which the potential well becomes the delta-function well. Show that the S matrix and the bound-state expression for the potential well give the S matrix and the bound states for the delta-function well.

Chapter 3

Some general features of One-dimensional Quantum systems

In this chapter, we will discuss some general features of one-dimensional systems, based on broader principles like symmetry, unitarity and complex analyticity. The content of this chapter is not part of the examinable material.
First, we will consider the consequences of symmetry on the scattering matrix.

3.1 Consequence of symmetry of the S matrix

A unitary S matrix is specified by four real functions of k. Let's now see the consequences of various symmetries on the S matrix. We have already restricted to the case where the potential is time-independent; this leads to an important symmetry known as time-reversal symmetry. We will also explore what happens if the potential is an even function of x.

3.1.1 Parity symmetry and the S matrix

We begin with parity symmetry. Let us now consider a parity-symmetric potential:

V(x) = V(-x)   (3.1)

Since the kinetic term is invariant under parity, the Hamiltonian has a parity symmetry for a symmetric potential. Parity can be used to map region I to region III and vice-versa:

\begin{pmatrix} A_I \\ B_{III} \end{pmatrix} → \begin{pmatrix} A'_I \\ B'_{III} \end{pmatrix} = \begin{pmatrix} B_{III} \\ A_I \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} A_I \\ B_{III} \end{pmatrix}   (3.2)

We can write this equation (and a similar equation for the out states) as

χ_{in} → χ'_{in} = Pχ_{in} ,   χ_{out} → χ'_{out} = Pχ_{out}   (3.3)

where P is defined as

P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} ,   P^2 = 1   (3.4)
Let’s now start from the definition of the S matrix and multiply by P

χout = Sχin =⇒ Pχout = (PSP) Pχin =⇒ χ′out = (PSP) χ′in (3.5)

So, under a parity transformation the S matrix transforms as

\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} S_{22} & S_{21} \\ S_{12} & S_{11} \end{pmatrix}   (3.6)

Hence for a symmetric potential, the S matrix satisfies

V(x) = V(-x)   =⇒   \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} = \begin{pmatrix} S_{22} & S_{21} \\ S_{12} & S_{11} \end{pmatrix}   (3.7)
So the consequence of parity symmetry is

S11 = S22 , S12 = S21 (3.8)

The S matrix has 4 complex parameters to begin with. The above relation reduces the number
of independent complex parameters to 2. These coefficients are also related to each other by
unitarity (which is one condition on these variables). So, the S matrix depends on one complex
(two real) parameter for a parity-symmetric potential. This remains true in any one-dimensional
quantum system, irrespective of the form of the potential: whenever the system has
a symmetry, it restricts the form of the S matrix. In the QMech II course, we will see this for
three-dimensional S matrices for systems with spherical symmetry.
We can understand the consequence of parity symmetry using the picture given in
sec 2.6.3. The action of parity is the following:

(I → I) −→ (III → III) =⇒ R_{11} → R_{22}    (3.9a)
(I → III) −→ (III → I) =⇒ T_{12} → T_{21}    (3.9b)
(III → I) −→ (I → III) =⇒ T_{21} → T_{12}    (3.9c)
(III → III) −→ (I → I) =⇒ R_{22} → R_{11}    (3.9d)

Since two processes related by a parity transformation should be the same, we obtain

R_{11} = R_{22} ≡ R ,  T_{12} = T_{21} ≡ T    (3.10)

So in the case of a parity-symmetric potential, the two reflection coefficients are the same and the
two transmission coefficients are the same.
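These constraints are easy to check numerically. A minimal sketch (our own illustration, not from the notes), using the single delta-function amplitudes quoted later in (3.37), in units 2m = ħ = 1:

```python
import numpy as np

# Sketch: for a single attractive delta function of strength lam at the origin,
# the amplitudes R, T of eq. (3.37) build a parity-symmetric S matrix.
# We check unitarity and the relations (3.8) for several momenta k.
lam = -1.3
for k in [0.5, 1.0, 2.7]:
    R = -1j * lam / (2 * k + 1j * lam)
    T = 2 * k / (2 * k + 1j * lam)
    S = np.array([[R, T], [T, R]])                   # S11 = S22, S12 = S21
    assert np.allclose(S.conj().T @ S, np.eye(2))    # unitarity: S†S = 1
    assert np.allclose(S, S.T)                       # symmetric S matrix
    assert abs(abs(R)**2 + abs(T)**2 - 1) < 1e-12    # probability conservation
```

The same check works for any parity-symmetric potential once its R and T are known.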
Action of parity on the transfer matrix We already explored the consequence of parity symmetry
on the S matrix. Let's now see its implications for the transfer matrix. The transfer
matrix is defined as

\begin{pmatrix} A_{III} \\ B_{III} \end{pmatrix} = \begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix} \begin{pmatrix} A_I \\ B_I \end{pmatrix}    (3.11)
Under a parity transformation the states transform in the following way:

\begin{pmatrix} A_{III} \\ B_{III} \end{pmatrix} \longrightarrow \begin{pmatrix} B_I \\ A_I \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} A_I \\ B_I \end{pmatrix}    (3.12)

and similarly

\begin{pmatrix} A_I \\ B_I \end{pmatrix} \longrightarrow \begin{pmatrix} B_{III} \\ A_{III} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} A_{III} \\ B_{III} \end{pmatrix}    (3.13)
So under a parity transformation the transfer matrix goes to

M −→ \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} M^{-1} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}    (3.14)

So if parity is a symmetry, then

M^{-1} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} M \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}    (3.15)

M^{-1} is given by

\begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix}^{-1} = \frac{1}{\det M} \begin{pmatrix} M_{22} & -M_{12} \\ -M_{21} & M_{11} \end{pmatrix}    (3.16)

and

\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} M_{22} & M_{21} \\ M_{12} & M_{11} \end{pmatrix}    (3.17)

So we get

\det M = 1 ,  M_{12} = −M_{21}    (3.18)

One can use (2.131) to get back (3.8).

3.1.2 Parity symmetric basis


In the previous section, we analysed the consequence of parity symmetry in one-dimensional
systems. We have seen that under a parity transformation,

A_I −→ B_{III} ,  B_{III} −→ A_I    (3.19)
A very important property of quantum dynamics is linearity. If we have two (or more) solutions
to the Schrödinger equation, then any linear combination of them is also a solution to the
Schrödinger equation. In this particular case, we can use this property to form the following
two linear combinations,

\frac{1}{\sqrt{2}} (A_I + B_{III}) ,  \frac{1}{\sqrt{2}} (A_I − B_{III})    (3.20)

such that the first one is parity invariant and the second one only changes by a sign. The states in
(3.19) are not eigenstates of parity; but thanks to the linearity property of Quantum Mechanics,
we can consider states which are eigenstates of the parity operator. This is true in general. We
can write the wave functions in a parity eigenvalue basis for a symmetric potential. The wave
function can only be parity symmetric (i.e. an even function) or antisymmetric (i.e. an odd function).
For example, consider the following wave function:

Ψ(x) = \begin{cases} e^{ikx} & x < 0 \\ e^{−ikx} & x > 0 \end{cases}    (3.21)

This wave function is in-coming in both region I and region III. It is also invariant under parity:
it is an eigenfunction of parity with eigenvalue 1. The above wave function can
simply be written as

\chi^{sym}_{in}(x) = e^{−ik|x|}    (3.22)
We can also construct another wave function of the following form:

Ψ(x) = \begin{cases} e^{ikx} & x < 0 \\ −e^{−ikx} & x > 0 \end{cases}    (3.23)

This wave function is again in-coming in both regions, and it is linearly independent of the
wave function in (3.21). It is an eigenfunction of the parity operator with eigenvalue −1. This
wave function can also be written as¹

\chi^{anti-sym}_{in}(x) = \mathrm{sgn}(x)\, e^{−ik|x|}    (3.25)

One can similarly construct a parity eigenvalue basis for the out-going wave functions. They are
given by

\chi^{sym}_{out}(x) = e^{ik|x|}    (3.26)
\chi^{anti-sym}_{out}(x) = −\mathrm{sgn}(x)\, e^{ik|x|}    (3.27)
Note that the parity symmetry relates the left states to the right states. So, the left-right
decomposition does not commute with the parity transformation. The full wave function (in
the region where the potential is zero) then takes the form

Ψ(x) = C_{in} \chi^{sym}_{in}(x) + C_{out} \chi^{sym}_{out}(x) + D_{in} \chi^{anti-sym}_{in}(x) + D_{out} \chi^{anti-sym}_{out}(x)    (3.28)

So the parity-symmetric in-state and out-state are given by

\begin{pmatrix} C_{in} \\ D_{in} \end{pmatrix} ,  \begin{pmatrix} C_{out} \\ D_{out} \end{pmatrix}    (3.29)

The S-matrix in this basis is given by

\begin{pmatrix} C_{out} \\ D_{out} \end{pmatrix} = \begin{pmatrix} S_{++} & S_{+−} \\ S_{−+} & S_{−−} \end{pmatrix} \begin{pmatrix} C_{in} \\ D_{in} \end{pmatrix}    (3.30)

We denote the S matrix in the parity-symmetric basis as S_P.
Change of basis Now we want to find the relation between the two bases. In region I, the wave
function takes the form

Ψ_I(x) = \frac{1}{\sqrt{2}} (C_{in} − D_{in}) e^{ikx} + \frac{1}{\sqrt{2}} (C_{out} + D_{out}) e^{−ikx}    (3.31)

In region III it takes the form

Ψ_{III}(x) = \frac{1}{\sqrt{2}} (C_{in} + D_{in}) e^{−ikx} + \frac{1}{\sqrt{2}} (C_{out} − D_{out}) e^{ikx}    (3.32)

Then comparing this with eqns (2.102a) and (2.102b), we get

\begin{pmatrix} A_I \\ B_{III} \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & −1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} C_{in} \\ D_{in} \end{pmatrix} ,  \begin{pmatrix} B_I \\ A_{III} \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & −1 \end{pmatrix} \begin{pmatrix} C_{out} \\ D_{out} \end{pmatrix}    (3.33)
1 sgn stands for the sign function or signum function. It is defined as

\mathrm{sgn}(x) = \begin{cases} 1 & x > 0 \\ 0 & x = 0 \\ −1 & x < 0 \end{cases}    (3.24)
The 1/\sqrt{2} is to ensure that the transformation is an orthogonal transformation. It is possible
to show that

S_P ≡ \begin{pmatrix} S_{++} & S_{+−} \\ S_{−+} & S_{−−} \end{pmatrix} = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & −1 \end{pmatrix} \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} \begin{pmatrix} 1 & −1 \\ 1 & 1 \end{pmatrix} = \frac{1}{2} \begin{pmatrix} S_{11} + S_{22} + S_{12} + S_{21} & −S_{11} + S_{22} + S_{12} − S_{21} \\ S_{11} − S_{22} + S_{12} − S_{21} & −S_{11} − S_{22} + S_{12} + S_{21} \end{pmatrix}    (3.34)
Now consider a parity-symmetric potential. In that case, using (3.8) we get

S_P = \begin{pmatrix} S_{11} + S_{12} & 0 \\ 0 & S_{12} − S_{22} \end{pmatrix} = \begin{pmatrix} T + R & 0 \\ 0 & T − R \end{pmatrix}    (3.35)
The S matrix is diagonal on this basis. This is expected. The potential has the Parity symmetry.
Then it would preserve the parity eigenvalue of the incoming wave function; the parity of the
incoming wave and the outgoing wave must be the same. A parity-symmetric potential cannot
mix symmetric and antisymmetric wave functions.
We saw that the S matrix is diagonal in the parity-symmetric basis. From the unitarity of
the S matrix, we know that S†S = 1. From these two conditions, it follows that the diagonal
entries can only be a phase:

S_P = \begin{pmatrix} e^{2i\delta_+(k)} & 0 \\ 0 & e^{2i\delta_−(k)} \end{pmatrix} ,  \delta_+(k), \delta_−(k) ∈ R    (3.36)

We can see that the S matrix depends on two real variables (\delta_±) for a parity-symmetric potential.
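The diagonalization above can be checked directly. A short numerical sketch (ours, not from the notes), rotating the single delta-function S matrix of (3.37) into the parity basis via the change of basis (3.34), in units 2m = ħ = 1:

```python
import numpy as np

# Sketch: build S from the delta-function amplitudes (3.37), apply the
# in/out basis changes of (3.34), and verify that the result is diagonal
# with pure-phase entries T + R and T - R, as in (3.35)-(3.36).
lam, k = -1.3, 0.8
R = -1j * lam / (2 * k + 1j * lam)
T = 2 * k / (2 * k + 1j * lam)
S = np.array([[R, T], [T, R]])
A = np.array([[1, 1], [1, -1]])      # out-state change of basis in (3.34)
B = np.array([[1, -1], [1, 1]])      # in-state change of basis in (3.34)
SP = 0.5 * A @ S @ B
assert np.allclose(SP, np.diag(np.diag(SP)))   # diagonal in the parity basis
assert np.allclose(np.abs(np.diag(SP)), 1.0)   # diagonal entries are phases
assert np.isclose(SP[0, 0], T + R) and np.isclose(SP[1, 1], T - R)
```

A parity-symmetric potential cannot mix the two parity sectors, which is exactly what the vanishing off-diagonal entries express.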

3.1.3 Example I: Single Delta function

Consider the particular case of a single delta function potential located at a = 0. In this case, we
obtain from (3.8) and (3.10) that

R = \frac{−i\lambda}{2k + i\lambda} ,  T = \frac{2k}{2k + i\lambda}    (3.37)

So the phase shifts are

e^{2i\delta_+(k)} = T + R = \frac{2k − i\lambda}{2k + i\lambda}    (3.38)
e^{2i\delta_−(k)} = T − R = 1    (3.39)

The parity-odd scattering wave is unaffected by the delta function potential.
3.1.4 Example II
Now we focus on the special case when the potential takes the following form:

V (x) = \lambda \left[ \delta(x + a) + \delta(x − a) \right] ,  a > 0    (3.40)

In this case, the system has a Z2 symmetry,

Z2 : x → −x    (3.41)

and the system has 2 free parameters (\lambda and a). Due to this symmetry, the wave function of
any bound state is either an even or an odd function. Similarly, we write the S matrix in the
parity eigenbasis, and in this case, we find the S matrix to be a pure phase.
S-matrix: Partial wave In this case, the wave function and scattering states can be written in
the parity eigenvalue basis:

S_P = \begin{pmatrix} \dfrac{2ik + \lambda(1 + e^{−2iak})}{2ik − \lambda(1 + e^{2iak})} & 0 \\ 0 & \dfrac{2ik + \lambda(1 − e^{−2iak})}{2ik − \lambda(1 − e^{2iak})} \end{pmatrix}    (3.42)

If we put the separation a = 0, then we get

S_P = \begin{pmatrix} \dfrac{ik + \lambda}{ik − \lambda} & 0 \\ 0 & 1 \end{pmatrix}    (3.43)

This is the same as the partial wave for a single delta function with strength 2\lambda (see (3.38) and (3.39)).
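This limit can be verified numerically. A small sketch (our illustration, in units 2m = ħ = 1), evaluating the parity eigenphases of (3.42) at a = 0 and comparing with a single delta of strength 2λ via (3.38)-(3.39):

```python
import numpy as np

# Sketch: the parity-basis S matrix (3.42) of the double delta potential
# at separation a = 0 should coincide with that of one delta of strength
# 2*lam, i.e. with the phases T + R and T - R of (3.38)-(3.39).
lam, k, a = -0.7, 1.1, 0.0
Spp = (2j*k + lam*(1 + np.exp(-2j*a*k))) / (2j*k - lam*(1 + np.exp(2j*a*k)))
Smm = (2j*k + lam*(1 - np.exp(-2j*a*k))) / (2j*k - lam*(1 - np.exp(2j*a*k)))
lam2 = 2 * lam                        # single delta of strength 2*lam
R = -1j * lam2 / (2*k + 1j*lam2)
T = 2*k / (2*k + 1j*lam2)
assert np.isclose(Spp, T + R)
assert np.isclose(Smm, T - R)         # equals 1: parity-odd wave unaffected
```

Repeating the comparison for small but nonzero a shows how the eigenphases deviate from the single-delta values as the separation grows.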

3.1.5 Time-reversal symmetry and the S matrix

The following transformation is known as the time-reversal transformation:

T : t → −t    (3.44)

If the Hamiltonian is invariant under this transformation, i.e. it satisfies

H(x, t) = H(x, −t)    (3.45)

then the system is called time-reversal symmetric. We are currently considering a time-independent
potential, and the Hamiltonian is time-independent:

H(x, t) = −\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V (x)    (3.46)

Such a Hamiltonian is time-reversal symmetric.

Time-reversal symmetry in classical system


Here we briefly discuss time-reversal symmetry in classical systems. We start with
Newton's equations of motion,

\vec{p} = m \frac{d\vec{x}}{dt} ,  \frac{d\vec{p}}{dt} = \vec{F}(\vec{x}, t)    (3.47)

Then under time-reversal,

T : \vec{p} −→ −\vec{p} ,  \vec{F}(\vec{x}, t) −→ \vec{F}(\vec{x}, −t)    (3.48)

Since the direction of motion flips under a time-reversal transformation, it is also known as
the motion-reversal transformation. If \vec{F}(\vec{x}, t) = \vec{F}(\vec{x}, −t), the second equation is also invariant
under time-reversal. In that case, if \vec{x}(t) is a solution to the equation of
motion, \vec{x}(−t) is also a solution to the equation of motion.
Let's consider a few examples. We start with the free particle, for which the solution of
the equation of motion is

\vec{x} = \vec{v}\, t + \vec{x}_0    (3.49)

If we replace \vec{v} by −\vec{v}, it still remains a solution to the equation of motion. Next, we
consider the motion of a charged particle in an electromagnetic field,

m \frac{d^2\vec{x}}{dt^2} = e(\vec{E} + \vec{v} \times \vec{B})    (3.50)

This is the Lorentz force law. The equation is invariant under time-reversal
provided the electric and magnetic fields transform in the following way:

T : (\vec{E}, \vec{B}) −→ (\vec{E}, −\vec{B})    (3.51)

So the direction of the magnetic field should change under time-reversal. There
is an intuitive way to understand it. Consider a magnetic field created by a current.
Under time-reversal, the direction of motion flips, and the direction of the current also
flips. This implies that the direction of the magnetic field must change under time-reversal
symmetry. Note that not every classical system is invariant under time-reversal symmetry.
For example, consider the following equation of motion:

m \frac{d^2\vec{x}}{dt^2} + \gamma \frac{d\vec{x}}{dt} + \vec{F}(\vec{x}) = 0    (3.52)

This equation is not invariant under time-reversal symmetry.
All the laws of physics at the sub-atomic level are invariant under time-reversal transformation.
However, this is not true in the macroscopic world. We do not see an old person
becoming young again or a dead animal becoming alive again. This is one of the important
puzzles in physics. This puzzle is deeply connected with the second law of thermodynamics,
which says that entropy can never decrease.
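The failure of time-reversal invariance for the damped equation (3.52) can be made concrete with a one-line numerical check (our sketch; the names and parameter values are illustrative):

```python
import math

# Sketch: a damped free particle, m x'' + gamma x' = 0, is not time-reversal
# invariant. x(t) = exp(-gamma t / m) solves it, but the reversed motion
# x(-t) does not. We test both with finite differences.
m, gamma = 1.0, 0.5

def residual(traj, t, dt=1e-4):
    # finite-difference evaluation of m x'' + gamma x' at time t
    xpp = (traj(t + dt) - 2 * traj(t) + traj(t - dt)) / dt**2
    xp = (traj(t + dt) - traj(t - dt)) / (2 * dt)
    return m * xpp + gamma * xp

x = lambda t: math.exp(-gamma * t / m)   # solution of the damped equation
xr = lambda t: x(-t)                     # time-reversed trajectory
assert abs(residual(x, 1.0)) < 1e-5      # x(t) satisfies the equation
assert abs(residual(xr, 1.0)) > 0.1      # x(-t) does not
```

Setting gamma = 0 makes both residuals vanish, recovering the time-reversal invariance of the undamped equation.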

Let's now try to understand what happens to a solution of the Schrödinger equation under
time-reversal. Let Ψ(x, t) be a solution of the Schrödinger equation,

i\hbar \frac{\partial}{\partial t} Ψ(x, t) = H(x, t)Ψ(x, t)    (3.53)

We take the complex conjugate of the above equation:

−i\hbar \frac{\partial}{\partial t} Ψ^*(x, t) = H(x, t)Ψ^*(x, t) =⇒ i\hbar \frac{\partial}{\partial (−t)} Ψ^*(x, t) = H(x, t)Ψ^*(x, t)    (3.54)

Now, we make the variable substitution t → −t and use H(x, −t) = H(x, t) to obtain

i\hbar \frac{\partial}{\partial t} Ψ^*(x, −t) = H(x, t)Ψ^*(x, −t)    (3.55)

From this equation we can conclude that, for a time-reversal symmetric Hamiltonian, if Ψ(x, t)
is a solution of the Schrödinger equation then Ψ^*(x, −t) is also a solution of the Schrödinger equation.
Under time-reversal symmetry,

t −→ −t =⇒ Ψ(x, t) −→ Ψ^*(x, −t)    (3.56)
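A quick numerical illustration of (3.56) for the free particle (our sketch, in units ħ = 2m = 1, so the equation reads i ∂Ψ/∂t = −∂²Ψ/∂x²):

```python
import cmath

# Sketch: Psi = exp(i(kx - k^2 t)) solves the free Schrodinger equation
# i dPsi/dt = -d^2 Psi/dx^2; by (3.56), Psi*(x, -t) then solves it too.
# We verify both claims with central finite differences.
k = 1.3
psi = lambda x, t: cmath.exp(1j * (k * x - k**2 * t))
trev = lambda x, t: psi(x, -t).conjugate()     # time-reversed solution

def schrodinger_residual(f, x, t, h=1e-4):
    dt = (f(x, t + h) - f(x, t - h)) / (2 * h)
    dxx = (f(x + h, t) - 2 * f(x, t) + f(x - h, t)) / h**2
    return 1j * dt + dxx                       # should vanish for a solution

assert abs(schrodinger_residual(psi, 0.3, 0.7)) < 1e-5
assert abs(schrodinger_residual(trev, 0.3, 0.7)) < 1e-5
```

Note that the reversed solution carries momentum −k: the motion is reversed, as in the classical discussion above.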

Now we want to understand the consequence of time-reversal symmetry on scattering.
The wave functions in regions I and III are given in (2.115). Under time-reversal,

T : ( ψ_I(x), ψ_{III}(x) ) −→ ( ψ_I^*(x), ψ_{III}^*(x) )    (3.57)

We compare the coefficients of e^{±ikx} to conclude that

T : ( A_I, B_I, A_{III}, B_{III} ) −→ ( B_I^*, A_I^*, B_{III}^*, A_{III}^* )    (3.58)
Hence, the action of time-reversal on the in-states and the out-states is given by

\begin{pmatrix} A_I \\ B_{III} \end{pmatrix} −→ \begin{pmatrix} B_I^* \\ A_{III}^* \end{pmatrix} ,  \begin{pmatrix} B_I \\ A_{III} \end{pmatrix} −→ \begin{pmatrix} A_I^* \\ B_{III}^* \end{pmatrix}    (3.59)

From the definition of the S matrix, it follows that

\begin{pmatrix} A_I^* \\ B_{III}^* \end{pmatrix} = S \begin{pmatrix} B_I^* \\ A_{III}^* \end{pmatrix}    (3.60)

Now consider the definition of the S-matrix in (2.109); multiply both sides by S† and then
complex conjugate to get

\begin{pmatrix} A_I^* \\ B_{III}^* \end{pmatrix} = S^T \begin{pmatrix} B_I^* \\ A_{III}^* \end{pmatrix}    (3.61)

Comparing these two equations (3.60) and (3.61), we get

S = S^T    (3.62)

Time-reversal symmetry implies that the S-matrix is a symmetric matrix:

S_{12} = S_{21}    (3.63)

We summarize our conclusion: 1) unitarity (conservation of probability current) and 2) time-reversal
symmetry of the potential imply that the S-matrix is symmetric.
We use the physical picture given in sec. 2.6.3. The action of time-reversal is the following

(I → I) −→ (I → I) (3.64a)
(I → III) −→ (III → I) (3.64b)
(III → I) −→ (I → III) (3.64c)
(III → III) −→ (III → III) (3.64d)

So processes involving reflection remain unaltered by time-reversal symmetry.

Applying time-reversal symmetry twice We have seen that under time-reversal, t flips
sign:

T : t −→ −t    (3.65)

This means that applying the time-reversal transformation twice (we denote it as T^2)
is the same as doing nothing:

T^2 : t −→ t    (3.66)

In the case of scattering, we can check from equation (3.58) that applying time-reversal
symmetry twice is the same as doing nothing:

T^2 : ( A_I, B_I, A_{III}, B_{III} ) −→ ( A_I, B_I, A_{III}, B_{III} )    (3.67)

This fact is often mathematically denoted as

T^2 = 1    (3.68)

Action of time-reversal symmetry on the transfer matrix We now analyse the consequence of time-reversal invariance on the transfer matrix. We know that under time-reversal,

\begin{pmatrix} A_I \\ B_I \end{pmatrix} −→ \begin{pmatrix} B_I^* \\ A_I^* \end{pmatrix} ,  \begin{pmatrix} A_{III} \\ B_{III} \end{pmatrix} −→ \begin{pmatrix} B_{III}^* \\ A_{III}^* \end{pmatrix}    (3.69)

If time-reversal is a symmetry, then

\begin{pmatrix} B_{III}^* \\ A_{III}^* \end{pmatrix} = M \begin{pmatrix} B_I^* \\ A_I^* \end{pmatrix}    (3.70)

We take the complex conjugate of the above equation to obtain

\begin{pmatrix} B_{III} \\ A_{III} \end{pmatrix} = M^* \begin{pmatrix} B_I \\ A_I \end{pmatrix} =⇒ \begin{pmatrix} A_{III} \\ B_{III} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} M^* \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} A_I \\ B_I \end{pmatrix}    (3.71)

Comparing this with the definition of the transfer matrix (3.11), we get

M = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} M^* \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} =⇒ M_{11} = M_{22}^* ,  M_{12} = M_{21}^*    (3.72)

So for a system with time-reversal symmetry the transfer matrix has the following form:

M = \begin{pmatrix} M_{11} & M_{12} \\ M_{12}^* & M_{11}^* \end{pmatrix}    (3.73)

And in this case det M = 1.

Summary In one dimension, the S matrix is a 2 × 2 matrix with complex entries. From
unitarity, it follows that the S matrix has four real parameters.
In the presence of time-reversal symmetry (which is always there for a time-independent
potential) the S matrix has three parameters. In the presence of parity symmetry, the number
of independent parameters reduces to two. These two real numbers are related to the phase
shifts of the parity-even and parity-odd sectors.

Summary of S matrix and Transfer matrix

1. The S matrix and the transfer matrix are 2 × 2 matrices with complex entries,

\begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} ,  \begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix}    (3.74)

So both of them have 4 complex (8 real) parameters.
These matrices are inter-related:

\begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} = \frac{1}{M_{22}} \begin{pmatrix} −M_{21} & 1 \\ \det M & M_{12} \end{pmatrix} ,  \begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix} = \frac{1}{S_{12}} \begin{pmatrix} −\det S & S_{22} \\ −S_{11} & 1 \end{pmatrix}    (3.75)

These equations give the relations between the determinants of the two matrices:

\det S = −\frac{M_{11}}{M_{22}} ,  \det M = \frac{S_{21}}{S_{12}}    (3.76)

2. Restrictions from Unitarity
Unitarity constrains these two matrices:

S†S = 1 = SS† ,  M† Ω M = Ω    (3.77)

These reduce the number of parameters to 4 real (2 complex).

3. Restrictions from Time-reversal
In a time-reversal symmetric theory, S becomes a symmetric matrix,

S_{12} = S_{21}    (3.78)

The restriction on the transfer matrix is given by

\det M = 1 ,  M_{11} = M_{22}^* ,  M_{12} = M_{21}^*    (3.79)

4. Restrictions from Parity
In a theory with parity symmetry, the S matrix has 2 real parameters,

S_{11} = S_{22} ,  S_{12} = S_{21}    (3.80)

These conditions are equivalent to the following conditions on the transfer matrix:

\det M = 1 ,  M_{12} = −M_{21}    (3.81)
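The dictionary above is easy to exercise numerically. A sketch (the helper name `s_to_m` is ours), again using the single delta amplitudes (3.37) in units 2m = ħ = 1:

```python
import numpy as np

# Sketch: build S for the single delta function, convert it to the transfer
# matrix via (3.75), and check the determinant relations (3.76) together
# with the parity (3.81) and time-reversal (3.79) constraints.
lam, k = -0.9, 1.4
R = -1j * lam / (2 * k + 1j * lam)
T = 2 * k / (2 * k + 1j * lam)
S = np.array([[R, T], [T, R]])

def s_to_m(S):
    # M in terms of S, eq. (3.75)
    (S11, S12), (S21, S22) = S
    return np.array([[-np.linalg.det(S), S22], [-S11, 1]]) / S12

M = s_to_m(S)
assert np.isclose(np.linalg.det(S), -M[0, 0] / M[1, 1])                 # (3.76)
assert np.isclose(np.linalg.det(M), S[1, 0] / S[0, 1])                  # (3.76)
assert np.isclose(np.linalg.det(M), 1) and np.isclose(M[0, 1], -M[1, 0])  # (3.81)
assert np.isclose(M[0, 0], M[1, 1].conj())                              # (3.79)
```

Since the delta potential is both parity and time-reversal symmetric, all four constraints hold simultaneously.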

3.2 A few general properties of bound states in one-dimensional systems

In this section, we discuss some general properties that any bound state in a one-dimensional2
potential satisfies. Bound states are the states that cannot escape to infinity. Their wave function
is square-integrable.

Theorem I: In one-dimensional systems, there is no degeneracy for bound states.

We prove this by contradiction. Let's consider two states ψ_I and ψ_II with the
same energy. Then, from the time-independent Schrödinger equation it follows that

−\frac{d^2}{dx^2} ψ_I(x) + V (x)ψ_I(x) = E ψ_I(x)    (3.82)
−\frac{d^2}{dx^2} ψ_{II}(x) + V (x)ψ_{II}(x) = E ψ_{II}(x)    (3.83)

We multiply (3.82) by ψ_II(x) and (3.83) by ψ_I(x), and then take the difference to obtain

\frac{d}{dx} \left[ ψ_I(x)\, ψ_{II}'(x) − ψ_{II}(x)\, ψ_I'(x) \right] = 0    (3.84)
dx
2 To be precise, we are considering quantum systems on the real line. One can also consider a quantum system
on a circle (S^1), and in that case, the theorems discussed in this section do not hold.

We can integrate both sides to get

ψ_I(x)\, ψ_{II}'(x) − ψ_{II}(x)\, ψ_I'(x) = constant    (3.85)

Since we are considering the real line, we can take the limit x → ∞. For bound states,
the wave function vanishes and the first derivative remains bounded in the limit x → ±∞, and
hence the constant vanishes. In that case,

\frac{ψ_I'(x)}{ψ_I(x)} = \frac{ψ_{II}'(x)}{ψ_{II}(x)} =⇒ \frac{ψ_I(x)}{ψ_{II}(x)} = constant    (3.86)

In Quantum Mechanics, if two wave functions are proportional to each other then they describe the
same state. So, ψ_I(x) and ψ_II(x) describe the same state.

Theorem II: Every bound-state wave function in one dimension can be chosen to be real.

Let ψ be a solution of the time-independent Schrödinger equation with energy E. Then the
real part ψ_R and the imaginary part ψ_I satisfy the Schrödinger equation with the same energy.
But in one dimension, there is no degeneracy of bound states. This implies ψ_R and ψ_I describe the
same state,

ψ_I = c ψ_R ,  c ∈ R    (3.87)

The original wave function ψ = (1 + ic)ψ_R is then proportional to ψ_R, so it describes the same
state and can be replaced by the real wave function ψ_R.

Theorem III : If a one-dimensional potential is even under Parity

V (−x) = V (x) (3.88)

then the wave functions are either even or odd under parity.
This is easy to demonstrate. The symmetry of the potential implies

V (−x) = V (x) =⇒ H(−x) = H(x) (3.89)

So if ψ(x) is an eigenfunction of H(x) with energy E, then ψ(−x) is also an eigenfunction of
H(x) with energy E. Since there is no degeneracy, they should describe the same state. Then

ψ(x) = c ψ(−x)    (3.90)

From the above equation, it follows that

ψ(−x) = c ψ(x) = c^2 ψ(−x) =⇒ c^2 = 1 =⇒ c = ±1    (3.91)

So every wave function is either even or odd.


In this case, reflection is a symmetry of the Hamiltonian, and we can see that it has interesting
consequences on the wave function. Later we will see more such situations.

If the wave function vanishes at a point, but the first derivative does not vanish at that
point, then the point is called a node.

Theorem IV: Node theorem Consider a one-dimensional time-independent potential. We
can arrange the eigenfunctions in order of increasing energy, starting from 1 for the lowest-energy
state. Let's say the energy of state ψ_n is E_n,

Hψ_n = E_n ψ_n    (3.92)

Since we have arranged the states in order of increasing energy, it implies that

i > j =⇒ E_i > E_j    (3.93)

In that case, the node theorem says that ψ_n(x) has n − 1 nodes. It is possible to refine this
statement further: ψ_n has at least one node between two consecutive nodes of ψ_{n−1}, and hence
ψ_n has at least one more node than ψ_{n−1}.
We will not prove it here; we refer readers to the lectures by Zwiebach on YouTube. Using the node
theorem, we can show a few more interesting results for the parity-symmetric potential given in
(3.88):
1. Odd wave functions always have nodes at x = 0. From the node theorem, we know that
the ground state cannot have any node and hence the ground state is always parity even.

2. An even (odd) wave function has an even (odd) number of nodes. From the node theorem, it
follows that even and odd functions appear alternately. For the nth state,

ψ_n(−x) = (−1)^{n−1} ψ_n(x)    (3.94)
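The node count and the parity alternation can be illustrated with the harmonic oscillator, whose eigenfunctions are proportional to H_n(x) e^{−x²/2} (a standard fact; here n counts from 0 for the ground state, unlike the numbering above, which starts at 1). A numerical sketch:

```python
import numpy as np
from numpy.polynomial.hermite import hermroots, hermval

# Sketch: the n-th oscillator eigenfunction has the nodes of the Hermite
# polynomial H_n. All n roots of H_n are real, so the n-th excited state has
# exactly n nodes, and H_n(-x) = (-1)^n H_n(x) gives the alternation of (3.94).
for n in range(6):
    coeffs = [0] * n + [1]               # Hermite series for H_n
    roots = hermroots(coeffs)
    assert len(roots) == n               # n nodes for the n-th excited state
    xs = 0.73                            # arbitrary sample point
    assert np.isclose(hermval(-xs, coeffs), (-1)**n * hermval(xs, coeffs))
```

The ground state (n = 0) has no node and is even, the first excited state has one node and is odd, and so on, exactly as the theorem predicts.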

In the absence of gravitational force, energy does not have any absolute meaning; only the
difference in energy is physically relevant. In such a case, we can always choose the potential
with its global minimum value equal to 0. That means that the energy of the lowest-energy state in
the classical theory is 0. In such a context, we have the following theorem.

Theorem V: The ground state in Quantum theory has non-zero positive energy. To rephrase it,
the difference between the lowest-energy configuration in quantum theory and in classical theory
is a positive real number.
We will not give a proof here. However, we discuss the reason why one would expect such
a statement to be true in Quantum Mechanics. The quantum Hamiltonian is a linear sum of
−d^2/dx^2 and V (x). The classical lowest-energy state is obtained by minimising the potential
term. If we try to make the state more and more localised in position space, the contribution
from −d^2/dx^2 becomes bigger. So there are two competing terms in the quantum Hamiltonian,
and we have to minimise this competing effect to obtain the lowest-energy state. As a result,
the minimum energy in Quantum Mechanics is greater than the minimum energy in classical
mechanics.

Consider a potential V (x); for simplicity we assume that V (±∞) = 0. As long as there
exist some values of x such that

V (x_c) < 0    (3.95)

there are classical bound state(s). In the previous theorem we found that the ground state of the
quantum system always lies above the classical lowest-energy state. So it seems possible
that even when there is a classical bound state, the lowest energy in quantum mechanics is
positive, and hence there is no bound state in Quantum Mechanics. The next theorem discusses that
situation.
Theorem VI: As long as the following condition is satisfied,

\int_{−∞}^{∞} \left[ V (x) − V (±∞) \right] dx < 0    (3.96)

the quantum potential always has at least one bound state. When the above condition becomes
an equality, one needs a careful analysis to conclude whether there is a bound state or not.
For example, in section (3.2) we have seen that when the two delta functions have equal and
opposite strengths, the bound state exists only when the distance between them is greater than
some critical distance.
Note that the condition in (3.95) is a necessary but not a sufficient condition for the existence of a
bound state in Quantum Mechanics.

3.3 Consequence of symmetry on Bound states

We have already seen the consequence of parity symmetry on the scattering states. Now we
want to understand the same thing for bound states. A bound state has a normalisable wave
function,

\int_{−∞}^{∞} dx\, |ψ(x)|^2 = finite    (3.97)
We are looking for bound states in a parity-symmetric potential,

H(x)ψ(x) = Eψ(x) ,  E ∈ R    (3.98)

Since H(−x) = H(x), it follows that ψ(−x) also satisfies the above eigenvalue equation with
the same eigenvalue,

H(x)ψ(−x) = Eψ(−x)    (3.99)

Now we take linear combinations of the above wave functions,

ψ_±(x) = ψ(x) ± ψ(−x)    (3.100)

From the definition, it follows that ψ_+(x) is an even function and ψ_−(x) is an odd function,

ψ_+(−x) = ψ_+(x) ,  ψ_−(−x) = −ψ_−(x)    (3.101)

Then from (3.98) and (3.99), it follows that

H(x)ψ_±(x) = Eψ_±(x)    (3.102)

So the bound-state wave functions can be chosen in such a way that each is either an even
function or an odd function.
Now we use the result that in one-dimensional systems there is no degeneracy and the wave
functions can be made purely real. In the above case, we found two orthogonal
states ψ_± with the same energy eigenvalue. The only way to avoid this situation is if

ψ(x) = ±ψ(−x)    (3.103)

In that case, either ψ_+ or ψ_− is zero. So in one-dimensional systems, whenever we write down
real wave functions for a parity-symmetric potential, they are either even functions or odd
functions. We have already seen this fact for the particle in a box and for the Harmonic oscillator.
An odd function on R always has a zero at the origin; any other zeros of an odd function
come in pairs. Similarly, for an even function the zeros always come in pairs. We use these
simple properties of even/odd functions along with the node theorem. From the node theorem, we
already know that the ground state has no node. Thus the ground-state wave function for a
parity-symmetric potential is always an even function. The first excited state has one node and
thus it must be an odd function. Extending this argument, we can show that even and odd wave
functions appear in alternation for a one-dimensional parity-symmetric potential.

3.3.1 Example I: Double delta function

Let's consider the bound states for the parity-symmetric arrangement of two delta functions in
(3.40).

Bound states In this case (2.174) reduces to

e^{−2κ a} = ± \frac{2κ + \lambda}{\lambda}    (3.104)

Depending on the ±,

N_{II−} = ∓N_{II+} ,  N_I = ∓ N_{III}    (3.105)

So the + sign in (3.104) corresponds to the odd wave function and the − sign corresponds to the even
wave function:

Parity even bound state :  e^{−2κ a} = −\frac{2κ + \lambda}{\lambda}    (3.106)
Parity odd bound state :  e^{−2κ a} = +\frac{2κ + \lambda}{\lambda}    (3.107)
λ
1. Parity even state(s)
If we consider the minus sign, then the above equation has a solution iff

\lambda < 0    (3.108)

2. Parity odd state(s)
If we consider the plus sign, then the above equation has a solution iff

\lambda < 0 ,  |\lambda| a > 1    (3.109)

The critical distance is important because of tunnelling. In quantum mechanics, there is
tunnelling between the two delta potentials; it lowers the energy of the ground state
and increases the energy of the first excited state. The contribution of tunnelling decreases
with the distance between the potentials. Below the critical distance, the contribution
from the tunnelling is large, and the "first excited bound" state becomes a scattering state
and becomes part of the continuum.
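The two conditions (3.106)-(3.107) can be probed numerically. A sketch (ours; units 2m = ħ = 1, and `has_root` is a simple sign-change scan, not from the notes) showing that the even state exists for any separation while the odd one appears only beyond the critical distance:

```python
import math

# Sketch: scan kappa > 0 for roots of the double-delta bound-state conditions,
# exp(-2 kappa a) = -(2 kappa + lam)/lam  (even sector, eq. 3.106) and
# exp(-2 kappa a) = +(2 kappa + lam)/lam  (odd sector,  eq. 3.107),
# for an attractive strength lam < 0.

def has_root(f, lo=1e-9, hi=50.0, steps=20000):
    prev = f(lo)
    for i in range(1, steps + 1):
        cur = f(lo + (hi - lo) * i / steps)
        if prev * cur <= 0:
            return True
        prev = cur
    return False

lam = -2.0
even = lambda a: (lambda k: math.exp(-2*k*a) + (2*k + lam)/lam)
odd  = lambda a: (lambda k: math.exp(-2*k*a) - (2*k + lam)/lam)
assert has_root(even(0.2)) and has_root(even(5.0))  # even state for any a > 0
assert not has_root(odd(0.2))   # |lam| a = 0.4 < 1: no odd bound state
assert has_root(odd(5.0))       # |lam| a = 10 > 1: odd bound state exists
```

Sliding `a` through 1/|λ| shows the odd root emerging from κ = 0, which is the "first excited bound state" entering from the continuum.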

3.3.2 Example II: Bound states of a potential well

Consider the following potential:

V (x) = \begin{cases} 0 & |x| > \ell/2 \\ −V & |x| < \ell/2 \end{cases}    (3.110)

We have depicted the potential in fig. 3.1. The potential has parity symmetry,

V (−x) = V (x)    (3.111)

Hence the wave function is either even or odd.

Figure 3.1: Particle in a one-dimensional finite well (a well of depth V between x = −\ell/2 and x = \ell/2)

3.3.3 Bound states

Let us now consider the bound states. The bound states have

−V < E < 0    (3.112)

Here V is a positive quantity, so

V > |E|    (3.113)

V − |E| measures the energy with respect to the minimum of the potential. In this case, region
I and region III are mapped to each other by parity. So, we will only use the labels I and II. In
order to find the solutions, we define the following two quantities:

κ = \sqrt{\frac{2m|E|}{\hbar^2}} ,  k = \sqrt{\frac{2m(V − |E|)}{\hbar^2}}    (3.114)

From the definition it follows that both quantities are non-negative.

Bound states with even wave function First, we consider the wave functions that are
even under parity. The wave function for a bound state that is an even function takes the
following form:

ψ(x) = \begin{cases} N_I\, e^{−κ|x|} & |x| > \ell/2 \\ N_{II} \cos kx & |x| < \ell/2 \end{cases}    (3.115)

Now we have to impose continuity of ψ(x) and ψ'(x) at x = \ell/2:

N_{II} \cos\left(k\frac{\ell}{2}\right) = N_I\, e^{−κ\ell/2}    (3.116a)
k N_{II} \sin\left(k\frac{\ell}{2}\right) = κ N_I\, e^{−κ\ell/2}    (3.116b)

We get the same conditions if we impose continuity at x = −\ell/2. From the above two equations,
we get

k \tan\left(k\frac{\ell}{2}\right) = κ    (3.117)

From the definitions (3.114) we also know

κ^2 + k^2 = \frac{2m}{\hbar^2} V    (3.118)

In order to find the solutions, see fig. 3.2. We have plotted the two functions

k \tan\left(k\frac{\ell}{2}\right) ,  \sqrt{\frac{2m}{\hbar^2} V − k^2}    (3.119)

as functions of k, and we look for their intersections. This provides a quantization condition for k
(i.e. for the energy) of the bound states. From the graphs, it is clear that there is at least one bound
state in this sector as long as V \ell ≠ 0.

Figure 3.2: Even parity bound state in potential well

The potential well has two free parameters: \ell and V . Let's try to understand what happens
if we vary these parameters.
1. Changing the width of the potential \ell
If we increase \ell without changing V , the number of bound states increases.

2. Changing the depth of the potential V
If we keep \ell unchanged and increase V , the number of bound states increases.
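The quantization condition (3.117) is transcendental, but each branch of the tangent contributes at most one intersection, so the even-sector states can be counted with plain bisection. A numerical sketch (ours, in units 2m = ħ = 1, so the allowed k lie in (0, √V); the helper names are illustrative):

```python
import math

# Sketch: solve k tan(k l/2) = sqrt(V - k^2) on each tangent branch
# k l/2 in [n pi, n pi + pi/2), where the left side rises from 0 to +infinity
# and the condition has at most one root.

def bisect(f, a, b, iters=100):
    fa = f(a)
    for _ in range(iters):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

def even_bound_states(V, l):
    kmax = math.sqrt(V)
    f = lambda k: k * math.tan(k * l / 2) - math.sqrt(max(V - k * k, 0.0))
    roots, n = [], 0
    while 2 * n * math.pi / l < kmax:
        a = 2 * n * math.pi / l + 1e-9
        b = min((2 * n + 1) * math.pi / l - 1e-9, kmax - 1e-9)
        if a < b and f(a) < 0 < f(b):
            roots.append(bisect(f, a, b))
        n += 1
    return roots

assert len(even_bound_states(V=1.0, l=1.0)) == 1    # even sector never empty
assert len(even_bound_states(V=100.0, l=1.0)) == 2  # deeper well: more states
assert len(even_bound_states(V=100.0, l=3.0)) == 5  # wider well: more states
```

The three assertions reproduce the claims above: the even sector always has at least one state, and both deepening and widening the well add states.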

Infinite well Consider the case of the infinite well (i.e. V → ∞, keeping V − |E| finite); in this
case

κ → ∞    (3.120)

This implies

k \frac{\ell}{2} = (2n + 1) \frac{π}{2}    (3.121)

Dirac delta potential In order to get the delta function potential, we consider the limit

V → ∞ ,  \ell → 0  such that  α := V \ell = constant    (3.122)

We rewrite the condition (3.117) as

κ = \left(k^2 \frac{\ell}{2}\right) \frac{\tan\left(k\frac{\ell}{2}\right)}{k\frac{\ell}{2}}    (3.123)

We take the limit \ell → 0, and use \tan x/x → 1, k^2 ≈ 2mV/\hbar^2 and the definition of α to get

κ = \lim_{\ell→0} k^2 \frac{\ell}{2} = \frac{m V \ell}{\hbar^2} = \frac{mα}{\hbar^2}    (3.124)

which reads κ = α/2 in units 2m = \hbar = 1. Regardless of the strength of the delta function, there is one bound state for the delta function
potential. In these units the wave function of the bound state is given by

ψ(x) = N_I\, e^{−\frac{α}{2}|x|}    (3.125)

Bound states with odd wave function The wave function for a bound state that is an odd
function takes the following form:

ψ(x) = \begin{cases} N_I\, \mathrm{sgn}(x)\, e^{−κ|x|} & |x| > \ell/2 \\ N_{II} \sin kx & |x| < \ell/2 \end{cases}    (3.126)

We impose continuity of ψ(x) and ψ'(x) at x = \ell/2:

N_{II} \sin\left(k\frac{\ell}{2}\right) = N_I\, e^{−κ\ell/2}    (3.127a)
k N_{II} \cos\left(k\frac{\ell}{2}\right) = −κ N_I\, e^{−κ\ell/2}    (3.127b)

Combining these two equations, we get

\cot\left(k\frac{\ell}{2}\right) = −\frac{κ}{k}    (3.128)

Again we do the same thing. In fig. 3.3, we plot the two functions

−k \cot\left(k\frac{\ell}{2}\right) ,  \sqrt{\frac{2m}{\hbar^2} V − k^2}    (3.129)

Note that in this sector, the number of bound states could be zero.
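The odd-sector counting can be made explicit: a root of (3.128) requires a branch of −cot to start below k_max = √(2mV)/ħ. A sketch (ours, units 2m = ħ = 1), showing that a shallow well has no odd bound state while the even sector always has one:

```python
import math

# Sketch: branches of -k cot(k l/2) start at k = (2n+1) pi / l and rise to
# +infinity, so each branch that begins below kmax = sqrt(V) crosses the
# curve sqrt(V - k^2) exactly once; counting branches counts odd states.

def n_odd_states(V, l):
    kmax = math.sqrt(V)
    count, n = 0, 0
    while (2 * n + 1) * math.pi / l < kmax:
        count += 1
        n += 1
    return count

assert n_odd_states(V=1.0, l=1.0) == 0    # shallow/narrow: no odd state
assert n_odd_states(V=100.0, l=1.0) == 2  # deep well: odd states appear
```

Combined with the even-sector count, this reproduces the familiar total of roughly √V·ℓ/π + 1 bound states for a deep well.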

Figure 3.3: Odd parity bound state in the potential well

Infinite well Consider the case of the infinite well (i.e. V → ∞, keeping V − |E| finite); in this
case

κ → ∞    (3.130)

This implies

k \frac{\ell}{2} = nπ    (3.131)

3.4 Bound states from Scattering matrix and Transfer matrix

Up to this point, we were analyzing the scattering states in the presence of a potential. In this
section, we show that from the information about the scattering states of a potential, it is possible
to learn about the bound states of the same potential. The potential is zero in regions I and III.
A bound state (by definition) cannot escape to infinity. So the energy of any bound state of the
potential is negative,

E < 0    (3.132)

Consider the following substitution:

k −→ iκ ,  κ > 0    (3.133)

In this case, the scattering wave functions in (2.102a) and (2.102b) go to

ψ_I(x) = A_I e^{−κx} + B_I e^{κx}    (3.134a)
ψ_{III}(x) = A_{III} e^{−κx} + B_{III} e^{κx}    (3.134b)

The form of these wave functions is similar to the wave functions for bound states, with a few
exceptions. In the x → ∞ limit, the wave function becomes unbounded if B_{III} ≠ 0. Similarly, in
the x → −∞ limit, the wave function grows without bound if A_I ≠ 0. For a physical
state, the probability should add up to one, and hence the above wave function represents a
physical state iff

A_I = 0 = B_{III}    (3.135)

In this case, we obtain a bound-state wave function by substituting k −→ iκ in a scattering-state
solution. In other words, we should set the (analytically continued) incoming wave function
to zero, while for a bound-state solution the (analytically continued) outgoing wave function is
finite. The S matrix is the map from the incoming state to the outgoing state; let's rewrite
that relation again:

\begin{pmatrix} B_I \\ A_{III} \end{pmatrix} = \begin{pmatrix} S_{11}(k) & S_{12}(k) \\ S_{21}(k) & S_{22}(k) \end{pmatrix} \begin{pmatrix} A_I \\ B_{III} \end{pmatrix}    (3.136)

The above relation is consistent with (3.135) only if S has a singularity at the values κ_b
corresponding to the allowed bound-state solutions.
Alternatively, we can do the following analytic continuation:

k → −iκ    (3.137)

In this case, the wave function is normalizable if we set the analytically continued (outgoing)
wave function to zero. The analytically continued incoming wave function is still finite. This
means that for a bound state the S matrix has a zero at κ_b.
To summarise, the bound states can be found from the poles/zeroes of the analytically
continued S matrix:

S_{ij}(iκ)\Big|_{κ=κ_b} = ∞ ,  S_{ij}(−iκ)\Big|_{κ=κ_b} = 0    (3.138)

This is an extremely important and interesting result. We considered potentials which are
localized in a finite interval of the real line. For scattering experiments, we throw in plane
waves from ±∞ and observe what comes back. On the other hand, a bound state is localized in the
region of the potential; the wave function of a bound state falls off exponentially, and hence in
any far-away region the existence of the bound state is undetectable. The above result says
that if we know the scattering data, then from it we can deduce the bound states of the potential.

IISERB(2024) QMech I - 83
Bound state from transfer matrix Let's now look at the expression of the transfer matrix (2.122) and put k = iκ and AI = 0 = BIII.

\begin{pmatrix} A_{III} \\ 0 \end{pmatrix} = \begin{pmatrix} M_{11}(i\kappa) & M_{12}(i\kappa) \\ M_{21}(i\kappa) & M_{22}(i\kappa) \end{pmatrix} \begin{pmatrix} 0 \\ B_I \end{pmatrix} = B_I \begin{pmatrix} M_{12} \\ M_{22} \end{pmatrix} (3.139)

Then the bound state solution is given by

M22 (iκb ) = 0 , κb > 0 (3.140)

Example: Bound states from S matrices of delta function To get the bound state, we
put

k −→ i κ (3.141)

The even-parity S matrix blows up if

κ = −λ/2 (3.142)

This gives the parity-even bound state (2.138). In this case, there is no parity-odd bound state.

Bound states from the poles of the S matrix of the parity-symmetric two-delta-function potential From this expression, we can check that the poles of the S matrices give bound states. In order
to get the bound states, we substitute

k = iκ (3.143)

For the parity-even sector, the S matrix blows up if

e^{−2κa} = −(2κ + λ)/λ (3.144)
This is in agreement with eqn (3.106). For the parity-odd sector, the S matrix blows up if

e^{−2κa} = (2κ + λ)/λ (3.145)
This is in agreement with eqn (3.107).

3.5 Levinson’s theorem


Levinson's theorem connects two apparently distinct objects: scattering states and bound states. For the moment, we restrict Levinson's theorem to one-dimensional systems. We know that for a parity-symmetric potential, the bound states and scattering states can be written as eigenfunctions of the parity operator. The theorem states that for a parity-symmetric potential, the number of parity-even (parity-odd) bound states N_b^{(+)} (N_b^{(−)}) is related to the phase shift δ^{(+)}(k) (δ^{(−)}(k)) in the following way

N_b^{(+)} = (1/π) (δ^{(+)}(0) − δ^{(+)}(∞)) (3.146)
We will not go into the proof now; we will come back to it later 3 .
3
You can watch the lecture by Zwiebach.

Levinson's theorem for single delta function There is no phase shift in the parity-odd sector (see (3.39)), and there are no parity-odd bound states, so Levinson's theorem is trivially satisfied there. Let's look at the parity-even sector. From (3.38), we know that the scattering phase shift is given by

φ+(k) = 2 tan⁻¹(λ/(2k)) (3.147)
Note that the phase is a monotonically decreasing function of k. So

φ+(0) = π , φ+(∞) = 0 (3.148)

Then

(1/π) [φ+(0) − φ+(∞)] = 1 (3.149)

This is exactly the number of parity-even bound states of the single delta potential.
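The counting in (3.149) can also be checked numerically from the phase shift (3.147). Here is a minimal sketch, with an arbitrary illustrative strength λ = 3 and very small/large values of k standing in for the two limits:

```python
import numpy as np

# Parity-even phase shift of the single delta potential, eq. (3.147).
def phase_shift_even(k, lam):
    # arctan2 handles the k -> 0 limit gracefully (it tends to pi/2)
    return 2.0 * np.arctan2(lam, 2.0 * k)

lam = 3.0                                # arbitrary potential strength
phi_0 = phase_shift_even(1e-9, lam)      # stands in for phi_+(0) = pi
phi_inf = phase_shift_even(1e9, lam)     # stands in for phi_+(infinity) = 0

n_bound = (phi_0 - phi_inf) / np.pi      # eq. (3.149)
print(round(n_bound))                    # 1: one parity-even bound state
```

The same check with any other positive λ gives the same answer, since the phase always sweeps from π down to 0.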

More exercises from this chapter


1. We have considered the consequences of parity symmetry and time-reversal symmetry on the scattering matrix. Let's now consider a Hamiltonian which satisfies the following properties:

H(x, t) ≠ H(−x, t) , H(x, t) ≠ H(x, −t) , H(x, t) = H(−x, −t) (3.150)

What is the consequence of this for the S matrix?


Interpret the result.

2. Shifted parity symmetry


A potential is called parity symmetric if

V (x) = V (−x) (3.151)

In the lecture, we considered the consequences of this symmetry on the S matrix. Let’s
now consider the case when the potential satisfies the following property

V (x) = V (a − x) (3.152)

Here a is a real number. What is the consequence of this symmetry on the S matrix?

Chapter 4

A bit more on Framework of Quantum Mechanics

4.1 Composite systems and tensor product of Hilbert spaces


We have discussed many quantum systems. We have seen that a Quantum system is described
by a Hilbert space H and a self-adjoint operator H, known as the Hamiltonian, acting on that
Hilbert space. In other words, the pair (H, H) defines a quantum system.
Let's say we are given two such systems (H1 , H1 ) and (H2 , H2 ). For example, we can consider H1 = C2 with H1 a 2 × 2 hermitian matrix, and H2 = C3 with H2 a 3 × 3 hermitian matrix. We want to study the combined system. For example, one could ask what the total energy of the combined system is. In classical mechanics, if we do not turn on any potential between the two systems, then the total energy is simply the sum of the individual energies.
The question is, what happens in a quantum system? One would naively expect that the total
Hamiltonian is
H = H1 + H2 (4.1)

But this does not make much sense because we have to specify the Hilbert space on which this
operator acts. The Hilbert space cannot be H1 or H2 . Moreover the action of H1 on H2 or H2
on H1 is not defined. Consider the example mentioned above. In that case, H1 is a 2 × 2 matrix,
and H2 is a 3 × 3 matrix. In that case, it does not make sense to add a 2 × 2 matrix with a 3 × 3
matrix. In this section, we want to make sense of the Hilbert space of the combined system and
the corresponding Hamiltonians.
The Hilbert space H for the combined system is the tensor product of the individual Hilbert
space1 :
H = H1 ⊗ H2 (4.2)

The information of the whole system is determined by the information of the individual systems. A product state is of the form

|φ〉 ⊗ |χ〉 (4.3)

with |φ〉 a state of the first system and |χ〉 a state of the second.

Let |φi 〉 be a basis (say energy eigenbasis) for H1 (i = 1, · · · , n) and |χI 〉 be a basis for H2
(I = 1, · · · , N ). Then a basis for H is given by

|φi 〉 ⊗ |χI 〉 (4.4)


1
When we take the tensor product of two Hilbert spaces, they have to be Hilbert spaces over the same field. Otherwise, it is not possible to construct the tensor product.

The tensor product is distributive w.r.t. vector addition and scalar multiplication of H1 and H2:

(|φi〉 + |φj〉) ⊗ |χI〉 = |φi〉 ⊗ |χI〉 + |φj〉 ⊗ |χI〉 (4.5)
|φi〉 ⊗ (|χI〉 + |χJ〉) = |φi〉 ⊗ |χI〉 + |φi〉 ⊗ |χJ〉 (4.6)
λ (|φi〉 ⊗ |χI〉) = (λ|φi〉) ⊗ |χI〉 = |φi〉 ⊗ (λ|χI〉) (4.7)

For future purposes, we denote this state as |i I〉. The number of independent basis elements is
nN . Hence
dim H = (dim H1 ) · (dim H2 ) (4.8)
If we consider the tensor product of two general states Σi ci |φi〉 and ΣI dI |χI〉, then we obtain

(Σ_{i=1}^{n} ci |φi〉) ⊗ (Σ_{I=1}^{N} dI |χI〉) = Σ_{i,I} ci dI |φi〉 ⊗ |χI〉 (4.9)

In the product Hilbert space H, the inner product of two states |φ〉 ⊗ |χ〉 and |φ̃〉 ⊗ |χ̃〉 is defined as

(|φ〉 ⊗ |χ〉, |φ̃〉 ⊗ |χ̃〉) = 〈φ̃|φ〉〈χ̃|χ〉 (4.10)

So the inner product on H is inherited from the inner products on H1 and H2.
Now we discuss operators in the product Hilbert space H. Given an operator O1 acting on H1 and O2 acting on H2, we can construct the operator O1 ⊗ O2, whose action on a state is given as follows:

(O1 ⊗ O2)(|φ〉 ⊗ |χ〉) = (O1 |φ〉) ⊗ (O2 |χ〉) (4.11)
For example, the total Hamiltonian of the system is given by

H = H1 ⊗ 1 + 1 ⊗ H2 (4.12)

In terms of the eigenvalues, the total energy is simply the addition of the energy of the individual
systems.
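This additivity of the spectrum is easy to verify numerically. Below is a minimal NumPy sketch using `np.kron` for the 2 × 2 / 3 × 3 example mentioned earlier; the diagonal Hamiltonians are arbitrary illustrative choices:

```python
import numpy as np

# H = H1 (x) 1 + 1 (x) H2, eq. (4.12), built with Kronecker products.
H1 = np.diag([0.0, 1.0])             # illustrative 2x2 Hamiltonian on C^2
H2 = np.diag([0.0, 2.0, 5.0])        # illustrative 3x3 Hamiltonian on C^3

H = np.kron(H1, np.eye(3)) + np.kron(np.eye(2), H2)

# dim(H) = 2 * 3 = 6, eq. (4.8), and the spectrum of H consists of all
# pairwise sums of eigenvalues of H1 and H2.
eigs = np.sort(np.linalg.eigvalsh(H))
expected = np.sort([e1 + e2 for e1 in [0, 1] for e2 in [0, 2, 5]])
print(H.shape, np.allclose(eigs, expected))  # (6, 6) True
```

The same construction works for non-diagonal hermitian matrices; the spectrum is still the set of pairwise sums.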
Similarly, the total angular momentum of the system is
Ji = Ji^{(1)} ⊗ 1 + 1 ⊗ Ji^{(2)} (4.13)

J^{(1)} and J^{(2)} are the angular momentum operators of systems 1 and 2, respectively.

We leave it to the readers to check that

1. The inner products of the individual Hilbert spaces (H1 & H2) give an inner product for H1 ⊗ H2. Let (·, ·)_H be the inner product of the Hilbert space H. Then, show that the following definition gives a sensible inner product.

(φ1 ⊗ ψ1, φ2 ⊗ ψ2)_{H1⊗H2} = (φ1, φ2)_{H1} (ψ1, ψ2)_{H2} (4.14)

2. Show that if O1 is a self-adjoint operator of H1 and O2 is a self-adjoint operator of H2


then O1 ⊗ O2 is a self-adjoint operator of H1 ⊗ H2 .

3. Let |φi〉1 (i = 1, · · · , dim H1) be the basis vectors for H1 and |ψI〉2 (I = 1, · · · , dim H2) be the basis vectors for H2. Then show that |φi〉1 ⊗ |ψI〉2 forms a basis for H1 ⊗ H2.

4.1.1 Two Qubit systems
Let us consider an example: two spin-1/2 systems. First, we review the basics of a single spin-1/2 system. It has two linearly independent states; we choose them to be the eigenstates of J3: |↑〉 and |↓〉.
J3 |↑〉 = (ℏ/2) |↑〉 , J3 |↓〉 = −(ℏ/2) |↓〉 (4.15)
This is a two-dimensional vector space. So we choose the states |↑〉 and |↓〉 to be

|↑〉 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} , |↓〉 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} (4.16)
The angular momentum matrix is

J3 = (ℏ/2) \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = (ℏ/2) σ3 (4.17)

Let’s now consider two such systems; it has four linearly independent states

| ↑〉 ⊗ | ↑〉 , | ↑〉 ⊗ | ↓〉 , | ↓〉 ⊗ | ↑〉 , | ↓〉 ⊗ | ↓〉 (4.18)

For future purposes, we will denote these states

| ↑↑〉 , | ↓↓〉 , | ↑↓〉 , | ↓↑〉 (4.19)

Let’s consider the action of J3 on these states

J3 |↑↑〉 = ℏ |↑↑〉 , J3 |↓↑〉 = 0 = J3 |↑↓〉 , J3 |↓↓〉 = −ℏ |↓↓〉 (4.20)

Consider the most general state in H1 (the Hilbert space of the first spin 1/2 particle); it is
given by
|φ〉 = c1 | ↑1 〉 + c2 | ↓1 〉 , |c1 |2 + |c2 |2 = 1 (4.21)
Similarly the most general state in H2 is given by

|χ〉 = d1 | ↑2 〉 + d2 | ↓2 〉 , |d1 |2 + |d2 |2 = 1 (4.22)

So a state obtained from the product of two generic states in H1 and H2 is

c1 d1 | ↑↑〉 + c2 d2 | ↓↓〉 + c1 d2 | ↑↓〉 + c2 d1 | ↓↑〉 (4.23)

However, a generic state in H is a linear sum of the above four states

f1 | ↑↑〉 + f2 | ↓↓〉 + f3 | ↑↓〉 + f4 | ↓↑〉 , |f1 |2 + |f2 |2 + |f3 |2 + |f4 |2 = 1 (4.24)

So a generic state in H can be written as |φ〉 ⊗ |χ〉 iff

f1 f2 = f3 f4 (4.25)

So a generic state in H cannot be written as a product of a state in H1 times a state in H2. A state in H which can be written as |φ〉 ⊗ |χ〉 (with |φ〉 ∈ H1 and |χ〉 ∈ H2) is called a product state; otherwise it is called an entangled state. Entanglement is an extremely interesting concept that has no analogue in classical mechanics. Moreover, entanglement has nothing to do with the Hamiltonian; it is a property of a state, not of an operator. Entanglement can exist even between two free systems; the presence or absence of interaction is not required.
Entanglement plays a very important role in Quantum Computing2 .
2
Interested reader can read for example Quantum Computation and Quantum Information by Michael A.
Nielsen, Isaac L. Chuang.
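The product-state criterion f1 f2 = f3 f4 derived above can be tested numerically; the sketch below also cross-checks it against the rank of the 2 × 2 coefficient matrix (the Schmidt rank), an equivalent standard criterion. The helper `is_product` is our own illustrative function, not a standard API:

```python
import numpy as np

# State f1|uu> + f2|dd> + f3|ud> + f4|du>; it is a product state iff
# f1*f2 = f3*f4, equivalently iff the coefficient matrix has rank 1.
def is_product(f1, f2, f3, f4, tol=1e-12):
    c = np.array([[f1, f3], [f4, f2]])   # c[a, b] multiplies |a>|b>
    rank = int(np.sum(np.linalg.svd(c, compute_uv=False) > tol))
    return bool(np.isclose(f1 * f2, f3 * f4)) and rank == 1

print(is_product(1, 0, 0, 0))        # True:  |up up> is a product state
s = 1 / np.sqrt(2)
print(is_product(s, s, 0, 0))        # False: (|uu> + |dd>)/sqrt(2) is entangled
```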

4.1.2 Two-dimensional harmonic oscillator
Now, we will discuss another example of a composite system: a two-dimensional harmonic oscillator.

H1 = ½ (p1² + m1 x1²) , H2 = ½ (p2² + m2 x2²) (4.26)
We already know that the Hilbert space for a single harmonic oscillator is ℓ2(N). So now we have two quantum systems (ℓ2(N), H1) and (ℓ2(N), H2). We want to find a system with both
oscillators. The Hamiltonian of the composite system is usually written as

H = H1 + H2 (4.27)

What it actually means is that

H = H1 ⊗ 1 + 1 ⊗ H2 (4.28)

The Hilbert space of the composite system is ℓ2 (N)⊗ℓ2 (N)3 . The states of the composite Hilbert
space are of the form

|φ〉1 ⊗ |ψ〉2 , |φ〉1 ∈ H1 , |ψ〉2 ∈ H2 (4.29)

Then, the creation and annihilation operators of the two oscillators act on the composite Hilbert space as

a1 ⊗ 1 , a†1 ⊗ 1 , 1 ⊗ a2 , 1 ⊗ a†2 (4.30)

Let's denote the energy eigenstates of the first oscillator as

|n〉1 , n ∈ N (4.31)

The Hilbert space (which is isomorphic to ℓ2 (N)) of the first oscillator is the Hilbert space
spanned by these vectors. Then, the Ground state of the composite system is

|0〉1 ⊗ |0〉2 (4.32)

We leave it to the readers to show that this is the ground state, by checking that it is annihilated by a1 ⊗ 1 and 1 ⊗ a2. The energy eigenstates of the composite system are

|n1〉1 ⊗ |n2〉2 (4.33)

A commonly used notation in the literature We have already explained that the mathe-
matical structure behind the composite system is the tensor product of Hilbert spaces. However,
the tensor product notation is a bit heavy. So, it is not commonly used in the literature, even
though the underlying structure remains the same. The commonly used notation is

|n1 〉1 ⊗ |n2 〉2 −→ |n1 , n2 〉 (4.34)


|0〉1 ⊗ |0〉2 −→ |0〉 (4.35)

Similarly, the operators are also written in a simpler way

a1 ⊗ 1 −→ a1 , a†1 ⊗ 1 −→ a†1 (4.36)


1 ⊗ a2 −→ a2 , 1⊗ a†2 −→ a†2 (4.37)

We want to stress that this is only about the notation; the underlying mathematical structure
remains the same.
3
We leave it to the readers to show that ℓ2 (N) ⊗ ℓ2 (N) is isomorphic to ℓ2 (N).

Interaction term Until now, we have considered a composite system, which is a collection of free systems. An interaction term is a term in the Hamiltonian that involves non-identity operators from both factors. It is possible to write such operators and add them to the free Hamiltonian. For example, consider an operator of the form

a1 ⊗ a2 + a†1 ⊗ a†2 (4.38)

In the notation written above, this is written as

a1 a2 + a†1 a†2 (4.39)

4.2 Review of Symmetry in classical mechanics


We want to discuss the consequence of symmetry on dynamics. First, we discuss non-relativistic
classical dynamics. Consider the motion of a point particle in Rn , parametrized by

xi , i = 1, · · · , n (4.40)

Let the equation of motion be

d²xⁱ/dt² = Fⁱ({xʲ}, t) (4.41)

For future convenience, we define the following quantity:

Φⁱ({xʲ}, t) ≡ d²xⁱ/dt² − Fⁱ({xʲ}, t) , Φⁱ({xʲ}, t) = 0 (4.42)
Let the solution of the equation of motion be

xi (t) = f i (t) (4.43)

Let's define a vector f⃗(t), which has n components, each depending on time t:

x⃗(t) = (x¹(t), · · · , xⁿ(t))ᵀ , F⃗({xʲ}, t) = (F¹({xʲ}, t), · · · , Fⁿ({xʲ}, t))ᵀ , f⃗(t) = (f¹(t), · · · , fⁿ(t))ᵀ (4.44)

What is meant by the symmetry of a dynamics Now we discuss what is meant by a symmetry of a dynamics. Let's denote the space of solutions by

S({Φⁱ}) (4.45)

Time reversal is a symmetry of the dynamics if, for any solution f⃗(t) of the equation of motion,

f⃗(−t) ∈ S({Φⁱ}) ∀ f⃗(t) ∈ S({Φⁱ}) (4.46)

In general, K is a symmetry of the dynamics if

K ◦ f⃗(t) ∈ S({Φⁱ}) ∀ f⃗(t) ∈ S({Φⁱ}) (4.47)

Note that this should be true for all solutions f⃗(t). For some dynamical systems, there can be a special class of solutions for which this holds; but then such a transformation is not called a symmetry of the dynamics.
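As a toy illustration of this definition, take the equation ẍ = −x (so Φ[x] = ẍ + x). The sketch below checks by finite differences that f(t) = sin t is a solution and that its time reversal f(−t) is one too; the grid parameters are arbitrary choices:

```python
import numpy as np

t = np.linspace(-5.0, 5.0, 2001)
dt = t[1] - t[0]

def max_residual(f):
    # |Phi[f]| = |f'' + f| on interior grid points; f'' by central differences
    fpp = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dt**2
    return np.max(np.abs(fpp + f[1:-1]))

f_t = np.sin(t)      # a solution of x'' = -x
f_mt = np.sin(-t)    # its time reversal, also a solution
print(max_residual(f_t) < 1e-4, max_residual(f_mt) < 1e-4)  # True True
```

For an equation with a first-derivative (friction) term, the second residual would not vanish, signalling that time reversal is not a symmetry of that dynamics.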

How do you know whether the dynamics have a symmetry or not The most pedantic way to check the symmetry of a dynamics is to first compute the space of solutions and then check whether the space of solutions exhibits the symmetry or not. However, there are better ways to check it. For example, if we are just given an equation of motion, then we can check the symmetry of the equation of motion. Any symmetry of the equation of motion will also be a symmetry of the solutions. If we are given a Lagrangian, we can directly check its symmetries. Any symmetry of the Lagrangian is also a symmetry of the equation of motion and, hence, a symmetry of the space of solutions.

4.2.1 Symmetry using the Lagrangians


Up to this point, we discussed symmetry using the equations of motion and solutions to the equations of motion. Let's now consider a classical system which has a Lagrangian description. The Lagrangian description of classical mechanics is best suited to deal with symmetries and to understand their consequences. In the Lagrangian description, a symmetry is a transformation of variables that leaves the Lagrangian invariant up to a total derivative in time:

x → x̃ : L(x) → L(x̃) = L(x) + df(x, t)/dt (4.48)

Here f is a function of x and t.4 If the symmetry depends on a continuous parameter, it
is called a continuous symmetry. If the Lagrangian has a continuous symmetry, then from Noether's
theorem, we know that there is a quantity which does not change with time (conserved quantity).
The existence of conserved quantity has a profound impact on how we do physics. Physics is
a discipline which is finally validated by experiments. However, the Lagrangian is not observable
in an experiment. The dynamics that follow from the Lagrangian can be observed in nature.
An interesting and important challenge for a physicist is to find a Lagrangian that can explain
the phenomenon observed in nature. A conserved quantity is a useful tool in that endeavour.
A conserved quantity is observable in the experiment. If we find a conserved quantity, we
immediately know that the underlying Lagrangian has a symmetry, and the existence of any
symmetry severely constrains the form of the Lagrangian.
We discuss this point using an example:

4.2.2 Usefulness of symmetry in constructing Physical laws


Let’s consider two point particles at positions (x1 , y1 , z1 ) and (x2 , y2 , z2 ). We want to write down
the potential energy between those two particles. Without any other input, the potential is a
function of 6 parameters
V (x1 , x2 , y1 , y2 , z1 , z2 ) (4.49)
Now, we show how symmetry is useful in writing down physical laws.

1. The following transformation is known as the translation

(xi, yi, zi) −→ (xi + x0, yi + y0, zi + z0) , x0, y0, z0 ∈ R (4.50)

x0 , y0 , z0 are three independent real numbers. Furthermore, x0 , y0 , z0 can take any value
on the real line, hence a continuous symmetry. Whenever a symmetry has translational
4
The analysis of symmetry is much simpler if f is identically 0. Otherwise we have to choose suitable boundary
condition for x(ti ) and x(tf ) such that f (x, ti ) = 0 = f (x, tf ). And x → x̃ is a symmetry of the Lagrangian
subjected to that boundary condition; otherwise, it is not a symmetry.

invariance, the conjugate momenta are conserved:

px^{(1)} + px^{(2)} , py^{(1)} + py^{(2)} , pz^{(1)} + pz^{(2)} (4.51)
This implies that if we find in an experiment that the total momentum of the particles
is conserved, then we know that the system has translational invariance. If the system
has translation invariance, then the potential can only be a function of the (x1 − x2 , y1 −
y2 , z1 − z2 ). So, the existence of translation invariance implies that
V (x1 , x2 , y1 , y2 , z1 , z2 ) = V (x1 − x2 , y1 − y2 , z1 − z2 ) (4.52)
So, translational invariance implies that the potential is a function of three variables.

2. Let's now discuss rotational symmetry. We start with rotation in two dimensions. It is given by

x → x cos θ + y sin θ , y → −x sin θ + y cos θ (4.53)

θ can take any value between 0 and 2π, so this is a continuous symmetry.
The consequence of rotation symmetry is that the following quantity is conserved
xpy − ypx (4.54)
If we find this quantity to be conserved in an experiment, then we know that there is rotational symmetry in the x−y plane, and as a consequence, the potential can only be a function of √(x² + y²) and z. So, rotation invariance in the x−y plane, along with translation invariance, implies that the potential can only be a function of 2 variables.
The discussion on rotation can be extended to any number of dimensions. For that, we
have to choose a 2-plane and perform the transformation like above. For example, in
3-dimensions, we have three linearly independent 2-planes 5 and we can perform rotation
in any one of them. The rotational symmetry is also known as the isotropy of space, and
the consequence of that is the conservation of angular momentum.
From the isotropy of space (rotational invariance), we can conclude that V can only be a function of r = √((x1 − x2)² + (y1 − y2)² + (z1 − z2)²):

V(x1 − x2, y1 − y2, z1 − z2) = V(r) (4.55)

We can see that, using symmetry, we have reduced a problem of six variables to a problem of one variable. The only thing that is yet to be determined is the functional dependence on r:

V(r) = Σ_{n∈R} an rⁿ (4.56)

3. (The details of this part of the discussion are outside the scope of this course. You will be
learning this next year in the Quantum Field theory course). If we use the framework of
quantum field theory, then we can even specify the functional form of the potential. The
only form of potential that can arise in a unitary relativistic quantum field theory is
V(r) = g² e^{−mr}/r , m ≥ 0 (4.57)
For a long-range interaction, m is 0. Furthermore, Weinberg showed that if the potential is
due to the exchange of a massless spin-two particle, then the force can only be universally
attractive; the equality of inertial mass and gravitational mass follows from underlying
Poincare invariance.
5
In n dimensions, there are n(n − 1)/2 independent two planes.

4.2.3 Symmetry and groups
We have been discussing symmetry in classical systems and its consequences. Now, we discuss
the relationship between symmetry in physics and its relation with groups in mathematics. We
discuss the relation in a concrete example first.
Consider a system in two dimensions. Let the Lagrangian be given by

L = ½ m(ẋ² + ẏ²) − ½ k(x² + y²) (4.58)
The Lagrangian has rotational symmetry

Rθ : x → x cos θ + y sin θ , y → −x sin θ + y cos θ (4.59)

Rθ denotes the rotation by an angle θ. Let’s now consider the collection of all such rotations

G = {Rθ } , θ ∈ [0, 2π] (4.60)

Now we define Rθ ◦ Rφ : this means that first we rotate by φ and then we rotate by θ. We can
use the above expression to check that

Rθ ◦ Rφ = Rθ+φ (4.61)

We find that two successive rotations are just another rotation:

Rθ , Rφ ∈ G =⇒ Rθ ◦ Rφ ∈ G (4.62)

Let's now consider three elements of G: Rθ, Rφ, Rχ. Then, from the above equation, we can check

(Rθ ◦ Rφ ) ◦ Rχ = Rθ+φ+χ = Rθ ◦ (Rφ ◦ Rχ ) (4.63)

In G, there is an element corresponding to θ = 0; physically, this is the same as doing nothing. In the language of groups, this is the identity element:

Rθ=0 ≡ 1 , 1 ◦ Rθ = Rθ = Rθ ◦ 1 (4.64)

Rθ is rotation by angle θ in the counter-clockwise direction; R−θ is the rotation by angle θ in the clockwise direction. So R−θ undoes the effect of Rθ:

R−θ ◦ Rθ = 1 = Rθ ◦ R−θ (4.65)

When these four properties are satisfied by the elements of a set, then mathematically, it is
called a group.
We leave it to the readers to show that this is true for any classical system: the set of all
symmetry transformations forms a group.
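The four group properties of the planar rotations (4.59)–(4.65) can be verified numerically with rotation matrices; the angles below are arbitrary choices:

```python
import numpy as np

def R(theta):
    # Matrix form of eq. (4.59): acts on the column vector (x, y)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

th, ph, ch = 0.7, 1.9, -0.4
assert np.allclose(R(th) @ R(ph), R(th + ph))            # closure, eq. (4.61)
assert np.allclose((R(th) @ R(ph)) @ R(ch),
                   R(th) @ (R(ph) @ R(ch)))              # associativity, eq. (4.63)
assert np.allclose(R(0.0), np.eye(2))                    # identity, eq. (4.64)
assert np.allclose(R(-th) @ R(th), np.eye(2))            # inverse, eq. (4.65)
print("group axioms verified")
```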

4.3 Symmetries in a Quantum theory


In classical mechanics, any set of transformations of the dependent and/or independent variable
that maps any solution of the equation of motion to another solution is called a symmetry.
In the Lagrangian formulation of classical mechanics, it is easier to find a symmetry. Any
transformation that leaves the Lagrangian invariant up to a total derivative is a symmetry
of the theory. It is possible to show that any such transformation maps any solution to the
Lagrangian equation of motion to another solution.

Now the question is, what do we mean by symmetry in quantum theory? The fundamental observable in quantum mechanics is the probability, which is simply the mod square of the probability amplitude. The probability doesn't change if we multiply the state vector/wave function by a phase; ψ and e^{iθ}ψ represent the same physical state. A collection of vectors which differ only by a phase is mathematically called a ray. So a physical state in quantum mechanics is described by rays in Hilbert space.
In quantum systems, any transformation that leaves the probability invariant is called a symmetry transformation. Let's denote the rays by Ri. U is a symmetry transformation if

U : Ri −→ Ri^{(U)} , |〈Ri|Rj〉|² = |〈Ri^{(U)}|Rj^{(U)}〉|² (4.66)

Such a transformation is called a ray transformation.

4.3.1 Symmetry and groups


In quantum theory, the set of symmetry transformations obeys certain properties, which are
mathematically called a group. We will elaborate on this point. Let’s consider the set of all
symmetry transformations
G ≡ {Ui } (4.67)
So G denotes the set of all symmetry transformations. Now we will define the product between any two elements of this set. The product Ui Uj ≡ Ui · Uj means that we first perform the symmetry transformation Uj and then the symmetry transformation Ui. Following this rule, we can generalize the notion of product to any number of elements of the set (4.67). Now, we will
write down the properties

1. Ui , Uj are two symmetry transformations; then, from the definition, it follows that Ui Uj
is also a symmetry transformation. This is mathematically written as

Ui , Uj ∈ G =⇒ Ui Uj ∈ G (4.68)

2. Given any three elements Ui , Uj and Uk

Ui (Uj Uk ) = (Ui Uj )Uk (4.69)

3. Identity transformation (doing nothing) belongs to G

4. For every element Ui , there exists an element Ui−1 such that

Ui−1 Ui = Ui Ui−1 = 1 (4.70)

Physically, this means that for every transformation, you can always do the opposite
transformation so that you can go back to “doing nothing.”

Whenever the members of a set satisfy these four properties, that set is called a group. If the product of any two elements doesn't depend on the order, Ui Uj = Uj Ui, then it is called an abelian group. Otherwise, it is called a non-abelian group.
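The abelian/non-abelian distinction is easy to see concretely with matrices: rotations in a fixed plane commute, while rotations about different axes in three dimensions do not. A small illustrative sketch (standard axis conventions):

```python
import numpy as np

def Rz(t):
    # rotation about the z-axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Rx(t):
    # rotation about the x-axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

a, b = 0.5, 1.1
print(np.allclose(Rz(a) @ Rz(b), Rz(b) @ Rz(a)))   # True: same axis commutes
print(np.allclose(Rz(a) @ Rx(b), Rx(b) @ Rz(a)))   # False: different axes do not
```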

4.3.2 How does a symmetry act on Hilbert states?


Given that the probability remains invariant and the set of transformations forms a group, we
want to know how symmetries act on the states. This was answered by Wigner, and it is known
as Wigner’s theorem.

Wigner’s symmetry representation theorem is one of the important results in the math-
ematical formulation of quantum mechanics. The symmetries of a theory form a group. Ac-
cording to this theorem,

• Any symmetry transformation is realized as either a linear, unitary or an anti-linear, anti-unitary operation on the Hilbert space. (For every symmetry transformation, there exists either a linear, unitary operator or an anti-linear, anti-unitary operator that acts on the elements of the Hilbert space.)
Furthermore, if the symmetry transformation is a continuous symmetry, then the operation can only be linear and unitary.

• The Hilbert space can transform either as an ordinary representation or as a projective


representation. We will discuss the projective representation later.

This article by David Gross beautifully explains the role and importance of symmetry in
Physics.
The proof of Wigner’s symmetry representation theorem can be found in chapter 2 of
QFT by Weinberg. You can also read this resonance article. In case you are interested in
learning the proof by solving a problem, please see problem 5 in this assignment by Alan
Guth.

For the moment, we restrict to linear unitary transformations. Let's also consider continuous transformations. A continuous transformation is a transformation which depends on continuous parameter(s). For simplicity, let's also assume that the continuous transformation depends on only one real parameter θ. We write the corresponding unitary operator as

U (θ) : U † (θ)U (θ) = 1 = U (θ)U † (θ) (4.71)

Since the set of continuous transformations forms a group, there is an identity element. Let’s
say for θ = θ0 , the unitary transformation becomes an identity transformation

U (θ0 ) = 1 (4.72)

Now let's consider the value θ = θ0 + δθ, where δθ ≪ 1. In this case, we Taylor expand U(θ) 6 and keep only the first-order terms.

U (θ0 + δθ) = U (θ0 ) + U ′ (θ0 )δθ + O(δθ2 ) (4.73)

Since this is a unitary transformation

U † (θ0 + δθ)U (θ0 + δθ) = 1 =⇒ U ′ (θ0 ) + (U ′ (θ0 ))† = 0 (4.74)

Thus we can write U ′ (θ0 ) as i times a self adjoint operator

U ′ (θ0 ) = iV , V† =V (4.75)

So the unitary transformation, infinitesimal away from the element, takes the form

U (θ0 + δθ) = U (θ0 ) + iδθV + O(δθ2 ) (4.76)


6
It is not at all obvious that U(θ) is a differentiable function of θ. We request the readers to take a course on Lie groups and Lie algebras to understand this point better.

We found that by considering a unitary transformation infinitesimally away from the identity element, we obtained a self-adjoint operator. Here, we assumed that the continuous transformation depends on only one parameter. The statement can be generalised to multiple parameters: we leave it to the readers to check that for a transformation that depends on n real parameters, we get n self-adjoint operators (Exercise). The converse is also true: every self-adjoint operator provides us with an (infinitesimal) unitary transformation.
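This correspondence can be checked numerically: exponentiating a self-adjoint V gives a one-parameter family of unitaries U(θ) = e^{iθV} with U(0) = 1 and U′(0) = iV. The sketch below uses a randomly generated hermitian matrix as an illustrative V:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
V = (A + A.conj().T) / 2.0                 # a self-adjoint generator

def U(theta):
    # Exponentiate via the eigendecomposition V = W diag(w) W^dagger
    w, W = np.linalg.eigh(V)
    return W @ np.diag(np.exp(1j * theta * w)) @ W.conj().T

th, d = 0.3, 1e-6
assert np.allclose(U(th).conj().T @ U(th), np.eye(4))       # unitarity, eq. (4.71)
assert np.allclose(U(0.0), np.eye(4))                       # identity at theta = 0
assert np.allclose((U(d) - U(0.0)) / d, 1j * V, atol=1e-4)  # U'(0) = i V
print("generator checks passed")
```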

What is a representation? Let's say G is a group and H is a Hilbert space. If for every element g there exists an operator U(g) that acts on the Hilbert space in such a way that the product of two such operators obeys the group multiplication law up to a phase,

U(g1) U(g2) = e^{iφ(g1,g2)} U(g1 g2) (4.77)

then H is called a representation space, and the dimension of H is called the dimension of the representation. If the U(g)s are linear, unitary (anti-linear, anti-unitary) operators, then it is called a unitary (anti-unitary) representation.

Action of symmetry on states and operators Let H be the Hilbert space of a theory. Then, according to Wigner's theorem, if S is a symmetry, then there is an operator PS which is either linear, unitary or anti-linear, anti-unitary. Let's consider a state |α〉 and define |α̃〉

|α̃〉 = PS |α〉 (4.78)

Then

|〈α̃|α̃〉|² = |〈α|α〉|² (4.79)
The action of an operator on the bra is from the right. We need to settle how a symmetry
operator acts on other operators of the theory. In order to find that, consider another state |β〉,
which is obtained by the action of an operator O on |α〉

|β〉 = O |α〉 (4.80)

Then, under the symmetry transformation


(|α〉, |β〉) → (|α̃〉, |β̃〉) = (PS |α〉, PS |β〉) (4.81)

Let’s now consider the second equation

|β̃〉 = PS |β〉 = PS O|α〉 = PS OPS−1 PS |α〉 =⇒ |β̃〉 = Õ |α̃〉 (4.82)

where Õ is given by PS OPS−1 .

4.3.3 Symmetry transformation and Hamiltonian


We have discussed symmetry transformations. From Wigner's theorem, we know that there are two types of transformations: linear unitary transformations and anti-linear anti-unitary transformations. For the moment, let's restrict to linear unitary transformations. Anti-linear, anti-unitary transformations are also very important, and they always have something to do with time reversal. A linear unitary transformation is generated by a unitary operator. Now, we want to discuss the role of the Hamiltonian in the case of symmetry transformations. (In a non-relativistic theory) the operator corresponding to a symmetry transformation commutes with the Hamiltonian:

[H, U ] = 0 (4.83)

where U is a symmetry transformation. If it is a continuous transformation, then we can expand
it infinitesimal far away from the identity element

[H, 1 + iδθV ] = 0 =⇒ [H, V ] = 0 (4.84)

Thus, for every continuous symmetry in quantum mechanics, there is a self-adjoint operator
which commutes with the Hamiltonian. The simplest example is translation symmetry. For
example, in the case of free particles, we can talk about the transformations

x→x+a (4.85)

Here, a is a continuous parameter. So, in quantum theory, there is a unitary operator U(a) for this continuous transformation. We can always consider an infinitesimal translation and Taylor expand it:

U(δa) = 1 + iP δa (4.86)

The self-adjoint operator P that we get from an infinitesimal translation is called the momentum operator.
The position is also an operator in Quantum Mechanics. From the definition of translation
operators, we get

U (a) X U † (a) = X + a (4.87)

Again, we can consider an infinitesimal version of this transformation to get

i(P X − XP ) = 1 =⇒ [X, P ] = i (4.88)

Thus, from the action of symmetry, we can see that X and P do not commute. This relation is
also known as the Canonical commutation relation.
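As a concrete illustration (with ℏ = 1, and in the convention where translating by a maps ψ(x) to ψ(x − a), realized by e^{−iPa}; the sign of the exponent depends on the convention chosen), here is a sketch on a periodic grid where P is diagonal in Fourier space. The grid and the Gaussian profile are arbitrary choices:

```python
import numpy as np

N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
p = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # momentum (wavenumber) grid

psi = np.exp(-(x + 5.0) ** 2)                  # Gaussian centred at x = -5
a = 3.0
# Apply exp(-i P a) by multiplying in Fourier space
psi_shifted = np.fft.ifft(np.exp(-1j * p * a) * np.fft.fft(psi))

# Compare with the Gaussian translated by hand: psi(x) -> psi(x - a)
err = np.max(np.abs(psi_shifted - np.exp(-(x + 5.0 - a) ** 2)))
print(err < 1e-10)  # True
```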

4.3.4 Action of symmetry on the Hilbert space for one dim Harmonic oscil-
lator
We now demonstrate an example of Wigner’s symmetry representation theorem. We consider
a one-dimensional Harmonic Oscillator. We rewrite everything in the Bra-Ket notation. The
Hamiltonian is given by
H = ½ (p² + ω² x²) (4.89)
In order to quantize the system, we promote the variables x and p to operators and impose
canonical commutation relation between them
[x̂, p̂] = iℏ (4.90)

One defines the creation and annihilation operators as follows:

a = √(ω/2ℏ) (x̂ + (i/ω) p̂) , a† = √(ω/2ℏ) (x̂ − (i/ω) p̂) (4.91)

We denote the ground state as |0〉.

[a, a† ] = 1 , a|0〉 = 0 (4.92)

Then, the excited states are given by

|n〉 = ((a†)ⁿ/√(n!)) |0〉 (4.93)
Here n is a non-negative integer. Let's now discuss various symmetries of the system and their action on the Hilbert space. From Wigner's theorem, we know that the action of a symmetry can be either linear, unitary or anti-linear, anti-unitary. The way to settle between these two is to demand that the action of the symmetry preserve the canonical commutation relation.

Parity symmetry Let’s consider the Hamiltonian in (4.89). It has parity symmetry

P : (x, p) −→ (−x, −p) (4.94)

According to the Wigner symmetry representation theorem, there is an operator P that acts
on the Hilbert space. We are yet to determine whether it is a linear unitary operator or an
anti-linear, anti-unitary operator. From the transformation of x and p, we know that

P(x, p)P −1 = −(x, p) (4.95)

Consistency with the canonical commutation relation (4.90) demands that this symmetry must be realized by a linear, unitary operator

P [x, p] P −1 = [x, p] =⇒ P(i󰄁)P −1 = i󰄁 (4.96)

Then, the action on the creation and annihilation operators are given by

P(a, a† )P −1 = (−a, −a† ) (4.97)

Now, we want to find its action in the Hilbert space. This symmetry squares to identity

P2 = 1 (4.98)

So, the eigenvalue of this operator can only be ±1. The vacuum must be an eigenstate of P;
because if it is not an eigenstate, then the action of P will give another state of the same energy.
But we know the vacuum is unique. So, the vacuum must be an eigenstate of P. Let the action
on the vacuum state be

P|0〉 = α|0〉 , α = ±1 (4.99)

α is undetermined. Then, a state with occupation number n transforms in the following way

P|n〉 = (−1)n α|n〉 (4.100)

Given an operator P, we can always find a new operator P̃ = αP such that

P̃ 2 = 1 , P̃|0〉 = |0〉 (4.101)

Any such redefinition does not affect the action (4.97) on the operators. Since we can always perform this redefinition, the phase α is not physically observable; it can always be absorbed.
Here, we take a small detour and show that the eigenvalues (which are often referred to
as “Quantum numbers” in physics literature) of a linear unitary operator, corresponding to a

symmetry, remain unchanged under time evolution. We demonstrate with the Parity operator.
Let U (t, t0 ) be the time evolution operator. Then

PHP −1 = H =⇒ PU (t, t0 )P −1 = U (t, t0 ) (4.102)

Please note that the linear unitary nature of the operator is important to show the above relation.
Now, let’s consider the state |n〉 at two different instances in time. Let t0 be the initial time,
and we know the eigenvalue at that time

P|n(t0 )〉 = (−1)n |n(t0 )〉 (4.103)

We will now try to determine the eigenvalue of the parity operator at a later point in time

P|n(t)〉 = PU (t, t0 )|n(t0 )〉 = PU (t, t0 )P −1 P|n(t0 )〉 = U (t, t0 )(−1)n |n(t0 )〉 = (−1)n |n(t)〉
(4.104)
So, we can see that the eigenvalue of the parity operator does not change with time. In other
words, the eigenvalue of the parity operator is a conserved quantity. We already know that
in classical mechanics, every continuous symmetry is associated with a conservation law. This
remains true for continuous symmetry in Quantum Mechanics. In Quantum mechanics, the
statement can even be extended to linear unitary discrete symmetries. In that case, the corre-
sponding operator has eigenvalues, and the eigenvalues remain the same under time evolution.

Time reversal symmetry Now we will consider another discrete symmetry: Time reversal
symmetry. Under time reversal, the position remains invariant, but the momentum changes the
sign (and hence the direction)

T : (x, p) −→ (x, −p) (4.105)

Then, according to Wigner’s theorem, there must be a symmetry operator that acts on the
Hilbert space. Let the action of Time-reversal transformation be generated by T ; then

T (x, p) T −1 = (x, −p) (4.106)

Time reversal must be an anti-linear, anti-unitary symmetry to be consistent with the canonical commutation relation.

T [x, p] T −1 = −[x, p] =⇒ T (i󰄁)T −1 = −i󰄁 (4.107)

Then, the action on the creation and annihilation operators are given by

T (a, a† )T −1 = (a, a† ) (4.108)

From this, it is straightforward to check that the Hamiltonian in (4.89) is invariant

T H T −1 = H (4.109)

We know that it is not possible to assign an eigenvalue to an anti-linear operator. Hence, the time-reversal operator has no eigenvalues or conserved charges associated with it.

4.4 Uncertainty relations
In this section, we discuss uncertainty relations. Heisenberg’s Uncertainty principle is one of the
most celebrated results in Quantum Mechanics. A common misconception about Heisenberg’s
uncertainty principle is that the uncertainty in a quantum system is due to the disturbance that
one creates during observation. This is not correct. Uncertainty relations in quantum systems are due to the fact that any observable is a self-adjoint operator, and self-adjoint operators do not necessarily commute with each other. Whenever two operators do not commute with each other, there is an uncertainty relation between them.

4.4.1 Cauchy-Schwarz inequality


Consider two vectors A and B in ordinary Euclidean space. Their dot product satisfies

|A · B| = |A||B|| cos θ| ≤ |A||B|    (4.110)

θ is the angle between the two vectors. The equality holds only when θ = 0, π, i.e., A and B are along the same direction.
This is a special case of a more general inequality known as the Cauchy-Schwarz inequality. Consider two vectors φ and χ in an inner product space. The Cauchy-Schwarz inequality states that

(φ, φ)(χ, χ) ≥ |(φ, χ)|²    (4.111)

with equality satisfied if φ and χ are linearly dependent.

Proof We start with two-dimensional real vector space to prove this identity

X = (x1 , x2 ) , Y = (y1 , y2 ) (4.112)

Then we can see that

(X, X)(Y, Y) − (X, Y)² = (x1² + x2²)(y1² + y2²) − (x1y1 + x2y2)² = (x2y1 − x1y2)² ≥ 0    (4.113)

It is easy to generalize it to n-dimensional space

X = (x1 , x2 , · · · , xn ) , Y = (y1 , y2 , · · · , yn ) (4.114)

Let’s compute the same quantity again in n dimensional space

(X, X)(Y, Y) − (X, Y)² = (Σᵢ₌₁ⁿ xᵢ²)(Σᵢ₌₁ⁿ yᵢ²) − (Σᵢ₌₁ⁿ xᵢyᵢ)²    (4.115)

When we expand the expression, the terms of the form xᵢ²yᵢ² cancel. So we are left with

Σ_{i≠j} xᵢ²yⱼ² − 2 Σ_{i<j} (xᵢyᵢ)(xⱼyⱼ) = Σ_{i<j} (xᵢyⱼ − xⱼyᵢ)² ≥ 0    (4.116)

This concludes our proof for any finite-dimensional real vector space.
Now we generalize it to any inner product space. Consider two vectors φ and χ. In order
to prove (4.111), consider the vector φ + c χ (c is a complex number); from the property of the
inner product, it follows that the norm is non-negative

(φ + cχ, φ + cχ) ≥ 0 =⇒ (φ, φ) + |c|²(χ, χ) + c(φ, χ) + c*(χ, φ) ≥ 0    (4.117)



The above inequality is true for any value of c. Let's choose c = −(χ, φ)/(χ, χ). Putting this into the above inequality, we get

(φ, φ)(χ, χ) − |(φ, χ)|² ≥ 0    (4.118)

Note that equality holds only if there is a choice of c such that

(φ + c χ, φ + c χ) = 0 (4.119)

From the property of an inner product, it follows that

φ + cχ = 0 (4.120)

So the equality follows only if φ and χ are linearly dependent.
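The inequality, the equality condition, and the choice of c used in the proof can all be checked numerically for complex vectors (a sketch; the inner product is the standard one on Cⁿ, antilinear in the first argument).

```python
import numpy as np

rng = np.random.default_rng(0)

def ip(u, v):
    """Inner product (u, v), antilinear in the first argument."""
    return np.vdot(u, v)

phi = rng.normal(size=5) + 1j * rng.normal(size=5)
chi = rng.normal(size=5) + 1j * rng.normal(size=5)

# Cauchy-Schwarz: (phi,phi)(chi,chi) >= |(phi,chi)|^2
lhs = ip(phi, phi).real * ip(chi, chi).real
rhs = abs(ip(phi, chi)) ** 2
assert lhs >= rhs

# equality when the vectors are linearly dependent: chi2 = 2i*phi saturates it
chi2 = 2j * phi
assert np.isclose(ip(phi, phi).real * ip(chi2, chi2).real, abs(ip(phi, chi2)) ** 2)

# the optimal c = -(chi,phi)/(chi,chi) used in the proof makes phi + c*chi
# orthogonal to chi
c = -ip(chi, phi) / ip(chi, chi)
assert np.isclose(ip(chi, phi + c * chi), 0)
```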


Later, we will see many interesting applications of this inequality. For example, we will see that
the uncertainty relations in Quantum Mechanics follow from the Cauchy-Schwarz inequality.

Introducing Bra-ket Bra-ket notation was introduced by Dirac, and it is very widely used
in Quantum Mechanics literature. In this notation, a vector is denoted as follows

ψ −→ |ψ〉 (4.121)

|〉 is called a ket. The inner product of two vectors is written as follows

(ψ, φ) −→ 〈ψ|φ〉 (4.122)

〈| is called a bra.
For example, in the Bra-ket notation, the Cauchy-Schwarz inequality is written as

〈ψ|ψ〉〈φ|φ〉 ≥ |〈ψ|φ〉|²    (4.123)

4.4.2 Defining uncertainty


The expectation value of an operator with respect to a state |α〉 is

〈O〉α = 〈α|O|α〉 (4.124)

α is normalized. The expectation value depends on the state. For example, if |α〉 is an eigenstate,
then the expectation value is just the eigenvalue. Now, we define the variance of the operator
σO with respect to the state |α〉.
σO = 〈α|O²|α〉 − (〈α|O|α〉)²    (4.125)

The variance depends on the state α. For example, if we choose |α〉 to be an eigenstate of O,
then the variance is zero.
Given a state |α〉 and operator A, we define the following state

|αA 〉 = A|α〉 − 〈A〉α |α〉 (4.126)

From the above expression, we can find 〈αA |

〈αA| = 〈α|A† − 〈A〉*α 〈α|    (4.127)



We restrict to the case of self-adjoint operators, A† = A (which implies 〈A〉α = 〈A〉*α). We can see that the norm of the state gives the variance of the operator A

〈αA|αA〉 = 〈α|A²|α〉 − (〈α|A|α〉)² = σA    (4.128)

Let's consider an example. Consider the following diagonal operator

A = diag(λ1, λ2)    (4.129)

and the following normalized state

(c1, c2)ᵀ ,  |c1|² + |c2|² = 1    (4.130)

First, we compute 〈A〉

〈A〉 = |c1|²λ1 + |c2|²λ2 = λ2 + |c1|²(λ1 − λ2)    (4.131)

and 〈A²〉

〈A²〉 = |c1|²λ1² + |c2|²λ2² = λ2² + |c1|²(λ1² − λ2²)    (4.132)

Then, the variance is given by

σA = 〈A²〉 − 〈A〉² = |c1|²(1 − |c1|²)(λ1 − λ2)²    (4.133)

It is zero only if |c1|² = 0 or 1, or if λ1 = λ2.
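A quick numerical check of (4.131)-(4.133) (a sketch; the values of λ1, λ2 and c1 are arbitrary choices):

```python
import numpy as np

lam1, lam2 = 2.0, -1.0
A = np.diag([lam1, lam2]).astype(complex)

c1 = 0.6 * np.exp(0.3j)                     # arbitrary phase; only |c1| matters
c2 = np.sqrt(1 - abs(c1) ** 2)
state = np.array([c1, c2])

expA = np.vdot(state, A @ state).real       # <A>
expA2 = np.vdot(state, A @ A @ state).real  # <A^2>
var = expA2 - expA ** 2

# compare with the closed form |c1|^2 (1 - |c1|^2) (lam1 - lam2)^2
assert np.isclose(var, abs(c1) ** 2 * (1 - abs(c1) ** 2) * (lam1 - lam2) ** 2)
```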


Up to this point, we considered only one operator. Let's now consider two operators A and B. We have seen that for a single operator, the variance can be set to zero by choosing an eigenstate. When we have two operators, is it possible to set the variance of both operators to zero? The variance vanishes only for an eigenstate. So, if the variances of both operators are zero, then the state must be a simultaneous eigenstate of both operators. But often a state cannot be an eigenstate of two different operators, especially if the two operators do not commute. We want to understand what happens to the variances of such operators.

4.4.3 Uncertainty relation and its proof


From the two self-adjoint operators, we can construct two states |αA 〉 and |αB 〉. The inner
product of these two states is given by

〈αA |αB 〉 = 〈AB〉 − 〈A〉〈B〉 (4.134)

Now, we compute two more quantities that will be useful later. We start with Im 〈αA|αB〉

Im 〈αA|αB〉 = (1/2i)[〈αA|αB〉 − 〈αB|αA〉] = 〈(1/2i)[A, B]〉    (4.135)

Now we compute Re 〈αA|αB〉. First, we define the anti-commutator of two operators

{A, B} = AB + BA    (4.136)

Then Re 〈αA|αB〉 is given by

Re 〈αA|αB〉 = (1/2)[〈αA|αB〉 + 〈αB|αA〉] = (1/2)〈{A − 〈A〉, B − 〈B〉}〉 = (1/2)〈{A, B}〉 − 〈A〉〈B〉    (4.137)



From (4.137), we define the following quantity, which will be useful later

∆(A, B) ≡ (1/2)〈{A, B}〉 − 〈A〉〈B〉 = Re 〈αA|αB〉    (4.138)

〈αA|αB〉 is a complex number. For any complex number z,

|z|² ≥ (Im z)² =⇒ |〈αA|αB〉|² ≥ (Im 〈αA|αB〉)²    (4.139)

Let's now apply the Cauchy-Schwarz inequality to the two states |αA〉 and |αB〉

〈αA|αA〉〈αB|αB〉 ≥ |〈αA|αB〉|²    (4.140)

We combine this with (4.139) to obtain

〈αA|αA〉〈αB|αB〉 ≥ (Im 〈αA|αB〉)²    (4.141)

This equation implies that the variances of the operators satisfy

σA σB ≥ |〈(1/2i)[A, B]〉|²    (4.142)

This relation is known as the Heisenberg uncertainty relation7. We can see that if the operators do not commute, we cannot set the variances of both operators to zero. The product of the variances is bounded from below by the commutator. Note that the lower bound on the right-hand side depends on the state. In general, the commutator of two operators is another operator. The relation becomes even simpler when the commutator of the two operators is a complex number. Let's say

[A, B] = ℏc    (4.143)

Then we get

σA σB ≥ (ℏ²/4)|c|²    (4.144)

So, if the commutator is a complex number, then the right-hand side becomes independent of the state, and the uncertainty relation has to be satisfied by any state. One such example is x and p; the commutator is given by

[x, p] = iℏ    (4.145)

In this case, the uncertainty relation becomes

σp σx ≥ (ℏ/2)²    (4.146)

No quantum state can have definite momentum and position simultaneously. Please note that this has nothing to do with measurement or disturbing the system from the outside. The uncertainty relations follow from the inherent non-commuting nature of the observables in Quantum Mechanics.

7
This form of the uncertainty relation was given by H. P. Robertson.



Common misconception
Often, the Uncertainty principle is associated with the observation of the system. When
we observe a system, we necessarily affect the system with external probes. Historically,
Heisenberg used this consideration to come up with the first version of the uncertainty
principle.
However, later work made the origin of uncertainty clearer. Uncertainty is a result of the
non-commutative nature of quantum mechanics. It has nothing to do with an experiment. A
system cannot be in a simultaneous eigenstate of two operators which do not commute. This
is the origin of the uncertainty principle. The “uncertainty” is present in every quantum
system irrespective of whether we observe it or not.

Consider a wave function of the following form

ψa(x) = (2a²/π)^{1/4} e^{−a²x²}    (4.147)

For this wave function, the position and momentum uncertainties are

∆x = 1/(2a) ,  ∆p = ℏa    (4.148)

where ∆x = √σx and ∆p = √σp. So, the Gaussian wave function saturates the bound of the Heisenberg Uncertainty principle: ∆x ∆p = ℏ/2.
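One can verify the saturation numerically by computing the variances on a grid (a sketch; 〈p²〉 is evaluated as ℏ² ∫ |ψ′|² dx, and the state is normalized numerically so the prefactor of the Gaussian does not matter):

```python
import numpy as np

hbar, a = 1.0, 0.7
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

psi = np.exp(-a**2 * x**2)
psi /= np.sqrt((psi**2).sum() * dx)          # normalize on the grid

var_x = (x**2 * psi**2).sum() * dx           # <x^2>; <x> = 0 by symmetry
dpsi = np.gradient(psi, dx)
var_p = hbar**2 * (dpsi**2).sum() * dx       # <p^2> = hbar^2 * integral of |psi'|^2

assert np.isclose(np.sqrt(var_x), 1 / (2 * a), atol=1e-4)   # Delta x = 1/(2a)
assert np.isclose(np.sqrt(var_p), hbar * a, atol=1e-3)      # Delta p = hbar*a
assert np.isclose(np.sqrt(var_x * var_p), hbar / 2, atol=1e-3)
```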

Schrödinger's version of the uncertainty principle  Schrödinger (in 1930) wrote down a stronger form of the uncertainty relations

σA σB ≥ |(1/2)〈{A, B}〉 − 〈A〉〈B〉|² + |〈(1/2i)[A, B]〉|²    (4.149)

Let's now consider (4.140) and use (4.135) and (4.138)

〈αA|αA〉〈αB|αB〉 ≥ |〈αA|αB〉|² = (Re 〈αA|αB〉)² + (Im 〈αA|αB〉)²
                             = |∆(A, B)|² + |〈(1/2i)[A, B]〉|²    (4.150)
Any quantum system obeys both the Heisenberg uncertainty principle (HUP) and the Schrödinger uncertainty principle (SUP). SUP is a stronger form of the uncertainty relation: it provides a more precise lower bound. So, as an inequality, SUP is the more refined statement, and HUP simply follows from it.

4.4.4 Examples: Harmonic oscillator


In the case of a Harmonic oscillator, the wave function is either even or odd. So, the probability
is evenly distributed around the origin. As a result,

〈x〉 = 0 = 〈p〉 (4.151)

Let us now compute the expectation values of x² and p²

〈x²〉 = (ℏ/2mω)(2n + 1) ,  〈p²〉 = (ℏmω/2)(2n + 1)    (4.152)

So we get the following relation for the nth state

∆x ∆p = (ℏ/2)(2n + 1) ≥ ℏ/2    (4.153)
We can see that the ground state saturates the bound.
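Using truncated ladder-operator matrices (with m = ω = ℏ = 1 for simplicity; a sketch), one can check (4.152) and (4.153) for a few low-lying states:

```python
import numpy as np

hbar = m = omega = 1.0
d = 12                                       # Fock-space cutoff
a = np.diag(np.sqrt(np.arange(1, d)), 1)
adag = a.conj().T

x = np.sqrt(hbar / (2 * m * omega)) * (a + adag)
p = 1j * np.sqrt(hbar * m * omega / 2) * (adag - a)

for n in range(4):                           # keep n well below the cutoff d
    state = np.eye(d)[n]
    x2 = np.vdot(state, x @ x @ state).real
    p2 = np.vdot(state, p @ p @ state).real
    assert np.isclose(x2, hbar / (2 * m * omega) * (2 * n + 1))
    assert np.isclose(p2, hbar * m * omega / 2 * (2 * n + 1))
    # <x> = <p> = 0, so Delta x * Delta p = sqrt(<x^2><p^2>) = hbar (2n+1)/2
    assert np.isclose(np.sqrt(x2 * p2), hbar * (2 * n + 1) / 2)
```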



4.5 Two-dimensional isotropic oscillators
Consider the Hamiltonian of a two-dimensional Quantum oscillator

H = (M/2)(ẋ1² + ẋ2²) + (k1/2)x1² + (k2/2)x2²    (4.154)
We restrict our attention to the particular case k1 = k2. This is known as the isotropic oscillator, since the frequency in the x1 direction is the same as the frequency in the x2 direction. We can quantize the system in the same way as the one-dimensional harmonic oscillator. Here, we get two sets of creation and annihilation operators. The quantum Hamiltonian is simply

H = ℏω(a†1 a1 + a†2 a2 + 1)    (4.155)
Analogous to the single oscillator, we can define number operators

Ni = a†i ai ,  N = N1 + N2 ,  H = ℏω(N + 1)    (4.156)

The states of this theory are

|n1, n2〉 = (a†1)^{n1} (a†2)^{n2} / √(n1! n2!) |0, 0〉 ,  Ni|n1, n2〉 = ni|n1, n2〉    (4.157)

The ground state of the system is unique. There are two states at the first excited level. At the nth excited level, there are n + 1 states. So, the spectrum is highly degenerate, and the degeneracy increases linearly with energy.
At this point, it is very natural to ask

• Why is the spectrum degenerate?

• Why does the degeneracy increase linearly with energy?

To understand this, note that we can construct four operators of the following form

a†i aj    (4.158)

All these operators commute with the Hamiltonian!

[H, a†i aj] = 0    (4.159)

But these operators do not commute amongst themselves. Actually, one linear combination of these operators gives the Hamiltonian. We can construct three other operators

J3 = (ℏ/2)(a†1 a1 − a†2 a2) ,  J1 = (ℏ/2)(a†1 a2 + a†2 a1) ,  J2 = (iℏ/2)(a†2 a1 − a†1 a2)    (4.160)

Then we leave it to the reader to check that

[Ji, Jj] = iℏ εijk Jk ,  [H, Ji] = 0    (4.161)
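The reader can outsource part of this check to the computer: the sketch below builds a1, a2 as tensor products of truncated ladder matrices and verifies (4.160)-(4.161) on a low-lying state (since the Ji preserve the total occupation number, the commutators are exact on states well below the cutoff).

```python
import numpy as np

hbar, d = 1.0, 6
a = np.diag(np.sqrt(np.arange(1, d)), 1)    # single-mode ladder matrix
I = np.eye(d)
a1, a2 = np.kron(a, I), np.kron(I, a)       # the two oscillator modes

J3 = hbar / 2 * (a1.conj().T @ a1 - a2.conj().T @ a2)
J1 = hbar / 2 * (a1.conj().T @ a2 + a2.conj().T @ a1)
J2 = 1j * hbar / 2 * (a2.conj().T @ a1 - a1.conj().T @ a2)

H = a1.conj().T @ a1 + a2.conj().T @ a2 + np.eye(d * d)  # in units hbar*omega = 1

def comm(A, B):
    return A @ B - B @ A

state = np.kron(np.eye(d)[2], np.eye(d)[1])  # |n1=2, n2=1>, far from the cutoff
assert np.allclose(comm(J1, J2) @ state, 1j * hbar * J3 @ state)
assert np.allclose(comm(J2, J3) @ state, 1j * hbar * J1 @ state)
assert np.allclose(comm(H, J1) @ state, 0)   # J1 commutes with H
```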

4.5.1 A digression
We found that the two-dimensional oscillator has three self-adjoint operators Ji which commute with the Hamiltonian and have non-trivial commutation relations amongst themselves. So here we take a slightly more general perspective. Let's consider any quantum mechanical system which has three self-adjoint operators J1, J2, J3 satisfying

[Ji, Jj] = iℏ εijk Jk    (4.162)



At this point, we are agnostic about the origin of these operators in a particular system. We do not assume that they can be written in terms of creation operators. We only ask: given three such operators in a quantum system, what are the consequences? This algebra
is known as the algebra of angular momentum in Quantum Mechanics. We will come to the
reason behind this name later.
In order to find a representation, we define ladder operators

J± = J1 ± iJ2 (4.163)

They satisfy the following commutation relations

[J+, J−] = 2ℏJ3 ,  [J3, J±] = ±ℏJ±    (4.164)

We also define J² ≡ J1² + J2² + J3². From the above relations, we can check that

[J², Ji] = 0    (4.165)

We want to find the representation of this algebra on a Hilbert space. There are four operators: J² and J1, J2, J3. The first one commutes with everything, but the last three do not commute amongst themselves; as a result, we cannot simultaneously diagonalize all of them. We can diagonalize J² together with one of the Ji, conventionally J3. We write the states in terms of the eigenvalues of J² and J3

J 2 |α, β〉 = α|α, β〉 , J3 |α, β〉 = β|α, β〉 (4.166)

We also choose the states to be normalized

〈α, β|α, β〉 = 1 (4.167)

Let's now consider the state J+|α, β〉

J² J+|α, β〉 = ([J², J+] + J+J²)|α, β〉 = αJ+|α, β〉
J3 J+|α, β〉 = ([J3, J+] + J+J3)|α, β〉 = (β + ℏ)J+|α, β〉    (4.168)

From this, we can conclude that the action of J+ does not change the J² eigenvalue, but it changes the J3 eigenvalue by +ℏ.

J+|α, β〉 ∝ |α, β + ℏ〉    (4.169)

Similarly, we can consider the action of J−, and this gives

J−|α, β〉 ∝ |α, β − ℏ〉    (4.170)

From the expression of J², we get

J² − J3² = J1² + J2² = (1/2)(J+J− + J−J+)    (4.171)

Since the Ji are self-adjoint operators, we get

Ji† = Ji =⇒ J+† = J− ,  J−† = J+    (4.172)

Putting this back in the above equation, we get

J² − J3² = (1/2)(J+J+† + J+†J+) =⇒ 〈α, β|(J² − J3²)|α, β〉 ≥ 0 =⇒ α − β² ≥ 0    (4.173)



This implies that for a given α, there is a maximum value of β; it is not possible to increase the value of β beyond that.
J+ |α, βmax 〉 = 0 (4.174)

Similarly, there must be a minimum value of β.

J− |α, βmin 〉 = 0 (4.175)

Then we use (4.164). We leave it to the reader to check that

(1/2)(J+J− + J−J+) = J−J+ + ℏJ3 = J+J− − ℏJ3    (4.176)

Using this to compute the expectation value 〈α, βmax|J²|α, βmax〉, we obtain

α = 〈α, βmax|J²|α, βmax〉 = 〈α, βmax|(J3² + J−J+ + ℏJ3)|α, βmax〉 = βmax² + ℏβmax    (4.177)

Similarly, for |α, βmin〉 we obtain

α = 〈α, βmin|J²|α, βmin〉 = 〈α, βmin|(J3² + J+J− − ℏJ3)|α, βmin〉 = βmin² − ℏβmin    (4.178)

Combining the two, we obtain

α = βmax(βmax + ℏ) = βmin(βmin − ℏ)    (4.179)

The solution to this equation is


βmax = −βmin ≥ 0 (4.180)

Now there must be a non-negative integer n such that

|α, βmax〉 ∝ (J+)ⁿ|α, βmin〉 =⇒ βmax = βmin + nℏ =⇒ βmax = (n/2)ℏ    (4.181)
Let’s call j = n/2, then
α = ℏ²j(j + 1)    (4.182)

The states of a representation are labelled by the eigenvalues of the operators J², J3

|j, m〉 :  J²|j, m〉 = ℏ²j(j + 1)|j, m〉 ,  J3|j, m〉 = mℏ|j, m〉    (4.183)

j and m can only take the following values

2j ∈ {0, 1, 2, · · · } ,  m ∈ {j, j − 1, · · · , −j + 1, −j}    (4.184)

We choose |j, m〉 to be normalized

〈j, m|j ′ , m′ 〉 = δjj ′ δmm′ (4.185)

then the action of the raising operator is given by

J+|j, m〉 = ℏ√((j − m)(j + m + 1)) |j, m + 1〉    (4.186)

This implies that |j, j〉 is annihilated by J+. Similarly, the action of the lowering operator is given by

J−|j, m〉 = ℏ√((j + m)(j − m + 1)) |j, m − 1〉    (4.187)
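The formulas (4.186)-(4.187) determine explicit matrices for any j; the sketch below builds them and checks the commutation relations and the Casimir eigenvalue ℏ²j(j + 1).

```python
import numpy as np

hbar = 1.0

def spin_matrices(j):
    """J3, J+, J- in the basis |j, m>, ordered m = j, j-1, ..., -j."""
    m = np.arange(j, -j - 1, -1)
    J3 = hbar * np.diag(m)
    # J+|j,m> = hbar*sqrt((j-m)(j+m+1)) |j,m+1>: nonzero on the superdiagonal
    Jp = hbar * np.diag(np.sqrt((j - m[1:]) * (j + m[1:] + 1)), 1)
    return J3, Jp, Jp.conj().T

for j in (0.5, 1.0, 1.5):
    J3, Jp, Jm = spin_matrices(j)
    dim = int(2 * j + 1)
    assert np.allclose(Jp @ Jm - Jm @ Jp, 2 * hbar * J3)          # [J+, J-] = 2*hbar*J3
    assert np.allclose(J3 @ Jp - Jp @ J3, hbar * Jp)              # [J3, J+] = hbar*J+
    Jsq = J3 @ J3 + (Jp @ Jm + Jm @ Jp) / 2
    assert np.allclose(Jsq, hbar**2 * j * (j + 1) * np.eye(dim))  # Casimir
```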



4.5.2 Back to 2-dimensional oscillators
Now we come back to the two-dimensional oscillator.

J3 = (ℏ/2)(a†1 a1 − a†2 a2) ,  J+ = ℏa†1 a2 ,  J− = ℏa†2 a1    (4.188)

We can check that these three operators satisfy the su(2) Lie algebra

[J3, J±] = ±ℏJ± ,  [J+, J−] = 2ℏJ3    (4.189)

The Casimir of the algebra is

J² = J3² + (1/2)(J+J− + J−J+) = ℏ²(N/2)(N/2 + 1)    (4.190)

The creation operators transform in the fundamental representation 2 of SU(2)

[J3, a†1] = (ℏ/2)a†1 ,  [J3, a†2] = −(ℏ/2)a†2    (4.191)

The states in (4.157) are also eigenstates of J3

J3|n1, n2〉 = (ℏ/2)(n1 − n2)|n1, n2〉    (4.192)
The action of J± is given by

J+|n1, n2〉 = ℏ√((n1 + 1)n2) |n1 + 1, n2 − 1〉 ,  J−|n1, n2〉 = ℏ√((n2 + 1)n1) |n1 − 1, n2 + 1〉    (4.193)

Alternatively, we can write the states in the following basis

|ℓ, ℓ3〉 = (a†1)^{ℓ+ℓ3} (a†2)^{ℓ−ℓ3} / √((ℓ + ℓ3)! (ℓ − ℓ3)!) |0, 0〉 ,  2ℓ ∈ {0, 1, 2, · · · } ,  ℓ ± ℓ3 ∈ Z ,  |ℓ3| ≤ ℓ    (4.194)

ℓ is an integer or half-integer.

H|ℓ, ℓ3〉 = ℏω(2ℓ + 1)|ℓ, ℓ3〉 ,  J²|ℓ, ℓ3〉 = ℏ²ℓ(ℓ + 1)|ℓ, ℓ3〉 ,  J3|ℓ, ℓ3〉 = ℏℓ3|ℓ, ℓ3〉    (4.195)

Note that the ground state transforms as a scalar under su(2). The degeneracy at level ℓ is 2ℓ + 1, so this level transforms as the spin-ℓ representation of su(2). We can check that the actions of J+ and J− are the same as those known from angular momentum. If we substitute n1 = ℓ + ℓ3 and n2 = ℓ − ℓ3 in Eqn (4.193), then we get

J+|ℓ, ℓ3〉 = ℏ√((ℓ − ℓ3)(ℓ + ℓ3 + 1)) |ℓ, ℓ3 + 1〉    (4.196)

This is precisely the action of J+ as we know it from the representation theory of angular momentum. The ground state transforms trivially under su(2); all the states with energy ℏω(2ℓ + 1) transform as the spin-ℓ representation 8 .
Schwinger considered this model to compute various quantities in the theory of angular momentum, including the quantum rotation matrices. One can consult On Angular Momentum for the original treatment by Julian Schwinger. It can also be found in Sec. 3.8 of Sakurai's Modern Quantum Mechanics.
8
It is a remarkable fact that every representation of su(2) appears exactly once in the Hilbert space of the two-dimensional isotropic oscillator!



Summary

1. The system has three self-adjoint operators Ji which commute with the Hamiltonian
[H, Ji] = 0 ,  [Ji, Jj] ≠ 0    (4.197)

As a result, the action of these operators does not change the eigenvalue of H

H|E〉 = E|E〉 =⇒ H(Ji |E〉) = E(Ji |E〉) (4.198)

2. These operators do not commute amongst themselves


[Ji, Jj] = iℏ εijk Jk    (4.199)

So, not all of them are simultaneously diagonalizable! At most, one of them can be
diagonalized. Let’s choose it to be J3 . Then, every state in the Hilbert space can be
written in terms of the eigenvalue of H and J3

|E, ℓ〉 : H|E, ℓ〉 = E|E, ℓ〉 , J3 |E, ℓ〉 = ℓ|E, ℓ〉 (4.200)

3. From J1 and J2, we can construct two operators whose commutator with J3 is proportional to the operator itself.

J± = J1 ± iJ2 =⇒ [J3, J±] = ±ℏJ±    (4.201)

4. The action of J± changes the J3 eigenvalue without changing the H eigenvalue.

H(J±|E, ℓ〉) = E(J±|E, ℓ〉) ,  J3(J±|E, ℓ〉) = (ℓ ± ℏ)(J±|E, ℓ〉)    (4.202)

Since J3 is a self-adjoint operator, two states with different J3 eigenvalues are necessarily orthogonal.

〈E, ℓ|(J±|E, ℓ〉) = 0    (4.203)
As a result, the action of J± changes the state without changing energy value9 . This is
the reason behind the degeneracy of the system.

We conclude that the existence of a collection of a) self-adjoint operators, b) which commute


with the Hamiltonian but c) do not commute amongst themselves, necessarily leads to the
degeneracy of the spectrum.
Note that the non-commutativity of the operators is important to conclude the existence
of the degeneracy. The existence of a commuting set of operators does not necessarily imply
degeneracy. For example, consider a system with three operators Oi which commute with each other. In that case, even if we construct operators O± = O1 ± iO2, because of their commuting nature,

O3(O±|E, ℓ〉) = ℓ(O±|E, ℓ〉)    (4.204)

So, we cannot conclude that

〈E, ℓ|(O±|E, ℓ〉) = 0    (4.205)
The non-commutative aspect of the operators is important to conclude the degeneracy of the
spectrum. If the operators are commutative, then there may or may not be a degeneracy of the
spectrum.
9
We are implicitly assuming that J± does not annihilate the state; if it does, we do not get a new state.



4.5.3 How is it related to symmetry?
From the Wigner symmetry representation theorem, we know that whenever there is a continuous symmetry, it is realized by a linear unitary operator on the Hilbert space. We have also seen that considering an infinitesimal transformation gives us a self-adjoint operator which commutes with the Hamiltonian. Is the converse true? I.e., given a self-adjoint operator L that commutes with the Hamiltonian, can we conclude that the system has a continuous symmetry? The answer is yes.

Consider a self-adjoint operator L that commutes with the Hamiltonian. Then, we can construct an operator of the following form

U(θ) = exp[(i/ℏ)Lθ] ,  θ ∈ R    (4.206)

Since L is self-adjoint, we obtain

U†(θ) = exp[−(i/ℏ)Lθ]    (4.207)

Then U is a unitary operator

U†(θ)U(θ) = 1 = U(θ)U†(θ)    (4.208)

Hence, it realizes a continuous symmetry on the Hilbert space.
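This construction can be illustrated numerically: a random Hermitian L exponentiates to a unitary U(θ) (a sketch using the spectral decomposition of L).

```python
import numpy as np

hbar = 1.0
rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
L = (M + M.conj().T) / 2                 # a random self-adjoint (Hermitian) operator

theta = 0.37
evals, V = np.linalg.eigh(L)             # spectral decomposition L = V diag(evals) V†
U = V @ np.diag(np.exp(1j * evals * theta / hbar)) @ V.conj().T

assert np.allclose(U.conj().T @ U, np.eye(4))   # U†U = 1
assert np.allclose(U @ U.conj().T, np.eye(4))   # UU† = 1
```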


This is also true if there are multiple linearly independent self-adjoint operators Ji which
commute with the Hamiltonian H
Ji† = Ji ,  [Ji, H] = 0    (4.209)

We already know that for every Ji, we can construct a unitary operator

Ui(θi) = exp[(i/ℏ)Jiθi] ,  θi ∈ R    (4.210)

Now, the product of any finite number of unitary operators is also a unitary operator

U ({θi }) = U1 (θ1 ) · · · UN (θN ) (4.211)

All such operators form a group. This gives a continuous symmetry group with N parameters.
The number of real independent parameters is called the dimension of the group. Hence, N self-
adjoint linearly independent conserved operators give a continuous symmetry group of dimension
N.

Abelian and non-abelian nature of the symmetry group  We know that the symmetries of a system form a group, and we have seen above that the U operators form a group. If any two elements commute with each other, the group is called an abelian group. Otherwise, it is a non-abelian group. Here, we would like to understand whether the symmetry group is abelian or not.
Let’s start with the case when there is only one conserved self-adjoint operator. Then from
the expression given in (4.206), we can see that

∀ θ ≠ φ ,  U(θ)U(φ) = U(φ)U(θ)    (4.212)

So, in this case, the symmetry group is necessarily abelian.



Now, we move to the case when there is more than one conserved operator. Then from the
expression given in (4.210) we see
∀ i ≠ j ,  Ui(θi)Uj(θj) = Uj(θj)Ui(θi) ⇐⇒ [Ji, Jj] = 0    (4.213)

So, the symmetry group is abelian if and only if the conserved self-adjoint operators commute
with each other. Otherwise, it is a non-abelian group.
From the previous discussion, it follows that the existence of a non-abelian symmetry group implies that the spectrum is degenerate 10 . In fact, the bigger the non-abelian symmetry group, the more degenerate the system is. One can check this statement in a very simple example: the N-dimensional isotropic harmonic oscillator. In this case, the system has an SU(N) non-abelian symmetry, and one can check that the degeneracy increases rapidly with N.
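The growth of the degeneracy can be seen by direct counting: for the N-dimensional isotropic oscillator, the number of states at level n is the number of ways to write n = n1 + · · · + nN, which equals C(n + N − 1, N − 1) (a sketch; the closed form is the standard stars-and-bars count, an addition of mine, not from the text).

```python
from math import comb
from itertools import product

def degeneracy(n, N):
    """Number of occupation-number tuples (n1, ..., nN) with sum n."""
    return comb(n + N - 1, N - 1)

# brute-force check of the closed form for small cases
for N in (2, 3):
    for n in range(5):
        count = sum(1 for t in product(range(n + 1), repeat=N) if sum(t) == n)
        assert count == degeneracy(n, N)

# for N = 2 this reproduces the n + 1 states found above
assert [degeneracy(n, 2) for n in range(4)] == [1, 2, 3, 4]
# at a fixed level, the degeneracy grows rapidly with N
assert degeneracy(4, 2) < degeneracy(4, 3) < degeneracy(4, 10)
```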

4.5.4 Symmetry, theoretical physics and experiments


For the two-dimensional isotropic oscillator, we found how the existence of symmetry is related
to the presence of degeneracy. Symmetry is an extremely important concept in the way physics
is practised today.
We start by discussing the symmetry of objects around us. For example, consider a square.
If we rotate it by 90 degrees around the centre of mass, the final configuration is identical to
the initial configuration. In ancient Greece, one can find the concept of Platonic solids. The Platonic solids are highly symmetrical, and it was postulated that the world is made of the five Platonic solids. Since ancient times, symmetric objects have attracted the human mind.
However, we are discussing symmetries that are much deeper and subtler. We are interested
in the symmetries of the underlying laws that govern the whole world. We start with classical
mechanics.
In classical mechanics, Lagrangian formalism is the most efficient in identifying symmetries.
If the symmetry is continuous, then from Noether’s theorem, we obtain conserved quantities.
Noether’s theorem also works in the other way. There must be an underlying continuous sym-
metry if there is a conserved quantity for the dynamics. All the symmetries of a system form
a group. The conserved quantities are extremely useful in finding solutions to the dynamical
equations. Once we have identified conserved quantities, the Hamiltonian formalism is effective
in finding solutions to the equations of motion. For example, we use a suitable canonical trans-
formation to make the canonical momenta a function of conserved charges only. In that case,
all the canonical momenta are constant of motion, and hence, it is easy to solve the equation of
motion for the canonical coordinates. Now the question is, when can we do this? These criteria
are known as the Liouville-Arnold integrability theorem:
Consider a 2n dimensional Hamiltonian system. If the system has n independent conserved
quantities such that the Poisson bracket for any two of them is zero, then it is possible to find a
solution for the system exactly. The Liouville-Arnold integrability theorem shows the usefulness
and power of symmetries in classical dynamics.
Up to this point, we have written down a Lagrangian, identified its symmetries, and found
the conserved charges. So, a Lagrangian is a fundamental object, and the symmetries follow
from the Lagrangian. However, in the last 150 years, developments in physics have shown that
symmetry is more fundamental, and often, it uniquely determines dynamics. We have discussed
one such example above. Let’s elaborate on that point more. A Lagrangian is not an observable
quantity, but a conserved quantity is observable. Hence, it is easy to find conserved quantities
10
We encourage the reader to think about whether the converse is also true.



in experiments. Then, we can use the Noether theorem to conclude that the Lagrangian has
some underlying continuous symmetry. One can try to construct the most general Lagrangian
that is consistent with the symmetry. Often, the Lagrangian that one can write down is unique;
hence, the symmetry uniquely fixes the dynamics.
In Quantum dynamics, symmetry is even more powerful. In quantum mechanics, the dynamics is governed by a linear differential equation. This is not always true in classical theory.
The linearity of the dynamics has a unique advantage. The solutions of a linear differential
equation form a vector space. We have already seen that symmetries of a differential equation
form a group. If the differential equation is linear, then we can decompose the space of solutions
(which is a vector space) as a representation of the symmetry group. This is why representation
theory is crucial in Quantum theory, and symmetry is more powerful in Quantum theory.
Up to this point, we have seen the important consequences of the existence of symmetry. A large enough symmetry allows us to solve the system completely. So, when building a new theory, we could impose a large enough symmetry to solve it exactly. This would simplify our lives
drastically. However, nature and physics are way more subtle than this. A bigger symmetry
group comes with a larger degeneracy of the spectrum; the existence of a bigger symmetry group
comes with an important experimental signature. Unless experiments observe those signatures,
it does not matter how beautiful the theory with a big symmetry group is; it is practically
irrelevant.
So, the challenge for a practising physicist is to impose a symmetry group large enough to make the system as solvable as possible, but not so large that current experiments rule it out.
In QMech II, we will see that this is not the whole story. We will see how “breaking of
a symmetry” leads to important consequences and how the existence of life and civilization is
related to the lack of a symmetry.

More exercises from this chapter


1. Expectation value of an operator
The expectation value of an operator A in Quantum Mechanics is defined as
〈A〉 = ∫ ψ*(x) A ψ(x) dx    (4.214)

(a) Consider a quantum particle in a one-dimensional box.


Find the expectation value of the following operator with respect to the state with
quantum number n.
H , x , x² , p , p²    (4.215)

(b) Let us now consider a one-dimensional Harmonic oscillator.


Compute the expectation value of the following operators with respect to the state
with Quantum number n

H , x , x² , p , p²    (4.216)

You can use creation and annihilation operators.

We also define the following quantity

(∆A)² = 〈A²〉 − 〈A〉²    (4.217)

IISERB(2024) QMech I - 112


In the above two cases, compute
∆p , ∆x    (4.218)
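As a cross-check for part (a), here is a short numerical sketch (not part of the original exercise; the choices ℏ = L = 1, n = 3, and the `expect` helper are ours) that reproduces 〈x〉 = L/2 and 〈x²〉 = L²(1/3 − 1/(2n²π²)) for ψₙ(x) = √(2/L) sin(nπx/L):

```python
import numpy as np

# Box eigenstate on [0, L] (hbar = L = 1, n = 3 are illustrative choices).
hbar, L, n = 1.0, 1.0, 3
x = np.linspace(0.0, L, 200001)
psi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def expect(op_psi):
    """<psi| O |psi> by the trapezoidal rule, given the array O psi."""
    return np.trapz(psi * op_psi, x)

x_mean = expect(x * psi)              # exact value: L/2
x2_mean = expect(x**2 * psi)          # exact value: L^2 (1/3 - 1/(2 n^2 pi^2))
p2_mean = (n * np.pi * hbar / L)**2   # psi is an eigenstate of p^2, so this is exact

print(x_mean, x2_mean, p2_mean)
```

Since 〈p〉 vanishes for these real standing waves, ∆p follows directly from 〈p²〉.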

2. Deriving Ehrenfest’s theorem


Use the Schrödinger equation

iℏ ∂ψ/∂t = Hψ    (4.219)
and the expectation value of an operator to show that
d〈A〉/dt = (1/iℏ) 〈[A, H]〉 + 〈∂A/∂t〉    (4.220)
Please note that in order to derive this, you do not need to use any particular form of the
Hamiltonian. Let’s consider a particular form of the Hamiltonian

H = p²/(2m) + V(x)    (4.221)
Compute the following commutators

[x, H] , [p, H] (4.222)

Using these results, show that

d〈x〉/dt = 〈p〉/m ,  d〈p〉/dt = −〈dV/dx〉    (4.223)
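The two commutators in (4.222) can be checked symbolically by acting on an arbitrary test function. The sketch below (our own setup: SymPy symbols, a generic f(x) and V(x); the notes do not prescribe this) verifies [x, H] = (iℏ/m) p and [p, H] = −iℏ V′(x):

```python
import sympy as sp

# Symbols and a generic test function / potential (our own choices).
x, m, hbar = sp.symbols('x m hbar', positive=True)
f = sp.Function('f')(x)
V = sp.Function('V')(x)

def p(g):
    # momentum operator p = -i hbar d/dx
    return -sp.I * hbar * sp.diff(g, x)

def H(g):
    # Hamiltonian H = p^2 / 2m + V(x)
    return p(p(g)) / (2 * m) + V * g

comm_xH = sp.expand(x * H(f) - H(x * f))   # should equal (i hbar / m) p f
comm_pH = sp.expand(p(H(f)) - H(p(f)))     # should equal -i hbar V'(x) f

print(sp.simplify(comm_xH - sp.I * hbar / m * p(f)))            # 0
print(sp.simplify(comm_pH + sp.I * hbar * sp.diff(V, x) * f))   # 0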

3. Coherent state in Harmonic oscillator


In the case of a Quantum Harmonic oscillator, consider states which satisfy the following
relation
a|λ〉 = λ|λ〉 (4.224)
λ is a complex number.

(a) Find the expression for |λ〉 in terms of the energy eigenstates
(b) Compute the following inner product

〈λ1 |λ2 〉 (4.225)

From this, find the normalisation of the state.


(c) Compute the following quantities

〈λ|x|λ〉 , 〈λ|x2 |λ〉 , 〈λ|p|λ〉 , 〈λ|p2 |λ〉 (4.226)

(d) From the above results, compute


∆p , ∆x (4.227)

4. Uncertainty principle and Harmonic oscillator

(a) Consider energy eigenstates of the Harmonic oscillator and show that they satisfy the
Heisenberg uncertainty principle.
(b) Show that any Harmonic oscillator state satisfies the uncertainty principle.
(c) Please repeat the previous two steps to demonstrate that the Schrödinger uncertainty
principle is also satisfied.



(d) Does any of these states saturate the bound?
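A quick numerical sketch for parts (a) and (d) (our own setup: truncated ladder-operator matrices with cutoff N = 40 and ℏ = m = ω = 1): the eigenstate |n〉 gives ∆x ∆p = ℏ(n + 1/2), so only the ground state saturates the bound.

```python
import numpy as np

# Truncated ladder operator and the x, p matrices built from it.
N = 40
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
x = (a + a.conj().T) / np.sqrt(2)
p = (a - a.conj().T) / (1j * np.sqrt(2))

for n in range(4):
    v = np.zeros(N, dtype=complex)
    v[n] = 1.0                              # the eigenstate |n>
    ev = lambda op: (v.conj() @ op @ v).real
    dx = np.sqrt(ev(x @ x) - ev(x)**2)
    dp = np.sqrt(ev(p @ p) - ev(p)**2)
    print(n, dx * dp)   # hbar (n + 1/2): only n = 0 saturates hbar/2
```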

5. Uncertainty principle and Particle in a box

(a) Consider the energy eigenstates of the system and show that they satisfy the Heisenberg
uncertainty principle.
(b) Show that any state of the Particle in a box satisfies the uncertainty principle.
(c) Please repeat the previous two steps to demonstrate that the Schrödinger uncertainty
principle is also satisfied.
(d) Does any of these states saturate the bound?
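For parts (a) and (d), the closed-form moments of the box eigenstates already settle the question; the sketch below (ℏ = L = 1 is our illustrative choice) evaluates ∆x ∆p = nπℏ √(1/12 − 1/(2n²π²)) and shows the bound holds but is never saturated:

```python
import numpy as np

# Moments of psi_n: <x> = L/2, <x^2> = L^2(1/3 - 1/(2 n^2 pi^2)),
# <p> = 0, <p^2> = (n pi hbar / L)^2.
hbar, L = 1.0, 1.0
for n in range(1, 5):
    dx = L * np.sqrt(1.0 / 12 - 1.0 / (2 * n**2 * np.pi**2))
    dp = n * np.pi * hbar / L
    print(n, dx * dp, dx * dp >= hbar / 2)   # True for every n, strictly above hbar/2
```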

6. Consider the following Hamiltonian


H = (1/2) Pi Pi − V(R) ,  R = √(Xi Xi)    (4.228)
Here we have followed the Einstein summation convention; i takes values from 1 to n. Pi is
given by

Pi = −i ∂i ≡ −i ∂/∂Xi    (4.229)

Consider the following set of operators

Lij = Xi Pj − Xj Pi ,  L² = Lij Lij    (4.230)

Then compute

[H, Lij] , [Lij, Lkℓ] , [Lij, L²]    (4.231)
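One of these commutators can be checked symbolically by acting on a generic function of (x1, x2, x3). The sketch below (our own construction, with ℏ = 1) works with the real generators D_ij = x_i ∂_j − x_j ∂_i, so that Lij = −i D_ij, and verifies [D_12, D_23] = D_13, which is equivalent to [L12, L23] = −i L13:

```python
import sympy as sp

# Generic test function of three coordinates (our own choice of check).
x1, x2, x3 = sp.symbols('x1 x2 x3')
f = sp.Function('f')(x1, x2, x3)

def D(xi, xj, g):
    # real rotation generator D_ij g = x_i d_j g - x_j d_i g
    return xi * sp.diff(g, xj) - xj * sp.diff(g, xi)

comm = D(x1, x2, D(x2, x3, f)) - D(x2, x3, D(x1, x2, f))
print(sp.simplify(comm - D(x1, x3, f)))   # 0, i.e. [D_12, D_23] = D_13
```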

7. Consider N independent Harmonic oscillators with the same frequency. The creation and
annihilation operators satisfy

[ai, a†j] = δij , [ai, aj] = 0 = [a†i, a†j]    (4.232)

Let’s consider the following operators


H = ω a†i ai + (n/2) ω ,  Lij = ai a†j − aj a†i ,  Fij = ai a†j + aj a†i
L² = Lij Lij ,  F² = Fij Fij    (4.233)

Compute the following commutators

[H, Lij] , [Lij, Lkℓ] , [Lij, L²]    (4.234)


[H, Fij] , [Fij, Fkℓ] , [Lij, F²]    (4.235)
[Lij, Fkℓ] , [Fij, F²]    (4.236)

Find the energy eigenstates of this model and find the degeneracy of the states with energy
ω(N + n/2).
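The degeneracy of the level E = ω(N + n/2) is the number of occupation tuples (k1, …, kn) with k1 + ⋯ + kn = N, i.e. the stars-and-bars count C(N + n − 1, n − 1). The sketch below (our own cross-check, not part of the exercise) compares a brute-force enumeration against that formula:

```python
from math import comb
from itertools import product

def degeneracy_brute(N, n):
    # Count occupation tuples (k_1, ..., k_n) with k_1 + ... + k_n = N.
    return sum(1 for ks in product(range(N + 1), repeat=n) if sum(ks) == N)

for N, n in [(3, 2), (4, 3), (5, 4)]:
    print(N, n, degeneracy_brute(N, n), comb(N + n - 1, n - 1))
```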

8. Cⁿ is the n-dimensional complex vector space. Show that Cⁿ ⊗ Cᵐ ≃ Cⁿᵐ.
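A concrete illustration of this isomorphism (our own sketch; the exercise asks for a proof, which this does not replace) is the Kronecker product: it sends the product basis eᵢ ⊗ fⱼ of Cⁿ ⊗ Cᵐ to the standard basis of Cⁿᵐ, and inner products of product vectors multiply accordingly:

```python
import numpy as np

# Random complex vectors in C^3 and C^4 (seeded for reproducibility).
rng = np.random.default_rng(0)
n, m = 3, 4
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w = rng.standard_normal(m) + 1j * rng.standard_normal(m)

t = np.kron(v, w)                                      # v ⊗ w as a vector in C^{nm}
print(t.shape)                                         # (12,)
print(np.vdot(t, t) - np.vdot(v, v) * np.vdot(w, w))   # ~ 0: norms multiply
```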

