Quantum Final
Arnab Rudra
Department of Physics
“I think I can safely say that nobody understands quantum mechanics.”
- Richard Feynman
Preface Quantum Mechanics is one of the foundational pillars of Physics. This is a note
for the first course in quantum mechanics. We welcome questions, comments, criticisms and
suggestions. They can be emailed to rudra@iiserb.ac.in or can be submitted here.
Acknowledgment The author is grateful to Mahesh KNB, Manoj Mandal and especially
Rahul Shaw for various inputs on the content of the note. The author would like to thank all
the students of PHY303N(2022) and PHY303N(2024) for the questions during the class. This
provided a constant source of encouragement to improve the note.
Pre-requisites The course requires one prior course on mechanics and one on mathematical methods.
IISERB(2024) QMech I - 2
Resources: Textbooks There are many textbooks on this subject.
3. MIT OpenCourseWare: Quantum Physics I, Quantum Physics II, Quantum Physics III
If we are given such a box, the most natural questions that one would ask are:
• What is the energy stored in the box?
• ...
We now have a list of questions. Whenever we want to study a system, we have many questions.
In such a scenario, it is important to identify first the following things:
• Which set of questions are more important than others? If we can answer some of the
questions, the answers to the other questions can be derived from that. What is the
complete set of questions that is enough to answer all the properties of a system?
• What tools should we use to study those questions?
In order to address the above issues, it is often useful to identify the relevant physical parameters
and how large those parameters are. In physics, it only makes sense to talk about the largeness
or smallness of a dimensionless quantity. The value of a dimension-full parameter depends on the
choice of units and is, hence, not very useful. For example, let's say the box size is 100 millimeters
or, equivalently, 0.1 meters. We can see the size is "large" or "small," depending on the units.
This is because the size of the box is a dimension-full parameter. But let’s say we measure the
ratio of the length of the box to the length of the room it is kept in. That ratio is always the same,
irrespective of what scale we use. In physics, we are mostly interested in these dimensionless
quantities; they play an important role in identifying the relevant tool to analyze the system.
At this point, one can ask whether these relevant dimensionless quantities depend on the system
at hand or whether some general procedure determines them. The answer to this will become
clear soon. Let’s identify the dimensionless parameters of the system given above.
1. The number of gas molecules, N .
2. The ratio of the mass of a gas molecule to the mass of the container, g1 .
3. The length of the box in units of the Compton wavelength of the particle, g2 .
We choose what tools we should use depending on how large these numbers are. For example,
if g1 ≃ 1, the box will recoil whenever a gas molecule hits it. For the moment, we focus on the
regime when g1 ≃ 0. Now, depending on the other two dimensionless quantities, we determine
which tools to use. For example:
                 N small                N large
g2 large    Classical Mechanics    Classical Stat Mech
g2 small    Quantum Mechanics      Quantum Stat Mech
Depending on the situation, some of these can even be irrelevant. For example, let’s say we are
interested in what happens to the box of gas molecules if it gets hit by a ball of similar mass.
In that context, N is irrelevant.
Depending on the question, we need to determine which parameters are the most relevant and
which to ignore. In the above, we mentioned that the parameters were large and small. These
are, again, not very precise statements. The number 100 can be large or small, depending on
the person. So it is important to be precise about them. Let's say g is a dimensionless parameter
of the system (for example, N , g1 and g2 in the above example). We are mostly interested in
three regimes
g→0 , g→∞ , g≃1 (1.1)
Why are these three regimes important? We have mentioned before that physics is an art of
approximation. This means that we compute all observables as a Taylor expansion in terms of
dimensionless parameters (this technique is known as perturbation theory; we will learn it in
the context of quantum mechanics in this course). In the first case, we can Taylor expand in g
around the point g = 0; in the second case, we Taylor expand in 1/g around the point g = ∞. In
the third case, the technique of Taylor expansion breaks down. We need a different method to
study the system. For practical purposes, we restrict to the first few terms in the Taylor series
because the higher-order terms in the Taylor expansion become smaller than the experimental
sensitivity. Hence, we cannot measure the effect of those terms. However, the higher order terms
in the Taylor expansion are significant in determining the correctness of a theory. Experimental
advances measure a theory to higher accuracy and hence can experimentally determine the effect
of the higher order terms in the Taylor expansion. It often happens that the predictions of two
theories agree on the first one or two terms of the Taylor expansion but mismatch at higher
order.
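The truncation logic above can be made concrete with a toy observable; here f(g) = √(1 + g) is an arbitrary illustration of ours, not a quantity from the note:

```python
import math

# Compare the exact value of a toy observable f(g) = sqrt(1 + g) with its
# Taylor series around g = 0, truncated at different orders.
def f_exact(g):
    return math.sqrt(1.0 + g)

def f_taylor(g, order):
    # Taylor coefficients of sqrt(1 + g): 1 + g/2 - g^2/8 + g^3/16 - ...
    coeffs = [1.0, 0.5, -0.125, 0.0625]
    return sum(coeffs[k] * g**k for k in range(order + 1))

g = 0.01
err2 = abs(f_exact(g) - f_taylor(g, 2))  # keep terms up to g^2
err3 = abs(f_exact(g) - f_taylor(g, 3))  # keep terms up to g^3
```

Each extra order suppresses the error by roughly another power of g, so beyond some order the correction drops below any given experimental sensitivity.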
We see that the dimensionless parameters can depend on the context/system or can be
determined from the fundamental constant. We can see that N and g1 follow from the system’s
consideration (i.e. N and g1 can be determined solely from the system’s information). In
contrast, the third one (g2 ) depends on fundamental constants of nature. In general, one should
first identify all the dimension-full parameters of the system and form dimensionless quantities
out of them and the fundamental constants of nature.
This approach to addressing a problem in physics depending on dimensionless parameters
is known as the Wilsonian approach . This started from the revolutionary work of Kenneth G.
Wilson, for which he was awarded the Nobel prize in 1982. The Wilsonian approach is very
different from the reductionist approach put forward by the work of Dalton, Rutherford, Bohr et al. In
the reductionist approach, one key motivation of physics was to find the fundamental description
of nature and to connect every phenomenon in terms of the fundamental constituents. In the
Wilsonian paradigm, we try to find an effective description of a phenomenon depending on
dimensionless quantities.
key principles of physics can be learnt. Any physical theory relies on certain fundamental
assumptions or postulates; these postulates cannot be proven. They are like Euclidean postulates
in geometry. The only verification of these postulates comes from experiments. While studying
a theory, it is important to distinguish between the postulates of a theory and the mathematical
deduction from those postulates. Physics research is about questioning those postulates and
altering them to encompass a wider range of phenomena. For example, Newton provided three
laws of motion describing all large objects we can see around us. But there is no mathematical
derivation of these laws. Their validity relies entirely on experimental verification.
Similarly, the discipline of quantum mechanics relies on some fundamental postulates. The
postulates of quantum mechanics depend on a particular choice of formulation. There are
three commonly used formalisms of Quantum Mechanics. These three formulations of
quantum mechanics are known as
1. Wave Mechanics: This approach relies on the Schrödinger equation. This is analogous to the
Newtonian formulation of Classical mechanics.
2. Matrix Mechanics: This approach relies on the Canonical commutation relations, which
Heisenberg first wrote down. This is analogous to the Hamiltonian formulation of Classical
mechanics
3. Path integral: This approach was developed by Richard Feynman. It is analogous to the
Lagrangian formulation of classical mechanics.
Other formulations of quantum mechanics also exist (see here). All these formulations have
different assumptions/postulates. But they agree on all current experiments. For a wide class of
systems, we can also derive two other formulations starting from any one of these formulations.
f : R −→ C (1.2)
We also assume that the function can be differentiated any number of times; it is an infinitely
differentiable (“smooth”) function. Consider the set of all such smooth functions from R to C;
let’s call that set V. V has the following property
1. addition
If f1 (x) and f2 (x) belong to the set V then f1 (x) + f2 (x) also belongs to V
From the definition, it is clear that the addition operation satisfies the following properties
(a) Commutative
f1 (x) + f2 (x) = f2 (x) + f1 (x) (1.4)
The order in which elements appear does not matter.
(b) Associative
(f1 (x) + f2 (x)) + f3 (x) = f1 (x) + (f2 (x) + f3 (x)) (1.5)
(c) Identity
f1 (x) + 0 = f1 (x) (1.6)
The function 0 belongs to V.
(d) Inverse
f1 (x) + (−f1 (x)) = 0 (1.7)
2. scalar multiplication
If c is a complex number (independent of x) and f (x) belongs to V, then c f (x) also belongs
to V.
(c) Distributive
c(f1 (x) + f2 (x)) = cf1 (x) + cf2 (x) (1.10)
These are the same properties that are required to label a set as vector space. For example,
these properties are also satisfied by coordinate vectors that you have studied in your classical
mechanics course. V is called a vector space.
(c1 , · · · , cn ) (1.11)
We can check that it also satisfies all the properties listed above. This collection is known as
n-dimensional Complex vector space Cn . We usually denote the elements of Cn as n-dimensional
column matrix
(c1 , c2 , · · · , cn )^T (1.12)
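The vector-space properties of Cn listed above can be checked numerically; a small sketch (the specific vectors and scalar are arbitrary choices of ours):

```python
import numpy as np

# Elements of C^3 as complex numpy arrays; check commutativity, identity,
# inverse, and distributivity of scalar multiplication.
f1 = np.array([1 + 2j, -1j, 0.5])
f2 = np.array([2.0, 3 - 1j, 1j])
c = 2 - 3j
zero = np.zeros(3, dtype=complex)

commutative = np.allclose(f1 + f2, f2 + f1)
identity = np.allclose(f1 + zero, f1)
inverse = np.allclose(f1 + (-f1), zero)
distributive = np.allclose(c * (f1 + f2), c * f1 + c * f2)
```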
(c1 , · · · , cn , · · · ) (1.13)
This is not a very familiar example. So, we give a few explicit examples first:
(1, −1, 1, −1, · · · ) , (1, 1/2, 1/3, · · · , 1/n, · · · ) (1.14)
This collection also forms a vector space. For future reference, we call it Vseq . This space has
no matrix representation.
Footnote 2: We do not require the sequences to be convergent sequences.
1.3.4 space of square summable infinite sequence
Let’s consider the previous example with one extra condition
∑_{n=1}^{∞} |cn |^2 = Finite (1.15)
This criterion is known as the square summability. We denote the space as ℓ2 (N).
One can show that ℓ2 (N) is also a vector space. Note that every element of ℓ2 (N) is an
element of Vseq but the converse is not true. Homework
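The square-summability criterion can be checked numerically for the two sequences in (1.14); a small sketch:

```python
# Partial sums of sum_n |c_n|^2 for the two sequences in (1.14):
# c_n = (-1)^(n+1) has |c_n|^2 = 1, so the partial sums grow without bound;
# c_n = 1/n has sum |c_n|^2 converging (to pi^2/6), so only the second
# sequence defines an element of l^2(N).
N = 100000
s_alt = sum(1.0 for n in range(1, N + 1))         # sum of |(-1)^(n+1)|^2 = N
s_inv = sum(1.0 / n**2 for n in range(1, N + 1))  # stays below pi^2/6 ~ 1.645
```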
1. addition
If ψ1 and ψ2 belong to the set V then ψ1 + ψ2 also belongs to V
ψ1 , ψ2 ∈ V =⇒ ψ1 + ψ2 ∈ V (1.17)
From the definition, it is clear that the addition operation satisfies the following properties
(a) Commutative
ψ1 + ψ2 = ψ2 + ψ1 (1.18)
The order in which elements appear does not matter.
(b) Associative
(ψ1 + ψ2 ) + ψ3 = ψ1 + (ψ2 + ψ3 ) (1.19)
(c) Identity
ψ+0=ψ (1.20)
2. scalar multiplication
If c is a complex number and ψ belongs to V, then c ψ also belongs to V. The complex
number is usually known as a “scalar.”
1·ψ =ψ (1.22)
(c) Compatibility between field multiplication and scalar multiplication: c1 (c2 ψ) = (c1 c2 ) ψ
These properties are also satisfied by vectors that you have studied in your classical mechanics
course. In this case, V is also called a vector space. In this case, we are dealing with complex
vector space.
We have already given a few examples above. Let’s consider various examples here:
2. Consider the collection of all the solutions of a linear differential equation subjected to a
particular condition. This collection also forms a complex vector space. Homework
1. In the case of Cn , we can find at most n vectors which are linearly independent. So, the
dimension of the space is n. So, it is a finite-dimensional vector space.
The only solution to this equation is cn = 0 for all values of n. So, the number of linearly
independent vectors is greater than any finite quantity. So, this is an infinite-dimensional
vector space.
3. ℓ2 is also an infinite dimensional vector space.
4. Consider a vector Vn such that the only non-zero element is in the n-th position. All other
entries are zero. In this case, we consider the equation
∑_{n=M}^{N} cn Vn = 0 (1.30)
Again, the only solution is that cn = 0. So, this space is again an infinite-dimensional
vector space.
Finite-dimensional vector spaces are easier to understand. One can show that an n-dimensional
complex vector space is essentially the same as Cn . As a result, the whole machinery of
matrices can be used for any finite-dimensional vector space. For a physicist, n-dimensional
vector space and n-column matrix are the same. As a result, many of our intuitions of vector
spaces are based on matrices. In quantum mechanics, we often encounter vector spaces which are
infinite-dimensional. For example, consider the collection of the wave function for a particle in
a box. They form a vector space. But it is an infinite dimensional vector space.
An infinite-dimensional vector cannot be written in terms of matrices; there is nothing like
infinite-dimensional matrices. It is not even correct to think about infinite-dimensional vector
spaces as a limit of finite-dimensional vector spaces. Many features of finite-dimensional vector
spaces become subtle in the case of infinite dimensions. A step toward understanding Quantum
Mechanics is to understand those subtleties.
So, in our discussion, we will emphasize various points of how infinite-dimensional vector spaces
are distinct from finite-dimensional ones. We will not go into a lot of mathematical
details; one can check, for example, the book by Brian Hall.
Span and Basis A set of vectors (ϕ1 , · · · , ϕn ) spans a linear vector space if every vector in
the vector space can be written as a linear sum of the ϕi s.
A set of vectors is called a basis if it spans the vector space.
Examples
1. In the case of Cn , there are many choices for the basis vector. A very convenient choice is
2. Consider the collection of periodic functions. Any periodic function can be written as a
linear sum of χn (θ)
f (θ + 2π) = f (θ) =⇒ f (θ) = ∑_{n=−∞}^{∞} cn χn (θ) (1.33)
Subtleties with Infinite dimensional vector space Here, we would like to elaborate on
a subtle point on Infinite dimensional vector space. In a finite-dimensional vector space (say,
dim n), if we find n linear independent vectors, then we know those n vectors also form a basis.
However, in an infinite-dimensional vector space, infinitely many linearly independent vectors
may not form a complete basis. For example, in the collection of periodic functions, e^{2inθ}
(n ∈ Z) are infinitely many linearly independent vectors. However, they do not form a complete
basis.
Infinite sets have many other subtleties. We encourage the readers to see these two videos:
video 1, video 2. There is also a very beautiful book on this theme.
f : V −→ V (1.34)
In general, we can consider a map from one vector space to a different vector space. But for
our purpose, we restrict to the functions from a vector space to itself. Let’s start with a few
examples
f : Cn −→ Cn (1.35)
(a)
(c1 , · · · , cn ) −→ (0, · · · , 0) (1.36)
(b)
(c1 , · · · , cn ) −→ (c1^2 , · · · , cn^2 ) (1.37)
(c)
(c1 , · · · , cn ) −→ (c1 , 2c2 , · · · , n cn ) (1.38)
(d)
(c1 , · · · , cn ) −→ (cn , c1 , · · · , cn−1 ) (1.39)
(c)
(c1 , · · · , cn , · · · ) −→ (c1 , 2c2 , · · · , n cn , · · · ) (1.42)
We leave it to the readers to check that the first two maps are also maps from ℓ2 (N) to
itself, whereas the third one is NOT a map from ℓ2 (N) to itself. Homework
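One can see numerically why the third map fails to send ℓ2 (N) to itself; a sketch using the square-summable sequence cn = 1/n (our choice):

```python
# The map (c_1, c_2, ...) -> (c_1, 2 c_2, ..., n c_n, ...) applied to the
# square-summable sequence c_n = 1/n gives n * c_n = 1 for every n, which
# is NOT square summable.
N = 10000
before = sum((1.0 / n)**2 for n in range(1, N + 1))       # converges (< pi^2/6)
after = sum((n * (1.0 / n))**2 for n in range(1, N + 1))  # equals N, diverges
```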
1.5.1 Linear operators
We have already discussed that any element of Cn can be written as a column vector. Now we
can check that the examples in (1.36), (1.38) and (1.39) can be written as matrices, whereas the
example in (1.37) cannot. This brings us to the concept of a linear operator. Consider
a map f
f : V −→ V (1.43)
such that f (c1 ψ1 + c2 ψ2 ) = c1 f (ψ1 ) + c2 f (ψ2 ) for all vectors ψ1 , ψ2 and all complex numbers c1 , c2 .
Matrices are examples of linear operators on finite dimensional vector spaces. One can check
that (1.37) is not a linear operator. (1.40), (1.41) and (1.42) are examples of linear operators in
infinite-dimensional vector spaces; however, they are not matrices!
Consider the collection of all linear operators from Cn to itself. Homework: Show that it is a
complex vector space.
O : H −→ H (1.45)
We use the following convention: ket vectors must always be put on the right side of an
operator.
O|v1 〉 (1.46)
Given a linear operator O, if we can find a vector |v〉 and a complex number λ such that O|v〉 = λ|v〉, then |v〉 is called an eigenvector of O with eigenvalue λ.
In this case, the operator has n different eigenvectors with eigenvalues 1, 2, · · · , n. For
example, the vector (1, 0, · · · , 0) has eigenvalue 1, and the vector (0, 1, 0, · · · , 0) has
eigenvalue 2.
• In a finite-dimensional vector space, for any linear operator, we can always find an eigenvector
and an eigenvalue.
• In infinite dimensions, a linear operator may not have an eigenvector or an eigenvalue. For
example, consider the right shift operator in (1.40) or the left shift operator in (1.41).
1.6 Inner product and inner product space
A complex inner product is a function from V × V to the complex numbers C which satisfies certain
properties. We denote the inner product of two vectors ψ and φ as
ψ, φ −→ (ψ, φ) (1.50)
Note that we demanded linearity only in the second argument. There are a few corollaries of
the four properties
• The inner product defines a real (positive definite) norm. From the second property, it
follows that (φ, φ) is real,
(φ, φ) = (φ, φ)^* (1.52)
• The inner product of any vector with the zero vector vanishes,
(0, χ) = 0 (1.53)
A Complex inner product space (or pre-Hilbert space) is a complex vector space V equipped
with a complex inner product.
Let’s try to write an inner product on this vector space. The inner product is some function
(U, V ) = f ({ui }, {vi }) (1.55)
But this is not allowed since the inner product between any vector and 0 vector is zero. So, we
make another guess
An inner product has to be linear in the second argument. So β1 = 0. In fact, the linearity
condition rules out any higher powers of vi s. So we are left with the choice
We consider U and V such that the first component is zero, but the other components are not.
In this case, the above inner product is zero even though the vectors are not 0. So again, this is
not an allowed inner product even though it obeys linearity in the second argument. We make
minimal modifications to the proposal so that it does not spoil the linearity.
This relation obeys conjugate symmetry if αi are real. And it is positive definite only if all the
αi are positive. We arrive at a function which obeys all the properties of inner products
(U, V ) = ∑_{i=1}^{N} αi ui^* vi , αi ∈ R , αi > 0 (1.60)
We can check that it satisfies all the properties of an inner product. Homework
Exercise: Let’s modify the above definition slightly; we introduce a function f (θ)
(φ, ψ) = ∫_0^{2π} dθ f (θ) φ^*(θ) ψ(θ) (1.62)
f (θ) is a periodic function. What are the conditions on f (θ) so that this is a good inner product?
Homework
Inner product and angle between two vectors In the case of 3-dimensional vectors, we
know the angle between two vectors is given by
cos θ = (A · B) / (|A||B|) (1.63)
In an inner product space, this notion can be generalized for any two vectors (even if they are
infinite-dimensional vectors)
cos θ = (U, V ) / √((U, U )(V, V )) (1.64)
θ is the angle between the two vectors. The magnitude of this quantity is at most 1, with equality
only when θ = 0, π, i.e. when the two vectors are along the same line.
This is a special case of a more general inequality known as the Cauchy-Schwarz inequality.
Consider two vectors φ and χ in an inner product space. The Cauchy-Schwarz inequality states
that
(φ, φ)(χ, χ) ≥ |(φ, χ)|2 (1.66)
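The Cauchy-Schwarz inequality can be spot-checked numerically for random complex vectors; a sketch with the inner product (u, v) = ∑ u_i^* v_i (the vector dimension and sample size are arbitrary):

```python
import numpy as np

# Numerical check of (phi, phi)(chi, chi) >= |(phi, chi)|^2 for random
# complex vectors.
rng = np.random.default_rng(0)

def inner(u, v):
    return np.vdot(u, v)  # np.vdot conjugates its first argument

ok = True
for _ in range(100):
    phi = rng.normal(size=5) + 1j * rng.normal(size=5)
    chi = rng.normal(size=5) + 1j * rng.normal(size=5)
    lhs = inner(phi, phi).real * inner(chi, chi).real
    rhs = abs(inner(phi, chi))**2
    ok = ok and (lhs >= rhs - 1e-12)
```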
Proof We start with two-dimensional real vector space to prove this identity
(X, X)(Y, Y ) − (X, Y )^2 = (x1^2 + x2^2 )(y1^2 + y2^2 ) − (x1 y1 + x2 y2 )^2 = (x2 y1 − x1 y2 )^2 ≥ 0 (1.68)
When we expand the expression, the terms of the form xi^2 yi^2 cancel. So we are left with
∑_{i=1}^{N} ∑_{j=1, j≠i}^{N} xi^2 yj^2 − 2 ∑_{i<j}^{N} (xi yi )(xj yj ) = ∑_{i=1}^{N} ∑_{j>i}^{N} (xi yj − xj yi )^2 ≥ 0 (1.71)
This concludes our proof for any finite-dimensional real vector space. We leave it as an exercise
to extend this proof to complex vector spaces. Homework.
Now, we generalize it to any inner product space. Consider two vectors φ and χ. In order
to prove (1.66), consider the vector φ + c χ (c is a complex number); from the property of the
inner product, it follows that the norm is non-negative
(φ + c χ, φ + c χ) ≥ 0 (1.72)
The above inequality is true for any value c. Let’s choose c = −(χ, φ)/(χ, χ). Putting this in
the above identity, we get
(φ, φ)(χ, χ) − |(φ, χ)|2 ≥ 0 (1.73)
Note that equality holds only if there is a choice of c such that
(φ + c χ, φ + c χ) = 0 (1.74)
which, by positive definiteness, implies
φ + cχ = 0 (1.75)
i.e. φ and χ are proportional to each other.
Some notation Here, we introduce a notation, which is known as the Dirac Bra-Ket notation.
It is very widely used in Quantum Mechanics literature. In this notation, a vector is denoted as
follows
ψ −→ |ψ〉 (1.76)
|ψ〉 is called a ket. The inner product of two vectors is written as follows
(φ, ψ) −→ 〈φ|ψ〉 (1.77)
〈φ| is called a bra.
For example, in the Bra-ket notation, the Cauchy-Schwarz inequality (1.66) is written as
〈φ|φ〉〈χ|χ〉 ≥ |〈φ|χ〉|^2
1.7 Unitary operators (and anti-unitary operators)
Let’s define the following two things
1. Unitary
An operator U is called a unitary operator if it preserves the inner product,
(U φ, U ψ) = (φ, ψ) for all φ, ψ
2. Anti-unitary
An operator U is anti-unitary if it is anti-linear and conjugates the inner product,
(U φ, U ψ) = (φ, ψ)^* = (ψ, φ) for all φ, ψ
Properties of Linear, Unitary operators Let U be an operator and |α〉 be a ket vector,
and O be another operator. The action of U on |α〉 and O is defined as
|α〉 −→ |αU 〉 = U |α〉 (1.82)
O −→ OU = U O U^{−1} (1.83)
U^† = U^{−1} (1.85)
We will demonstrate this with a few examples. We begin with the example of Cn . Consider two
vectors in Cn
v1 = (c1 , · · · , cn ) , v2 = (d1 , · · · , dn ) (1.90)
An operator from Cn to Cn is an n × n matrix with complex entries. Then
i, j and k are dummy indices; we follow Einstein’s summation convention: one needs to sum
over all possible values of a repeated dummy index. Then
We compare this equation with the definition of adjoint (1.89), and we get
Thus, this reproduces the result of the Hermitian adjoint for complex-valued matrices
O^† = (O^*)^T (1.94)
Let’s now consider a vector space which is not finite-dimensional. Consider the collection of all
normalisable differentiable functions from R to C. Given two such functions, we can define the
inner product to be
(f, g) = ∫_{−∞}^{∞} dx f^*(x) g(x) (1.95)
Let’s now consider the operator O = d/dx. We want to compute it’s adjoint
(Of, g) = ∫_{−∞}^{∞} dx (df^*(x)/dx) g(x) = [f^*(x) g(x)]_{−∞}^{∞} − ∫_{−∞}^{∞} dx f^*(x) (dg(x)/dx) (1.96)
For normalisable functions f (±∞) = g(±∞) = 0. Then, comparing the above equation with
the definition of adjoint (1.89), we obtain
(d/dx)^† = − d/dx (1.97)
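The adjoint relation (1.97) can be checked numerically: for normalisable f and g the boundary term in (1.96) vanishes, so (Of, g) should equal (f, −Og). A sketch with two Gaussians (our arbitrary choice of functions):

```python
import numpy as np

# Check (d/dx)^dagger = -d/dx: compare (Of, g) and (f, -Og) for O = d/dx.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
f = np.exp(-(x - 1.0)**2)          # a normalisable function
g = np.exp(-2.0 * (x + 0.5)**2)    # another normalisable function

df = np.gradient(f, x)  # numerical derivative of f
dg = np.gradient(g, x)  # numerical derivative of g

lhs = np.sum(df * g) * dx    # (O f, g); f is real so f* = f
rhs = -np.sum(f * dg) * dx   # (f, -O g)
```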
Now, we define the concept of the self-adjoint operator. An operator is called a self-adjoint
operator if it is equal to its adjoint
self-adjoint : O = O† (1.98)
This implies that the eigenvalues of a self-adjoint operator are real. Let's now consider two eigenvectors
with different eigenvalues
Again we use (1.89),
Since ξ ∕= λ, we obtain that (v1 , v2 ) = 0. This implies that two eigenvectors with different
eigenvalues are necessarily orthogonal.
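Both facts (real eigenvalues, orthogonal eigenvectors) can be spot-checked numerically for a random Hermitian matrix; a sketch:

```python
import numpy as np

# Build a random Hermitian (self-adjoint) matrix and check that its
# eigenvalues are real and its eigenvectors are orthonormal.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = A + A.conj().T  # H equals its own adjoint by construction

eigvals, eigvecs = np.linalg.eigh(H)
eigvals_real = np.allclose(np.imag(eigvals), 0)  # eigh returns real eigenvalues
orthonormal = np.allclose(eigvecs.conj().T @ eigvecs, np.eye(4))
```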
It is also possible to show that the collection of all eigenvectors of a self-adjoint operator is complete, i.e. it spans the vector space.
Since each of the operators is a map from V to itself, if we consider their action, as given above,
then that is also a map from V to itself. So, the product of two operators is also an operator.
Let’s now see whether the final result depends on the order of operation or not. We start with
an example: O1 = x and O2 = x2 . Then
O1 ◦ O2 ◦ f (x) = O1 ◦ (x^2 f (x)) = x^3 f (x)
O2 ◦ O1 ◦ f (x) = O2 ◦ (x f (x)) = x^2 (x f (x)) = x^3 f (x) (1.105)
So in this case, O1 ◦ O2 = O2 ◦ O1 . Let’s consider another example: O1 = x and O2 = d/dx.
Then
O1 ◦ O2 ◦ f (x) = O1 ◦ (df (x)/dx) = x df (x)/dx
O2 ◦ O1 ◦ f (x) = O2 ◦ (x f (x)) = d/dx (x f (x)) = x df (x)/dx + f (x) (1.106)
Clearly, we can see that O1 ◦ O2 ∕= O2 ◦ O1 ;
If the actions of two operators are the same irrespective of the order of operation, then it
is said that those two operators commute. Otherwise, we say that those two operators do not
commute. In the case of linear operators, we compute the following quantity
[O1 , O2 ] = O1 O2 − O2 O1 (1.107)
This is called a commutator, and whenever the action of two operators does not depend on the
order of operation, this quantity vanishes. In the example given above, we can see
[x, d/dx] = −1 (1.108)
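The relation [x, d/dx] = −1 can be verified exactly by acting on polynomials; a sketch representing a polynomial as its list of coefficients (the test polynomial is an arbitrary choice):

```python
# Verify [x, d/dx] f = -f on polynomials.  p = [a0, a1, a2, ...] stands for
# a0 + a1 x + a2 x^2 + ...
def ddx(p):
    # derivative: coefficient of x^(k-1) is k * a_k
    return [k * p[k] for k in range(1, len(p))] or [0.0]

def mul_x(p):
    # multiplication by x shifts all coefficients up by one degree
    return [0.0] + list(p)

def sub(p, q):
    n = max(len(p), len(q))
    p = list(p) + [0.0] * (n - len(p))
    q = list(q) + [0.0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

f = [3.0, -2.0, 5.0, 1.0]  # 3 - 2x + 5x^2 + x^3
comm = sub(mul_x(ddx(f)), ddx(mul_x(f)))  # [x, d/dx] f = x f' - (x f)'
# comm equals -f, confirming [x, d/dx] = -1 on this function
```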
In order to compute a commutator, it is a good idea to consider its action on a function. For
example, let’s say we want to compute [d/dx, eax ]. Then
d d ax d
, eax f (x) = (e f (x)) − eax (f (x)) = aeax f (x) (1.109)
dx dx dx
So we conclude that [d/dx, e^{ax} ] = a e^{ax} . For more than one operator, we can consider various
commutators, but all of them can be reduced to a commutator of two operators. For example,
consider the following quantity
[O1 , O2 O3 ] = [O1 , O2 ] O3 + O2 [O1 , O3 ] (1.110)
We leave it to the reader to prove this. This is known as the Leibniz identity of the operators.
For a more general commutator, we can use the above identity repeatedly. For example, consider
the following commutator
[O1 , O2 O3 O4 ] (1.111)
Let’s first consider O3 O4 as a single operator (try to show that the product of two linear operators
is also a linear operator) and use the above identity
[O1 , O2 ] O3 O4 + O2 [O1 , O3 O4 ] (1.112)
Now, for the second commutator, we can use the above identity again. Commutators satisfy
another very important property, which is known as the Jacobi identity
[O1 , [O2 , O3 ]] + [O2 , [O3 , O1 ]] + [O3 , [O1 , O2 ]] = 0 (1.113)
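The Jacobi identity can be spot-checked numerically for random matrices; a sketch (the matrix size and random seed are arbitrary):

```python
import numpy as np

# Numerical check of the Jacobi identity (1.113) for random complex matrices.
rng = np.random.default_rng(2)

def comm(A, B):
    return A @ B - B @ A

O1, O2, O3 = (rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
              for _ in range(3))
jacobi = (comm(O1, comm(O2, O3)) + comm(O2, comm(O3, O1))
          + comm(O3, comm(O1, O2)))
jacobi_holds = np.allclose(jacobi, 0)
```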
3. In any measurement, the observed value for any observable A is one of the eigenvalues of
the operator A.
4. When we make a measurement on a general state |ψ〉, the probability to observe the value a for
the operator A is
|〈a|ψ〉|2 (1.114)
where |a〉 is the eigenstate of A with eigenvalue a.
Take-away lessons:
1.10.1 A simple example
We begin with an example of a simple quantum mechanical system. We consider the vector
space to be C2 . Any Hermitian operator on C2 can serve as a Hamiltonian for the system; consider
H = [ Bz        Bx − iBy ]
    [ Bx + iBy  −Bz      ] , Bx , By , Bz ∈ R (1.116)
In physics, this system is known as a spin 1/2 particle in a magnetic field. The name is not
important. The purpose was to demonstrate how to define a quantum system. The Schrödinger
equation for this system is
Hψ(t) = iℏ ∂ψ(t)/∂t , ψ = (c1 (t), c2 (t))^T , c1 (t), c2 (t) ∈ C (1.117)
We leave it to the readers to find the eigenvalues of H and then to solve the Schrödinger
equation Homework.
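The solution can be sketched numerically by diagonalising H. This assumes units with ℏ = 1, the spin-1/2 form H = B · σ (with −Bz on the lower diagonal), and field values of our own choosing:

```python
import numpy as np

# Solve the Schrodinger equation (1.117) by expanding in eigenstates of H.
Bx, By, Bz = 0.3, -0.4, 1.2  # illustrative field values (our assumption)
H = np.array([[Bz, Bx - 1j * By],
              [Bx + 1j * By, -Bz]])

E, V = np.linalg.eigh(H)  # energies E_n (real) and eigenvectors V[:, n]
psi0 = np.array([1.0, 0.0], dtype=complex)

def psi(t):
    # Expand psi0 in energy eigenstates; each evolves by a phase exp(-i E_n t).
    c = V.conj().T @ psi0
    return V @ (np.exp(-1j * E * t) * c)

# The eigenvalues come out as +-|B| = +-sqrt(Bx^2 + By^2 + Bz^2), and the
# time evolution preserves the norm of the state.
norm_t = np.vdot(psi(2.7), psi(2.7)).real
```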
In physics experiments, we often have more experimentally observed quantities apart from
the energy. In Quantum Mechanics, an experimentally observed quantity is an eigenvalue of a
self-adjoint operator. For example, momentum is an experimentally observed quantity, and it
is a self-adjoint operator. We will consider more examples in the next chapter.
More exercises from chapter I
1. Take the collection of all continuous real functions f (x) (f^* = f ) of one real variable such that
∫ dx f (x)^2 = Finite (1.118)
(a) Show that the collection of all such functions forms a vector space.
(b) Which of the operators are well defined on the whole of V ?
x , d/dx , e^{αx} , e^{α|x|} (1.119)
2. Unitary and anti-unitary operators
(a) Show that the product of any (finite) number of linear unitary operators is also linear,
unitary.
(b) Show that the product of an even (finite) number of anti-linear, anti-unitary operators
is linear, unitary.
(c) Show that the product of an odd (finite) number of anti-linear, anti-unitary operators
is anti-linear, anti-unitary.
7. Self-adjoint operators
Consider the following operator
P = i d/dx (1.124)
An operator O is self-adjoint on functions defined in the interval [a, b] if
∫_a^b dx (O ◦ f )^* g = ∫_a^b dx f^* (O ◦ g) (1.125)
Find the condition on the functions f and g such that P is self-adjoint on the space of
functions defined on a) half space [0, ∞) b) interval [a, b]
Check it also for P^2 .
Show that gij is a Hermitian matrix. We form a linear combination of the following form
f̃i = cij fj (1.127)
Find the relation between gij and g̃ij . Show that you can always choose the cij s such that g̃ij
is a diagonal matrix with real entries.
11. From the definition of a Linear operator, show that a linear operator commutes with a
complex number.
13. Consider two operators O1 and O2 such that their commutator is not zero. Let ψ1 , · · · , ψn , · · ·
be a complete set of eigenvectors for O1 . Are they also eigenvectors for O2 ?
Explain your answer through examples.
Chapter 2
In the last chapter, we saw that a quantum system is defined by a pair: a Hilbert space and
a self-adjoint operator (which is also known as the Hamiltonian). In this chapter, we discuss
a large class of quantum systems. Unlike the qubit system that we introduced in the previous
chapter, these systems also have a classical description, and in that sense they are special, as they
can be studied both in classical mechanics and in quantum mechanics. The readers should keep
in mind that this is not true for a generic quantum/classical system. For all these systems the
Hilbert space is the collection of normalisable functions from the real line to the complex numbers.
Ψ : R −→ C , ∫_{−∞}^{∞} |Ψ(x)|^2 dx = finite (2.1)
H(x) = − (ℏ^2/2m) d^2/dx^2 + V (x) , V^*(x) = V (x) (2.3)
These systems are broadly known as one-dimensional systems; in classical mechanics, these
describe the motion of a particle in one dimension in the presence of a potential V (x). From the
perspective of Quantum Mechanics, the name “one-dimensional” may sound misleading as the
corresponding Hilbert space is infinite-dimensional. The function Ψ is known as the wave
function. The broad class of quantum systems where the Hilbert space consists of functions
is also known as wave mechanics. We rewrite the postulates of Quantum Mechanics in the
language of Wave functions.
1. The information of a closed, isolated quantum system is encoded in the wave function Ψ(x, t),
which is a function of space x and time t.
The probability of finding the system in a differential volume dV at a point x is given by |Ψ(x, t)|^2 dV .
At any point in time, the total probability over all space points is 1. This implies that the
wave function is normalized
∫ |Ψ(x, t)|^2 dV = 1 , t ∈ (−∞, ∞) (2.5)
2. Every observable of the system corresponds to a linear self-adjoint operator.
In the next section, we will explain what a linear self-adjoint operator is.
3. In any measurement, the observed value for the observables is one of the eigenvalues of
the corresponding operator.
4. Consider a system described by a normalized wave function Ψ. Then the expectation
value of the observable corresponding to the operator O(x, t) is
∫ Ψ^*(x, t) O(x, t) Ψ(x, t) dV (2.6)
For a solution of the above form, the dependence on x and t is there in two different functions.
We put this back in the above equation, and after a few simple steps, we get
(1/ψ(x)) [− (ℏ^2/2m) d^2/dx^2 + V (x)] ψ(x) = iℏ (1/f (t)) ∂f (t)/∂t (2.12)
The left-hand side of the equation does not depend on t, and the right-hand side does not depend
on x. Hence both sides must be equal to some constant. Let us call it E
[− (ℏ^2/2m) d^2/dx^2 + V (x)] ψ(x) = Eψ(x) , iℏ ∂f (t)/∂t = E f (t) (2.13)
The first equation is known as the time-independent Schrödinger equation. E is an eigenvalue
of the Hamiltonian operator; since the Hamiltonian is a self-adjoint operator, its eigenvalues are real
and observable. E denotes the energy of the system.
Before we go into analyzing a particular system, we list down a few properties of the wave
function. These properties follow from the postulates of Wave mechanics.
Footnote 1: In technical terms, these are valid only for systems whose wave function can be written as a function of
spacetime co-ordinates. This is true for many systems but not for all quantum systems.
1. Two wave functions that are proportional to each other (up to a complex number) represent the same state. This is because all the physical observables are obtained from the action of linear self-adjoint operators. Since the two wave functions are proportional to each other, the action of any linear operator gives the same answer for both of them. Hence it is not possible to distinguish them by experiment.
Two wave functions ψ₁(x) and ψ₂(x) represent two completely different states if the inner product of their wave functions is zero
\[ \int_{-\infty}^{\infty} dx \, \psi_1^*(x) \, \psi_2(x) = 0 \tag{2.14} \]
here N 2 is a finite real number. If this is the case, we can always find a normalized wave
function which is consistent with the probability interpretation.
4. Consider the time-independent Schrödinger equation and integrate it over a local neighbourhood (a − ε, a + ε) of x = a (ε > 0)
\[ \int_{a-\epsilon}^{a+\epsilon} dx \left[ -\frac{\hbar^2}{2m} \frac{d^2}{dx^2}\psi(x) + V(x)\,\psi(x) \right] = E \int_{a-\epsilon}^{a+\epsilon} dx \, \psi(x) \tag{2.17} \]
ψ(x) is always continuous so that the probability is unambiguous at every point. Let's take the limit ε → 0. Then the above integral implies that ψ′(x) can have a discontinuity at x = a, depending on the functional form of the potential. The discontinuity is given by
\[ \left. \frac{d\psi}{dx} \right|_{x=a^+} - \left. \frac{d\psi}{dx} \right|_{x=a^-} = \frac{2m}{\hbar^2} \lim_{\epsilon \to 0} \int_{a-\epsilon}^{a+\epsilon} dx \, V(x)\, \psi(x) \tag{2.18} \]
The delta function gives rise to a discontinuity in the first derivative of the wave function,
unless the wave function vanishes at that point.
To summarise, the wave function in one dimension is continuous and piece-wise differentiable.
2.1 Particle in a box
We consider a particle in a box
\[ V(x) = \begin{cases} 0 \,, & |x| \le \dfrac{L}{2} \\[4pt] \infty \,, & |x| > \dfrac{L}{2} \end{cases} \tag{2.21} \]
Since the potential is time-independent, we consider the time-independent Schrödinger equation. The wave function must vanish at the walls of the box, x = −L/2 and x = L/2.
This is also known as the Dirichlet boundary condition. The solution of (2.22) subjected to the
above boundary condition is given by
\[ \psi_n(x) = \sqrt{\frac{2}{L}} \, \sin\!\left[ n\pi \left( \frac{x}{L} - \frac{1}{2} \right) \right] \,, \qquad n \in \mathbb{Z}^+ \tag{2.25} \]
n is called the quantum number. We can check that the solutions are orthonormal
\[ \int_{-L/2}^{L/2} \psi_{n_1}^*(x) \, \psi_{n_2}(x) \, dx = \delta_{n_1 n_2} \tag{2.26} \]
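As a quick numerical sanity check, the eigenfunctions (2.25) and the orthonormality relation (2.26) can be verified with a short script. This is a minimal sketch, not part of the course material: the box width L = 1 and the midpoint integration rule are our choices; it also counts the interior nodes of each state, a property discussed below.

```python
import math

L = 1.0  # box width (arbitrary choice of units)

def psi(n, x):
    """Eigenfunction of the infinite well, eq. (2.25)."""
    return math.sqrt(2.0 / L) * math.sin(n * math.pi * (x / L - 0.5))

def overlap(n1, n2, steps=20000):
    """Midpoint-rule approximation of the inner product (2.26)."""
    dx = L / steps
    return sum(psi(n1, -L / 2 + (i + 0.5) * dx) * psi(n2, -L / 2 + (i + 0.5) * dx)
               for i in range(steps)) * dx

def nodes(n, steps=2000):
    """Count interior sign changes of psi_n; we expect n - 1 nodes."""
    vals = [psi(n, -L / 2 + (i + 1) * L / (steps + 1)) for i in range(steps)]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)
```

One can check, for example, that `overlap(1, 1)` is 1 to numerical accuracy while `overlap(1, 2)` vanishes.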
1. The energy level, in this case, is discrete; n in (2.27) is a positive integer.
2. In classical mechanics, the lowest possible value for the energy of the system is 0; this is not allowed in quantum mechanics. The lowest possible energy in quantum mechanics is attained for n = 1
\[ E_1 = \frac{\pi^2 \hbar^2}{2mL^2} \tag{2.28} \]
For n = 0, the wave function vanishes identically, and hence the total probability turns
out to be 0. So, n = 0 is not an allowed state.
As we increase n, even and odd functions appear alternately.
5. Let’s plot some of the wave functions. The wave functions for the first few states can be
found in fig. (2.13). One can take the help of Mathematica to plot the wave function of a
general state.
A node of a wave function is a point where the wave function vanishes, but its first derivative does not. From the plot, we can see that the wave function of the nth state has n − 1 nodes.
6. If there is more than one mutually orthogonal wave function satisfying the time-independent Schrödinger equation with the same energy, then we call them degenerate energy eigenstates. In this case, we can see that the energy levels are non-degenerate, i.e. for any particular energy, there is only one state.
\[ E_{n+1} - E_n = (2n+1) \, \frac{\pi^2 \hbar^2}{2mL^2} \tag{2.30} \]
So the relative increment in energy is
\[ \frac{E_{n+1} - E_n}{E_n} = \frac{2n+1}{n^2} \tag{2.31} \]
For large values of n, this gap is smaller than the experimental sensitivity. In that case,
we cannot experimentally verify the discrete jump between two energy levels. The energy
levels will appear to be continuous. For this reason, the large quantum number limit is
often referred to as a quantum system’s classical limit. Note that a dimensionless quantity
controls the classical limit of the system.
8. In classical mechanics, the particle is equally likely to be found anywhere in the box. This is not true in quantum mechanics. For example, in the ground state, the particle is most likely to be found at the centre of the box; in the first excited state, the probability of finding the particle at the centre is zero.
However, for large quantum numbers, the wave function changes very rapidly. In that
limit, if we choose any region in the box, the probability of finding the particle becomes
uniform again.
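This approach to the classical limit can be seen numerically. The sketch below (a minimal illustration; L = 1, the sub-interval and the quantum numbers are arbitrary sample choices of ours) integrates |ψ_n|² over a fixed region of the box: for n = 1 the probability is far from the classical value, while for large n it approaches the uniform value (x₂ − x₁)/L.

```python
import math

L = 1.0  # box width (arbitrary units)

def prob_in_region(n, x1, x2, steps=100000):
    """P(x1 < x < x2) in the n-th box eigenstate, eq. (2.25), by midpoint rule."""
    dx = (x2 - x1) / steps
    total = 0.0
    for i in range(steps):
        x = x1 + (i + 0.5) * dx
        total += (2.0 / L) * math.sin(n * math.pi * (x / L - 0.5)) ** 2 * dx
    return total

# Region of classical probability 0.3:
p1 = prob_in_region(1, -0.15 * L, 0.15 * L)    # ground state: enhanced at the centre
p50 = prob_in_region(50, -0.15 * L, 0.15 * L)  # large n: close to the classical 0.3
```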
[Fig. 2.13: wave functions of the first few states, n = 1, 2, 3]
Students are encouraged to read the following two articles by V Balakrishnan to learn about
subtle aspects of Particle in a box.
Time-dependent solutions Up to this point, we have been discussing the solutions to the time-independent Schrödinger equation. Let's now discuss the solutions to the time-dependent Schrödinger equation
\[ \Psi_n(x,t) = e^{-i E_n t/\hbar} \, \psi_n(x) \tag{2.32} \]
E_n is given in (2.27). These wave functions satisfy the orthonormality condition at any point in time
\[ \int_{-\infty}^{\infty} dx \, |\Psi_n(x,t)|^2 = 1 \,, \qquad \forall t \in (-\infty, \infty) \tag{2.33} \]
Since the Schrödinger equation is a linear partial differential equation, any linear combination of solutions is also a solution to the equation. So the most general solution to the Schrödinger equation is
\[ \Psi(x,t) = \sum_{n=1}^{\infty} c_n \Psi_n(x,t) \tag{2.34} \]
So the Hilbert space for this system is simply ℓ²(ℕ)². Let's now consider the action of the Hamiltonian
\[ H\Psi(x,t) = \sum_{n=1}^{\infty} \frac{\pi^2 \hbar^2}{2mL^2} \, n^2 c_n \Psi_n(x,t) \tag{2.37} \]
So the Hamiltonian corresponds to the following operator
\[ H : (c_1, c_2, \cdots) \longrightarrow \frac{\pi^2 \hbar^2}{2mL^2} \left( c_1, 4c_2, \cdots, n^2 c_n, \cdots \right) \tag{2.38} \]
\[ H(x) = -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + \frac{1}{2} k x^2 \tag{2.40} \]
It is possible to find the wave function and energy levels of this system by solving the differential
equation. However, we solve it in a different way: the method of factorization3 . We begin by
defining the following quantity
\[ \omega = \sqrt{\frac{k}{m}} \tag{2.41} \]
In the classical harmonic oscillator, this is the frequency of the oscillator. In quantum mechanics, we keep referring to it as the frequency. Then we can rewrite the Hamiltonian as
\[ H(x) = \frac{\hbar\omega}{2} \left[ -\frac{\hbar}{m\omega} \frac{d^2}{dx^2} + \frac{m\omega}{\hbar} x^2 \right] = \frac{\hbar\omega}{2} \left[ -\frac{d^2}{d\tilde{x}^2} + \tilde{x}^2 \right] \tag{2.42} \]
where we have defined
\[ \tilde{x} = \sqrt{\frac{m\omega}{\hbar}} \, x \tag{2.43} \]
We can check that x̃ is dimensionless because \( \sqrt{m\omega/\hbar} \) has dimension of inverse length. Now we explain the method of factorization, which was originally discovered by Erwin Schrödinger. Let's define the operator
\[ A = \frac{1}{\sqrt{2}} \left( \frac{d}{d\tilde{x}} + \tilde{x} \right) \;\Longrightarrow\; A^\dagger = \frac{1}{\sqrt{2}} \left( -\frac{d}{d\tilde{x}} + \tilde{x} \right) \tag{2.44} \]
Then we can see that
\[ AA^\dagger = \frac{1}{2}\left( -\frac{d^2}{d\tilde{x}^2} + \tilde{x}^2 \right) + \frac{1}{2} \,, \qquad A^\dagger A = \frac{1}{2}\left( -\frac{d^2}{d\tilde{x}^2} + \tilde{x}^2 \right) - \frac{1}{2} \tag{2.45} \]
Comparing this with the Hamiltonian in (2.42) we get
\[ \left[ A, A^\dagger \right] = 1 \,, \qquad H = \hbar\omega \left( A^\dagger A + \frac{1}{2} \right) \tag{2.46} \]
2
It is possible to show that the Hilbert space for any quantum system with discrete energy levels is ℓ²(ℕ). This is a known result in mathematics: any Hilbert space with a countable basis is isomorphic to ℓ²(ℕ). Thus all quantum systems with discrete energy levels share the same Hilbert space; the only difference between them is the choice of Hamiltonian. This provides a unified view of a large class of quantum systems.
3
Please see the book by Shi-Hai Dong, “Factorization Method in Quantum Mechanics” if you want to know
more about this.
The method of factorization relies on identifying one operator (and its adjoint) such that the Hamiltonian can be written as a product of those operators (plus some real number). Now we compute two more things that will be useful later. We begin with
\[ [H, A] = \left[ \hbar\omega \left( A^\dagger A + \frac{1}{2} \right), A \right] = \hbar\omega \left[ A^\dagger A, A \right] + \frac{\hbar\omega}{2} \left[ 1, A \right] \tag{2.47} \]
The second term is a commutator of an operator with a complex number and thus it is zero. Let's consider the first term then
\[ \left[ A^\dagger A, A \right] = A^\dagger [A, A] + \left[ A^\dagger, A \right] A = -A \tag{2.48} \]
thus we get
\[ [H, A] = -\hbar\omega A \tag{2.49} \]
Following similar steps, one can show that
\[ [H, A^\dagger] = \hbar\omega A^\dagger \]
Now we want to find the energy eigenstates of this system. Before we begin the analysis, we make a few comments: H is a linear sum of two self-adjoint operators, ℏωA†A and ℏω/2. Moreover, these self-adjoint operators commute with each other, and thus it is possible to find simultaneous eigenvectors of the two operators.
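These operator relations can be checked concretely by truncating the Hilbert space to the first N energy eigenstates, where A becomes the matrix with entries √n on the superdiagonal. Below is a minimal sketch with numpy, in units ℏ = ω = 1 (our choice of units and of N); note that truncation spoils [A, A†] = 1 only in the last diagonal entry, while the commutators with H checked below hold exactly.

```python
import numpy as np

N = 60  # truncation dimension (arbitrary)
# A|n> = sqrt(n)|n-1>: entries sqrt(1), ..., sqrt(N-1) on the superdiagonal
A = np.diag(np.sqrt(np.arange(1.0, N)), k=1)
Adag = A.conj().T
H = Adag @ A + 0.5 * np.eye(N)  # H = ω(A†A + 1/2) with ℏω = 1, eq. (2.46)

# [H, A] = -A and [H, A†] = +A†: the lowering/raising relations
err_lower = np.max(np.abs((H @ A - A @ H) + A))
err_raise = np.max(np.abs((H @ Adag - Adag @ H) - Adag))
```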
In order to find the eigenvalues, we start by proving that the eigenvalues of A†A are always non-negative. Let's assume that ψ is an eigenfunction of A†A
\[ A^\dagger A \psi = \lambda \psi \,, \qquad \lambda \in \mathbb{R} \tag{2.51} \]
Moreover,
\[ \lambda = 0 \;\Longrightarrow\; \tilde\psi = 0 \;\Longrightarrow\; A\psi = 0 \tag{2.56} \]
Here the expression for A is given in (2.44); putting it into the above equation, we can see that there is a solution (we denote the solution as ψ₀)
\[ \left( \frac{d}{d\tilde{x}} + \tilde{x} \right) \psi_0(x) = 0 \;\Longrightarrow\; \psi_0(x) \propto e^{-\frac{\alpha}{2} x^2} \,, \qquad \alpha = \frac{m\omega}{\hbar} \tag{2.57} \]
ψ₀ has the lowest energy in the system; it is the ground state. This state has energy ℏω/2. The wave function is not normalised, but it is normalisable. We can evaluate the Gaussian integral easily
\[ \int_{-\infty}^{\infty} e^{-\alpha x^2} \, dx = \sqrt{\frac{\pi}{\alpha}} \tag{2.58} \]
So the normalized wave function is
\[ \psi_0(x) = \left( \frac{\alpha}{\pi} \right)^{1/4} e^{-\frac{\alpha}{2} x^2} \tag{2.59} \]
Just like in the case of Particle in a box, we see that the ground state is an even function.
Now we have to determine the other eigenvalues and eigenfunctions of H(x). In order to do that, consider a state ψ with energy E
\[ H\psi = E\psi \tag{2.60} \]
Then
\[ H \left( A\psi \right) = A H \psi + [H, A]\, \psi = (E - \hbar\omega)\, A\psi \]
Here we have used [H, A] = −ℏωA. This means that the action of A on a wave function lowers its energy. Similarly, we can show that the action of A† increases the energy
\[ H \left( A^\dagger \psi \right) = (E + \hbar\omega)\, A^\dagger \psi \tag{2.62} \]
A can act on a state ψ multiple times to lower the energy more and more; in this way, it would seem that the system can have arbitrarily low energy.

[Ladder diagram: A† raises the energy E → E + ℏω, and A lowers it E → E − ℏω.]

This descent stops only if the action of A on some state gives 0 (A annihilates the state)
\[ A\psi_0 = 0 \tag{2.63} \]
To find the other excited states, we consider the action of A†, because it increases the energy.
\[ \psi_n(x) = \frac{1}{\sqrt{n!}} \left( A^\dagger \right)^n \psi_0(x) \,, \qquad H(x)\,\psi_n(x) = \hbar\omega \left( n + \frac{1}{2} \right) \psi_n(x) \tag{2.64} \]
The factor of \( 1/\sqrt{n!} \) is for the purpose of normalisation. The wave function of the nth state is
\[ \psi_n(x) = \frac{1}{\sqrt{2^n \, n!}} \left( \frac{\alpha}{\pi} \right)^{1/4} e^{-\frac{\alpha}{2}x^2} \, H_n(\tilde{x}) \tag{2.65} \]
H_n(x̃) are the Hermite polynomials; their definition is the following
\[ H_n(z) = (-1)^n \, e^{z^2} \frac{d^n}{dz^n} e^{-z^2} \tag{2.66} \]
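The explicit wave functions (2.65) can be checked numerically. The sketch below builds H_n from the standard three-term recurrence H_{n+1}(z) = 2z H_n(z) − 2n H_{n−1}(z), which is equivalent to the definition (2.66), and verifies orthonormality by quadrature; setting α = 1 is our choice of units.

```python
import math

alpha = 1.0  # mω/ℏ set to 1 (our choice of units)

def hermite(n, z):
    """Physicists' Hermite polynomial H_n(z) via the three-term recurrence."""
    h0, h1 = 1.0, 2.0 * z
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * z * h1 - 2.0 * k * h0
    return h1

def psi(n, x):
    """Oscillator eigenfunction, eq. (2.65), with x̃ = sqrt(alpha) x."""
    xt = math.sqrt(alpha) * x
    norm = (alpha / math.pi) ** 0.25 / math.sqrt(2.0 ** n * math.factorial(n))
    return norm * math.exp(-0.5 * alpha * x * x) * hermite(n, xt)

def overlap(n1, n2, xmax=10.0, steps=4000):
    """Midpoint-rule inner product; the Gaussian decay makes this very accurate."""
    dx = 2 * xmax / steps
    return sum(psi(n1, -xmax + (i + 0.5) * dx) * psi(n2, -xmax + (i + 0.5) * dx)
               for i in range(steps)) * dx
```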
For the particle in a box, we listed a number of properties of the system. We can check that these properties are also satisfied by the states of the quantum harmonic oscillator. Homework: check those statements!
The most general state of the system is
\[ \Psi(x) = \sum_{n=0}^{\infty} c_n \psi_n(x) \,, \qquad \sum_n |c_n|^2 = \text{finite} \tag{2.67} \]
We also leave it to the reader to check that the Hilbert space in this case is again ℓ²(ℕ). Let's now consider the action of the creation and annihilation operators
\[ A^\dagger \Psi(x) = \sum_{n=0}^{\infty} c_n \sqrt{n+1} \, \psi_{n+1}(x) \,, \qquad A \Psi(x) = \sum_{n=0}^{\infty} c_n \sqrt{n} \, \psi_{n-1}(x) \tag{2.68} \]
Homework: plot the wave functions of various energy eigenstates.
the allowed form of the potential is. We want the wave function to be continuous, and that puts a restriction on the potential. For example, consider the following functions
1. λ δ(x − a)
3. sin ax
4. 1/(x² + a²)
In order to understand a generic potential, we first make a list of checks. Firstly, we check whether lim_{x→±∞} V(x) exists or not. For example, if we take V(x) = sin(ax), then this limit does not exist. We restrict to the case when both limits exist. Let's say
\[ \lim_{x \to \pm\infty} V(x) = V_\pm \]
We denote the maxima and the minima of the potential by Vmax and Vmin . Then
We restrict to the case where Vmin is a finite quantity (apart from the cases where the potential
has delta function contributions); otherwise, the system is not stable and hence is not physical.
The solution of the Schrödinger equation in a generic potential can be broadly classified into
two categories
1. Bound state
If the energy of a state is less than V+ , then such a state can never reach x = ±∞. Such
a state is known as a bound state. They exist iff
2. Scattering state
If the energy of a state is greater than V+ (and/or V− ) , then such a state can travel to
x = ±∞. Such a state is known as a scattering state. This exists only if
\[ E \ge V_+ \,,\; V_- \tag{2.74} \]
\[ V_+ \,,\; V_- < \infty \tag{2.75} \]
Depending on the value of V+ , V− , Vmin , Vmax , we can classify potentials in four broad categories:
2. Three of them are the same
This can happen in two ways:
We refer to them as potential well and potential barriers (see fig. 2.5). Later we will deal
with two special cases of this: rectangular potential well and rectangular potential barrier.
(a)
\[ V_- = V_{\min} \,, \qquad V_+ = V_{\max} \tag{2.78} \]
There are many examples of this. One is drawn in fig 2.6. Another example is
\[ V_0 \tanh(x) \tag{2.79} \]
(b)
\[ V_+ = V_{\min} \,, \qquad V_- = V_{\max} \tag{2.80} \]
\[ V_+ = V_- \tag{2.81} \]
It is possible to relax this assumption. We will discuss later what happens when we lift
this assumption.
2. V (x) − V+ is non-zero only in a bounded interval of the real line; it is zero otherwise.
When this assumption is true, we can make a very special type of experiment. Moreover,
in nature, many potentials (but not all) satisfy these criteria. So this is an assumption
which is motivated by experiment.
1
V(x) = 1/(x² + a²) is an example of a potential for which this is not true.
V (x) = 0 (2.82)
In this case, the Schrödinger equation takes the form
\[ -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} \psi(x) = E\psi(x) \tag{2.83} \]
For E < 0, any solution of the above equation has a probability unbounded from above, and hence does not represent a physical state. The solutions of the equation are of the form
\[ \psi_k(x) = e^{ikx} \,, \qquad k = \pm\sqrt{\frac{2mE}{\hbar^2}} \tag{2.84} \]
These are called plane-wave solutions, and the plane-wave solutions are NOT square integrable⁴; they are only delta-function normalizable, i.e. they satisfy
\[ \int_{-\infty}^{\infty} \psi_{k'}^*(x) \, \psi_k(x) \, dx = \delta(k - k') \tag{2.85} \]
Physically this means that these wave functions are not wave functions of any state that can be observed, since the probability associated with them is not bounded. However, note that we can always take a “collection” of plane-wave states
\[ \psi(x) = \int_{-\infty}^{\infty} dk \, g(k) \, e^{ikx} \tag{2.86} \]
\[ \hat{p} \, \psi_k(x) = -i\hbar \frac{\partial}{\partial x} \psi_k(x) = \hbar k \, \psi_k(x) \tag{2.89} \]
The state represented by this wave function has momentum ℏk.
4
A function is called square-integrable if
\[ \int_{-\infty}^{\infty} |\psi(x)|^2 \, dx = \text{finite} \]
The wave function of any physical state should be square-integrable; this follows from the probability interpretation of the wave function.
2.5 Probability current and unitarity
Let’s consider the time-dependent Schrödinger equation again
\[ -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} \Psi(x,t) + V(x,t)\,\Psi(x,t) = i\hbar \frac{\partial}{\partial t} \Psi(x,t) \tag{2.90} \]
We consider the complex conjugate of this equation, and we get
\[ -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} \Psi^*(x,t) + V(x,t)\,\Psi^*(x,t) = -i\hbar \frac{\partial}{\partial t} \Psi^*(x,t) \tag{2.91} \]
Now we multiply (2.90) by Ψ (x, t) and (2.91) by −Ψ(x, t) from the left and then add them to
get
\[ -\frac{\hbar^2}{2m} \left[ \Psi^*(x,t) \frac{\partial^2}{\partial x^2} \Psi(x,t) - \Psi(x,t) \frac{\partial^2}{\partial x^2} \Psi^*(x,t) \right] = i\hbar \frac{\partial}{\partial t} \left[ \Psi^*(x,t)\,\Psi(x,t) \right] \tag{2.92} \]
The above equation can be written as
\[ \frac{\partial}{\partial t} \rho(x,t) + \frac{\partial}{\partial x} j_x(x,t) = 0 \tag{2.93} \]
where ρ and j_x are defined as
\[ \rho(x,t) = \Psi^*(x,t)\,\Psi(x,t) \,, \qquad j_x(x,t) = \frac{i\hbar}{2m} \left[ \Psi(x,t) \frac{\partial}{\partial x} \Psi^*(x,t) - \Psi^*(x,t) \frac{\partial}{\partial x} \Psi(x,t) \right] \tag{2.94} \]
Eqn (2.93) takes the form of a continuity equation. One can also do this for the higher-dimensional Schrödinger equation. In that case, (2.93) will be replaced by
\[ \partial_t \rho(\vec{x},t) + \vec{\nabla} \cdot \vec{j}(\vec{x},t) = 0 \tag{2.95} \]
The expression for ρ is given in (2.94). The expression for \( \vec{j}(\vec{x},t) \) changes, and it is given by
\[ \vec{j}(\vec{x},t) = \frac{i\hbar}{2m} \left[ \Psi(\vec{x},t)\, \vec{\nabla} \Psi^*(\vec{x},t) - \Psi^*(\vec{x},t)\, \vec{\nabla} \Psi(\vec{x},t) \right] \tag{2.96} \]
The physical meaning of this continuity equation follows from the probability interpretation of
quantum mechanics. ρ(x, t) is the probability density to find the particle at position x at time
t. j(x, t) is the probability current. The above equation tells that if the probability at any point
changes, then the change in probability can be accounted for from the probability current at
that point.
A local-continuity equation is always associated with some conservation law (though the
converse is not necessarily true). In this case, the continuity equation is associated with the
conservation of probability. ρ(x, t) is the probability density to find a particle at position x and
time t. The probability of finding the particle at that point can change with time only if there is
a non-zero flux of the probability current at that point. Local conservation is a powerful tool to
analyze a system; it implies that the ρ(x, t) cannot suddenly decrease at one point and increase
at some other point (even though such a scenario will keep the total probability the same).
For time-independent potentials, we can consider solutions to the time-independent Schrödinger equation given in (2.11). If we substitute this in (2.95) we get
\[ \vec{\nabla} \cdot \vec{j}(\vec{x}) = 0 \,, \qquad \vec{j}(\vec{x}) = \frac{i\hbar}{2m} \left[ \psi(\vec{x})\, \vec{\nabla} \psi^*(\vec{x}) - \psi^*(\vec{x})\, \vec{\nabla} \psi(\vec{x}) \right] \tag{2.97} \]
\( \vec{j}(\vec{x}) \) is time-independent by definition, and it is divergence-less. Consider the special case of one dimension; there, the equation reduces to
\[ \frac{d}{dx} j(x) = 0 \,, \qquad j(x) = \frac{i\hbar}{2m} \left[ \psi(x) \frac{d}{dx} \psi^*(x) - \psi^*(x) \frac{d}{dx} \psi(x) \right] \tag{2.98} \]
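For a superposition of left- and right-moving plane waves, the current defined in (2.98) works out to j = (ℏk/m)(|A|² − |B|²), independent of x. A minimal numerical check, in units ℏ = m = 1 with arbitrary sample amplitudes (our choices):

```python
import cmath

hbar = m = 1.0      # units ℏ = m = 1 (our choice)
k = 1.7             # sample wave number
A, B = 1.0, 0.35 + 0.2j   # arbitrary plane-wave amplitudes

def psi(x):
    return A * cmath.exp(1j * k * x) + B * cmath.exp(-1j * k * x)

def dpsi(x):
    return 1j * k * A * cmath.exp(1j * k * x) - 1j * k * B * cmath.exp(-1j * k * x)

def current(x):
    """j(x) from eq. (2.98)."""
    return (1j * hbar / (2 * m)) * (psi(x) * dpsi(x).conjugate()
                                    - psi(x).conjugate() * dpsi(x))

expected = hbar * k / m * (abs(A) ** 2 - abs(B) ** 2)
samples = [current(x).real for x in (-2.0, 0.0, 0.7, 3.1)]  # constant in x
```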
λ̃ δ(x − a) (2.100)
The solution to the Schrödinger equation with definite energy in regions I and III is just a
plane wave.
We have already discussed that plane-wave solutions do not represent physical states. So why are we considering them here? We consider them because they form a basis for writing down physical states. We have written down such an example in (2.86) and (2.87). Any state with energy E > 0 can be written down by a suitable choice of g(k). The plane-wave states form a basis for the physical states. Hence, if we understand what happens to a plane-wave state in the presence of the potential, we can figure out what would happen to any physical state under the influence of the potential.
In ψ_I(x), the first (second) term has positive (negative) momentum and hence represents a wave travelling towards (away from) the potential. That is why the first (second) term is called the incoming (outgoing) wave function. For ψ_III(x), we can see that the second term represents the incoming wave function, and the first term represents the outgoing wave function.
We define two different basis states. Later we will see the utility of these states.
\[ \chi_{\text{in}} = \begin{pmatrix} A_I \\ B_{III} \end{pmatrix} \,, \qquad \chi_{\text{out}} = \begin{pmatrix} B_I \\ A_{III} \end{pmatrix} \tag{2.103} \]
The probability current (defined in (2.98)) in regions I and III is given by
\[ j_I = \frac{\hbar k_I}{m} \left( |A_I|^2 - |B_I|^2 \right) \tag{2.105a} \]
\[ j_{III} = \frac{\hbar k_{III}}{m} \left( |A_{III}|^2 - |B_{III}|^2 \right) \tag{2.105b} \]
Conservation of the probability current, given in (2.99), together with the fact that k_I = k_{III}, implies
This suggests that if the norm of the vectors in (2.103) are defined as
This unitary matrix is called the S matrix (or scattering matrix). It is a 2 × 2 matrix
\[ S = \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} \tag{2.110} \]
So, the S matrix simply becomes
\[ \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \tag{2.112} \]
The advantage of this definition is that the S-matrix, in the absence of any interaction,
simply becomes identity matrix 12×2 .
S depends on the potential and the momentum of the plane wave state. Often we denote
that functional dependence in the following way
S(V, k) (2.114)
Again we go back to the case of the delta function. Now we are interested in the scattering solutions
\[ \psi_I(x) = A_I e^{ikx} + B_I e^{-ikx} \,, \qquad \psi_{III}(x) = A_{III} e^{ikx} + B_{III} e^{-ikx} \tag{2.115} \]
\[ \lambda = \frac{2m\tilde\lambda}{\hbar^2} \;\Longrightarrow\; \tilde\lambda = \frac{\hbar^2 \lambda}{2m} \tag{2.116} \]
The continuity of the wave function at x = a, together with the discontinuity of its derivative (2.18), relates the coefficients; this simplifies to
\[ \begin{pmatrix} A_{III} \\ B_{III} \end{pmatrix} = \frac{1}{2k} \begin{pmatrix} 2k - i\lambda & -i\lambda \, e^{-2ika} \\ i\lambda \, e^{2ika} & 2k + i\lambda \end{pmatrix} \begin{pmatrix} A_I \\ B_I \end{pmatrix} \tag{2.120} \]
Note that this matrix relates the left basis and right basis as defined in (2.104). This definition
can be generalised to any potential.
2.6.1 Transfer matrix
The S-matrix relates in-states to out-states. We can define another matrix which relates the
left state to the right states
\[ \begin{pmatrix} A_{III} \\ B_{III} \end{pmatrix} = \begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix} \begin{pmatrix} A_I \\ B_I \end{pmatrix} \tag{2.122} \]
M is called the transfer matrix. Using (2.104), the above equation (2.122) can be written as
χIII = M χI (2.124)
From (2.105a) and (2.105b), it follows that the natural norm on the left and the right vectors is the Lorentzian norm (this is essentially a statement of unitarity)
\[ j_I = \frac{\hbar k_I}{m} \, \chi_I^\dagger \, \Omega \, \chi_I = \frac{\hbar k_I}{m} \begin{pmatrix} A_I^* & B_I^* \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} A_I \\ B_I \end{pmatrix} \tag{2.125a} \]
\[ j_{III} = \frac{\hbar k_{III}}{m} \, \chi_{III}^\dagger \, \Omega \, \chi_{III} = \frac{\hbar k_{III}}{m} \begin{pmatrix} A_{III}^* & B_{III}^* \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} A_{III} \\ B_{III} \end{pmatrix} \tag{2.125b} \]
M† Ω M = Ω (2.127)
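The pseudo-unitarity condition (2.127) can be verified directly on the delta-potential transfer matrix (2.120)/(2.121). A minimal check; k, λ, a below are arbitrary sample values of ours, with ℏ and m absorbed into λ as in (2.116):

```python
import numpy as np

k, lam, a = 1.3, 0.8, 0.5  # sample momentum, strength, position

# Transfer matrix of the delta potential, eq. (2.120)/(2.121)
M = (1.0 / (2 * k)) * np.array(
    [[2 * k - 1j * lam, -1j * lam * np.exp(-2j * k * a)],
     [1j * lam * np.exp(2j * k * a), 2 * k + 1j * lam]])

Omega = np.diag([1.0, -1.0])
# Pseudo-unitarity (2.127): M† Ω M = Ω, i.e. |A|² − |B|² is preserved
residual = np.max(np.abs(M.conj().T @ Omega @ M - Omega))
```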
Later in section 2.7, we will see that the transfer matrix obeys the product rule; if we know the
transfer matrix for two potentials separately, then we know the transfer matrix for the combined
system. This, in turn, implies that if we know the S matrix of two potentials separately, then
we know the S matrix for the combined system.
Again, we go back to the case of the delta function potential. In that case, the transfer
matrix is defined in (2.121). From this, we can determine the S-matrix
\[ S(k; \lambda, a) = \frac{1}{2k + i\lambda} \begin{pmatrix} -i\lambda \, e^{2ika} & 2k \\ 2k & -i\lambda \, e^{-2ika} \end{pmatrix} \tag{2.132} \]
1. Consider an incoming wave from Region I. It can either be reflected back to the region I
or will be transmitted to region III.
(a) If the wave is reflected back, we denote it as (I−→I). The corresponding probability
for the wave to be reflected is R11
(b) If the wave is transmitted to region III, we denote it as (I−→III). The corresponding
probability for the wave to be transmitted is T21
There are four different processes. In the presence of symmetries of the potential, some of these
processes are mapped into each other. We discuss this in section 3.1.
Again, we go back to the case of the delta-function potential. The S matrix is given in (2.132). In this case, the reflection and transmission coefficients are
\[ R = \frac{-i\lambda}{2k + i\lambda} \, e^{2ika} \,, \qquad T = \frac{2k}{2k + i\lambda} \tag{2.134} \]
It is easy to check a very special property of the delta-function potential
\[ S(k;\lambda,a)\, S(k;-\lambda,-a) = S(k;-\lambda,-a)\, S(k;\lambda,a) = \mathbb{1}_{2\times 2} \tag{2.135} \]
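Both the unitarity of (2.132) and the inverse property (2.135) are easy to confirm numerically; the parameter values below are arbitrary samples of ours:

```python
import numpy as np

def S(k, lam, a):
    """Delta-potential S matrix, eq. (2.132)."""
    return (1.0 / (2 * k + 1j * lam)) * np.array(
        [[-1j * lam * np.exp(2j * k * a), 2 * k],
         [2 * k, -1j * lam * np.exp(-2j * k * a)]])

k, lam, a = 0.9, 1.4, 0.3
S1 = S(k, lam, a)
unit_err = np.max(np.abs(S1.conj().T @ S1 - np.eye(2)))              # unitarity
inv_err = np.max(np.abs(S(k, lam, a) @ S(k, -lam, -a) - np.eye(2)))  # eq. (2.135)
```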
At this point, κ is an unknown constant. The wave function is continuous at x = a, and the
discontinuity in the derivative is
\[ \left. \frac{d\psi}{dx} \right|_{x=a^+} - \left. \frac{d\psi}{dx} \right|_{x=a^-} = -2\kappa N \tag{2.137} \]
Comparing this with (2.20) we get
\[ \kappa = -\frac{m\tilde\lambda}{\hbar^2} \tag{2.138} \]
The solution is normalizable only if λ̃ < 0. So there is only one bound state, and it exists only for λ̃ < 0. The energy of this state is (bringing back ℏ and the mass)
\[ E = -\frac{m\tilde\lambda^2}{2\hbar^2} \tag{2.139} \]
In such a situation, it is possible to compute the S matrix for the potential
\[ V(x) = \sum_i V_i(x) \tag{2.143} \]
from the knowledge of the S matrices of the individual potentials. Since the potentials are non-overlapping, let's choose the label i in an ordered fashion
U (x) = V (x − a) (2.145)
We want to find the relation between S[U] and S[V] (and also between M[U] and M[V]). Let's define a new co-ordinate y = x − a. Then our potential is simply V(y). From (2.102a) and (2.102b) we get the wave functions.
In terms of these wave functions, the scattering matrix and the transfer matrix are simply S and M; the effect of translating the potential has been nullified by the choice of co-ordinate. Let's now write them in terms of the x co-ordinate,
where U is given by
\[ U[a,k] = \begin{pmatrix} e^{ika} & 0 \\ 0 & e^{-ika} \end{pmatrix} \,, \qquad U^{-1}[a,k] = U[-a,k] \tag{2.149} \]
Again we go back to the case of the delta function⁵. The transfer matrix for a delta function located at x = 0 is M(k; λ, 0). From the expressions in (2.121) we can check that
\[ M(k;\lambda,a) = U[-a,k] \, M(k;\lambda,0) \, U[a,k] \,, \qquad S(k;\lambda,a) = U[a,k]\, S(k;\lambda,0)\, U[a,k] \tag{2.154} \]
which is consistent with (2.153) and (2.151). The expression for U[a,k] can be found in (2.149).
5
For any arbitrary 2 × 2 matrix
\[ \begin{pmatrix} e^{ika} & 0 \\ 0 & e^{-ika} \end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} e^{-ika} & 0 \\ 0 & e^{ika} \end{pmatrix} = \begin{pmatrix} a & e^{2ika}\, b \\ e^{-2ika}\, c & d \end{pmatrix} \]
2.7.2 Two non-overlapping potentials
Let’s now consider the case when there are two such potentials (see fig 2.10). Let’s say the
potential is given by
V (x) = V1 (x) + V2 (x) (2.155)
V1 (x) is non-zero only in region II, and V2 (x) is non-zero only in region IV. In this case, we want
to find the transfer matrix that relates region V to region I.
Since the potential in region III is zero, from (2.124) it follows that
Please note that the transfer matrix in the product appears in a particular order. If we have
multiple potentials as given in (2.143) (with the convention given in (2.144)), the transfer matrix
for the full potential is given by
Let’s now consider a series of potentials. Their S matrices and hence the transfer matrices are
given by
r r′
t′i − iti i rtii ri′ ti
Mi = r′ 1
⇐⇒ S[Mi ] = ′ (2.161)
− tii ti
t i r i
Then consider the case when both V_i and V_j are present, with V_j to the left of V_i. Then the transfer matrix is given by
\[ M_i M_j = \frac{1}{t_i t_j} \begin{pmatrix} (r_i r_i' - t_i t_i')(r_j r_j' - t_j t_j') - r_i r_j' & \;\; t_i t_i' r_j - r_i r_i' r_j + r_i \\ r_j'(r_i' r_j - 1) - r_i' t_j t_j' & \;\; 1 - r_i' r_j \end{pmatrix} \tag{2.162} \]
All these components have a physical meaning. Let's equate it to the transmission and reflection coefficients of the combined system
\[ \begin{pmatrix} r_j' + \dfrac{r_i' t_j t_j'}{1 - r_i' r_j} & \dfrac{t_i t_j}{1 - r_i' r_j} \\[8pt] \dfrac{t_i' t_j'}{1 - r_i' r_j} & r_i + \dfrac{t_i t_i' r_j}{1 - r_i' r_j} \end{pmatrix} = \begin{pmatrix} r_{ij}' & t_{ij} \\ t_{ij}' & r_{ij} \end{pmatrix} \tag{2.164} \]
We know that the transfer matrix satisfies a product relation (see (2.157)). However, this is not true for S matrices. It is clear from the expressions in (2.163) and (2.161) that
\[ S\!\left[ M_i M_j \right] \neq S\!\left[ M_i \right] S\!\left[ M_j \right] \tag{2.165} \]
1. For non-overlapping potentials, we can construct the transfer matrix of the combined system from the transfer matrices of the individual potentials: it is simply the product of the individual transfer matrices, as shown in (2.163). The product relation is not true for S matrices!
2. It is possible to find the bound states from the transfer matrix; we will discuss that in
section 3.4.
2.9.1 Bound states
At first, we analyze the bound states. Bound states have normalizable solutions. Let us consider a wave function of the form
\[ \psi(x) = \begin{cases} N_I \, e^{\kappa(x-a_1)} & x < a_1 \\ N_{II}^+ \, e^{\kappa x} + N_{II}^- \, e^{-\kappa x} & a_1 < x < a_2 \\ N_{III} \, e^{-\kappa(x-a_2)} & x > a_2 \end{cases} \tag{2.167} \]
Note that for λ₂ = 0 we get back (2.138). The solution to the above equation depends on the following three dimensionless parameters
\[ \frac{\lambda_2}{\lambda_1} \,, \qquad \frac{a_2}{a_1} \,, \qquad \lambda_1 a_2 \tag{2.175} \]
Then, in order to find the bound states, we need to solve
\[ 4\kappa^2 + 2\kappa(\lambda_1 + \lambda_2) + \lambda_1 \lambda_2 \left( 1 - e^{-2(a_2 - a_1)\kappa} \right) = 0 \tag{2.176} \]
Now we would like to study various limits first before solving this equation
1. Coincident limit: a₂ − a₁ = 0. Let the distance between the two delta functions go to zero, i.e. the potential reduces to a single delta function of strength λ₁ + λ₂. In that case, the equation reduces to
\[ 2\kappa \left[ 2\kappa + (\lambda_1 + \lambda_2) \right] = 0 \;\Longrightarrow\; \kappa = -\frac{\lambda_1 + \lambda_2}{2} \tag{2.177} \]
So in the coincident limit, there is one bound state iff λ₁ + λ₂ < 0.
So we have two bound states in this case, provided λ₁, λ₂ < 0. If the two deltas have opposite signs, then there is only one bound state. From this analysis, it is clear that as we change a₂ − a₁ from zero to ∞, a new bound state appears! We can show this phenomenon in Mathematica.
λ1 = −λ2 = λ a2 − a1 = a (2.179)
So this can have at most one bound state, and that too beyond a critical distance between
the potentials.
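The appearance of a new bound state as the separation grows can also be demonstrated in Python: the sketch below simply counts the sign changes of the left-hand side of (2.176) for κ > 0. The strength λ = 2 and the separations are arbitrary sample values of ours; for two attractive deltas λ₁ = λ₂ = −λ, the second bound state appears beyond the critical separation a₂ − a₁ = 2/λ.

```python
import math

def f(kappa, lam1, lam2, a):
    """Left-hand side of the bound-state condition (2.176); a = a2 - a1."""
    return (4 * kappa ** 2 + 2 * kappa * (lam1 + lam2)
            + lam1 * lam2 * (1 - math.exp(-2 * a * kappa)))

def count_bound_states(lam1, lam2, a, kmax=10.0, n=20000):
    """Count sign changes of f on (0, kmax]; each one is a bound state."""
    roots = 0
    prev = f(kmax / n, lam1, lam2, a)  # start just above kappa = 0
    for i in range(2, n + 1):
        cur = f(i * kmax / n, lam1, lam2, a)
        if prev * cur < 0:
            roots += 1
        prev = cur
    return roots

lam = 2.0  # sample strength; two attractive deltas lam1 = lam2 = -lam
n_close = count_bound_states(-lam, -lam, a=0.5)  # a < 2/lam: one bound state
n_far = count_bound_states(-lam, -lam, a=3.0)    # a > 2/lam: a second one appears
```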
2.10.1 Step potentials
Consider a potential of the following form
\[ V(x) = \begin{cases} 0 & x < x_\star \\ V & x > x_\star \end{cases} \tag{2.182} \]
We call it a right-step potential because the step goes up towards the right. Similarly, one can consider a left-step potential.
This potential has no bound states. It only has scattering states. But there are two types of scattering solutions
• E ≥ V
• E < V
Scattering solutions: E > V The wave function takes the following form
\[ \psi_I(x) = A_I e^{ik_I x} + B_I e^{-ik_I x} \,, \qquad x < x_\star \tag{2.183} \]
\[ \psi_{II}(x) = A_{II} e^{ik_{II} x} + B_{II} e^{-ik_{II} x} \,, \qquad x > x_\star \tag{2.184} \]
The continuity of ψ(x) and ψ′(x) at x = x_⋆ gives
\[ A_I e^{ik_I x_\star} + B_I e^{-ik_I x_\star} = A_{II} e^{ik_{II} x_\star} + B_{II} e^{-ik_{II} x_\star} \tag{2.185a} \]
\[ k_I \left( A_I e^{ik_I x_\star} - B_I e^{-ik_I x_\star} \right) = k_{II} \left( A_{II} e^{ik_{II} x_\star} - B_{II} e^{-ik_{II} x_\star} \right) \tag{2.185b} \]
So, in this case, the transfer matrix is given by
\[ T_{\text{rightstep}}[k_I, k_{II}, x_\star] = \frac{1}{2k_{II}} \begin{pmatrix} e^{ix_\star(k_I - k_{II})} (k_I + k_{II}) & e^{-ix_\star(k_I + k_{II})} (k_{II} - k_I) \\ e^{ix_\star(k_I + k_{II})} (k_{II} - k_I) & e^{-ix_\star(k_I - k_{II})} (k_I + k_{II}) \end{pmatrix} \tag{2.186} \]
From the definition, it is clear that
\[ \left( T_{\text{rightstep}}[k_I, k_{II}, x_\star] \right)^{-1} = T_{\text{rightstep}}[k_{II}, k_I, x_\star] \tag{2.187} \]
Then we can compute the S matrix
\[ S_{\text{rightstep}}[k_I, k_{II}, x_\star] = \frac{1}{k_I + k_{II}} \begin{pmatrix} e^{2ix_\star k_I} (k_I - k_{II}) & 2 e^{ix_\star(k_I - k_{II})} k_{II} \\ 2 e^{ix_\star(k_I - k_{II})} k_I & e^{-2ix_\star k_{II}} (k_{II} - k_I) \end{pmatrix} \tag{2.188} \]
Consider the special case when the step is at the origin, x_⋆ = 0.
\[ S_{\text{rightstep}}[k_I, k_{II}, 0] = \frac{1}{k_I + k_{II}} \begin{pmatrix} k_I - k_{II} & 2k_{II} \\ 2k_I & k_{II} - k_I \end{pmatrix} \tag{2.189} \]
From these equations, we get
\[ R_I \equiv B_I = \frac{k_I - k_{II}}{k_I + k_{II}} \,, \qquad T_{II} \equiv A_{II} = \frac{2k_I}{k_I + k_{II}} \tag{2.190} \]
Probability currents We choose the initial wave to come only from x = −∞. Then
\[ A_I = 1 \,, \qquad B_{II} = 0 \tag{2.191} \]
\[ j_{\text{incident}} = \frac{\hbar k_I}{m} \tag{2.192a} \]
\[ j_{\text{reflected}} = \frac{\hbar k_I}{m} \, |R_I|^2 \tag{2.192b} \]
\[ j_{\text{transmitted}} = \frac{\hbar k_{II}}{m} \, |T_{II}|^2 \tag{2.192c} \]
Note that when E = V (i.e. k_{II} = 0), the transmitted current vanishes.
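These three currents satisfy j_incident = j_reflected + j_transmitted, as probability conservation demands; note that it is the momentum-weighted combination |R_I|² + (k_II/k_I)|T_II|² that equals one, not |R_I|² + |T_II|². A quick check in units ℏ = m = 1 with sample values of E and V (our choices):

```python
import math

hbar = m = 1.0   # units ℏ = m = 1 (our choice)
E, V = 2.0, 1.2  # sample energies with E > V
kI = math.sqrt(2 * m * E) / hbar
kII = math.sqrt(2 * m * (E - V)) / hbar

R = (kI - kII) / (kI + kII)  # reflection amplitude, eq. (2.190)
T = 2 * kI / (kI + kII)      # transmission amplitude, eq. (2.190)

j_inc = hbar * kI / m
j_ref = hbar * kI / m * R ** 2
j_tra = hbar * kII / m * T ** 2
balance = j_ref + j_tra - j_inc  # should vanish by probability conservation
```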
The limit V → ∞ gives quantum mechanics on a half-line with Dirichlet boundary condition.
Scattering solutions: E < V We are interested in the solutions of the time-independent Schrödinger equation with
\[ E < V \tag{2.194} \]
In this case we define the following variables
\[ k = \sqrt{\frac{2mE}{\hbar^2}} \,, \qquad \kappa = \sqrt{\frac{2m(V - E)}{\hbar^2}} \tag{2.195} \]
The solution is obtained from the previous (E > V) case by the replacement
\[ k_{II} \longrightarrow i\kappa \tag{2.196} \]
Another important feature of this solution is that it is non-zero in region II; this implies that the probability of finding the particle in region II is non-zero. Again, this is a sharp departure from classical behaviour. In classical mechanics, if a particle has energy less than the height of the potential barrier, then it remains confined to region I.
At x = x_⋆, we impose continuity of the wave function and of its derivative to obtain
\[ \frac{A_L e^{ikx_\star} + B_L e^{-ikx_\star}}{ik \left( A_L e^{ikx_\star} - B_L e^{-ikx_\star} \right)} = \frac{A_R e^{-\kappa x_\star} + B_R e^{\kappa x_\star}}{-\kappa \left( A_R e^{-\kappa x_\star} - B_R e^{\kappa x_\star} \right)} \;\Longrightarrow\; \frac{A_L e^{ikx_\star} + B_L e^{-ikx_\star}}{A_L e^{ikx_\star} - B_L e^{-ikx_\star}} = \frac{ik \left( A_R e^{-\kappa x_\star} + B_R e^{\kappa x_\star} \right)}{-\kappa \left( A_R e^{-\kappa x_\star} - B_R e^{\kappa x_\star} \right)} \tag{2.197} \]
Then from the above equation
We can also write A_R and B_R in terms of A_L and B_L
\[ \frac{A_R e^{-\kappa x_\star} + B_R e^{\kappa x_\star}}{A_R e^{-\kappa x_\star} - B_R e^{\kappa x_\star}} = \frac{-\kappa \left( A_L e^{ikx_\star} + B_L e^{-ikx_\star} \right)}{ik \left( A_L e^{ikx_\star} - B_L e^{-ikx_\star} \right)} \;\Longrightarrow\; \frac{A_R e^{-\kappa x_\star}}{B_R e^{\kappa x_\star}} = \frac{-(\kappa - ik) A_L e^{ikx_\star} - (\kappa + ik) B_L e^{-ikx_\star}}{-(\kappa + ik) A_L e^{ikx_\star} - (\kappa - ik) B_L e^{-ikx_\star}} \tag{2.201} \]
Now we substitute (2.199) to get
\[ \frac{A_L e^{ikx_\star}}{B_L e^{-ikx_\star}} = -e^{-2i\theta(k,\kappa)} \tag{2.203} \]
In the limit θ(k, κ) → 0 this becomes
\[ \frac{A_L e^{ikx_\star}}{B_L e^{-ikx_\star}} = -1 \tag{2.204} \]
Property of the angle defined in (2.199) Let's consider the angle θ(k, κ) defined in (2.199)
\[ \tan\theta(k,\kappa) = \frac{k}{\kappa} = \sqrt{\frac{E}{V - E}} \tag{2.205} \]
We can see that it is a strictly increasing function of E. We can also see that
\[ 0 \le E \le V \;\Longrightarrow\; 0 \le \theta(k,\kappa) \le \frac{\pi}{2} \tag{2.206} \]
We can also take the limit V → ∞ at constant E. In that case, we get
This is similar to the previous potential with regions I and II interchanged. We are first interested in the solutions with
\[ E > V \tag{2.209} \]
We can compute the transfer matrix and the S matrix from the previous formulae
\[ T_{\text{leftstep}}[k_I, k_{II}, \hat{x}] = \left( T_{\text{rightstep}}[k_{II}, k_I, \hat{x}] \right)^{-1} = \frac{1}{2k_{II}} \begin{pmatrix} e^{i\hat{x}(k_I - k_{II})} (k_I + k_{II}) & e^{-i\hat{x}(k_I + k_{II})} (k_{II} - k_I) \\ e^{i\hat{x}(k_I + k_{II})} (k_{II} - k_I) & e^{-i\hat{x}(k_I - k_{II})} (k_I + k_{II}) \end{pmatrix} \tag{2.210} \]
Then we can compute the S matrix
\[ S_{\text{leftstep}}[k_I, k_{II}, \hat{x}] = \frac{1}{k_I + k_{II}} \begin{pmatrix} e^{2i\hat{x} k_I} (k_I - k_{II}) & 2 e^{i\hat{x}(k_I - k_{II})} k_{II} \\ 2 e^{i\hat{x}(k_I - k_{II})} k_I & e^{-2i\hat{x} k_{II}} (k_{II} - k_I) \end{pmatrix} \tag{2.211} \]
Let’s now look for the solutions with
E<V (2.212)
AR eikx̂
= −e2iθ(k,κ) (2.217)
BR e−ikx̂
AR eikx̂
= −1 (2.218)
BR e−ikx̂
Here V is a positive number. We have depicted the potential in fig. 2.12. The potential has parity symmetry. The solutions of the Schrödinger equation in the presence of this potential are of two different types depending on the energy
[Figure: rectangular barrier of height V between x = −ℓ/2 and x = ℓ/2]
We start by considering the case E > V. The momenta in regions I and II are given by
\[ k_I = \sqrt{\frac{2mE}{\hbar^2}} \,, \qquad k_{II} = \sqrt{\frac{2m(E - V)}{\hbar^2}} \tag{2.222} \]
We start by writing down the transfer matrix. This is simply a product of two transfer matrices. Again we use Mathematica for the computation; we write down the results here. The transfer matrix is given by
\[ T_{\text{barrier}}[k_I, k_{II}, \ell] = T_{\text{leftstep}}\!\left[ k_{II}, k_I, \frac{\ell}{2} \right] T_{\text{rightstep}}\!\left[ k_I, k_{II}, -\frac{\ell}{2} \right] \tag{2.223} \]
It is given by
\[ \frac{1}{4k_I k_{II}} \begin{pmatrix} (k_I + k_{II})^2 e^{-i\ell(k_I - k_{II})} - (k_I - k_{II})^2 e^{-i\ell(k_I + k_{II})} & -2i(k_I - k_{II})(k_I + k_{II}) \sin(k_{II}\ell) \\ 2i(k_I - k_{II})(k_I + k_{II}) \sin(k_{II}\ell) & (k_I + k_{II})^2 e^{i\ell(k_I - k_{II})} - (k_I - k_{II})^2 e^{i\ell(k_I + k_{II})} \end{pmatrix} \tag{2.224} \]
From the transfer matrix, we can compute the S matrix
\[ \begin{pmatrix} \dfrac{(k_I - k_{II})(k_I + k_{II})}{k_I^2 + 2ik_Ik_{II}\cot(k_{II}\ell) + k_{II}^2} & \dfrac{2ik_Ik_{II}}{(k_I^2 + k_{II}^2)\sin(k_{II}\ell) + 2ik_Ik_{II}\cos(k_{II}\ell)} \\[8pt] \dfrac{2ik_Ik_{II}}{(k_I^2 + k_{II}^2)\sin(k_{II}\ell) + 2ik_Ik_{II}\cos(k_{II}\ell)} & \dfrac{(k_I - k_{II})(k_I + k_{II})}{k_I^2 + 2ik_Ik_{II}\cot(k_{II}\ell) + k_{II}^2} \end{pmatrix} e^{-ik_I\ell} \tag{2.225} \]
The transmission probability is
\[ \frac{4k_I^2 k_{II}^2}{4k_I^2 k_{II}^2 + (k_I^2 - k_{II}^2)^2 \sin^2(k_{II}\ell)} \tag{2.227} \]
We can see that the transmission and reflection probabilities are functions of three quantities: ℓ, k_I and k_{II}; from eqn (2.222) we know that k_I and k_{II} can be written as functions of the energy E and the height of the potential V. Another important feature is that even if E > V, the reflection coefficient is generically non-zero. This is not true classically. In classical mechanics, a particle is never reflected back if its energy is more than the height of the potential.
Let’s first consider functional dependence on ℓ. If we vary ℓ (keeping E and V unchanged)
then sin2 (kII ℓ) varies between 0 and 1. The transmission probability is maximum when
kII ℓ = nπ (2.228)
In this case the transmission probability is 1 and the reflection probability is 0. The complete wave is transmitted to the other side. Note that the condition also depends on the energy of the wave. For any such potential, there are certain frequencies which are completely transmitted; but this is never true for all possible values of the frequency.
We already know that the minimum value of the reflection probability is zero. The maximum
value of the reflection probability is
$$\frac{(k_I^2-k_{II}^2)^2}{(k_I^2+k_{II}^2)^2} \tag{2.229}$$
[Plot: the reflection probability as a function of ℓ; it oscillates between 0 and (k_I^2 − k_{II}^2)^2/(k_I^2 + k_{II}^2)^2 with period π/k_{II}.]
We want to analyze what happens when we keep ℓ unchanged but vary the dimensionless ratio E/V. But we will do this after finding the solutions for E < V. This dependence can be found in the next section.
Let’s first consider the case when E < V . In classical mechanics, the Particle starts from the
region I; then, it remains in region I for all. The Particle will not have enough energy to go to
region II and hence also to region III. To understand the behaviour of the system in the Quantum
system, we need to solve the time-independent Schrodinger equation in this background. We
define
$$k = \sqrt{\frac{2mE}{\hbar^2}}\,, \qquad \kappa = \sqrt{\frac{2m(V-E)}{\hbar^2}} \tag{2.231}$$
Then the transfer matrix is given by
$$\frac{1}{2\kappa k}\begin{pmatrix} e^{-ik\ell}\big(2\kappa k\cosh(\kappa\ell) + i(k^2-\kappa^2)\sinh(\kappa\ell)\big) & -i(\kappa^2+k^2)\sinh(\kappa\ell) \\ i(\kappa^2+k^2)\sinh(\kappa\ell) & e^{ik\ell}\big(2\kappa k\cosh(\kappa\ell) - i(k^2-\kappa^2)\sinh(\kappa\ell)\big) \end{pmatrix} \tag{2.232}$$
the S matrix is given by
$$e^{-ik\ell}\begin{pmatrix} \dfrac{k^2+\kappa^2}{k^2+2ik\kappa\coth(\kappa\ell)-\kappa^2} & \dfrac{2ik\kappa}{(k-\kappa)(k+\kappa)\sinh(\kappa\ell)+2ik\kappa\cosh(\kappa\ell)} \\[6pt] \dfrac{2ik\kappa}{(k-\kappa)(k+\kappa)\sinh(\kappa\ell)+2ik\kappa\cosh(\kappa\ell)} & \dfrac{k^2+\kappa^2}{k^2+2ik\kappa\coth(\kappa\ell)-\kappa^2} \end{pmatrix} \tag{2.233}$$
Our convention is that the transmission probability is
$$\frac{(2k\kappa)^2}{(2k\kappa)^2 + (k^2+\kappa^2)^2\sinh^2(\kappa\ell)} \tag{2.235}$$
Putting the values of k and κ, we get
$$\frac{4E(V-E)}{4E(V-E) + V^2\sinh^2\!\Big(\sqrt{\tfrac{2m(V-E)}{\hbar^2}}\,\ell\Big)} \tag{2.236}$$
So the particle has a non-zero probability of being in region III. Classically this is not possible.
This is called tunnelling. This is purely a Quantum Mechanical phenomenon. It has many
drastic consequences.
$$|T(E)|^2 = \frac{4E(V-E)}{4E(V-E) + V^2\sinh^2\!\Big(\sqrt{\tfrac{2m(V-E)}{\hbar^2}}\,\ell\Big)}\,, \qquad E<V \tag{2.237}$$
$$|T(E)|^2 = \frac{4E(E-V)}{4E(E-V) + V^2\sin^2\!\Big(\sqrt{\tfrac{2m(E-V)}{\hbar^2}}\,\ell\Big)}\,, \qquad E>V \tag{2.238}$$
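The two branches (2.237)–(2.238) can be evaluated numerically. A minimal sketch, in units with ℏ = 2m = 1 (an assumption made here so that √(2m(V−E))/ℏ reduces to √(V−E)):

```python
import numpy as np

# The closed forms (2.237)-(2.238) in units with hbar = 2m = 1 (assumption).
def transmission(E, V, ell):
    """Transmission probability |T(E)|^2 for a barrier of height V, width ell."""
    if E < V:
        kappa = np.sqrt(V - E)            # tunnelling branch
        return 4*E*(V - E) / (4*E*(V - E) + V**2 * np.sinh(kappa * ell)**2)
    k2 = np.sqrt(E - V)                   # over-the-barrier branch
    return 4*E*(E - V) / (4*E*(E - V) + V**2 * np.sin(k2 * ell)**2)
```

For E < V the probability is small but non-zero (tunnelling), and it decreases as the barrier gets thicker; for E = V + (nπ/ℓ)² the sine vanishes and the barrier is perfectly transparent, which is the resonance condition (2.228).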
This is the underlying reason why nuclei undergo alpha decay. It also leads to many other important phenomena. For example,
[Figure: a potential well between x = a and x = b, with barriers of height V_I on the left and V_{III} on the right.]
We are assuming V_I, V_{III} > 0. In the limit V_I, V_{III} → ∞, we get back the situation of a quantum particle in a box.
Let's briefly discuss the situation in classical mechanics. In classical mechanics, the minimum energy configuration is the minimum of the potential (in this case, the bottom of the well). In particular, the energy of the minimum energy configuration does not depend on the height of the potential. In the following analysis, we will see that in quantum mechanics the situation changes drastically.
First we focus on the solutions when
$$0 < E < V_I\,,\ V_{III} \tag{2.240}$$
This corresponds to a bound state. In this case we define the following variables
$$\kappa_I = \sqrt{\frac{2m(V_I-E)}{\hbar^2}}\,, \qquad k = \sqrt{\frac{2mE}{\hbar^2}}\,, \qquad \kappa_{III} = \sqrt{\frac{2m(V_{III}-E)}{\hbar^2}} \tag{2.241}$$
From (2.240), we can see that
$$E = 0 \implies k = 0\,,\ \kappa_I = \sqrt{\frac{2mV_I}{\hbar^2}}\,; \qquad E = V_I \implies k = \sqrt{\frac{2mV_I}{\hbar^2}}\,,\ \kappa_I = 0 \tag{2.242}$$
Then the piecewise solution is given by
Normalizability of the wave function Ψ_I(x) (Ψ_{III}(x)) demands that A_I = 0 (B_{III} = 0). Now we impose continuity of Ψ(x) and Ψ′(x) at x = a and at x = b.
1. At x = a, We use the result (2.217), with L = I and R = II and x̂ = a
$$\frac{A_{II}}{B_{II}} = -e^{2i\theta(k,\kappa_I)}\,e^{-2ika} \tag{2.244}$$
2. At x = b, we use the result (2.203), with L = II and R = III and x̂ = b
$$\frac{A_{II}}{B_{II}} = -e^{-2i\theta(k,\kappa_{III})}\,e^{-2ikb} \tag{2.245}$$
The length of the well is given by L = b − a. Combining these two equations, we get
e2ikL = e−i2(θI +θIII ) (2.246)
where
θI = θ(k, κI ) , θIII = θ(k, κIII ) (2.247)
k is an increasing function of E, while κ_I and κ_{III} are decreasing functions of E. So, θ_I and θ_{III} are increasing functions of E.
$$E = 0 \implies \theta_I = 0\,, \qquad E = V_I \implies \theta_I = \frac{\pi}{2} \tag{2.248}$$
The solution of this equation is
$$k = \frac{n\pi}{L} - \frac{1}{L}\big(\theta_I + \theta_{III}\big)\,, \qquad n \in \mathbb{Z}^{+} \tag{2.249}$$
The unknown variable in the above equation is E; k, θ_I and θ_{III} depend on the energy and other externally provided parameters. The restriction n ∈ Z⁺ comes from the fact that k is a positive quantity.
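The transcendental condition (2.249) has no closed-form solution, but it is easy to solve numerically. A sketch, in units with ℏ = 2m = 1 and reading θ(k, κ) = arctan(k/κ) for the phases defined above (both are assumptions of this sketch):

```python
import numpy as np

# Solve k L + theta_I + theta_III = n*pi (eqn 2.249) by bisection,
# in units hbar = 2m = 1, with theta(k, kappa) = arctan(k/kappa).
def bound_state_k(n, VI, VIII, L):
    """n-th bound state wavevector of the asymmetric well, or None."""
    def f(k):
        kap_I = np.sqrt(VI - k**2)        # valid while E = k^2 < VI
        kap_III = np.sqrt(VIII - k**2)
        return k * L + np.arctan(k / kap_I) + np.arctan(k / kap_III) - n * np.pi
    lo, hi = 1e-12, np.sqrt(min(VI, VIII)) * (1 - 1e-12)
    if f(lo) * f(hi) > 0:
        return None                       # no n-th bound state for these parameters
    for _ in range(200):                  # bisection; f is monotonic in k
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

k1 = bound_state_k(1, 50.0, 50.0, 1.0)    # ground state of a deep symmetric well
```

Raising V_I and V_{III} pushes k₁ (and hence E_gs = k₁²) up towards the box value π/L, which matches the observations about E_gs(V_I, V_{III}, L) below.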
Case III: We focus on the case when
A few observations We can see that the ground state depends on the height of the potential
and the length of the region
Egs (VI , VIII , L) (2.259)
1. E_gs is a strictly increasing function of V_I and V_{III}. This is strikingly different from classical mechanics.
2. From the expression in (2.249), we can see that Egs decreases as L increases.
$$V(x) = \begin{cases} V & x \le a \\ 0 & a < x < b \\ V & b \le x \le c \\ 0 & c < x < d \\ V & d \le x \end{cases} \tag{2.260}$$
[Figure: the double well potential; the regions 0, I, II, III, IV are separated by x = a, b, c, d.]
1. At x = a, we have a right step potential. So, we use the result (2.217), with L = 0 and
R = I and x̂ = a.
$$\frac{A_I}{B_I} = -e^{2i\theta(k,\kappa)}\,e^{-2ika} \tag{2.264}$$
We define
$$A_I\,e^{ika - i\theta(k,\kappa)} = -B_I\,e^{-ika + i\theta(k,\kappa)} = N_I \tag{2.265}$$
2. At x = d, we have a left step potential. So, we use the result (2.203), with L = III and
R = IV and x̂ = d.
$$\frac{A_{III}}{B_{III}} = -e^{-2i\theta(k,\kappa)}\,e^{-2ikd} \tag{2.266}$$
We define
$$A_{III}\,e^{ikd + i\theta(k,\kappa)} = -B_{III}\,e^{-ikd - i\theta(k,\kappa)} = N_{III} \tag{2.267}$$
3. At x = b, we demand
$$\lim_{x\to b^-}\Psi_I(x) = \lim_{x\to b^+}\Psi_{II}(x)\,, \qquad \lim_{x\to b^-}\Psi'_I(x) = \lim_{x\to b^+}\Psi'_{II}(x) \tag{2.268}$$
Let’s define b − a = LI
$$\frac{A_{II}e^{-\tilde\kappa b}+B_{II}e^{\tilde\kappa b}}{A_{II}e^{-\tilde\kappa b}-B_{II}e^{\tilde\kappa b}} = \frac{-\tilde\kappa\sin(kL_I+\theta)}{k\cos(kL_I+\theta)} \;\implies\; \frac{A_{II}}{B_{II}}\,e^{-2\tilde\kappa b} = \frac{\tilde\kappa\sin(kL_I+\theta)-k\cos(kL_I+\theta)}{\tilde\kappa\sin(kL_I+\theta)+k\cos(kL_I+\theta)} \tag{2.271}$$
4. Similarly from x = c, we get
$$\lim_{x\to c^-}\Psi_{II}(x) = \lim_{x\to c^+}\Psi_{III}(x)\,, \qquad \lim_{x\to c^-}\Psi'_{II}(x) = \lim_{x\to c^+}\Psi'_{III}(x) \tag{2.272}$$
LI = LIII = L (2.276)
Case IA Let’s consider the simple case LII = ∞. In this case, we define θ.
$$\tan\theta = \frac{k}{\tilde\kappa} \implies \tan(kL) = -\tan\theta = \tan(n\pi-\theta) \implies k = \frac{n\pi}{L} - \frac{\theta}{L} \tag{2.278}$$
In this limit, it reduces to the case of asymmetric well (Case III of sec 3.3.2). Consider the RHS
of (2.277). We define
$$f(\tilde\kappa) = e^{-\frac{\tilde\kappa}{2}L_{II}} \tag{2.279}$$
This factor is essentially coming from Tunnelling wave function in region II. This decreases as
we increase LII ; it also decreases if we increase the height of the potential in region II. Then
(2.277) becomes
$$\frac{\tilde\kappa\sin(kL+\theta) + k\cos(kL+\theta)}{\tilde\kappa\sin(kL+\theta) - k\cos(kL+\theta)} = \pm f(\tilde\kappa) \tag{2.280}$$
Let’s first focus on the solution with plus sign
$$\tan(kL+\theta) = -\frac{(1+f(\tilde\kappa))\,k}{(1-f(\tilde\kappa))\,\tilde\kappa} \tag{2.282}$$
Let’s define
$$\tan\theta = \frac{k}{\tilde\kappa}\,, \qquad \tan\bar\theta = \frac{(1+f(\tilde\kappa))\,k}{(1-f(\tilde\kappa))\,\tilde\kappa} \tag{2.283}$$
From the definition of tan θ in (2.278) we can see that
We denote the solution to the above equation as k(n, +); + denotes the choice that we have
made in (2.282)
$$k(n,+) = \frac{n\pi}{L} - \frac{1}{L}\big(\theta + \bar\theta\big) \tag{2.285}$$
In the case of infinite separation LII → ∞, f (κ̃) goes to zero
$$k^{(dw)}(n,+) < k^{(asw)}(n) < k^{(hwell)}(n) < k^{(box)}(n) \qquad \forall\, n \tag{2.287}$$
Role of tunnelling We saw that there are two sets of solutions: k^{(dw)}(n, +) and k^{(dw)}(n, −). In the limit of infinite separation, both of these solutions become the same, and they are equal to the solution of the asymmetric well. However, for finite separation k^{(dw)}(n, +) is less than k^{(asw)}(n) and k^{(dw)}(n, −) is greater than k^{(asw)}(n). This energy splitting is due to f(κ̃) = e^{−κ̃L_{II}/2}. This factor comes from the tunnelling solution in region II^6.
Let’s try to understand this better. In the infinite separation, we have two independent
potential wells. Thus, with this limit, the situation is reduced to two independent potential
wells. For finite separation, one would naively expect doubly degenerate bound states. But due
to tunnelling, the energy splits. One of the states has lower energy than that of a single well,
and the other one has more energy. One way to understand energy splitting is the following:
We can take a symmetric and anti-symmetric combination of the ground states. The symmetric
combination has no node, but the anti-symmetric combination has one node. Hence, the anti-
symmetric combination has higher energy. To summarise, the ground state energy of a double
well is lower than the ground state energy of a single well due to quantum tunnelling. The effect
of tunnelling is often known as the hopping term in condensed matter physics.
This is the origin of the formation of bands in many body systems. A band is a collection of
many energy levels, and the gap between them is very small. The gap is so small that it appears
as continuous.
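The hopping picture of band formation can be made concrete with a toy tight-binding model: each well contributes one orbital of energy e₀, and tunnelling between neighbouring wells gives a hopping amplitude −t. The values e₀ = 0 and t = 1 below are arbitrary choices for illustration and are not derived from the double-well potential above.

```python
import numpy as np

# Toy tight-binding chain: on-site energies e0 on the diagonal and hopping
# amplitudes -t on the two off-diagonals (open boundary conditions).
def chain_levels(n_sites, e0=0.0, t=1.0):
    H = (np.diag([e0] * n_sites)
         + np.diag([-t] * (n_sites - 1), 1)
         + np.diag([-t] * (n_sites - 1), -1))
    return np.linalg.eigvalsh(H)     # sorted energy levels

two = chain_levels(2)    # double well: levels split to e0 - t and e0 + t
many = chain_levels(50)  # 50 wells: levels fill the band (e0 - 2t, e0 + 2t)
```

For two sites the single level splits symmetrically (the bonding/anti-bonding pair); for fifty sites the fifty levels crowd into the interval (e₀ − 2t, e₀ + 2t) with tiny gaps, which is exactly the band described above.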
6
This effect is also known as the instanton correction to the energy levels, or the “non-perturbative” correction to the energy levels. The basic reason is the same; these jargons are used more in the path integral formalism or in Quantum Field Theory.
Figure 2.18: Ground state wave function in symmetric double well
Figure 2.19: Wave function of first excited state in symmetric double well
• “Atom” stands for a quantum particle in a potential that has only one minimum.
The location of the minimum is known as the “nucleus”.
• “Molecule” stands for the situation when a Quantum particle is in a potential with two or
more minima.
• In the case of the double well potential, we found that the ground state is a new state which resembles the shape of the linear superposition of the ground states of the individual wells. It is a new state of the double well and it has lower energy than the ground state energy of the individual well. This situation is known as “hybridisation”. It essentially means that molecular orbitals have lower energy than atomic orbitals.
• From the wave functions of the double well (in fig. 2.18 and fig. 2.19) we can see that the modulus of the wave function has maxima in multiple places. This means that the quantum particle can be found in any of those wells. Unlike a classical particle, it is not restricted to a single well. This situation is often denoted by the phrase “delocalised electron”.
• Example of the double well also helps to understand bonding and Anti-bonding molecular
orbitals. In this case, the energy eigenstate of (left/right) potential well are the atomic
orbitals. We have plotted two molecular orbitals (i.e. the energy eigenstates of the double
well) in fig. 2.18 and in fig. 2.19. From the graphs, the molecular orbital in fig. 2.18 (fig.
2.19) resembles the symmetric (anti-symmetric) linear superposition of the two atomic
orbitals (i.e. the ground state energy wave functions of the individual wells). Consider the
molecular orbital in fig. 2.18. This orbital has lower energy (eigenvalue) than that of the atomic orbital. This is known as the bonding molecular orbital. The molecular orbital in fig. 2.19 has more energy than that of the atomic orbitals. It is known as the anti-bonding molecular orbital in the chemistry literature.
From the above discussion, we can see the role of tunnelling in chemical bonding. We have already mentioned that the effect of tunnelling is known as “hopping” in the condensed matter literature. Bands form in solids due to hopping. We are not elaborating on that here. The reader is requested to consult any standard condensed matter textbook for this.
Let’s consider a particle in region I with energy E < V . First we discuss the situation in classical
mechanics. In Classical mechanics, the particle will be stuck there and can never come out in
region III. However, as we will derive below, the situation is different in Quantum Mechanics.
We begin by defining the variables
$$k = \sqrt{\frac{2mE}{\hbar^2}}\,, \qquad \kappa = \sqrt{\frac{2m(V-E)}{\hbar^2}} \tag{2.293}$$
The solution of the Schrodinger equation in Region I (subjected to the boundary condition at
the left end) is
$$\begin{aligned} \Psi_I(x) &= A_I\sin kx \\ \Psi_{II}(x) &= A_{II}\,e^{-\kappa x} + B_{II}\,e^{\kappa x} \\ \Psi_{III}(x) &= A_{III}\,e^{ikx} + B_{III}\,e^{-ikx} \end{aligned} \tag{2.294}$$
[Figure: a potential of height V dividing the line into regions I, II and III.]
(a) In the class, we listed the various properties of a quantum particle in a box. They
are listed in sec 1.1.
Check whether they are true for Quantum Harmonic Oscillator or not.
(b) Plot the wave function of the first few excited states of Quantum Harmonic Oscillator
using Mathematica.
(c) Consider the large quantum number limit of the system and compare its behaviour with the behaviour of a classical harmonic oscillator.
(2.299)
Find the form of the wave function as a function of x for the energy eigenstates.
$$H_\ell = -\partial_r^2 + \frac{\ell(\ell+1)}{r^2} - \frac{C}{r} \tag{2.300}$$
The spectrum of this Hamiltonian can be derived using the factorization method. Let us
define two operators
$$A_\ell = \partial_r - \frac{\ell+1}{r} + c_\ell\,, \qquad A_\ell^{\dagger} = -\partial_r - \frac{\ell+1}{r} + c_\ell \tag{2.301}$$
Compute
$$A_\ell A_\ell^{\dagger}\,, \qquad A_\ell^{\dagger} A_\ell \tag{2.302}$$
Show that if we choose 2cℓ (ℓ + 1) = C, then Hℓ − A†ℓ Aℓ and Hℓ+1 − Aℓ A†ℓ are just real
numbers; determine those numbers.
Can you derive the spectrum of a hydrogen atom from this?
$$L_z = q_1 p_2 - q_2 p_1 \tag{2.308}$$
$$A_1 = -\frac{1}{2}\big(L_z p_2 + p_2 L_z\big) + \frac{Z q_1}{r} \tag{2.309}$$
$$A_2 = \frac{1}{2}\big(L_z p_1 + p_1 L_z\big) + \frac{Z q_2}{r} \tag{2.310}$$
(b) Show that
$$A_1^2 + A_2^2 = 2\Big(L_z^2 + \frac{1}{4}\Big)H + Z^2 \tag{2.311}$$
6. In (2.164), we computed the reflection and transmission coefficients for two potentials.
Derive the same expression from the multiple reflections and transmissions between the
regions confined between the potentials.
7. In sec 3.2, we have analyzed the case of two delta function potentials. We also computed
the S matrix using Mathematica. Repeat the analysis for three delta function potentials.
Find the energy eigenvalue of this system and find the Wave functions of the energy
eigenstates.
Is there any energy eigenstate with negative energy eigenvalue? If yes, find the wave
function.
12. In sec 3.3.2, we have considered a potential well. We found the S matrix and bound states. Take the limit in which the potential well becomes the delta function well. Show that the S matrix and the bound state expression for the potential well give the S matrix and bound states for the delta function well.
Chapter 3
In this chapter, we will discuss some general features of one dimensional systems, based on broader principles like symmetry, unitarity and complex analyticity. The content of this chapter is not part of the examinable material.
At first we will consider the consequences of symmetry on the scattering matrix.
Since the kinetic term is invariant under parity, the Hamiltonian has a parity symmetry for
symmetric potential. Parity can be used to map region I to the region III and vice-versa.
$$\begin{pmatrix}A_I\\B_{III}\end{pmatrix} \longrightarrow \begin{pmatrix}A'_I\\B'_{III}\end{pmatrix} = \begin{pmatrix}B_{III}\\A_I\end{pmatrix} = \begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}A_I\\B_{III}\end{pmatrix} \tag{3.2}$$
We can write this equation (also a similar equation for out states) as
where P is defined as
$$\mathcal{P} = \begin{pmatrix}0&1\\1&0\end{pmatrix}\,, \qquad \mathcal{P}^2 = 1 \tag{3.4}$$
Let’s now start from the definition of the S matrix and multiply by P
So, under parity transformation S-matrix transforms as
$$\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}S_{11}&S_{12}\\S_{21}&S_{22}\end{pmatrix}\begin{pmatrix}0&1\\1&0\end{pmatrix} = \begin{pmatrix}S_{22}&S_{21}\\S_{12}&S_{11}\end{pmatrix} \tag{3.6}$$
S matrix has 4 complex parameters to begin with. The above relation reduces the number
of independent complex parameters to 2. These coefficients are also related to each other by
unitarity (which is one condition on these variables). So, the S matrix depends on one complex
(two real) parameter for Parity symmetric potential. This property remains true in any one
dimensional quantum system irrespective of the form of the potential. Whenever the system has
a symmetry, it restricts the form of the S matrices. In the QMech II course, we will see this for
three-dimensional S matrices for systems with spherical symmetry.
We can understand the consequence of parity symmetry using the understanding given in
sec 2.6.3. The action of parity is the following
Since two processes related by parity transformation should be the same, we obtain
So in case of a parity symmetric potential, the two reflection coefficients are the same and the
two transmission coefficients are the same.
Action of parity on Transfer matrix We already explored the consequence of parity sym-
metry on the S matrix. Let’s now see its implications on the transfer matrix. The transfer
matrix is defined as
$$\begin{pmatrix}A_{III}\\B_{III}\end{pmatrix} = \begin{pmatrix}M_{11}&M_{12}\\M_{21}&M_{22}\end{pmatrix}\begin{pmatrix}A_I\\B_I\end{pmatrix} \tag{3.11}$$
and similarly
$$\begin{pmatrix}A_I\\B_I\end{pmatrix} \longrightarrow \begin{pmatrix}B_{III}\\A_{III}\end{pmatrix} = \begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}A_{III}\\B_{III}\end{pmatrix} \tag{3.13}$$
So under parity transformation the transfer matrix goes to
$$M \longrightarrow \begin{pmatrix}0&1\\1&0\end{pmatrix} M \begin{pmatrix}0&1\\1&0\end{pmatrix} \tag{3.14}$$
M−1 is given by
$$\begin{pmatrix}M_{11}&M_{12}\\M_{21}&M_{22}\end{pmatrix}^{-1} = \frac{1}{\det M}\begin{pmatrix}M_{22}&-M_{12}\\-M_{21}&M_{11}\end{pmatrix} \tag{3.16}$$
and
$$\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}M_{11}&M_{12}\\M_{21}&M_{22}\end{pmatrix}\begin{pmatrix}0&1\\1&0\end{pmatrix} = \begin{pmatrix}M_{22}&M_{21}\\M_{12}&M_{11}\end{pmatrix} \tag{3.17}$$
So we get
det M = 1 , M12 = −M21 (3.18)
A very important property of Quantum dynamics is linearity. If we have two (or more) solutions
to the Schrodinger equation, then any linear combination of them is also a solution to the
Schrodinger equation. In this particular case, we can use this property to form the following
two linear combinations
$$\frac{1}{\sqrt{2}}\big(A_I + B_{III}\big)\,, \qquad \frac{1}{\sqrt{2}}\big(A_I - B_{III}\big) \tag{3.20}$$
such that the first one is parity invariant and the second one only changes by a sign. The states in (3.19) are not eigenstates of parity; but thanks to the linearity property of Quantum Mechanics, we can consider states which are eigenstates of the parity operator. This is true in general. We can write the wave functions in a parity eigenvalue basis for a symmetric potential. The wave function can only be parity symmetric (i.e. an even function) or antisymmetric (i.e. an odd function).
For example consider the following wave function
$$\Psi(x) = \begin{cases} e^{ikx} & x<0 \\ e^{-ikx} & x>0 \end{cases} \tag{3.21}$$
This wave function is in-coming in both regions I and III. It is also invariant under the parity symmetry; it is an eigenfunction of parity with eigenvalue 1. The above wave function can simply be written as
$$\chi^{\mathrm{sym}}_{\mathrm{in}}(x) = e^{-ik|x|} \tag{3.22}$$
We can also construct another wave function of the following form
$$\Psi(x) = \begin{cases} e^{ikx} & x<0 \\ -e^{-ikx} & x>0 \end{cases} \tag{3.23}$$
This wave function is again in-coming in both regions, and it is linearly independent from the wave function in (3.21). It is an eigenfunction of the parity operator with eigenvalue −1. This wave function can also be written as¹
$$\chi^{\mathrm{anti\text{-}sym}}_{\mathrm{in}}(x) = \operatorname{sgn}(x)\,e^{-ik|x|} \tag{3.25}$$
One can similarly construct parity eigen-value basis for out-going wave functions. They are
given by
$$\chi^{\mathrm{sym}}_{\mathrm{out}}(x) = e^{ik|x|} \tag{3.26}$$
$$\chi^{\mathrm{anti\text{-}sym}}_{\mathrm{out}}(x) = -\operatorname{sgn}(x)\,e^{ik|x|} \tag{3.27}$$
Note that the parity symmetry relates the left states to the right states. So, the left-right
decomposition does not commute with parity transformation. Then the full wave function (in
the region where the potential is zero) takes the form
Change of basis Now, we want to find the relation between the two bases. In region I, it
takes the form
$$\Psi_I(x) = \frac{1}{\sqrt2}\big(C_{\mathrm{in}} - D_{\mathrm{in}}\big)e^{ikx} + \frac{1}{\sqrt2}\big(C_{\mathrm{out}} + D_{\mathrm{out}}\big)e^{-ikx} \tag{3.31}$$
In region III it takes the form
$$\Psi_{III}(x) = \frac{1}{\sqrt2}\big(C_{\mathrm{in}} + D_{\mathrm{in}}\big)e^{-ikx} + \frac{1}{\sqrt2}\big(C_{\mathrm{out}} - D_{\mathrm{out}}\big)e^{ikx} \tag{3.32}$$
Then comparing this with eqn (2.102a) and (2.102b) we get
$$\begin{pmatrix}A_I\\B_{III}\end{pmatrix} = \frac{1}{\sqrt2}\begin{pmatrix}1&-1\\1&1\end{pmatrix}\begin{pmatrix}C_{\mathrm{in}}\\D_{\mathrm{in}}\end{pmatrix}\,, \qquad \begin{pmatrix}B_I\\A_{III}\end{pmatrix} = \frac{1}{\sqrt2}\begin{pmatrix}1&1\\1&-1\end{pmatrix}\begin{pmatrix}C_{\mathrm{out}}\\D_{\mathrm{out}}\end{pmatrix} \tag{3.33}$$
1
sgn stands for the sign function or signum function. It is defined as
$$\operatorname{sgn}(x) = \begin{cases} 1 & x>0 \\ 0 & x=0 \\ -1 & x<0 \end{cases} \tag{3.24}$$
The 1/√2 is to ensure that the transformation is an orthogonal transformation. It is possible to show that
$$S_P \equiv \begin{pmatrix}S_{++}&S_{+-}\\S_{-+}&S_{--}\end{pmatrix} = \frac{1}{2}\begin{pmatrix}1&1\\1&-1\end{pmatrix}\begin{pmatrix}S_{11}&S_{12}\\S_{21}&S_{22}\end{pmatrix}\begin{pmatrix}1&-1\\1&1\end{pmatrix} = \frac{1}{2}\begin{pmatrix}S_{11}+S_{22}+S_{12}+S_{21} & -S_{11}+S_{22}+S_{12}-S_{21}\\ S_{11}-S_{22}+S_{12}-S_{21} & -S_{11}-S_{22}+S_{12}+S_{21}\end{pmatrix} \tag{3.34}$$
The S matrix is diagonal on this basis. This is expected. The potential has the Parity symmetry.
Then it would preserve the parity eigenvalue of the incoming wave function; the parity of the
incoming wave and the outgoing wave must be the same. A parity-symmetric potential cannot
mix symmetric and antisymmetric wave functions.
We saw that the S matrix is diagonal in the parity-symmetric basis. From the unitarity of
the S matrix, we know that S † S = 1. From these two conditions, it follows that the diagonal
entries can only be a phase
$$\begin{pmatrix} e^{2i\delta_+(k)} & 0 \\ 0 & e^{2i\delta_-(k)} \end{pmatrix}\,, \qquad \delta_+(k)\,,\ \delta_-(k) \in \mathbb{R} \tag{3.36}$$
We can see that the S matrix depends on two real variables (δ± ) for a parity-symmetric potential.
The parity odd scattering wave is unaffected by the delta function potential.
3.1.4 Example II
Now we focus on the special case when the potential takes the following form
$$V(x) = \lambda\big(\delta(x+a) + \delta(x-a)\big)\,, \qquad a>0 \tag{3.40}$$
$$\mathbb{Z}_2 : x \to -x \tag{3.41}$$
In this case, the system has 2 free parameters. Due to this symmetry, the wave function of any bound state is either an even or an odd function. Similarly, we write the S matrix in the parity eigenbasis, and in this case, we find the S matrix to be a pure phase.
S-matrix: Partial wave In this case, the wave function and the scattering states can be written in the parity eigenvalue basis.
$$\begin{pmatrix} \dfrac{2ik+\lambda(1+e^{-2iak})}{2ik-\lambda(1+e^{2iak})} & 0 \\[6pt] 0 & \dfrac{2ik+\lambda(1-e^{-2iak})}{2ik-\lambda(1-e^{2iak})} \end{pmatrix} \tag{3.42}$$
If we put the separation a = 0, then we get
$$\begin{pmatrix} \dfrac{ik+\lambda}{ik-\lambda} & 0 \\[4pt] 0 & 1 \end{pmatrix} \tag{3.43}$$
This is same as the partial wave for single delta function with strength 2λ (see (3.38) and (3.39)).
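Both properties claimed here, that each diagonal entry of (3.42) is a pure phase (as unitarity demands) and that at a = 0 the matrix reduces to the single delta result (3.43) with strength 2λ, are easy to verify numerically. A minimal sketch with arbitrary sample parameters:

```python
import numpy as np

# The diagonal entries of eqn (3.42): even- and odd-channel S-matrix phases
# for the double delta function potential of strength lambda at x = -a, +a.
def s_parity(k, lam, a):
    s_even = (2j*k + lam*(1 + np.exp(-2j*a*k))) / (2j*k - lam*(1 + np.exp(2j*a*k)))
    s_odd = (2j*k + lam*(1 - np.exp(-2j*a*k))) / (2j*k - lam*(1 - np.exp(2j*a*k)))
    return s_even, s_odd

se, so = s_parity(k=1.7, lam=0.9, a=2.3)   # |se| = |so| = 1 (pure phases)
```

At a = 0 the odd channel is untouched (S₋₋ = 1) while the even channel reproduces the single delta function phase, consistent with the observation that a parity odd scattering wave does not feel a delta function at the origin.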
T : t → −t (3.44)
then the system is called time-reversal symmetric. We are currently considering time-independent
potential, and the Hamiltonian is time-independent
$$H(x,t) = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V(x) \tag{3.46}$$
Such a Hamiltonian is time-reversal symmetric.
$$\vec{p} = m\frac{d\vec{x}}{dt}\,, \qquad \frac{d\vec{p}}{dt} = \vec{F}(\vec{x},t) \tag{3.47}$$
Then under time-reversal symmetry
$$T : \vec{p} \longrightarrow -\vec{p}\,, \qquad \vec{F}(\vec{x},t) \longrightarrow \vec{F}(\vec{x},-t) \tag{3.48}$$
Since the direction of motion flips under time-reversal transformation, it is also known as
motion-reversal transformation. If F (x, t) = F (x, −t), the second equation is also invariant
under time-reversal transformation. In that case, if x(t) is a solution to the equation of
motion, x(−t) is also a solution to the equation of motion.
Let’s consider a few examples: we start with free particles. In that case, the solution of
the equation of motion is
x = v t + x0 (3.49)
We replace v by −v , then it still remains a solution to the equation of motion. Next, we
consider the motion of a charged particle in the electromagnetic field
$$m\frac{d^2\vec{x}}{dt^2} = e\big(\vec{E} + \vec{v}\times\vec{B}\big) \tag{3.50}$$
This is the Lorentz force law. The equation is invariant under time-reversal symmetry
provided electric and magnetic field transform in the following way
$$T : (\vec{E}, \vec{B}) \longrightarrow (\vec{E}, -\vec{B}) \tag{3.51}$$
So the direction of the magnetic field should change under time-reversal symmetry. There
is an intuitive way to understand it. Consider a magnetic field created by the current.
Under time-reversal symmetry, the direction of motion flips and the direction of current also
flips. This implies that the direction of the magnetic field must change under time-reversal
symmetry. Note that not every classical system is invariant under time-reversal symmetry.
For example, consider the following equation of motion
$$m\frac{d^2x}{dt^2} + \gamma\frac{dx}{dt} + F(x) = 0 \tag{3.52}$$
This equation is not invariant under time-reversal symmetry.
All the laws of physics at the sub-atomic level are invariant under time-reversal trans-
formation. However, this is not true in the macroscopic world. We do not see an old person
becoming young again or a dead animal becoming alive again. This is one of the important
puzzles in physics. This puzzle is deeply connected with the second law of thermodynamics
which says that entropy can never decrease.
Let’s now try to understand what happens to the solution of the Schrodinger equation under
time-reversal symmetry. Let Ψ(x, t) be a solution of the Schrödinger equation
$$i\hbar\frac{\partial}{\partial t}\Psi(x,t) = H(x,t)\,\Psi(x,t) \tag{3.53}$$
We take the complex conjugate of the above equation
$$-i\hbar\frac{\partial}{\partial t}\Psi^*(x,t) = H(x,t)\,\Psi^*(x,t) \implies i\hbar\frac{\partial}{\partial(-t)}\Psi^*(x,t) = H(x,t)\,\Psi^*(x,t) \tag{3.54}$$
Now, we make a variable substitution t → −t and use H(x, −t) = H(x, t) to obtain
$$i\hbar\frac{\partial}{\partial t}\Psi^*(x,-t) = H(x,t)\,\Psi^*(x,-t) \tag{3.55}$$
From this equation we can conclude that for a time-reversal symmetric Hamiltonian, if Ψ(x, t) is a solution of the Schrödinger equation then Ψ*(x, −t) is also a solution of the Schrödinger equation.
Under time-reversal symmetry
Hence, the action of time-reversal symmetry on the in- states and the out-state are given by
$$\begin{pmatrix}A_I\\B_{III}\end{pmatrix} \longrightarrow \begin{pmatrix}B_I^*\\A_{III}^*\end{pmatrix}\,, \qquad \begin{pmatrix}B_I\\A_{III}\end{pmatrix} \longrightarrow \begin{pmatrix}A_I^*\\B_{III}^*\end{pmatrix} \tag{3.59}$$
Now consider the definition of the S-matrix in (2.109); multiply both sides by S† and then complex conjugate to get
$$\begin{pmatrix}A_I\\B_{III}\end{pmatrix} = S^{T}\begin{pmatrix}B_I\\A_{III}\end{pmatrix} \tag{3.61}$$
$$S = S^{T} \tag{3.62}$$
(I → I) −→ (I → I) (3.64a)
(I → III) −→ (III → I) (3.64b)
(III → I) −→ (I → III) (3.64c)
(III → III) −→ (III → III) (3.64d)
T 2 Applying time-reversal symmetry twice We have seen that under time-reversal t flips
sign
T : t −→ −t (3.65)
This means that if we apply the time-reversal transformation twice (we denote it as T 2 ), then
it is the same as doing nothing.
T2 : t −→ t (3.66)
In the case of scattering, we can check from equation (3.58) that if we apply time-reversal
symmetry twice, then it is the same as doing nothing
T2 : AI , BI , AIII , BIII −→ AI , BI , AIII , BIII (3.67)
T2 = 1 (3.68)
Action of time-reversal symmetry on the transfer matrix We now analyse the conse-
quence of time-reversal invariance on the transfer matrix. We know that under time-reversal
symmetry
$$\begin{pmatrix}A_I\\B_I\end{pmatrix} \longrightarrow \begin{pmatrix}B_I^*\\A_I^*\end{pmatrix}\,, \qquad \begin{pmatrix}A_{III}\\B_{III}\end{pmatrix} \longrightarrow \begin{pmatrix}B_{III}^*\\A_{III}^*\end{pmatrix} \tag{3.69}$$
If time-reversal is a symmetry then
$$\begin{pmatrix}B_{III}^*\\A_{III}^*\end{pmatrix} = M\begin{pmatrix}B_I^*\\A_I^*\end{pmatrix} \tag{3.70}$$
We take complex conjugation of the above equation to obtain
$$\begin{pmatrix}B_{III}\\A_{III}\end{pmatrix} = M^*\begin{pmatrix}B_I\\A_I\end{pmatrix} \implies \begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}A_{III}\\B_{III}\end{pmatrix} = M^*\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}A_I\\B_I\end{pmatrix} \tag{3.71}$$
Comparing this with (3.11) we get
$$\begin{pmatrix}0&1\\1&0\end{pmatrix}M^{*}\begin{pmatrix}0&1\\1&0\end{pmatrix} = M \implies M_{11} = M_{22}^{*}\,, \quad M_{12} = M_{21}^{*} \tag{3.72}$$
So for a system with time-reversal symmetry the transfer matrix has the following form
$$\begin{pmatrix}M_{11}&M_{12}\\M_{12}^*&M_{11}^*\end{pmatrix} \tag{3.73}$$
And in this case det M = 1
These equations give the relations between the determinants of these two matrices
$$\det S = -\frac{M_{11}}{M_{22}}\,, \qquad \det M = \frac{S_{21}}{S_{12}} \tag{3.76}$$
2. Restrictions from Unitarity
Unitarity constrains these two matrices:
$$S^{\dagger}S = 1 = SS^{\dagger}\,, \qquad M^{\dagger}\,\Omega\, M = \Omega \tag{3.77}$$
$$\det M = 1\,, \qquad M_{11} = M_{22}^{*}\,, \qquad M_{12} = M_{21}^{*} \tag{3.79}$$
These conditions are equivalent to the following condition for transfer matrix
$$-\frac{d^2}{dx^2}\psi_I(x) + V(x)\,\psi_I(x) = E\,\psi_I(x) \tag{3.82}$$
$$-\frac{d^2}{dx^2}\psi_{II}(x) + V(x)\,\psi_{II}(x) = E\,\psi_{II}(x) \tag{3.83}$$
We multiply (3.82) by ψ_{II}(x) and (3.83) by ψ_I(x) and then take the difference to obtain
$$\frac{d}{dx}\Big(\psi_I(x)\,\psi'_{II}(x) - \psi_{II}(x)\,\psi'_I(x)\Big) = 0 \tag{3.84}$$
2
To be precise, we are considering a quantum system on the real line. One can also consider a quantum system on a circle (S¹), and in that case the theorems discussed in this section do not hold.
We can integrate both sides to get
$$\psi_I(x)\,\psi'_{II}(x) - \psi_{II}(x)\,\psi'_I(x) = \text{constant} \tag{3.85}$$
Since we are considering the case of a real line, we can take the limit x → ±∞. For bound states, the wave function vanishes and the first derivative remains bounded in the limit x → ±∞, and hence the constant vanishes. In that case,
In quantum mechanics, if two wave functions are proportional to each other then they describe the same state. So, ψ_I(x) and ψ_{II}(x) describe the same state.
Any such wave function is equivalent to a real wave function: the original wave function ψ = (1 + ic)ψ_R is phase equivalent to the real wave function ψ_R.
then the wave functions are either even or odd under parity.
This is easy to demonstrate. The symmetry of the potential implies
If the wave function vanishes at a point, but the first derivative does not vanish at that
point, then the point is called a node.
Theorem IV: Node theorem Consider a one-dimensional time-independent potential. We
can arrange the eigenfunctions with increasing energy value, starting from 1 for the lowest energy
state. Let’s say the energy of state ψn is En
Hψn = En ψn (3.92)
Since we have arranged the states with increasing value of energy, it implies that
In such a case, the node theorem says that ψ_n(x) has n − 1 nodes. It is possible to refine this statement further: ψ_n has at least one node between two consecutive nodes of ψ_{n−1}, and hence ψ_n has at least one more node than ψ_{n−1}.
We will not prove it. We refer readers to the lectures by Zwiebach on YouTube. Using the node theorem, we can show a few more interesting facts for the parity-symmetric potential given in (3.88)
1. Odd wave functions always have nodes at x = 0. From the node theorem, we know that
the ground state cannot have any node and hence the ground state is always parity even.
2. An even (odd) wave function has an even (odd) number of nodes. From the node theorem, it follows that even and odd functions appear alternately. For the nth state
In the absence of gravitational force, energy does not have any absolute meaning; only the
difference in energy is physically relevant. In such a case, we can always choose the potential
with its global minimum value of 0. That means that the energy of the lowest energy state in
classical theory is 0. In such a context, we have the following theorem.
Theorem V: The ground state in quantum theory has non-zero positive energy. To rephrase it, the difference between the lowest energy configuration in quantum theory and in classical theory is a positive real number.
We will not give a proof here. However, we discuss the reason why one would expect such a statement to be true in Quantum Mechanics. The quantum Hamiltonian is a linear sum of −d²/dx² and V(x). The classical lowest energy state is obtained by minimising the potential term. If we try to become more and more localised in position space, then the contribution from −d²/dx² becomes bigger. So there are two competing terms in the quantum Hamiltonian, and we have to minimise this competing effect to obtain the lowest energy state. As a result, the minimum energy in Quantum Mechanics is greater than the minimum energy in classical mechanics.
Consider a potential V(x); for simplicity we assume that V(±∞) = 0. As long as there exist some values x = x_c such that
$$V(x_c) < 0 \tag{3.95}$$
there are classical bound state(s). In the previous theorem we found that the ground state energy of the quantum system is always greater than the classical lowest energy. So it seems possible that even if there is a classical bound state, the lowest energy state in quantum mechanics can be positive and hence there is no bound state in Quantum Mechanics. The next theorem discusses that situation.
Theorem VI: As long as the following condition is satisfied
$$\int_{-\infty}^{\infty}\big(V(x) - V(\pm\infty)\big)\,dx < 0 \tag{3.96}$$
the quantum potential always has at least one bound state. When the above condition becomes an equality, one needs a careful analysis to conclude whether there is a bound state or not.
For example, in section (3.2) we have seen that when the two delta functions have equal and
opposite strength the bound state exists only when the distance between them is greater than
some critical distance.
Note that the condition in (3.95) is a necessary but not a sufficient condition for the existence of a bound state in Quantum Mechanics.
Since H(−x) = H(x), it follows that ψ(−x) also satisfies the above eigenvalue equation with the same eigenvalue.
H(x)ψ(−x) = Eψ(−x) (3.99)
Now we take linear combinations of the above wave functions
From the definition, it follows that ψ₊(x) is an even function and ψ₋(x) is an odd function. So the bound state wave functions can be chosen in such a way that each is either an even function or an odd function.
Now we use the result that in one dimensional systems, there is no degeneracy and the wave functions can be made purely real. In the above case, we found that there are two orthogonal states ψ± with the same energy eigenvalue. The only way to avoid this situation is if either ψ₊ or ψ₋ is zero. So in one dimensional systems, whenever we write down real wave functions for a parity symmetric potential, they are either an even function or an odd function. We have already seen this fact for the particle in a box and for the harmonic oscillator.
An odd function on R always has a zero at the origin. Any other zero of an odd function will come in a pair. Similarly, for an even function, the zeroes always come in pairs. We use these
simple properties of even/odd functions along with the node theorem. From the node theorem, we already know that the ground state has no node. Thus the ground state wave function for a parity symmetric potential is always an even function. The first excited state has one node and thus it must be an odd function. Extending this argument, we can show that even and odd wave functions appear in alternation for a one dimensional parity symmetric potential.
[Figure: a potential well of depth V, i.e. V(x) = −V for −ℓ/2 < x < ℓ/2 and V(x) = 0 outside.]
Bound states with even wave function First, we will consider the wave functions that are
even under parity. Then the wave function for a bound state in the even sector takes the
following form
ψ(x) = NI e^{−κ|x|} for |x| > ℓ/2 , ψ(x) = NII cos(kx) for |x| < ℓ/2 (3.115)
We get the same condition if we impose continuity at x = − 2ℓ . From the above two equations,
we get
k tan(kℓ/2) = κ (3.117)
From the definitions of k and κ, we also know
κ² + k² = 2mV/ℏ² (3.118)
In order to find the solutions, see fig. 3.2. We have plotted the two functions
k tan(kℓ/2) and √(2mV/ℏ² − k²) (3.119)
as a function of k and look for their intersections. This provides a quantization condition for k
(i.e. for the energy) of the bound states. From the graphs, it is clear that there is at least one bound
state in this sector as long as V ℓ ∕= 0.
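The graphical quantization condition can also be solved numerically. The sketch below (pure Python; the choices ℏ = 1, 2m = 1, V = 20, ℓ = 2 are illustrative and not taken from the note) replaces the plot by root-finding between the divergences of the tangent:

```python
import math

# Even-sector quantization for the finite well (3.117): k*tan(k*l/2) = kappa,
# with kappa^2 + k^2 = 2mV/hbar^2.  Units: hbar = 1, 2m = 1, so kappa^2 + k^2 = V.
# V and l below are illustrative values.
V, l = 20.0, 2.0
K = math.sqrt(V)                     # largest k allowed for a bound state

def f(k):
    return k * math.tan(k * l / 2) - math.sqrt(max(K * K - k * k, 0.0))

def bisect(a, b, tol=1e-12):
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# tan(k l/2) diverges at k = (2n+1)*pi/l; scan between consecutive divergences.
eps = 1e-9
poles = [(2 * n + 1) * math.pi / l for n in range(100)
         if (2 * n + 1) * math.pi / l < K]
edges = [eps] + poles + [K - eps]
roots = [bisect(a + eps, b - eps) for a, b in zip(edges[:-1], edges[1:])
         if f(a + eps) * f(b - eps) < 0]
energies = [k * k - V for k in roots]   # E = hbar^2 k^2 / 2m - V = k^2 - V < 0
print(len(roots), energies)
```

For these parameters the search finds two even bound states, consistent with the claim that at least one even bound state exists whenever V ℓ ∕= 0.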
The potential well has two free parameters: ℓ and V . Let’s try to understand what happens
if we vary these parameters.
1. Changing the length of the potential ℓ
If we increase ℓ without changing V , the number of bound states increases.
Infinite well Consider the case of the infinite well (i.e. V → ∞, keeping V − |E| finite); in this
case
κ → ∞ (3.120)
This implies
kℓ/2 = (2n + 1)π/2 (3.121)
Regardless of the strength of the delta function, there is one bound state for the delta-function
potential. The wave function for the bound state is given by
ψ(x) = NI e^{−(α/2)|x|} (3.125)
Bound states with odd wave function The wave function for a bound state in the odd sector
takes the following form
ψ(x) = NI sgn(x) e^{−κ|x|} for |x| > ℓ/2 , ψ(x) = NII sin(kx) for |x| < ℓ/2 (3.126)
Note that in this sector, the number of bound states could be zero.
Infinite well Consider the case of the infinite well (i.e. V → ∞, keeping V − |E| finite); in this
case
κ → ∞ (3.130)
This implies
kℓ/2 = nπ (3.131)
know about the bound states of the same potential. The potential is zero in regions I and III.
A bound state (by definition) cannot escape to infinity. So the energy of any bound state of the
potential is negative
E < 0 (3.132)
For negative energy, the wave number becomes purely imaginary,
k −→ iκ , κ > 0 (3.133)
In this case, the scattering wave functions in (2.102a) and (2.102b) go to
The form of these wave functions is similar to that of bound-state wave functions, with a few
exceptions. In the x → ∞ limit, the wave function becomes unbounded if BIII ∕= 0. Similarly, in
the x → −∞ limit, the wave function becomes unbounded if AI ∕= 0. For a physical
state, the total probability should add up to one, and hence the above wave function represents a
physical state iff
AI = 0 = BIII (3.135)
In this case, we obtain a bound state wave function by substituting k −→ iκ in a scattering state
solution. In other words, we should set the (analytically continued) incoming wave function
to zero; but for a bound state solution, the (analytically continued) outgoing wave function is
finite. Since the S matrix is a map from the incoming state to the outgoing state, let's rewrite
that relation again
(BI , AIII)ᵀ = [ S11(k) S12(k) ; S21(k) S22(k) ] (AI , BIII)ᵀ (3.136)
The above relation is consistent with (3.135) only if S has a singularity at κ = κb , where κb
corresponds to an allowed bound state solution.
Alternatively, we can do the following analytic continuation
k → −iκ (3.137)
In this case, the wave function is normalizable if we set the analytically continued (outgoing)
wave function to zero. The analytically continued incoming wave function is still finite. This
means that for a bound state, the S matrix has a zero at κb .
To summarise, the bound states can be found from the poles/zeroes of the analytically
continued S matrix
Sij(iκ)|κ=κb = ∞ , Sij(−iκ)|κ=κb = 0 (3.138)
This is an extremely important and interesting result. We considered potentials which are
localized in a finite interval of the real line. In scattering experiments, we throw in plane waves
from ±∞ and observe what comes back. On the other hand, a bound state is localized in the
region of the potential; the wave function of a bound state falls off exponentially, and hence in
any far away region, the existence of the bound state is undetectable. The above result suggests
that if we know the scattering data, then from that, we can deduce the bound states of the potential.
Bound state from transfer matrix Let’s now look at the expression of transfer matrix
(2.122) and put k = iκ and AI = 0 = BIII .
(AIII , 0)ᵀ = [ M11(iκ) M12(iκ) ; M21(iκ) M22(iκ) ] (0 , BI)ᵀ = (M12 BI , M22 BI)ᵀ (3.139)
A normalizable solution with BI ∕= 0 therefore requires M22(iκ) = 0.
Example: Bound states from S matrices of delta function To get the bound state, we
put
k −→ i κ (3.141)
Bound states from the pole of S matrix of parity symmetric two delta function
From this expression, we can check that the poles of the S matrix give the bound states. In order
to get the bound states, we substitute
k = iκ (3.143)
Nb^(+) = (1/π) [ δ^(+)(0) − δ^(+)(∞) ] (3.146)
We will not go into the proof now; we will come back to it later 3 .
3 You can watch the lecture by Zwiebach.
Levinson’s theorem for single delta function There is no phase shift in the parity-odd
sector (see (3.39)), and there are no parity-odd bound states. So Levinson’s theorem is trivially
satisfied. Let’s look at the parity-even sector. From (3.38) we know that the scattering phase
shift is given by
φ+(k) = 2 tan⁻¹( λ/(2k) ) (3.147)
Note that the phase is a monotonically decreasing function of k. Then
(1/π) [ φ+(0) − φ+(∞) ] = 1 (3.149)
This is exactly the number of parity-even bound states of the single delta potential.
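As a quick numerical sanity check of this count (λ = 3 is an arbitrary illustrative strength; any λ > 0 gives the same answer):

```python
import math

# Levinson's theorem check for the single delta well: with the parity-even
# phase shift (3.147), phi_+(k) = 2*arctan(lam/(2k)), the combination
# (1/pi)[phi_+(0) - phi_+(infinity)] counts the parity-even bound states.
lam = 3.0                                   # illustrative strength, lam > 0

def phi_plus(k):
    return 2 * math.atan(lam / (2 * k))

n_bound = (phi_plus(1e-9) - phi_plus(1e9)) / math.pi
print(round(n_bound))                       # -> 1: one parity-even bound state
```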
In the lecture, we considered the consequences of this symmetry on the S matrix. Let’s
now consider the case when the potential satisfies the following property
V (x) = V (a − x) (3.152)
Here a is a real number. What is the consequence of this symmetry on the S matrix?
Chapter 4
But this does not make much sense because we have to specify the Hilbert space on which this
operator acts. The Hilbert space cannot be H1 or H2 . Moreover the action of H1 on H2 or H2
on H1 is not defined. Consider the example mentioned above. In that case, H1 is a 2 × 2 matrix,
and H2 is a 3 × 3 matrix. In that case, it does not make sense to add a 2 × 2 matrix with a 3 × 3
matrix. In this section, we want to make sense of the Hilbert space of the combined system and
the corresponding Hamiltonians.
The Hilbert space H for the combined system is the tensor product of the individual Hilbert
space1 :
H = H1 ⊗ H2 (4.2)
The information of the whole system is determined by the information of the individual system.
Let |φi 〉 be a basis (say energy eigenbasis) for H1 (i = 1, · · · , n) and |χI 〉 be a basis for H2
(I = 1, · · · , N ). Then a basis for H is given by
The tensor product is distributive w.r.t. vector addition and scalar multiplication of H1
and H2 :
( |φi 〉 + |φj 〉 ) ⊗ |χI 〉 = |φi 〉 ⊗ |χI 〉 + |φj 〉 ⊗ |χI 〉 (4.5)
|φi 〉 ⊗ ( |χI 〉 + |χJ 〉 ) = |φi 〉 ⊗ |χI 〉 + |φi 〉 ⊗ |χJ 〉 (4.6)
λ ( |φi 〉 ⊗ |χI 〉 ) = ( λ|φi 〉 ) ⊗ |χI 〉 = |φi 〉 ⊗ ( λ|χI 〉 ) (4.7)
For future purposes, we denote this state as |i I〉. The number of independent basis elements is
nN . Hence
dim H = (dim H1 ) · (dim H2 ) (4.8)
If we consider the tensor product of two general states Σi ci |φi 〉 and ΣI dI |χI 〉, then we obtain
( Σ_{i=1}^{n} ci |φi 〉 ) ⊗ ( Σ_{I=1}^{N} dI |χI 〉 ) = Σ_{i,I} ci dI |φi 〉 ⊗ |χI 〉 (4.9)
In the product Hilbert space H, the inner product of two states |φ〉 ⊗ |χ〉 and |φ̃〉 ⊗ |χ̃〉 is defined
as
|φ〉 ⊗ |χ〉, |φ̃〉 ⊗ |χ̃〉 = 〈φ̃|φ〉〈χ̃|χ〉 (4.10)
So the inner product on H is inherited from the inner products on H1 and H2 .
Now we discuss operators on the product Hilbert space H. Given an operator O1 acting on
H1 and an operator O2 acting on H2 , we can construct the operator O1 ⊗ O2 , whose action on a
state is given as follows
( O1 ⊗ O2 ) ( |φ〉 ⊗ |χ〉 ) = (O1 |φ〉) ⊗ (O2 |χ〉) (4.11)
For example, the total Hamiltonian of the system is given by
H = H1 ⊗ 1 + 1 ⊗ H2 (4.12)
In terms of the eigenvalues, the total energy is simply the addition of the energy of the individual
systems.
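This addition of eigenvalues can be made concrete with a small numerical sketch (pure Python; the 2 × 2 and 3 × 3 Hamiltonians below are arbitrary diagonal examples, echoing the matrix sizes used above):

```python
# Pure-Python sketch: for H = H1 (x) 1 + 1 (x) H2, the energies of the
# composite system are the pairwise sums E_i + E_J of the individual energies.
# H1, H2 are arbitrary diagonal examples; matrices are lists of rows.
def kron(A, B):
    """Kronecker (tensor) product of two matrices."""
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def eye(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

H1 = [[1.0, 0.0], [0.0, 2.0]]                               # energies {1, 2}
H2 = [[0.5, 0.0, 0.0], [0.0, 1.5, 0.0], [0.0, 0.0, 2.5]]    # energies {0.5, 1.5, 2.5}
H = madd(kron(H1, eye(3)), kron(eye(2), H2))                # 6x6: dim = 2 * 3
spectrum = sorted(H[i][i] for i in range(6))                # H is diagonal here
print(spectrum)   # -> [1.5, 2.5, 2.5, 3.5, 3.5, 4.5]
```

Note that dim H = 2 · 3 = 6, matching (4.8), and every eigenvalue is of the form Ei + EJ.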
Similarly, the total angular momentum of the system is
Ji = Ji^(1) ⊗ 1 + 1 ⊗ Ji^(2) (4.13)
J (1) and J (2) are the angular momentum operators of system 1 and 2 respectively.
1. The inner product of the individual Hilbert spaces (H1 & H2 ) gives the inner product
for H1 ⊗ H2 . Let (, )H be the inner product of the Hilbert space H. Then, show that
the following definition gives a sensible inner product.
3. Let |φi 〉1 (i = 1, · · · , dim H1 ) be the basis vectors for H1 and |ψ I 〉2 (I = 1, · · · , dim H2 )
be the basis vectors for H2 . Then show that |φi 〉1 ⊗ |ψ I 〉2 forms a basis for H1 ⊗ H2 .
4.1.1 Two Qubit systems
Let us consider an example: two spin-1/2 systems. First, we review the basics of a spin-1/2
system. It has two linearly independent states; we choose them to be the eigenstates of J3 :
| ↑〉 and | ↓〉.
J3 | ↑〉 = (ℏ/2) | ↑〉 , J3 | ↓〉 = −(ℏ/2) | ↓〉 (4.15)
This is a two-dimensional vector space. So we choose the states | ↑〉 and | ↓〉 to be
| ↑〉 = (1, 0)ᵀ , | ↓〉 = (0, 1)ᵀ (4.16)
The angular momentum matrix is
J3 = (ℏ/2) diag(1, −1) = (ℏ/2) σ3 (4.17)
Let’s now consider two such systems; it has four linearly independent states
| ↑〉 ⊗ | ↑〉 , | ↑〉 ⊗ | ↓〉 , | ↓〉 ⊗ | ↑〉 , | ↓〉 ⊗ | ↓〉 (4.18)
Consider the most general state in H1 (the Hilbert space of the first spin 1/2 particle); it is
given by
|φ〉 = c1 | ↑1 〉 + c2 | ↓1 〉 , |c1 |2 + |c2 |2 = 1 (4.21)
Similarly the most general state in H2 is given by
However, a generic state in H is a linear sum of the above four states; such a state is a product
state only if the coefficients satisfy
f1 f4 = f2 f3 (4.25)
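A small sketch of this criterion (assuming the coefficient ordering f1 |↑↑〉 + f2 |↑↓〉 + f3 |↓↑〉 + f4 |↓↓〉, so that a product state has a rank-one 2 × 2 coefficient matrix):

```python
import math

# Sketch: with coefficients ordered as f1|uu> + f2|ud> + f3|du> + f4|dd>,
# the state is a product (unentangled) state exactly when the 2x2 matrix of
# coefficients has rank one, i.e. f1*f4 == f2*f3.
def is_product(f1, f2, f3, f4, tol=1e-12):
    return abs(f1 * f4 - f2 * f3) < tol

c1, c2, d1, d2 = 0.6, 0.8, 0.8, 0.6        # two normalized one-qubit states
print(is_product(c1 * d1, c1 * d2, c2 * d1, c2 * d2))   # -> True
s = 1 / math.sqrt(2)
print(is_product(s, 0.0, 0.0, s))                        # -> False (entangled)
```

The second state, (|↑↑〉 + |↓↓〉)/√2, cannot be written as a tensor product of two one-qubit states.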
4.1.2 Two-dimensional harmonic oscillator
Now, we will discuss another example of a composite system: a two-dimensional oscillator.
H1 = (1/2)(p1² + m1 x1²) , H2 = (1/2)(p2² + m2 x2²) (4.26)
We already know that the Hilbert space for a single harmonic oscillator is ℓ2 (N). So we have
two quantum systems (ℓ2 (N), H1 ) and (ℓ2 (N), H2 ), and we want to describe a system with both
oscillators. The Hamiltonian of the composite system is usually written as
H = H1 + H2 (4.27)
but what is actually meant is
H = H1 ⊗ 1 + 1 ⊗ H2 (4.28)
The Hilbert space of the composite system is ℓ2 (N)⊗ℓ2 (N)3 . The states of the composite Hilbert
space are of the form
Then, the creation operator and the annihilation operator for the first oscillator on the composite
Hilbert space
The Hilbert space (which is isomorphic to ℓ2 (N)) of the first oscillator is the Hilbert space
spanned by these vectors. Then, the Ground state of the composite system is
A commonly used notation in the literature We have already explained that the mathematical
structure behind the composite system is the tensor product of Hilbert spaces. However,
the tensor product notation is a bit heavy. So, it is not commonly used in the literature, even
though the underlying structure remains the same. The commonly used notation is
We want to stress that this is only about the notation; the underlying mathematical structure
remains the same.
3 We leave it to the readers to show that ℓ2 (N) ⊗ ℓ2 (N) is isomorphic to ℓ2 (N).
Interaction term Until now, we have considered a composite system that is a collection
of free (non-interacting) systems. It is possible to write operators that act non-trivially on
both factors and add them to the free Hamiltonian. Such a
term in the Hamiltonian is often called the interaction term. For example, consider an operator
of the form
xi , i = 1, · · · , n (4.40)
Let’s define a vector f(t), which has N components, each depending on time t.
x(t) = ( x1 (t), . . . , xN (t) )ᵀ , F({xj }, t) = ( F 1 ({xj }, t), . . . , F N ({xj }, t) )ᵀ , f(t) = ( f 1 (t), . . . , f N (t) )ᵀ (4.44)
S({Φi }) (4.45)
Now time-reversal is a symmetry of the dynamics if for any solution f(t) of the equation of
motion
Note that this should be true for all solutions f(t). For dynamical systems, there can be some
special class of solutions for which this is true; but then such a transformation is not called a
symmetry of the dynamics.
How do you know whether the dynamics have a symmetry or not The most pedantic
way to check the symmetry of a dynamics is to first compute the space of solutions and then
check whether the space of solutions exhibits symmetry or not. However, there are better ways
to check it. For example, if you are just given an equation of motion, then we can check the
symmetry of the equation of motion. Any symmetry of the equation of motion will also be a
symmetry of the solutions. If we are given a Lagrangian, we can directly check its symmetries.
Any symmetry of the Lagrangian is also a symmetry of the equation of motion and, hence, a symmetry
of the space of solutions.
x0 , y0 , z0 are three independent real numbers. Furthermore, x0 , y0 , z0 can take any value
on the real line, hence a continuous symmetry. Whenever a symmetry has translational
4 The analysis of symmetry is much simpler if f is identically 0. Otherwise, we have to choose suitable boundary
conditions for x(ti ) and x(tf ) such that f (x, ti ) = 0 = f (x, tf ). Then x → x̃ is a symmetry of the Lagrangian
subject to that boundary condition; otherwise, it is not a symmetry.
invariance, the conjugate momenta are conserved
px^(1) + px^(2) , py^(1) + py^(2) , pz^(1) + pz^(2) (4.51)
This implies that if we find in an experiment that the total momentum of the particles
is conserved, then we know that the system has translational invariance. If the system
has translation invariance, then the potential can only be a function of the (x1 − x2 , y1 −
y2 , z1 − z2 ). So, the existence of translation invariance implies that
V (x1 , x2 , y1 , y2 , z1 , z2 ) = V (x1 − x2 , y1 − y2 , z1 − z2 ) (4.52)
So, translational invariance implies that the potential is a function of three variables.
2. Let’s now discuss rotational symmetry. We start with rotation in two dimensions. It is
given by
θ can take any value between 0 and 2π, and this is a continuous symmetry.
The consequence of rotation symmetry is that the following quantity is conserved
xpy − ypx (4.54)
If we find this quantity to be conserved in an experiment, then we know that there is
rotational symmetry in the x − y plane, and as a consequence, the potential can only be
a function of (x2 + y 2 ) and z. So, rotational symmetry in the x − y plane, along with translational
symmetry, implies that the potential can only be a function of 2 variables.
The discussion on rotation can be extended to any number of dimensions. For that, we
have to choose a 2-plane and perform the transformation like above. For example, in
3-dimensions, we have three linearly independent 2-planes 5 and we can perform rotation
in any one of them. The rotational symmetry is also known as the isotropy of space, and
the consequence of that is the conservation of angular momentum.
From the isotropy of space (rotational invariance), we can conclude that V can only be a
function of r = √( (x1 − x2 )2 + (y1 − y2 )2 + (z1 − z2 )2 )
V (x1 − x2 , y1 − y2 , z1 − z2 ) = V (r) (4.55)
We can see that, using symmetry, we can reduce a problem of six variables to a problem of
one variable. The only thing yet to be determined is the functional dependence on r, e.g. an
expansion of the form
V (r) = Σ_{n∈R} an r^n (4.56)
3. (The details of this part of the discussion are outside the scope of this course. You will be
learning this next year in the Quantum Field theory course). If we use the framework of
quantum field theory, then we can even specify the functional form of the potential. The
only form of potential that can arise in a unitary relativistic quantum field theory is
e−mr
V (r) = g 2 , m≥0 (4.57)
r
For a long-range interaction, m is 0. Furthermore, Weinberg showed that if the potential is
due to the exchange of a massless spin-two particle, then the force can only be universally
attractive; the equality of inertial mass and gravitational mass follows from underlying
Poincare invariance.
5 In n dimensions, there are n(n − 1)/2 independent two-planes.
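The claim in item 2, that rotational symmetry of the potential goes hand in hand with conservation of x py − y px, can be tested numerically. The sketch below (an arbitrary central potential V = (x² + y²)²/2 with m = 1, integrated with a symplectic Euler step) is only an illustration:

```python
# Numerical check: for a rotationally symmetric potential V = (x^2 + y^2)^2 / 2
# (an arbitrary central example, m = 1), the angular momentum L = x*py - y*px
# stays constant along the trajectory.  Symplectic Euler integration.
def force(x, y):                     # F = -grad V = -2 (x^2 + y^2) * (x, y)
    r2 = x * x + y * y
    return -2 * r2 * x, -2 * r2 * y

x, y, px, py = 1.0, 0.0, 0.0, 0.7    # arbitrary initial condition
dt = 1e-4
L0 = x * py - y * px
for _ in range(100_000):
    fx, fy = force(x, y)
    px, py = px + fx * dt, py + fy * dt
    x, y = x + px * dt, y + py * dt

print(abs(x * py - y * px - L0) < 1e-6)   # -> True: L is conserved
```

Because the force always points along the position vector, each integration step changes L by exactly zero (up to roundoff), so the conservation here is not an artifact of small step size.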
4.2.3 Symmetry and groups
We have been discussing symmetry in classical systems and its consequences. Now, we discuss
the relationship between symmetry in physics and its relation with groups in mathematics. We
discuss the relation in a concrete example first.
Consider a system in two dimensions. Let the Lagrangian be given by
(1/2) m (ẋ² + ẏ²) − (1/2) k (x² + y²) (4.58)
The Lagrangian has rotational symmetry
Rθ denotes the rotation by an angle θ. Let’s now consider the collection of all such rotations
Now we define Rθ ◦ Rφ : this means that first we rotate by φ and then we rotate by θ. We can
use the above expression to check that
Rθ ◦ Rφ = Rθ+φ (4.61)
Rθ , Rφ ∈ G =⇒ Rθ ◦ Rφ ∈ G (4.62)
Let’s now consider three elements of G: Rθ , Rφ , Rχ . Then, from the above equation, we can check
Rθ=0 ≡ 1 , 1 ◦ Rθ = Rθ = Rθ ◦ 1 (4.64)
When these four properties are satisfied by the elements of a set, then mathematically, it is
called a group.
We leave it to the readers to show that this is true for any classical system: the set of all
symmetry transformations forms a group.
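The group law (4.61) can be verified directly on 2 × 2 rotation matrices; a minimal pure-Python check:

```python
import math

# The composition law (4.61): R(theta) R(phi) = R(theta + phi) for 2x2
# rotation matrices -- the check that planar rotations close under composition.
def R(t):
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

theta, phi = 0.7, 1.9            # arbitrary angles
lhs, rhs = matmul(R(theta), R(phi)), R(theta + phi)
ok = all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
print(ok)   # -> True
```

The same check with the arguments swapped shows R(θ)R(φ) = R(φ)R(θ): rotations in a plane form an abelian group.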
Now the question is, what do we mean by symmetry in quantum theory? The fundamental
observable in quantum mechanics is the probability, which is simply a mod squared of the
probability amplitude. The probability doesn’t change if we multiply the state vector/wave
function by a phase; ψ and eiθ ψ represent the same physical state. A collection of vectors which
differ only by a phase is mathematically called a ray. So a physical state in quantum mechanics
is described by rays in Hilbert space.
In quantum systems, any transformation that leaves the probability invariant is called a
symmetry transformation. Let’s denote the rays as Ri . U is a symmetry transformation if
U : Ri −→ Ri^(U) , |〈Ri |Rj 〉|² = |〈Ri^(U) |Rj^(U) 〉|² (4.66)
1. Ui , Uj are two symmetry transformations; then, from the definition, it follows that Ui Uj
is also a symmetry transformation. This is mathematically written as
Ui , Uj ∈ G =⇒ Ui Uj ∈ G (4.68)
Physically, this means that for every transformation, you can always do the opposite
transformation so that you can go back to “doing nothing.”
Whenever the members of a set satisfy these four properties, then that set is called a group.
If the product of any element doesn’t depend on the order Ui Uj = Uj Ui , then it is called an
abelian group. Otherwise, it is called a non-abelian group.
Wigner’s symmetry representation theorem is one of the most important results in the mathematical
formulation of quantum mechanics. The symmetries of a theory form a group. According
to this theorem,
This article by David Gross beautifully explains the role and importance of symmetry in
Physics.
The proof of Wigner’s symmetry representation theorem can be found in chapter 2 of
QFT by Weinberg. You can also read this resonance article. In case you are interested in
learning the proof by solving a problem, please see problem 5 in this assignment by Alan
Guth.
For the moment, we restrict to linear unitary transformations. Let’s also consider continuous
transformations. A continuous transformation is a transformation which depends on
continuous parameter(s). For simplicity, let’s also assume that the continuous transformation
depends on only one real parameter θ. We write the corresponding unitary operator as
Since the set of continuous transformations forms a group, there is an identity element. Let’s
say for θ = θ0 , the unitary transformation becomes an identity transformation
U (θ0 ) = 1 (4.72)
Now let’s consider the value of θ = θ0 + δθ where δθ << 1. In this case, we Taylor expand U (θ)
6 and keep only the first-order terms.
U ′ (θ0 ) = iV , V† =V (4.75)
So the unitary transformation, infinitesimally away from the identity element, takes the form
We found that by considering a unitary transformation infinitesimally away from the identity
element, we obtained a self-adjoint operator. Here, we assumed that the continuous transformation
depends on only one parameter. This statement can be generalised to multiple parameters. We
leave it to the readers to check that for a transformation that depends on n real parameters, we
get n self-adjoint operators (Exercise). The converse is also true: every self-adjoint
operator provides us with an (infinitesimal) unitary transformation.
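A small numerical illustration of the converse statement: exponentiating iθV for a self-adjoint V produces a unitary operator. Here V = σx is an arbitrary 2 × 2 self-adjoint choice, and the exponential is a truncated Taylor series:

```python
# Sketch: U(theta) = exp(i*theta*V) is unitary when V is self-adjoint.
# V = sigma_x is an arbitrary 2x2 self-adjoint example; exp is computed by a
# truncated Taylor series (adequate for small matrices).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=40):
    out = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = [[t / n for t in row] for row in term]   # term <- A^n / n!
        term = matmul(term, A)
        out = [[out[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return out

theta = 0.9
V = [[0.0, 1.0], [1.0, 0.0]]                            # sigma_x: V^dagger = V
U = expm([[1j * theta * v for v in row] for row in V])
Ud = [[U[j][i].conjugate() for j in range(2)] for i in range(2)]
UUd = matmul(U, Ud)
print(all(abs(UUd[i][j] - (i == j)) < 1e-10 for i in range(2) for j in range(2)))
```

The printed value is True: U U† is the identity to numerical precision, confirming that the self-adjoint generator exponentiates to a unitary transformation.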
What is a representation? Let’s say G is a group and H is a Hilbert space. If for every
element g there exists an operator U (g) that acts on the Hilbert space in such a way that
the product of two such operators obeys the group multiplication law up to a phase,
then H is called a representation space, and the dimension of H is called the dimension of the
representation. If the U (g)s are linear unitary (anti-linear, anti-unitary) operators, then it is called
a unitary (anti-unitary) representation.
Action of symmetry on states and operators Let H be the Hilbert space of a theory.
Then, according to Wigner’s theorem, if S is a symmetry, then there is an operator PS which is
either linear and unitary, or anti-linear and anti-unitary. Let’s consider a state |α〉 and define |α̃〉
Then
|〈α̃|α̃〉|2 = |〈α|α〉|2 (4.79)
The action of an operator on the bra is from the right. We need to settle how a symmetry
operator acts on other operators of the theory. In order to find that consider another state |β〉,
which is obtained by the action of an operator O on |α〉
[H, U ] = 0 (4.83)
where U is a symmetry transformation. If it is a continuous transformation, then we can expand
it infinitesimally away from the identity element
Thus, for every continuous symmetry in quantum mechanics, there is a self-adjoint operator
which commutes with the Hamiltonian. The simplest example is translation symmetry. For
example, in the case of free particles, we can talk about the transformations
x→x+a (4.85)
Here, a is a continuous parameter. So, in quantum theory, there is a unitary operator U (a) for this
continuous transformation. We can always consider an infinitesimal translation and Taylor
expand it
U (δa) = 1 + iP δa (4.86)
The self-adjoint operator that we get from infinitesimal translation is called the momentum operator
p.
The position is also an operator in Quantum Mechanics. From the definition of translation
operators, we get
Thus, from the action of symmetry, we can see that X and P do not commute. This relation is
also known as the Canonical commutation relation.
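The canonical commutation relation can be checked numerically by acting with x p − p x, where p = −i d/dx (ℏ = 1), on an arbitrary smooth test function. The sketch below uses central finite differences:

```python
import cmath

# Check [x, p] psi = i psi numerically, with p = -i d/dx and hbar = 1.
# psi below is an arbitrary smooth, normalizable test function.
h = 1e-5

def psi(x):
    return cmath.exp(-x * x + 0.3j * x)

def d(f, x):                      # central-difference derivative
    return (f(x + h) - f(x - h)) / (2 * h)

def comm_on_psi(x):               # (x p - p x) psi evaluated at the point x
    xp = x * (-1j) * d(psi, x)
    px = -1j * d(lambda y: y * psi(y), x)
    return xp - px

x0 = 0.4
print(abs(comm_on_psi(x0) - 1j * psi(x0)) < 1e-6)   # -> True
```

Acting on any test function, the commutator reproduces i times the function itself, which is the operator statement [x, p] = i (in units with ℏ = 1).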
4.3.4 Action of symmetry on the Hilbert space for one dim Harmonic oscil-
lator
We now demonstrate an example of Wigner’s symmetry representation theorem. We consider
a one-dimensional Harmonic Oscillator. We rewrite everything in the Bra-Ket notation. The
Hamiltonian is given by
1
H = (p2 + ω 2 x2 ) (4.89)
2
In order to quantize the system, we promote the variables x and p to operators and impose
canonical commutation relation between them
[x̂, p̂] = iℏ (4.90)
Then, the excited states are given by
|n〉 = (a† )^n /√(n!) |0〉 (4.93)
n is a non-negative integer. Let’s now discuss various symmetries of the system and their action
on the Hilbert space. From Wigner’s theorem, we know that the action of a
symmetry can either be linear and unitary, or anti-linear and anti-unitary. The way to settle between
these two is to demand that the action of the symmetry preserve the canonical commutation
relation.
Parity symmetry Let’s consider the Hamiltonian in (4.89). It has parity symmetry
According to the Wigner symmetry representation theorem, there is an operator P that acts
on the Hilbert space. We are yet to determine whether it is a linear unitary operator or an
anti-linear, anti-unitary operator. From the transformation of x and p, we know that
Consistency with the canonical commutation relation (??) demands that this symmetry must
be realized by a linear, unitary operator
Then, the action on the creation and annihilation operators are given by
Now, we want to find its action in the Hilbert space. This symmetry squares to identity
P2 = 1 (4.98)
So, the eigenvalue of this operator can only be ±1. The vacuum must be an eigenstate of P;
because if it is not an eigenstate, then the action of P will give another state of the same energy.
But we know the vacuum is unique. So, the vacuum must be an eigenstate of P. Let the action
on the vacuum state be
α is undetermined. Then, a state with occupation number n transforms in the following way
Any such redefinition won’t affect (??). Since we can do this redefinition, the phase α is not
physically observable; it can always be absorbed through a redefinition.
Here, we take a small detour and show that the eigenvalues (which are often referred to
as “Quantum numbers” in physics literature) of a linear unitary operator, corresponding to a
symmetry, remain unchanged under time evolution. We demonstrate with the Parity operator.
Let U (t, t0 ) be the time evolution operator. Then
Please note that the linear unitary nature of the operator is important to show the above relation.
Now, let’s consider the state |n〉 at two different instances in time. Let t0 be the initial time,
and we know the eigenvalue at that time
We will now try to determine the eigenvalue of the parity operator at a later point in time
P|n(t)〉 = PU (t, t0 )|n(t0 )〉 = PU (t, t0 )P −1 P|n(t0 )〉 = U (t, t0 )(−1)n |n(t0 )〉 = (−1)n |n(t)〉
(4.104)
So, we can see that the eigenvalue of the parity operator does not change with time. In other
words, the eigenvalue of the parity operator is a conserved quantity. We already know that
in classical mechanics, every continuous symmetry is associated with a conservation law. This
remains true for continuous symmetry in Quantum Mechanics. In Quantum mechanics, the
statement can even be extended to linear unitary discrete symmetries. In that case, the corre-
sponding operator has eigenvalues, and the eigenvalues remain the same under time evolution.
Time reversal symmetry Now we will consider another discrete symmetry: Time reversal
symmetry. Under time reversal, the position remains invariant, but the momentum changes the
sign (and hence the direction)
Then, according to Wigner’s theorem, there must be a symmetry operator that acts on the
Hilbert space. Let the action of Time-reversal transformation be generated by T ; then
Then, the action on the creation and annihilation operators are given by
T H T −1 = H (4.109)
4.4 Uncertainty relations
In this section, we discuss uncertainty relations. Heisenberg’s Uncertainty principle is one of the
most celebrated results in Quantum Mechanics. A common misconception about Heisenberg’s
uncertainty principle is that the uncertainty in a quantum system is due to the disturbance that
one creates during observation. This is not correct. Uncertainty relations in quantum systems
are due to the fact that any observable is a self-adjoint operator, and self-adjoint operators do not
necessarily commute with each other. Whenever two operators do not commute with each other,
there is some uncertainty relation between them.
θ is the angle between the two vectors. The equality holds only when θ = 0, π, i.e. A and B,
are along the same direction.
This is a special case of a more general inequality known as the Cauchy-Schwarz inequality.
Consider two vectors φ and χ in an inner product space. The Cauchy-Schwarz inequality states
that
(φ, φ)(χ, χ) ≥ |(φ, χ)|2 (4.111)
Proof We start with a two-dimensional real vector space to prove this identity
(X, X)(Y, Y ) − (X, Y )2 = (x21 + x22 )(y12 + y22 ) − (x1 y1 + x2 y2 )2 = (x2 y1 − x1 y2 )2 ≥ 0 (4.113)
(X, X)(Y, Y ) − (X, Y )² = ( Σ_{i=1}^{N} xi² )( Σ_{i=1}^{N} yi² ) − ( Σ_{i=1}^{N} xi yi )² (4.115)
When we expand the expression, the terms of the form xi² yi² cancel. So we are left with
Σ_{i=1}^{N} Σ_{j∕=i} xi² yj² − 2 Σ_{i<j} (xi yi )(xj yj ) = Σ_{i<j} (xi yj − xj yi )² ≥ 0 (4.116)
This concludes our proof for any finite-dimensional real vector space.
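The finite-dimensional computation above can be spot-checked numerically for random real vectors:

```python
import random

# Spot-check of the Lagrange identity behind (4.116): for real vectors X, Y,
# (X,X)(Y,Y) - (X,Y)^2 equals sum_{i<j} (x_i y_j - x_j y_i)^2, hence >= 0.
random.seed(1)
N = 6
X = [random.uniform(-1, 1) for _ in range(N)]
Y = [random.uniform(-1, 1) for _ in range(N)]
dot = lambda u, v: sum(a * b for a, b in zip(u, v))
lhs = dot(X, X) * dot(Y, Y) - dot(X, Y) ** 2
rhs = sum((X[i] * Y[j] - X[j] * Y[i]) ** 2
          for i in range(N) for j in range(i + 1, N))
print(abs(lhs - rhs) < 1e-12 and lhs >= 0)   # -> True
```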
Now we generalize it to any inner product space. Consider two vectors φ and χ. In order
to prove (4.111), consider the vector φ + c χ (c is a complex number); from the property of the
inner product, it follows that the norm is non-negative
Note that the equality holds only if there is a choice of c such that
(φ + c χ, φ + c χ) = 0 (4.119)
φ + cχ = 0 (4.120)
Introducing Bra-ket Bra-ket notation was introduced by Dirac, and it is very widely used
in Quantum Mechanics literature. In this notation, a vector is denoted as follows
ψ −→ |ψ〉 (4.121)
〈ψ| is called a bra.
For example, in the Bra-ket notation, the Cauchy-Schwarz inequality is written as
|α〉 is normalized. The expectation value depends on the state. For example, if |α〉 is an eigenstate,
then the expectation value is just the eigenvalue. Now, we define the variance σO of the operator
O with respect to the state |α〉.
σO² = 〈α|O² |α〉 − 〈α|O|α〉² (4.125)
The variance depends on the state α. For example, if we choose |α〉 to be an eigenstate of O,
then the variance is zero.
Given a state |α〉 and operator A, we define the following state
and 〈A2 〉
〈A2 〉 = |c1 |2 λ21 + |c2 |2 λ22 = λ22 + |c1 |2 (λ21 − λ22 ) (4.132)
Then, the variance is given by
Now, we compute two more quantities that will be useful later. We start with Im 〈αA |αB 〉
Im 〈αA |αB 〉 = (1/2i) ( 〈αA |αB 〉 − 〈αB |αA 〉 ) = (1/2i) 〈[A, B]〉 (4.135)
Now we compute Re 〈αA |αB 〉. At first, we define the anti-commutator { , } of two operators
{A, B} = AB + BA (4.136)
Let’s now apply Cauchy-Schwarz inequality for the two states |αA 〉 and |αB 〉
This relation is known as the Heisenberg uncertainty relation7 . We can see that if the operators
do not commute, we cannot set the variance of both operators to zero. The product of the
variance is bounded from below by the commutator. Note that the bound on the left-hand side
depends on the state. In general, the commutator of two operators is another operator. The
relation becomes even simpler when the commutator of the two operators is a complex number.
Let’s say
[A, B] = c (4.143)
Then we get
σA² σB² ≥ |c|²/4 (4.144)
So, if the commutator is a complex number, then the right-hand side becomes independent of
the state, and the uncertainty relation has to be satisfied by any state. One such example is x
and p; the commutator is given by
[x, p] = iℏ (4.145)
No quantum state can have definite momentum and position simultaneously. Please note
that this has nothing to do with measurement and disturbing the system from the outside.
The uncertainty relations follow from the inherent non-commuting nature of the observables in
Quantum Mechanics.
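As a numerical illustration (ℏ = 1; the Gaussian width s = 0.7 is arbitrary; the Gaussian is the state that saturates the bound):

```python
import math

# For a Gaussian psi(x) ~ exp(-x^2/(4 s^2)) with hbar = 1, the product
# sigma_x * sigma_p equals 1/2, saturating the bound from [x, p].
s = 0.7                                   # arbitrary width
h, L = 1e-3, 10.0
xs = [i * h for i in range(int(-L / h), int(L / h) + 1)]
psi = [math.exp(-x * x / (4 * s * s)) for x in xs]
norm = sum(p * p for p in psi) * h
x2 = sum(x * x * p * p for x, p in zip(xs, psi)) * h / norm      # <x^2>
dpsi = [(psi[i + 1] - psi[i - 1]) / (2 * h) for i in range(1, len(psi) - 1)]
p2 = sum(dp * dp for dp in dpsi) * h / norm                      # <p^2> = int |psi'|^2
uncertainty = math.sqrt(x2 * p2)
print(round(uncertainty, 4))   # -> 0.5
```

Since 〈x〉 = 〈p〉 = 0 for this state, x2 and p2 are the variances, and their product reproduces the minimum value 1/4 allowed by the uncertainty relation.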
7 This form of the uncertainty relation was given by H. Robertson in this work.
Ni = a†i ai , N = N1 + N2 , H = ℏω (N + 1) (4.156)
To understand this, note that we can construct four operators of the following form
a†i aj (4.158)
But these operators do not commute amongst themselves. Actually, one linear combination of
these operators gives the Hamiltonian. We can construct three other operators
J3 = (ℏ/2) (a†1 a1 − a†2 a2 ) , J1 = (ℏ/2) (a†1 a2 + a†2 a1 ) , J2 = (iℏ/2) (a†2 a1 − a†1 a2 ) (4.160)
Then we leave it to the reader to check that
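A pure-Python sketch of this check (ℏ = 1): states are stored as dictionaries {(n1, n2): amplitude}, the ladder operators act exactly, and we verify [J1, J2] = iJ3 on an explicit state.

```python
import math

# Sketch (hbar = 1): Schwinger's oscillator construction of su(2).
# a_i and a_i^dagger act exactly on states {(n1, n2): amplitude}, so we can
# check the commutator [J1, J2] = i J3 on an explicit state.
def ap(i, st):    # creation operator for mode i (i = 1 or 2)
    return {(n1 + (i == 1), n2 + (i == 2)): a * math.sqrt((n1, n2)[i - 1] + 1)
            for (n1, n2), a in st.items()}

def am(i, st):    # annihilation operator for mode i
    return {(n1 - (i == 1), n2 - (i == 2)): a * math.sqrt((n1, n2)[i - 1])
            for (n1, n2), a in st.items() if (n1, n2)[i - 1] > 0}

def sadd(*states):          # sum of states
    out = {}
    for st in states:
        for k, v in st.items():
            out[k] = out.get(k, 0) + v
    return out

def smul(c, st):            # scalar multiple of a state
    return {k: c * v for k, v in st.items()}

J1 = lambda st: smul(0.5, sadd(ap(1, am(2, st)), ap(2, am(1, st))))
J2 = lambda st: smul(-0.5j, sadd(ap(1, am(2, st)), smul(-1, ap(2, am(1, st)))))
J3 = lambda st: smul(0.5, sadd(ap(1, am(1, st)), smul(-1, ap(2, am(2, st)))))

psi = {(2, 1): 1.0}                                  # the state |n1=2, n2=1>
comm = sadd(J1(J2(psi)), smul(-1, J2(J1(psi))))      # [J1, J2] psi
target = smul(1j, J3(psi))                           # i J3 psi
print(all(abs(comm[k] - target.get(k, 0)) < 1e-12 for k in comm))   # -> True
```

The same dictionaries can be reused to verify the ladder action (4.193) state by state.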
4.5.1 A digression
We found that the two-dimensional oscillator has three self-adjoint operators Ji which commute
with the Hamiltonian and have non-trivial commutators amongst themselves. So here we take a
slightly more general perspective. Let’s consider any quantum mechanical system which has three
self-adjoint operators J1 , J2 & J3 such that they satisfy
[Ji , Jj ] = iℏ ϵijk Jk (4.162)
J± = J1 ± iJ2 (4.163)
We want to find the representation of this algebra on a Hilbert space. There are four operators:
J 2 and J1 , J2 & J3 . The first one commutes with everything, but the last three do not commute
amongst themselves; as a result, we cannot simultaneously diagonalize them. We write the states
in terms of the eigenvalues of J 2 and J3
From this, we can conclude that the action of J+ does not change the J 2 eigenvalue, but it
changes the J3 eigenvalue by +ℏ.
Putting this into the expectation value 〈α, βmax |J 2 |α, βmax 〉, we obtain
α = 〈α, βmax |J 2 |α, βmax 〉 = 〈α, βmax | ( J3² + J− J+ + ℏJ3 ) |α, βmax 〉 = βmax² + ℏβmax (4.177)
We obtain
α = βmax (βmax + ℏ) = βmin (βmin − ℏ) (4.179)
J3 |n1 , n2 〉 = (ℏ/2)(n1 − n2 ) |n1 , n2 〉 (4.192)
The action of J± is given by
J+ |n1 , n2 〉 = ℏ√((n1 + 1)n2 ) |n1 + 1, n2 − 1〉 , J− |n1 , n2 〉 = ℏ√((n2 + 1)n1 ) |n1 − 1, n2 + 1〉
(4.193)
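The matrix elements of J± on the two-mode Fock basis can be verified numerically using J+ = ℏ a†1 a2 from the Schwinger construction. A minimal sketch (the cutoff and units ℏ = 1 are illustrative assumptions):

```python
import numpy as np

hbar = 1.0
dim = 8  # single-mode Fock cutoff (assumption: large enough)

a  = np.diag(np.sqrt(np.arange(1, dim)), k=1)
I  = np.eye(dim)
a1, a2 = np.kron(a, I), np.kron(I, a)
Jp = hbar * a1.conj().T @ a2   # J+ = hbar * a1^dagger a2

def fock(n1, n2):
    """Basis vector |n1, n2> in the tensor-product ordering n1*dim + n2."""
    v = np.zeros(dim * dim); v[n1 * dim + n2] = 1.0
    return v

n1, n2 = 2, 3
lhs = Jp @ fock(n1, n2)
rhs = hbar * np.sqrt((n1 + 1) * n2) * fock(n1 + 1, n2 - 1)
print(np.allclose(lhs, rhs))  # True
```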
Alternatively, we can write the states in the following basis
ℓ is a half-integer.
This is precisely the action of L+ as we know it from the representation theory of angular momentum.
The ground state transforms trivially under su(2); all the states with energy ℏω(ℓ + 1)
transform in the spin-ℓ/2 representation 8 .
Schwinger considered this model to compute various quantities in angular momentum theory, including
the quantum rotation matrix. One can consult On Angular Momentum for the original
treatment by Julian Schwinger. It can also be found in sec 3.8 of Sakurai's Modern Quantum Mechanics.
8
It is a remarkable fact that every representation of su(2) appears exactly once in the Hilbert space of the two-dimensional isotropic oscillator!
1. The system has three self-adjoint operators Ji which commute with the Hamiltonian
[H, Ji ] = 0 , [Ji , Jj ] ≠ 0 (4.197)
As a result, the action of these operators does not change the eigenvalue of H
So, not all of them are simultaneously diagonalizable! At most, one of them can be
diagonalized. Let’s choose it to be J3 . Then, every state in the Hilbert space can be
written in terms of the eigenvalue of H and J3
3. From J1 and J2 , we can construct two operators such that their commutator with J3 is
proportional to the operator.
J± = J1 ± iJ2 =⇒ [J3 , J± ] = ±ℏJ± (4.201)
Since J3 is a self-adjoint operator, two states with different eigenvalues of J3 are necessarily
orthogonal.
〈E, ℓ| J± |E, ℓ〉 = 0 (4.203)
As a result, the action of J± changes the state without changing the energy eigenvalue9 . This is
the reason behind the degeneracy of the system.
Now, the product of any finite number of unitary operators is also a unitary operator.
All such matrices form a group. This gives a continuous symmetry group with N parameters.
The number of real independent parameters is called the dimension of the group. Hence, N self-
adjoint linearly independent conserved operators give a continuous symmetry group of dimension
N.
Abelian and non-abelian nature of the symmetry group We know that the symmetries of
a system form a group. We have seen above that the U operators form a group. If two different elements
commute with each other, the group is called an abelian group. Otherwise, it is a non-abelian
group. Here, we would like to understand whether the symmetry group is abelian or not.
Let’s start with the case when there is only one conserved self-adjoint operator. Then from
the expression given in (4.206), we can see that
So, the symmetry group is abelian if and only if the conserved self-adjoint operators commute
with each other. Otherwise, it is a non-abelian group.
From the previous discussion, then, it follows that the existence of a non-abelian symmetry
group implies that the spectrum is degenerate 10 . In fact, the bigger the non-abelian symmetry
group, the more degenerate the system is. One can check this statement in a very simple
example: the N -dimensional isotropic harmonic oscillator. In this case, the system has
a non-abelian SU (N ) symmetry. One can also check that the degeneracy increases rapidly with N .
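Concretely, the degeneracy of the n-th level of the N-dimensional isotropic oscillator is the number of ways to distribute n quanta among N modes, the stars-and-bars count C(n + N − 1, N − 1). A quick sketch (the function name is an illustrative choice):

```python
from math import comb

def degeneracy(N, n):
    """Number of ways to distribute n quanta among N oscillators."""
    return comb(n + N - 1, N - 1)

# degeneracy of the n-th level for N = 2 and N = 3
print([degeneracy(2, n) for n in range(5)])  # [1, 2, 3, 4, 5]
print([degeneracy(3, n) for n in range(5)])  # [1, 3, 6, 10, 15]
```

For N = 2 this reproduces the familiar n + 1 states per level; for larger N the growth is polynomial of degree N − 1, so the degeneracy indeed increases rapidly with N.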
H , x , x2 , p , p2 (4.216)
H = p²/2m + V (x) (4.221)
Compute the following commutators
(a) Find the expression for |λ〉 in terms of the energy eigenstates
(b) Compute the following inner product
(a) Consider energy eigenstates of the Harmonic oscillator and show that they satisfy the
Heisenberg uncertainty principle.
(b) Show that any Harmonic oscillator state satisfies the uncertainty principle.
(c) Please repeat the previous two steps to demonstrate that the Schrödinger uncertainty
principle is also satisfied.
(a) Consider the energy eigenstates of the system and show that they satisfy the Heisen-
berg uncertainty principle.
(b) Show that any state of the Particle in a box satisfies the uncertainty principle.
(c) Please repeat the previous two steps to demonstrate that the Schrödinger uncertainty
principle is also satisfied.
(d) Does any of these states saturate the bound?
Then compute
7. Consider N independent Harmonic oscillators with the same frequency. The creation and
annihilation operators satisfy
Find the energy eigenstates of this model and find the degeneracy of the states with energy
ℏω(N/2 + n).