Fall 2011
Notes 11
Rotations in Ordinary Space
1. Introduction
The theory of rotations is of direct importance in all areas of atomic, molecular, nuclear and
particle physics, and in large areas of condensed matter physics as well. The ideas developed in this
and subsequent sets of notes on rotations in quantum mechanics will recur many times in this course,
continuing through the second semester. The rotation group is the first nontrivial symmetry group
we encounter in a study of quantum mechanics, and serves as a paradigm for other symmetry groups
one may encounter later, such as the SU(3) symmetry that acts on the color degrees of freedom in
quantum chromodynamics. Furthermore, transformations that have the same mathematical form as
rotations but which have nothing to do with rotations in the usual physical sense, such as isotopic
spin transformations in nuclear physics, are also important. Rotations are also the spatial part of
Lorentz transformations, and Lorentz invariance is one of the basic principles of modern physics.
We will study Lorentz transformations in quantum mechanics in Physics 221B.
These notes will deal with rotations in ordinary 3-dimensional space, such as they would be
used in classical physics. We will deal with quantum representations of rotations later.
2. Inertial Frames
The concept of an inertial frame is the same in Newtonian (nonrelativistic) mechanics and in
special relativity, both in classical mechanics and in quantum mechanics. It requires modification
only in general relativity, that is, when gravitational fields are strong.
We define an inertial frame as a frame in which free particles (particles upon which no forces act)
move in straight lines with constant velocity. An inertial frame has three mutually orthogonal axes,
with respect to which coordinates of points can be measured. It is assumed that the measurements
of distance obey the rules of Euclidean geometry in three-dimensional space. This is a physical
assumption which can be tested experimentally. In fact, the geometry of space is not exactly
Euclidean, due to the effects of gravitational fields; but in weak fields the deviations are very small
and we shall ignore them. It is another physical assumption that there exists a frame in which free
particles move on straight lines with constant velocity, but given the existence of one such frame,
a whole family of other inertial frames may be generated by applying translations, rotations and
boosts. The requirement regarding the uniform, rectilinear motion of free particles excludes rotating
and/or accelerating frames.
3. Rotation Operators

Consider an operator R that acts on the points of three-dimensional space, mapping each point P into a new point P′,

P′ = R(P),   (1)

in such a way that all distances are preserved and some point O is left fixed. We will call such an operator a rotation about the point O. We associate points P and P′ with vectors r = OP and r′ = OP′ that are based at O. Then we can also regard R as an operator acting on such vectors, and write

r′ = Rr.   (2)
Because R preserves all distances and angles, it maps straight lines into straight lines, and parallelograms into congruent parallelograms. It follows that R is a linear operator,
R(aA + bB) = aRA + bRB,
(3)
for all vectors A and B based at O and all real numbers a and b. It also follows that R is invertible,
and thus R⁻¹ exists. This is because a linear operator is invertible if the only vector it maps into
the zero vector 0 is 0 itself, something that is satisfied in this case because 0 is the only vector of
zero length.
The product of two rotations is denoted R1 R2 , which means, apply R2 first, then R1 . Rotations
do not commute in general, so that R1R2 ≠ R2R1, in general. It follows from the definition that if R, R1 and R2 are rotation operators, then so are R⁻¹ and R1R2. Also, the identity operator is a
rotation. These facts imply that the set of rotation operators R forms a group.
We can describe rotations in coordinate language as follows. We denote the mutually orthogonal unit vectors along the axes of the inertial frame by (ê1, ê2, ê3) or (x̂, ŷ, ẑ). We assume the frame is right-handed. Vectors such as r or r′ can be expressed in terms of their components with respect to these unit vectors,

r = Σ_i ê_i x_i,   r′ = Σ_i ê_i x′_i.   (4)
The rotation R acts on the unit vectors themselves according to

R ê_j = Σ_i ê_i R_ij,   (5)
where Rij are the components of the matrix R. (In these notes, we use sans serif fonts for matrices,
but ordinary italic fonts for their components.) The definition (5) can be cast into a more familiar
form if we use a round bracket notation for the dot product,
A · B = (A, B),   (6)

where A and B are arbitrary vectors. Then the definition (5) can be written,

R_ij = (ê_i, R ê_j).   (7)
This shows that Rij are the matrix elements of R with respect to the basis (ê1, ê2, ê3), in much the same manner in which we define matrix elements of operators in quantum mechanics.
The definition (7) provides an association between geometrical rotation operators R and matrices R which depends on the choice of coordinate axes (ê1, ê2, ê3). If the matrices R, R1 and R2 correspond to the operators R, R1 and R2, then the matrix product R1R2 corresponds to the operator product R1R2 (multiplication in the same order), and the matrix R⁻¹ corresponds to the operator R⁻¹. As we say, the matrices R form a representation of the geometrical rotation operators R.
Assuming that r and r′ are related by the rotation R as in Eq. (2), we can express the components x′_i of r′ in terms of the components x_i of r. We have

x′_i = ê_i · r′ = ê_i · (Rr) = ê_i · R Σ_j ê_j x_j = Σ_j R_ij x_j,   (8)

where we use the linearity of R and the definition (7) of Rij. We can write this in matrix-vector notation as

r′ = Rr,   (9)

where now r, r′ are seen, not as geometrical vectors as in Eq. (2), but rather as triplets of numbers, that is, the coordinates of the old and new points with respect to the basis ê_i.
4. Active versus Passive Point of View
When dealing with rotations or any other symmetry group in physics, it is important to keep
distinct the active and passive points of view. In this course we will adopt the active point of view
unless otherwise noted (as does Sakurai's book). Many other books, however, take the passive point
of view, including some standard monographs on rotations, such as Edmonds. The active point of
view is usually preferable, because it is more amenable to an abstract or geometrical treatment,
whereas the passive point of view is irrevocably chained to coordinate representations.
In the active point of view, we usually imagine one coordinate system, but think of operators
that map old points into new points. Then an equation such as r′ = Rr indicates that r and r′ are the coordinates of the old and new points with respect to the given coordinate system. This is the
interpretation of Eq. (9) above. In the active point of view, we think of rotating our physical system
but keeping the coordinate system fixed.
In the passive point of view, we do not rotate our system or the points in it, but we do rotate
our coordinate axes. Thus, in the passive point of view, there is only one point, but two coordinate
systems. To incorporate the passive point of view into the discussion above, we would introduce the
rotated frame, defined by

ê′_i = R ê_i,   (10)
and then consider the coordinates of a given vector with respect to the two coordinate systems
and the relations between these components. In a book that adopts the passive point of view, an
equation such as r′ = Rr probably represents the coordinates r and r′ of a single point with respect
to two (the old and new) coordinate systems. With this interpretation, the matrix R has a different
meaning than the matrix R used in the active point of view (such as that in Eq. (9) and elsewhere in
these notes), being in fact the inverse of the latter. Therefore caution must be exercised in comparing
different references. In this course we will make little use of the passive point of view.
5. Properties of Rotation Matrices; the Groups O(3) and SO(3)
Since the rotation R preserves lengths, we have
|r′|² = |r|²,   (11)

when r, r′ are related by Eq. (9). Since this is true for arbitrary r, we have

Rᵀ R = I.   (12)
This is the definition of an orthogonal matrix. The set of all 3 × 3 real orthogonal matrices is
denoted O(3) in standard notation, so our rotation matrices belong to this set. In fact, since every
orthogonal matrix in O(3) corresponds to a rotation operator R by our definition, the space of
rotations is precisely the set O(3). The set O(3) forms a group under matrix multiplication that is
isomorphic to the group of geometrical rotation operators R introduced above.
Equation (12) implies
Rᵀ = R⁻¹,   (13)

so that we also have

R Rᵀ = I   (14)

(in the reverse order from Eq. (12)). Taken together, Eqs. (12) and (14) show that the rows of an
orthogonal matrix constitute a set of orthonormal vectors, as do the columns.
Taking determinants, Eq. (12) also implies (det R)² = 1, or

det R = ±1.   (15)

Orthogonal matrices for which det R = +1 are said to be proper rotations, while those with det R = −1 are said to be improper. Proper rotations have the property that they preserve the sense (right-handed or left-handed) of frames under the transformation (10), while improper rotations reverse this sense.
The set of proper rotations by itself forms a group, that is, the property det R = +1 is preserved
under matrix multiplication and inversion. This group is a subgroup of O(3), denoted SO(3), where
the S stands for special, meaning in this case det R = +1. The set of improper rotations does
not form a group, since it does not contain the identity element. An improper rotation of some
importance is
P = [ −1   0   0
       0  −1   0
       0   0  −1 ] = −I,   (16)
6. The Axis-Angle Parameterization

Fig. 1. A proper rotation is specified by a unit vector n̂ that defines an axis, and a rotation of angle θ about that axis. The rotation follows the right-hand rule.

Rotations about a fixed axis commute with one another, and satisfy

R(n̂, θ1) R(n̂, θ2) = R(n̂, θ1 + θ2),   (17)
and the angles add under matrix multiplication as indicated. The rotations about the three coordinate axes are of special interest; these are
R(x̂, θ) = [ 1      0        0
            0   cos θ   −sin θ
            0   sin θ    cos θ ],

R(ŷ, θ) = [  cos θ   0   sin θ
              0      1     0
            −sin θ   0   cos θ ],

R(ẑ, θ) = [ cos θ   −sin θ   0
            sin θ    cos θ   0
              0        0     1 ].   (18)
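These matrices are easy to check numerically. The following is a minimal sketch using NumPy (the function names Rx, Ry, Rz are our own, not part of the notes); it verifies that each matrix in Eq. (18) is orthogonal with unit determinant, and that rotations about different axes do not commute.

```python
import numpy as np

def Rx(t):
    """Rotation by angle t about the x-axis, Eq. (18)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(t):
    """Rotation by angle t about the y-axis, Eq. (18)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(t):
    """Rotation by angle t about the z-axis, Eq. (18)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

t = 0.7
for R in (Rx(t), Ry(t), Rz(t)):
    assert np.allclose(R.T @ R, np.eye(3))      # orthogonality, Eq. (12)
    assert np.isclose(np.linalg.det(R), 1.0)    # proper rotation, Eq. (15)

# Rotations about different axes generally do not commute.
print(np.allclose(Rx(t) @ Rz(t), Rz(t) @ Rx(t)))  # False
```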
One can show that any proper rotation can be represented as R(n̂, θ), for some axis n̂ and some angle θ. Thus, there is no loss of generality in writing a proper rotation in this form. We will call this
the axis-angle parameterization of the rotations. This theorem is not totally obvious, but the proof
is not difficult. It is based on the observation that every proper rotation must have an eigenvector
with eigenvalue +1. This eigenvector or any multiple of it is invariant under the rotation and defines
its axis.
7. Infinitesimal Rotations
A rotation that is close to the identity is called near-identity or infinitesimal. It has the form
R = I + εA,   (19)

where ε is a small scale factor whose purpose is to make the correction term small, and where A is a matrix. A near-identity rotation must be a rotation about some axis by a small angle. By substituting Eq. (19) into the orthogonality condition (12), we find

A + Aᵀ = 0,   (20)
so that A is antisymmetric. A general 3 × 3 antisymmetric matrix can be written in terms of three numbers (a1, a2, a3),

A = [  0   −a3    a2
       a3    0   −a1
      −a2    a1    0 ]  =  Σ_{i=1}^{3} a_i J_i,   (21)

where (J1, J2, J3) is a vector of matrices, defined by

J1 = [ 0   0   0
       0   0  −1
       0   1   0 ],

J2 = [  0   0   1
        0   0   0
       −1   0   0 ],

J3 = [ 0  −1   0
       1   0   0
       0   0   0 ].   (22)

The components of these matrices can be expressed in terms of the Levi-Civita symbol,

(J_i)_{jk} = −ε_{ijk}.   (23)
We will also write the sum in Eq. (21) as a · J, that is, as a dot product of vectors, but we must remember that a is a triplet of ordinary numbers, while J is a triplet of matrices. The notation is similar to that which we use with the Pauli matrices, a vector of 2 × 2 matrices. See Prob. 1.1.
Equation (21) associates an antisymmetric matrix A with a corresponding 3-vector a. In the
following we will find it convenient to switch back and forth between these representations.
A useful property of the J matrices is the following. Let A be an antisymmetric matrix, associated with a vector a according to Eq. (21), and let u be another vector. Then
Au = (a · J)u.   (24)

Now taking the i-th component of the right-hand side and using Eq. (23), we find

[(a · J)u]_i = (a · J)_{ij} u_j = −ε_{kij} a_k u_j = +ε_{ikj} a_k u_j = (a × u)_i,   (25)

so that

a × u = (a · J)u = Au.   (26)
This gives us an alternative way of writing the cross product (as a matrix multiplication) that is
often useful.
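As a concrete illustration (our own, not part of the notes), the following Python sketch defines the J matrices of Eq. (22) as NumPy arrays and checks Eqs. (23) and (26) numerically on random vectors:

```python
import numpy as np

# The generators of Eq. (22).
J1 = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]])
J2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]])
J3 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]])
J = np.array([J1, J2, J3])

# Check (J_i)_jk = -eps_ijk, Eq. (23).
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1, -1
assert np.array_equal(J, -eps)

# Check that A = a.J is antisymmetric and that A u = a x u, Eqs. (20), (21), (26).
rng = np.random.default_rng(0)
a, u = rng.standard_normal(3), rng.standard_normal(3)
A = np.einsum('i,ijk->jk', a, J)           # A = a . J
assert np.allclose(A, -A.T)                # antisymmetry, Eq. (20)
assert np.allclose(A @ u, np.cross(a, u))  # cross product as matrix multiplication, Eq. (26)
```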
Let us now apply the near-identity rotation (19) to a vector u. Using Eq. (26), we have

Ru = (I + εA)u = u + ε (a · J) u,   (27)

or

Ru = u + ε a × u,   (28)
where u is an arbitrary vector. But another way of writing this equation appears when we examine Fig. 2, which shows the action of a rotation about axis n̂ by small angle θ on the same vector u. The result is

Ru = u + θ n̂ × u.   (29)
Comparing this with Eq. (28), we see that the axis and angle of an infinitesimal rotation are given in terms of the correction matrix A in Eq. (19) by

θ n̂ = ε a,   (30)

or,

θ = ε |a|,   n̂ = a / |a|.   (31)

Conversely, combining Eqs. (19), (21) and (30), an infinitesimal rotation about axis n̂ by small angle θ can be written

R(n̂, θ) = I + θ (n̂ · J)   (θ ≪ 1).   (32)
We tabulate here some useful properties of the J matrices (see also Eqs. (23) and (26)). First, the product of two J matrices can be written,

(J_i J_j)_{kℓ} = δ_{iℓ} δ_{kj} − δ_{ij} δ_{kℓ},   (33)

as follows from Eq. (23). From this there follows the commutation relation,

[J_i, J_j] = ε_{ijk} J_k,   (34)

or

[a · J, b · J] = (a × b) · J.   (35)
The commutation relations (34) obviously resemble the commutation relations for angular momentum operators in quantum mechanics, but without the iℏ. As we shall see, the quantum angular
momentum commutation relations are a reflection of these commutation relations of the Ji matrices
at the classical level. Ultimately, these commutation relations represent the geometry of Euclidean
space.
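These relations can be verified by direct matrix multiplication. Here is a minimal numerical sketch (our own illustration, assuming NumPy; the J matrices are re-defined so the snippet is self-contained):

```python
import numpy as np
from itertools import product

J = [np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]]),
     np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]]),
     np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]])]

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1, -1

delta = np.eye(3)
for i, j in product(range(3), repeat=2):
    # Eq. (33): (Ji Jj)_kl = delta_il delta_kj - delta_ij delta_kl
    lhs = J[i] @ J[j]
    rhs = np.outer(delta[:, j], delta[i, :]) - delta[i, j] * np.eye(3)
    assert np.allclose(lhs, rhs)
    # Eq. (34): [Ji, Jj] = eps_ijk Jk (sum over k)
    comm = J[i] @ J[j] - J[j] @ J[i]
    assert np.allclose(comm, sum(eps[i, j, k] * J[k] for k in range(3)))
```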
8. Exponential Form of Finite Rotations
We will now establish a connection between infinitesimal rotations and the finite rotations
R(n̂, θ) which take place about a fixed axis. The argument is similar to what we did previously with
translation operators (see Eqs. (4.34) and (4.35)).
Our strategy is to set up a differential equation for (d/dθ)R(n̂, θ) and to solve it. By the definition of the derivative, we have

(d/dθ) R(n̂, θ) = lim_{ε→0} [R(n̂, θ + ε) − R(n̂, θ)] / ε.   (36)
But according to Eq. (17), the first factor in the numerator can be written
R(n̂, θ + ε) = R(n̂, ε) R(n̂, θ),   (37)

so that

(d/dθ) R(n̂, θ) = lim_{ε→0} { [R(n̂, ε) − I] / ε } R(n̂, θ).   (38)
In the limit, ε becomes a small angle, so we can use Eq. (32) to evaluate the limit, obtaining,

(d/dθ) R(n̂, θ) = (n̂ · J) R(n̂, θ).   (39)

The solution of this differential equation, with the initial condition R(n̂, 0) = I, is

R(n̂, θ) = exp(θ n̂ · J).   (40)
This exponential can be expanded out in a power series that carries Eq. (32) to higher order terms,
R(n̂, θ) = I + θ (n̂ · J) + (θ²/2) (n̂ · J)² + . . . .   (41)

This series converges for all values of n̂ and θ.
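As a quick numerical check (our own sketch, assuming SciPy's matrix exponential scipy.linalg.expm), Eq. (40) with n̂ = ẑ reproduces the explicit matrix R(ẑ, θ) of Eq. (18):

```python
import numpy as np
from scipy.linalg import expm

J3 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]])

theta = 0.9
R_exp = expm(theta * J3)                       # Eq. (40) with n = z
R_explicit = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])             # R(z, theta) from Eq. (18)
assert np.allclose(R_exp, R_explicit)
```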
The same result can be obtained in another way that is more pictorial. It is based on the idea
that a rotation about a finite angle can be built up out of a sequence of small angle rotations. For
example, a rotation of one radian is the product of a million rotations of 10⁻⁶ radians.
Let us take some angle θ and break it up into a product of rotations of angle θ/N, where N is very large:

R(n̂, θ) = [R(n̂, θ/N)]^N.   (42)

We can make θ/N as small as we like by making N large. Therefore we should be able to replace the small angle rotation in Eq. (42) by the small angle formula (32), so that

R(n̂, θ) = lim_{N→∞} [I + (θ/N) (n̂ · J)]^N.   (43)
This is a limit of matrices that is similar to the limit of numbers shown in Eq. (4.42). It turns out
that the matrix limit works in the same way, giving Eq. (40).
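The convergence of the limit (43) can also be observed numerically. A rough sketch of our own, with an arbitrarily chosen axis and angle:

```python
import numpy as np
from scipy.linalg import expm

n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)     # unit axis
J = [np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]]),
     np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]]),
     np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]])]
nJ = sum(n[i] * J[i] for i in range(3))        # the matrix n . J
theta = 1.0

exact = expm(theta * nJ)                       # Eq. (40)
for N in (10, 100, 1000, 10000):
    approx = np.linalg.matrix_power(np.eye(3) + (theta / N) * nJ, N)  # Eq. (43)
    print(N, np.max(np.abs(approx - exact)))   # error shrinks roughly like 1/N
```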
Fig. 3. The rotation R(n̂, θ) acting on a vector u. The component (n̂ · u) n̂ of u along the axis is left invariant, while the component of u perpendicular to the axis is rotated in the plane perpendicular to n̂.

Summing the series (41), or reasoning geometrically from Fig. 3, one finds the explicit action of a proper rotation on an arbitrary vector u,

R(n̂, θ) u = n̂ (n̂ · u) + cos θ [u − n̂ (n̂ · u)] + sin θ (n̂ × u).   (44)

Proper rotations also satisfy

R (a × u) = (Ra) × (Ru),   (45)
or, as we say in words, the cross product transforms as a vector under proper rotations. The proof
of Eq. (45) will be left as an exercise, but if you do it, you will find it is equivalent to det R = +1.
In fact, if R is improper, there is a minus sign on the right hand side, an indication that the cross
product of two true vectors is not a true vector, but rather a pseudovector. More on this later, when
we return to the improper rotations and discuss parity in quantum mechanics.
Now let us use the notation (26) to express the cross products in Eq. (45) in terms of matrix
multiplications,
R (a · J) u = [(Ra) · J] Ru,   (46)

or, since u is arbitrary,

R (a · J) = [(Ra) · J] R,   (47)

or, multiplying on the right by Rᵀ = R⁻¹,

R (a · J) Rᵀ = (Ra) · J.   (48)
This formula is of such frequent occurrence in applications that we will give it a name. We call it
the adjoint formula, because of its relation to the adjoint representation of the group SO(3).
Let us now replace R by R0 to avoid confusion with other rotation matrices to appear momentarily, and let us replace a by θ n̂ for the axis and angle of a rotation. Then exponentiating both sides of the adjoint formula (48), we obtain,

exp[θ R0 (n̂ · J) R0ᵀ] = R0 exp(θ n̂ · J) R0ᵀ = exp[θ (R0 n̂) · J],   (49)
where we have used the rule,
e^{ABA⁻¹} = A e^B A⁻¹,   (50)

valid for matrices or operators A and B when A⁻¹ exists. This rule can be verified by expanding
the exponential series. The result (49) can be written,
R0 R(n̂, θ) R0ᵀ = R(R0 n̂, θ).   (51)
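Both the adjoint formula (48) and its exponentiated form (51) are easy to confirm numerically. The following sketch is our own illustration (the helper names dot_J and R, and the particular axes and angles, are not from the notes):

```python
import numpy as np
from scipy.linalg import expm

J = [np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]]),
     np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]]),
     np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]])]

def dot_J(a):
    """The matrix a . J of Eq. (21)."""
    return sum(a[i] * J[i] for i in range(3))

def R(n, theta):
    """Axis-angle rotation exp(theta n.J), Eq. (40)."""
    return expm(theta * dot_J(np.asarray(n, dtype=float)))

R0 = R([1, 0, 0], 0.4)                   # some fixed proper rotation
a = np.array([0.3, -1.2, 2.0])

# Adjoint formula, Eq. (48): R0 (a.J) R0^T = (R0 a).J
assert np.allclose(R0 @ dot_J(a) @ R0.T, dot_J(R0 @ a))

# Conjugation of a finite rotation, Eq. (51): R0 R(n, th) R0^T = R(R0 n, th)
n, th = np.array([0.0, 0.0, 1.0]), 1.1
assert np.allclose(R0 @ R(n, th) @ R0.T, R(R0 @ n, th))
```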
Fig. 4. The rotation group O(3) can be thought of as a 3-dimensional surface imbedded in the 9-dimensional space of all 3 × 3 matrices, with coordinates R11, R12, R13, etc. It consists of two disconnected pieces, one of which contains the proper rotations (det R = +1) and constitutes the group SO(3), and the other of which contains the improper rotations (det R = −1). The proper rotations include the identity matrix I, and the improper rotations include the parity matrix −I.
It is often useful to think of the rotations and their parameters in geometrical terms. We
imagine setting up the 9-dimensional space of all 3 × 3 real matrices, in which the coordinates are the components of the matrix, Rij. Of course it is difficult to visualize a 9-dimensional space, but we can use our imagination as in Fig. 4. The 6 constraints implicit in RRᵀ = I imply that the orthogonal
matrices lie on a 3-dimensional surface imbedded in this space. This surface is difficult to draw
realistically, so it is simply indicated as a nondescript blob in the figure. More exactly, this surface
consists of two disconnected pieces, containing the proper and improper matrices. This surface (both
pieces) is the group manifold for the group O(3), while the piece consisting of the proper rotations
alone is the group manifold for SO(3). The identity matrix I occupies a single point in the group
manifold SO(3), while the improper matrix −I lies in the other piece of the group manifold O(3).
Any choice of three parameters for the proper rotations can be viewed as a coordinate system on
the group manifold SO(3).
The group manifolds O(3) and SO(3) are bounded, that is, they do not run off to infinity. The
correct topological designation for this is that the manifolds are compact. This follows from the fact
that every row and column of an orthogonal matrix is a unit vector, so all components lie in the
range [−1, 1].
The group manifold SO(3) is the configuration space for a rigid body. This is because the
orientation of a rigid body is specified relative to a standard or reference orientation by a rotation
matrix R that maps the reference orientation into the actual one. In classical rigid body motion, the
orientation is a function of time, so the classical trajectory can be seen as a curve R(t) on the group
manifold SO(3). In quantum mechanics, a rigid body is described by a wave function defined on
the group manifold SO(3). Many simple molecules behave approximately as a rigid body in their
orientational degrees of freedom.
12. Euler Angles

Let R be a proper rotation, and let (x̂′, ŷ′, ẑ′) = (R x̂, R ŷ, R ẑ) be the frame obtained by applying R to the axes (x̂, ŷ, ẑ). The Euler angles α and β are defined as the spherical angles of the rotated axis ẑ′ as seen in the unrotated frame, so that

ẑ′ = (sin β cos α, sin β sin α, cos β).   (52)

Fig. 5. The Euler angles α and β are the spherical angles of the rotated z′-axis as seen in the unrotated frame.

The same unit vector ẑ′ is obtained by applying to ẑ the rotation

R1 = R(ẑ, α) R(ŷ, β),   (53)

since the first rotation by angle β about the y-axis swings the z-axis down in the x-z plane, and then the second rotation by angle α about the z-axis rotates the vector in a cone, bringing it into
the final position for the ẑ′-axis. Rotation R1 is not in general equal to R, for R1 is only designed to orient the ẑ′-axis correctly, while R puts all three primed axes into their correct final positions. But R1 can get the x̂′- and ŷ′-axes wrong only by some rotation in the x′-y′ plane; therefore if we follow the α and β rotations by a third rotation by some new angle, say, γ, about the ẑ′-axis, then we can guarantee that all three axes achieve their desired orientations. That is, we can write an arbitrary rotation R in the form,

R = R(ẑ′, γ) R1 = R(ẑ′, γ) R(ẑ, α) R(ŷ, β),   (54)

or, inserting R1 R1⁻¹ = I,

R = R1 [R1⁻¹ R(ẑ′, γ) R1],   (55)
in which the product of the last three rotations can be rewritten with the help of the adjoint formula
(51),
R1⁻¹ R(ẑ′, γ) R1 = R(R1⁻¹ ẑ′, γ) = R(ẑ, γ),   (56)

since ẑ′ = R1 ẑ. Thus Eq. (55) becomes

R = R1 R(ẑ, γ),   (57)

or, finally,

R = R(ẑ, α) R(ŷ, β) R(ẑ, γ).   (58)
Equation (58) constitutes the zyz-convention for the Euler angles, which is particularly appropriate for quantum mechanical applications. Other conventions are possible, and the zxz-convention
is common in books on classical mechanics, while the xyz-convention is used in aeronautical engineering (the three rotations are called pitch, yaw and roll, familiar to airplane pilots). Also, you
should note that most books on classical mechanics and some books on quantum mechanics adopt
the passive point of view, which usually means that the rotation matrices in those books stand for
the transposes of the rotation matrices in these notes.
The geometrical meanings of the Euler angles α and β are particularly simple, since these are just the spherical angles of the ẑ′-axis as seen in the unprimed frame. The geometrical meaning of γ is more difficult to see; it is in fact the angle between the ŷ′-axis and the unit vector n̂ lying in the line of nodes. The line of nodes is the line of intersection between the x-y plane and the x′-y′ plane. This line is perpendicular to both the z- and z′-axes, and we take n̂ to lie in the direction ẑ × ẑ′.
The allowed ranges on the Euler angles are the following:
0 ≤ α ≤ 2π,   0 ≤ β ≤ π,   0 ≤ γ ≤ 2π.   (59)
The ranges on α and β follow from the fact that they are spherical angles, while the range on γ follows from the fact that the γ-rotation is used to bring the x̂′- and ŷ′-axes into proper alignment in their plane. If the Euler angles lie within the interior of the ranges indicated, then the representation of the rotations is unique; but if one or more of the Euler angles takes on their limiting values, then the representation may not be unique. For example, if β = 0, then the rotation is purely about the z-axis, and depends only on the sum of the angles, α + γ. In other words, apart from exceptional points at the ends of the ranges, the Euler angles form a 1-to-1 coordinate system on the group manifold SO(3).
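To illustrate the zyz-convention (58) concretely, here is a small sketch of our own: it builds R from (α, β, γ) and recovers the Euler angles from the matrix elements, using the fact that the third column of R is the rotated axis ẑ′ of Eq. (52). The helper functions and the extraction formulas are our own choices, valid away from the singular values β = 0, π:

```python
import numpy as np

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def euler_to_matrix(alpha, beta, gamma):
    """zyz-convention, Eq. (58): R = R(z, alpha) R(y, beta) R(z, gamma)."""
    return Rz(alpha) @ Ry(beta) @ Rz(gamma)

def matrix_to_euler(R):
    """Recover (alpha, beta, gamma); assumes 0 < beta < pi."""
    beta = np.arccos(np.clip(R[2, 2], -1.0, 1.0))
    alpha = np.arctan2(R[1, 2], R[0, 2])    # third column of R is z', Eq. (52)
    gamma = np.arctan2(R[2, 1], -R[2, 0])
    return alpha, beta, gamma

angles = (0.5, 1.2, 0.7)
R = euler_to_matrix(*angles)
print(np.allclose(matrix_to_euler(R), angles))   # True
```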
13. The Noncommutativity of Rotations
As pointed out earlier, rotations do not commute, so R1R2 ≠ R2R1, in general. An exception,
also noted above, is the case that R1 and R2 are about the same axis, but when rotations are taken
about different axes they generally do not commute. Because of this, the rotation group SO(3) is
said to be non-Abelian. This means that the rotation group is noncommutative.
Let us write R1 = R(n̂1, θ1) and R2 = R(n̂2, θ2) for two rotations in axis-angle form. If the axes n̂1 and n̂2 are not identical, then it is not entirely simple to find the axis and angle of the product, that is, n̂3 and θ3 in

R(n̂1, θ1) R(n̂2, θ2) = R(n̂3, θ3).   (60)

A formula for n̂3 and θ3 in terms of the axes and angles of R1 and R2 exists, but it is not trivial. If we write

A_i = θ_i (n̂_i · J),   i = 1, 2, 3,   (61)
for the three antisymmetric matrices in the exponential expressions for the three rotations in Eq. (60),
then that equation can be written as
exp(A1 ) exp(A2 ) = exp(A3 ),
(62)
and the problem is to find A3 in terms of A1 and A2 . We see once again the problem of combining
exponentials of noncommuting objects (matrices or operators), one that has appeared more than
once previously in the course. We comment further on this problem below.
The situation is worse in the Euler angle parameterization; it is quite a messy task to find (α3, β3, γ3) in terms of (α1, β1, γ1) and (α2, β2, γ2) in the product,

R(α3, β3, γ3) = R(α1, β1, γ1) R(α2, β2, γ2).   (63)
The difficulties of these calculations are due to the noncommutativity of the rotation matrices.
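Numerically, of course, the product (60) poses no difficulty: one simply multiplies the matrices and reads off the axis and angle from the trace and the antisymmetric part of the result. A sketch of our own (assuming SciPy for the matrix exponential; the helper names are not from the notes):

```python
import numpy as np
from scipy.linalg import expm

J = [np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]]),
     np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]]),
     np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]])]

def R(n, theta):
    """Axis-angle rotation exp(theta n.J), Eq. (40)."""
    n = np.asarray(n, dtype=float)
    return expm(theta * sum(n[i] * J[i] for i in range(3)))

def axis_angle(Rm):
    """Extract (n, theta) from a proper rotation; valid for 0 < theta < pi."""
    theta = np.arccos(np.clip((np.trace(Rm) - 1) / 2, -1.0, 1.0))
    S = (Rm - Rm.T) / (2 * np.sin(theta))       # equals n . J
    n = np.array([S[2, 1], S[0, 2], S[1, 0]])
    return n, theta

n1, t1 = np.array([0.0, 0.0, 1.0]), 0.8
n2, t2 = np.array([1.0, 0.0, 0.0]), 0.5
n3, t3 = axis_angle(R(n1, t1) @ R(n2, t2))      # axis and angle of the product, Eq. (60)
print(n3, t3)
```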
A measure of the commutativity of two rotations R1, R2 is the matrix

C = R1 R2 R1⁻¹ R2⁻¹,   (64)
which itself is a rotation, and which becomes the identity matrix if R1 and R2 should commute. The
matrix C is more interesting than the ordinary commutator, [R1 , R2 ], which is not a rotation. You
can think of C as taking a step in the 1-direction, then a step in the 2-direction, then a backwards
step in the 1-direction, and finally a backwards step in the 2-direction, all in the space of rotations,
and asking if we return to the identity. The answer is that in general we do not, although if the
steps are small then we trace out a path that looks like a small parallelogram that usually does not
quite close.
Let us examine C when the angles θ1 and θ2 are small. Let us write Ai for the antisymmetric matrices in the exponents of the exponential form for the rotations, as in Eq. (61), and let us expand the exponentials in power series. Then we have

C = e^{A1} e^{A2} e^{−A1} e^{−A2}
  = (I + A1 + ½ A1² + . . .)(I + A2 + ½ A2² + . . .)(I − A1 + ½ A1² − . . .)(I − A2 + ½ A2² − . . .)
  = I + [A1, A2] + . . . .   (65)
The first order term vanishes, and at second order we find the commutator of the A matrices, as
indicated. Writing these in terms of axes and angles as in Eq. (61), the matrix C becomes
C = I + θ1 θ2 [n̂1 · J, n̂2 · J] + . . .
  = I + θ1 θ2 (n̂1 × n̂2) · J + . . . ,   (66)

where we have used the commutation relations (35). We see that if R1 and R2 are near-identity rotations, then so is C, which is about the axis n̂1 × n̂2. Sakurai does a calculation of this form when R1 and R2 are about the x- and y-axes, so that C is about the z-axis.
Matrix C is a near-identity rotation, so we can write it in axis-angle form, say, with axis m̂ and angle φ,

C = I + φ (m̂ · J).   (67)

Comparing with Eq. (66), we find

φ m̂ = θ1 θ2 n̂1 × n̂2.   (68)
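Equations (64)–(68) can be checked numerically for small angles; in the following sketch (our own illustration) the residual difference between C and the approximation (66) is third order in the small angles:

```python
import numpy as np
from scipy.linalg import expm

J = [np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]]),
     np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]]),
     np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]])]

def dot_J(a):
    return sum(a[i] * J[i] for i in range(3))

n1, n2 = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])
t1, t2 = 1e-3, 2e-3

R1, R2 = expm(t1 * dot_J(n1)), expm(t2 * dot_J(n2))
C = R1 @ R2 @ np.linalg.inv(R1) @ np.linalg.inv(R2)        # Eq. (64)
C_approx = np.eye(3) + t1 * t2 * dot_J(np.cross(n1, n2))   # Eq. (66)
print(np.max(np.abs(C - C_approx)))   # third order in the small angles, much less than t1*t2
```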
Let us now return to the general problem of combining exponentials of noncommuting matrices or operators,

exp(X) exp(Y) = exp(Z),   (69)
where X and Y are given and it is desired to find Z. First, if X and Y commute, then the product
of exponentials follows the rules for ordinary numbers, and Z = X + Y . Next, if X and Y do not
commute but they do commute with their commutator, then the conditions of Glauber's theorem hold (see Sec. 8.9) and we have

Z = X + Y + (1/2) [X, Y].   (70)
If X and Y do not commute with their commutator, then no simple answer can be given, but we
can expand Z in a power series in X and Y . Carried through third order, this gives
Z = X + Y + (1/2) [X, Y] + (1/12) [X, [X, Y]] + (1/12) [Y, [Y, X]] + . . . .   (71)
The Baker-Campbell-Hausdorff theorem concerns this series, and asserts that the general term of this
Taylor series can be written in terms of commutators and iterated commutators of X and Y . This
fact can be seen through third order in Eq. (71). Thus, if we know how to compute commutators,
we can in principle compute all the terms of this Taylor series and obtain the multiplication rule for
products of exponentials.
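The series (71) can be tested numerically for small rotations; the sketch below (our own, using scipy.linalg.logm for the matrix logarithm) compares the exact Z with the series carried through third order:

```python
import numpy as np
from scipy.linalg import expm, logm

def comm(A, B):
    return A @ B - B @ A

J = [np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]]),
     np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]]),
     np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]])]

X = 0.01 * J[2]      # small rotation about z
Y = 0.02 * J[0]      # small rotation about x

Z_exact = logm(expm(X) @ expm(Y))                 # Z defined by Eq. (69)
Z_series = (X + Y + 0.5 * comm(X, Y)
            + comm(X, comm(X, Y)) / 12
            + comm(Y, comm(Y, X)) / 12)           # Eq. (71) through third order
print(np.max(np.abs(Z_exact - Z_series)))         # fourth order in the small angles
```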
For example, in the case of the rotations, knowledge of the commutation relations (34) or (35)
allows us to compute all the terms of the Taylor series (71) and thus obtain the multiplication law for
finite rotations in terms of axis-angle parameters, at least in principle. We do not intend to do this in
practice, and in fact we shall use the Baker-Campbell-Hausdorff theorem only for its suggestive and
intuitive value. But the point is that all the complexity of the noncommutative multiplication law
for finite rotations is implicitly contained in the commutation relations of infinitesimal rotations,
that is, of the J matrices. This explains the emphasis placed on commutation relations in these
Notes and in physics in general.
Problems
1. Prove the commutation relations (34), using Eq. (23) and the properties of the Levi-Civita symbol ε_{ijk}.
2. Show that if R ∈ SO(3), then

R(a × b) = (Ra) × (Rb).   (72)

You may find it useful to show first that

ε_{ijk} R_{iℓ} R_{jm} R_{kn} = (det R) ε_{ℓmn}.   (73)
This is essentially the definition of the determinant. This proves Eq. (45), and hence the adjoint
formulas (48) and (51).
3. The geometrical meaning of Eq. (44) is illustrated in Fig. 3. The rotation leaves the component of u along the axis n̂ invariant, while rotating the orthogonal component in the plane perpendicular to n̂.
By expressing powers of the J matrices in terms of lower powers, sum the exponential series (40) and obtain another proof of Eq. (44).
4. The axis n̂ of a proper rotation R is invariant under the action of R, that is, R n̂ = n̂. Therefore n̂ is a real, normalized eigenvector of R with eigenvalue +1.
Prove that every proper rotation has an axis. Show that the axis is unique (apart from the change in sign, n̂ → −n̂) as long as the angle of rotation is nonzero. This proof is the essential step in showing that every rotation can be expressed in axis-angle form. Do proper rotations in 4 dimensions have an axis?
5. It is claimed that every proper rotation can be written in Euler angle form. Find the Euler angles (α, β, γ) for the rotation R(x̂, π/2).