Notes On Differential Forms: Department of Mathematics, The University of Texas at Austin, Austin, TX 78712
Lorenzo Sadun
arXiv:1604.07862v1 [math.AT] 26 Apr 2016
Forms on Rn
This is a series of lecture notes, with embedded problems, aimed at students studying
differential topology.
Many revered texts, such as Spivak's Calculus on Manifolds and Guillemin and Pollack's Differential Topology, introduce forms by first working through properties of alternating tensors. Unfortunately, many students get bogged down with the whole notion of tensors and never get to the punch lines: Stokes' Theorem, de Rham cohomology, Poincare duality, and the realization of various topological invariants (e.g. the degree of a map) via forms, none of which actually require tensors to make sense!
In these notes, we’ll follow a different approach, following the philosophy of Amy’s Ice
Cream:
Life is uncertain. Eat dessert first.
We're first going to define forms on Rn via unmotivated formulas, develop some proficiency with calculation, show that forms behave nicely under changes of coordinates, and
prove Stokes’ Theorem. This approach has the disadvantage that it doesn’t develop deep
intuition, but the strong advantage that the key properties of forms emerge quickly and
cleanly.
Only then, in Chapter 3, will we go back and show that tensors with certain (anti)symmetry
properties have the exact same properties as the formal objects that we studied in Chapters
1 and 2. This allows us to re-interpret all of our old results from a more modern perspective,
and move onwards to using forms to do topology.
1. What is a form?
On Rn , we start with the symbols dx1 , . . . , dxn , which at this point are pretty much
meaningless. We define a multiplication operation on these symbols, denoted by a ∧, subject
to the condition
dxi ∧ dxj = −dxj ∧ dxi .
Of course, we also want the usual properties of multiplication to hold. If α, β, γ are arbitrary products of dxi's, and if c is any constant, then
(α + β) ∧ γ = α ∧ γ + β ∧ γ
α ∧ (β + γ) = α ∧ β + α ∧ γ
(α ∧ β) ∧ γ = α ∧ (β ∧ γ)
(1) (cα) ∧ β = α ∧ (cβ) = c(α ∧ β)
Note that the anti-symmetry implies that dxi ∧ dxi = 0. Likewise, if I = {i1, . . . , ik} is a list of indices where some index gets repeated, then dxi1 ∧ · · · ∧ dxik = 0, since we can swap the order of terms (while keeping track of signs) until the same index comes up twice in a row. For instance,
dx1 ∧ dx2 ∧ dx1 = −dx1 ∧ dx1 ∧ dx2 = 0.
If I = {i1, . . . , ik} is an increasing multi-index, we write dxI = dxi1 ∧ · · · ∧ dxik; a k-form is then a sum α = Σ_I αI(x) dxI, where each coefficient αI is an ordinary function, and a 0-form is just a function. Of course, if I and J intersect, then dxI ∧ dxJ = 0. If I has k entries and J has ℓ entries, then going from (I, J) to (J, I) involves kℓ swaps, so we have
dxJ ∧ dxI = (−1)^{kℓ} dxI ∧ dxJ,
and likewise β ∧ α = (−1)kℓ α ∧ β. Note that the wedge product of a 0-form (aka function)
with a k-form is just ordinary multiplication.
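For example, on R3 the 2-form dx1 ∧ dx2 and the 1-form dx3 satisfy
dx3 ∧ (dx1 ∧ dx2) = −dx1 ∧ dx3 ∧ dx2 = dx1 ∧ dx2 ∧ dx3 = (dx1 ∧ dx2) ∧ dx3,
consistent with the sign (−1)^{kℓ} = (−1)^{2·1} = +1, while two 1-forms, as in dx1 ∧ dx2 = −dx2 ∧ dx1, pick up the sign (−1)^{1·1} = −1.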
2. Derivatives of forms
If α = Σ_I αI dxI is a k-form, then we define the exterior derivative
dα = Σ_{I,j} (∂αI(x)/∂xj) dxj ∧ dxI.
Note that j is a single index, not a multi-index. For instance, on R2, if α = xy dx + e^x dy, then
dα = y dx ∧ dx + x dy ∧ dx + e^x dx ∧ dy + 0 dy ∧ dy
(2)    = (e^x − x) dx ∧ dy.
If f is a 0-form, then we have something even simpler:
df(x) = Σ_j (∂f(x)/∂xj) dxj,
which should look familiar, if only as an imprecise calculus formula. One of our goals is to
make such statements precise and rigorous. Also, remember that xi is actually a function on
Rn . Since ∂j xi = 1 if i = j and 0 otherwise, d(xi ) = dxi , which suggests that our formalism
isn’t totally nuts.
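For instance, taking f(x, y) = x²y on R2,
df = 2xy dx + x² dy,   and   d(df) = 2x dy ∧ dx + 2x dx ∧ dy = 0,
a special case of the general fact d(df) = 0 proved below.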
The key properties of the exterior derivative operator d are listed in the following theorem. If α is a k-form and β is an ℓ-form, then d(α ∧ β) = (dα) ∧ β + (−1)^k α ∧ (dβ), and d(dα) = 0 for every form α.
Proof. For simplicity, we prove this for the case where α = αI dxI and β = βJ dxJ each
have only a single term. The general case then follows from linearity.
The first property is essentially the product rule for derivatives.
α ∧ β = αI(x)βJ(x) dxI ∧ dxJ
d(α ∧ β) = Σ_j ∂j(αI(x)βJ(x)) dxj ∧ dxI ∧ dxJ
         = Σ_j (∂j αI(x)) βJ(x) dxj ∧ dxI ∧ dxJ + Σ_j αI(x) (∂j βJ(x)) dxj ∧ dxI ∧ dxJ
         = (Σ_j (∂j αI(x)) dxj ∧ dxI) ∧ βJ(x) dxJ + (−1)^k αI(x) dxI ∧ (Σ_j (∂j βJ(x)) dxj ∧ dxJ)
(3)      = (dα) ∧ β + (−1)^k α ∧ dβ.
The second property for 0-forms (aka functions) is just “mixed partials are equal”:
d(df) = d(Σ_i ∂i f dxi)
      = Σ_{i,j} ∂j ∂i f dxj ∧ dxi
      = −Σ_{i,j} ∂i ∂j f dxi ∧ dxj
(4)   = −d(df) = 0,
where in the third line we used ∂j ∂i f = ∂i ∂j f and dxi ∧ dxj = −dxj ∧ dxi . We then use the
first property, and the (obvious) fact that d(dxI ) = 0, to extend this to k-forms:
d(dα) = d(dαI ∧ dxI)
      = d(dαI) ∧ dxI − dαI ∧ d(dxI)
      = 0,
where in the second line we used the fact that dαI is a 1-form, and in the third line used the fact that d(dαI) is d² applied to a function, while d(dxI) = 0.
Exercise 1: On R3, there are interesting 1-forms and 2-forms associated with each vector field ~v(x) = (v1(x), v2(x), v3(x)). (Here vi is a component of the vector ~v, not a vector in its own right.) Let ω¹~v = v1 dx + v2 dy + v3 dz, and let ω²~v = v1 dy ∧ dz + v2 dz ∧ dx + v3 dx ∧ dy. Let f be a function. Show that (a) df = ω¹∇f, (b) dω¹~v = ω²∇×~v, and (c) dω²~v = (∇ · ~v) dx ∧ dy ∧ dz, where ∇, ∇× and ∇· are the usual gradient, curl, and divergence operations.
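As a sanity check of (c) in one special case: for ~v = (x, y, z) we have ω²~v = x dy ∧ dz + y dz ∧ dx + z dx ∧ dy, so
dω²~v = dx ∧ dy ∧ dz + dy ∧ dz ∧ dx + dz ∧ dx ∧ dy = 3 dx ∧ dy ∧ dz,
which is indeed (∇ · ~v) dx ∧ dy ∧ dz.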
3. Pullbacks
Suppose that g : X → Y is a smooth map, where (for now) X and Y are open subsets of Rn and Rm, respectively.
Theorem 3.1. There is a unique linear map g∗ taking forms on Y to forms on X such that the following properties hold:
(1) If f : Y → R is a function on Y , then g ∗ f = f ◦ g.
(2) If α and β are forms on Y , then g ∗ (α ∧ β) = (g ∗α) ∧ (g ∗β).
(3) If α is a form on Y , then g ∗ (dα) = d(g ∗ (α)). (Note that there are really two different
d’s in this equation. On the left hand side d maps k-forms on Y to (k + 1)-forms
on Y . On the right hand side, d maps k forms on X to (k + 1)-forms on X. )
Proof. The pullback of 0-forms is defined by the first property. However, note that on Y, the form dyi is d of the function yi (where we're using coordinates {yi} on Y and reserving x's for X). This means that g∗(dyi)(x) = d(yi ◦ g)(x) = dgi(x), where gi(x) is the i-th component of g(x). But that gives us our formula in general! If α = Σ_I αI(y) dyI, then
(6) g∗α(x) = Σ_I αI(g(x)) dgi1 ∧ dgi2 ∧ · · · ∧ dgik.
Using the formula (6), it’s easy to see that g ∗(α ∧ β) = g ∗ (α) ∧ g ∗(β). Checking that
g ∗ (dα) = d(g ∗ α) in general is left as an exercise in definition-chasing.
(a) Compute g ∗ (x), g ∗(y), g ∗ (dx), g ∗ (dy), g ∗ (dx ∧ dy) and g ∗α (preferably in that order).
(b) Now compute h∗ (r), h∗ (θ), h∗ (dr) and h∗ (dθ).
The upshot of this exercise is that pullbacks are something that you have been doing for
a long time! Every time you do a change of coordinates in calculus, you’re actually doing a
pullback.
4. Integration
If α = f(x) dx1 ∧ · · · ∧ dxn is a compactly supported n-form on Rn, we define
(7) ∫_{Rn} α = ∫_{Rn} f(x) |dx1 · · · dxn|.
The left hand side is the integral of a form that involves wedges of dxi's. The right hand
side is an ordinary Riemann integral, in which |dx1 · · · dxn | is the usual volume measure
(sometimes written dV or dn x). Note that the order of the variables in the wedge product,
x1 through xn , is implicitly using the standard orientation of Rn . Likewise, we can define
the integral of α over any open subset U of Rn , as long as α restricted to U is compactly
supported.
We have to be a little careful with the left-hand-side of (7) when n = 0. In this case, Rn is a single point (with positive orientation), and α is just a number. We take ∫ α to be that number.
Exercise 7: Suppose g is an orientation-preserving diffeomorphism from an open subset U
of Rn to another open subset V (either or both of which may be all of Rn ). Let α be a
compactly supported n-form on V . Show that
∫_U g∗α = ∫_V α.
How would this change if g were orientation-reversing? [Hint: use the change-of-variables
formula for multi-dimensional integrals. Where does the Jacobian come in?]
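One way to see where the Jacobian comes in: just as in the polar example above, in general
g∗(dy1 ∧ · · · ∧ dyn) = dg1 ∧ · · · ∧ dgn = det(Dg) dx1 ∧ · · · ∧ dxn,
where Dg is the matrix of partial derivatives ∂gi/∂xj, while the ordinary change-of-variables formula involves |det(Dg)|; the two agree precisely when g is orientation-preserving.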
Now we see what’s so great about differential forms! The way they transform under
change-of-coordinates is perfect for defining integrals. Unfortunately, our development so
far only allows us to integrate n-forms over open subsets of Rn . More generally, we’d like
to integrate k-forms over k-dimensional objects. But this requires an additional level of
abstraction, where we define forms on manifolds.
5. Differential forms on manifolds
This also tells us how to do calculus with forms on manifolds. If µ and ν are forms on
X, then
• The wedge product µ ∧ ν is the form whose realization on Ui is ψi∗ (µ) ∧ ψi∗ (ν). In
other words, ψi∗ (µ ∧ ν) = ψi∗ µ ∧ ψi∗ ν.
• The exterior derivative dµ is the form whose realization on Ui is d(ψi∗ (µ)). In other
words, ψi∗ (dµ) = d(ψi∗ µ).
Exercise 8: Show that µ ∧ ν and dµ are well-defined.
Now suppose that we have a map f : X → Y of manifolds and that α is a form on Y . The pullback f∗(α) is defined via coordinate patches. If φ : U ⊂ Rn → X and ψ : V ⊂ Rm → Y are parametrizations of X and Y , then there is a map h : U → V such that ψ(h(x)) = f(φ(x)). We define f∗(α) to be the form on X whose realization in U is h∗(ψ∗α). In other words,
(10) φ∗(f∗α) = h∗(ψ∗α).
An important special case is where X is a submanifold of Y and f is the inclusion map.
Then f ∗ is the restriction of α to X. When working with manifolds in RN , we often write
down formulas for k-forms on RN , and then say “consider this form on X”. E.g., one might
say “consider the 1-form xdy − ydx on the unit circle in R2 ”. Strictly speaking, this really
should be “consider the pullback to S 1 ⊂ R2 by inclusion of the 1-form xdy − ydx on R2 ,”
but (almost) nobody is that pedantic!
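For example, on the unit circle parametrized by ψ(θ) = (cos θ, sin θ), the 1-form dx pulls back to
ψ∗(dx) = d(cos θ) = −sin θ dθ,
and similarly ψ∗(x dy) = cos θ d(sin θ) = cos²θ dθ.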
Exercise 9: Show that this definition does not depend on the choice of coordinates. That is, if ψ1,2 : U1,2 → V are two sets of coordinates for V , both orientation-preserving, then
∫_{U1} ψ1∗ν = ∫_{U2} ψ2∗ν.
If a form is not supported in a single coordinate chart, we pick an open cover of X
consisting of coordinate neighborhoods, pick a partition-of-unity subordinate to that cover,
and define
∫_X ν = Σ_i ∫_X ρi ν.
We need a little bit of notation to specify when this makes sense. If α = αI (x)dx1 ∧ · · · ∧ dxk
is a k-form on Rk , let |α| = |αI (x)|dx1 ∧ · · · ∧ dxk . We say that ν is absolutely integrable if
each |ψi∗ (ρi ν)| is integrable over Ui , and if the sum of those integrals converges. It’s not hard
to show that being absolutely integrable with respect to one set of coordinates and partition
of unity implies absolute integrability with respect to arbitrary coordinates and partitions of unity. Those are the conditions under which ∫_X ν unambiguously makes sense.
When X is compact and ν is smooth, absolute integrability is automatic. In practice, we
rarely have to worry about integrability when doing differential topology.
The upshot is that k-forms are meant to be integrated on k-manifolds. Sometimes
these are stand-alone abstract k-manifolds, sometimes they are k-dimensional submanifolds
of larger manifolds, and sometimes they are concrete k-manifolds embedded in RN .
Finally, a technical point. If X is 0-dimensional, then we can't construct orientation-preserving maps from R0 to the connected components of X. Instead, we just take
∫_X α = Σ_{x∈X} ±α(x),
where the sign is the orientation of the point x. This follows the general principle that reversing the orientation of a manifold should flip the sign of integrals over that manifold.
Exercise 10: Let X = S1 ⊂ R2 be the unit circle, oriented as the boundary of the unit disk. Compute ∫_X (x dy − y dx) by explicitly pulling this back to R with an orientation-preserving
chart and integrating over R. (Which is how you learned to do line integrals way back in
calculus.) [Note: don’t worry about using multiple charts and partitions of unity. Just use
a single chart for the unit circle minus a point.]
Exercise 11: Now do the same thing one dimension up. Let Y = S2 ⊂ R3 be the unit sphere, oriented as the boundary of the unit ball. Compute ∫_Y (x dy ∧ dz + y dz ∧ dx + z dx ∧ dy) by explicitly pulling this back to a subset of R2 with an orientation-preserving chart and
integrating over that subset of R2 . As with the previous exercise, you can use a single
coordinate patch that leaves out a set of measure zero, which doesn’t contribute to the
integral. Strictly speaking this does not follow the rules listed above, but I’ll show you how
to clean it up in class.
CHAPTER 2
Stokes’ Theorem
Theorem 1.1 (Stokes' Theorem, Version 1). Let X = Hn = {x ∈ Rn : xn ≥ 0} be the upper half-space, with boundary ∂X = {x : xn = 0}, and let ω be any compactly-supported (n − 1)-form on X. Then
(12) ∫_X dω = ∫_{∂X} ω.
Proof. Let Ij be the ordered subset of {1, . . . , n} in which the element j is deleted.
Suppose that the (n − 1)-form ω can be expressed as ω(x) = ωj (x)dxIj , where ωj (x) is a
compactly supported function. Then dω = (−1)j−1 ∂j ωj dx1 ∧ · · · ∧ dxn . There are two cases
to consider:
If j < n, then the restriction of ω to ∂X is zero, since dxn restricts to zero on ∂X, so ∫_{∂X} ω = 0. But then
∫_{Hn} ∂j ωj dx1 · · · dxn = ∫ ( ∫_{−∞}^{∞} ∂j ωj(x) dxj ) dx1 · · · dxj−1 dxj+1 · · · dxn.
Since ωj is compactly supported, the inner integral is zero by the fundamental theorem of calculus. Both sides of (12) are then zero, and the theorem holds.
If j = n, then
∫_X dω = (−1)^{n−1} ∫_{Hn} ∂n ωn(x) dn x
       = (−1)^{n−1} ∫_{Rn−1} ( ∫_0^∞ ∂n ωn(x1, . . . , xn) dxn ) dx1 · · · dxn−1
(13)   = ∫_{∂X} ω.
Here we have used the fundamental theorem of calculus and the fact that ωn is compactly supported to get ∫_0^∞ ∂n ωn(x) dxn = −ωn(x1, . . . , xn−1, 0).
Of course, not every (n − 1)-form can be written as ωj dxIj with ωj compactly supported.
However, every compactly-supported (n − 1)-form can be written as a finite sum of such
terms, one for each value of j. Since equation (12) applies to each term in the sum, it also
applies to the total.
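For instance, when n = 1 and X = H1 = [0, ∞), a compactly supported 0-form is just a compactly supported function ω = f(x), and
∫_X dω = ∫_0^∞ f′(x) dx = −f(0) = ∫_{∂X} ω,
since the boundary point 0 carries orientation −1.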
The amazing thing about this proof is how easy it is! The only analytic ingredients
are Fubini’s Theorem (which allows us to first integrate over xj and then over the other
variables) and the 1-dimensional Fundamental Theorem of Calculus. The hard work came
earlier, in developing the appropriate definitions of forms and integrals.
Having so far avoided all the geometry and topology of manifolds by working on Euclidean space, we now turn back to working on manifolds. Thanks to the properties of forms
developed in the previous set of notes, everything will carry over, giving us
Theorem 2.1 (Stokes’ Theorem, Version 2). Let X be a compact oriented n-manifold-
with-boundary, and let ω be an (n − 1)-form on X. Then
Z Z
(14) dω = ω,
X ∂X
where ∂X is given the boundary orientation and where the right hand side is, strictly speaking,
the integral of the pullback of ω to ∂X by the inclusion map.
Using a partition of unity subordinate to a cover of X by coordinate charts, write ω = Σ_i ωi, where each ωi is compactly supported in the image of a single orientation-preserving parametrization ψi : Ui ⊂ Hn → X. For each i,
∫_X dωi = ∫_{Ui} ψi∗(dωi) = ∫_{Ui} d(ψi∗ωi) = ∫_{Hn} d(ψi∗ωi) = ∫_{∂Hn} ψi∗ωi = ∫_{∂X} ωi,
where we have used (a) the definition of integration of forms on manifolds, (b) the fact that d commutes with pullbacks, (c) the fact that ψi∗ωi and d(ψi∗ωi) can be extended by zero to all of Hn, (d) Stokes' Theorem on Hn, and (e) the definition of integration of forms on manifolds. Finally, we add everything up.
∫_X dω = ∫_X d(Σ_i ωi)
       = Σ_i ∫_X dωi
       = Σ_i ∫_{∂X} ωi
       = ∫_{∂X} Σ_i ωi
(16)   = ∫_{∂X} ω.
Note that X being compact is essential. If X isn’t compact, then you can still prove
Stokes' Theorem for forms that are compactly supported, but not for forms in general. For instance, if X = [0, ∞) and ω = 1 (a 0-form), then ∫_X dω = 0 but ∫_{∂X} ω = −1.
To relate Stokes’ Theorem for forms and manifolds to the classical theorems of vector
calculus, we need a correspondence between line integrals, surface integrals, and integrals of
differential forms.
Exercise 1 If γ is an oriented path in R3 and ~v(x) is a vector field, show that ∫_γ ω¹~v is the line integral ∫_γ ~v · T ds, where T is the unit tangent to the curve and ds is arclength measure. (Note that this works for arbitrary smooth paths, and not just for embeddings. It makes perfectly good sense to integrate around a figure-8.)
Exercise 2 If S is an oriented surface in R3 and ~v is a vector field, show that ∫_S ω²~v is the flux of ~v through S.
Exercise 3 Suppose that X is a compact connected oriented 1-manifold-with-boundary in
Rn . (In other words, a path without self-crossings from a to b, where a and b might be
the same point.) Show that Stokes’ Theorem, applied to X, is essentially the Fundamental
Theorem of Calculus.
Exercise 4 Now suppose that X is a bounded domain in R2 . Write down Stokes’ Theorem
in this setting and relate it to the classical Green’s Theorem.
Exercise 5 Now suppose that S is an oriented surface in R3 with boundary curve C = ∂S.
Let ~v be a vector field. Apply Stokes' Theorem to ω¹~v and to S, and express the result in terms
of line integrals and surface integrals. This should give you the classical Stokes’ Theorem.
Exercise 6 On R3, let ω = (x² + y²) dx ∧ dy + (x + y e^z) dy ∧ dz + e^x dx ∧ dz. Compute ∫_S ω, where S is the upper hemisphere of the unit sphere. The answer depends on which orientation you pick for S, of course. Pick one, and compute! [Hint: Find an appropriate surface S′ so that S − S′ is the boundary of a 3-manifold. Then use Stokes' Theorem to relate ∫_S ω to ∫_{S′} ω.]
Exercise 7 On R2 with the origin removed, let α = (x dy − y dx)/(x² + y²). You previously showed that dα = 0 (aka "α is closed"). Show that α is not d of any function ("α is not exact").
Exercise 8 On R3 with the origin removed, show that β = (x dy ∧ dz − y dx ∧ dz + z dx ∧ dy)/(x² + y² + z²)^{3/2} is closed but not exact.
Exercise 9 Let X be a compact oriented n-manifold (without boundary), let Y be a manifold, and let ω be a closed n-form on Y . Suppose that f0 and f1 are homotopic maps X → Y . Show that
∫_X f0∗ω = ∫_X f1∗ω.
Exercise 10 Let f : S1 → R2 − {0} be a smooth map whose winding number around the
CHAPTER 3
Tensors
1. What is a tensor?
Let V be a finite-dimensional vector space.1 It could be Rn , it could be the tangent space
to a manifold at a point, or it could just be an abstract vector space. A k-tensor is a map
T : V ×···×V → R
(where there are k factors of V ) that is linear in each factor.2 That is, for fixed ~v2 , . . . , ~vk ,
T (~v1 , ~v2 , . . . , ~vk−1 , ~vk ) is a linear function of ~v1 , and for fixed ~v1 , ~v3 , . . . , ~vk , T (~v1 , . . . , ~vk ) is a
linear function of ~v2 , and so on. The space of k-tensors on V is denoted T k (V ∗ ).
Examples:
• If V = Rn, then the inner product P(~v, ~w) = ~v · ~w is a 2-tensor. For fixed ~v it's linear in ~w, and for fixed ~w it's linear in ~v.
• If V = Rn, then D(~v1, . . . , ~vn) = det(~v1 · · · ~vn) is an n-tensor.
• If V = Rn, then Three(~v) = "the 3rd entry of ~v" is a 1-tensor.
• A 0-tensor is just a number. It requires no inputs at all to generate an output.
Note that the definition of tensor says nothing about how things behave when you rotate
vectors or permute their order. The inner product P stays the same when you swap the two
vectors, but the determinant D changes sign when you swap two vectors. Both are tensors.
For a 1-tensor like Three, permuting the order of entries doesn't even make sense!
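For example, any n × n matrix M determines a 2-tensor on Rn by T(~v, ~w) = ~v · (M~w) = Σ_{i,j} Mij v^i w^j; the inner product P is the special case where M is the identity matrix, and T is symmetric exactly when M is a symmetric matrix.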
Let {~b1, . . . , ~bn} be a basis for V . Every vector ~v ∈ V can be uniquely expressed as a linear combination
~v = Σ_i v^i ~bi,
where each v^i is a number. Let φi(~v) = v^i. The map φi is manifestly linear (taking ~bi to 1 and all the other basis vectors to zero), and so is a 1-tensor. In fact, the φi's form a basis for the space of 1-tensors, as the following computation (for an arbitrary 1-tensor α) shows.
1Or even an infinite-dimensional vector space, if you apply appropriate regularity conditions.
2Strictlyspeaking, this is what is called a contravariant tensor. There are also covariant tensors and
tensors of mixed type, all of which play a role in differential geometry. But for understanding forms, we only
need contravariant tensors.
α(~v) = α(Σ_i v^i ~bi)
      = Σ_i v^i α(~bi)         by linearity
      = Σ_i α(~bi) φi(~v)      since v^i = φi(~v)
      = (Σ_i α(~bi) φi)(~v)    by linearity, so
(17) α = Σ_i α(~bi) φi.
A bit of terminology: The space of 1-tensors is called the dual space of V and is often
denoted V ∗ . The basis {φi } for V ∗ is called the dual basis of {bj }. Note that
φi(~bj) = δij := { 1 if i = j, 0 if i ≠ j },
and that there is a duality between vectors and 1-tensors (also called co-vectors).
~v = Σ_i v^i ~bi     where v^i = φi(~v),
α = Σ_j αj φj        where αj = α(~bj),
α(~v) = Σ_i αi v^i.
It is sometimes convenient to express vectors as columns and co-vectors as rows. The basis
vector ~bi is represented by a column with a 1 in the i-th slot and 0’s everywhere else, while
φj is represented by a row with a 1 in the jth slot and the rest zeroes. Unfortunately,
representing tensors of order greater than 2 visually is difficult, and even 2-tensors aren’t
properly described by matrices. To handle 2-tensors or higher, you really need indices.
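For example, on R2 with the standard basis, φ1(a, b) = a and φ2(a, b) = b. The covector α(~v) = 3v^1 − v^2 is then α = 3φ1 − φ2, represented by the row vector (3, −1), and α(~v) is the row-times-column product of (3, −1) with ~v.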
If α is a k-tensor and β is an ℓ-tensor, then we can combine them to form a (k + ℓ)-tensor that we denote α ⊗ β and call the tensor product of α and β:
(α ⊗ β)(~v1, . . . , ~vk+ℓ) = α(~v1, . . . , ~vk) β(~vk+1, . . . , ~vk+ℓ).
For instance,
(φi ⊗ φj)(~v, ~w) = φi(~v) φj(~w) = v^i w^j.
Not only are the φi ⊗ φj ’s 2-tensors, but they form a basis for the space of 2-tensors. The
proof is a generalization of the description above for 1-tensors, and a specialization of the
following exercise.
Exercise 1: For each ordered k-index I = {i1, . . . , ik} (where each number can range from 1 to n), let φ̃I = φi1 ⊗ φi2 ⊗ · · · ⊗ φik. Show that the φ̃I's form a basis for T^k(V∗), which thus has dimension n^k.
2. Alternating Tensors
Our goal is to develop the theory of differential forms. But k-forms are made for integrating over k-manifolds, and integration means measuring volume. So the k-tensors of interest
should behave qualitatively like the determinant tensor on Rk , which takes k vectors in Rk
and returns the (signed) volume of the parallelpiped that they span. In particular, it should
change sign whenever two arguments are interchanged.
Let Sk denote the group of permutations of (1, . . . , k). A typical element will be denoted
σ = (σ1 , . . . , σk ). The sign of σ is +1 if σ is an even permutation, i.e. the product of an even
number of transpositions, and −1 if σ is an odd permutation.
We say that a k-tensor α is alternating if, for any σ ∈ Sk and any (ordered) collection {~v1, . . . , ~vk} of vectors in V ,
α(~vσ1, . . . , ~vσk) = sign(σ) α(~v1, . . . , ~vk).
The space of alternating k-tensors on V is denoted Λk (V ∗ ). Note that Λ1 (V ∗ ) = T 1 (V ∗ ) = V ∗
and that Λ0 (V ∗ ) = T 0 (V ∗ ) = R.
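For example, on R2 the 2-tensor α(~v, ~w) = v^1 w^2 − v^2 w^1 (the 2 × 2 determinant) is alternating, while β(~v, ~w) = v^1 w^2 + v^2 w^1 is not, since swapping ~v and ~w leaves β unchanged rather than flipping its sign.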
If α is an arbitrary k-tensor, we define
Alt(α) = (1/k!) Σ_{σ∈Sk} sign(σ) α ◦ σ,
or more explicitly
Alt(α)(~v1, . . . , ~vk) = (1/k!) Σ_{σ∈Sk} sign(σ) α(~vσ1, . . . , ~vσk).
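For instance, for the 2-tensor φ1 ⊗ φ2,
Alt(φ1 ⊗ φ2)(~v, ~w) = ½(v^1 w^2 − v^2 w^1),   i.e.   Alt(φ1 ⊗ φ2) = ½(φ1 ⊗ φ2 − φ2 ⊗ φ1),
which is alternating, as expected.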
(1) Show that, for any k-tensor α, Alt(α) is alternating, so that Alt maps T^k(V∗) to Λ^k(V∗).
(2) Show that Alt, restricted to Λk(V∗), is the identity. Together with (1), this implies that Alt is a projection from T^k(V∗) to Λ^k(V∗).
(3) Suppose that α is a k-tensor with Alt(α) = 0 and that β is an arbitrary ℓ-tensor. Show that Alt(α ⊗ β) = 0.
The wedge product of an alternating k-tensor α and an alternating ℓ-tensor β is defined to be the alternating (k + ℓ)-tensor
α ∧ β := Ck,ℓ Alt(α ⊗ β),
where Ck,ℓ is a normalization constant; in these notes we take Ck,ℓ = (k + ℓ)!/(k! ℓ!). Show that the wedge product is associative:
(α ∧ β) ∧ γ = α ∧ (β ∧ γ).
[If you get stuck, look on page 156 of Guillemin and Pollack].
Exercise 6: Using the convention Ck,ℓ = (k + ℓ)!/(k! ℓ!), show that φi1 ∧ · · · ∧ φik = k! Alt(φi1 ⊗ · · · ⊗ φik). (If we had picked Ck,ℓ = 1 as in Guillemin and Pollack, we would have gotten the same formula, only without the factor of k!.)
Let’s take a step back and see what we’ve done.
• Starting with a vector space V with basis {~bi }, we created a vector space V ∗ =
T 1 (V ∗ ) = Λ1 (V ∗ ) with dual basis {φj }.
• We defined an associative product ∧ with the property that φj ∧ φi = −φi ∧ φj and
with no other relations.
• Since tensor products of the φj ’s span T k (V ∗ ), wedge products of the φj ’s must
span Λk (V ∗ ). In other words, Λk (V ∗ ) is exactly the space that you get by taking
formal products of the φi ’s, subject to the anti-symmetry rule.
• That’s exactly what we did with the formal symbols dxj to create differential forms
on Rn . The only difference is that the coefficients of differential forms are functions
rather than real numbers (and that we have derivative and pullback operations on
forms).
• Carrying over our old results from wedges of dxi's, we conclude that Λk(V∗) has dimension n!/(k!(n − k)!) (i.e., "n choose k") and basis φI := φi1 ∧ · · · ∧ φik, where I = {i1, . . . , ik} is an arbitrary subset of (1, . . . , n) (with k distinct elements) placed in increasing order.
• Note the difference between φ̃I = φi1 ⊗ · · · ⊗ φik and φI = φi1 ∧ · · · ∧ φik . The tensors
φ̃I form a basis for T k (V ∗ ), while the tensors φI form a basis for Λk (V ∗ ). They are
related by φI = k!Alt(φ̃I ).
Exercise 7: Let V = R3 with the standard basis, and let π : R3 → R2, π(x, y, z) = (x, y) be the projection onto the x-y plane. Let α(~v, ~w) be the signed area of the parallelogram spanned by π(~v) and π(~w) in the x-y plane. Similarly, let β and γ be the signed areas of the projections of ~v and ~w in the x-z and y-z planes, respectively. Express α, β and γ as linear combinations of φi ∧ φj's. [Hint: If you get stuck, try doing the next two exercises first.]
Exercise 8: Let V be arbitrary. Show that (φi1 ∧ · · · ∧ φik )(~bj1 , . . . , ~bjk ) equals +1 if
(j1 , . . . , jk ) is an even permutation of (i1 , . . . , ik ), −1 if it is an odd permutation, and 0
if the two lists are not permutations of one another.
Exercise 10: Now let α1, . . . , αk be an arbitrary ordered list of covectors, and let ~v1, . . . , ~vk be an arbitrary ordered list of vectors. Show that (α1 ∧ · · · ∧ αk)(~v1, . . . , ~vk) = det A, where A is the k × k matrix whose (i, j) entry is αi(~vj).
3. Pullbacks
If L : V → W is a linear transformation and α is a k-tensor on W, the pullback L∗α is the k-tensor on V defined by (L∗α)(~v1, . . . , ~vk) = α(L~v1, . . . , L~vk). This has some important properties. Pick bases (~b1, . . . , ~bn) and (d~1, . . . , d~m) for V and
W , respectively, and let {φj } and {ψ j } be the corresponding dual bases for V ∗ and W ∗ . Let
A be the matrix of the linear transformation L relative to the two bases. That is
L(~v)j = Σ_i Aji v^i.
Exercise 11: Show that the matrix of L∗ : W ∗ → V ∗ , relative to the bases {ψ j } and {φj },
is AT . [Hint: to figure out the components of a covector, act on a basis vector]
Exercise 12: If α is a k-tensor, and if I = {i1, . . . , ik}, show that
(L∗α)I = Σ_{j1,...,jk} Aj1,i1 Aj2,i2 · · · Ajk,ik α(j1,...,jk).
Exercise 13: Suppose that α is alternating. Show that L∗ α is alternating. That is, L∗
restricted to Λk (W ∗ ) gives a map to Λk (V ∗ ).
Exercise 14: If α and β are alternating tensors on W , show that L∗ (α ∧β) = (L∗ α) ∧(L∗ β).
Besides the tangent bundle T(X), whose fiber over x ∈ X is the tangent space Tx(X), here are some other examples of vector bundles π : E → X over a manifold X:
• The trivial bundle X × V , where V is any n-dimensional vector space. Here we can pick a constant basis for V .
• The normal bundle of X in Y (where X is a submanifold of Y ).
• The cotangent bundle whose fiber over x is the dual space of Tx (X). This is often
denoted Tx∗ (X), and the entire bundle is denoted T ∗ (X). Given a smoothly varying
basis for Tx (X), we can take the dual basis for Tx∗ (X).
• The k-th tensor power of T ∗ (X), which we denote T k (T ∗ (X)), i.e. the vector bundle
whose fiber over x is T k (Tx∗ (X)).
• The alternating k-tensors in T k (T ∗ (X)), which we denote Λk (T ∗ (X)).
Some key definitions:
• A section of a vector bundle E → X is a smooth map s : X → E such that π ◦ s is
the identity on X. In other words, such that s(x) is an element of the fiber over x
for every x.
• A differential form of degree k is a section of Λk (T ∗ (X)). The (infinite-dimensional)
space of k-forms on X is denoted Ωk (X).
• If f : X → R is a function, then dfx : Tx(X) → Tf(x)(R) = R is a covector at x.
Thus every function f defines a 1-form df .
• If f : X → Y is a smooth map of manifolds, then dfx is a linear map Tx (X) →
Tf (x) (Y ), and so induces a pullback map f ∗ : Λk (Tf∗(x) (Y )) → Λk (Tx∗ (X)), and hence
a linear map (also denoted f ∗ ) from Ωk (Y ) to Ωk (X).
Exercise 15: If f : X → Y and g : Y → Z are smooth maps of manifolds, then g ◦ f is a
smooth map X → Z. Show that (g ◦ f )∗ = f ∗ ◦ g ∗ .
5. Reconciliation
We have developed two different sets of definitions for forms, pullbacks, and the d oper-
ator. Our task in this section is to see how they’re really saying the same thing.
Old definitions:
• A differential form on Rn is a formal sum Σ_I αI(x) dxI, where αI(x) is an ordinary function and dxI is a product dxi1 ∧ · · · ∧ dxik of meaningless symbols that anti-commute.
• The exterior derivative is dα = Σ_{I,j} (∂j αI(x)) dxj ∧ dxI.
• Forms on n-manifolds are defined via forms on Rn and local coordinates and have
no intrinsic meaning.
New definitions:
• A differential form on Rn is a section of Λk (T ∗ (Rn )). Its value at each point x is an
alternating tensor that takes k tangent vectors at that point as inputs and outputs
a number.
• The exterior derivative on functions is defined as the usual derivative map df :
T (X) → R. We have not yet defined it for higher-order forms.
• If g : X → Y , then the pullback map Ωk (Y ) → Ωk (X) is induced by the derivative
map dg : T (X) → T (Y ).
• Forms on manifolds do not require a separate definition from forms on Rn , since
tangent spaces, dual spaces, and tensors on tangent spaces are already well-defined.
Next we want to extend the (new) definition of d to cover arbitrary forms. We would
like it to satisfy d(α ∧ β) = (dα) ∧ β + (−1)k α ∧ dβ and d2 = 0, and that is enough.
d(dxi) = d²(xi) = 0,
d(dxi ∧ dxj) = d(dxi) ∧ dxj − dxi ∧ d(dxj) = 0, and similarly
(20) d(dxI) = 0 by induction on the degree of I.
The product rule then forces
d(Σ_I αI dxI) = Σ_I dαI ∧ dxI + Σ_I αI d(dxI) = Σ_{I,j} (∂j αI) dxj ∧ dxI,
which is exactly the same formula as before. Note that this construction also works to define
d uniquely on manifolds, as long as we can find functions f i on a neighborhood of a point
p such that the df i ’s span Tp∗ (X). But such functions are always available via the local
parametrization. If ψ : U → X is a local parametrization, then we can just pick f i to be the
i-th entry of ψ −1 . That is f i = xi ◦ ψ −1 . This gives a formula for d on X that is equivalent
to “convert to Rn using ψ, compute d in Rn , and then convert back”, which was our old
definition of d on a manifold.
We now check that the definitions of pullback are the same. Let g : Rn → Rm . Under
the new definition, g ∗(dy i )(~v ) = dy i (dg(~v)), which is the ith entry of dg(~v), where we are
using coordinates {y i } on Rm . But that is the same as dg i (~v), so g ∗ (dy i ) = dg i . Since the
pullback of a function f is just the composition f ◦ g, and since g ∗ (α ∧ β) = (g ∗ (α)) ∧ (g ∗(β))
(see the last exercise in the "pullbacks" section), we must have
g∗(Σ_I αI dyI)(x) = Σ_I αI(g(x)) dgi1 ∧ · · · ∧ dgik,
exactly as before. This also shows that g∗(dα) = d(g∗α), since that identity is a consequence
= dxj (~ei ), so
(22) ψ ∗ (φj ) = dxj .
Under the old definition, forms on X were abstract objects that corresponded, via pullback,
to forms on U, such that changes of coordinates followed the rules for pullbacks of maps
Rn → Rn . Under the new definition, ψ ∗ automatically pulls a basis for Tp∗ (X) to a basis
for Ta∗ (Rn ), and this extends to an isomorphism between forms on a neighborhood of p and
forms on a neighborhood of a. Furthermore, if ψ1,2 are two different parametrizations of the same neighborhood of p, and if ψ1 = ψ2 ◦ g12 (so that g12 maps the ψ1 coordinates to the ψ2 coordinates), then we automatically have ψ1∗ = g12∗ ◦ ψ2∗, thanks to Exercise 15.
Bottom line: It is perfectly legal to do forms the old way, treating the dx’s as meaningless
symbols that follow certain axioms, and treating forms on manifolds purely via how they
appear in various coordinate systems. However, sections of bundles of alternating tensors on
T (X) give an intrinsic realization of the exact same algebra. The new definitions allow us
to talk about what differential forms actually are, and to develop a cleaner intuition on how
forms behave. In particular, they give a very simple explanation of what integration over
manifolds really means.
CHAPTER 4
Integration
What does ∫_a^b f(x) dx really mean? In the Riemann-sum picture, we chop the interval [a, b] into N small pieces a = x0 < x1 < · · · < xN = b and add up the contributions Σ_{k=1}^N f(x*_k) ∆k x, where ∆k x = xk − xk−1 is the length of the kth interval, and x*_k is an arbitrarily chosen point in the kth interval. As long as f is continuous and each interval is small, all values of f(x) in the kth interval are close to f(x*_k), so f(x*_k) ∆k x is a good approximation to the amount of stuff in the kth interval. As N → ∞ and the intervals are chosen smaller and smaller, the errors go to zero, and we have
∫_a^b f(x) dx = lim_{N→∞} Σ_{k=1}^N f(x*_k) ∆k x.
Note that I have not required that all of the intervals [xk−1 , xk ] be the same size! While
that’s convenient, it’s not actually necessary. All we need for convergence is for all of the
sizes to go to zero in the N → ∞ limit.
The same idea goes in higher dimensions, when we want to integrate any continuous
bounded function over any bounded region. We break the region into tiny pieces, estimate
the contribution of each piece, and add up the contributions. As the pieces are chosen smaller
and smaller, the errors in our estimates go to zero, and the limit of our sum is our exact
integral.
If we want to integrate an unbounded function, or integrate over an unbounded region,
we break things up into bounded pieces and add up the integrals over the (infinitely many)
pieces. A function is (absolutely) integrable if the pieces add up to a finite sum, no matter
how we slice up the pieces. Calculus books sometimes distinguish between "Type I" improper integrals like ∫_1^∞ x^{−3/2} dx and "Type II" improper integrals like ∫_0^1 y^{−1/2} dy, but they are really the same. Just apply the change of variables y = 1/x:
∫_1^∞ x^{−3/2} dx = Σ_{k=1}^∞ ∫_k^{k+1} x^{−3/2} dx
                  = Σ_{k=1}^∞ ∫_{1/(k+1)}^{1/k} y^{−1/2} dy
(24)              = ∫_0^1 y^{−1/2} dy.
When doing such a change of variables, the width of the intervals can change drastically. ∆y is not ∆x, and x-intervals of size 1 turn into y-intervals of size 1/(k(k + 1)). Likewise, the integrand is not the same. However, the contribution of the interval, whether written as x^{−3/2} ∆x or y^{−1/2} ∆y, is the same (at least in the limit of small intervals).
In other words, we need to stop thinking about f (x) and dx separately, and think instead
of the combination f (x)dx, which is a machine for extracting the contribution of each small
interval.
But that’s exactly what the differential form f (x)dx is for! In one dimension, the covector
dx just gives the value of a vector in R1. If we evaluate f(x) dx at a sample point x*_k and apply it to the vector ~xk − ~xk−1, we get
f(x*_k) dx(~xk − ~xk−1) = f(x*_k) ∆k x.
Likewise, let’s try to interpret the integral of f (x, y)dxdy over a rectangle R = [a, b]×[c, d]
in R2 . The usual approach is to break the interval [a, b] into N pieces and the interval
[c, d] into M pieces, and hence the rectangle R into NM little rectangles with vertices at
(xi−1 , yj−1), (xi , yj−1), (xi−1 , yj ) and (xi , yj ), where i = 1, . . . , N and j = 1, . . . , M.
So what is the contribution of the (i, j)-th sub-rectangle Rij ? We evaluate f (x, y) at a
sample point (x∗i , yj∗) and multiply by the area of Rij . However, that area is exactly what
you get from applying dx ∧ dy to the vectors ~v1 = (xi − xi−1 , 0) and ~v2 = (0, yj − yj−1) that
span the sides of the rectangle. In other words, f (x∗i , yj∗ )∆i x∆j y is exactly what you get
when you apply the 2-form f(x, y) dx ∧ dy to the vectors (~v1, ~v2) at the point (x*_i, y*_j). [Note that this interpretation requires the normalization Ck,ℓ = (k + ℓ)!/(k! ℓ!) for wedge products. If we had used Ck,ℓ = 1, as in Guillemin and Pollack, then dx ∧ dy(~v1, ~v2) would only be half the area of the rectangle.]
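Indeed, with ~v1 = (∆x, 0) and ~v2 = (0, ∆y),
dx ∧ dy(~v1, ~v2) = dx(~v1) dy(~v2) − dx(~v2) dy(~v1) = ∆x ∆y − 0 = ∆x ∆y,
the area of the little rectangle.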
The same process works for integrals over any bounded domain R in Rn. To compute ∫_R f(x) dn x:
(1) Break R into a large number of small pieces {RI }, which we’ll call “boxes”, each
of which is approximately a parallelpiped spanned by vectors ~v1 , . . . , ~vn , where the
vectors don’t have to be the same for different pieces.
(2) To get the contribution of a box RI, pick a point x*_I ∈ RI, evaluate the n-form f(x) dx1 ∧ · · · ∧ dxn at x*_I, and apply it to the vectors ~v1, . . . , ~vn. Your answer will depend on the choice of x*_I, but all choices will give approximately the same answer.
(3) Add up the contributions of all of the different boxes.
(4) Take a limit as the sizes of the boxes go to zero uniformly. Integrability means that
this limit does not depend on the choices of the sample points x∗I , or on the way
that we defined the boxes. When f is continuous and bounded, this always works.
When f is unbounded or discontinuous, or when R is unbounded, work is required
to show that the limit is well-defined.
For instance, to integrate e^{−(x²+y²)} dx dy over the unit disk, we need to break the disk into
pieces. One way is to use Cartesian coordinates, where the boxes are rectangles aligned with
the coordinate axes and of size ∆x × ∆y. Another way is to use polar coordinates, where
the boxes have r and θ ranging over small intervals.
Exercise 1: Let RI be a "polar rectangle" whose vertices p1, p2, p3 and p4 have polar coordinates (r0, θ0), (r0 + ∆r, θ0), (r0, θ0 + ∆θ) and (r0 + ∆r, θ0 + ∆θ), respectively, where we assume that ∆r is much smaller than r0 and that ∆θ is small in absolute terms. Let ~v1 be the vector from p1 to p2 and let ~v2 be the vector from p1 to p3.
(a) Compute dx ∧ dy(~v1, ~v2 ).
(b) If our sample point x∗I has polar coordinates (r ∗ , θ∗ ), evaluate the approximate contribu-
tion of this box.
(c) Express the limit of the sum over all boxes as a double integral over r and θ.
(d) Evaluate this integral.
Exercise 3: Let S be a surface in R3 and let ~v(x) be a vector field. Show directly that ∫_S iv(dx ∧ dy ∧ dz) is the flux of ~v through S. That is, show that iv(dx ∧ dy ∧ dz) applied to a pair of (small) vectors gives (approximately) the flux of ~v through a parallelogram spanned by those vectors. (Here iv denotes contraction with ~v: for a k-form α, iv α is the (k − 1)-form with (iv α)(~w1, . . . , ~wk−1) = α(~v, ~w1, . . . , ~wk−1).)
Exercise 4: In R3 we have already seen iv (dx ∧ dy ∧ dz). What did we call it?
Exercise 5: Let ~v be any vector field in Rn . Compute d(iv (dx1 ∧ · · · ∧ dxn )).
Exercise 6: Let α = Σ_I αI(x) dxI be a k-form on Rn and let ~v(x) = ~ei, the i-th standard basis vector for Rn. Compute d(iv α) + iv(dα). Generalize to the case where ~v is an arbitrary constant vector field.
When ~v is not constant, the expression d(iv α) + iv (dα) is more complicated, and depends
both on derivatives of v and derivatives of αI , as we saw in the last two exercises. This
quantity is called the Lie derivative of α with respect to ~v.
It is certainly possible to feed more than one vector field to a k-form, thereby reducing its degree by more than 1. Since α(~w, ~v, . . .) = −α(~v, ~w, . . .) for alternating α, it immediately follows that iv iw = −iw iv as a map Ωk(X) → Ωk−2(X).
CHAPTER 5
de Rham Cohomology
Suppose that α is a closed k-form and β is a closed ℓ-form, and that we modify each by an exact form:
α′ = α + dµ,    β′ = β + dν.
But then
α′ ∧ β′ = (α + dµ) ∧ (β + dν)
        = α ∧ β + (dµ) ∧ β + α ∧ dν + dµ ∧ dν
        = α ∧ β + d(µ ∧ β) + (−1)^k d(α ∧ ν) + d(µ ∧ dν)
        = α ∧ β + exact forms,
where we have used the fact that d(µ ∧ β) = dµ ∧ β + (−1)^{k−1} µ ∧ dβ = dµ ∧ β, and similar expansions for the other terms. That is, the class of α ∧ β in cohomology depends only on the classes of α and β, so the wedge product descends to a product on de Rham cohomology.
2. Pullbacks in Cohomology
Since pullback commutes with d, the pullback of a closed form is closed and the pullback of an exact form is exact, so a smooth map f : X → Y induces a map f♯ : Hk(Y) → Hk(X), f♯[α] = [f∗α]. We are using notation to distinguish between the pullback map f∗ on forms and the pullback
map f ♯ on cohomology. Guillemin and Pollack also follow this convention. However, most
authors use f ∗ to denote both maps, hoping that it is clear from context whether we are
talking about forms or about the classes they represent. (Still others use f ♯ for the map on
forms and f ∗ for the map on cohomology. Go figure.)
Note that f ♯ is a contravariant functor, which is a fancy way of saying that it reverses
the direction of arrows. If f : X → Y , then f ♯ : H k (X) ← H k (Y ). If f : X → Y and
g : Y → Z, then g ♯ : H k (Z) → H k (Y ) and f ♯ : H k (Y ) → H k (X). Since (g ◦ f )∗ = f ∗ ◦ g ∗, it
follows that (g ◦ f )♯ = f ♯ ◦ g ♯ .
We will have more to say about pullbacks in cohomology after we have established some
more machinery.
Write a k-form α on X × R (with t the coordinate on the R factor and x local coordinates on X) as
α(t, x) = dt ∧ Σ_J βJ(t, x) dxJ + Σ_I γI(t, x) dxI,
and let π : X × R → X be the projection and s0 : X → X × R, s0(x) = (0, x), the inclusion at t = 0. We define
P(α)(t, x) = Σ_J ( ∫_0^t βJ(s, x) ds ) dxJ.
P(α) is called the integral along the fiber of α. Note that s0∗α, evaluated at x, is Σ_I γI(0, x) dxI, and that
(1 − π∗s0∗) α(t, x) = dt ∧ β(t, x) + Σ_I (γI(t, x) − γI(0, x)) dxI.
Now we compute dP(α) and P(dα). Since
dα(t, x) = −dt ∧ Σ_{j,J} (∂j βJ(t, x)) dxj ∧ dxJ + Σ_I ∂t γI(t, x) dt ∧ dxI + Σ_{I,j} ∂j γI(t, x) dxj ∧ dxI,
Exercise 1: In this proof, the operator P was defined relative to local coordinates on X.
Show that this is in fact well-defined. That is, if we have two parametrizations ψ and φ, and
we compute P (α) using the φ coordinates and then convert to the ψ coordinates, we get the
same result as if we computed P (α) directly using the ψ coordinates.
An immediate corollary of this theorem is that Hk(Rn) = Hk(Rn−1) = · · · = Hk(R0). In particular, H0(Rn) = R and Hk(Rn) = 0 for all k > 0; this is the Poincare Lemma.
Suppose that a manifold X can be written as the union of two open submanifolds, U
and V . The Mayer-Vietoris Sequence is a technique for computing the cohomology of X
from the cohomologies of U, V and U ∩ V . This has direct practical importance, in that it
allows us to compute things like H k (S n ) and many other simple examples. It also allows us
to prove many properties of compact manifolds by induction on the number of open sets in
a “good cover” (defined below). Among the things that can be proved with this technique
(of which we will only prove a subset) are:
(1) H k (S n ) = R if k = 0 or k = n and is trivial otherwise.
(2) If X is compact, then H k (X) is finite-dimensional. This is hardly obvious, since
H k (X) is the quotient of the infinite-dimensional vector space of closed k-forms by
another infinite-dimensional space of exact k-forms. But as long as X is compact,
the quotient is finite-dimensional.
(3) If X is a compact, oriented n-manifold, then H n (X) = R.
(4) If X is a compact, oriented n-manifold, then H k (X) is isomorphic to H n−k (X).
(More precisely to the dual of H n−k (X), but every finite-dimensional vector space
is isomorphic to its own dual.) This is called Poincare duality.
(5) If X is any compact manifold, orientable or not, then H k (X) is isomorphic to
Hom(Hk (X), R), where Hk (X) is the k-th homology group of X.
(6) A formula for H k (X × Y ) in terms of the cohomologies of X and Y .
Suppose we have a sequence
V1 −L1→ V2 −L2→ V3 −L3→ · · ·
where each V i is a vector space and each Li : V i → V i+1 is a linear transformation. We say
that this sequence is exact if the kernel of each Li equals the image of the previous Li−1 . In
particular,
0 → V −L→ W → 0
is exact if and only if L is an isomorphism, since the kernel of L has to equal the image of 0, and the image of L has to equal the kernel of the 0 map on W .
Exercise 7: A short exact sequence involves three spaces and two maps:
0 → U −i→ V −j→ W → 0.
Show that if this sequence is exact, there must be an isomorphism h : V → U ⊕ W , with h ◦ i(u) = (u, 0) and j ◦ h−1(u, w) = w.
Exact sequences can be defined for homomorphisms between arbitrary Abelian groups,
and not just vector spaces, but are much simpler when applied to vector spaces. In particular,
the analogue of the previous exercise is false for groups. (E.g. one can define a short exact
sequence 0 → Z2 → Z4 → Z2 → 0 even though Z4 is not isomorphic to Z2 × Z2 .)
Suppose that X = U ∪ V , where U and V are open submanifolds of X. There are
natural inclusion maps iU and iV of U and V into X, and these induce maps i∗U and i∗V from
Ωk (X) to Ωk (U) and Ωk (V ). Note that i∗U (α) is just the restriction of α to U, while i∗V (α)
is the restriction of α to V . Likewise, there are inclusions rU and rV of U ∩ V in U and V , respectively, and associated restrictions rU∗ and rV∗ from Ωk(U) and Ωk(V ) to Ωk(U ∩ V ).
Together, these form a sequence:
(25) 0 → Ωk(X) −ik→ Ωk(U) ⊕ Ωk(V ) −jk→ Ωk(U ∩ V ) → 0,
where the maps are defined as follows. If α ∈ Ωk(X), β ∈ Ωk(U) and γ ∈ Ωk(V ), then
ik(α) = (i∗U α, i∗V α),    jk(β, γ) = r∗U β − r∗V γ.
Note that d(ik(α)) = ik+1(dα) and that d(jk(β, γ)) = jk+1(dβ, dγ). That is, the diagram
0 → Ωk(X)   −ik→   Ωk(U) ⊕ Ωk(V )   −jk→   Ωk(U ∩ V )   → 0
      ↓ dk               ↓ dk                   ↓ dk
0 → Ωk+1(X) −ik+1→ Ωk+1(U) ⊕ Ωk+1(V ) −jk+1→ Ωk+1(U ∩ V ) → 0
commutes. Thus ik and jk send closed forms to closed forms and exact forms to exact forms, and induce maps
i♯k : Hk(X) → Hk(U) ⊕ Hk(V );    j♯k : Hk(U) ⊕ Hk(V ) → Hk(U ∩ V ).
Theorem 4.1 (Mayer-Vietoris). There exists a map d♯k : Hk(U ∩ V ) → Hk+1(X) such that the sequence
· · · → Hk(X) −i♯k→ Hk(U) ⊕ Hk(V ) −j♯k→ Hk(U ∩ V ) −d♯k→ Hk+1(X) −i♯k+1→ Hk+1(U) ⊕ Hk+1(V ) → · · ·
is exact.
The proof is a long slog, and warrants a section of its own. Then we will develop the
uses of Mayer-Vietoris sequences.
5. Proof of Mayer-Vietoris
(1) We show that the short sequence (25) of forms is exact: ik is injective, the image of ik is the kernel of jk, and jk is surjective.
(2) We construct the connecting map d♯k : Hk(U ∩ V ) → Hk+1(X).
(3) Having constructed the maps, we show exactness at Hk(U) ⊕ Hk(V ), i.e., that the image of i♯k equals the kernel of j♯k.
(4) We show exactness at H k (U ∩ V ), i.e., that the image of jk♯ equals the kernel of d♯k .
(5) We show exactness at H k+1 (X), i.e. that the kernel of i♯k+1 equals the image of d♯k .
(6) Every step but the first is formal, and applies just as well to any short exact sequence
of (co)chain complexes. This construction in homological algebra is called the snake
lemma, and may be familiar to some of you from algebraic topology. If so, you can
skip ahead after step 2. If not, don’t worry. We’ll cover everything from scratch.
Step 1: Showing that
0 → Ωk(X) −ik→ Ωk(U) ⊕ Ωk(V ) −jk→ Ωk(U ∩ V ) → 0
is exact amounts to showing that ik is injective, that Im(ik) = Ker(jk), and that jk is surjective.
The first two are easy. The subtlety is in showing that jk is surjective.
Recall that i∗U , i∗V , rU∗ and rV∗ are all restriction maps. If α ∈ Ωk (X) and ik (α) = 0, then
the restriction of α to U is zero, as is the restriction to V . But then α itself is the zero form
on X. This shows that ik is injective.
Likewise, for any α ∈ Ωk (X), i∗U (α) and i∗V (α) agree on U ∩ V , so rU∗ i∗U (α) = rV∗ i∗V (α), so
jk (ik (α)) = 0. Conversely, if jk (β, γ) = 0, then rU∗ (β) = rV∗ (γ), so we can stitch β and γ into
a form α on X that equals β on U and equals γ on V (and equals both of them on U ∩ V ),
so (β, γ) ∈ Im(ik ).
Now suppose that µ ∈ Ωk (U ∩ V ) and that {ρU , ρV } is a partition of unity of X relative
to the open cover {U, V }. Since the function ρU is zero outside of U, the form ρU µ can be
extended to a smooth form on V by declaring that ρU µ = 0 on V − U. Note that ρU µ is
not a form on U, since µ is not defined on the entire support of ρU . Rather, ρU µ is a form
on V , since µ is defined at all points of V where ρU ≠ 0. Likewise, ρV µ is a form on U. On
U ∩ V , we have µ = ρV µ − (−ρU µ). This means that µ = jk (ρV µ, −ρU µ).
The remaining steps are best described in the language of homological algebra. A cochain
complex A is a sequence of vector spaces1 {A0, A1, A2, . . .} together with maps dk : Ak →
Ak+1 such that dk ◦ dk−1 = 0. We also define A−1 = A−2 = · · · to be 0-dimensional vector
spaces (“0”) and d−1 = d−2 = · · · to be the zero map. The k-th cohomology of the complex
is
Hk(A) = (kernel of dk) / (image of dk−1).
1For (co)homology theories with integer coefficients one usually considers sequences of Abelian groups. For de Rham theory, vector spaces will do.
Suppose now that A, B, and C are cochain complexes and that for each k we have a short exact sequence
0 → Ak −ik→ Bk −jk→ Ck → 0,
where the maps ik and jk commute with d. We will use the letters α, β, γ, with appropriate subscripts and other markers, to denote elements of A, B and C, respectively. For simplicity, we will write "d" for dAk, dBk, dCk, dAk+1, etc. Our first task is to define d♯k[γk], where [γk] is a class in Hk(C).
Since jk is surjective, we can find βk ∈ B k such that γk = jk (βk ). Now,
jk+1 (dβk ) = d(jk (βk )) = d(γk ) = 0,
since γk was closed. Since dβk is in the kernel of jk+1 , it must be in the image of ik+1 . Let
αk+1 be such that ik+1 (αk+1) = dβk . Furthermore, αk+1 is unique, since ik+1 is injective. We
define
d♯k [γk ] = [αk+1 ].
The construction of [αk+1 ] from [γk ] is summarized in the following diagram:
βk    −jk→    γk
 ↓ d
αk+1  −ik+1→  dβk
For this definition to be well-defined, we must show that
• αk+1 is closed. However,
ik+2 (dαk+1 ) = d(ik+1 αk+1 ) = d(dβk ) = 0.
Since ik+2 is injective, dαk+1 must then be zero.
• The class [αk+1] does not depend on which βk we chose, so long as jk(βk) = γk. The
argument is displayed in the diagram
β′k = βk + ik(αk)    −jk→    γk
 ↓ d
α′k+1 = αk+1 + dαk   −ik+1→  dβ′k = dβk + d(ik αk)
To see this, suppose that we pick a different β′k ∈ Bk with jk(β′k) = γk = jk(βk). Then jk(β′k − βk) = 0, so β′k − βk must be in the image of ik, so there exists αk ∈ Ak such that β′k = βk + ik(αk). But then
dβ′k = dβk + d(ik(αk)) = dβk + ik+1(dαk) = ik+1(αk+1 + dαk),
so α′k+1 = αk+1 + dαk. But then [α′k+1] = [αk+1], as required.
• The class [αk+1 ] does not depend on which cochain γk we use to represent the class
[γk ].
βk−1          −jk−1→   γk−1
βk + dβk−1    −jk→     γk + dγk−1
 ↓ d
αk+1          −ik+1→   dβk
Suppose γ′k = γk + dγk−1 is another representative of the same class. Then there exists a βk−1 such that γk−1 = jk−1βk−1. But then
jk(βk + dβk−1) = jk(βk) + jk(d(βk−1)) = γk + d(jk−1βk−1) = γk + dγk−1 = γ′k.
Thus we can take βk′ = βk + dβk−1 . But then dβk′ = dβk , and our cochain αk+1 is
exactly the same as if we had worked with γ instead of γ ′ .
Before moving on to the rest of the proof of the Snake Lemma, let’s stop and see how
this works for Mayer-Vietoris.
(1) Start with a class in H k (U ∩ V ), represented by a closed form γk ∈ Ωk (U ∩ V ).
(2) Pick βk = (ρV γk , −ρU γk ), where (ρU , ρV ) is a partition of unity.
(3) dβk = (d(ρV γk ), −d(ρU γk )). This is zero outside of U ∩ V , since ρV γk and ρU γk were
constructed to be zero outside U ∩ V .
(4) Since ρU = 1 − ρV where both are defined, dρU = −dρV . Since dγk = 0, we then
have −d(ρU γk ) = d(ρV γk ) on U ∩ V . This means that the forms d(ρV γk ) on U and
−d(ρU γk ) on V agree on U ∩ V , and can be stitched together to define a closed form
αk+1 on all of X. That is, d♯k[γk] is represented by the closed form
αk+1 = { d(ρV γk)   on U
       { −d(ρU γk)  on V ,
and the two definitions agree on U ∩ V .
Returning to the Snake Lemma, we must show six inclusions:
• Im(i♯k ) ⊂ Ker(jk♯ ), i.e. that jk♯ i♯k [αk ] = 0 for any closed cochain αk . This follows
from
jk♯ (i♯k [αk ]) = jk♯ [ik αk ] = [jk (ik (αk ))] = 0,
since jk ◦ ik = 0.
• Im(jk♯ ) ⊂ Ker(d♯k ), i.e. that d♯k ◦ jk♯ = 0. If [βk ] is a class in H k (B), then jk♯ [βk ] =
[jk βk ]. To apply d♯k to this, we must
(a) find a cochain in B k that maps to jk βk . Just take βk itself!
(b) Take d of this cochain. That gives 0, since βk is closed.
(c) Find an αk+1 that maps to this. This is αk+1 = 0.
• Im(d♯k) ⊂ Ker(i♯k+1). If αk+1 represents d♯k[γk], then ik+1αk+1 = dβk is exact, so i♯k+1[αk+1] = 0.
• Ker(j♯k) ⊂ Im(i♯k). If [jkβk] = 0, then jkβk = d(γk−1). Since jk−1 is surjective, we can find a βk−1 such that jk−1βk−1 = γk−1. But then jk(dβk−1) = d(jk−1(βk−1)) = dγk−1 = jkβk. Since jk(βk − dβk−1) = 0, we must have βk − dβk−1 = ik(αk) for some αk ∈ Ak. Note that ik+1 dαk = d(ik αk) = dβk − d(dβk−1) = 0, since βk is closed. But ik+1 is injective, so dαk must be zero, so αk represents a class [αk] ∈ Hk(A). But then i♯k[αk] = [ik(αk)] = [βk − dβk−1] = [βk], so [βk] is in the image of i♯k.
• Ker(d♯k ) ⊂ Im(jk♯ ). Suppose that d♯k [γk ] = 0. This means that the αk+1 constructed
to satisfy ik+1 αk+1 = dβk , where γk = jk (βk ), must be exact. That is, αk+1 = dαk
6. Using Mayer-Vietoris
is
0 → Hk(Sn−1) → Hk+1(Sn) → 0,
so Hk+1(Sn) is isomorphic to Hk(Sn−1). By induction on n, this shows that Hk(Sn) = R when k = n and is 0 when 0 < k ≠ n. Furthermore, the generator of Hn(Sn) can be realized
as a bump form, equal to dρV wedged with the generator of H n−1 (S n−1 ), which in turn is a
bump 1-form wedged with the generator of Hn−2(Sn−2). Combining steps, this gives a bump n-form of total integral 1, localized near the point (1, 0, 0, . . . , 0).
Exercise 10: Let T = S1 × S1 be the 2-torus. By dividing one of the S1 factors into two (overlapping) open sets, we can divide T into two cylinders U and V , such that U ∩ V is itself the disjoint union of two cylinders. Use this decomposition and the Mayer-Vietoris sequence to compute the cohomology of T . Warning: unlike with the circle, the dimensions of Hk(U),
H k (V ) and H k (U ∩ V ) are not enough to solve this problem. You have to actually study
what i♯k , jk♯ and/or d♯k are doing. [Note: H 1 (R × S 1 ) = R, by integration over the fiber. This
theorem also implies that the generator of H 1 (R × S 1 ) is just the pullback of a generator of
H 1 (S 1 ) to R × S 1 . You should be able to explicitly write generators for H 1 (U), H 1 (V ) and
H 1 (U ∩ V ) and see how the maps i♯1 and j1♯ behave.]
Exercise 11: Let K be a Klein bottle. Find open sets U and V such that U ∪ V = K, such
that U and V are cylinders, and such that U ∩ V is the disjoint union of two cylinders. In
other words, the exact same data as with the torus T . The difference is in the ranks of some
of the maps. Use Mayer-Vietoris to compute the cohomology of K.
A set is contractible if it deformation retracts to a single point, in which case it has the
same cohomology as a single point, namely H 0 = R and H k = 0 for k > 0. In the context
of n-manifolds, an open contractible set is something diffeomorphic to Rn .
Exercise 12: Suppose that we are on a manifold with a Riemannian metric, so that there
is well-defined notion of geodesics. Suppose furthermore that we are working on a region on
which geodesics are unique: there is one and only one geodesic from a point p to a point q.
An open submanifold A is called convex if, for any two points p, q ∈ A, the geodesic from p
to q is entirely in A. Show that a convex submanifold is contractible.
Exercise 13: Show that the (non-empty) intersection of any collection of convex sets is
convex, and hence contractible.
A good cover of a topological space is an open cover {Ui} such that any finite intersection Ui1 ∩ Ui2 ∩ · · · ∩ Uik is either empty or contractible.
Proof. First suppose that X has a Riemannian metric. Then each point has a convex
geodesic (open) neighborhood. Since the intersection of two (or more) convex sets is convex,
these neighborhoods form a good cover. Since X is compact, there is a finite sub-cover,
which is still good.
If X is embedded in RN , then the Riemannian structure comes from the embedding. The
only question is how to get a Riemannian metric when X is an abstract manifold. To do
this, partition X into coordinate patches. Use the Riemannian metric on Rn on each patch.
Then stitch them together using partitions of unity. Since any positive linear combination of
inner products still satisfies the axioms of an inner product, this gives a Riemannian metric
for X.
We next consider how big the cohomology of a manifold X can be. H k (X) is the quotient
of two infinite-dimensional vector spaces. Can the quotient be infinite-dimensional?
If X is not compact, it certainly can. For example, consider the connected sum of an
infinite sequence of tori. H 1 of such a space would be infinite-dimensional. However,
This proof was an example of the Mayer-Vietoris argument. In general, we might want
to prove that all spaces with finite good covers have a certain property P . Examples of
such properties include finite-dimensional cohomology, Poincare duality (between de Rham
cohomology and something called “compactly supported cohomology”), the Kunneth formula
for cohomologies of product spaces, and the isomorphism between de Rham cohomology and
singular cohomology with real coefficients. The steps of the argument are the same for all of
these theorems:
(1) Show that every contractible set has property P .
(2) Using the Mayer-Vietoris sequence, show that if U, V , and U ∩ V have property P ,
then so does U ∪ V .
(3) Proceed by induction, as above, to show that all manifolds with finite good covers
have property P .
(4) Conclude that all compact manifolds have property P .
For lots of examples of the Mayer-Vietoris principle in action, see Bott and Tu’s excellent
book Differential forms in algebraic topology.
CHAPTER 6
Top Cohomology, Poincare Duality, and Degree
The compactly supported forms on a manifold X fit into a complex
0 → Ω0c(X) −d→ Ω1c(X) −d→ Ω2c(X) −d→ · · · −d→ Ωnc(X) → 0,
and we define Hck(X) to be the k-th cohomology of this complex. That is, Hck(X) is the space of closed compactly supported k-forms modulo those that are d of a compactly supported (k − 1)-form.
Proof. I will treat the cases n = 0, n = 1 and n = 2 by hand, and then show how to get all larger values of n by induction.
If n = 0, then Rn is compact, and Hc0(R0) = H0(R0) = R.
If n = 1, then Hc0 consists of compactly supported constant functions. But a constant function is only compactly supported if the constant is zero! Thus Hc0(R) = 0. As for Hc1, suppose that α is a 1-form supported on the interval [−R, R]. Let f(x) = ∫_{−R}^x α. Then α = df, and f is the only antiderivative of α that is zero for x < −R. Meanwhile, f(x) = 0 for x > R if and only if ∫_R α = 0, so α is d of a compactly supported function if and only if ∫_R α = 0, and Hc1(R) = R, with an isomorphism given by integration over R.
d◦i = i◦d
d◦j = j◦d
j◦i = 1
(26) (1 − i ◦ j) = ±dP ± P d,
where we have suppressed the subscripts on ik , jk , dk , and Pk and the identity map 1, and
where the signs in the last equation may depend on k. The first line implies that i induces a
map i♯ : Hck (X) → Hck+1 (X × R), and the second that j induces a map j ♯ : Hck+1(X × R) →
Hck (X). The third line implies that j ♯ ◦ i♯ is the identity, and the fourth implies that i♯ ◦ j ♯
is also the identity. Thus i♯ and j ♯ are isomorphisms, and Hck+1 (X × R) = Hck (X). In
particular, H^{k+1}_c(R^{n+1}) = H^k_c(R^n), providing the inductive step of the proof of our theorem.
If α = Σ_I α_I(x) dx^I is a compactly supported k-form on X, let
i(α) = Σ_I α_I(x) f′(s) dx^I ∧ ds = Σ_I (−1)^k f′(s) α_I(x) ds ∧ dx^I,
where f′(s) ds is a bump form on R with integral 1. Let φ be the pullback of this bump form to X × R. Another way of writing the formula for i is then i(α) = α ∧ φ (suppressing the pullback of α from X to X × R), from which d(i(α)) = i(dα) is immediate, since dφ = 0.
Next we define j. Every compactly supported k-form α on X × R can be written as a
sum of two pieces:
α = Σ_I α_I(x, s) dx^I + Σ_J γ_J(x, s) dx^J ∧ ds,
where the multi-indices I have length k and the multi-indices J have length k − 1. We then define
j(α) = Σ_J ( ∫_{−∞}^{∞} γ_J(x, s) ds ) dx^J.
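As a quick check of the third relation in (26): if β = Σ_I β_I(x) dx^I is a compactly supported k-form on X, then i(β) = Σ_I β_I(x) f′(s) dx^I ∧ ds consists only of terms with a ds factor, so
j(i(β)) = Σ_I ( ∫_{−∞}^{∞} f′(s) ds ) β_I(x) dx^I = β,
since the bump form f′(s) ds has total integral 1.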
1 In this construction we work with X × R rather than R × X to simplify the signs in some of our
computations.
Note that
j(dα) = j( Σ_{I,j} ∂_j α_I(x, s) dx^j ∧ dx^I + Σ_I ∂_s α_I(x, s) ds ∧ dx^I + Σ_{j,J} ∂_j γ_J(x, s) dx^j ∧ dx^J ∧ ds )
       = Σ_I (−1)^k ( ∫_{−∞}^{∞} ∂_s α_I(x, s) ds ) dx^I + Σ_{j,J} ( ∫_{−∞}^{∞} ∂_j γ_J(x, s) ds ) dx^j ∧ dx^J
       = 0 + Σ_{j,J} ( ∫_{−∞}^{∞} ∂_j γ_J(x, s) ds ) dx^j ∧ dx^J
(27)   = d(j(α)),
where we have used the fact that α_I(x, s) is compactly supported, so ∫_{−∞}^{∞} ∂_s α_I(x, s) ds = α_I(x, ∞) − α_I(x, −∞) = 0. In the computation of H^2_c(R^2), the form β̃ was precisely i ◦ j(β).
Now, for α ∈ Ω^k_c(X × R), let
P(α)(x, s) = Σ_J ( ∫_{−∞}^{s} γ_J(x, t) dt − f(s) ∫_{−∞}^{∞} γ_J(x, t) dt ) dx^J.
This gives a compactly supported form, since for s large and positive the two terms cancel,
while for s large and negative both terms are zero.
Exercise 1: For an arbitrary form α ∈ Ωkc (X × R), compute d(P (α)) and P (dα), and show
that α − i(j(α)) = ±dP (α) ± P (dα).
For the second and third steps, we will show that an arbitrary n-form α is cohomologous
to a finite sum of bump forms, where two closed forms are said to be cohomologous if they
differ by an exact form. (That is, if they represent the same class in cohomology.) We then
show that any bump form is cohomologous to a multiple of a specific bump form, which
then generates H n (X). If X is not orientable, we will then show that this generator is
cohomologous to minus itself, and hence is exact.
Pick a partition of unity {ρ_i} subordinate to a collection of coordinate patches. Any n-form α can then be written as Σ_i ρ_i α. Since X is compact, this is a finite sum. Now note that α_i = ρ_i α is compactly supported in the image of a parametrization ψ_i : U_i → X. Let φ_i be a bump form on U_i of total integral 1 localized around a point a_i, and let c_i = ∫_{U_i} ψ_i^* α_i, where we are using the canonical orientation of R^n. Then ψ_i^* α_i − c_i φ_i is d of a compactly supported (n − 1)-form on U_i. This implies that α_i − c_i (ψ_i^{−1})^* φ_i is d of an (n − 1)-form that is compactly supported on ψ_i(U_i), and can thus be extended (by zero) to a form on all of X. Thus α_i is cohomologous to c_i (ψ_i^{−1})^* φ_i, that is, to a bump form localized near p_i = ψ_i(a_i).
The choice of ai was arbitrary, and the precise formula for the bump form was arbitrary,
so bump forms with the same integral supported near different points of the same coordinate
patch are always cohomologous. Moreover, this means that if φ and φ′ are bump n-forms supported near any two points p and p′ of X, then φ is cohomologous to a multiple of φ′. We
just find a sequence of coordinate patches Vi = ψi (Ui ) and points p0 = p ∈ V1 , p1 ∈ V1 ∩ V2 ,
p2 ∈ V2 ∩V3 , etc. A bump form near p0 is cohomologous to a multiple of a bump form near p1
since both points are in V1 . But that is cohomologous to a multiple of a bump form near p2
since both p1 and p2 are in V2 , etc. Thus α is cohomologous to a sum of bump forms, which
is in turn cohomologous to a single bump form centered at an arbitrarily chosen location.
This shows that H n (X) is at most 1-dimensional.
Now suppose that X is not orientable. Then we can find a sequence of coordinate
neighborhoods, V1 , . . . , VN with VN = V1 , such that there are an odd number of orientation
reversing transition functions. Starting with a bump form at p, we apply the argument of the
previous paragraph, keeping track of integrals and signs, and ending up with a bump form at
p′ = p that is minus the original bump form. Thus twice the bump form is cohomologous to
zero, and the bump form itself is cohomologous to zero. Since this form generated H n (X),
H n (X) = 0.
The upshot of this theorem is not only the calculation H^n(X) = R when X is connected, compact and oriented, but also the identification of a generator of H^n(X), namely any form whose total integral is 1. This can be a bump form, or a multiple of the (n-dimensional) volume form, or any of infinitely many other possibilities. The important thing is that integration gives an isomorphism
∫_X : H^n(X) → R.
3. Poincare duality
Suppose that X is an oriented n-manifold (not necessarily compact and not necessarily
connected), that α is a closed, compactly supported k-form, and that β is a closed (n − k)-form. Then α ∧ β is compactly supported, insofar as α is compactly supported. Integration gives a map
∫_X : H^k_c(X) × H^{n−k}(X) → R
(28)    ([α], [β]) ↦ ∫_X α ∧ β.
Exercise 2: Suppose that α and α′ represent the same class in H^k_c(X) and that β and β′ represent the same class in H^{n−k}(X). Show that ∫_X α′ ∧ β′ = ∫_X α ∧ β.
This implies that integration gives a map from H^{n−k}(X) to the dual space of H^k_c(X). Poincare duality is the assertion that this map is an isomorphism, at least when X has a finite good cover.
Proof. A complete proof is beyond the scope of these notes. A complete presentation
can be found in Bott and Tu. Here is a sketch.
• The theorem is true for a single coordinate patch, since H n−k (Rn ) and Hck (Rn )∗ are
both R when k = n and 0 otherwise, with integration relating the two as above.
• We construct a Mayer-Vietoris sequence for compactly supported cohomology. With regard to inclusions, this runs in the opposite direction from the usual Mayer-Vietoris sequence:
· · · → H^k_c(U ∩ V ) → H^k_c(U) ⊕ H^k_c(V ) → H^k_c(U ∪ V ) → H^{k+1}_c(U ∩ V ) → · · ·
• Looking at dual spaces, we obtain an exact sequence
· · · → H^k_c(U ∪ V )^∗ → H^k_c(U)^∗ ⊕ H^k_c(V )^∗ → H^k_c(U ∩ V )^∗ → H^{k−1}_c(U ∪ V )^∗ → · · ·
With respect to inclusions, this goes in the same direction as the usual Mayer-
Vietoris sequence (going from the cohomology of U ∪ V to that of U and V to
that of U ∩ V to that of U ∪ V , etc.), only with the index k decreasing at the
U ∩ V → U ∪ V stage instead of increasing. However, this is precisely what we
need for Poincare duality, since decreasing the dimension k is the same thing as
increasing the codimension n − k.
• The five lemma in homological algebra says that if you have a commutative diagram
A −−−→ B −−−→ C −−−→ D −−−→ E
↓α      ↓β      ↓γ      ↓δ      ↓ε
A′ −−−→ B′ −−−→ C′ −−−→ D′ −−−→ E′
with the rows exact, and if α, β, δ and ε are isomorphisms, then so is γ. Comparing
the Mayer-Vietoris sequences for H n−k and for (Hck )∗ , the five lemma says that,
if for all k integration gives isomorphisms between H n−k (U) and Hck (U)∗ , between
H n−k (V ) and Hck (V )∗ , and between H n−k (U ∩ V ) and Hck (U ∩ V )∗ , then integration
also induces isomorphisms between H n−k (U ∪ V ) and Hck (U ∪ V )∗ .
• We then use the Mayer-Vietoris argument, proceeding by induction on the cardi-
nality of a good cover.
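For a concrete compact example (where H_c = H), take the torus T^2 = R^2/Z^2 and grant the standard computation that H^1(T^2) ≅ R^2, with generators [dx] and [dy]. Then ∫_{T^2} dx ∧ dy = 1 and ∫_{T^2} dx ∧ dx = ∫_{T^2} dy ∧ dy = 0, so in the basis {[dx], [dy]} the pairing of H^1(T^2) with itself is given by the matrix with rows (0, 1) and (−1, 0). This matrix is invertible, so the pairing identifies H^1(T^2) with its own dual space, exactly as Poincare duality predicts.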
Exercise 6: Suppose that X is compact, oriented, connected and simply connected. Show
that H^1(X) = 0, and hence that H^{n−1}(X) = 0. [Side note: The assumption of orientability is superfluous, since all simply connected manifolds are orientable.]
This exercise shows that a compact, connected, simply connected 3-manifold has the
same cohomology groups as S 3 . The recently-proved Poincare conjecture asserts that all
such manifolds are actually homeomorphic to S 3 . There are plenty of other (not simply
connected!) compact 3-manifolds whose cohomology groups are also the same as those of S^3,
namely H 0 = H 3 = R and H 1 = H 2 = 0. These are called rational homology spheres, and
come up quite a bit in gauge theory.
4. Degrees of mappings
Suppose that X and Y are both compact, oriented n-manifolds. Then H n (X) and H n (Y )
are both naturally identified with the real numbers. If f : X → Y is a smooth map, then
the pullback f^♯ : H^n(Y) → H^n(X) is just multiplication by a real number. We call this
number the degree of f . Since homotopic maps induce the same map on cohomology, we
immediately deduce that homotopic maps have the same degree.
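For example, let f : S^1 → S^1 be the map that wraps the circle k times around itself (in the angular coordinate, θ ↦ kθ), and take α = dθ/2π, the usual angular form, which is closed but not exact and has total integral 1. Then f^*α = k α, so ∫_{S^1} f^*α = k and the degree of f is k. In particular, circle maps with different winding numbers cannot be homotopic.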
However, we already have a definition of degree from intersection theory! Not surprisingly,
the two definitions agree.
Theorem 4.1. Let p be a regular value of f, and let D be the number of preimages of p, counted with sign. If α ∈ Ω^n(Y), then ∫_X f^*α = D ∫_Y α.
Proof. First suppose that α is a bump form localized in such a small neighborhood V of p that the stack-of-records theorem applies. That is, f^{−1}(V) is a disjoint collection of sets U_i such that f restricted to U_i is a diffeomorphism to V. But then ∫_{U_i} f^*α = ± ∫_V α = ± ∫_Y α, where the ± is the sign of det(df) at the preimage of p. But then
∫_X f^*α = Σ_i ∫_{U_i} f^*α = Σ_i sign(det(df)) ∫_V α = D ∫_Y α.
Now suppose that α is an arbitrary n-form. Then α is cohomologous to a bump form α′
localized around p, and f ∗ α is cohomologous to f ∗ (α′ ), so
∫_X f^*α = ∫_X f^*(α′) = D ∫_Y α′ = D ∫_Y α.
map. [Note: Some authors use −d~v instead of d~v. This amounts to just flipping the sign of
~v .] The eigenvalues of S are called the principal curvatures of X at x, and the determinant
is called the Gauss curvature, and is denoted K(x).
Let ω2 be the area 2-form on S^2 (explicitly: ω2 = x dy ∧ dz + y dz ∧ dx + z dx ∧ dy). The
pullback ~n∗ ω2 is then K(x) times the area form on X.
Exercise 7: Show that the last sentence is true regardless of the orientation of X.
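As a check of the normalization, in spherical coordinates x = sin φ cos θ, y = sin φ sin θ, z = cos φ, a direct computation shows that ω2 restricts to sin φ dφ ∧ dθ on S^2, so its total integral is ∫_0^{2π} ∫_0^{π} sin φ dφ dθ = 4π, the area of the sphere. This is the normalization used in the proof of Gauss-Bonnet below.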
The following three exercises are designed to give you some intuition on what K means.
Exercise 8: Let X be the hyperbolic paraboloid z = x2 − y 2 . Show that K at the origin
is negative, regardless of which orientation you pick for X. In other words, show that ~n is
orientation reversing near 0.
Exercise 9: Let X be the elliptic paraboloid z = x2 + y 2. Show that K at the origin is
positive.
Exercise 10: Now let X be a general paraboloid z = ax2 + bxy + cy 2, where a and b and
c are real numbers. Compute K at the origin. [Or for a simpler exercise, do this for b = 0
and arbitrary a and c.]
Theorem 4.2 (Gauss-Bonnet). ∫_X K(x) dA = 2πχ(X).
Proof. Since the area of S^2 is 4π, we just have to show that the degree of ~v is half
the Euler characteristic of X. First we vary the immersion to put our Riemann surface of
genus g in the position used to illustrate the “hot fudge” map. This gives a homotopy of the
original map ~v , but preserves the degree. Then (0, 0, 1) is a regular value of ~n, and has g + 1
preimages, of which one (at the top) is orientation preserving and the rest are orientation
reversing. Thus the degree is 1 − g = χ(X)/2.
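As a sanity check, for the round unit sphere the Gauss map is the identity, so K ≡ 1 and ∫_{S^2} K dA is just the area 4π, while 2πχ(S^2) = 2π · 2 = 4π, as the theorem requires.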
This construction, and this theorem, extends to oriented hypersurfaces in higher dimen-
sions. If X is a compact oriented n-manifold in Rn+1 , we can define the oriented normal vector
~v (x) ∈ S n , so we still have a map ~v : X → S n and a shape operator S : Tx (X) → Tx (X). The
shape operator is always self-adjoint, and so is diagonalizable with real eigenvalues, called
the principal curvatures of X, and orthogonal eigenvectors, called the principal directions.
Let ωn be the volume form on S n , and we write ~v ∗ ωn = K(x)dV , where dV is the volume
form on X. As before, K(x) is called the Gauss curvature of X, and is the determinant of
S.
This leads to a higher-dimensional Gauss-Bonnet theorem: if n is even, then ∫_X K(x) dV = (γ_n/2) χ(X), where γ_n denotes the volume of S^n.
Proof. As with the 2-dimensional theorem, the key is showing that the degree of ~v is half the Euler characteristic of X. Instead of deforming the immersion of X into a standard form, we'll use the Hopf degree formula.
Pick a point ~a ∈ S n such that ~a and −~a are both regular values of ~v. Then the total
number of preimages of ±~a, counted with sign, is twice the degree of ~v. We now construct a
vector field ~w on X, where ~w(x) is the projection of ~a onto T_x(X). This is zero precisely where the normal vector is ±~a.
Exercise 11: Show that the index of ~w at such a point is the sign of det(d~v).
By the Hopf degree formula, the Euler characteristic of X is then twice the degree of ~v. Since ∫_X K(x) dV = ∫_X ~v^* ω_n = (deg ~v) · γ_n = (γ_n/2) χ(X), the theorem follows.
If X is odd-dimensional, then χ(X) = 0, but ∫_X K(x) dV need not be zero. A simple counterexample is S^n itself.
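Indeed, for the unit sphere S^n ⊂ R^{n+1} the Gauss map is the identity map of S^n, so K ≡ 1 and ∫_{S^n} K dV = γ_n ≠ 0, even though χ(S^n) = 0 for n odd.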
Another application of degrees and differential forms comes up in knot theory. If X
and Y are non-intersecting oriented loops in R^3, then there is a map f : X × Y → S^2,
f (x, y) = (y − x)/|y − x|. The linking number of X and Y is the degree of this map. This can
be computed in several ways. One is to pick a point in S 2 , say (0,0,1), and count preimages.
These are the instances where a point in Y lies directly above a point in X. In other words,
the linking number counts crossings with sign between the knot diagram for X and the knot
diagram for Y . However, it can also be expressed via an integral formula.
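For example, take X to be the unit circle in the xy-plane centered at the origin and Y the unit circle in the xz-plane centered at (1, 0, 0). Then exactly one point of Y, namely (1, 0, 1), lies directly above a point of X (namely (1, 0, 0)), and one can check that (0, 0, 1) is a regular value there, so the linking number of this pair is ±1, with the sign depending on the chosen orientations.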
Exercise 12: Suppose that γ1 and γ2 are non-intersecting loops in R^3, where each γi is,
strictly speaking, a map from S 1 to R3 . Then f is a map from S 1 × S 1 to S 2 . Compute f ∗ ω2
in terms of γ1 (s), γ2 (t) and their derivatives. By integrating this, derive an integral formula
for the linking number.