2019 Book YetAnotherIntroductionToDarkMatter
Martin Bauer
Tilman Plehn
Yet Another
Introduction to
Dark Matter
The Particle Physics Approach
Lecture Notes in Physics
Volume 959
Founding Editors
Wolf Beiglböck, Heidelberg, Germany
Jürgen Ehlers, Potsdam, Germany
Klaus Hepp, Zürich, Switzerland
Hans-Arwed Weidenmüller, Heidelberg, Germany
Series Editors
Matthias Bartelmann, Heidelberg, Germany
Peter Hänggi, Augsburg, Germany
Morten Hjorth-Jensen, Oslo, Norway
Maciej Lewenstein, Barcelona, Spain
Angel Rubio, Hamburg, Germany
Manfred Salmhofer, Heidelberg, Germany
Wolfgang Schleich, Ulm, Germany
Stefan Theisen, Potsdam, Germany
Dieter Vollhardt, Augsburg, Germany
James D. Wells, Ann Arbor, MI, USA
Gary P. Zank, Huntsville, AL, USA
The Lecture Notes in Physics
The series Lecture Notes in Physics (LNP), founded in 1969, reports new developments in physics research and teaching, quickly and informally, but with a high quality and the explicit aim to summarize and communicate current knowledge in an accessible way. Books published in this series are conceived as bridging material between advanced graduate textbooks and the forefront of research and to serve three purposes:
• to be a compact and modern up-to-date source of reference on a well-defined
topic
• to serve as an accessible introduction to the field to postgraduate students and
nonspecialist researchers from related areas
• to be a source of advanced teaching material for specialized seminars, courses
and schools
Both monographs and multi-author volumes will be considered for publication.
Edited volumes should, however, consist of a very limited number of contributions
only. Proceedings will not be considered for LNP.
Volumes published in LNP are disseminated both in print and in electronic formats, the electronic archive being available at springerlink.com. The series content is indexed, abstracted and referenced by many abstracting and information services, bibliographic networks, subscription agencies, library networks, and consortia.
Proposals should be sent to a member of the Editorial Board, or directly to the
managing editor at Springer:
Lisa Scalone
Springer Nature
Physics Editorial Department
Tiergartenstrasse 17
69121 Heidelberg, Germany
Lisa.Scalone@springernature.com
Martin Bauer
Institut für Theoretische Physik
Universität Heidelberg
Heidelberg, Germany

Tilman Plehn
Institut für Theoretische Physik
Universität Heidelberg
Heidelberg, Germany
This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
Finally, the literature listed at the end of the notes is not meant to cite original or relevant research papers. Instead, it gives examples of reviews or advanced lecture notes supplementing our lecture notes in different directions. Going through some
of these mostly introductory papers will be instructive and fun once the basics have
been covered by these lecture notes.
TP would like to thank many friends who have taught him dark matter, starting with
the always-inspiring Dan Hooper. Dan also introduced him to deep-fried cheese
curds and to the best ribs in Chicago. Tim Tait was of great help in at least two
ways: for years he showed us that it is fun to work on dark matter even as a trained
collider physicist, and then he answered every single email during the preparation of
these notes. Our experimental co-lecturer Teresa Marrodan Undagoitia showed not
only our students but also us how inspiring dark matter physics can be. As coauthors
Joe Bramante, Adam Martin, and Paddy Fox gave us a great course on dark matter
physics while we were writing these papers. Pedro Ruiz-Femenia was extremely
helpful explaining the field theory behind the Sommerfeld enhancement to us. Jörg
Jäckel for a long time and over many coffees tried to convince everybody that the
axion is a great dark matter candidate. Teaching and discussing with Björn-Malte
Schäfer was an excellent course on how thermodynamics is actually useful. Finally,
there are many people who helped us with valuable advice while we prepared this
course, like John Beacom, Martin Schmaltz, and Felix Kahlhöfer, and people who
commented on the notes, like Elias Bernreuther, Johann Brehmer, Michael Baker,
Anja Butter, Björn Eichmann, Ayres Freitas, Jan Horak, Michael Ratz, or Michael
Schmidt.
Chapter 1
History of the Universe
When we study the history of the Universe with a focus on the matter content of the
Universe, we have to define three key parameters:
– the Hubble constant H0 which describes the expansion of the Universe. Two
objects anywhere in the Universe move away from each other with a velocity
proportional to their current distance r. The proportionality constant is defined
through Hubble’s law
$$H_0 := \frac{\dot r}{r} \approx 70\,\frac{\mathrm{km}}{\mathrm{s\,Mpc}} = 70\,\frac{10^5\,\mathrm{cm}}{3.1\cdot 10^{24}\,\mathrm{cm\,s}} = 2.3\cdot 10^{-18}\,\frac{1}{\mathrm{s}} = 2.3\cdot 10^{-18}\cdot 6.6\cdot 10^{-16}\,\mathrm{eV} = 1.5\cdot 10^{-33}\,\mathrm{eV}. \tag{1.1}$$
Throughout these lecture notes we will use these high-energy units with h̄ = c =
1, eventually adding kB = 1. Because H0 is not at all a number of order one we
can replace H0 with the dimensionless ratio
$$h := \frac{H_0}{100\,\dfrac{\mathrm{km}}{\mathrm{s\,Mpc}}} \approx 0.7. \tag{1.2}$$
The Hubble ‘constant’ H0 is defined at the current point in time, unless explicitly
stated otherwise.
– the cosmological constant Λ, which describes most of the energy content of the
Universe and which is defined through the gravitational Einstein-Hilbert action
$$S_\mathrm{EH} \equiv \frac{M_\mathrm{Pl}^2}{2} \int d^4x\, \sqrt{-g}\, \left( R - 2\Lambda \right). \tag{1.3}$$
It is most convenient to also combine the Hubble constant and the cosmological
constant to a dimensionless parameter
$$\Omega_\Lambda := \frac{\Lambda}{3 H_0^2}. \tag{1.5}$$
– the matter content of the Universe which changes with time. As a mass density
we can define it as ρm , but as for our other two key parameters we switch to the
dimensionless parameters
$$\Omega_m := \frac{\rho_m}{\rho_c} \qquad\text{and}\qquad \Omega_r := \frac{\rho_r}{\rho_c}. \tag{1.6}$$
to Hubble’s law. We start by computing the velocity vesc a massive particle has
to have to escape a gravitational field. Classically, it is defined by equal kinetic
energy and gravitational binding energy for a test mass m at radius r,
4πr 3 mr 3
2 Gm ρc ρ
2 c
mvesc ! GmM 3 6MPl 1
= = vesc = vesc with G = 2
2 r 8πMPl
H0 H0
Eq. (1.1) ! 1
⇔ H03 r 3 = 3
vesc = H ρ r3
2 0 c
3MPl
⇔ ρc = 3MPl
2
H02 = (2.5 · 10−3 eV)4 . (1.8)
We give the numerical value based on the current Hubble expansion rate. For a more detailed account of the history of the Universe and a more solid derivation of ρc we will resort to the theory of general relativity in the next section.
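The unit juggling in Eqs. (1.1) and (1.8) is easy to check numerically. A minimal Python sketch, assuming ħ = 6.58·10⁻¹⁶ eV s, the value 1 Mpc = 3.1·10²⁴ cm quoted above, and a reduced Planck mass of 2.4·10²⁷ eV; the function names are ours:

```python
# Cross-check of Eqs. (1.1) and (1.8) in natural units (hbar = c = 1):
# 1/s corresponds to hbar = 6.58e-16 eV.
HBAR_EV_S = 6.58e-16    # eV s
MPC_IN_CM = 3.1e24      # cm, as quoted in the text
M_PL_EV = 2.4e27        # reduced Planck mass in eV

def hubble_in_ev(h0_km_s_mpc: float = 70.0) -> float:
    """Convert the Hubble constant from km/(s Mpc) to eV, as in Eq. (1.1)."""
    h0_per_s = h0_km_s_mpc * 1e5 / MPC_IN_CM   # km -> cm, then divide by Mpc
    return h0_per_s * HBAR_EV_S

def critical_density_scale() -> float:
    """Fourth root of rho_c = 3 M_Pl^2 H_0^2, Eq. (1.8), in eV."""
    h0 = hubble_in_ev()
    return (3.0 * M_PL_EV**2 * h0**2) ** 0.25

print(f"H0          = {hubble_in_ev():.2e} eV")           # ~1.5e-33 eV
print(f"rho_c^(1/4) = {critical_density_scale():.2e} eV")  # ~2.5e-3 eV
```

Both quoted values of the text come out to within a few per cent.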
$$ds^2 = dt^2 - dr^2 - r^2 d\theta^2 - r^2 \sin^2\theta\, d\phi^2 = \begin{pmatrix} dt \\ dr \\ r\, d\theta \\ r \sin\theta\, d\phi \end{pmatrix}^{\!T} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} dt \\ dr \\ r\, d\theta \\ r \sin\theta\, d\phi \end{pmatrix}. \tag{1.9}$$
The diagonal matrix defines the Minkowski metric, which we know from special
relativity or from the covariant notation of electrodynamics. We can generalize this
line element or metric to allow for a modified space-time, introducing a scale factor
a^2 as

$$ds^2 = dt^2 - \left( \frac{dr^2}{1 - \dfrac{r^2}{a^2}} + r^2 d\theta^2 + r^2 \sin^2\theta\, d\phi^2 \right) = dt^2 - a^2 \left( \frac{d\!\left( \dfrac{r}{a} \right)^{\!2}}{1 - \dfrac{r^2}{a^2}} + \frac{r^2}{a^2}\, d\theta^2 + \frac{r^2}{a^2} \sin^2\theta\, d\phi^2 \right)$$
$$= dt^2 - a^2 \left( \frac{dr^2}{1 - r^2} + r^2 d\theta^2 + r^2 \sin^2\theta\, d\phi^2 \right) \qquad\text{with}\quad \frac{r}{a} \to r. \tag{1.10}$$
We define a to have units of length, or inverse energy, and the last form of r to be dimensionless. In this derivation we implicitly assume a positive curvature through a^2 > 0. However, this does not have to be the case. We can allow for a free sign of the scale factor by introducing the free curvature k, with the possible values k = −1, 0, 1 for negatively curved, flat, or positively curved space. It enters the original form in Eq. (1.10) as
$$ds^2 = dt^2 - a^2 \left( \frac{dr^2}{1 - kr^2} + r^2 d\theta^2 + r^2 \sin^2\theta\, d\phi^2 \right). \tag{1.11}$$
At least for constant a this looks like a metric with a modified distance r(t) → r(t) a. It is also clear that the choice k = 0 switches off the effect of 1/a^2, because we can combine a and r to arrive at the original Minkowski metric.
Finally, there is really no reason to assume that the scale factor is constant with
time. In general, the history of the Universe has to allow for a time-dependent scale
factor a(t), defining the line element or metric as
$$ds^2 = dt^2 - a(t)^2 \left( \frac{dr^2}{1 - kr^2} + r^2 d\theta^2 + r^2 \sin^2\theta\, d\phi^2 \right). \tag{1.12}$$
From Eq. (1.9) we can read off the corresponding metric including the scale factor,
$$g_{\mu\nu} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -\dfrac{a^2}{1 - kr^2} & 0 & 0 \\ 0 & 0 & -a^2 & 0 \\ 0 & 0 & 0 & -a^2 \end{pmatrix}. \tag{1.13}$$
Now, the time-dependent scale factor a(t) indicates a motion of objects in the Universe, r(t) → a(t) r(t). If we look at objects with no relative motion except for the expanding Universe, we can express Hubble's law given in Eq. (1.1) in terms of

$$r(t) = a(t)\, r \quad\Leftrightarrow\quad \dot r(t) = \dot a(t)\, r \stackrel{!}{=} H(t)\, r(t) = H(t)\, a(t)\, r \quad\Leftrightarrow\quad H(t) = \frac{\dot a(t)}{a(t)}. \tag{1.14}$$
This relation reads like a linearized treatment of a(t), because it depends only on the
first derivative ȧ(t). However, higher derivatives of a(t) appear through a possible
time dependence of the Hubble constant H (t). From the above relation we can learn
another, basic aspect of cosmology: we can describe the evolution of the Universe in
terms of
1. time t, which is fundamental, but hard to directly observe;
2. the Hubble constant H (t) describing the expansion of the Universe;
3. the scale factor a(t) entering the distance metric;
4. the temperature T (t), which we will use from Sect. 1.2 on.
Which of these concepts we prefer depends on the kind of observations we want to
link. Clearly, all of them should be interchangeable. For now we will continue with
time.
Assuming the general metric of Eq. (1.12) we can solve Einstein’s equation
including the coupling to matter
$$R_{\mu\nu}(t) - \frac{1}{2}\, g_{\mu\nu}(t)\, R(t) + \Lambda(t)\, g_{\mu\nu}(t) = \frac{T_{\mu\nu}(t)}{M_\mathrm{Pl}^2}. \tag{1.15}$$
The energy-momentum tensor includes the energy density ρt = T00 and the corre-
sponding pressure p. The latter is defined as the direction-independent contribution
to the diagonal entries Tjj = pj of the energy-momentum tensor. The Ricci tensor
Rμν and Ricci scalar R = g μν Rμν are defined in terms of the metric; their explicit
forms are one of the main topics of a lecture on general relativity. In terms of the
scale factor the Ricci tensor reads
$$R_{00}(t) = -\frac{3 \ddot a(t)}{a(t)} \qquad\text{and}\qquad R_{ij}(t) = \delta_{ij} \left( 2 \dot a(t)^2 + a(t)\, \ddot a(t) \right). \tag{1.16}$$
The 00-component then gives the Friedmann equation, in which the cosmological constant acts as a contribution to the total energy density ρt,

$$H(t)^2 + \frac{k}{a(t)^2} = \frac{\rho_t(t)}{3 M_\mathrm{Pl}^2}\,, \qquad \rho_\Lambda(t) := \Lambda(t)\, M_\mathrm{Pl}^2 = 3 H_0^2 M_\mathrm{Pl}^2\, \Omega_\Lambda(t), \tag{1.17}$$
with k defined in Eq. (1.11). A similar, second condition from the symmetry of the
energy-momentum tensor and its derivatives reads
If we use the quasi-linear relation Eq. (1.14) and define the time-dependent critical total density of the Universe following Eq. (1.8), we can write the Friedmann equation as

$$H(t)^2 + \frac{k}{a(t)^2} = \frac{\rho_t(t)}{3 M_\mathrm{Pl}^2} \quad\Leftrightarrow\quad 1 + \frac{k}{H(t)^2 a(t)^2} = \frac{\rho_t(t)}{\rho_c(t)} =: \Omega_t(t) \qquad\text{with}\quad \rho_c(t) := 3 H(t)^2 M_\mathrm{Pl}^2. \tag{1.19}$$
This is the actual definition of the critical density ρc (t). It means that k is determined
by the time-dependent total energy density of the Universe,
This expression holds at all times t, including today, t0 . For Ωt > 1 the curvature is
positive, k > 0, which means that the boundaries of the Universe are well defined.
Below the critical density the curvature is negative. In passing we note that we can
identify
The two separate equations (Eqs. (1.17) and (1.18)) include not only the energy
and matter densities, but also the pressure. Combining them we find
It is crucial for our understanding of the matter content of the Universe. If we can
measure w it will tell us what the energy or matter density of the Universe consists
of.
Following the logic of describing the Universe in terms of the variable scale
factor a(t), we can replace the quasi-linear description in Eq. (1.14) with a full
Taylor series for a(t) around the current value a0 and in terms of H0 . This will
allow us to see the drastic effects of the different equations of state in Eq. (1.23),
$$a(t) - a_0 = \dot a(t_0)\, (t - t_0) + \frac{1}{2}\, \ddot a(t_0)\, (t - t_0)^2 + \mathcal{O}\!\left( (t - t_0)^3 \right)$$
$$\equiv a_0 H_0\, (t - t_0) - \frac{1}{2}\, a_0\, q_0 H_0^2\, (t - t_0)^2 + \mathcal{O}\!\left( (t - t_0)^3 \right), \tag{1.24}$$
implicitly defining q0 . The units are correct, because the Hubble constant defined in
Eq. (1.1) is measured in energy. The pre-factors in the quadratic term are historic,
as is the name deceleration parameter for q0. Combined with our former results we find for the quadratic term

$$q_0 = \frac{1}{2} \sum_j \Omega_j \left( 1 + 3 w_j \right). \tag{1.25}$$
The sum includes the three components contributing to the total energy density
of the Universe, as listed in Eq. (1.31). Negative values of w corresponding to a
Universe dominated by its vacuum energy can lead to negative values of q0 and in
turn to an accelerated expansion beyond the linear Hubble law. This is the basis for
a fundamental feature in the evolution of the Universe, called inflation.
To be able to track the evolution of the Universe in terms of the scale factor a(t)
rather than time, we next compute the time dependence of a(t). As a starting point,
the Friedmann equation gives us a relation between a(t) and ρ(t). What we need is a
relation of ρ and t, or alternatively a second relation between a(t) and ρ(t). Because
we skip as much of general relativity as possible we leave it as an exercise to show
that from the vanishing covariant derivative of the energy-momentum tensor, which
gives rise to Eq. (1.18), we can also extract the time dependence of the energy and matter densities,

$$\frac{d}{dt} \left( \rho_j\, a^3 \right) = -p_j\, \frac{d}{dt}\, a^3. \tag{1.26}$$
It relates the energy inside the volume a 3 to the work through the pressure pj . From
this conservation law we can extract the a-dependence of the energy and matter
densities
This functional dependence is not yet what we want. To compute the time
dependence of the scale factor a(t) we use a power-law ansatz for a(t) to find
We can translate the result for a(t) ∝ t β into the time-dependent Hubble constant
$$H(t) = \frac{\dot a(t)}{a(t)} \sim \frac{\beta\, t^{\beta - 1}}{t^\beta} = \frac{\beta}{t} = \frac{2}{3 + 3 w_j}\, \frac{1}{t}. \tag{1.29}$$
The problem with these formulas is that the power-law ansatz and the resulting form of H(t) obviously fail for the vacuum energy with w = −1. For an energy density only based on vacuum energy and neglecting any curvature, k ≡ 0, in the absence
Combining this result and Eq. (1.28), the functional dependence of a(t) reads
$$a(t) \sim t^{2/(3+3w_j)} = \begin{cases} t^{2/3} & \text{non-relativistic matter} \\ t^{1/2} & \text{relativistic radiation} \\ e^{\sqrt{\Lambda(t)/3}\, t} & \text{vacuum energy.} \end{cases} \tag{1.31}$$

$$H(t) \sim \frac{2}{3 + 3 w_j}\, \frac{1}{t} = \begin{cases} \dfrac{2}{3t} & \text{non-relativistic matter} \\[1ex] \dfrac{1}{2t} & \text{relativistic radiation} \\[1ex] \sqrt{\dfrac{\Lambda(t)}{3}} & \text{vacuum energy.} \end{cases} \tag{1.32}$$
From the above list we have now understood the relation between the time t, the
scale factor a(t), and the Hubble constant H (t). An interesting aspect is that for
the vacuum energy case w = −1 the change in the scale factor and with it the
expansion of the Universe does not follow a power law, but an exponential law,
defining an inflationary expansion. What is missing from our list at the beginning
of this section is the temperature as the parameter describing the evolution of the
Universe. Here we need to quote a thermodynamic result, namely that for constant
entropy¹

$$a(T) \propto \frac{1}{T}. \tag{1.33}$$
This relation is correct if the degrees of freedom describing the energy density of the Universe do not change. The easy reference point is a0 = 1 today. We will use
an improved scaling relation in Chap. 3.
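The power-law solutions above can be checked numerically. A small sketch, assuming only the ansatz a(t) ∝ t^(2/(3+3w)) from Eq. (1.28) and comparing a finite-difference ȧ/a against Eq. (1.32):

```python
# Consistency check of Eqs. (1.29)-(1.32): for an equation-of-state parameter w,
# a(t) ~ t^(2/(3+3w)) implies H = adot/a = 2/((3+3w) t), i.e. 2/(3t) for
# matter (w = 0) and 1/(2t) for radiation (w = 1/3).
def hubble_from_ansatz(t: float, w: float, eps: float = 1e-6) -> float:
    beta = 2.0 / (3.0 + 3.0 * w)
    a = lambda tt: tt ** beta
    # central finite difference for adot, then H = adot / a
    return (a(t + eps) - a(t - eps)) / (2.0 * eps) / a(t)

t = 2.0
print(hubble_from_ansatz(t, 0.0), 2.0 / (3.0 * t))          # matter
print(hubble_from_ansatz(t, 1.0 / 3.0), 1.0 / (2.0 * t))    # radiation
```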
Finally, we can combine several aspects described in these notes and talk
about distance measures and their link to (i) the curved space-time metric, (ii) the
expansion of the Universe, and (iii) the energy and matter densities. We will need it
¹ This is the only thermodynamic result which we will (repeatedly) use in these notes.
to discuss the cosmic microwave background in Sect. 1.4. As a first step, we compute the apparent distance along a line of sight, defined by dφ = 0 = dθ. This is the path of a traveling photon. Based on the time-dependent curved space-time metric of Eq. (1.12) we find

$$0 \stackrel{!}{=} ds^2 = dt^2 - a(t)^2\, \frac{dr^2}{1 - kr^2} \quad\Leftrightarrow\quad dt = a(t)\, \frac{dr}{\sqrt{1 - kr^2}}. \tag{1.34}$$
For the definition of the co-moving distance we integrate along this path,
$$\frac{d^c}{a_0} := \int \frac{dr}{\sqrt{1 - kr^2}} = \int \frac{dt}{a(t)} = \int da\, \frac{1}{\dot a(t)\, a(t)}. \tag{1.35}$$
The distance measure we obtain from integrating dr in the presence of the curvature k is called the co-moving distance. It is the distance a photon traveling at the speed of light can reach in a given time. We can evaluate the integrand using the Friedmann equation, Eq. (1.17), and the relation ρ a^{3(1+w)} = const,
$$\dot a(t)^2 = a(t)^2\, \frac{\rho_t(t)}{3 M_\mathrm{Pl}^2} - k$$
$$\Rightarrow\quad \frac{1}{\dot a(t)\, a(t)} = \frac{1}{H_0 \left[ \Omega_m(t_0)\, a_0^3\, a(t) + \Omega_r(t_0)\, a_0^4 + \Omega_\Lambda\, a(t)^4 - \left( \Omega_t(t_0) - 1 \right) a_0^2\, a(t)^2 \right]^{1/2}}. \tag{1.36}$$
Here we assume (and confirm later) that today Ωr(t0) can be neglected and hence Ωt(t0) = Ωm(t0) + ΩΛ. What is important to remember is that looking back, the variable scale factor is always a(t) < a0. The integrand depends only on the mass and energy densities describing today's Universe, as well as today's Hubble constant. Note that the co-moving distance integrates the effect of time passing while we move along the light cone in Minkowski space. It would therefore be well suited, for example, to see which regions of the Universe can be causally connected.
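The co-moving distance integral can be evaluated with a few lines of code. A minimal sketch, assuming a flat Universe (k = 0, a0 = 1), negligible radiation, and the illustrative values Ωm = 0.3, ΩΛ = 0.7; the result comes out in units of the Hubble radius 1/H0:

```python
# Numerical version of Eqs. (1.35)-(1.36) for a flat Universe:
# d^c = Int da / sqrt(Omega_m a + Omega_L a^4), in units of 1/H0.
def comoving_distance(a_emit: float, omega_m: float = 0.3,
                      omega_l: float = 0.7, steps: int = 2000) -> float:
    """Trapezoidal integral of da/(adot a) from a_emit to a0 = 1."""
    if a_emit >= 1.0:
        return 0.0
    f = lambda a: (omega_m * a + omega_l * a**4) ** -0.5
    h = (1.0 - a_emit) / steps
    total = 0.5 * (f(a_emit) + f(1.0))
    total += sum(f(a_emit + i * h) for i in range(1, steps))
    return total * h

print(comoving_distance(0.5))   # distance to a source observed at a = 1/2
```

For a source seen at half the current scale factor the distance comes out a bit below one Hubble radius.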
Another distance measure based on Eq. (1.11) assumes the same line of sight dφ = 0 = dθ, but also a synchronized time at both ends of the measurement, dt = 0. This defines a purely geometric, instantaneous distance of two points in space,

$$d\theta = d\phi = dt = 0 \quad\Rightarrow\quad ds(t) = -a(t)\, \frac{dr}{\sqrt{1 - kr^2}} \qquad\text{with}\quad k \stackrel{\text{Eq. (1.20)}}{=} H_0^2 a_0^2 \left( \Omega_t(t_0) - 1 \right)$$
$$\Rightarrow\quad d_A^c(t) := \int_d^0 ds = -a(t) \int_d^0 \frac{dr}{\sqrt{1 - kr^2}} = \begin{cases} \dfrac{a(t)}{\sqrt{k}}\, \arcsin\!\left( \sqrt{k}\, d \right) & k > 0 \\[1ex] a(t)\, d & k = 0 \\[1ex] \dfrac{a(t)}{\sqrt{|k|}}\, \operatorname{arcsinh}\!\left( \sqrt{|k|}\, d \right) & k < 0\,. \end{cases} \tag{1.38}$$
This angular diameter distance is time dependent, but because it fixes the time at
both ends we can use it for geometrical analyses. It depends on the assumed constant
distance d, which can for example be identified with the co-moving distance d ≡ d^c.
The curvature is again expressed in terms of today’s energy density and Hubble
constant.
1.2 Radiation and Matter

To understand the implications of the evolution of the Universe following Eq. (1.27), we can look at the composition of the Universe in terms of relativistic states (radiation), non-relativistic states (matter including dark matter), and a cosmological constant Λ. Figure 1.1 shows that at very large temperatures the Universe is dominated by relativistic states. When the variable scale factor a increases, the relativistic energy density drops like 1/a^4. At the same time, the non-relativistic energy density drops like 1/a^3. This means that as long as the relativistic energy density dominates, the relative fraction of matter increases linearly in a. Radiation and matter contribute the same amount to the entire energy density around a_eq = 3·10^−4, a period known as matter–radiation equality. The cosmological constant does not change, which means it will eventually dominate. This starts happening around now.
Fig. 1.1 Composition of our Universe as a function of the scale factor. Figure from Daniel
Baumann’s lecture notes [1]
We know experimentally that most of the matter content in the Universe is not
baryonic, but dark matter. To describe its production in our expanding Universe we
need to apply some basic statistical physics and thermodynamics. We start with the
observation that according to Fig. 1.1 in the early Universe neither the curvature k nor the vacuum energy ρΛ play a role. This means that the relevant terms in the Friedmann equation Eq. (1.17) read

$$H(t)^2 = \frac{\rho_t(t)}{3 M_\mathrm{Pl}^2}. \tag{1.39}$$
This form will be the basis of our calculation in this section. The main change with
respect to our above discussion will be a shift to temperature rather than time as an
evolution variable.
For relativistic and non-relativistic particles or radiation we can use a unified
picture in terms of their quantum fields. What we have to distinguish are fermion
and boson fields and the temperature T relative to their respective masses m. The
number of degrees of freedom is counted by a factor g, for example accounting for the anti-particle, the spin, or the color states. For example, for the photon we have gγ = 2, for the electron and positron ge = 2 each, and for the left-handed neutrino gν = 1. If we neglect the chemical potential, because we assume states to be either clearly non-relativistic or clearly relativistic, and we set kB = 1, we (or better
MATHEMATICA) find

$$n^\mathrm{eq}(T) = g \int \frac{d^3p}{(2\pi)^3}\, \frac{1}{e^{E/T} \pm 1} \qquad\text{for fermions/bosons} \tag{1.40}$$
$$= g \int_m^\infty \frac{4\pi\, E\, dE\, \sqrt{E^2 - m^2}}{(2\pi)^3 \left( e^{E/T} \pm 1 \right)} \qquad\text{using}\quad E^2 = p^2 + m^2 \;\text{and}\; p\, dp = E\, dE$$
$$= \begin{cases} g \left( \dfrac{mT}{2\pi} \right)^{3/2} e^{-m/T} & \text{non-relativistic states } T \ll m \\[1ex] \dfrac{\zeta_3}{\pi^2}\, g\, T^3 & \text{relativistic bosons } T \gg m \\[1ex] \dfrac{3}{4}\, \dfrac{\zeta_3}{\pi^2}\, g\, T^3 & \text{relativistic fermions } T \gg m. \end{cases}$$
The Riemann zeta function has the value ζ3 ≈ 1.2. As expected, the quantum-statistical nature only matters once the states become relativistic and probe the relevant energy ranges. Similarly, we can compute the energy density in these different cases,
$$\rho^\mathrm{eq}(T) = g \int \frac{d^3p}{(2\pi)^3}\, \frac{E}{e^{E/T} \pm 1} = g \int_m^\infty \frac{4\pi\, E^2\, dE\, \sqrt{E^2 - m^2}}{(2\pi)^3 \left( e^{E/T} \pm 1 \right)} \tag{1.41}$$
$$= \begin{cases} m\, g \left( \dfrac{mT}{2\pi} \right)^{3/2} e^{-m/T} & \text{non-relativistic states } T \ll m \\[1ex] \dfrac{\pi^2}{30}\, g\, T^4 & \text{relativistic bosons } T \gg m \\[1ex] \dfrac{7}{8}\, \dfrac{\pi^2}{30}\, g\, T^4 & \text{relativistic fermions } T \gg m. \end{cases}$$
In the non-relativistic case the scaling of ρ relative to the number density is given by an additional factor m ≫ T. In the relativistic case the additional factor is the temperature T, resulting in a Stefan–Boltzmann scaling of the energy density, ρ ∝ T^4. To compute the pressure we can simply use the equation of state, Eq. (1.23), with w = 1/3.
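The relativistic limits quoted in Eq. (1.40) can be verified by direct quadrature. A sketch in Python with g = 1 and a simple trapezoidal rule (the step count and the upper cutoff are our choices):

```python
# Cross-check of Eq. (1.40): n = 1/(2 pi^2) Int_m^inf E sqrt(E^2-m^2)/(e^(E/T) +- 1) dE
# should reproduce zeta(3)/pi^2 T^3 for massless bosons and 3/4 of that for fermions.
import math

def n_eq(T: float, m: float, sign: int, steps: int = 20000) -> float:
    """sign = -1 for bosons, +1 for fermions; trapezoidal quadrature."""
    lo, hi = m + 1e-8 * T, m + 50.0 * T   # the tail beyond ~50 T is negligible
    h = (hi - lo) / steps
    f = lambda E: E * math.sqrt(E*E - m*m) / (math.exp(E / T) + sign)
    total = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, steps))
    return total * h / (2.0 * math.pi**2)

T, zeta3 = 1.0, 1.20206
print(n_eq(T, 0.0, -1), zeta3 / math.pi**2 * T**3)           # bosons
print(n_eq(T, 0.0, +1), 0.75 * zeta3 / math.pi**2 * T**3)    # fermions
```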
The number of active degrees of freedom in our system depends on the
temperature. As an example, above the electroweak scale v = 246 GeV the effective
number of degrees of freedom includes all particles of the Standard Model
Often, the additional factor 7/8 for the fermions in Eq. (1.41) is absorbed in an
effective number of degrees of freedom, implicitly defined through the unified
14 1 History of the Universe
relation
$$\rho_r = g_\mathrm{eff}(T)\, \frac{\pi^2}{30}\, T^4, \tag{1.43}$$
with the relativistic contribution to the matter density defined in Eq. (1.17). Strictly
speaking, this relation between the relativistic energy density and the temperature
only holds if all states contributing to ρr have the same temperature, i.e. are in
thermal equilibrium with each other. This does not have to be the case. To include
different states with different temperatures we define geff as a weighted sum with
the specific temperatures of each component, namely
$$g_\mathrm{eff}(T) = \sum_\text{bosons} g_b\, \frac{T_b^4}{T^4} + \frac{7}{8} \sum_\text{fermions} g_f\, \frac{T_f^4}{T^4}. \tag{1.44}$$
For the entire Standard Model particle content at equal temperatures this gives

$$g_\mathrm{eff}(T > 175\,\mathrm{GeV}) \stackrel{\text{Eq. (1.42)}}{=} 28 + \frac{7}{8} \cdot 90 = 106.75. \tag{1.45}$$
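The counting in Eq. (1.45) is a one-liner; the breakdown of the 28 bosonic and 90 fermionic degrees of freedom into individual Standard Model states is our own bookkeeping:

```python
# Eq. (1.45): Standard Model degrees of freedom above the electroweak scale,
# fermions weighted by 7/8 as in Eq. (1.44).
g_bosons = 16 + 2 + 9 + 1     # gluons, photon, W+/W-/Z (3 polarizations each), Higgs
g_fermions = 72 + 12 + 6      # quarks, charged leptons, left-handed neutrinos
                              # (with antiparticles and spin states)
g_eff = g_bosons + 7.0 / 8.0 * g_fermions
print(g_eff)   # 106.75
```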
When we reduce the temperature, this number of active degrees of freedom changes
whenever a particle species vanishes at the respective threshold T = m. This curve
is illustrated in Fig. 1.2: geff drops from 106.75 at high temperatures through a series of thresholds (the top quark; W±, Z0 and H; the b and c quarks and the τ; the QCD transition with pions and muons; finally the electrons) down to 3.6 today.

Fig. 1.2 Number of effective degrees of freedom geff as a function of the temperature, assuming the Standard Model particle content. Figure from Daniel Baumann's lecture notes [1]

For today's value we will use the value
Finally, we can insert the relativistic matter density given in Eq. (1.43) into the
Friedmann equation Eq. (1.39) and find for the relativistic, radiation-dominated case
$$H(t)^2 \stackrel{\text{Eq. (1.32)}}{=} \left( \frac{1}{2t} \right)^{\!2} \stackrel{\text{Eq. (1.39)}}{=} \frac{\rho_r}{3 M_\mathrm{Pl}^2} \stackrel{\text{Eq. (1.43)}}{=} \frac{1}{3 M_\mathrm{Pl}^2}\, \frac{\pi^2}{30}\, g_\mathrm{eff}(T)\, T^4 \quad\Rightarrow\quad H = \frac{\pi}{\sqrt{90}}\, \sqrt{g_\mathrm{eff}}\, \frac{T^2}{M_\mathrm{Pl}}. \tag{1.47}$$
This relation is important, because it links time, temperature, and Hubble constant
as three possible scales in the evolution of our Universe in the relativistic regime.
The one thing we need to check is if all relativistic relics have the same temperature.
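As a numerical illustration of Eq. (1.47), we can evaluate H at a temperature of 1 MeV, assuming geff = 10.75 (photons, electrons and positrons, and neutrinos, as read off Fig. 1.2) and again the reduced Planck mass 2.4·10²⁷ eV:

```python
# Eq. (1.47): H = pi sqrt(geff/90) T^2 / M_Pl during radiation domination.
import math

M_PL_EV = 2.4e27   # reduced Planck mass in eV (our assumed value)

def hubble_radiation(T_eV: float, geff: float) -> float:
    """Hubble rate in eV for a radiation-dominated Universe."""
    return math.pi * math.sqrt(geff / 90.0) * T_eV**2 / M_PL_EV

print(hubble_radiation(1e6, 10.75))   # in eV, around the MeV era
```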
1.3 Relic Photons

Before we will eventually focus on weakly interacting massive particles, forming the dark matter content of the Universe, it is for many reasons instructive to understand the current photon density. We already know that the densities of all particles pair-produced from a thermal bath in the early, hot Universe follow Eq. (1.41) and hence drop rapidly with the decreasing temperature of the expanding Universe. This kind of behavior is described by the Boltzmann equation, which we will study in some detail in Chap. 3. Computing the neutrino or photon number densities from the Boltzmann equation as a function of time or temperature will turn out to be a serious numerical problem. An alternative approach is to keep track of the relevant degrees of freedom g(T) and compute for example the neutrino relic density ρν from Eq. (1.43), all as a function of the temperature instead of time. In this approach it is crucial to know which particles are in equilibrium at any given point in time or temperature, which means that we need to track the temperature of the photon–neutrino–electron bath as it falls apart.
Neutrinos, photons, and electrons maintain thermal equilibrium through the
scattering processes
$$\sigma_{\nu e}(T) = \frac{\pi \alpha^2\, T^2}{s_w^4\, m_W^4} \qquad\qquad \sigma_{\gamma e}(T) = \frac{\pi \alpha^2}{m_e^2}. \tag{1.49}$$
The weak coupling g ≡ e/sin θw ≡ e/sw with sw^2 ≈ 1/4 appears together with the electromagnetic coupling α = e^2/(4π) ≈ 1/137. The geometric factor π comes from the angular integration and helps us get to the correct approximate numbers. The photons are more strongly coupled to the electron bath, which means they will decouple last, and in their decoupling we do not have to consider the neutrinos anymore. The interaction rate
$$\Gamma := \sigma\, v\, n \tag{1.50}$$
describes the probability for example of the neutrino or photon scattering process in
Eq. (1.48) to happen. It is a combination of the cross section, the relevant number
density and the velocity, measured in powers of temperature or energy, or inverse
time. In our case, the relativistic relics move at the speed of light. Because the
Universe expands, the density of neutrinos, photons, and charged leptons will at
some point drop to a point where the processes in Eq. (1.48) hardly occur. They will
stop maintaining the equilibrium between photons, neutrinos, and charged leptons
roughly when the respective interaction rate drops below the Hubble expansion.
This gives us the condition
$$\frac{\Gamma(T_\mathrm{dec})}{H(T_\mathrm{dec})} \stackrel{!}{=} 1. \tag{1.51}$$
$$\frac{1}{\sigma(T_\mathrm{dec})\, n(T_\mathrm{dec})} \stackrel{!}{=} \frac{v}{H(T_\mathrm{dec})} \quad\Leftrightarrow\quad \frac{\sigma(T_\mathrm{dec})\, v\, n(T_\mathrm{dec})}{H(T_\mathrm{dec})} \stackrel{!}{=} 1. \tag{1.52}$$
The interaction rate, for example for neutrino–electron scattering, is in the literature often defined using the neutrino density n = nν, while for the mean free path we have to use the target density, in this case the electron density n = ne.
We should be able to compute the photon decoupling from the electrons based on the above definition of Tdec and the photon–electron or Thomson scattering rate in Eq. (1.49). The problem is that, as it will turn out, at the time of photon decoupling the electrons are no longer the relevant states. Between temperatures of 1 MeV and the relevant eV scale for photon decoupling, nucleosynthesis will have happened, and the early Universe will be made up of atoms and photons, with a small number of free electrons. Based on this, we can very roughly guess the temperature at which the Universe becomes transparent to photons from the fact that most of the electrons are bound in hydrogen atoms. The ionization energy of hydrogen is 13.6 eV, which is our first guess for Tdec. On the other hand, the photon energies will follow a Boltzmann distribution. This means that for a given temperature Tdec there will be a high-energy tail of photons with much larger energies. To avoid having too many photons still ionizing the hydrogen atoms, the photon temperature should therefore come out as Tdec ≪ 13.6 eV.
Going back to the defining relation in Eq. (1.51), we can circumvent the problem of the unknown electron density by expressing the density of free electrons first relative to the density of electrons bound in mostly hydrogen, with a measured suppression factor ne/nB ≈ 10^−2. Moreover, we can relate the full electron density or the baryon density nB to the photon density nγ through the measured baryon-to-photon ratio. In combination, this gives us for the time of photon decoupling

$$n_e(T_\mathrm{dec}) = \frac{n_e}{n_B}(T_\mathrm{dec})\; n_B(T_\mathrm{dec}) = \frac{n_e}{n_B}(T_\mathrm{dec})\, \frac{n_B}{n_\gamma}(T_\mathrm{dec})\; n_\gamma(T_\mathrm{dec}) = 10^{-2}\, 10^{-10}\, \frac{2 \zeta_3\, T_\mathrm{dec}^3}{\pi^2}. \tag{1.53}$$
At this point we only consider the ratio nB/nγ ≈ 10^−10 a measurable quantity; its meaning will be the topic of Sect. 2.4. With this estimate of the relevant electron density we can compute the temperature at the point of photon decoupling. For the Hubble constant we need the number of active degrees of freedom in the absence of neutrinos and just including electrons, positrons, and photons,

$$g_\mathrm{eff}(T_\mathrm{dec}) = \frac{7}{8}\, (2 + 2) + 2 = 5.5. \tag{1.54}$$
8
Inserting the Hubble constant from Eq. (1.47) and the cross section from Eq. (1.49) gives us the condition

$$\frac{\Gamma_\gamma}{H} = \frac{2\pi \zeta_3\, \alpha^2}{\pi^2\, m_e^2}\, 10^{-12}\, T^3\; \frac{\sqrt{90}\, M_\mathrm{Pl}}{\pi \sqrt{g_\mathrm{eff}(T)}\, T^2} = \frac{6\sqrt{10}\, \zeta_3}{\pi^2}\, 10^{-12}\, \alpha^2\, \frac{1}{\sqrt{g_\mathrm{eff}(T)}}\, \frac{M_\mathrm{Pl}\, T}{m_e^2} \stackrel{!}{=} 1$$
$$\Leftrightarrow\quad T_\mathrm{dec} = 10^{12}\, \frac{\pi^2}{6\sqrt{10}\, \zeta_3}\, \frac{m_e^2}{M_\mathrm{Pl}}\, \frac{\sqrt{g_\mathrm{eff}(T_\mathrm{dec})}}{\alpha^2} \approx (0.1 \ldots 1)\,\mathrm{eV}. \tag{1.55}$$
As discussed above, to avoid having too many photons still ionizing the hydrogen atoms, the photon temperature indeed comes out as Tdec ≈ 0.26 eV ≪ 13.6 eV.
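The estimate of Eq. (1.55) can be reproduced by solving Γ/H = 1 numerically. A rough sketch with the numbers used in the text; since every input is an order-of-magnitude value, only the eV scale of the result is meaningful:

```python
# Photon decoupling estimate, Eqs. (1.49)-(1.55): bisect Gamma/H = 1.
# All energies in eV; n_e = 1e-2 * 1e-10 * n_gamma as in Eq. (1.53).
import math

M_PL = 2.4e27              # reduced Planck mass
M_E = 511e3                # electron mass
ALPHA = 1.0 / 137.0
ZETA3 = 1.202
G_EFF = 5.5                # photons, electrons, positrons, Eq. (1.54)

def gamma_over_h(T: float) -> float:
    sigma = math.pi * ALPHA**2 / M_E**2               # Thomson-like, Eq. (1.49)
    n_e = 1e-12 * 2.0 * ZETA3 * T**3 / math.pi**2     # free electrons, Eq. (1.53)
    hubble = math.pi * math.sqrt(G_EFF / 90.0) * T**2 / M_PL
    return sigma * n_e / hubble                        # grows linearly with T

def t_dec() -> float:
    lo, hi = 0.01, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gamma_over_h(mid) < 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(f"T_dec ~ {t_dec():.1f} eV")   # eV-scale, far below 13.6 eV
```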
These decoupled photons form the cosmic microwave background (CMB), which
will be the main topic of Sect. 1.4. The main property of this photon background,
which we will need all over these notes, is its current temperature. We can compute
T0,γ from the temperature at the point of decoupling, when we account for the
expansion of the Universe between Tdec and now. We can for example use the time
evolution of the Hubble constant H ∝ T 2 from Eq. (1.47) to compute the photon
temperature today. We find the experimentally measured value of
$$T_{0,\gamma} = 2.4 \cdot 10^{-4}\,\mathrm{eV} = 2.73\,\mathrm{K} \approx \frac{T_\mathrm{dec}}{1000}. \tag{1.56}$$
From Eq. (1.40) we can also compute the current density of CMB photons,

$$n_\gamma(T_0) = \frac{2 \zeta_3}{\pi^2}\, T_{0,\gamma}^3 = \frac{410}{\mathrm{cm}^3}. \tag{1.58}$$
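Equation (1.58) is another quick check: with T0,γ = 2.73 K and the conversion constant ħc = 1.97·10⁻⁵ eV cm (our inserted value), the photon density indeed comes out near 410 per cm³:

```python
# Eq. (1.58): n_gamma = 2 zeta(3)/pi^2 T0^3, converted from eV^3 to 1/cm^3.
import math

K_B = 8.617e-5      # eV/K
HBARC = 1.97e-5     # eV cm, so 1 eV corresponds to 1/(1.97e-5 cm)
ZETA3 = 1.202

T0 = 2.73 * K_B                                   # ~2.35e-4 eV
n_gamma_ev3 = 2.0 * ZETA3 / math.pi**2 * T0**3    # in eV^3
n_gamma_cm3 = n_gamma_ev3 / HBARC**3              # in photons per cm^3
print(f"n_gamma ~ {n_gamma_cm3:.0f} photons/cm^3")   # ~410
```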
1.4 Cosmic Microwave Background

In Sect. 1.3 we have learned that at temperatures around 0.1 eV the thermal photons decoupled from the matter in the Universe and have since then been streaming through the expanding Universe. This is why their temperature has dropped to T0 = 2.4·10^−4 eV now. We can think of the cosmic microwave background or CMB photons as coming from a sphere of last scattering with the observer in the center. The photons stream freely through the Universe, which means they come from this sphere straight to us.
The largest effect leading to a temperature fluctuation in the CMB photons is that the Earth moves through the photon background, or any other background, at constant speed. We can subtract the corresponding dipole correlation, because it does not tell us anything about fundamental cosmological parameters. The most important, fundamental result is that after subtracting this dipole contribution the temperature on the surface of last scattering only shows tiny variations around δT/T ≈ 10^−5. The entire surface, rapidly moving away from us, should not be causally connected, so what generated such a constant temperature? Our favorite explanation for this is a very rapid, inflationary period of expansion. This means that we postulate a fast enough expansion of the Universe, such that the sphere of last scattering becomes causally connected. From Eq. (1.31) we know that such an expansion will be driven not by matter but by a cosmological constant. The detailed structure of the CMB should therefore be a direct and powerful probe for essentially all parameters defined and discussed in this chapter.
The main observable which the photon background offers is its temperature or energy; additional information, for example about the polarization of the photons, is very interesting in general, but less important for dark matter studies. Any effect which modifies this picture of an entirely homogeneous Universe made out of a thermal bath of electrons, photons, neutrinos, and possibly dark matter particles, should be visible as a modification to a constant temperature over the sphere of last scattering. This means we are interested in analyzing temperature fluctuations between points on this surface.
1.4 Cosmic Microwave Background 19
\frac{\delta T(\theta,\phi)}{T_0} := \frac{T(\theta,\phi) - T_0}{T_0} = \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} a_{\ell m}\, Y_{\ell m}(\theta,\phi) \,.  (1.59)
The spherical harmonics are orthonormal with respect to the integral over the full solid angle dΩ = dφ d\cos θ,

\int d\Omega\; Y_{\ell m}(\theta,\phi)\, Y^*_{\ell' m'}(\theta,\phi) = \delta_{\ell\ell'}\, \delta_{m m'}

\Rightarrow \int d\Omega\; \frac{\delta T(\theta,\phi)}{T_0}\, Y^*_{\ell' m'}(\theta,\phi) \overset{(1.59)}{=} \sum_{\ell m} a_{\ell m} \int d\Omega\; Y_{\ell m}(\theta,\phi)\, Y^*_{\ell' m'}(\theta,\phi) = \sum_{\ell m} a_{\ell m}\, \delta_{\ell\ell'}\, \delta_{m m'} = a_{\ell' m'} \,.  (1.60)

This is the inverse relation to Eq. (1.59), which allows us to compute the set of numbers a_{\ell m} from a known temperature map δT(θ,φ)/T_0.
For the function T(θ,φ) measured over the sphere of last scattering, we can ask the three questions we usually ask for distributions which we know are peaked:
1. what is the peak value?
2. what is the width of the peak?
3. what is the shape of the peak?
For the CMB we assume that we already know the peak value T0 and that there is
no valuable information in the statistical distribution. This means that we can focus
on the width or the variance of the temperature distribution. Its square root defines
the standard deviation. In terms of the spherical harmonics the variance reads
\frac{1}{4\pi} \int d\Omega \left( \frac{\delta T(\theta,\phi)}{T_0} \right)^2 = \frac{1}{4\pi} \int d\Omega\; \sum_{\ell m} a_{\ell m}\, Y_{\ell m}(\theta,\phi) \sum_{\ell' m'} a^*_{\ell' m'}\, Y^*_{\ell' m'}(\theta,\phi)
\overset{(1.60)}{=} \frac{1}{4\pi} \sum_{\ell m,\, \ell' m'} a_{\ell m}\, a^*_{\ell' m'}\, \delta_{\ell\ell'}\, \delta_{m m'} = \frac{1}{4\pi} \sum_{\ell m} |a_{\ell m}|^2 \,.  (1.61)
20 1 History of the Universe
We can further simplify this relation by our expectation for the distribution of the temperature deviations. We remember for example from quantum mechanics that for the angular momentum the index m describes the angular momentum in one specific direction. Our analysis of the surface of last scattering, just like the hydrogen atom without an external magnetic field, does not have any special direction. This implies that the values of a_{\ell m} should not depend on the index m; the sum over m then becomes a sum over 2\ell+1 identical terms. We therefore define the observed power spectrum as the average of the |a_{\ell m}|^2 over m,

C_\ell := \frac{1}{2\ell+1} \sum_{m=-\ell}^{\ell} |a_{\ell m}|^2
\quad\Leftrightarrow\quad \frac{1}{4\pi} \int d\Omega \left( \frac{\delta T(\theta,\phi)}{T_0} \right)^2 = \sum_{\ell=0}^{\infty} \frac{2\ell+1}{4\pi}\, C_\ell \,.  (1.62)

The great simplification of this last assumption is that we now just analyze the discrete values C_\ell as a function of \ell \ge 0.
Note that we analyze the fluctuations averaged over the surface of last scattering, which gives us one curve C_\ell for discrete values \ell \ge 0. This curve is one measurement, which means none of its points has to agree perfectly with the theoretical expectations. However, because of the averaging over m possible statistical fluctuations will cancel, in particular for larger values of \ell, where we average over more independent orientations.
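This cancellation can be illustrated with a toy Monte Carlo (a sketch, not an analysis of real data; we draw real-valued Gaussian a_ℓm with a true C_ℓ = 1 as an arbitrary normalization):

```python
# Toy Monte Carlo for the m-average in Eq. (1.62): the estimator
# C_l = sum_m |a_lm|^2/(2l+1) scatters around the true C_l with relative
# cosmic variance sqrt(2/(2l+1)), so the scatter shrinks for larger l.
import random, statistics

random.seed(1)
C_true = 1.0

def estimate_Cl(l, n_skies=2000):
    """One C_l estimate per random sky: average |a_lm|^2 over 2l+1 modes."""
    return [sum(random.gauss(0.0, C_true**0.5)**2 for _ in range(2 * l + 1)) / (2 * l + 1)
            for _ in range(n_skies)]

results = {}
for l in (2, 10, 100):
    est = estimate_Cl(l)
    results[l] = (statistics.mean(est), statistics.stdev(est))
    print(f"l={l:3d}: mean={results[l][0]:.3f}  scatter={results[l][1]:.3f}  "
          f"expected={(2 / (2 * l + 1))**0.5:.3f}")
```

At low ℓ the scatter of a single sky around the true C_ℓ is unavoidable; this is the cosmic variance visible at small ℓ in Fig. 1.3.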
We can compare the series in spherical harmonics in Eq. (1.59) to a Fourier series. The latter will, for example, analyze the frequencies contributing to the sound of a musical instrument. The discrete series of Fourier coefficients tells us which frequency modes contribute how strongly to the sound or noise. The spherical harmonics do something similar, which we can illustrate using the properties of the Y_{\ell 0}(\theta,\phi). Their explicit form in terms of the associated Legendre polynomials P_{\ell m} and the Legendre polynomials P_\ell is

Y_{\ell m}(\theta,\phi) = (-1)^m \sqrt{\frac{2\ell+1}{4\pi}\, \frac{(\ell-m)!}{(\ell+m)!}}\; e^{i m \phi}\; P_{\ell m}(\cos\theta)
= (-1)^m \sqrt{\frac{2\ell+1}{4\pi}\, \frac{(\ell-m)!}{(\ell+m)!}}\; e^{i m \phi} \times (-1)^m \left( 1 - \cos^2\theta \right)^{m/2} \frac{d^m}{d(\cos\theta)^m}\, P_\ell(\cos\theta)

\Rightarrow Y_{\ell 0}(\theta,\phi) = \sqrt{\frac{2\ell+1}{4\pi}}\; P_\ell(\cos\theta) \,.  (1.63)
The Legendre polynomials follow from the Rodrigues formula,

P_\ell(\cos\theta) = \frac{1}{2^\ell\, \ell!}\, \frac{d^\ell}{d(\cos\theta)^\ell} \left( \cos^2\theta - 1 \right)^\ell = c_\ell \cos(\ell\theta) + \cdots \,,  (1.64)

with the normalization P_\ell(\pm 1) = 1 and zeros in between. Approximately, these zeros occur at

P_\ell(\cos\theta) = 0 \quad\Leftrightarrow\quad \cos\theta = \cos\left( \frac{4k-1}{4\ell+2}\, \pi \right) \qquad k = 1, \ldots, \ell \,.  (1.65)
The first zero of each mode defines an angular resolution θ_\ell of the \ell-th term in the spherical harmonic series,

\cos\left( \frac{3\pi}{4\ell+2} \right) \equiv \cos\theta_\ell \quad\Leftrightarrow\quad \theta_\ell \approx \frac{3\pi}{4\ell} \,.  (1.66)
This separation in angle can obviously be translated into a spatial distance on the sphere of last scattering, provided we know the distance between us and the sphere of last scattering. This means that the series of a_{\ell m} or the power spectrum C_\ell gives us information about the angular distances (encoded in \ell) which contribute to the temperature fluctuations δT/T_0.
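The quality of this approximation is easy to check numerically (a sketch; ℓ = 20 is an arbitrary choice, and the recurrence relation for P_ℓ replaces a library call):

```python
# Evaluating P_l with the recurrence (n+1) P_(n+1) = (2n+1) x P_n - n P_(n-1)
# at the approximate zeros cos((4k-1) pi/(4l+2)) of Eq. (1.65): the
# polynomial is indeed close to zero at all of these points.
import math

def legendre(l, x):
    p0, p1 = 1.0, x
    for n in range(1, l):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1 if l > 0 else p0

l = 20
values = [legendre(l, math.cos((4 * k - 1) * math.pi / (4 * l + 2)))
          for k in range(1, l + 1)]
print(f"largest |P_l| at the approximate zeros: {max(abs(v) for v in values):.3f}")
```

The agreement is worst near the endpoints; the first zero sets the angular resolution θ_ℓ ≈ 3π/(4ℓ) of Eq. (1.66).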
Next, we need to think about how a distribution of the C_\ell will typically look. In Fig. 1.3 we see that the measured power spectrum essentially consists of a set of peaks. Each peak gives us an angular scale with a particularly large temperature fluctuation.
Fig. 1.3 Power spectrum as measured by PLANCK in 2015. Figure from the PLANCK collabo-
ration [2]
Moreover, the peaks are washed out for large \ell. This happens because our approximation that the sphere of last scattering has negligible thickness catches up with us. If we take into account that the sphere of last scattering has a finite thickness, the strongly peaked structure of the power spectrum gets washed out. Towards large \ell values or small distances the thickness effects become comparable to the spatial resolution at the time of last scattering. This leads to an additional damping term

C_\ell \propto e^{-\ell^2/1500^2} \,,  (1.67)

which washes out the peaks above \ell = 1500 and erases all relevant information.
Next, we can derive the position of the acoustic peaks. Because of the rapid expansion of the Universe, a critical angle θ in Eq. (1.66) defines the size of patches of the sky which were not in causal contact during and since the time of last scattering. Below the corresponding \ell-value there will be no correlation. It is given by two distances: the first of them is the distance on the sphere of last scattering, which we can compute in analogy to the co-moving distance defined in Eq. (1.37). Because the co-moving distance is best described by an integral over the scale factor a, we use the value a_\mathrm{dec} \approx 1/1100 from Eq. (1.57) and integrate the ratio of the distance to the sound velocity c_s in the baryon–photon fluid,

\frac{r_s}{c_s} \overset{(1.35)}{=} \int_0^{a_\mathrm{dec}} \frac{da}{a(t)\, \dot a(t)} \,.  (1.68)
For a perfect relativistic fluid the speed of sound is given by c_s = 1/\sqrt{3}. This distance is called the sound horizon and depends mostly on the matter density
around the oscillating baryon–photon fluid. The second relevant distance is the
distance between us and the sphere of last scattering. Again, we start from the co-
moving distance d c introduced in Eq. (1.35). Following Eq. (1.37) it will depend on
the current energy and matter content of the universe. The angular separation is
\sin\theta = \frac{r_s(\Omega_m, \Omega_b)}{d^c} \,.  (1.69)
Both r_s(\Omega_m, \Omega_b) and d^c are described by the same integrand in Eq. (1.36). It can be simplified for a matter-dominated (\Omega_r \ll \Omega_m) and almost flat (\Omega_t \approx \Omega_m) Universe to

\left[ \Omega_m(t_0)\, a(t) + \Omega_r(t_0) + \Omega_\Lambda\, a(t)^4 - \left( \Omega_t(t_0) - 1 \right) a(t)^2 \right]^{-1/2} \approx \frac{1}{\sqrt{\Omega_m(t_0)\, a(t)}} \,,  (1.70)
where we also replaced a_0 = 1. The ratio of the two integrals then gives

\sin\theta = \frac{c_s \int_0^{a_\mathrm{dec}} da/\sqrt{a}}{\int_{a_\mathrm{dec}}^1 da/\sqrt{a}} = \frac{1}{\sqrt{3}}\, \frac{\sqrt{a_\mathrm{dec}}}{1 - \sqrt{a_\mathrm{dec}}} \approx \frac{1}{55} \quad\Rightarrow\quad \theta \approx 1^\circ \,.  (1.71)
A more careful calculation taking into account the reduced speed of sound and the effects from \Omega_r, \Omega_\Lambda gives a critical angle

\theta \approx 0.6^\circ \quad\overset{(1.66)}{\Rightarrow}\quad \ell \Big|_\text{first peak} = \frac{3\pi}{4 \times 0.6^\circ} = 225 \,.  (1.72)
The first peak in Fig. 1.3 corresponds to the fundamental tone, a sound wave with a wavelength twice the size of the horizon at decoupling. By the time of last scattering this wave had just compressed once. Note that a closed or open universe predicts a different result for θ following Eq. (1.38). The measurement of the position of the first peak is therefore considered a measurement of the geometry of the universe and a confirmation of its flatness.
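The estimate of Eq. (1.71) and the translation into a multipole via Eq. (1.66) take only a few lines (a sketch with the rounded input values used in the text):

```python
# The estimate of Eq. (1.71): ratio of sound horizon to the distance to
# last scattering for c_s = 1/sqrt(3) and a_dec = 1/1100, then Eq. (1.66)
# translated into the multipole of the first acoustic peak for 0.6 deg.
import math

a_dec = 1.0 / 1100.0
c_s = 1.0 / math.sqrt(3.0)

sin_theta = c_s * math.sqrt(a_dec) / (1.0 - math.sqrt(a_dec))
theta = math.degrees(math.asin(sin_theta))
print(f"theta ~ {theta:.2f} deg")                    # ~ 1 deg

l_first = 3 * math.pi / (4 * math.radians(0.6))      # careful estimate: 0.6 deg
print(f"first peak at l ~ {l_first:.0f}")            # ~ 225
```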
The second peak corresponds to the sound wave which underwent one com-
pression and one rarefaction at the time of the last scattering and so forth for the
higher peaks. Even-numbered peaks are associated with how far the baryon–photon
fluid compresses due to the gravitational potential, odd-numbered peaks indicate
the rarefaction counter-effect of radiation pressure. If the relative baryon content in the baryon–photon fluid is higher, the radiation pressure decreases and the compression peaks become higher. The relative amplitude between odd and even peaks can therefore be used as a measure of \Omega_b.
Dark matter does not respond to radiation pressure, but contributes to the
gravitational wells and therefore further enhances the compression peaks with
respect to the rarefaction peaks. This makes a large third peak a sign of a sizable
dark matter component at the time of the last scattering.
From Fig. 1.1 we know that today we can neglect \Omega_r(t_0) \ll \Omega_m(t_0) \sim \Omega_\Lambda. Moreover, the relativistic matter content is known from the accurate measurement of the photon temperature T_0, giving \Omega_r h^2 through Eq. (2.9). This means that the peaks in the CMB power spectrum will be described by: the cosmological constant defined in Eq. (1.5), the entire matter density defined in Eq. (1.6), which is dominated by the dark matter contribution, the baryonic matter density defined in Eq. (1.7), and the Hubble parameter defined in Eq. (1.1). People usually choose the four parameters

\Omega_t(t_0) \,, \qquad \Omega_\Lambda \,, \qquad \Omega_m h^2 \,, \qquad \Omega_b h^2 \,.  (1.73)
Including h2 in the matter densities means that we define the total energy density
Ωt (t0 ) as an independent parameter, but at the expense of h or H0 now being a
derived quantity,

\left( \frac{H_0}{100\, \frac{\mathrm{km}}{\mathrm{s\,Mpc}}} \right)^2 = h^2 = \frac{\Omega_m(t_0)\, h^2}{\Omega_m(t_0)} = \frac{\Omega_m(t_0)\, h^2}{\Omega_t(t_0) - \Omega_\Lambda - \Omega_r(t_0)} \approx \frac{\Omega_m(t_0)\, h^2}{\Omega_t(t_0) - \Omega_\Lambda} \,.  (1.74)
There are other cosmological parameters, which we need for example to determine the distance to the sphere of last scattering, but we will not discuss them in detail. Obviously, the choice of parameter basis is not unique, but a matter of convenience.
There exist plenty of additional parameters which affect the CMB power spectrum,
but they are not as interesting for non-relativistic dark matter studies.
We go through the impact of the parameter basis defined in Eq. (1.73) one by one:
– Ω_t affects the co-moving distance, Eq. (1.37), such that an increase in Ω_t(t_0) decreases d^c. The same link to the curvature, k ∝ (Ω_t(t_0) − 1) as given in Eq. (1.20), also decreases ds following Eq. (1.38); this way the angular diameter distance d_A^c is reduced. In addition, there is an indirect effect through H_0: following Eq. (1.74) an increased total energy density decreases H_0 and in turn increases d^c.
Combining all of these effects, it turns out that increasing Ω_t(t_0) decreases d^c. According to Eq. (1.69) a smaller predicted value of d^c effectively increases the corresponding angular scale θ. This means that the acoustic peak positions consistently appear at smaller \ell values.
– Ω_Λ has two effects on the peak positions: first, Ω_Λ enters the formula for d^c with a different sign, which means an increase in Ω_Λ also increases d^c and with it the angular diameter distance d_A^c. At the same time, an increased Ω_Λ also increases H_0 and this way decreases d^c. The combined effect is that an increase in Ω_Λ moves the acoustic peaks to smaller \ell. Because in our parameter basis both Ω_t(t_0) and Ω_Λ have to be determined by the peak positions, we will need to find a way to break this degeneracy.
– Ωm h2 is dominated by dark matter and provides the gravitational potential
for the acoustic oscillations. Increasing the amount of dark matter stabilizes the
gravitational background for the baryon–photon fluid, reducing the height of all
peaks, most visibly the first two. In addition, an increased dark matter density
makes the gravitational potential more similar to a box shape, bringing the higher
modes closer together.
– Ω_b h^2 essentially only affects the heights of the peaks. The baryons provide most of the mass of the baryon–photon fluid, which until now we assumed to be infinitely strongly coupled. Effects of a changed Ω_b h^2 on the CMB power spectrum arise when we go beyond this infinitely strong coupling. Moreover, an increased amount of baryonic matter increases the height of the odd peaks and reduces the height of the even peaks.
Separating these four effects from each other and from other astrophysical and cosmological parameters obviously becomes easier when we can include more and higher peaks. Historically, the WMAP experiment lost sensitivity around the third peak, which means that its results were typically combined with other experiments. The PLANCK satellite clearly identified seven peaks and measured, in a slight modification of our basis in Eq. (1.73) [2],
\Omega_\chi h^2 = 0.1198 \pm 0.0015
\Omega_b h^2 = 0.02225 \pm 0.00016
\Omega_\Lambda = 0.6844 \pm 0.0091
H_0 = 67.27 \pm 0.66\ \frac{\mathrm{km}}{\mathrm{Mpc\ s}} \,.  (1.75)
The dark matter relic density is defined in Eq. (1.7). This is the best measurement of
Ωχ we currently have.
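As a quick consistency check of these numbers (a sketch; we simply combine the quoted central values):

```python
# Consistency of the PLANCK values in Eq. (1.75): the total matter density
# follows from (Omega_chi h^2 + Omega_b h^2)/h^2, and Omega_m + Omega_Lambda
# should come out close to one for a flat Universe.
Ochi_h2 = 0.1198
Ob_h2 = 0.02225
OLambda = 0.6844
h = 0.6727                    # from H0 = 67.27 km/(s Mpc)

Om = (Ochi_h2 + Ob_h2) / h**2
print(f"Omega_m = {Om:.3f}")                           # ~ 0.31
print(f"Omega_m + Omega_Lambda = {Om + OLambda:.3f}")  # ~ 1.00
```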
To describe the growth of structures we model the matter content of the Universe as an ideal fluid with density ρ_m, velocity field \vec u, and pressure p, governed by

\frac{\partial \rho_m}{\partial t} = - \nabla \cdot (\rho_m\, \vec u) \qquad \text{continuity equation}  (1.76)
\left( \frac{\partial}{\partial t} + \vec u \cdot \nabla \right) \vec u = - \frac{\nabla p}{\rho_m} - \nabla \phi \qquad \text{Euler equation}  (1.77)
\nabla^2 \phi = 4\pi G\, \rho_m \qquad \text{Poisson equation},  (1.78)

with the gravitational coupling defined in Eq. (1.4). This set of equations can be solved by a homogeneously expanding fluid
\rho = \rho(t_0)\, \frac{a_0^3}{a^3} \qquad \vec u = \frac{\dot a}{a}\, \vec r = H \vec r \qquad \phi = \frac{\rho\, r^2}{12 M_\mathrm{Pl}^2} \qquad \nabla p = 0 \,.  (1.79)
Inserting this ansatz into the Euler equation gives

\dot H \vec r + H \vec r \cdot \nabla (H \vec r) = - \nabla \phi = - \frac{\rho_m}{6 M_\mathrm{Pl}^2}\, \vec r
\quad\Leftrightarrow\quad \dot H + H^2 = - \frac{\rho_m}{6 M_\mathrm{Pl}^2} \,.  (1.80)
The first Friedmann equation in this approximation also follows when we use
Eq. (1.19) for k → 0 and ρt = ρm ,
H^2 = \frac{\rho_m}{3 M_\mathrm{Pl}^2} \,.  (1.81)
3MPl
We will now allow for small perturbations around the background given in
Eq. (1.79),
\rho(t, \vec r) = \bar\rho(t) + \delta_\rho(t, \vec r) \qquad \vec u(t, \vec r) = H(t)\, \vec r + \delta_u(t, \vec r)
\phi(t, \vec r) = \frac{\bar\rho\, r^2}{12 M_\mathrm{Pl}^2} + \delta_\phi(t, \vec r) \qquad p(t, \vec r) = \bar p(t) + \delta_p(t, \vec r) \,.  (1.82)
The pressure and density fluctuations are linked by the speed of sound δp = cs2 δρ .
Inserting Eq. (1.82), the continuity equation becomes

0 = \dot\rho + \nabla \cdot (\rho\, \vec u) = \dot{\bar\rho} + \dot\delta_\rho + \bar\rho\, \nabla \cdot (H \vec r + \delta_u) + \nabla \delta_\rho \cdot (H \vec r + \delta_u) \,,
where we only keep terms linear in the perturbations and use that the background fields solve the continuity equation, Eq. (1.76). The Euler equation for the perturbations results in
0 = \left( \frac{\partial}{\partial t} + \vec u \cdot \nabla \right) \vec u + \frac{\nabla p}{\rho_m} + \nabla \phi
= \dot H \vec r + \dot\delta_u + \left( H \vec r + \delta_u \right) \cdot \nabla \left( H \vec r + \delta_u \right) + \frac{\nabla (\bar p + \delta_p)}{\bar\rho + \delta_\rho} + \nabla \left( \frac{\bar\rho\, r^2}{12 M_\mathrm{Pl}^2} + \delta_\phi \right) ,  (1.84)

while the perturbed Poisson equation gives

0 = \nabla^2 \phi - \frac{\rho_m}{2 M_\mathrm{Pl}^2} = \nabla^2 \left( \frac{\bar\rho\, r^2}{12 M_\mathrm{Pl}^2} + \delta_\phi \right) - \frac{\bar\rho + \delta_\rho}{2 M_\mathrm{Pl}^2} \overset{(1.78)}{=} \nabla^2 \delta_\phi - \frac{\delta_\rho}{2 M_\mathrm{Pl}^2} \,.  (1.85)
To simplify these equations we switch to co-moving coordinates and introduce the density contrast δ := δ_ρ/\bar\rho,

\vec x := \frac{a_0}{a}\, \vec r \qquad \vec v := \frac{a_0}{a}\, \vec u \qquad \nabla_r := \frac{a_0}{a}\, \nabla_x \qquad \frac{\partial}{\partial t} + H \vec r \cdot \nabla_r \to \frac{\partial}{\partial t} \,.  (1.87)

In these variables the linearized equations read
1.5 Structure Formation 29
\dot\delta + \nabla_x \cdot \delta_v = 0
\dot\delta_v + 2 H \delta_v = - \left( \frac{a_0}{a} \right)^2 \nabla_x \left( c_s^2\, \delta + \delta_\phi \right)
\nabla_x^2\, \delta_\phi = \frac{1}{2 M_\mathrm{Pl}^2} \left( \frac{a}{a_0} \right)^2 \bar\rho\, \delta \,.  (1.88)
These three equations can be combined into a second order differential equation for the density fluctuations δ,

0 = \ddot\delta + \nabla_x \cdot \dot\delta_v = \ddot\delta - \nabla_x \cdot \left[ 2 H \delta_v + \left( \frac{a_0}{a} \right)^2 \nabla_x \left( c_s^2\, \delta + \delta_\phi \right) \right]
= \ddot\delta + 2 H \dot\delta - \left( \frac{a_0}{a} \right)^2 c_s^2\, \nabla_x^2\, \delta - \frac{\bar\rho\, \delta}{2 M_\mathrm{Pl}^2} \,.  (1.89)
To solve this equation, we Fourier-transform the density fluctuation and find the so-called Jeans equation

\delta(\vec x, t) = \int \frac{d^3 k}{(2\pi)^3}\, \hat\delta(\vec k, t)\, e^{-i \vec k \cdot \vec x}
\quad\Rightarrow\quad \ddot{\hat\delta} + 2 H \dot{\hat\delta} = \left[ \frac{\bar\rho}{2 M_\mathrm{Pl}^2} - \left( \frac{c_s\, k\, a_0}{a} \right)^2 \right] \hat\delta \,.  (1.90)
The two terms on the right-hand side cancel for the Jeans length

\lambda_J = \frac{2\pi}{k} \bigg|_\mathrm{homogeneous} = 2\pi\, \frac{a_0}{a}\, c_s \sqrt{\frac{2 M_\mathrm{Pl}^2}{\bar\rho}} \,.  (1.91)
Perturbations of this size neither grow nor get washed out by pressure. To get an idea what the Jeans length for baryons means, we can compare it to the co-moving Hubble scale,

\frac{\lambda_J}{\frac{a_0}{a}\, H^{-1}} = 2\pi\, c_s \sqrt{\frac{2 M_\mathrm{Pl}^2}{\bar\rho}}\; H \overset{(1.81)}{=} 2\pi \sqrt{\frac{2}{3}}\; c_s \,.  (1.92)
For structures well below the Jeans length, the gravitational term in Eq. (1.90) can be neglected and the perturbations oscillate,

\hat\delta(t) \propto e^{\pm i \omega t} \qquad \text{(non-relativistic, small structures)} ,  (1.93)

with ω = c_s k a_0/a. The solutions oscillate with decreasing amplitude, due to the Hubble friction term 2H\dot{\hat\delta}. Structures with sub-Jeans lengths, λ \ll λ_J, therefore do not grow, but the resulting acoustic oscillations can be observed in the matter power spectrum today.
In the opposite regime, for structures larger than the Jeans length, λ \gg λ_J, the pressure term in the Jeans equation can be neglected. The gravitational compression term can be simplified for a matter-dominated universe with a ∝ t^{2/3}, Eq. (1.31). This gives H = \dot a/a = 2/(3t), and it follows from the second Friedmann equation that

\dot H + H^2 = - \frac{2}{9 t^2} \overset{(1.80)}{=} - \frac{\bar\rho}{6 M_\mathrm{Pl}^2} \quad\Rightarrow\quad \bar\rho = \frac{4}{3}\, \frac{M_\mathrm{Pl}^2}{t^2} \,.  (1.94)
We can use this form to simplify the Jeans equation and solve it,

\ddot{\hat\delta} + \frac{4}{3t}\, \dot{\hat\delta} - \frac{2}{3t^2}\, \hat\delta = 0 \quad\Rightarrow\quad \hat\delta = A\, t^{2/3} + \frac{B}{t} \propto t^{2/3} \ \text{(growing mode)} =: \hat\delta_0\, \frac{a}{a_0} \quad \text{using } a \propto t^{2/3} \qquad \text{(non-relativistic, large structures)}.  (1.95)
We can use this formula for the growth as a function of the scale factor to link the density perturbations at the time of photon decoupling to today. For this we quote that at photon decoupling we expect \hat\delta_\mathrm{dec} \approx 10^{-5}, which gives us for today

\hat\delta_0 = \hat\delta_\mathrm{dec}\, \frac{a_0}{a_\mathrm{dec}} \approx 10^{-5} \times 1100 \approx 10^{-2} \,.  (1.96)
We can compare this value with the results from numerical N-body simulations
and find that those simulations prefer much larger values δ̂0 ≈ 1. In other words,
the smoothness of the CMB shows that perturbations in the photon-baryon fluid
alone cannot account for the cosmic structures observed today. One way to improve
the situation is to introduce a dominant non-relativistic matter component with a
negligible pressure term, defining the main properties of cold dark matter.
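The growing mode of Eq. (1.95) can be verified by integrating the pressureless Jeans equation directly (a sketch; the hand-rolled Runge-Kutta stepper and the starting time t = 1 in arbitrary units are our choices):

```python
# Integrating the pressureless Jeans equation in matter domination,
# Eq. (1.95): delta'' + 4/(3t) delta' - 2/(3t^2) delta = 0. Starting on
# the growing mode at t = 1, the solution should follow t^(2/3).
def rhs(t, y):
    delta, ddelta = y
    return [ddelta, -4.0 / (3.0 * t) * ddelta + 2.0 / (3.0 * t**2) * delta]

def rk4_step(t, y, dt):
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, [y[i] + dt / 2 * k1[i] for i in range(2)])
    k3 = rhs(t + dt / 2, [y[i] + dt / 2 * k2[i] for i in range(2)])
    k4 = rhs(t + dt, [y[i] + dt * k3[i] for i in range(2)])
    return [y[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]

t, y, dt = 1.0, [1.0, 2.0 / 3.0], 0.001   # delta = t^(2/3), delta' = 2/(3 t^(1/3))
while t < 100.0:
    y = rk4_step(t, y, dt)
    t += dt

print(f"numerical delta(t={t:.0f}) = {y[0]:.2f}")    # ~ 21.5
print(f"analytic   t^(2/3)        = {t ** (2.0/3.0):.2f}")
```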
Until now our solutions of the Jeans equation rely on the assumption of non-relativistic matter domination. For relativistic matter with a ∝ t^{1/2} the growth of density perturbations follows a different scaling. Following Eq. (1.32) we use H = 1/(2t) and assume H^2 \gg 4\pi G \bar\rho, such that the Jeans equation becomes

\ddot{\hat\delta} + \frac{\dot{\hat\delta}}{t} = 0 \quad\Rightarrow\quad \hat\delta = A + B \log t \qquad \text{(relativistic, small structures)}.  (1.97)
This growth of density perturbations is much weaker than for non-relativistic matter.
Finally, we have to consider relativistic density perturbations larger than the Hubble scale, λ \gg (a_0/a)\, H^{-1}. In this case a Newtonian treatment is no longer justified and we only quote the result of the full calculation from general relativity, which gives a scaling

\hat\delta = \hat\delta_0 \left( \frac{a}{a_0} \right)^2 \qquad \text{(relativistic, large structures)}.  (1.98)
Together with Eqs. (1.93), (1.95) and (1.97) this gives us the growth of structures as a function of the scale factor for non-relativistic and relativistic matter and for small as well as large structures. Radiation pressure in the photon–baryon fluid prevents the growth of small baryonic structures, but the baryon-acoustic oscillations on smaller scales predicted by Eq. (1.93) can be observed. Large structures in a relativistic, radiation-dominated universe indeed grow rapidly. Later in the evolution of the Universe, non-relativistic structures come close to explaining how the matter density observed in the CMB evolves into the structures we see in numerical simulations today, but they require a dominant additional matter component.
Similar to the variations of the cosmic microwave photon temperature we can extend our analysis of the matter density from the central value to its distribution over different sizes or wave numbers. To this end we define the matter power spectrum P(k) in momentum space as

P(k) := |\hat\delta(\vec k)|^2 \,.  (1.99)

As before, we can link k to a wave length λ = 2π/k. For the scaling of the initial power spectrum the relation proposed by Harrison and Zel'dovich is

P(k) \propto k^n = \left( \frac{2\pi}{\lambda} \right)^n \,.  (1.100)
Fig. 1.4 Best fit of today’s matter power spectrum (a0 = 1) from Max Tegmark’s lecture notes [3]
The scale factor at matter–radiation equality is defined by \Omega_m(a_\mathrm{eq}) = \Omega_r(a_\mathrm{eq}), which gives

\frac{a_\mathrm{eq}}{a_0} = \frac{\Omega_m(a_\mathrm{eq})/\Omega_r(a_\mathrm{eq})}{\Omega_m(a_0)/\Omega_r(a_0)} = \frac{\Omega_r(a_0)}{\Omega_m(a_0)} \approx 3 \cdot 10^{-4} \,.  (1.103)
This is true even for \Omega_\Lambda(t_0) > \Omega_m(t_0) > \Omega_r(t_0) today. We can use Eq. (1.104) and write

d_\mathrm{eq}^c \approx \int_0^{a_\mathrm{eq}} da\; \frac{1}{H_0 \sqrt{\Omega_r(t_0)}} = \frac{a_\mathrm{eq}}{H_0 \sqrt{\Omega_r(t_0)}} \overset{(1.103)}{=} \frac{3 \cdot 10^{-4}}{70\, \frac{\mathrm{km}}{\mathrm{s\,Mpc}}\, \sqrt{0.28 \times 3 \cdot 10^{-4}}} = 4.7 \cdot 10^{-4}\ \frac{\mathrm{Mpc\ s}}{\mathrm{km}}

\Rightarrow \lambda_\mathrm{eq} = c\; d_\mathrm{eq}^c \approx 3 \cdot 10^5\ \frac{\mathrm{km}}{\mathrm{s}} \times 4.7 \cdot 10^{-4}\ \frac{\mathrm{Mpc\ s}}{\mathrm{km}} = 140\ \mathrm{Mpc} \,.  (1.105)
This means that the growth of structures with a size of at least 140 Mpc never stalls, while for smaller structures the Meszaros effect leads to a suppressed growth. In the radiation-dominated era the scale λ_eq depends on the scale factor as λ_eq ∝ a_eq c/H_0. The co-moving wave number is defined as k = 2π/λ, and therefore k_eq ≈ 0.05/Mpc. Using this scaling, a ∝ 1/k, the power spectrum scales as

P(k) \propto k \left( \frac{a_\mathrm{enter}}{a_\mathrm{eq}} \right)^2 = \begin{cases} k & k < k_\mathrm{eq} \ \text{or} \ \lambda > 120\ \mathrm{Mpc} \\ \dfrac{1}{k^3} & k > k_\mathrm{eq} \ \text{or} \ \lambda < 120\ \mathrm{Mpc} \,. \end{cases}  (1.106)
The measurement of the power spectrum shown in Fig. 1.4 confirms these two
regimes.
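The numbers entering Eq. (1.105) can be reproduced in a few lines (a sketch using the rounded values H_0 = 70 km/(s Mpc) and Ω_m = 0.28 from the text):

```python
# Reproducing the numbers in Eq. (1.105): the co-moving horizon at
# matter-radiation equality sets the turnover scale of the power spectrum.
import math

a_eq = 3e-4                         # a_eq/a0 from Eq. (1.103)
H0 = 70.0                           # Hubble constant in km/(s Mpc)
Omega_m = 0.28
c = 3e5                             # speed of light in km/s

Omega_r = Omega_m * a_eq            # radiation density today, Eq. (1.103)
d_eq = a_eq / (H0 * math.sqrt(Omega_r))   # co-moving distance in Mpc s/km
lambda_eq = c * d_eq                      # turnover scale in Mpc
k_eq = 2 * math.pi / lambda_eq
print(f"lambda_eq ~ {lambda_eq:.0f} Mpc,  k_eq ~ {k_eq:.3f}/Mpc")
```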
Even if pressure can be neglected for cold, collision-less dark matter, its perturbations cannot collapse towards arbitrarily small scales because of the non-zero velocity dispersion. Once the velocity of dark matter particles exceeds the escape velocity of a density perturbation, they will stream away before they can be gravitationally bound. This phenomenon is called free streaming and allows us to derive more properties of the dark matter particles from the matter power spectrum. To this end we generalize the Jeans equation of Eq. (1.90) to

\ddot{\hat\delta} + 2 H \dot{\hat\delta} = \left[ \frac{\bar\rho}{2 M_\mathrm{Pl}^2} - \left( \frac{c_s^\mathrm{eff}\, k\, a_0}{a} \right)^2 \right] \hat\delta \,,  (1.107)

where in the term counteracting the gravitational attraction the speed of sound is replaced by an effective speed of sound c_s^\mathrm{eff}, whose precise form depends on the properties of the dark matter. We show predictions for different dark matter particles in Fig. 1.5 [4]:
– for cold dark matter with

(c_s^\mathrm{eff})^2 = \frac{1}{m^2}\, \frac{\int dp\; p^2 f(p)}{\int dp\; f(p)}  (1.108)
Fig. 1.5 Sketch of the matter power spectrum for different dark matter scenarios normalized to
the ΛCDM power spectrum. Figure from Ref. [4]
the effective speed of sound is a function of temperature and mass. Warm dark
matter is faster than cold dark matter and the effective speed of sound is larger.
As a result, small structures are washed out as indicated by the blue line, because
the free streaming length for warm dark matter is larger than for cold dark matter;
– sterile neutrinos, which we will introduce in Sect. 2.1, feature

(c_s^\mathrm{eff})^2 = \frac{1}{m^2}\, \frac{\int dp\; p^2 f(p)}{\int dp\; f(p)} \,.  (1.110)
They are a special case of warm dark matter, but the result of the integral depends
on the velocity distribution, which is model-dependent. In general a suppression
of small scale structures is expected and the resulting normalized power spectrum
should end up between the two cyan lines;
– light, non-relativistic dark matter or fuzzy dark matter, which we will discuss in Sect. 2.2, gives

c_s^\mathrm{eff} = \frac{k}{m} \,.  (1.111)
The effective speed of sound depends on k, leading to an even stronger
suppression of small scale structures. The normalized power spectrum is shown
in turquoise;
– mixed warm and cold dark matter with

(c_s^\mathrm{eff})^2 = \frac{T^2}{m^2} - \frac{\bar\rho}{2 M_\mathrm{Pl}^2} \left( \frac{a}{a_0\, k} \right)^2 \frac{\hat\delta_C}{\hat\delta}  (1.112)
covers models from a dark force (dark radiation) to multi-component dark matter
that could form dark atoms. Besides a potential dark sound speed, the Jeans
equation needs to be modified by an interaction term. The effects on the power
spectrum range from dark acoustic oscillations to a suppression of structures at
multiple scales.
Now that we understand the relic photons in the Universe, we can focus on a set of other relics, including the first dark matter candidates. For those the main question is to explain the observed value of Ω_χ h² ≈ 0.12. Before we eventually turn to the thermal production of massive dark matter particles, we can use a similar approach as for the photons for relic neutrinos. Furthermore, we will look at ways to produce dark matter during the thermal history of the Universe without relying on the thermal bath.
The neutrinos and the particles of the thermal bath are related through a similar scattering rate σ_{νe}, given in Eq. (1.49).
Because the neutrino scattering cross section is small, we expect the neutrinos to decouple earlier; it turns out that this happens before nucleosynthesis. This means that for the relic neutrinos the electrons are the relevant degrees of freedom to compute the decoupling temperature, with the cross section given in Eq. (1.49).
With only one generation of neutrinos in the initial state and a purely left-handed
coupling the number of relativistic degrees of freedom relevant for this scattering
process is g = 1.
Just as for the photons, we first compute the decoupling temperature. To link the interaction rate to the Hubble constant, as given by Eq. (1.47), we need the effective number of degrees of freedom in the thermal bath. It now includes electrons, positrons, three generations of neutrinos, and photons,

g_\mathrm{eff}(T_\mathrm{dec}) = \frac{7}{8}\, (2 + 2 + 3 \times 2) + 2 = 10.75 \,.  (2.3)
With Eq. (1.47) and in analogy to Eq. (1.55) we find

\frac{\Gamma_\nu}{H} = \frac{3 \zeta_3\, g\, \alpha^2\, T^5}{4\pi^2\, s_w^4\, m_W^4}\; \frac{\sqrt{90}\, M_\mathrm{Pl}}{\pi \sqrt{g_\mathrm{eff}(T)}\; T^2} = \frac{9\sqrt{10}\, \zeta_3\, \alpha^2\, g\, M_\mathrm{Pl}\, T^3}{4\pi^3\, s_w^4 \sqrt{g_\mathrm{eff}(T)}\; m_W^4} \overset{!}{=} 1

\Leftrightarrow\quad T_\mathrm{dec} = \left( \frac{4\pi^3\, s_w^4\, m_W^4 \sqrt{g_\mathrm{eff}(T_\mathrm{dec})}}{9\sqrt{10}\, \zeta_3\, M_\mathrm{Pl}\, \alpha^2\, g} \right)^{1/3} \approx (1 \ldots 10)\ \mathrm{MeV} \,.  (2.4)
Once the electrons and positrons have annihilated, the photons and the three neutrino generations contribute

g_\mathrm{eff}(T_\mathrm{dec} \ldots T_0) = \frac{7}{8} \times 6 + 2 = 7.25  (2.5)
relativistic degrees of freedom. The decoupling of the massive electrons adds one complication: in the full thermodynamic calculation we need to take into account that their entropy is transferred to the photons, the only other particles still in equilibrium. We only quote the corresponding result from the complete calculation: because the entropy in the system should not change in this electron decoupling process, the photon temperature increases relative to the neutrino temperature, giving T_\nu/T_\gamma = (4/11)^{1/3}.
2.1 Relic Neutrinos 39
If the neutrinos and photons do not have the same temperature, we can use Eqs. (1.43) and (1.44) to obtain the combined relativistic matter density,

\rho_r(T) = g_\mathrm{eff}(T)\, \frac{\pi^2}{30}\, T^4 = \frac{\pi^2}{30} \left[ 2\, \frac{T_\gamma^4}{T^4} + \frac{7}{8}\, 6\, \frac{T_\nu^4}{T^4} \right] T^4
\quad\Rightarrow\quad \rho_r(T_\gamma) = \frac{\pi^2}{30} \left[ 2 + \frac{21}{4} \left( \frac{4}{11} \right)^{4/3} \right] T_\gamma^4 = 3.4\, \frac{\pi^2}{30}\, T_\gamma^4 \,,  (2.7)
or g_\mathrm{eff}(T) = 3.4. This assumes that we measure the current temperature of the Universe through the photons. Assuming a constant suppression of the neutrino background, its temperature and the total relativistic energy density today are

T_{0,\nu} = 1.7 \cdot 10^{-4}\ \mathrm{eV} \qquad \text{and} \qquad \rho_r(T_0) = 3.4\, \frac{\pi^2}{30}\, T_{0,\gamma}^4 = 1.1\, T_{0,\gamma}^4 \,.  (2.8)
From the composition in Eq. (2.7) we see that the current relativistic matter density of the Universe is split roughly 60–40 between the photons at T_{0,\gamma} = 2.4 \cdot 10^{-4} eV and the neutrinos at T_{0,\nu} = 1.7 \cdot 10^{-4} eV. The normalized relativistic relic density today becomes

\Omega_r(t_0)\, h^2 = \frac{\rho_r(T_0)\, h^2}{3 M_\mathrm{Pl}^2 H_0^2} = 0.54 \left( \frac{2.4 \cdot 10^{-4}\ \mathrm{eV}}{2.5 \cdot 10^{-3}\ \mathrm{eV}} \right)^4 = 4.6 \cdot 10^{-5} \,.  (2.9)
Note that for this result we assume that the neutrino mass never plays a role in our
calculation, which is not at all a good approximation.
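The ingredients of Eqs. (2.7) and (2.8) can be checked directly (a sketch; the factor (4/11)^{1/3} is the entropy-transfer result quoted above):

```python
# Checking Eqs. (2.7)-(2.9): after e+e- annihilation the neutrinos are
# colder by (4/11)^(1/3), the effective number of degrees of freedom today
# is ~3.4, and the relativistic energy splits roughly 60-40 between
# photons and neutrinos.
ratio = (4.0 / 11.0) ** (1.0 / 3.0)
print(f"T_nu/T_gamma = {ratio:.3f}")       # ~ 0.71, so T_0,nu ~ 1.7e-4 eV

g_eff = 2 + 7.0 / 8.0 * 6 * ratio**4
print(f"g_eff(T_0)   = {g_eff:.2f}")       # ~ 3.36

photon_fraction = 2 / g_eff
print(f"photon share = {photon_fraction:.2f}")   # ~ 0.6
```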
We are now in a position to answer the question whether a massive, stable fourth
neutrino could explain the observed dark matter relic density. With a moderate mass,
this fourth neutrino decouples in a relativistic state. In that case we can relate its
number density to the photon temperature through Eq. (2.7) and compute the corresponding mass density today,

\rho_\nu(T_0) = m_\nu\, n_\nu(T_0) = m_\nu\, \frac{6 \zeta_3}{11 \pi^2}\, T_{0,\gamma}^3
\quad\Rightarrow\quad \Omega_\nu h^2 = \frac{6 \zeta_3}{11 \pi^2}\, \frac{m_\nu\, T_{0,\gamma}^3\, h^2}{3 M_\mathrm{Pl}^2 H_0^2} = \frac{m_\nu}{30}\, \frac{(2.4 \cdot 10^{-4})^3}{(2.5 \cdot 10^{-3})^4}\, \frac{1}{\mathrm{eV}} = \frac{m_\nu}{85\ \mathrm{eV}} \,.  (2.11)
For an additional, heavy neutrino to account for the observed dark matter we need to require

\Omega_\nu h^2 \overset{!}{=} \Omega_\chi h^2 \approx 0.12 \quad\Leftrightarrow\quad m_\nu \approx 10\ \mathrm{eV} \,.  (2.12)
This number for hot neutrino dark matter is not unreasonable, as long as we only
consider the dark matter relic density today. The problem appears when we study
the formation of galaxies, where it turns out that dark matter relativistic at the point
of decoupling will move too fast to stabilize the accumulation of matter. We can
look at Eq. (2.12) another way: if all neutrinos in the Universe add up to more than this mass value, they predict hot dark matter with a relic density larger than the entire dark matter in the Universe. This gives a stringent upper bound on the neutrino mass scale.
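A first-principles version of Eq. (2.11) is easy to code up (a sketch; the reduced Planck mass in eV and the conversion of 100 km/(s Mpc) into eV are inserted by hand, and rounding explains the small difference to the 85 eV in the text):

```python
# First-principles version of Eq. (2.11), everything expressed in eV:
# Omega_nu h^2 = m_nu n_nu / (3 M_Pl^2 (H0/h)^2) with the relic neutrino
# number density n_nu = 6 zeta_3/(11 pi^2) T_0^3 per species.
import math

zeta3 = 1.2020569031595943
T0 = 2.35e-4                 # photon temperature today in eV
M_Pl = 2.435e27              # reduced Planck mass in eV
H0_over_h = 2.133e-33        # 100 km/(s Mpc) converted to eV

n_nu = 6 * zeta3 / (11 * math.pi**2) * T0**3      # one species, in eV^3
rho_crit_over_h2 = 3 * M_Pl**2 * H0_over_h**2     # critical density / h^2, eV^4
coeff = n_nu / rho_crit_over_h2                   # Omega_nu h^2 per eV of m_nu
print(f"Omega_nu h^2 = m_nu / ({1/coeff:.0f} eV)")   # ~ 94 eV with these inputs
```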
Before we introduce cold and much heavier dark matter, there is another scenario we
need to discuss. Following Eq. (2.12) a new neutrino with mass around 10 eV could
explain the observed relic density. The problem with thermal neutrino dark matter is
that it would be relativistic at the wrong moment of the thermal history, causing
serious issues with structure formation as discussed in Sect. 1.5. The obvious
question is if we can modify this scenario such that light dark matter remains
non-relativistic. To produce such light cold dark matter we need a non-thermal
production process.
We consider a toy model for light cold dark matter with a spatially homogeneous but time-dependent complex scalar field φ(t) with a potential V. For the latter, the Taylor expansion is dominated by a quadratic mass term m_\phi^2\, \phi^*\phi. Based on the invariant action with the additional determinant of the metric g, describing the expanding Universe, the Lagrangian for a single complex scalar field reads

\frac{1}{\sqrt{|g|}}\, \mathcal{L} = (\partial^\mu \phi^*)(\partial_\mu \phi) - V(\phi) = (\partial^\mu \phi^*)(\partial_\mu \phi) - m_\phi^2\, \phi^* \phi \,.  (2.13)

Just as a side remark, the difference between the Lagrangians for real and complex scalar fields is a set of factors 1/2 in front of each term. In our case the equation of motion is
2.2 Cold Light Dark Matter 41
0 = \partial_t\, \frac{\partial \mathcal{L}}{\partial(\partial_t \phi^*)} - \frac{\partial \mathcal{L}}{\partial \phi^*}
= \partial_t \left( \sqrt{|g|}\; \partial_t \phi \right) + \sqrt{|g|}\; m_\phi^2\, \phi
= \left( \partial_t \sqrt{|g|} \right) (\partial_t \phi) + \sqrt{|g|}\; \partial_t^2 \phi + \sqrt{|g|}\; m_\phi^2\, \phi
= \sqrt{|g|} \left[ \frac{\partial_t \sqrt{|g|}}{\sqrt{|g|}}\, (\partial_t \phi) + \partial_t^2 \phi + m_\phi^2\, \phi \right] .  (2.14)
For example from Eq. (1.13) we know that in flat space (k = 0) the determinant of the metric is |g| = a^6, giving us

0 = \frac{\partial_t a^3}{a^3}\, (\partial_t \phi) + \partial_t^2 \phi + m_\phi^2\, \phi = \frac{3 \dot a}{a}\, \dot\phi + \ddot\phi + m_\phi^2\, \phi \,.  (2.15)
Using the definition of the Hubble constant in Eq. (1.14) we find that the expansion of the Universe is responsible for the friction term in

\ddot\phi + 3 H \dot\phi + m_\phi^2\, \phi = 0 \,.  (2.16)

We can solve this equation for the evolving Universe, described by a decreasing Hubble constant with increasing time or decreasing temperature, Eq. (1.47). For each regime we assume a constant value of H (an approximation we need to check later) and find with the ansatz \phi(t) \propto e^{i\omega t} the complex eigen-frequencies

\omega = \frac{3i}{2}\, H \pm m_\phi \sqrt{1 - \frac{9 H^2}{4 m_\phi^2}} \,.  (2.17)
This functional form defines three distinct regimes in the evolution of the Universe:
– In the early Universe, H \gg m_\phi, the two solutions are ω = 0 and ω = 3iH. The scalar field value is a combination of a constant mode and an exponentially decaying mode,

\phi(t) = \phi_1 + \phi_2\, e^{-3 H t} \; \overset{\text{time evolution}}{\longrightarrow} \; \phi_1 \,.  (2.18)
The scalar field very rapidly settles in a constant field value and stays there.
There is no good reason to assume that this constant value corresponds to a
minimum of the potential. Due to the Hubble friction term in Eq. (2.16), there
42 2 Relics
is simply no time for the field to evolve towards another, minimal value. This
behavior gives the process its name, misalignment mechanism. For our dark matter considerations we are interested in the energy density. Following the virial theorem we assume that the total energy density stored in our spatially constant field is twice the average potential energy V = m_\phi^2 |\phi|^2/2. After the rapid decay of the exponential contribution this means

\rho_\phi = m_\phi^2\, |\phi_1|^2 \,.  (2.19)
– A transition point in the evolution of the Universe occurs when the evolution of the field φ switches from the exponential decay towards a constant value φ_1 to an oscillation mode. If we identify the oscillation modes of the field φ with a dark matter degree of freedom, this point in the thermal history defines the production of cold, light dark matter,

H_\mathrm{prod} \approx m_\phi \quad\Leftrightarrow\quad \omega \approx \frac{3i}{2}\, H_\mathrm{prod} \,.  (2.20)
– For the late Universe, H \ll m_\phi, we expand the complex eigen-frequency one step further,

\omega = \frac{3i}{2}\, H \pm m_\phi \sqrt{1 - \frac{9 H^2}{4 m_\phi^2}} \approx \frac{3i}{2}\, H \pm m_\phi \left( 1 - \frac{9 H^2}{8 m_\phi^2} \right) \approx \pm m_\phi + \frac{3i}{2}\, H \,.  (2.21)
(2.21)
The leading time dependence of the scalar field is an oscillation. The subleading term, suppressed by H/m_\phi, describes an exponentially decreasing field amplitude,

\phi(t) = \phi_3\; e^{\pm i m_\phi t}\; e^{-3 H t/2} \,.  (2.22)

For the energy density this implies

H = \frac{\dot a(t)}{a(t)} \quad\Rightarrow\quad a(t) \propto e^{H t}
\quad\overset{(2.22)}{\Rightarrow}\quad \rho(t) = m_\phi^2\, |\phi_3|^2\; e^{-3 H t} \propto \frac{1}{a(t)^3}
\quad\Leftrightarrow\quad \frac{\rho(t)}{\rho_0} = \frac{a_0^3}{a(t)^3} \,.  (2.23)
The energy density of the scalar field in this late regime is inversely proportional
to the space volume element in the expanding Universe. This relation is exactly
what we expect from a non-relativistic relic without any interaction or quantum
effects.
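The behavior in these three regimes can be seen by integrating Eq. (2.16) numerically (a sketch; radiation-domination H = 1/(2t), the mass m_φ = 1, and the initial time are arbitrary unit choices):

```python
# Integrating the misalignment equation (2.16), phi'' + 3 H phi' + m^2 phi = 0,
# with H = 1/(2t) (radiation domination) and m = 1 in arbitrary units:
# the field is frozen while H >> m and oscillates with a decaying
# amplitude once H << m.
def rhs(t, y):
    phi, dphi = y
    H = 1.0 / (2.0 * t)
    return [dphi, -3.0 * H * dphi - phi]          # m = 1

def rk4_step(t, y, dt):
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, [y[i] + dt / 2 * k1[i] for i in range(2)])
    k3 = rhs(t + dt / 2, [y[i] + dt / 2 * k2[i] for i in range(2)])
    k4 = rhs(t + dt, [y[i] + dt * k3[i] for i in range(2)])
    return [y[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]

t, y, dt = 0.01, [1.0, 0.0], 0.001
frozen_value = None
while t < 200.0:
    y = rk4_step(t, y, dt)
    t += dt
    if frozen_value is None and t >= 0.5:
        frozen_value = y[0]

print(f"phi(t=0.5) = {frozen_value:.3f}")   # ~ 0.95: still frozen, H ~ m
print(f"phi(t=200) = {y[0]:.4f}")           # |phi| < 0.02: damped oscillations
```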
Next, we can use Eq. (2.23) combined with the assumption of constant H to approximately relate the dark matter relic density at the time of production to today, with a_0 = 1,

\Omega_\phi h^2 = \frac{\rho(t_0)\, h^2}{3 M_\mathrm{Pl}^2 H_0^2} = \frac{m_\phi^2\, |\phi(t_\mathrm{prod})|^2\; a(t_\mathrm{prod})^3\, h^2}{3 M_\mathrm{Pl}^2 H_0^2} \,.  (2.24)
Using our thermodynamic result a(T) ∝ 1/T from Eq. (1.33) and the approximate relation between the Hubble parameter and the temperature at the time of production we find

\sqrt{0.12} = \frac{m_\phi\; \phi(t_\mathrm{prod})\; T_0^{3/2}}{(2.5 \cdot 10^{-3}\ \mathrm{eV})^2\; T_\mathrm{prod}^{3/2}}\; h \;\overset{(1.47)}{\approx}\; \frac{m_\phi\; \phi(t_\mathrm{prod})\; T_0^{3/2}}{(2.5 \cdot 10^{-3}\ \mathrm{eV})^2\; (H_\mathrm{prod} M_\mathrm{Pl})^{3/4}}\; h \,.  (2.25)
Moreover, from Eq. (2.20) we know that the Hubble constant at the time of dark matter production is H_\mathrm{prod} \sim m_\phi. This leads us to the relic density condition for dark matter produced by the misalignment mechanism,

0.35 = \frac{m_\phi\; \phi(t_\mathrm{prod})\; T_0^{3/2}}{(2.5 \cdot 10^{-3}\ \mathrm{eV})^2\; (m_\phi M_\mathrm{Pl})^{3/4}}\; h \,.  (2.26)
This is the general relation between the mass of a cold dark matter particle and
its field value, based on the observed relic density. If the misalignment mechanism
is to be responsible for today's dark matter, inflation occurring after the field φ
has picked its non-trivial starting value has to provide the required spatial
homogeneity. This is exactly the same argument we used for the relic photons in
Sect. 1.4. We can then link today’s density to the density at an early starting point
through the evolution sketched above.
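Eq. (2.26) can be solved for the field value φ(tprod) that reproduces the observed relic density for a given mass. A rough numerical sketch; the inputs T0 ≈ 2.35·10⁻⁴ eV, the reduced Planck mass MPl ≈ 2.4·10¹⁸ GeV and h ≈ 0.7 are our assumptions, not values fixed by the text:

```python
# Solve Eq. (2.26) for phi(t_prod):
#   0.35 = m_phi * phi * T0^(3/2) * h / ((2.5e-3 eV)^2 (m_phi MPl)^(3/4))
# Assumed inputs: T0 = 2.35e-4 eV, reduced Planck mass, h = 0.7
T0 = 2.35e-4          # photon temperature today [eV]
MPl = 2.435e27        # reduced Planck mass [eV]
h = 0.7

def phi_required(m_phi_eV):
    """Field value [eV] saturating the relic density for a given mass [eV]."""
    return 0.35 * (2.5e-3)**2 * (m_phi_eV * MPl)**0.75 / (m_phi_eV * T0**1.5 * h)

phi = phi_required(2e-6)                      # an axion-scale mass of 2e-6 eV
print(f"phi(t_prod) = {phi / 1e9:.2e} GeV")   # of order 1e13 GeV
```

For mφ = 2·10⁻⁶ eV the required field value comes out near 10¹³ GeV, which anticipates the axion numbers quoted in the next section.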
Before we illustrate this behavior with a specific model we can briefly check
when and why this dark matter candidate is non-relativistic. If through some
unspecified quantization we identify the field oscillations of φ with dark matter
particles, their non-relativistic velocity is linked to the field value φ through

v = p̂/m ∝ ∂φ/∂x ,   (2.27)
assuming an appropriate normalization by the field value φ. It can be small, provided
we find a mechanism to keep the field φ spatially constant. What is nice about this
model for cold, light dark matter is that it requires absolutely no particle physics
calculations, no relativistic field theory, and can always be tuned to work.
2.3 Axions
The best way to guarantee that a particle is massless or light is through a symmetry
in the Lagrangian of the quantum field theory. For example, if the Lagrangian
for a real spin-0 field φ(x) ≡ a(x) is invariant under a constant shift a(x) →
a(x) + c, a mass term m2a a 2 breaks this symmetry. Such particles, called Nambu-
Goldstone bosons, appear in theories with broken global symmetries. Because most
global symmetry groups are compact or have hermitian generators and unitary
representations, the Nambu-Goldstone bosons are usually CP-odd.
We illustrate their structure using a complex scalar field transforming under a
U (1) rotation, φ → φ eia/fa . A vacuum expectation value φ = fa leads to
spontaneous breaking of the U (1) symmetry, and the Nambu-Goldstone boson a
will be identified with the broken generator of the phase. If the complex scalar
has couplings to chiral fermions ψL and ψR charged under this U (1) group, the
Lagrangian includes the terms
L ⊃ iψ L γ μ ∂μ ψL + iψ R γ μ ∂μ ψR − y φ ψ R ψL + h.c. (2.28)
We can rewrite the Yukawa coupling such that after the rotation the phase is
absorbed in the definition of the fermion fields,
y φ ψ̄R ψL → y fa ψ̄R e^(ia/fa) ψL ≡ y fa ψ̄′R ψ′L   with   ψ′R,L = e^(∓ia/(2fa)) ψR,L .   (2.29)
This gives us a fermion mass mψ = yfa . In the new basis the kinetic terms read
i ψ̄L γ^μ ∂μ ψL + i ψ̄R γ^μ ∂μ ψR
 = i ψ̄L e^(−ia/(2fa)) γ^μ ∂μ [e^(ia/(2fa)) ψL] + i ψ̄R e^(ia/(2fa)) γ^μ ∂μ [e^(−ia/(2fa)) ψR]
 = i ψ̄L γ^μ [∂μ + i (∂μ a)/(2fa)] ψL + i ψ̄R γ^μ [∂μ − i (∂μ a)/(2fa)] ψR + O(fa⁻²)
 = i ψ̄ γ^μ ∂μ ψ + (∂μ a)/(2fa) ψ̄ γ^μ γ5 ψ + O(fa⁻²) ,   (2.30)
where in the last line we define the four-component spinor ψ ≡ (ψL , ψR ). The
derivative coupling and the axial structure of the new particle a are evident. Other
structures arise if the underlying symmetry is not unitary, as is the case for space-
time symmetries for which the group elements can be written as eα/f and a
calculation analogous to Eq. (2.30) leads to scalar couplings. The Nambu-Goldstone
boson of the scale symmetry, the dilaton, is an example of such a case.
Following Eq. (2.30), the general shift-symmetric Lagrangian for such a CP-odd
pseudo-scalar reads
L = (1/2)(∂μ a)(∂^μ a) + (a/fa)(αs/(8π)) Gᵃμν G̃^(a μν) + cγ (a/fa)(α/(8π)) Fμν F̃^(μν) + Σψ cψ (∂μ a)/(2fa) ψ̄ γ^μ γ5 ψ .   (2.31)

The gluonic coupling is closely tied to the strong CP problem: the QCD Lagrangian can also contain the topological term

θQCD (αs/(8π)) Gᵃμν G̃^(a μν) ,   (2.32)

which
respects the SU (3) gauge symmetry, but would induce observable CP-violation,
for example a dipole moment for the neutron. It is non-trivial that this operator
cannot be ignored: it looks like a total derivative, but due to the topological
structure of SU(3) it does not vanish. The non-observation of a neutron dipole
moment sets the very strong constraint θQCD < 10⁻¹⁰. It almost looks as if this
operator should not be there, and yet there is no symmetry in the Standard Model
that forbids it.
Combining the gluonic operators in Eqs. (2.31) and (2.32) allows us to solve this problem,

L = (1/2)(∂μ a)(∂^μ a) + (a/fa − θQCD)(αs/(8π)) Gᵃμν G̃^(a μν) + cγ (a/fa)(α/(8π)) Fμν F̃^(μν) + Σψ cψ (∂μ a)/(2fa) ψ̄ γ^μ γ5 ψ .   (2.33)
With this ansatz we can combine the θ -parameter and the scalar field, such that after
quarks and gluons have formed hadrons, we can rewrite the corresponding effective
Lagrangian including the terms
Leff ⊃ (1/2)(∂μ a)(∂^μ a) − (1/2) κ² (θQCD − a/fa)² − λa (θQCD − a/fa)⁴ + O(fa⁻⁶) .   (2.34)
The parameters κ and λa depend on the QCD dynamics. This contribution provides
a potential for a with a minimum at a/fa = θQCD . In other words, the shift
symmetry has eliminated the CP-violating gluonic term from the theory. Because
of its axial couplings to matter fields, the field a is called axion.
The axion would be a bad dark matter candidate if it was truly massless. However,
the same effects that induce a potential for the axion also induce an axion mass.
Indeed, from Eq. (2.34) we immediately see that
ma² ≡ ∂²V/∂a² |_(a = fa θQCD) = κ²/fa² .   (2.35)
This seems like a contradiction, because a mass term breaks the shift symmetry
and for a true Nambu-Goldstone boson, we expect this term to vanish. However,
in the presence of quark masses, the transformations in Eq. (2.30) do not leave the
Lagrangian invariant under the shift symmetry
mq ψ̄R ψL → mq ψ̄R e^(2ia/fa) ψL .   (2.36)
Fermion masses lead to an explicit breaking of the shift symmetry and turn the axion
into a pseudo-Nambu-Goldstone boson, similar to the pions in QCD. For more than one
quark flavor it suffices to have a single massless quark to recover the shift symmetry.
We can determine this mass term from a chiral Lagrangian in which the fundamental
fields are hadrons instead of quarks and find
ma² = mu md mπ² fπ² / [(mu + md)² fa²] ≈ mπ² fπ² / (2fa)² ,   (2.37)
where fπ ≈ mπ ≈ 140 MeV are the pion decay constant and mass, respectively.
This term vanishes in the limit mu → 0 or md → 0 as we expect from the discussion
above. In the original axion proposal, fa ∼ v and therefore ma ∼ 10 keV. Since the
couplings of the axion are also fixed by the value of fa, such a particle was quickly
excluded by searches for rare kaon decays, for instance K⁺ → π⁺ a. In general, fa
is a free parameter, and for larger fa the axion becomes lighter and more weakly
coupled.
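The ma-fa relation of Eq. (2.37) is easy to evaluate; a minimal sketch using the approximate form ma ≈ mπfπ/(2fa), where the dropped mu, md prefactor only gives an O(1) correction:

```python
# Axion mass from Eq. (2.37), in the approximate form m_a ~ m_pi f_pi / (2 f_a)
m_pi = f_pi = 0.140   # GeV, as quoted in the text

def m_a_eV(f_a_GeV):
    """Axion mass in eV for a given decay constant in GeV."""
    return m_pi * f_pi / (2 * f_a_GeV) * 1e9   # convert GeV -> eV

print(m_a_eV(1e13))   # ~1e-6 eV: the dark matter regime quoted below
print(m_a_eV(246))    # f_a ~ v: tens of keV, the excluded original proposal
```

The inverse scaling ma ∝ 1/fa is what pushes viable axion dark matter to very small masses and very large decay constants.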
This leaves the question for which model parameters the axion makes a good dark
matter candidate. Since the value of the axion field is not necessarily at the minimum
of the potential at the time of the QCD phase transition, the axion begins to oscillate
around the minimum, and the oscillation energy density contributes to the dark
matter relic density. This is a special version of the more general misalignment
mechanism described in the previous section. We can then employ Eq. (2.26) and
find the relation for the observed relic density

0.35 = ma a(tprod) T0^(3/2) h / [(2.5·10⁻³ eV)² (ma MPl)^(3/4)] .   (2.38)

The maximum field value of the oscillation mode is given by a(tprod) ≈ fa and
therefore

0.35 ≈ ma fa T0^(3/2) h / [(2.5·10⁻³ eV)² (ma MPl)^(3/4)] .   (2.39)

This relation holds for ma ≈ 2·10⁻⁶ eV, which corresponds to fa ≈ 10¹³ GeV. For
heavier axions and smaller values of fa, the axion can still constitute a part of the
relic density. For example, with a mass of ma = 6·10⁻⁵ eV and fa ≈ 3·10¹¹ GeV,
axions make up one per-cent of the observed relic density.
Dark matter candidates with such low masses are hard to detect, and we usually
take advantage of their couplings to photons. In Eq. (2.31), there is no reason why
the coupling cγ needs to be there. It is neither relevant for the strong CP problem nor
for the axion to be dark matter. However, from the perspective of the effective theory
we expect all couplings which are allowed by the assumed symmetry structure to
appear. This includes the axion coupling to photons. In the complete theory, the
axion coupling to gluons needs to be induced by some physics at the mass scale fa.
This can be achieved by axion couplings to SM quarks, or by axion couplings to
non-SM fields that are color-charged but electrically neutral. Even in the latter case
there is a non-zero coupling to photons, induced by the axion mixing with the SM
pion after the QCD phase transition. Apart from very fine-tuned models the axion
therefore couples to photons with an order-one coupling constant cγ.
In Fig. 2.1 the yellow band shows the range of axion couplings to photons for
which the models solve Eq. (2.37). The regime where the axion is a viable dark
matter candidate is dashed. It is notoriously hard to probe axions in the parameter
space in which they can constitute dark matter. Helioscopes try to convert axions
produced in the sun into observable photons through a strong magnetic field.
Haloscopes like ADMX use the same strategy to search for axions in the dark matter
halo.
Fig. 2.1 Range of masses and couplings for which the axion can be a viable cold dark matter
candidate. Figure from Ref. [1]
The same axion coupling to photons that we rely on for axion detection also
allows for the decay a → γγ. This looks dangerous for a dark matter candidate, so
we can estimate the corresponding decay width,

Γ(a → γγ) = |cγ|² α³ ma³ / (256 π³ fa²)
 = |cγ|² (1/137³) (1/(256 π³)) (6·10⁻⁶ eV)³/(10²² eV)² ≈ 1·10⁻⁷⁰ eV |cγ|² .   (2.40)
This width corresponds to a lifetime far beyond the age of the Universe, so the
decay is harmless. For a more general axion-like particle we can replace the QCD-
induced mass of Eq. (2.37) by a mass generated at another scale and combine it
with the misalignment condition of Eq. (2.26),

ma = μ²/fa  ⇒  ma = [μ^(8/3)/MPl] eV^(−2/3) ,   (2.41)

where μ is a mass scale not related to QCD. In such models, the axion-like
pseudoscalar can be very light. For example, for μ ≈ 100 eV the axion mass is
ma ≈ 10⁻²² eV. For such a low mass and a typical velocity of v ≈ 100 km/s, the
de-Broglie wavelength is around 1 kpc, the size of a galaxy. This type of dark matter
is called fuzzy dark matter and can inherit the interesting properties of Bose-Einstein
condensates or super-fluids.
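Two numbers quoted above can be checked directly: the lifetime implied by Eq. (2.40), and the de-Broglie wavelength of a 10⁻²² eV particle. The conversion constants ħ = 6.58·10⁻¹⁶ eV s, ħc = 1.97·10⁻⁷ eV m and the age of the Universe ≈ 4.4·10¹⁷ s are our inputs, not values quoted in the text:

```python
import math

hbar = 6.582e-16      # eV s
hbar_c = 1.973e-7     # eV m
kpc = 3.086e19        # m

# Lifetime from Gamma ~ 1e-70 eV |c_gamma|^2, Eq. (2.40), for |c_gamma| = 1
tau = hbar / 1e-70                                     # seconds
print(f"tau / age of Universe ~ {tau / 4.4e17:.1e}")   # vastly stable

# de-Broglie wavelength of fuzzy dark matter: lambda = 2 pi hbar c / (m v/c)
m_a, v_over_c = 1e-22, 100e3 / 3e8                     # eV, v = 100 km/s
lam = 2 * math.pi * hbar_c / (m_a * v_over_c)
print(f"lambda ~ {lam / kpc:.1f} kpc")                 # around a kpc, galactic size
```

Both checks confirm the statements in the text: the decay is utterly negligible, and the wave nature of fuzzy dark matter becomes relevant on galactic scales.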
2.4 Matter vs Anti-matter

Before we look at a set of relics linked to dark matter, let us follow a famous
argument fixing the conditions which allow us to live in a Universe dominated
by matter rather than anti-matter. In this section we will largely follow Kolb and
Turner [2]. The observational reasoning why matter dominates the Universe goes in
two steps: first, matter and anti-matter cannot be mixed, because we do not observe
constant macroscopic annihilation; second, if we separate matter and anti-matter we
should see a boundary with constant annihilation processes, which we do not see,
either. So there cannot be too much anti-matter in the Universe.
The corresponding measurement is usually formulated in terms of the observed
baryons, protons and neutrons, relative to the number of photons,
nB/nγ ≈ 6·10⁻¹⁰ .   (2.42)
The normalization to the photon density is motivated by the fact that this ratio should
be of order unity in the very early Universe. Effects of the Universe's expansion and
cooling cancel to first approximation. The choice is only indirectly related to the
observed number of photons; it rather uses the photon density as a measure of the
entropy density in thermal equilibrium. We already used this number in Eq. (1.53).
To understand Eq. (2.42) we start by remembering that in the hot universe
anti-quarks and quarks or anti-baryons and baryons are pair-produced out of a
thermal bath and annihilate with each other in thermal equilibrium. Following the
same argument as for the photons, the baryons and anti-baryons decouple from
each other when the temperature drops enough. In this scenario we can estimate
the ratio of baryon and photon densities from Eq. (1.40), assuming for example
T = 20 MeV and mB = 1 GeV,

nB(T)/nγ(T) = nB̄(T)/nγ(T) = gB (mB T/(2π))^(3/2) e^(−mB/T) / [gγ (ζ3/π²) T³]
 = (gB/gγ) [√π/(2√2 ζ3)] (mB/T)^(3/2) e^(−mB/T) = 3.5·10⁻²⁰ .   (2.43)
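The number on the right-hand side of Eq. (2.43) can be verified in a few lines (with ζ3 ≈ 1.202):

```python
import math

zeta3 = 1.2020569
g_B = g_gamma = 2
m_B, T = 1.0, 0.020    # GeV, the values assumed in the text

x = m_B / T
ratio = (g_B / g_gamma) * math.sqrt(math.pi) / (2 * math.sqrt(2) * zeta3) \
        * x**1.5 * math.exp(-x)
print(f"{ratio:.2e}")   # ~3.5e-20, as in Eq. (2.43)
```

The tiny value is entirely driven by the Boltzmann factor e^(−50); shifting the decoupling temperature by a factor of two changes the result by many orders of magnitude.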
One way of looking at the baryon asymmetry is that, independent of the actual anti-
baryon density, the density of baryons observed today is much larger than what
we would expect from thermal production. While we will see that for dark matter
the problem is to get their interactions just right to produce the correct freeze-out
density, for baryons the problem is to avoid their annihilation as much as possible.
We can think of two ways to avoid such an over-annihilation in our thermal
history. First, there could be some kind of mechanism stopping the annihilation
of baryons and anti-baryons when nB /nγ reaches the observed value. The problem
with this solution is that we would still have to do something with the anti-baryons,
as discussed above.
The second solution is to assume that through the baryon annihilation phase
there exists an initially small asymmetry, such that almost all anti-baryons annihilate
while the observed baryons remain. As a rough estimate, neglecting all degrees of
freedom and differences between fermions and bosons, we assume that in the hot
thermal bath we start with roughly as many baryons as photons. After cooling we
assume that the anti-baryons reach their thermal density given in Eq. (2.43), while
the baryons through some mechanism arrive at today’s density given in Eq. (2.42).
The baryon vs anti-baryon asymmetry starting at an early time then becomes
nB − nB̄ nB n cooling
≈ − B̄ −→ 6 · 10−10 − 3.5 · 10−20 ≈ 6 · 10−10 . (2.44)
nB nγ nγ
If we do the proper calculation, the correct number for a net quark excess in the
early Universe comes out around
(nB − nB̄)/nB ≈ 3·10⁻⁸ .   (2.45)
In the early Universe we start with this very small net asymmetry between the
very large individual densities of baryons and anti-baryons. Rather than through the
freeze-out mechanism introduced for neutrinos in Sect. 2.1, the baryons decouple
when all anti-baryons are annihilated away. This mechanism can explain the very
large baryon density measured today. The question is now how this asymmetry
occurs at high temperatures.
Unlike the rest of the lecture notes, the discussion of the matter anti-matter
asymmetry is not aimed at showing how the relic densities of the two species are
computed. Instead, we will get to the general Sakharov conditions which tell us
what ingredients our theory has to have to generate a net baryon excess in the early
Universe, where we naively would expect the number of baryons and anti-baryons
(or quarks and anti-quarks) to be exactly the same and in thermal equilibrium. Let
us go through these conditions one by one:
Baryon number violation—to understand this condition we just need to remem-
ber that we want to generate a different density of baryons (baryon number B = +1)
and anti-baryons (baryon number B = −1) dynamically during the evolution of the
Universe. We assume that our theory is described by a Lagrangian including finite
temperature effects. If our Lagrangian is fully symmetric with respect to exchanging
particles and anti-particles, no asymmetry can be generated dynamically. As a toy
example for baryon number violation, consider a heavy boson X with the two decay
channels

X → dd   and   X → d̄ e⁺ ,   (2.46)

where the d quark carries baryon number 1/3. A scattering process induced by these
two interactions,

dd → X* → d̄ e⁺ ,   (2.47)
links an initial state with B = 2/3 to a final state with B = −1/3. The combination
B − L is instead conserved. Such heavy bosons can appear in grand unified theories.
In the Standard Model the situation is a little more complicated: instead of the
lepton number L and the baryon number B individually, the combination B − L is
indeed an (accidental) global symmetry of the electroweak interaction to all orders.
In contrast, the orthogonal B +L is anomalous, i.e. there are quantum contributions
to scattering processes which respect B − L but violate B + L. One can show that
non-perturbative finite-temperature sphaleron processes can generate the combined
state
ε_ijk uL,i dL,j uL,k eL + · · ·   (2.48)
for one generation of fermions with SU (2)L indices i, j, k out of the vacuum. It
violates lepton and baryon number,
ΔL = 1 ,   ΔB = 1 ,   Δ(B − L) = 0 .   (2.49)
At zero temperature such sphaleron transitions are non-perturbative and exponentially suppressed in the weak coupling g ≈ 0.7. At high temperatures their rate increases significantly. The main
effect of such interactions is that we can replace the condition of baryon number
violation with lepton number violation when we ensure that sphaleron-induced
processes transform a lepton asymmetry into a baryon asymmetry and neither of
them gets washed out. This process is called leptogenesis rather than baryogenesis.
Departure from thermal equilibrium—in our above setup we can see what
assumptions we need to be able to generate a net baryon asymmetry from the
interactions given in Eq. (2.46) and the scattering given in Eq. (2.47). If we follow
the reasoning for the relic photons we reduce the temperature until the two sides
of the 2 → 2 scattering process in Eq. (2.47) drop out of thermal equilibrium.
Our Universe could settle on one of the two sides of the scattering process, i.e.
either with a net excess of d over d̄ particles or vice versa. The problem is that the
process d̄ d̄ → X̄* → d e⁻ with mX = mX̄ is protected by CPT invariance and will
compensate everything exactly.
The more promising approach is out-of-equilibrium decays of the heavy X
boson. This means that a population of X and X̄ bosons decouples from the thermal
bath early and induces the baryon asymmetry through late decays, preferentially into
quarks or anti-quarks. In both cases we see that baryon number violating interactions
require a departure from thermal equilibrium to generate a net baryon asymmetry in
the evolution of the Universe.
In the absence of late-decaying particles, for example in the Standard Model,
we need to rely on another mechanism to deviate from thermal equilibrium. The
electroweak phase transition, like any phase transition, can proceed in two ways:
if the phase transition is of first order the Higgs potential develops a non-trivial
minimum while we are sitting at the unbroken field value φ = 0. At the critical
temperature the broken minimum becomes the global minimum of the potential and
we have to tunnel there. The second order phase transition instead develops the
broken minimum smoothly such that there is never a potential barrier between the
two and we can smoothly transition into the broken minimum around the critical
temperature. For a first-order phase transition different regions of the Universe will
switch to the broken phase at different times, starting with expanding bubbles of
broken phase regions. At the bubble surface the thermal equilibrium will be broken,
allowing for a generation of the baryon asymmetry through the electroweak phase
transition. Unfortunately, the Standard Model Higgs mass would have had to be
below 60 GeV to allow for this scenario.

C and CP violation—this condition appears more indirectly.
First, even if we assume that a transition of the kind shown in Eq. (2.46) exists we
need to generate a baryon asymmetry from these decays where the heavy state and
its anti-particle are produced from the vacuum. Charge conjugation links particles
and anti-particles, which means that C conservation implies

Γ(X → dd) = Γ(X̄ → d̄ d̄) .   (2.50)

In that case there will always be the same numbers of baryons d and anti-baryons
d̄ on average in the system. We only quote the statement that statistical fluctuations
of the baryon and anti-baryon numbers are not large enough to explain the global
asymmetry observed.
Next we assume a theory where C is violated, but CP is intact. This could for
example be the electroweak Standard Model with no CP-violating phases in the
quark and lepton mixing matrices. For our toy model we introduce a quark chirality
qL,R which violates parity P but restores CP as a symmetry. For our decay widths
transforming under C and CP this means

Γ(X → dL dL) = Γ(X̄ → d̄L d̄L) under C ,   Γ(X → dL dL) = Γ(X̄ → d̄R d̄R) under CP ,   (2.51)

so that for conserved CP the decay widths summed over both chiralities are still
equal. This means unless C and CP are both violated, there will be no baryon asymmetry
from X decays to d quarks.
In the above argument there is, strictly speaking, one piece missing: if we assume
that we start with the same number of X and X̄ bosons out of thermal equilibrium,
once all of them have decayed to dd and d̄ d̄ pairs irrespective of their chirality there
is again no asymmetry between d and d̄ quarks in the Universe. An asymmetry
only occurs if a competing X decay channel produces a different number of baryons
and allows the different partial widths to generate a net asymmetry. This is why we
include the second term in Eq. (2.46). Assuming C and CP violation it implies

Γ(X → dd) ≠ Γ(X̄ → d̄ d̄)   with   Γ(X → dd) + Γ(X → d̄ e⁺) = Γ(X̄ → d̄ d̄) + Γ(X̄ → d e⁻) ,   (2.52)

where the sum of the partial widths is fixed by CPT.
2.5 Asymmetric Dark Matter

Starting from the similarity of the measured baryon and dark matter densities in
Eq. (1.75)
Ωχ/Ωb = 0.12/0.022 = 5.5 ,   (2.53)
an obvious question is if we can link these two matter densities. We know that the
observed baryon density in the Universe today is not determined by a thermal freeze-
out, but by an initial small asymmetry between the baryon and anti-baryon densities.
If we assume that dark matter is very roughly as heavy as baryons, that dark matter
states carry some kind of charge which defines dark matter anti-particles, and that
the baryon and dark matter asymmetries are linked, we can hope to explain the
observed dark matter relic density. Following the leptogenesis example we could
assume that the sphaleron transition not only breaks B + L, but also some kind of
dark matter number D. Dark matter is then generated thermally, but the value of
the relic density is not determined by thermal freeze-out. Still, from the structure
formation constraints discussed in Sect. 1.5 we know that the dark matter agent
should not be too light.
First, we can roughly estimate the dark matter masses this scenario predicts.
From Sect. 2.4 we know how little we understand about the mechanism of gener-
ating the baryon asymmetry in models structurally similar to the Standard Model.
For that reason, we start by just assuming that the particle densities of the baryons
and of dark matter trace each other through some kind of mechanism,
nχ (T ) ≈ nB (T ) . (2.54)
This will start in the relativistic regime and remain true after the two sectors
decouple from each other and both densities get diluted through the expansion of
the Universe. For the observed densities by PLANCK we use the non-relativistic
relation between number and energy densities in Eqs. (1.40) and (1.41),
Ωχ/Ωb = ρχ/ρB = mχ nχ/(mB nB) ≈ mχ/mB  ⇔  mχ ≈ 5.5 mB ≈ 5 GeV .   (2.55)
Corrections to this relation can arise from the mechanism linking the two asymme-
tries.
Alternatively, we can assume that at the temperature Tdec at which the link
between the baryons and the dark matter decouples, the baryons are relativistic and
dark matter is non-relativistic. For the two energy densities this means
ρχ(Tdec) = mχ nχ(Tdec) ≈ mχ nB(Tdec) = mχ (30 ζ3/π⁴) ρB(Tdec)/Tdec
 ⇒  mχ/Tdec = [ρχ(Tdec)/ρB(Tdec)] π⁴/(30 ζ3) ≈ 15 .   (2.56)

The relevant temperature is determined by the interactions between the baryonic and
the dark matter sectors. However, in general this scenario will allow for heavy dark
matter, mχ ≫ mB.
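Both mass estimates can be checked in one line each: Eq. (2.55) gives mχ ≈ 5.5 mB, and Eq. (2.56) gives mχ/Tdec = 5.5 · π⁴/(30 ζ3). A minimal sketch, taking mB ≈ 0.94 GeV as the nucleon mass:

```python
import math

zeta3 = 1.2020569
m_B = 0.94                             # GeV, nucleon mass (our input)

m_chi = 5.5 * m_B                      # Eq. (2.55): ~5 GeV
x = 5.5 * math.pi**4 / (30 * zeta3)    # Eq. (2.56): ~15
print(f"m_chi ~ {m_chi:.1f} GeV, m_chi/T_dec ~ {x:.1f}")
```

The first scenario ties the dark matter mass to the GeV scale, while the second only fixes the ratio mχ/Tdec and therefore allows much heavier candidates.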
In a second step we can analyze what kind of dark matter annihilation rates
are required in the asymmetric dark matter scenario. Very generally, we know the
decoupling condition of a dark matter particle of the thermal bath of Standard Model
states from the relativistic case. The mediating process can include a Standard
Model fermion, χf → χf . The corresponding annihilation process for dark matter
which is not its own anti-particle is
χ χ̄ → f f¯ . (2.57)
As long as these scattering processes are active, the dark matter agent follows the
decreasing temperature of the light Standard Model states in an equilibrium between
production out of the thermal bath and annihilation. At some point, dark matter
freezes out of the thermal bath, and its density is only reduced by the expansion of
the Universe. For the annihilation cross section of such a light dark matter agent we
can for example assume

σχχ ≈ π αχ²/mχ² ,   (2.59)
with the generic dark matter coupling αχ to a dark gauge boson or another light
mediator. For heavier dark matter we will see in Sect. 4.1 how we can achieve large
annihilation rates through a 2 → 1 annihilation topology.
References
1. Tanabashi, M., et al.: [Particle data group], Review of particle physics. Phys. Rev. D 98(3),
030001 (2018)
2. Kolb, E.W., Turner, M.S.: The early Universe. Front. Phys. 69, 1 (1990)
Chapter 3
Thermal Relic Density
After introducing the observed relic density of photons in Sect. 1.3 and the observed
relic density of neutrinos in Sect. 2.1 we will now compute the relic density of a
hypothetical massive, weakly interacting dark matter agent. As for the photons and
neutrinos we assume dark matter to be created thermally, and the observed relic
density to be determined by the freeze-out combined with the following expansion
of the Universe. We will focus on masses of at least a few GeV, which guarantees
that dark matter will be non-relativistic when it decouples from thermal equilibrium.
At this point we do not have specific particles in mind, but in Chap. 4 we will
illustrate this scenario with a set of particle physics models.
The general theme of this chapter and the following Chaps. 5–7 is the typical
four-point interaction of the dark matter agent with the Standard Model. For
illustration purposes we assume the dark matter agent to be a fermion χ and the
Standard Model interaction partner a fermion f :
[Feynman diagram: a four-point interaction connecting two dark matter legs χ, χ and two Standard Model fermion legs f, f]
Unlike for asymmetric dark matter, in this process it does not matter if the
dark matter agent has an anti-particle χ̄, or if it is its own anti-particle, χ = χ̄.
This Feynman diagram, or more precisely this amplitude, mediates three different
scattering processes:
– left-to-right we can compute dark matter annihilation, χ χ̄ → f f¯, see this
chapter and Chaps. 4 and 5;
– bottom-to-top it describes dark matter scattering of visible matter χf → χf , see
Chap. 6;
– right-to-left it describes dark matter pair-production, f f¯ → χ χ̄, see Chap. 7.
This strong link between very different observables is what makes dark matter so
interesting for particle physicists, including the possibility of global analyses for any
model which can predict this amplitude. Note also that we will see how different the
kinematics of the different scattering processes actually are.
3.1 WIMP Miracle

As for the relativistic neutrinos, we will first avoid solving the full Boltzmann
equation for the number density as a function of time. Instead, we assume that some
kind of interaction keeps the dark matter particle χ in thermal equilibrium with the
Standard Model particles and at the same time able to annihilate. At the point of
thermal decoupling the dark matter freezes out with a specific density. As for the
neutrinos, the underlying process is described by the matrix element for dark matter
annihilation
χχ → f f¯ . (3.1)
As in Eq. (1.51) the interaction rate Γ corresponding to this scattering process just
compensates the increasing scale factor at the point of decoupling,
Γ(Tdec) = H(Tdec) .   (3.2)

For the annihilation cross section we assume a generic weak-interaction form,

σχχ(T ≪ mχ) = π α² mχ² / (cw⁴ mZ⁴) .   (3.3)
This formula combines the dark matter mass mχ with a weak interaction represented
by a 1/mZ⁴ suppression, implicitly assuming mχ < mZ. We will check this
assumption later. Following Eq. (2.2) we can use the non-relativistic number density.
For the non-relativistic decoupling we should not assume v = 1, as we did before.
Given the limited number of energy scales in our description we instead estimate
very roughly
(mχ/2) v² = T  ⇔  v = √(2T/mχ) ,   (3.4)
remembering that we need to check this later. Moreover, we set the number of
relevant degrees of freedom of the dark matter agent to g = 2, corresponding for
example to a complex scalar or a Majorana fermion. In that case the condition of
dark matter freeze-out is

g (mχ Tdec/(2π))^(3/2) e^(−xdec) σχχ √(2/xdec) = H(Tdec) = (π/3) √(geff(Tdec)/10) Tdec²/MPl .   (3.5)
Note how in this calculation the explicit temperature dependence drops out. This
means the result can be considered an equation for the ratio xdec . If we want to
include the temperature dependence of geff we cannot solve this equation in a closed
form, but we can estimate the value of xdec . First, we can use the generic electroweak
annihilation cross section from Eq. (3.3) to find
e^(−xdec) = [√π π cw⁴ mZ⁴ / (3√10 α² mχ³ MPl)] √(geff(Tdec)) .   (3.7)
Next, we assume that most of the Standard Model particles contribute to the active
degrees of freedom. From Eq. (1.45) we know that the full number gives us geff =
106.75. In the slightly lower range Tdec = 5 . . . 80 GeV the weak bosons and the
top quark decouple, and Eq. (1.44) gives the slightly reduced value
geff(Tdec) = (8×2 + 2) + (7/8)(5×3×2×2 + 3×2×2 + 3×2) = 18 + (7/8)·78 = 86.25 .   (3.8)
As a benchmark we will use mχ = 30 GeV with xdec ≈ 23 from now on. We need to
eventually check these assumptions, but because of the leading exponential depen-
dence we expect this result for xdec to be insensitive to our detailed assumptions.
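Eq. (3.7) can be solved for xdec numerically. The sketch below uses α = 1/137, sin²θw = 0.23, mZ = 91.19 GeV, geff = 86.25 and the reduced Planck mass MPl ≈ 2.4·10¹⁸ GeV; the choices of α and MPl are our assumptions, since the text does not fix them explicitly:

```python
import math

alpha, sw2 = 1 / 137.0, 0.23
mZ, MPl = 91.19, 2.435e18       # GeV; reduced Planck mass (our choice)
g_eff, m_chi = 86.25, 30.0      # benchmark values of the text

cw4 = (1 - sw2)**2
# right-hand side of Eq. (3.7)
rhs = (math.sqrt(math.pi) * math.pi * cw4 * mZ**4 * math.sqrt(g_eff)
       / (3 * math.sqrt(10) * alpha**2 * m_chi**3 * MPl))
x_dec = -math.log(rhs)
print(f"x_dec = {x_dec:.1f}")   # close to the benchmark x_dec ~ 23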
Following Eqs. (1.40) and (3.7) the temperature at the point of decoupling gives us
the non-relativistic number density at the point of decoupling,
nχ(Tdec) = g (mχ Tdec/(2π))^(3/2) e^(−xdec) = [π/(3√20)] √(mχ/Tdec) √(geff(Tdec)) (Tdec²/MPl) cw⁴ mZ⁴/(π α² mχ²) .   (3.10)
From the time of non-relativistic decoupling we have to evolve the energy density
to the current time or temperature T0 . We start with the fact that once a particle has
decoupled, its number density drops like 1/a 3 , as we can read off Eq. (1.27) in the
non-relativistic case,
3
a(Tdec )
ρχ (T0 ) = mχ nχ (T0 ) = mχ nχ (Tdec ) . (3.11)
a(T0 )
The ratio of scale factors follows from entropy conservation,

[a(Tdec) Tdec/(a(T0) T0)]³ ≈ 1/28 ,   (3.12)

again for Tdec > 5 GeV and depending slightly on the number of neutrinos we take
into account. We can use this result to compute the non-relativistic energy density
now
ρχ(T0) = mχ nχ(Tdec) [a(Tdec) Tdec/(a(T0) T0)]³ (T0³/Tdec³) = [nχ(Tdec) xdec³/(28 mχ²)] T0³ ≈ 3·10³ [mZ⁴/(mχ² MPl)] T0³ ,   (3.13)

using Eq. (3.10) in the last step.
Using this result we can compute the dimensionless dark matter density in close
analogy to the neutrino case of Eq. (2.11),
Ωχ h² = ρχ(T0) h² / (3 MPl² H0²) .   (3.14)
3.2 Boltzmann Equation

Because the derivation for the non-relativistic dark matter agent is at the heart of
these lecture notes, we will also show how to properly compute the current relic
density of a weakly interacting, massive dark matter agent. This calculation is based
on the Boltzmann equation. It describes the change of a number density n(t) with
time. The first effect included in the equation is the increasing scale factor a(t). It
even occurs in full equilibrium,
0 = d/dt [n(t) a(t)³] = ṅ(t) a(t)³ + 3 n(t) a(t)² ȧ(t)  ⇔  ṅ(t) + 3H(t) n(t) = 0 .   (3.15)
The second effect is the WIMP annihilation process

χχ ↔ f f̄ ,   (3.16)
with any available pair of light fermions in the final state. The depletion rate from
the WIMP pair annihilation process in Eq. (3.16) is given by the corresponding term
σχχ v nχ². This rate describes the probability of the WIMP annihilation process in
Eq. (3.16) to happen, given the WIMP density and their velocity. For the relativistic
relic neutrinos we could safely assume v = 1, while for the WIMP case we did not
even make this assumption for our previous order-of-magnitude estimate.
When we derive the Boltzmann equation from first principles it turns out that we
need to thermally average. This reflects the fact that the WIMP number density is a
global observable, integrated over the velocity spectrum. In the non-relativistic limit
the velocity of a particle with momentum k and energy k0 is
vk := |k|/k0 ≈ |k|/mχ ≪ 1 .   (3.17)
The external momenta of the two fermions then have the form
k² = k0² − k² = k0² − (mχ vk)² = mχ²  ⇔  k0 = √(mχ² + mχ² vk²) ≈ mχ (1 + vk²/2) .   (3.18)
In the center-of-mass frame the Mandelstam variable s then reads

s = 4mχ² + 4|k1|² = 4mχ² (1 + v1²)  ⇔  v1² = (s − 4mχ²)/(4mχ²) = s/(4mχ²) − 1 .   (3.19)
The relative velocity of the two incoming particles in the non-relativistic limit is
instead defined as

v = |k1/k1⁰ − k2/k2⁰| = |k1/k1⁰ + k1/k1⁰| = 2|k1|/k1⁰ ≈ 2 v1 ,   (3.20)

evaluating the second step in the center-of-mass frame.
Using the relative velocity, the thermal average of σχχ v, as it for example appears in
Eq. (3.7), is defined as

⟨σχχ→ff v⟩ := ∫ d³pχ,1 d³pχ,2 e^(−(Eχ,1+Eχ,2)/T) σχχ→ff v / ∫ d³pχ,1 d³pχ,2 e^(−(Eχ,1+Eχ,2)/T)
 = 2π² T ∫_(4mχ²)^∞ ds √s (s − 4mχ²) K1(√s/T) σχχ→ff(s) / [4π mχ² T K2(mχ/T)]² ,   (3.21)
in terms of the modified Bessel functions of the second kind K1,2 . Unfortunately,
this form is numerically not very helpful in the general case. The thermal averaging
replaces the global value of σχχ v, as it gets added to the equilibrium Boltzmann
equation (Eq. (3.15)) on the right-hand side,
ṅ(t) + 3H(t) n(t) = −⟨σχχ v⟩ [n(t)² − neq(t)²] .   (3.22)
(1/a(t)³) d/dt [n(t) a(t)³] = −⟨σχχ v⟩ [n(t)² − neq(t)²]
 ⇔  T(t)³ d/dt [n(t)/T(t)³] = −⟨σχχ v⟩ [n(t)² − neq(t)²]
 ⇔  dY(t)/dt = −⟨σχχ v⟩ T(t)³ [Y(t)² − Yeq(t)²]   with   Y(t) := n(t)/T³ .   (3.23)
Throughout these lecture notes we have always replaced the time by some other
variable describing the history of the Universe. We again switch variables to x =
mχ/T. For the Jacobian we assume that most of the dark matter decoupling happens
with ρr ≫ ρm; in the early, radiation-dominated Universe we can link the time and
x through the Hubble constant,

H(x) = H(x=1)/x²  and  t = 1/(2H)  ⇒  dt/dx = x/H(x=1) .   (3.24)
dY (x) x dY (t)
=
dx H (x = 1) dt
x m3χ 2 2
= −σχχ v Y (x) − Yeq (x)
H (x = 1) x 3
λ(x)
=− 2 Y (x)2 − Yeq (x)2
x
√
m3χ σχχ v 90 MPl mχ
with λ(x) := = √ σχχ v(x) . (3.25)
H (x = 1) π geff
dY (x) λ(x)
= − 2 Y (x)2 . (3.26)
dx x
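The decoupling behaviour of Eqs. (3.24)–(3.26) can be integrated numerically to make the freeze-out plateau visible. This sketch is not from the text: λ̄ = 10¹⁰ and g = 2 are illustrative choices, λ(x) = λ̄/√x follows Eq. (3.27), and a stiff integrator is needed because Y tracks Yeq very tightly before decoupling:

```python
import numpy as np
from scipy.integrate import solve_ivp

lam_bar = 1e10  # illustrative; the physical value follows from Eq. (3.27)
g = 2           # internal degrees of freedom of the WIMP


def Y_eq(x):
    # non-relativistic equilibrium density n_eq/T^3 = g (x/(2 pi))^(3/2) e^(-x)
    return g * (x / (2 * np.pi)) ** 1.5 * np.exp(-x)


def boltzmann(x, Y):
    # dY/dx = -(lambda(x)/x^2)(Y^2 - Y_eq^2) with lambda(x) = lam_bar/sqrt(x)
    return [-lam_bar * x ** -2.5 * (Y[0] ** 2 - Y_eq(x) ** 2)]


sol = solve_ivp(boltzmann, (1.0, 1000.0), [Y_eq(1.0)], method="Radau",
                rtol=1e-6, atol=1e-20, t_eval=[300.0, 1000.0])
Y300, Y1000 = sol.y[0]
```

For these numbers Y(x) follows Yeq down to x ≈ 20 and then flattens out at roughly 10⁻⁸, reproducing the scaling Y ≈ xdec^{3/2}/λ̄ of Eq. (3.30).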
Second, we can estimate λ(x) by expanding the thermally averaged annihilation
WIMP cross section for small velocities. We use Eq. (3.3) as the leading term in the
annihilation cross section and approximate v following Eq. (3.4), giving us
  λ(x) = √90 MPl mχ/(π √geff) · [⟨σχχ v⟩ + O(v²)]
       ≈ √90 MPl mχ/(π √geff) · 2/√x · πα² mχ²/(cw⁴ mZ⁴) ≡ λ̄/√x .  (3.27)
From Eq. (3.9) we know that thermal WIMPs have masses well above 10 GeV,
which corresponds to geff ≈ 100. This value only changes once the temperature
reaches the bottom mass and then drops to geff ≈ 3.6 today. This allows us to
separate the leading effects driving the dark matter density into the decoupling phase
described by the Boltzmann equation and an expansion phase with its drop in geff .
For the first phase we can just integrate the Boltzmann equation for constant geff, starting just before decoupling (xdec) to a point x′dec ≫ xdec after decoupling but above the bottom mass,

  1/Y(x′dec) − 1/Y(xdec) = − λ̄/x′dec^{3/2} + λ̄/xdec^{3/2} .  (3.29)

From the form of the Boltzmann equation in Eq. (3.26) we see that Y(x) drops rapidly with increasing x. If we choose xdec = 23 it follows that Y(xdec) ≫ Y(x′dec) and hence

  1/Y(x′dec) = λ̄/xdec^{3/2}
  ⇔ Y(x′dec) = xdec H(x=1)/(mχ³ ⟨σχχ v⟩) = xdec/λ(xdec)  =(Eq. (3.25))  xdec π √geff/(√90 MPl mχ ⟨σχχ v⟩) .  (3.30)
In this expression geff is evaluated around the point of decoupling. For the second,
expansion phase we can just follow Eq. (3.11) and compute
  ρχ(T0) = mχ nχ(T0)
         = mχ Y(x′dec) Tdec³ (a(Tdec)/a(T0))³  =(Eq. (3.12))  mχ Y(x′dec) T0³ geff(T0)/geff(Tdec)
         = mχ Y(x′dec) T0³/28 .  (3.31)
  ⇒ Ωχ h² = mχ Y(x′dec) T0³/28 · h²/(3MPl² H0²)
          = h²/28 · π √geff xdec/(√90 MPl ⟨σχχ v⟩) · T0³/(3MPl² H0²)  (3.32)
          = h²/28 · π √geff xdec/(√90 MPl ⟨σχχ v⟩) · (2.4 · 10⁻⁴)³/(2.5 · 10⁻³)⁴ · 1/eV
  ⇒ Ωχ h² ≈ 0.12 · xdec/23 · √geff/10 · 1.7 · 10⁻⁹ GeV⁻²/⟨σχχ v⟩ .
We can translate this result into different units. In the cosmology literature people
often use eV−1 = 2 · 10−5 cm. In particle physics we measure cross sections in
barn, where 1 fb = 10−39 cm2 . Our above result is a very good approximation to the
correct value for the relic density in terms of the annihilation cross section
  Ωχ h² ≈ 0.12 · xdec/23 · √geff/10 · 1.7 · 10⁻⁹ GeV⁻²/⟨σχχ v⟩
        ≈ 0.12 · xdec/23 · √geff/10 · 2.04 · 10⁻²⁶ cm³/s / ⟨σχχ v⟩ .  (3.33)
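The two unit conversions quoted here follow from ħc and c alone; a quick sketch (not from the text):

```python
hbar_c = 1.973e-14  # hbar * c in GeV cm
c_cm_s = 2.998e10   # speed of light in cm/s


def gev2_to_cm3_per_s(sv_inverse_gev2):
    """Convert <sigma v> from GeV^-2 (natural units) to cm^3/s."""
    return sv_inverse_gev2 * hbar_c ** 2 * c_cm_s


sv_thermal = gev2_to_cm3_per_s(1.7e-9)  # thermal-relic value, ~2e-26 cm^3/s
one_over_ev_in_cm = hbar_c * 1e9        # 1 eV^-1 expressed in cm, ~2e-5 cm
```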
With this result we can now insert the WIMP annihilation rate given by Eqs. (3.3) and (3.4),

  ⟨σχχ v⟩ = σχχ v + O(v²) ≈ 2/√x · πα² mχ²/(cw⁴ mZ⁴)

  ⇒ Ωχ h² = 0.12 · xdec/23 · √geff/10 · √xdec cw⁴ mZ⁴/(2πα² mχ²) · 1.7 · 10⁻⁹/GeV²
          = 0.12 · (xdec/23)^{3/2} · √geff/10 · (35 GeV/mχ)² .  (3.34)
We can compare this result to our earlier estimate in Eq. (3.14) and confirm that
these numbers make sense for a weakly interacting particle with a weak-scale mass.
Alternatively, we can replace the scaling of the annihilation cross section given in Eq. (3.3) by a simpler form, only including the WIMP mass and a generic coupling g,

  ⟨σχχ v⟩ ≈ g⁴/(16π mχ²) = 1.7 · 10⁻⁹/GeV²  ⇔  g² ≈ mχ/(3400 GeV) = mχ/(3.4 TeV) .  (3.35)

This form of the cross section does not assume a weakly interacting origin, it simply follows from the scaling with the coupling and from dimensional analysis. Depending on the coupling, its prediction for the dark matter mass can be significantly higher. Based on this relation we can estimate an upper limit on mχ from the unitarity condition for the annihilation cross section.
A lower limit does not exist, because we can make a lighter particle more and more
weakly coupled. Eventually, it will be light enough to be relativistic at the point of
decoupling, bringing us back to the relic neutrinos discussed in Sect. 2.1.
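Equation (3.35) is easy to invert for the coupling; a sketch (not from the text) with the required cross section hard-coded:

```python
import math

SV_TARGET = 1.7e-9  # required <sigma v> in GeV^-2, Eq. (3.33)


def coupling_for_relic(m_chi_gev):
    """g^2 such that g^4/(16 pi m_chi^2) equals the thermal-relic cross section."""
    return math.sqrt(16 * math.pi * SV_TARGET) * m_chi_gev


# Eq. (3.35): g^2 = m_chi/3.4 TeV, so g^2 = 1 corresponds to m_chi ~ 3.4 TeV
m_unit = 1.0 / math.sqrt(16 * math.pi * SV_TARGET)
```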
Let us briefly recapitulate our argument which through the Boltzmann equation
leads us to the WIMP miracle: we start with a 2 → 2 scattering process linking
dark matter to Standard Model particles through a so-called mediator, which can for
example be a weak boson. This allows us to compute the dark matter relic density as
a function of the mediating coupling and the dark matter mass, and it turns out that a
weak-coupling combined with a dark matter mass below the TeV scale fits perfectly.
There are two ways in which we can modify the assumed dark matter annihilation
process given in Eq. (3.16): first, in the next section we will introduce additional
annihilation channels for an extended dark matter sector. Second, in Sect. 4.1 we
will show what happens if the annihilation process proceeds through an s-channel
Higgs resonance.
3.3 Co-annihilation
In many models the dark matter sector consists of more than one particle, separated
from the Standard Model particles for example through a specific quantum number.
A typical structure is two dark matter particles χ1 and χ2 with mχ1 < mχ2. In
analogy to Eq. (3.16) they can annihilate into a pair of Standard Model particles
through the set of processes
χ1 χ1 → f f¯ χ1 χ2 → f f¯ χ2 χ2 → f f¯ . (3.37)
This set of processes can mediate a much more efficient annihilation of the dark
matter state χ1 together with the second state χ2 , even in the limit where the actual
dark matter process χ1 χ1 → f f¯ is not allowed. Two non-relativistic states will have
number densities both given by Eq. (1.40). We know from Eq. (3.9) that decoupling of a WIMP happens at typical values xdec = mχ/Tdec ≈ 28, so if we for example assume Δmχ = mχ2 − mχ1 = 0.2 mχ1 and g1 = g2 we find

  nχ2/nχ1 = (mχ2/mχ1)^{3/2} e^{−xdec Δmχ/mχ1} ≈ 1/200 .  (3.38)

Just from statistics the heavier state will already be rare by the time the lighter, actual dark matter agent annihilates. For a mass difference around 10% this suppression is reduced to a factor 1/15, which gives us an estimate that efficient co-annihilation prefers two states with mass differences in the 10% range or closer.
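The suppression quoted above is just a ratio of two non-relativistic equilibrium densities; a sketch (not from the text):

```python
import math


def coann_suppression(delta, x_dec=28.0):
    """n_eq(chi2)/n_eq(chi1) for m_chi2 = (1 + delta) m_chi1 and g1 = g2,
    using n_eq ~ g (m T)^(3/2) exp(-m/T) at T = m_chi1/x_dec."""
    return (1 + delta) ** 1.5 * math.exp(-x_dec * delta)


r10 = coann_suppression(0.10)  # ~1/15, as quoted in the text
r20 = coann_suppression(0.20)  # ~1/200
```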
Let us assume that there are two particles present at the time of decoupling. In addition, we assume that the first two processes shown in Eq. (3.37) contribute to the annihilation of the dark matter state χ1. In this case the Boltzmann equation from Eq. (3.22) reads

  ṅχ1 + 3H nχ1 = − ⟨σχ1χ1 v⟩ [nχ1² − (nχ1,eq)²] − ⟨σχ1χ2 v⟩ [nχ1 nχ2 − nχ1,eq nχ2,eq] .  (3.39)

In the second step we assume that the two particles decouple simultaneously, such that their number densities track each other through the entire process, including the assumed equilibrium values. This means that we can throughout our single-species calculations just replace

  ⟨σχχ v⟩ → ⟨σχ1χ1 v⟩ + ⟨σχ1χ2 v⟩ .  (3.40)
In the co-annihilation setup it is not required that the direct annihilation process
dominates. The annihilation of more than one particle contributing to a dark matter
sector can include many other aspects, for example when the dark matter state only
interacts gravitationally and the annihilation proceeds mostly through a next-to-
lightest, weakly interacting state. The Boltzmann equation will in this case split
into one equation for each state and include decays of the heavier state into the dark
matter state. Such a system of Boltzmann equations cannot be solved analytically in
general.
What we can assume is that the two co-annihilation partners have very similar
masses, Δmχ mχ1 , similar couplings, g1 = g2 , and that the two annihilation
processes in Eq. (3.37) are of similar size, σχ1 χ1 v ≈ σχ1 χ2 v. In that limit
we simply find σχχ v → 2σχ1 χ1 v in the Boltzmann equation. We know
from Eq. (3.33) how the correct relic density depends on the annihilation cross
section. Keeping the relic density constant we absorb the rate increase through co-
annihilation into a shift in the typical WIMP masses of the two dark matter states.
According to Eq. (3.35) the WIMP masses should now be
  ⟨σχχ v⟩ ≈ g⁴/(16π mχ²) ≡ 2 · g⁴/(16π mχ1²)   or   mχ1 ≈ mχ2 ≈ √2 mχ .  (3.41)
A simple question we can ask, for example when we talk about collider signatures, is how easy it would be to discover a single WIMP compared to the pair of co-annihilating, slightly heavier WIMPs.
An interesting question is how co-annihilation channels modify the WIMP
mass scale which is required by the observed relic density. From Eq. (3.41) we
see that an increase in the total annihilation rate leads to a larger mass scale
of the dark matter particles, as expected from our usual scaling. On the other
hand, the annihilation cross section really enters for example Eq. (3.33) in the combination ⟨σχ1χ1 v⟩/√geff. If we increase the number of effective degrees of
freedom significantly, while the co-annihilation channels really have a small effect
on the total annihilation rate, the dark matter mass might also decrease.
3.4 Velocity Dependence

While throughout the early estimates we use the dark matter annihilation rate σχχ, we introduce the more appropriate thermal expectation value of the velocity times the annihilation rate ⟨σχχ v⟩ in Eq. (3.21). This combination has the nice feature that its leading term can be independent of the velocity v. In general, the velocity-weighted cross section will be of the form

  ⟨σχχ v⟩ = s0 + s1 v² + O(v⁴) .  (3.42)
This pattern follows from the partial wave analysis of relativistic scattering. The first
term s0 is velocity-independent and arises from S-wave scattering. An example is
the scattering of two scalar dark matter particles with an s-channel scalar mediator
or two Dirac fermions with an s-channel vector mediator. The second term s1 with a
vanishing rate at threshold is generated by S-wave and P -wave scattering. It occurs
for example for Dirac fermion scattering through an s-channel vector mediator. All
t-channel processes have an S-wave component and are not suppressed at threshold.
Particles that are their own anti-particles, like Majorana fermions and real
scalars, do not annihilate through s-channel vector mediators. The same happens
for complex scalars and axial-vector mediators. In general, t-channel annihilation
to two Standard Model fermions is not possible for scalar dark matter.
To allow for an efficient dark matter annihilation to today’s relic density, we tend
to prefer an un-suppressed contribution s0 to increase the thermal freeze-out cross
section. The problem with such large annihilation rates is that they are strongly
constrained by early-universe physics. For example, the PLANCK measurements
of the matter power spectrum discussed in Sect. 1.5 constrain the light dark matter
very generally, just based on the fact that such light dark matter can affect the photon
background at the time of decoupling. The problem arises if dark matter candidates
annihilate into Standard Model particles through non-gravitational interactions,
χχ → SM SM . (3.43)
As we know from Eq. (3.1) this process is the key ingredient to thermal freeze-
out dark matter. If it happens at the time of last scattering it injects heat into
the intergalactic medium. This ionizes the hydrogen and helium atoms formed
during recombination. While the ionization energy does not modify the time of
the last scattering, it prolongs the period of recombination or, alternatively, leads
to a broadening of the surface of last scattering. This suppresses the temperature fluctuations and enhances the polarization power spectrum. The temperature and polarization data from PLANCK put an upper limit on the dark matter annihilation cross section.
The factor feff < 1 denotes the fraction of the dark matter rest mass energy
injected into the intergalactic medium. It is a function of the dark matter mass, the
dominant annihilation channel, and the fragmentation patterns of the SM particles
the dark matter agents annihilate into. For example, a 200 GeV dark matter particle
annihilating to photons or electrons reaches feff = 0.66 . . . 0.71, while an
annihilation to muon pairs only gives feff = 0.28. As we know from Eq. (3.32), freeze-out dark matter requires an annihilation cross section of the order ⟨σχχ v⟩ ≈ 1.7 · 10⁻⁹ GeV⁻². This means that the PLANCK constraint of Eq. (3.44) requires

  mχ ≳ 10 GeV .  (3.45)
In contrast to limits from searches for dark matter annihilation in the center of the
galaxy or in dwarf galaxies, as we will discuss in Chap. 5, this constraint does not
suffer from astrophysical uncertainties, such as the density profile of the dark matter
halo in galaxies.
3.5 Sommerfeld Enhancement

[Ladder diagram: the incoming dark matter particles χ(k1) and χ(k2) exchange a Z(q) before the hard annihilation process, so that the internal fermion lines carry momenta q + k1 and q − k2.]

Neglecting the Dirac structure, the corresponding one-loop integral has the form

  M ∝ ∫ d⁴q · mχ/[(q+k1)² − mχ²] · 1/[q² − mZ²] · mχ/[(q−k2)² − mχ²] .  (3.46)
The question is where this integral receives large contributions. Using k² = mχ² the denominators of the fermion propagators read

  1/[(q+k)² − mχ²] = 1/[q0² − |q|² + 2q0 k0 − 2 q·k]
   =(Eq. (3.18))  1/[q0² − |q|² + (2+v²) mχ q0 − 2 mχ v |q| cos θ + O(q0 v²)]
   =(|q| = mχ v)  1/[q0² − mχ² v² (1 + 2 cos θ) + (2+v²) mχ q0 + O(q0 v²)] .  (3.47)
The particles in the loop are not on their respective mass shells. Instead, we can identify a particularly dangerous region for v → 0, namely q0 = mχ v², where

  1/[(q+k)² − mχ²] = 1/[mχ² v² (1 − 2 cos θ) + O(v⁴)] .  (3.48)
In the same phase space region the Z boson propagator in the integral scales like

  1/[q² − mZ²] = 1/[mχ² v⁴ − mχ² v² − mZ²] = − 1/[mχ² v² + mZ² + O(v⁴)] .  (3.50)

In the absence of the gauge boson mass the gauge boson propagator would diverge for v → 0, just like the fermion propagators. This means that we can approximate the loop integral by focussing on the phase space regime

  q0 ≈ mχ v²   and   |q| ≈ mχ v .  (3.51)
The complete infrared contribution to the one-loop matrix element of Eq. (3.46) with a massive gauge boson exchange and neglecting the Dirac matrix structure is

  ∫ d⁴q · mχ/[(q+k1)² − mχ²] · 1/[q² − mZ²] · mχ/[(q−k2)² − mχ²]
  ≈ Δq0 (Δ|q|)³ · 1/(mχ v²) · 1/(mχ² v² + mZ²) · 1/(mχ v²)
  ≈ mχ v² (mχ v)³ · 1/(mχ v²) · 1/(mχ² v² + mZ²) · 1/(mχ v²)
  = v/(v² + mZ²/mχ²)  →  1/v   for v ≫ mZ/mχ .  (3.52)
This means that part of the one-loop correction to the dark matter annihilation
process at threshold scales like 1/v in the limit of massless gauge boson exchange.
For massive gauge bosons the divergent behavior is cut off with a lower limit v ≈ mZ/mχ. If we attach an additional gauge boson exchange to form a two-
loop integral, the above considerations apply again, but only to the last, triangular
diagram. The divergence still has the form 1/v. Eventually, it will be cut off by the
widths of the particles, which is a phrase often used in the literature and not at all
easy to show in detail.
More important is the question of what this result implies for our calculations—it will turn out that while the loop corrections for slowly moving
particles with a massless gauge boson exchange are divergent, they typically correct
a cross section which vanishes at threshold and only lead to a finite rate at the
production threshold.
As long as we limit ourselves to v ≪ 1 we do not need to use relativistic quantum
field theory for this calculation. We can compute the same v-dependent correction
to particle scattering using non-relativistic quantum mechanics. We assume two
electrically and weakly charged particles χ ± , so their attractive potential has
spherically symmetric Coulomb and Yukawa parts,

  V(r) = − e²/r − gZ² e^{−mZ r}/r   with r = |r| .  (3.53)
The coupling gZ describes an unknown χ-χ-Z interaction. With such a potential
we can compute a two-body scattering process. The wave function ψk (r ) will in
general be a superposition of an incoming plane wave in the z-direction and a set of
spherical waves with a modulation in terms of the scattering angle θ . As in Eq. (1.59)
we can expand the wave function in spherical harmonics, combined with an energy-
dependent radial function R(r; E). We again exploit the symmetry with respect to
the azimuthal angle φ and obtain
∞
ψk (r ) = am Ym (θ, φ) R (r; E)
=0 m=−
∞
= (2 + 1) a0 Y0 (θ, φ) R (r; E)
=0
74 3 Thermal Relic Density
∞ √
Eq. (1.63) 2 + 1
= (2 + 1) a0 P (cos θ ) R (r; E)
2
=0
∞
=: A P (cos θ ) R (r; E) . (3.54)
=0
From the calculation of the hydrogen atom we know that the radial, time-independent Schrödinger equation in terms of the reduced mass m reads

  [ − 1/(2mr²) d/dr ( r² d/dr ) + ℓ(ℓ+1)/(2mr²) + V(r) − E ] Rℓ(r; E) = 0 .  (3.55)

The reduced mass for a system with two identical masses is given by

  m = m1 m2/(m1 + m2) = mχ/2 .  (3.56)
As a first step we solve the Schrödinger equation at large distances, where we can neglect V(r). We know that the solution will be plane waves, but to establish our procedure we follow the steps starting with Eq. (3.55) one by one,

  [ − 1/r² d/dr ( r² d/dr ) + ℓ(ℓ+1)/r² − k² ] R_kℓ(r) = 0   with k² := 2mE = m²v²
  ⇔ 1/ρ² d/dρ ( ρ² dR_kℓ/dρ ) − ℓ(ℓ+1)/ρ² R_kℓ + R_kℓ = 0   with ρ := kr
  ⇔ 1/ρ² [ 2ρ dR_kℓ/dρ + ρ² d²R_kℓ/dρ² ] − ℓ(ℓ+1)/ρ² R_kℓ + R_kℓ = 0
  ⇔ ρ² d²R_kℓ/dρ² + 2ρ dR_kℓ/dρ − ℓ(ℓ+1) R_kℓ + ρ² R_kℓ = 0 .  (3.57)
This differential equation turns out to be identical to the implicit definition of the spherical Bessel functions jℓ(ρ), so we can identify R_kℓ(r) = jℓ(ρ). The radial wave function can then be expressed in terms of Legendre polynomials,

  R_kℓ(r) = jℓ(ρ) = (−1)^ℓ 1/(2 ℓ!) (ρ/2)^ℓ ∫_{−1}^{1} dt e^{iρt} (t² − 1)^ℓ
   = (−1)^ℓ 1/(2 ℓ!) (ρ/2)^ℓ [ 1/(iρ) e^{iρt} (t² − 1)^ℓ |_{−1}^{+1} − ∫_{−1}^{1} dt 1/(iρ) e^{iρt} d/dt (t² − 1)^ℓ ]
   = (−1)^{ℓ+1} 1/(2 ℓ!) (ρ/2)^ℓ 1/(iρ) ∫_{−1}^{1} dt e^{iρt} d/dt (t² − 1)^ℓ = · · ·
   = 1/(2 ℓ!) (ρ/2)^ℓ 1/(iρ)^ℓ ∫_{−1}^{1} dt e^{iρt} (d/dt)^ℓ (t² − 1)^ℓ
   =(Eq. (1.64))  (−i)^ℓ/2 ∫_{−1}^{1} dt e^{iρt} Pℓ(t) .  (3.58)
We can use Eq. (3.58) together with the plane wave expansion

  e^{iρt} = Σ_{ℓ=0}^∞ i^ℓ (2ℓ+1) Pℓ(t) jℓ(ρ)   with t = cos θ

to link the plane wave to this expression in terms of the spherical Bessel functions and the Legendre polynomials. With the correct ansatz we find

  Aℓ R_kℓ(r) = i^ℓ (2ℓ+1) jℓ(kr) ≈  i^ℓ (2ℓ+1) sin(kr − ℓπ/2)/(kr)   for kr ≫ ℓ²/2
                                    i^ℓ (2ℓ+1) (kr)^ℓ/(2ℓ+1)!!       for kr ≪ √2 .  (3.61)
We include two limits which can be derived for the spherical Bessel functions. To
describe the interaction with and without a potential V (r) we are always interested
in the wave function at the origin. The lower of the above two limits indicates that
for small r and hence small ρ values only the first term = 0 will contribute. We
can evaluate j0 (kr) for kr = 0 in both forms and find the same value,
  |ψk^{ℓ=0}(0)|² = |A0 P0(cos θ) R_k0(0)|² = |A0 R_k0(0)|² = lim_{r→0} |j0(kr)|² = 1 .  (3.62)
The argument that only ℓ = 0 contributes to the wave function at the origin is not at
all trivial to make, and it holds as long as the potential does not diverge faster than
1/r towards the origin.
Next, we add an attractive Coulomb potential to Eq. (3.57), giving us the radial Schrödinger equation in a slightly re-written form in the first term

  [ − 1/r d²/dr² r + ℓ(ℓ+1)/r² − 2me²/r − k² ] u_kℓ(r)/r = 0   with u_kℓ(r) := r R_kℓ(r)
  ⇔ d²u_kℓ/dr² − ℓ(ℓ+1)/r² u_kℓ + 2me²/r u_kℓ + k² u_kℓ = 0
  ⇔ d²u_kℓ/dρ² − ℓ(ℓ+1)/ρ² u_kℓ + 2me²/(ρk) u_kℓ + u_kℓ = 0 .  (3.63)

The solution of this equation will lead us to the well-known hydrogen atom and its energy levels. However, we are not interested in the energy levels but in the continuum scattering process. Following the discussion around Eq. (3.61) and assuming that the Coulomb potential will not change the fundamental structure of the solution around the origin we can evaluate the radial wave function for ℓ = 0,

  d²u_k0/dρ² + 2me²/(ρk) u_k0 + u_k0 = 0 .  (3.64)
This is the equation we need to solve and then evaluate at the origin, r = 0. We only quote the result,

  |ψk(0)|² = 2πe²/v · 1/(1 − e^{−2πe²/v}) ≈  2πe²/v   for v → 0 ,
                                             1         for v → ∞ .  (3.65)
Compared to Eq. (3.62) this increased probability measure is called the Sommer-
feld enhancement. It is divergent at small velocities, just as in the Feynman-
diagrammatic discussion before. For very small velocities, it can lead to an
enhancement of the threshold cross section by several orders of magnitude.
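The interpolation of Eq. (3.65) between its two limits is easy to check; a sketch (not from the text), where e² plays the role of the squared coupling, i.e. the fine-structure constant in natural units:

```python
import math


def sommerfeld_coulomb(v, e2=1.0 / 137.0):
    """|psi_k(0)|^2 = (2 pi e^2 / v) / (1 - exp(-2 pi e^2 / v)), Eq. (3.65)."""
    x = 2 * math.pi * e2 / v
    return x / (1.0 - math.exp(-x))


fast = sommerfeld_coulomb(10.0)  # large velocity: no enhancement
slow = sommerfeld_coulomb(1e-4)  # v -> 0: enhancement 2 pi e^2 / v
```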
It can be shown that the calculation based on ladder diagrams in momentum
space and based on the Schrödinger equation in position space are equivalent for
simple scattering processes. The resummation of the ladder diagrams is equivalent
to the computation of the wave function at the origin in the Fourier-transformed
position space.
The case of the Yukawa potential shows a similar behavior. It involves an
amusing trick in the computation of the potential, so we discuss it in some detail.
When we include the Yukawa potential in the Schrödinger equation we cannot solve
the equation analytically; however, the Hulthen potential is an approximation to
the Yukawa potential which does allow us to solve the Schrödinger equation. It is
defined as
  V(r) = − gZ² δ e^{−δr}/(1 − e^{−δr}) .  (3.66)
Optimizing the numerical agreement of the Hulthen potential’s radial wave func-
tions with those of the Yukawa potential suggests for the relevant mass ratio in our
calculation
  δ ≈ π²/6 · mZ ,  (3.67)
which we will use later. Unlike for the Coulomb potential we can now keep the full ℓ-dependence of the Schrödinger equation. The only additional approximation we use is for the angular momentum term

  δ² e^{−δr}/(1 − e^{−δr})² = δ² [1 − δr + O(δ²r²)]/[δr − δ²r²/2 + O(δ³r³)]²
   = 1/r² · [1 − δr + O(δ²r²)]/[1 − δr/2 + O(δ²r²)]² = 1/r² [1 + O(δ²r²)] .  (3.68)
The radial Schrödinger equation of Eq. (3.63) with the Hulthen potential and the above approximation for the angular-momentum-induced potential term now reads

  [ − 1/r d²/dr² r + ℓ(ℓ+1) δ² e^{−δr}/(1 − e^{−δr})² − gZ² δ e^{−δr}/(1 − e^{−δr}) − k² ] u_kℓ(r)/r = 0 .  (3.69)
Again, we only quote the result: the leading term for the corresponding Sommerfeld enhancement factor in the limit v ≪ 1 arises from

  |ψk(0)|² = πgZ²/v · sinh(2vmχπ/δ) / [ cosh(2vmχπ/δ) − cos( 2π √( gZ² mχ/δ − v² mχ²/δ² ) ) ] .  (3.70)
For small arguments we can expand

  sinh x = x + O(x³)   and   cosh x = 1 + x²/2 + O(x⁴) .  (3.71)
The cosh function is always larger than one and grows rapidly with increasing argument. This means that in the limit v ≪ 1 the two terms in the denominator can cancel almost entirely,

  |ψk(0)|² = πgZ²/v · [ 2πvmχ/δ + O(v³) ] / [ 1 + O(v²) − cos( 2π √(gZ² mχ/δ) + O(v²) ) ]
   →(v→0)  ( 2π² gZ² mχ/δ ) / [ 1 − cos √( 4π² gZ² mχ/δ ) ] .  (3.72)
The finite limit for v → 0 is well defined except for mass ratios mχ/δ or mχ/mZ right on the pole. The positions of the peaks in this oscillating function of the mass ratio mχ/mZ are independent of the velocity in the limit v ≪ 1. The peak positions are

  4π² gZ² mχ/δ = (2nπ)²  ⇔  mχ/δ = n²/gZ²   =(Eq. (3.67))   mχ/mZ = π²/(6gZ²) n²   with n = 1, 2, . . .  (3.73)
For example assuming gZ² ≈ 1/20 we expect the first peak at dark matter masses below roughly 3 TeV. For the Sommerfeld enhancement factor on the first peak we have to include the second term in the Taylor series of Eq. (3.71) and find

  |ψk(0)|² = ( 2π² gZ² mχ/δ ) / [ 1/2 (2vmχπ/δ)² + O(v⁴) ] = gZ² δ/(mχ v²)
   =(Eq. (3.73))  gZ⁴/v² .  (3.74)
Fig. 3.1 Sommerfeld enhancement for a Yukawa potential as a function of the dark matter mass (M ≡ mχ), shown for different velocities between v = 10⁻¹ and v = 10⁻⁵. It assumes the correct Z-mass and a coupling strength of gZ² = 1/30. Figure from Ref. [1], found for example in Mariangela Lisanti's lecture notes [2]
For v = 10−3 and gZ2 ≈ 1/20 we find sizeable Sommerfeld enhancement on the first
peak by a factor around 2500. Figure 3.1 illustrates these peaks in the Sommerfeld
enhancement for different velocities. The slightly different numerical values arise
because the agreement of the Hulthen and Yukawa potentials is limited.
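Equation (3.70) with δ = π² mZ/6 can be evaluated directly. The sketch below is not from the text; it assumes mZ = 91.2 GeV and gZ² = 1/20, and reproduces both the first peak position of Eq. (3.73) and the on-peak enhancement gZ⁴/v² of Eq. (3.74):

```python
import numpy as np

M_Z = 91.2                      # GeV
DELTA = np.pi ** 2 * M_Z / 6.0  # Hulthen parameter, Eq. (3.67)


def sommerfeld_hulthen(v, m_chi, g2=1.0 / 20.0):
    """Sommerfeld factor of Eq. (3.70), Hulthen approximation to the Yukawa case."""
    a = 2 * np.pi * v * m_chi / DELTA
    arg = g2 * m_chi / DELTA - (v * m_chi / DELTA) ** 2
    return np.pi * g2 / v * np.sinh(a) / (np.cosh(a) - np.cos(2 * np.pi * np.sqrt(arg)))


m_peak = np.pi ** 2 / 6.0 * 20.0 * M_Z     # first peak (n = 1) of Eq. (3.73), ~3 TeV
S_peak = sommerfeld_hulthen(1e-3, m_peak)  # ~ g_Z^4 / v^2 = 2500 on the peak
```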
From our calculation and this final result it is clear that a large ratio of the
dark matter mass to the electroweak masses modifies the pure v-dependence of
the Coulomb-like Sommerfeld enhancement, but is not its source. Just like for the
Coulomb potential the driving force behind the Sommerfeld enhancement is the
vanishing velocity, leading to long-lived bound states. The ratio mχ /mZ entering
the Sommerfeld enhancement is simply the effect of the Z-mass acting as a regulator
towards small velocities.
3.6 Freeze-In Production
In the previous discussion we have seen that thermal freeze-out offers an elegant
explanation of the observed relic density, requiring only minimal modifications to
the thermal history of the Universe. On the other hand, for cold dark matter and
asymmetric dark matter we have seen that an alternative production mechanism has
a huge effect on dark matter physics. A crucial assumption behind freeze-out dark
matter is that the coupling between the Standard Model and dark matter cannot
be too small, otherwise we will never reach thermal equilibrium and cannot apply
Eq. (3.2). For example for the Higgs portal model discussed in Sect. 4.1 this is the case for a portal coupling of λ3 ≲ 10⁻⁷. For such small interaction rates the (almost) model-independent lower bound on the dark matter mass from measurements of the CMB temperature variation and polarization, discussed in Sect. 1.4 and giving mχ ≳ 10 GeV, does not apply. This allows for new kinds of light dark matter.
For such very weakly interacting particles, called feebly interacting massive
particles or FIMPs, we can invoke the non-thermal, so-called freeze-in mechanism.
The idea is that the dark matter sector gets populated through decay or annihilation
of SM particles until the number density of the corresponding SM particle species becomes Boltzmann-suppressed. For an example SM particle B with an interaction

  L ⊃ y B χ̄χ  (3.75)

and mB > 2mχ, the decay B → χχ̄ allows us to populate the dark sector. The Boltzmann equation in Eq. (3.22) then acquires a source term,

  ṅ(t) + 3H(t) n(t) = S(B → χχ̄) .  (3.76)

The condition that the dark matter sector is not in thermal equilibrium initially translates into a lower bound on the dark matter mass. Its precise value depends on the model, but for a mediator with mB ≈ 100 GeV one can estimate mχ ≳ 0.1 . . . 1 keV from the fundamental assumptions of the model.
A decay-based source term in terms of the internal number of degrees of freedom gB*, the partial width B → χχ̄, and the equilibrium distribution exp(−EB/T) can be written as

  S(B → χχ̄) = gB* ∫ d³pB/(2π)³ e^{−EB/T} · mB/EB · Γ(B → χχ̄)
   = gB*/(2π²) Γ(B → χχ̄) ∫ d|pB| |pB|² e^{−EB/T} mB/EB
   = gB* mB/(2π²) Γ(B → χχ̄) ∫_{mB}^∞ dEB √(EB² − mB²) e^{−EB/T}   using d|pB| = EB dEB/√(EB² − mB²)
   = gB* mB²/(2π²) Γ(B → χχ̄) T K1(mB/T) ,  (3.77)
where K1(z) is the modified Bessel function of the second kind. For small z it is approximately given by K1(z) ≈ 1/z, while for large z it reproduces the Boltzmann factor, K1(z) ≈ √(π/(2z)) e^{−z} [1 + O(1/z)]. This form suggests that the dark matter density
will increase until T becomes small compared to mB and the source term becomes
suppressed by e−mB /T . The source term is independent of nχ and proportional to the
partial decay width. We also expect it to be proportional to the equilibrium number
density of B, defined as
  nB^eq = gB* ∫ d³p/(2π)³ e^{−EB/T} = gB*/(2π²) ∫_{mB}^∞ dEB EB √(EB² − mB²) e^{−EB/T}
        = gB*/(2π²) mB² T K2(mB/T) ,  (3.78)
in analogy to Eq. (3.77), but with the modified Bessel function K2 instead of K1. We can use this relation to eliminate the explicit temperature dependence of the source term,

  S(B → χχ̄) = Γ(B → χχ̄) K1(mB/T)/K2(mB/T) · nB^eq .  (3.79)
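The two Bessel-function identities used in Eqs. (3.77) and (3.78) can be verified numerically; a sketch (not from the text) with arbitrary test values m = 1, T = 0.5:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kn  # modified Bessel function of the second kind

m, T = 1.0, 0.5

# Eq. (3.77): int_m^inf dE sqrt(E^2 - m^2) e^(-E/T) = m T K1(m/T)
lhs1, _ = quad(lambda E: np.sqrt(E ** 2 - m ** 2) * np.exp(-E / T), m, np.inf)
rhs1 = m * T * kn(1, m / T)

# Eq. (3.78): int_m^inf dE E sqrt(E^2 - m^2) e^(-E/T) = m^2 T K2(m/T)
lhs2, _ = quad(lambda E: E * np.sqrt(E ** 2 - m ** 2) * np.exp(-E / T), m, np.inf)
rhs2 = m ** 2 * T * kn(2, m / T)
```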
To compute the relic density, we introduce the notation of Eq. (3.25), namely x = mB/T and Y = nχ/T³. The Boltzmann equation from Eq. (3.76) now reads

  dY(x)/dx = gB* Γ(B → χχ̄)/(2π² H(xB = 1)) · x³ K1(x)

  with x³ K1(x) ≈  x²                      for x ≪ 1 or T ≫ mB ,
                   √(π/2) x^{5/2} e^{−x}  for x ≫ 1 or T ≪ mB .  (3.80)
We can now follow the steps from Eqs. (3.14) and (3.32) and compute the relic density today. Integrating the source term of Eq. (3.80) over the full thermal history gives ∫0^∞ dx x³ K1(x) = 3π/2, and hence

  Ωχ h² = h²/(3MPl² H0²) · mχ/28 · T0³ Y(x → ∞)
        = h² mχ T0³ gB* Γ(B → χχ̄) / (112π MPl² H0² H(xB = 1))
   =(Eq. (1.47))  √90 h² gB* mχ T0³ Γ(B → χχ̄) / (112π² √geff mB² MPl H0²)
        = √90 h² gB* mχ MPl Γ(B → χχ̄) / (112π² √geff mB²) · (2.4 · 10⁻⁴)³/(2.5 · 10⁻³)⁴
        = 3.6 · 10²³ gB* mχ/mB² · Γ(B → χχ̄) .  (3.82)

Fig. 3.2 Scaling of Y(x) = nχ/T³ for the freeze-out (left) and freeze-in (right) mechanisms for three different interaction rates (larger to smaller cross sections along the arrow). In the left panel x = mχ/T and in the right panel x = mB/T. The dashed contours correspond to the equilibrium densities. Figure from Ref. [3]
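The only non-trivial number in this chain is the integral of the source term over the full thermal history, ∫0^∞ dx x³ K1(x) = 3π/2; a quick numerical check (not from the text):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kn

# x^3 K1(x) ~ x^2 for small x, so the integrand is finite at the origin
integral, _ = quad(lambda x: x ** 3 * kn(1, x) if x > 0 else 0.0, 0.0, np.inf)
# analytic value: 3 pi / 2
```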
The calculation up to this point is independent of the details of the interaction between the decaying particle B and the DM candidate χ. For the example interaction Eq. (3.75), the partial decay width is given by Γ(B → χχ̄) = y² mB/(8π), and assuming gB* = 2 we find

  Ωχ h² = 0.12 · ( y/(2 · 10⁻¹²) )² · mχ/mB .  (3.83)
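Inverting Eq. (3.83) gives the coupling required to freeze in the full relic density; a sketch (not from the text) with the quoted reference values hard-coded:

```python
import math


def freeze_in_coupling(m_chi, m_B, omega_h2=0.12):
    """Coupling y reproducing Omega h^2 = 0.12 (y/2e-12)^2 (m_chi/m_B), Eq. (3.83)."""
    return 2e-12 * math.sqrt(omega_h2 / 0.12 * m_B / m_chi)


y_equal = freeze_in_coupling(100.0, 100.0)  # m_chi = m_B requires y = 2e-12
y_light = freeze_in_coupling(1.0, 100.0)    # lighter dark matter needs a larger y
```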
The correct relic density from B-decays requires small couplings y and/or dark
matter masses mχ , compatible with the initial assumption that dark matter was
never in thermal equilibrium with the Standard Model for T mB . Following
Eq. (3.83), larger interaction rates lead to larger final dark matter abundances. This
is the opposite scaling as for the freeze-out mechanism of Eq. (3.33). In the right
panel of Fig. 3.2 we show the scaling of Y (x) with x = mB /T , compared with the
scaling of Y (x) with x = mχ /T for freeze-out. Both mechanisms can be understood
as the limits of increasing the interaction strength between the visible and the dark
matter sector (freeze-out) and decreasing this interaction strength (freeze-in) in a
given model.
Even though we illustrate the freeze-in mechanism with the example of the
decay of the SM particle B into dark matter, the dark matter sector could also be
populated by an annihilation process B B̄ → χ χ̄, decays of SM particles into a
References
1. Bellazzini, B., Cliche, M., Tanedo, P.: Effective theory of self-interacting dark matter. Phys. Rev.
D 88(8), 083506 (2013). arXiv:1307.1129 [hep-ph]
2. Lisanti, M.: Lectures on Dark Matter Physics (2016). arXiv:1603.03797 [hep-ph]
3. Bernal, N., Heikinheimo, M., Tenkanen, T., Tuominen, K., Vaskonen, V.: The dawn of FIMP
dark matter: a review of models and constraints. Int. J. Mod. Phys. A 32(27), 1730023 (2017).
arXiv:1706.07442 [hep-ph]
Chapter 4
WIMP Models
4.1 Higgs Portal
An additional scalar particle can couple to the Higgs sector of the Standard Model in a unique way. The so-called Higgs portal interaction is renormalizable, which means that the coupling constant between two Higgs bosons and two new scalars has mass dimension zero and can be represented by a c-number. All we need is the Higgs potential

  VSM = μH² φ†φ + λH (φ†φ)² ⊃ μH² (H + vH)²/2 + λH (H + vH)⁴/4
      ⊃ mH²/2 · H² + mH²/(2vH) · H³ + mH²/(8vH²) · H⁴ .  (4.1)

In the Standard Model this leads to the two observable mass scales

  vH = √(−μH²/λH) = 246 GeV   and   mH = √(2λH) vH = √2 √(−μH²) = 125 GeV ≈ vH/2 .  (4.2)
The last relation is a numerical accident. The general Higgs potential in Eq. (4.1)
allows us to couple a new scalar field S to the Standard Model using a renormaliz-
able, dimension-4 term (φ † φ)(S † S).
For any new scalar field there are two choices we can make. First, we can give it
some kind of multiplicative charge, so we actually postulate a set of two particles,
one with positive and one with negative charge. This just means that our new scalar
field has to be complex-valued, such that the two charges are linked by complex
conjugation. In that case the Higgs portal coupling includes the combination S † S.
Alternatively, we can assume that no such charge exists, in which case our new
scalar is real and the Higgs portal interaction is proportional to S 2 .
Second, we know from the case of the Higgs boson that a scalar can have a finite
vacuum expectation value. Due to that VEV, the corresponding new state will mix
with the SM Higgs boson to form two mass eigenstates, and modify the SM Higgs
couplings and the masses of the W and Z bosons. This is a complication we neither
want nor need, so we will work with a dark real scalar. To remove all terms with an odd power of the new scalar S we define an ad-hoc Z2 parity, +1 for all SM particles and −1 for the dark matter candidate. The combined potential then reads

  V = μH² φ†φ + λH (φ†φ)² + μS² S² + λS S⁴ + λ3 (φ†φ) S² .  (4.5)

The mass of the dark matter scalar and its phenomenologically relevant SSH and SSHH couplings read

  mS = √(2μS² + λ3 vH²) ,   gSSH = −2λ3 vH ,   gSSHH = −2λ3 .  (4.6)
The sign of λ3 is a free parameter. Unlike for singlet models with a second VEV, the
dark singlet does not affect the SM Higgs relations in Eq. (4.2). However, the SSH
coupling mediates SS interactions with pairs of SM particles through the light Higgs
pole, as well as Higgs decays H → SS, provided the new scalar is light enough.
The SSH H coupling can mediate heavy dark matter annihilation into Higgs pairs.
We will discuss more details on invisible Higgs decays in Chap. 7.
For dark matter annihilation, the SSf f̄ transition matrix element based on the
Higgs portal is described by the Feynman diagram

[Feynman diagram: SS → H* → b b̄, incoming scalars S, S and outgoing fermions b(k₂), b̄(k₁)]

All momenta are defined incoming, giving us for an outgoing fermion and an
outgoing anti-fermion

M = ū(k₂) (−i m_f/v_H) v(k₁) · (−2iλ₃ v_H) · (−i)/[ (k₁ + k₂)² − m_H² + i m_H Γ_H ] .   (4.7)
In this expression we see that vH cancels, but the fermion mass mf will appear
in the expression for the annihilation rate. We have to square this matrix element,
paying attention to the spinors v and u, and then sum over the spins of the external
fermions,
Σ_spin |M|² = 4λ₃² m_f² Tr[ ( Σ_spin v(k₁)v̄(k₁) ) ( Σ_spin u(k₂)ū(k₂) ) ] · |(k₁ + k₂)² − m_H² + i m_H Γ_H|⁻²

= 4λ₃² m_f² Tr[ (k̸₁ − m_f 𝟙)(k̸₂ + m_f 𝟙) ] · 1/[ ((k₁ + k₂)² − m_H²)² + m_H² Γ_H² ]

= 4λ₃² m_f² · 4 (k₁·k₂ − m_f²) · 1/[ ((k₁ + k₂)² − m_H²)² + m_H² Γ_H² ]

= 8λ₃² m_f² ( (k₁ + k₂)² − 4m_f² ) / [ ((k₁ + k₂)² − m_H²)² + m_H² Γ_H² ] .   (4.8)
In the sum over spin and color of the external fermions the averaging is not yet
included, because we need to specify which of the external particles are incoming
or outgoing. As an example, we compute the cross section for the dark matter
annihilation process to a pair of bottom quarks
SS → H* → b b̄ .   (4.9)

Σ_{spin,color} |M|² = N_c · 8λ₃² m_b² (s − 4m_b²) / [ (s − m_H²)² + m_H² Γ_H² ]

⇒ σ(SS → bb̄) = 1/(16πs) · √( (1 − 4m_b²/s)/(1 − 4m_S²/s) ) · Σ_{spin,color} |M|²

= (N_c λ₃² m_b²)/(2π√s) · √(1 − 4m_b²/s)/√(s − 4m_S²) · (s − 4m_b²)/[ (s − m_H²)² + m_H² Γ_H² ] .   (4.10)
To compute the relic density we need the velocity-averaged cross section. For the
contribution of the bb̄ final state to the dark matter annihilation rate we find the
leading term in the non-relativistic limit s → 4m_S²,

σv ≡ σ_{SS→bb̄} v =(Eq. 3.20)= v · (N_c λ₃² m_b²)/(2π√s) · √(1 − 4m_b²/s)/(m_S v) · (s − 4m_b²)/[ (s − m_H²)² + m_H² Γ_H² ]

=(threshold)= (N_c λ₃² m_b²)/(4π m_S²) · √(1 − m_b²/m_S²) · (4m_S² − 4m_b²)/[ (4m_S² − m_H²)² + m_H² Γ_H² ]

=(m_S ≫ m_b)= (N_c λ₃² m_b²)/π · 1/[ (4m_S² − m_H²)² + m_H² Γ_H² ] .   (4.11)
This expression holds for all scalar masses mS . In our estimate we identify the v-
independent expression with the thermal average. Obviously, this will become more
complicated once we include the next term in the expansion around v ≈ 0. The
Breit–Wigner propagator guarantees that the rate never diverges, even in the case
when the annihilating dark matter hits the Higgs pole in the s-channel.
The simplest parameter point to evaluate this annihilation cross section is on the
Higgs pole. This gives us
σv|_{SS→bb̄} =(m_H = 2m_S)= (N_c λ₃² m_b²)/(π m_H² Γ_H²) ≈ 15λ₃²/GeV²

⇒ σ_χχ v = σv|_{SS→bb̄} / BR(H → bb̄) ≈ 25λ₃²/GeV² =! 1.7 · 10⁻⁹/GeV² ⇔ λ₃ ≈ 8 · 10⁻⁶ ,   (4.12)

with Γ_H ≈ 4 · 10⁻⁵ m_H. While it is correct that the portal coupling required on the
Higgs pole is very small, the full calculation leads to a slightly larger value λ₃ ≈
10⁻³, as shown in Fig. 4.1.
Lighter dark matter scalars probe the Higgs mediator away from its mass shell. In
the Breit-Wigner propagator of the annihilation cross section, Eq. (4.11), we have to
compare the two terms

m_H² − 4m_S² = m_H² ( 1 − 4m_S²/m_H² ) ⟷ m_H Γ_H ≈ 4 · 10⁻⁵ m_H² .   (4.13)

The two states would have to fulfill the on-shell condition m_H = 2m_S almost exactly
for the second term to dominate. We can therefore stick to the first term for m_H > 2m_S
and find for the dominant annihilation to bb̄ pairs in the limit m_H² ≫ m_S² ≫ m_b²

σv|_{SS→bb̄} = (N_c λ₃² m_b²)/(π m_H⁴) ≈ λ₃²/(125² · 50² GeV²) =! 1.7 · 10⁻⁹/GeV² ⇔ λ₃ = 0.26 .   (4.14)
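Both numerical solutions for λ₃ quoted above are easy to reproduce (a sketch in Python; the prefactors 25 and 125² · 50² are taken directly from Eqs. (4.12) and (4.14)):

```python
import math

target = 1.7e-9   # required annihilation rate in 1/GeV^2

# on the Higgs pole, Eq. (4.12): sigma v ≈ 25 λ3²/GeV²
lam3_pole = math.sqrt(target / 25)

# well below the pole, Eq. (4.14): sigma v ≈ λ3² / (125² · 50² GeV²)
lam3_below = math.sqrt(target * 125**2 * 50**2)

print(f"λ3 on the pole:   {lam3_pole:.1e}")   # ≈ 8e-6
print(f"λ3 below the pole: {lam3_below:.2f}")  # ≈ 0.26
```

The five orders of magnitude between the two solutions illustrate how strongly the tiny Higgs width enhances the on-pole annihilation rate.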
Heavier dark matter scalars well above the Higgs pole also include the annihila-
tion channels
SS → τ⁺τ⁻ , W⁺W⁻ , ZZ , HH , t t̄ .   (4.15)
Unlike for on-shell Higgs decays, the bb̄ final state is not dominant for dark matter
annihilation when it proceeds through a 2 → 2 process. Heavier particles couple to
the Higgs more strongly, so above the Higgs pole they will give larger contributions
to the dark matter annihilation rate. For top quarks in the final state this simply
means replacing the Yukawa coupling m_b² by the much larger m_t². In addition, the
Breit-Wigner propagator will no longer scale like 1/m_H², but proportional to 1/m_S².
The real problem is the annihilation to the weak bosons W, Z, because it leads to
a different scaling of the annihilation cross section. In the limit of large energies
we can describe for example the process SS → W + W − using spin-0 Nambu-
Goldstone bosons in the final state. These Nambu-Goldstone modes in the Higgs
doublet φ appear as the longitudinal degrees of freedom, which means that dark
matter annihilation to weak bosons at large energies follows the same pattern as
dark matter annihilation to Higgs pairs. Because we are more used to the Higgs
degree of freedom we calculate the annihilation to Higgs pairs,
SS → H H . (4.17)
The two Feynman diagrams with the direct four-point interaction and the s-channel
Higgs propagator at the threshold s = 4m_S² scale like

M₄ = g_SSHH = −2λ₃

M_H = g_SSH · (3m_H²/v_H)/(s − m_H²) =(threshold)= −(2λ₃ v_H) · (3m_H²/v_H)/(4m_S² − m_H²) =(m_S ≫ m_H)= −6λ₃ m_H²/(4m_S²) = (3m_H²/(4m_S²)) M₄ .   (4.18)
This means for heavy dark matter we can neglect the s-channel Higgs propagator
contribution and focus on the four-scalar interaction. In analogy to Eq. (4.11) we
then compute the velocity-weighted cross section at threshold,
σ(SS → HH) = 1/(16π√s) · √(1 − 4m_H²/s)/√(s − 4m_S²) · 4λ₃² =(Eq. 3.20)= λ₃²/(4π√s m_S v) · √(1 − 4m_H²/s)

σv|_{SS→HH} = λ₃²/(4π m_S √s) · √(1 − 4m_H²/s) =(threshold)= λ₃²/(8π m_S²) · √(1 − m_H²/m_S²) =(m_S ≫ m_H)= λ₃²/(8π m_S²) .   (4.19)
For mS = 200 GeV we can derive the coupling λ3 which we need to reproduce the
observed relic density,
1.7 · 10⁻⁹/GeV² =! λ₃²/(8π m_S²) ≈ λ₃²/(10⁶ GeV²) ⇔ λ₃ ≈ 0.04 .   (4.20)
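The quoted coupling follows in one line (a sketch with the numbers of Eq. (4.20)):

```python
import math

m_S = 200.0        # scalar dark matter mass in GeV
target = 1.7e-9    # required annihilation rate in 1/GeV^2

# Eq. (4.20): sigma v = λ3² / (8 π m_S²)
lam3 = math.sqrt(target * 8 * math.pi * m_S**2)
print(f"λ3 ≈ {lam3:.3f}")   # ≈ 0.04
```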
The curve in Fig. 4.1 shows two thresholds related to four-point annihilation
channels, one at mS = mZ and one at mS = mH . Starting with mS = 200 GeV
and corresponding values for λ3 the annihilation to Higgs and Goldstone boson
pairs dominates the annihilation rate.
One lesson to learn from our Higgs portal considerations is the scaling of the
dark matter annihilation cross section with the WIMP mass mS . It does not follow
Eq. (3.3) at all and only follows Eq. (3.35) for very heavy dark matter. For our model,
where the annihilation is largely mediated by a Yukawa coupling mb , we find
σ_χχ ∝ { λ₃² m_b²/m_H⁴          for m_S ≪ m_H/2
       { λ₃² m_b²/(m_H² Γ_H²)    for m_S = m_H/2
       { λ₃²/m_S²               for m_S > m_Z, m_H .   (4.21)
It will turn out that the most interesting scaling is on the Higgs peak, because the
Higgs width is not at all related to the weak scale.
Inspired by the WIMP assumption in Eq. (3.3) we can use a new massive gauge
boson to mediate thermal freeze-out production. The combination of a free vector
mediator mass and a free dark matter mass will allow us to study a similar range of
scenarios as for the Higgs portal, Eq. (4.21). A physics argument is given by the
fact that the Standard Model has a few global symmetries which can be extended to
anomaly-free gauge symmetries.
The extension of the Standard Model with its hypercharge symmetry U (1)Y
by an additional U (1) gauge group defines another renormalizable portal to dark
matter. Since U (1)-field strength tensors are gauge singlets, the kinetic part of the
Lagrangian allows for kinetic mixing,

L_gauge = −(1/4) B̂^{μν} B̂_{μν} − (s_χ/2) V̂^{μν} B̂_{μν} − (1/4) V̂^{μν} V̂_{μν}
        = −(1/4) ( B̂_{μν} , V̂_{μν} ) ( 1  s_χ ; s_χ  1 ) ( B̂^{μν} ; V̂^{μν} ) ,   (4.22)

parametrized by the mixing angle s_χ. Similar to the Higgs portal, there is no
symmetry that forbids this term, so we do not want to assume that all quantum
corrections cancel to a net value s_χ = 0.
The notation B̂_{μν} indicates that the gauge fields are not yet canonically normalized,
which means that the residue of the propagator is not one. In addition, the gauge
boson propagators derived from Eq. (4.22) are not diagonal. We can diagonalize the
matrix in Eq. (4.22) and keep the hypercharge unchanged with a non-orthogonal
rotation of the gauge fields

( B̂_μ ; Ŵ³_μ ; V̂_μ ) = G(θ_V) ( B_μ ; W³_μ ; V_μ ) = ( 1  0  −s_χ/c_χ ; 0  1  0 ; 0  0  1/c_χ ) ( B_μ ; W³_μ ; V_μ ) .   (4.23)
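That this non-orthogonal rotation really produces canonical kinetic terms can be verified numerically in the relevant (B, V) block (a sketch; the value of s_χ is an arbitrary illustration):

```python
import numpy as np

s_chi = 0.1                        # illustrative mixing value
c_chi = np.sqrt(1 - s_chi**2)

# kinetic matrix of Eq. (4.22) in the (B, V) block
K = np.array([[1.0,   s_chi],
              [s_chi, 1.0]])

# non-orthogonal rotation of Eq. (4.23): B_hat = B - (s_chi/c_chi) V, V_hat = V/c_chi
G = np.array([[1.0, -s_chi / c_chi],
              [0.0,  1.0 / c_chi]])

# kinetic matrix after the field redefinition: canonical, i.e. the unit matrix
K_canonical = G.T @ K @ G
print(np.round(K_canonical, 12))   # identity matrix
```

An orthogonal rotation alone could not do this, because the mixing term changes the normalization of the fields as well as their orientation.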
We now include the third component of the SU (2)L gauge field triplet Wμ =
(Wμ1 , Wμ2 , Wμ3 ) which mixes with the hypercharge gauge boson through electroweak
symmetry breaking to produce the massive Z boson and the massless photon.
Kinetic mixing between the SU(2)_L field strength tensor and the U(1)_X field
strength tensor is forbidden, because V̂^{μν} Ŵ^a_{μν} is not a gauge singlet. Assuming a
mass m̂_V for the V-boson we write the combined mass matrix in the (B, W³, V) basis as

M² =(Eq. 4.23)= (v²/4) ( g′²        −g g′       −g′² s_χ
                         −g g′       g²          g g′ s_χ
                         −g′² s_χ    g g′ s_χ    (4m̂_V²/v²)(1 + s_χ²) + g′² s_χ² ) + O(s_χ³) ,   (4.24)
giving

R₁(ξ) R₂(θ_w) M² R₂(θ_w)ᵀ R₁(ξ)ᵀ = diag( m_γ² , m_Z² , m_V² ) ,   (4.26)

provided

tan 2ξ = 2 s_χ s_w / ( 1 − m̂_V²/m̂_Z² ) + O(s_χ²) .   (4.27)
For a light hidden photon we work in the regime

m̂_V²/m̂_Z² = 4m̂_V²/( (g² + g′²) v² ) ≪ 1 .   (4.28)
In addition to the dark matter mediator mass we also need the coupling of the new
gauge boson V to SM matter. Again, we start with the neutral currents for the not
canonically normalized gauge fields and rotate them to the physical gauge bosons
defined in Eq. (4.26),
( e j_EM , e/(s_w c_w) j_Z , g_D j_D ) ( Â ; Ẑ ; V̂ )_μ = ( e j_EM , e/(s_w c_w) j_Z , g_D j_D ) K ( A ; Z ; V )_μ ,   (4.30)

with

K = R₁(ξ) R₂(θ_w) G⁻¹(θ_χ) R₂(θ_w)⁻¹ ≈ ( 1  0  −s_χ c_w ; 0  1  0 ; 0  s_χ s_w  1 ) .   (4.31)
The new gauge boson couples to the electromagnetic current with a coupling
strength of −s_χ c_w e, while to leading order in s_χ and m̂_V/m̂_Z its coupling to the Z-
current vanishes. It is therefore referred to as a hidden photon. This behavior changes
for larger masses, m̂_V/m̂_Z ≫ 1, for which the coupling to the Z-current can be
the dominating coupling to SM fields. In this case the new gauge boson is called
a Z′-boson. For the purpose of these lecture notes we will concentrate on the light
V-boson, because it will allow for a light dark matter particle.
There are two ways in which the hidden photon could be relevant from a dark
matter perspective. The new gauge boson could be the dark matter itself, or it could
provide a portal to a dark matter sector if the dark matter candidate is charged under
U (1)X . The former case is problematic, because the hidden photon is not stable and
can decay through the kinetic mixing term. Even if it is too light to decay into
electrons, it can still decay into three photons through an electron loop.
Fig. 4.2 Feynman diagrams contributing to the annihilation of dark matter coupled to the visible
sector through a hidden photon
Through the U(1)_X mediator a dark fermion χ with charge Q_χ is in thermal contact
with the Standard Model through the usual annihilation shown in Eq. (3.1). If the
dark matter is lighter than the hidden photon and heavier than the electron,
m_V > m_χ > m_e, the dominant
s-channel Feynman diagram contributing to the annihilation cross section is shown
on the left of Fig. 4.2. This diagram resembles the one shown above Eq. (4.7) for
the case of a Higgs portal and the cross section can be computed in analogy to
Eq. (4.10),
σ(χχ̄ → e⁺e⁻) = 1/(12πs) · (s_χ c_w e g_D Q_χ)² · (1 + 2m_e²/s)(1 + 2m_χ²/s)
× √(1 − 4m_e²/s)/√(1 − 4m_χ²/s) · s²/[ (s − m_V²)² + m_V² Γ_V² ] ,   (4.33)
with Γ_V the total width of the hidden photon V. For the annihilation of two dark
matter particles s = 4m_χ², and assuming m_V ≫ Γ_V, the thermally averaged
annihilation cross section is given by

σv = (s_χ c_w e g_D Q_χ)²/(4π) · √(1 − m_e²/m_χ²) · (1 + m_e²/(2m_χ²)) · 4m_χ²/(4m_χ² − m_V²)²

≈ (s_χ c_w e g_D Q_χ)² · m_χ²/(π m_V⁴)    for m_e ≪ m_χ and m_V ≫ 2m_χ .   (4.34)
It exhibits the same scaling as in the generic WIMP case of Eq. (3.3). In contrast
to the WIMP, however, the gauge coupling is rescaled by the mixing angle sχ and
for very small mixing angles the hidden photon can in principle be very light. In
Eq. (4.34) we assume that the dark photon decays into electrons. Since the hidden
photon branching ratios into SM final states are induced by mixing with the photon,
for masses mV > 2mμ the hidden photon also decays into muons and for hidden
photon masses above a few 100 MeV and below 2mτ , the hidden photon decays
mainly into hadronic states. For mV > mχ , the PLANCK bound on the DM mass
in Eq. (3.44) implies mV > 10 GeV and Eq. (4.34) would need to be modified by
the branching ratios into the different kinematically accessible final states. Instead,
we illustrate the scaling with the scenario Qχ = 1, mχ = 10 MeV and mV =
100 MeV that is formally excluded by the PLANCK bound, but only allows for
hidden photon decays into electrons. In this case, we find the observed relic density
given in Eq. (3.32) for a coupling strength
1.7 · 10⁻⁹/GeV² =! 0.07 (s_χ g_D)²/GeV² · ( m_χ/(0.01 GeV) )² · ( 0.1 GeV/m_V )⁴ ⇔ s_χ g_D = 0.0015 .   (4.35)
In the opposite case of mχ > mV > me , the annihilation cross section is dominated
by the diagram on the right of Fig. 4.2, with subsequent decays of the hidden photon.
The thermally averaged annihilation cross section then reads

σv = (g_D⁴ Q_χ⁴)/(8π m_χ²) · (1 − m_V²/m_χ²)^{3/2}/(1 − m_V²/(2m_χ²))² ≈ (g_D⁴ Q_χ⁴)/(8π m_χ²) .   (4.36)
The scaling with the dark matter mass is the same as for a WIMP with mχ > mZ ,
as shown in Eq. (3.35). The annihilation cross section is in principle independent
of the mixing angle sχ , motivating the name secluded dark matter for such models,
but the hidden photon needs to eventually decay into SM particles. Again assuming
Q_χ = 1, the observed relic density corresponds to

1.7 · 10⁻⁹/GeV² =! g_D⁴/(8π m_χ²) = g_D⁴/(250 GeV²) ⇔ g_D = 0.025 .   (4.37)
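The quoted value of the dark gauge coupling follows directly from inverting Eq. (4.37) (a sketch; the 250 GeV² denominator is taken from that equation):

```python
# Eq. (4.37): g_D^4 / (250 GeV^2) = 1.7e-9 / GeV^2
g_D = (1.7e-9 * 250.0) ** 0.25
print(f"g_D ≈ {g_D:.4f}")   # ≈ 0.025, as quoted in Eq. (4.37)
```

Because the relic density scales with the fourth power of g_D, even an order-of-magnitude change in the required rate only shifts the coupling by less than a factor of two.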
The minimal supersymmetric extension of the Standard Model requires two Higgs
doublets, whose vacuum expectation values combine into

v_u² + v_d² = v_H² = (246 GeV)² ⇔ v_u = v_H sin β , v_d = v_H cos β ⇔ tan β = v_u/v_d .   (4.38)
Two Higgs doublets include eight degrees of freedom, out of which three Nambu-
Goldstone modes are needed to make the weak bosons massive. The five remaining
degrees of freedom form a light scalar h0 , a heavy scalar H 0 , a pseudo-scalar
A0 , and a charged Higgs H ± . Altogether this gives four neutral and four charged
degrees of freedom. In the Standard Model we know that the one neutral (pseudo-scalar)
Nambu-Goldstone mode combines with the neutral gauge bosons into the massive Z. We
can therefore expect the supersymmetric higgsinos to mix with the bino and wino
as well. Because the neutralinos still are Majorana fermions, the eight degrees of
freedom form four neutralino states χ̃i0 . Their mass matrix has the form
M = ( M₁                 0                  −m_Z s_w cos β    m_Z s_w sin β
      0                  M₂                 m_Z c_w cos β    −m_Z c_w sin β
      −m_Z s_w cos β     m_Z c_w cos β      0                −μ
      m_Z s_w sin β      −m_Z c_w sin β     −μ               0 ) .   (4.39)
The mass matrix is real and symmetric. In the upper left corner the bino
and wino mass parameters appear, without any mixing terms between them. In the
lower right corner we see the two higgsino states. Their mass parameter is μ, the
minus sign is conventional; by definition of the Higgs potential it links the up-
type and down-type Higgs or higgsino fields, so it has to appear in the off-diagonal
entries. The off-diagonal sub-matrices are proportional to m_Z. In the limit s_w → 0
and sin β = cos β = 1/√2 a universal mixing mass term m_Z/√2 between the wino
and each of the two higgsinos appears. It is the supersymmetric counterpart of the
combined Goldstone–W³ mass m_Z.
As any real symmetric matrix, the neutralino mass matrix can be diagonalized
through a real orthogonal rotation,

N M N⁻¹ = diag( m_χ̃⁰_j )    j = 1, …, 4 .   (4.40)
It is possible to extend the MSSM such that the dark matter candidates become Dirac
fermions, but we will not explore this avenue in these lecture notes.
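The structure of Eqs. (4.39) and (4.40) is easy to explore numerically. The sketch below uses illustrative, hand-picked mass parameters (not a fit to data) and shows the bino-like LSP sitting close to M₁:

```python
import numpy as np

# illustrative weak-scale inputs in GeV (hypothetical, not a fit)
M1, M2, mu = 400.0, 800.0, 1200.0
mZ, tan_b = 91.2, 10.0
sw = np.sqrt(0.23)
cw = np.sqrt(1 - 0.23)
b = np.arctan(tan_b)

# neutralino mass matrix of Eq. (4.39), basis (bino, wino, higgsino_d, higgsino_u)
M = np.array([
    [M1,               0.0,               -mZ*sw*np.cos(b),  mZ*sw*np.sin(b)],
    [0.0,              M2,                 mZ*cw*np.cos(b), -mZ*cw*np.sin(b)],
    [-mZ*sw*np.cos(b), mZ*cw*np.cos(b),   0.0,              -mu],
    [ mZ*sw*np.sin(b), -mZ*cw*np.sin(b), -mu,               0.0],
])

# real symmetric matrix -> orthogonal diagonalization, Eq. (4.40);
# a negative eigenvalue corresponds to a Majorana phase, so the physical
# masses are the absolute values
masses, N = np.linalg.eigh(M)
print("neutralino masses (GeV):", np.round(np.sort(np.abs(masses)), 1))
```

With these inputs the lightest state is bino-like with mass close to M₁ = 400 GeV, the off-diagonal m_Z blocks shifting it by only about a GeV.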
Because the SU (2)L gauge bosons as well as the Higgs doublet include charged
states, the neutralinos are accompanied by chargino states. They cannot be Majorana
particles, because they carry electric charge. However, as a remnant of the
neutralino Majorana property they do not have a well-defined fermion number
the way electrons and positrons do. The corresponding chargino mass matrix will not
include a bino-like state, so it reads
M = ( M₂              √2 m_W sin β
      √2 m_W cos β    μ ) .   (4.41)
It includes the remaining four degrees of freedom from the wino sector and four
degrees of freedom from the higgsino sector. As for the neutralinos, the wino and
higgsino components mix via a weak mass term. Because the chargino mass matrix
is real and not symmetric, it can only be diagonalized using two unitary matrices,
U* M V⁻¹ = diag( m_χ̃±_j )    j = 1, 2 .   (4.42)
For the dark matter phenomenology of the neutralino–chargino sector it will turn out
that the mass difference between the lightest neutralino(s) and the lightest chargino
is the relevant parameter. The reason is a possible co-annihilation process as
described in Sect. 3.3.
We can best understand the MSSM dark matter sector in terms of the different
SU (2)L representations. The bino state as the partner of the hypercharge gauge
boson is a singlet under SU (2)L . The wino fields with the mass parameter M2
consist of two neutral degrees of freedom as well as four charged degrees of
freedom, one for each polarization of W ± . Together, the supersymmetric partners
of the W boson vector field also form a triplet under SU(2)_L. Finally, each of the
two higgsinos arises as the supersymmetric partner of an SU(2)_L Higgs doublet. The
neutralino mass matrix in Eq. (4.39) therefore interpolates between singlet, doublet,
and triplet states under SU (2)L .
The most relevant couplings of the neutralinos and charginos we need to consider
for our dark matter calculations are
g_{Zχ̃⁰₁χ̃⁰₁} = g/(2c_w) · ( |N₁₃|² − |N₁₄|² )
g_{hχ̃⁰₁χ̃⁰₁} = ( g′ N₁₁ − g N₁₂ ) ( sin α N₁₃ + cos α N₁₄ )
g_{Aχ̃⁰₁χ̃⁰₁} = ( g′ N₁₁ − g N₁₂ ) ( sin β N₁₃ − cos β N₁₄ )
g_{γχ̃₁⁺χ̃₁⁻} = e
g_{Wχ̃⁰₁χ̃₁⁺} = g ( N₁₄ V₁₂*/√2 − N₁₂ V₁₁* ) ,   (4.43)
with e = gsw , sw2 ≈ 1/4 and hence cw 2 ≈ 3/4. The mixing angle α describes the
rotation from the up-type and down-type supersymmetric Higgs bosons into mass
eigenstates. In the limit of only one light Higgs boson with a mass of 126 GeV it is
given by the decoupling condition cos(β −α) → 0. The above form means for those
couplings which contribute to the (co-) annihilation of neutralino dark matter
– neutralinos couple to weak gauge bosons through their higgsino content
– neutralinos couple to the light Higgs through gaugino–higgsino mixing
– charginos couple to the photon diagonally, like any other charged particle
– neutralinos and charginos couple to a W -boson diagonally as higgsinos and
gauginos
Finally, supersymmetry predicts scalar partners of the quarks and leptons, so-called
squarks and sleptons. For the partners of massless fermions, for example squarks q̃,
there exists a q q̃ χ̃j0 coupling induced through the gaugino content of the neutralinos.
If this kind of coupling is to contribute to neutralino dark matter annihilation, the
lightest supersymmetric scalar has to be almost mass degenerate with the lightest
neutralino. Because squarks are strongly constrained by LHC searches and because
of the pattern of renormalization group running, we usually assume one of the
sleptons to be this lightest state. In addition, the mixing of the scalar partners of
the left-handed and right-handed fermions into mass eigenstates is driven by the
corresponding fermion mass, so the most attractive co-annihilation scenario in the
scalar sector is stau–neutralino co-annihilation. However, in these lecture notes we
Fig. 4.3 Sample Feynman diagrams for the annihilation of supersymmetric binos
through t-channel sfermion exchange (left), winos through t-channel chargino
exchange (center), and higgsinos through s-channel heavy Higgs exchange (right)
will focus on a pure neutralino–chargino dark matter sector and leave the discussion
of the squark–quark–neutralino coupling to Chap. 7 on LHC searches.
Similar to the previous section we now compute the neutralino annihilation rate,
assuming that in the 10–1000 GeV mass range they are thermally produced. For
mostly bino dark matter with

M₁ ≪ M₂, |μ| ,   (4.44)

the annihilation to SM fermion pairs proceeds through t-channel sfermion exchange,
as in the left panel of Fig. 4.3, and scales like

σ(B̃B̃ → f f̄) ≈ (g′⁴ m²_χ̃⁰₁)/(16π m_f̃⁴)    with m_χ̃⁰₁ ≈ M₁ ≪ m_f̃ .   (4.45)
The problem with pure neutralino annihilation is that in the limit of relatively
heavy sfermions the annihilation cross section drops rapidly, leading to a too large
predicted bino relic density. Usually, this leads us to rely on stau co-annihilation for
a light bino LSP. Along these lines it is useful to mention that with gravity-mediated
supersymmetry breaking we assume M1 and M2 to be identical at the Planck scale,
which given the beta functions of the hypercharge and the weak interaction turns
into the condition M1 ≈ M2 /2 at the weak scale, i.e. light bino dark matter would
be a typical feature in these models.
If M₁ becomes larger than M₂ or |μ|, then independent of the relation of M₂ and μ
there will be at least the lightest chargino close in mass to the LSP. It appears
in the t-channel of the actual annihilation diagram and as a co-annihilation partner.
To avoid a second light mass scale in the spectrum we first consider the pure wino limit

M₂ ≪ μ, M₁, m_f̃ .   (4.47)
From the list of neutralino couplings in Eq. (4.43) we see that in the absence of
additional supersymmetric particles pure wino dark matter can annihilate through a
t-channel chargino, as illustrated in Fig. 4.3. Based on the known couplings and on
dimensional arguments the annihilation cross section should scale like
σ(W̃W̃ → W⁺W⁻) ≈ 1/(16π) · √( (1 − 4m_W²/s)/(1 − 4m²_χ̃⁰₁/s) ) · (g⁴ s_w⁴)/(c_w⁴ m²_χ̃⁰₁)    with m_χ̃⁰₁ ≈ m_χ̃₁± ≈ M₂ ≫ m_W

⇒ σ_χχ ∝ 1/m²_χ̃⁰₁ .   (4.48)
The scaling with the mass of the dark matter agent does not follow our original
postulate for the WIMP miracle in Eq. (3.3), which was σ_χχ ∝ m²_χ̃⁰₁/m_W⁴. If we
only rely on the direct dark matter annihilation, the observed relic density
translates into a comparably light neutralino mass,

σv|_{W̃W̃→W⁺W⁻} =(Eq. 3.35)= (g⁴ s_w⁴)/(16π c_w⁴ m²_χ̃⁰₁) ≈ 0.74/450 · 1/m²_χ̃⁰₁ =! 1.7 · 10⁻⁹/GeV² .   (4.49)
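Taking the numbers of Eq. (4.49) at face value, the implied mass scale follows in one line (a sketch):

```python
import math

# solving Eq. (4.49) for the neutralino mass, in GeV
m_chi = math.sqrt(0.74 / (450 * 1.7e-9))
print(f"naive wino dark matter mass ≈ {m_chi/1000:.1f} TeV")   # ≈ 1 TeV
```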
However, this estimate is numerically poor. The reason is that in contrast to this
lightest neutralino, the co-annihilating chargino can annihilate through a photon s-
channel diagram into charged Standard Model fermions,
σ(χ̃₁⁺χ̃₁⁻ → γ* → f f̄) ≈ Σ_f N_c e⁴/(16π m²_χ̃₁±) = Σ_f N_c g⁴ s_w⁴/(16π m²_χ̃₁±) .   (4.50)
For light quarks alone the color factor combined with the sum over flavors adds a
factor 5 × 3 = 15 to the annihilation rate. In addition, for χ̃1+ χ̃1− annihilation we
need to take into account the Sommerfeld enhancement through photon exchange
between the slowly moving incoming charginos, as derived in Sect. 3.5. This gives
us the correct values

Ω_W̃ h² ≈ 0.12 ( m_χ̃⁰₁/2.1 TeV )²  ⟶(Sommerfeld)  0.12 ( m_χ̃⁰₁/2.6 TeV )² .   (4.51)
Fig. 4.4 Combinations of neutralino mass parameters M1 , M2 , μ that produce the correct relic
abundance, accounting for Sommerfeld-enhancement, along with the LSP mass. The relic surface
without Sommerfeld enhancement is shown in gray. Figure from Ref. [3]
In Fig. 4.4 and in the left panel of Fig. 4.5 this wino LSP mass range appears as
a horizontal plateau in M2 , with and without the Sommerfeld enhancement. In the
right panel of Fig. 4.5 we show the mass difference between the lightest neutralino
and the lighter chargino. Typical values for a wino-LSP mass splitting are around
Δm = 150 MeV, sensitive to loop corrections to the mass matrices shown in
Eqs. (4.39) and (4.41).
Finally, we can study higgsino dark matter in the limit

|μ| ≪ M₁, M₂ .   (4.52)
Again from Eq. (4.43) we see that in addition to the t-channel chargino exchange,
annihilation through s-channel Higgs states is possible. Again, the corresponding
Feynman diagrams are shown in Fig. 4.3. At least in the pure higgsino limit with
Ni3 = Ni4 the two contributions to the H̃ H̃ Z 0 coupling cancel, limiting the
impact of s-channel Z-mediated annihilation. Still, these channels make the direct
annihilation of higgsino dark matter significantly more efficient than for wino dark
matter. The Sommerfeld enhancement plays a sub-leading role, because it mostly
affects the less relevant chargino co-annihilation,
Ω_H̃ h² ≈ 0.12 ( m_χ̃⁰₁/1.13 TeV )²  ⟶(Sommerfeld)  0.12 ( m_χ̃⁰₁/1.14 TeV )² .   (4.53)
Fig. 4.5 Left: combinations of neutralino mass parameters M₁, M₂, μ that produce the
correct relic abundance, not accounting for Sommerfeld enhancement, along with the
leading annihilation product (tt̄, bb̄ | ud̄, cs̄ | tb̄ | W⁺W⁻ | W⁺W⁺). Right: mass
splitting between the lightest chargino and lightest neutralino, binned from below
0.15 GeV to above 40 GeV. Parameters excluded by LEP are occluded with a white or
black box. Figures from Ref. [4]
The higgsino LSP appears in Fig. 4.4 as a vertical plateau in μ. The corresponding
mass difference between the lightest neutralino and chargino is much larger than for
the wino LSP; it now ranges around a GeV.
Also in Fig. 4.4 we see that a dark matter neutralino in the MSSM can be much
lighter than the pure wino and higgsino results in Eqs. (4.51) and (4.53) suggest.
For a strongly mixed neutralino the scaling of the annihilation cross section with
the neutralino mass changes, and poles in the s-channels appear. In the left panel of
Fig. 4.5 we add the leading Standard Model final state of the dark matter annihilation
process, corresponding to the distinct parameter regions
– the light Higgs funnel region with 2m_χ̃⁰₁ = m_h. The leading contribution to
dark matter annihilation is the decay to b quarks. As a consequence of the tiny
Higgs width the neutralino mass has to be finely adjusted. According to Eq. (4.43)
the neutralinos couple to the Higgs through gaugino–higgsino mixing. A small,
O(10%) higgsino component can then give the correct relic density. This very
narrow channel with a very light neutralino is not represented in Fig. 4.5. Decays
of the Higgs mediator to lighter fermions, like tau leptons, are suppressed by their
smaller Yukawa coupling and a color factor;
– the Z-mediated annihilation with 2m_χ̃⁰₁ ≈ m_Z, with a final state mostly
consisting of light-flavor jets. The corresponding neutralino coupling requires
a sizeable higgsino content. Again, this finely tuned low-mass channel is not
shown in Fig. 4.5;
– s-channel annihilation through the higgsino content with some bino admixture
also occurs via the heavy Higgs bosons A⁰, H⁰, and H± with their large widths.
This region extends to large neutralino masses, provided the heavy Higgs masses
follow the neutralino mass. The main decay channels are bb̄, tt̄, and tb̄. The
massive gauge bosons typically decouple from the heavy Higgs sector;
– with a small enough mass splitting between the lightest neutralino and lightest
chargino, co-annihilation in the neutralino–chargino sector becomes important.
For a higgsino–bino state there appears a large annihilation rate to χ̃⁰₁χ̃⁰₁ →
W⁺W⁻ with a t-channel chargino exchange. The wino–bino state will mostly
co-annihilate through χ̃⁰₁χ̃₁± → W±* → qq̄′, but also contribute to the W⁺W⁻ final
state. Finally, as shown in Fig. 4.5 the co-annihilation of two charginos can be
efficient to reach the observed relic density, leading to a W⁺W⁺ final state;
– one channel which is absent from our discussion of purely neutralino and
chargino dark matter appears when the mass splitting between the scalar partner of
the tau lepton, the stau, and the lightest neutralino is a few per-cent or less; then
the two states can efficiently co-annihilate. In the scalar quark sector the same
mechanism exists for the lightest top squark, but it leads to issues with the
predicted light Higgs mass of 126 GeV.
In the right panel of Fig. 4.5 we show the mass difference between the lightest
chargino and the lightest neutralino. In all regions where chargino co-annihilation is
required, this mass splitting is small. From the form of the mass matrices shown in
Eqs. (4.39) and (4.41) this will be the case when either M₂ or μ is the lightest mass
parameter. Because of the light higgsino, the two higgsino states in the neutralino
sector lead to an additional level separation between the two lightest neutralinos,
so the degeneracy of the lightest chargino and the lightest neutralino masses will be
less precise here. For pure winos the mass difference between the lightest chargino
and the lightest neutralino can be small enough that loop corrections matter and the
chargino becomes long-lived.
Note that all the above listed channels correspond to ways of enhancing the dark
matter annihilation cross section, to allow for light dark matter closer to the Standard
Model masses. In that sense they indicate a fine tuning around the generic scaling
σ_χχ ∝ 1/m²_χ̃⁰₁, which in the MSSM predicts TeV-scale higgsinos and even heavier
winos.
As another, final theoretical framework to describe the dark matter relic density
we introduce an effective theory of dark matter [5]. We will start from the MSSM
description and show how the heavy mediator can decouple from the annihilation
process. This will put us into a situation similar to the description for example of
the muon decay in Fermi’s theory. Next, we will generalize this result to an effective
Lagrangian. Finally, we will show how this effective Lagrangian describes dark
matter annihilation in the early universe.
Let us start with dark matter annihilation mediated by a heavy pseudoscalar A
in the MSSM, as illustrated in the right panel of Fig. 4.3. The Aχ̃10 χ̃10 coupling is
defined in Eq. (4.43). If we assume the heavy Higgs to decay to two bottom quarks,
the 2 → 2 annihilation channel is

χ̃⁰₁ χ̃⁰₁ → A → b b̄ .   (4.54)

This description of dark matter annihilation includes two different mass scales, the
dark matter mass m_χ̃⁰₁ and a decoupled mediator mass m_A ≫ m_χ̃⁰₁. The matrix
element for the dark matter annihilation process includes the A-propagator. From
Sect. 3.2 we know that for WIMP annihilation the velocity of the incoming particles
is small, v ≪ 1. If the energy of the scattering process, which determines the
momentum flowing through the A-propagator, is much smaller than the A-mass,
we can approximate the intermediate propagator as

1/(q² − m_A²) → −1/m_A² ⇔ σ(χ̃⁰₁χ̃⁰₁ → b b̄) ∝ g²_{Aχ̃⁰₁χ̃⁰₁} g²_{Abb} · m_b²/m_A⁴ .   (4.55)
The fact that the propagator of the heavy scalar A does not include a momentum
dependence is equivalent to removing the kinetic term of the A-field from the
Lagrangian. We remove the heavy scalar field from the propagating degrees of
freedom of our theory. The only actual particles we can use in our description of the
annihilation process of Eq. (4.54) are the dark matter fermions χ̃10 and the bottom
quarks. Between them we observe a four-fermion interaction.
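The quality of the contact approximation behind this four-fermion picture is easy to quantify (a sketch with illustrative masses):

```python
# accuracy of the contact approximation 1/(q² - m_A²) → -1/m_A² of Eq. (4.55)
m_chi, m_A = 100.0, 1000.0     # illustrative masses in GeV
q2 = (2 * m_chi)**2            # s = 4 m_chi² at the annihilation threshold

exact = 1.0 / (q2 - m_A**2)
contact = -1.0 / m_A**2
rel_err = abs((exact - contact) / exact)
print(f"relative error of the contact approximation: {rel_err:.1%}")   # = q²/m_A² = 4.0%
```

The relative error is exactly q²/m_A², so the effective description degrades quadratically as the mediator mass approaches the annihilation energy.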
On the Lagrangian level, such a four-fermion interaction mediated by a non-
propagating state is given by an operator of the type

L ⊃ g²_ann/Λ² ( χ̃̄⁰₁ χ̃⁰₁ )( b̄ b ) .   (4.57)

Given this Lagrangian, the question arises if we want to use this interaction as
a simplified description of the MSSM annihilation process or view it as a more
general structure without a known ultraviolet completion. For example for the muon
decay we nowadays know that the suppression is given by the W-mass of the
weak interaction. Using our derivation of Eq. (4.57) we are inspired by the MSSM
annihilation channel through a heavy pseudoscalar. In that case the scale Λ should
be given by the mass of the lightest particle we integrate out. This defines, modulo
order-one factors, the matching condition

g²_ann/Λ² ↔ g_{Aχ̃⁰₁χ̃⁰₁} g_{Abb}/m_A² .   (4.58)
From Eqs. (4.57) and (4.58) we see that all predictions of the effective Lagrangian are
invariant under a simultaneous scaling of the new physics scale Λ and the underlying
coupling g_ann. Moreover, we know that the annihilation process χ̃⁰₁χ̃⁰₁ → f f̄ can
be mediated by a scalar in the t-channel. In the limit m_f ≪ m_χ̃⁰₁ ≪ m_f̃ this defines
essentially the same four-fermion interaction as given in Eq. (4.57).
Indeed, the effective Lagrangian is more general than its interpretation in terms of
one half-decoupled model. This suggests regarding the Lagrangian term of Eq. (4.57)
as the fundamental description of dark matter, not as an approximation to a full
model. For excellent reasons we usually prefer renormalizable Lagrangians, only
including operators with mass dimension four or less. Nevertheless, we can extend
this approach to examples including all operators up to mass dimension six. This
allows us to describe all kinds of four-fermion interactions. From constructing the
Standard Model Lagrangian we know that given a set of particles we need selection
rules to choose which of the possible operators make it into our Lagrangian. Those
rules are given by the symmetries of the Lagrangian, local symmetries as well as
global symmetries, gauge symmetries as well as accidental symmetries. This way
we arrive at a Lagrangian of the form

$$\mathcal{L} = \mathcal{L}_\text{SM} + \sum_j \frac{c_j}{\Lambda^{n-4}}\, \mathcal{O}_j \,, \qquad (4.59)$$
where the operators Oj are organized by their dimensionality. The cj are couplings
of the kind shown in Eq. (4.57), called Wilson coefficients, and Λ is the new physics
scale.
The one aspect which is crucial for any effective field theory or EFT analysis
is the choice of operators contributing to a Lagrangian. Like for any respectable
theory we have to assume that any interaction or operator which is not forbidden by
a symmetry will be generated, either at tree level or at the quantum level. In practice,
this means that any analysis in the EFT framework will have to include a large
number of operators. Limits on individual Wilson coefficients have to be derived
by marginalizing over all other Wilson coefficients using Bayesian integration (or a
frequentist profile likelihood).
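As a cartoon of why all Wilson coefficients must be varied at once, consider a toy likelihood that constrains only the sum of two coefficients; profiling over one of them removes any apparent limit on the other. This is a statistics illustration only, with made-up numbers:

```python
import numpy as np

# Toy illustration: a measurement constrains only the combination c1 + c2 of two
# Wilson coefficients. Profiling over c2 removes the apparent constraint on c1,
# mimicking why global EFT fits must vary all coefficients simultaneously.
def chi2(c1, c2, measured=1.0, sigma=0.1):
    return ((c1 + c2 - measured) / sigma) ** 2

c2_grid = np.linspace(-5.0, 5.0, 2001)
for c1 in [0.0, 2.0, 4.0]:
    profiled = min(chi2(c1, c2) for c2 in c2_grid)   # minimize over c2
    fixed = chi2(c1, 0.0)                            # naively set c2 = 0
    print(c1, profiled, fixed)   # profiled chi2 stays small, fixed chi2 explodes
```

The fixed-coefficient fit wrongly excludes large $c_1$, while the profiled fit correctly shows the flat direction.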
From the structure of the Lagrangian we know that there are several ways to
generate a higher dimensionality for additional operators,
– external particles with field dimensions adding to more than four. The four-
fermion interaction in Eq. (4.57) is one example;
– an energy scale of the Lagrangian normalized to the suppression scale, leading
to corrections to lower-dimensional operators of the kind v 2 /Λ2 ;
– a derivative in the Lagrangian, which after Fourier transformation becomes
a four-momentum in the Feynman rule. This gives corrections to lower-
dimensional operators of the kind p2 /Λ2 .
For dark matter annihilation we usually rely on dimension-6 operators of the
first kind. Another example would be a χ̃10 χ̃10 W W interaction, which requires a
dimension-5 operator if we couple to the gauge boson fields and a dimension-
7 operator if we couple to the gauge field strengths. The limitations of an EFT
treatment are obvious when we experimentally observe poles, for example the
A-resonance in the annihilation process of Eq. (4.54). In the presence of such a
resonance it does not help to add higher and higher dimensions—this is similar
to Taylor-expanding a pole at a finite energy around zero. Whenever there is a new
particle which can be produced on-shell we have to add it to the effective Lagrangian
as a new, propagating degree of freedom. Another limiting aspect is most obvious
from the third kind of operators: if the correction has the form p2 /Λ2 , and the
available energy for the process allows for $p^2 \gtrsim \Lambda^2$, higher-dimensional operators
are no longer suppressed. However, this kind of argument has to be worked out
for specific observables and models to decide whether an EFT approximation is
justified.
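The Taylor-expansion analogy can be made concrete with a toy example: expanding the pole factor $1/(1-x)$, with $x = s/m_\text{med}^2$, as a geometric series converges only below the resonance, no matter the truncation order. A minimal sketch:

```python
# Toy illustration of the EFT limitation near a pole: the geometric series
# 1/(1 - x) = 1 + x + x^2 + ...  (with x = s/m_med^2) converges only for |x| < 1.
def truncated_series(x, order):
    return sum(x**n for n in range(order + 1))

def pole(x):
    return 1.0 / (1.0 - x)

x_below = 0.3    # energy well below the resonance: expansion works
x_above = 1.5    # energy above the resonance: expansion fails badly
for order in [2, 5, 10]:
    print(order,
          abs(truncated_series(x_below, order) - pole(x_below)),
          abs(truncated_series(x_above, order) - pole(x_above)))
```

Below the pole the error shrinks with every added term; above it, adding higher orders makes the prediction worse, mirroring the failure of higher-dimensional operators on resonance.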
Finally, we can estimate what kind of effective theory of dark matter can describe
the observed relic density, Ωχ h2 ≈ 0.12. As usual, we assume that there is
one thermally produced dark matter candidate χ. Two mass scales given by the
108 4 WIMP Models
propagating dark matter agent and by some non-propagating mediator govern our
dark matter model. If a dark matter EFT should ever work we need to require that
the dark matter mass is significantly smaller than the mediator mass,
$$m_\chi \ll m_\text{med} \,. \qquad (4.60)$$
In terms of one coupling constant g governing the annihilation process we can use
the usual estimate of the WIMP annihilation rate, similar to Eq. (3.3),

$$\sigma v \approx \frac{g^4\, m_\chi^2}{m_\text{med}^4} \,. \qquad (4.61)$$

We know that it is crucial for this rate to be large enough to bring the thermally
produced dark matter rate to the observed level. This gives us a lower limit on the
ratio $m_\chi/m_\text{med}^2$ or alternatively an upper limit on the mediator mass for fixed dark
matter mass. With the typical thermal rate $\sigma v \approx 10^{-9}\ \text{GeV}^{-2}$, we find as a rough relation between the mediator and dark matter masses

$$m_\text{med} \lesssim g \left(\frac{m_\chi^2}{\sigma v}\right)^{1/4} \approx 1.8\ \text{TeV}\ g\, \sqrt{\frac{m_\chi}{100\ \text{GeV}}} \,. \qquad (4.62)$$
The dark matter agent in the EFT model can be very light, and the mediator will
typically be significantly heavier. An EFT description of dark matter annihilation
seems entirely possible.
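This estimate is easy to reproduce numerically. A minimal sketch, assuming the rough rate formula $\sigma v \approx g^4 m_\chi^2/m_\text{med}^4$ and a target thermal rate of $10^{-9}\ \text{GeV}^{-2}$, both order-of-magnitude assumptions:

```python
# Rough EFT consistency check: require sigma*v = g^4 m_chi^2 / m_med^4 to reach
# the thermal value ~1e-9/GeV^2 and solve for the maximal mediator mass.
# All numbers are order-of-magnitude assumptions, not a precise matching.
sigma_v_target = 1e-9          # GeV^-2, rough thermal annihilation rate

def m_med_max(m_chi, g=1.0):
    # g^4 m_chi^2 / m_med^4 = sigma_v_target  =>  solve for m_med
    return (g**4 * m_chi**2 / sigma_v_target) ** 0.25

for m_chi in [10.0, 100.0, 1000.0]:   # GeV
    print(m_chi, m_med_max(m_chi))    # mediator can sit far above m_chi
```

For a 100 GeV WIMP with $g = 1$ the mediator may be as heavy as roughly 1.8 TeV, so a hierarchy between dark matter and mediator mass is indeed allowed.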
Going back to our two models, the Higgs portal and the MSSM neutralino, it is
less clear if an EFT description of dark matter annihilation works well. In part of the
allowed parameter space, dark matter annihilation proceeds through a light Higgs
in the s-channel on the pole. Here the mediator is definitely a propagating degree
of freedom. For neutralino dark matter we discuss t-channel chargino-mediated
annihilation, where $m_{\tilde\chi_1^\pm} \approx m_{\tilde\chi_1^0}$. Again, the chargino is clearly propagating at the
relevant energies.
Finally, to fully rely on a dark matter EFT we need to make sure that all
relevant processes are correctly described. For our WIMP models this includes the
annihilation predicting the correct relic density, indirect detection and possibly the
Fermi galactic center excess introduced in Chap. 5, the limits from direct detection
discussed in Chap. 6, and the collider searches of Chap. 7. We will comment on the
related challenges in the corresponding sections.
References
1. Plehn, T.: Lectures on LHC Physics. Lect. Notes Phys. 886 (2015). arXiv:0910.4182 [hep-ph].
http://www.thphys.uni-heidelberg.de/~plehn/?visible=review
2. Djouadi, A., Lebedev, O., Mambrini, Y., Quevillon, J.: Implications of LHC searches for Higgs-portal dark matter. Phys. Lett. B 709, 65 (2012). arXiv:1112.3299 [hep-ph]
3. Bramante, J., Desai, N., Fox, P., Martin, A., Ostdiek, B., Plehn, T.: Towards the final word on
neutralino dark matter. Phys. Rev. D 93(6), 063525 (2016). arXiv:1510.03460 [hep-ph]
4. Bramante, J., Fox, P.J., Martin, A., Ostdiek, B., Plehn, T., Schell, T., Takeuchi, M.: Relic
neutralino surface at a 100 TeV collider. Phys. Rev. D 91, 054015 (2015). arXiv:1412.4789
[hep-ph]
5. Goodman, J., Ibe, M., Rajaraman, A., Shepherd, W., Tait, T.M.P., Yu, H.B.: Constraints on light Majorana dark matter from colliders. Phys. Lett. B 695, 185 (2011). arXiv:1005.1286 [hep-ph]
Chapter 5
Indirect Searches
There exist several ways of searching for dark matter in earth-bound or satellite
experiments. All of them rely on the interaction of the dark matter particle with
matter, which means they only work if the dark matter particles interact more than
only gravitationally. This is the main assumption of these lecture notes, and it is
motivated by the fact that the weak gauge coupling and the weak mass scale happen
to predict roughly the correct relic density, as described in Sect. 3.1.
The idea behind indirect searches for WIMPs is that the generally small
current dark matter density is significantly enhanced wherever there is a clump of
gravitational matter, as for example in the sun or in the center of the galaxy. In these
regions dark matter should efficiently annihilate even today, giving us either photons
or pairs of particles and anti-particles coming from there. Particles like electrons or
protons are not rare, but anti-particles in the appropriate energy range should be
detectable. The key ingredient to the calculation of these spectra is the fact that dark
matter particles move only very slowly relative to galactic objects. This means we
need to compute all processes with incoming dark matter particles essentially at
rest. This approximation is even better than at the time of the dark matter freeze-out
discussed in Sect. 3.2.
Indirect detection experiments search for many different particles which are
produced in dark matter annihilation. First, these might be the particles that dark
matter directly annihilates into, for example in a 2 → 2 scattering process. This
includes protons and anti-protons if dark matter annihilates into quarks. Second,
we might see decay products of these particles. Neutrinos are one example of such
a signature. Examples of dark matter annihilation processes are
$$\tilde\chi_1^0 \tilde\chi_1^0 \to \ell^+ \ell^-$$
$$\tilde\chi_1^0 \tilde\chi_1^0 \to q\bar q \to p\bar p + X$$
$$\tilde\chi_1^0 \tilde\chi_1^0 \to \tau^+\tau^-,\; W^+W^-,\; b\bar b + X \to \ell^+\ell^-,\; p\bar p + X \;\ldots \qquad (5.1)$$
The final state particles are stable leptons or protons propagating large distances
in the Universe. While the leptons or protons can come from many sources, the
anti-particles appear much less frequently. One key experimental task in many
indirect dark matter searches is therefore the ability to measure the charge of a
lepton, typically with the help of a magnetic field. For example, we can study the
energy dependence of the antiproton-proton ratio or the positron-electron ratio.
The dark matter signature is either a line or a shoulder in the
spectrum, with a cutoff at

$$E \lesssim m_\chi \,. \qquad (5.2)$$
The main astrophysical background is pulsars, which produce for example electron-positron
pairs of a given energy. There exists a standard tool to simulate the
propagation of all kinds of particles through the Universe, called GALPROP.
For example, PAMELA has seen such a shoulder, with a positron flux pointing
to a very large WIMP annihilation rate. An interpretation in terms of dark matter
is inconclusive, because pulsars could provide an alternative explanation and the
excess is in tension with PLANCK results from CMB measurements, as discussed
in Sect. 3.4.
In these lecture notes we will focus on photons from dark matter annihilation,
which we can search for in gamma ray surveys over a wide range of energies. They
also will follow one of two kinematic patterns: if they occur in the direct annihilation
process, they will appear as a mono-energetic line in the spectrum
$$\chi\chi \to \gamma\gamma \quad\text{with}\quad E_\gamma \approx m_\chi \,, \qquad (5.3)$$
for any weakly interacting dark matter particle χ. This is because the massive dark
matter particles are essentially at rest when colliding. If the photons are radiated off
charged particles or appear in pion decays $\pi^0 \to \gamma\gamma$,

$$\chi\chi \to \tau^+\tau^-,\; b\bar b,\; W^+W^- \to \gamma + \cdots \,, \qquad (5.4)$$
they will follow a fragmentation pattern. We can either compute this photon
spectrum or rely on precise measurements from the LEP experiments at CERN
(see Sect. 7.1 for a more detailed discussion of the LEP experiments). This photon
spectrum will constrain the kind of dark matter annihilation products we should
consider, as well as the mass of the dark matter particle.
The energy dependence of the photon flux inside a solid angle $\Delta\Omega$ is given by

$$\frac{d\Phi_\gamma}{dE_\gamma} = \frac{\sigma v}{8\pi m_{\tilde\chi_1^0}^2}\, \frac{dN_\gamma}{dE_\gamma} \int_{\Delta\Omega} d\Omega \int_0^{l} dz\; \rho_\chi^2(z) \,, \qquad (5.5)$$

with $z$ the distance from the observer to the actual annihilation event (line of sight). The
photon flux depends on the dark matter density squared because it arises from the
annihilation of two dark matter particles. A steeper dark matter halo profile, i.e.
the dark matter density increasing more rapidly towards the center of the galaxy,
results in a more stringent bound on dark matter annihilation. The key problem
in the interpretation of indirect search results in terms of dark matter is that we
cannot measure the dark matter distributions ρχ (l) for example in our galaxy
directly. Instead, we have to rely on numerical simulations of the dark matter profile,
which introduce a sizeable parametric or theory uncertainty in any dark-matter
related result. Note that the dark matter profile is not some kind of multi-parameter
input which we can choose freely. It is a prediction of numerical
dark matter simulations with associated error bars. Not all papers account for this
uncertainty properly. In contrast, the constraints derived from CMB anisotropies
discussed in Sect. 1.4 are largely free of astrophysical uncertainties.
There exist three standard density profiles. The steep Navarro-Frenk-White
(NFW) profile is given by

$$\rho_\text{NFW}(r) = \frac{\rho}{\left(\dfrac{r}{R}\right)^{\gamma}\left(1+\dfrac{r}{R}\right)^{3-\gamma}} \;\overset{\gamma=1}{=}\; \frac{\rho}{\dfrac{r}{R}\left(1+\dfrac{r}{R}\right)^{2}} \,, \qquad (5.6)$$
where r is the distance from the galactic center. Typical parameters are a character-
istic scale R = 20 kpc and a solar position dark matter density ρ = 0.4 GeV/cm3
at $r = 8.5$ kpc. In this form we can easily read off the scaling of the dark matter
density in the center of the galaxy, i.e. $r \ll R$; there we find $\rho_\text{NFW} \propto r^{-\gamma}$. The
second steepest is the exponential Einasto profile,

$$\rho_\text{Einasto}(r) = \rho\, \exp\left[-\frac{2}{\alpha}\left(\left(\frac{r}{R}\right)^{\alpha} - 1\right)\right] \,, \qquad (5.7)$$
with α = 0.17 and R = 20 kpc. It fits micro-lensing and star velocity data best.
Third is the Burkert profile with a constant density inside a radius R,

$$\rho_\text{Burkert}(r) = \frac{\rho}{\left(1+\dfrac{r}{R}\right)\left(1+\dfrac{r^2}{R^2}\right)} \,, \qquad (5.8)$$
where we assume R = 3 kpc. Assuming a large core results in very diffuse dark
matter at the galactic center, and therefore yields the weakest bound on neutralino
self annihilation. Instead assuming R = 0.1 kpc only alters the dark matter
annihilation constraints by an order-one factor. We show the three profiles in Fig. 5.1
and observe that the difference between the Einasto and the NFW parametrizations
is marginal, while the Burkert profile has a very strongly reduced dark matter
density in the center of the galaxy.

Fig. 5.1 Dark matter galactic halo profiles, including standard Einasto and NFW profiles along
with a Burkert profile with a 3 kpc core. J factors (Einasto: 2, NFW: 1, Burkert: 1/70) are obtained
assuming a spherical dark matter distribution and integrating over the radius from the galactic
center from r = 0.05 to 0.15 kpc. J factors are normalized so that J(ρNFW) = 1. Figure from Ref. [1]

One sobering result of this comparison is that
whatever theoretical considerations lie behind the NFW and Einasto profiles, once
their parameters are fit to data the possibly different underlying arguments play
hardly any role. The impact of different dark matter halo profiles on the gamma ray
flux is conveniently parameterized by the factor

$$J \propto \int_{\Delta\Omega} d\Omega \int_\text{line of sight} dz\; \rho_\chi^2(z) \qquad\text{with}\quad J(\rho_\text{NFW}) \equiv 1 \,. \qquad (5.9)$$
Also in Fig. 5.1 we quote the J factors integrated over the approximate HESS
galactic center gamma ray search range, r = 0.05 . . . 0.15 kpc. As expected, the
Burkert profile predicts a photon flux lower by almost two orders of magnitude. In
a quantitative analysis of dark matter signals this difference should be included as
a theory error or a parametric error, similar to for example parton densities or the
strong coupling in LHC searches.
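The very different small-$r$ behavior of the three profiles in Eqs. (5.6)-(5.8) can be checked with a few lines; a minimal sketch using the parameter values quoted above:

```python
import math

# Sketch of the three halo profiles, Eqs. (5.6)-(5.8), with the parameter
# values quoted in the text (rho is the overall normalization in GeV/cm^3).
RHO = 0.4
R_NFW, R_EINASTO, R_BURKERT = 20.0, 20.0, 3.0   # kpc

def rho_nfw(r, gamma=1.0):
    return RHO / ((r / R_NFW)**gamma * (1.0 + r / R_NFW)**(3.0 - gamma))

def rho_einasto(r, alpha=0.17):
    return RHO * math.exp(-2.0 / alpha * ((r / R_EINASTO)**alpha - 1.0))

def rho_burkert(r):
    return RHO / ((1.0 + r / R_BURKERT) * (1.0 + r**2 / R_BURKERT**2))

# Toward the galactic center the NFW density rises like 1/r, while the cored
# Burkert density stays flat; squared and integrated over the search region,
# this drives the very different J factors of Fig. 5.1.
for r in [0.05, 0.1, 1.0, 8.5]:   # kpc
    print(r, rho_nfw(r), rho_einasto(r), rho_burkert(r))
```

The printout makes the cusp-versus-core distinction explicit: at $r = 0.05$ kpc the NFW density exceeds the Burkert one by more than two orders of magnitude, while at the solar radius the profiles are comparable.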
While at any given time there is usually a sizeable set of experimental anomalies
discussed in the literature, we will focus on one of them: the photon excess in the
center of our galaxy, observed by Fermi, but discovered in their data by several non-
Fermi groups. The excess is shown in Fig. 5.2 and covers the wide photon energy
range

$$E_\gamma \approx 1 \ldots 100\ \text{GeV} \,, \qquad (5.10)$$

and clearly does not form a line. The error bars refer to the interstellar emission
model, statistics, photon fragmentation, and instrumental systematics. Note that
the statistical uncertainties are dominated not by the number of signal events, but
by the statistical uncertainty of the subtracted background events.

Fig. 5.2 Excess photon spectrum of the Fermi galactic center excess. Figure from Ref. [2],
including the original data and error estimates from Ref. [3]

The fact that uncertainties on photon fragmentation, meaning photon radiation off other Standard
Model particles, are included in the analysis indicates that for an explanation we
resort to photon radiation off dark matter annihilation products, Eq. (5.4). This
allows us to link the observed photon spectrum to dark matter annihilation, where
the photon radiation off the final state particles is known very well from many
collider studies. Two aspects of Fig. 5.2 have to be matched by any explanation.
First, the total photon rate has to correspond to the dark matter annihilation rate. It
turns out that the velocity-averaged annihilation rate has to be in the same range as
the rate required for the observed relic density,
$$\sigma_{\chi\chi} v = \frac{10^{-8} \ldots 10^{-9}}{\text{GeV}^2} \,, \qquad (5.11)$$
but with a much lower velocity spectrum now. Second, the energy spectrum of
the photons reflects the mass of the dark matter annihilation products. Photons
radiated off heavier, non-relativistic states will typically have higher energies. This
information is used to derive the preferred annihilation channels given in Fig. 5.3.
The official Fermi data confirms these ranges, but with typically larger error bars. As
an example, we quote the fit information under the assumption of two dark matter
Majorana fermions annihilating into a pair of Standard Model states [5]:
[Fig. 5.3: preferred $\langle\sigma v\rangle/A \approx 10^{-27} \ldots 10^{-25}\ \text{cm}^3\,\text{s}^{-1}$ versus $m_\chi \approx 10 \ldots 10^2$ GeV for the channels $\tau^+\tau^-$, $q\bar q$, $c\bar c$, $b\bar b$, $gg$, $W^+W^-$, $ZZ$, $hh$, $t\bar t$.]
Fig. 5.3 Preferred dark matter masses and cross sections for different annihilation channels [4].
Figure from Ref. [5]
For each of these annihilation channels the question arises if we can also generate
a sizeable dark matter annihilation rate at the center of the galaxy today, while also
predicting the correct relic density Ωχ h2 .
5.1 Higgs Portal
Similar to our calculation of the relic density, we will first show what range of
annihilation cross sections from the galactic center can be explained by Higgs portal
dark matter. Because the Fermi data prefers a light dark matter particle we will
focus on the two velocity-weighted cross sections accounting for the observed relic
density and for the galactic center excess around the Higgs pole $m_S = m_H/2$. First,
we determine how large an annihilation cross section in the galactic center we can
achieve. The typical cross sections given in Eq. (5.11) can be explained by mS =
220 GeV and λ3 = 1/10 as well as a more finely tuned mS = mH /2 = 63 GeV
with λ3 ≈ 10−3 , as shown in Fig. 4.1.
We can for example assume that the Fermi excess is due to on-shell Higgs-
mediated annihilation, while the observed relic density does not probe the Higgs
pole. The reason we can separate these two annihilation signals based on the same
Feynman diagram this way is that the Higgs width is smaller than the typical
velocities, $\Gamma_H/m_H \ll v$. We start with the general annihilation rate of a dark matter
scalar, Eq. (4.10) and express it including the leading relative velocity dependence
from Eq. (3.20),
$$s = 4m_S^2 + m_S^2 v^2 = 4m_S^2 \left(1 + \frac{v^2}{4}\right) \,. \qquad (5.12)$$
For the WIMP velocity at the point of dark matter decoupling in the early universe we
find roughly

$$x_\text{dec} := \frac{m_S}{T_\text{dec}} \overset{\text{Eq. (3.9)}}{=} 28 \quad\Leftrightarrow\quad T_\text{dec} \approx \frac{m_S}{28} = \frac{m_S}{2}\, v_\text{ann}^2 \quad\Leftrightarrow\quad v_\text{ann}^2 = \frac{1}{14} \,. \qquad (5.13)$$
Today the Universe is colder, and the WIMP velocity is strongly red-shifted. Typical
galactic velocities today are

$$v_0 \approx 2.3\cdot 10^5\, \frac{\text{m}}{\text{s}}\, \frac{1}{c} \approx \frac{v_\text{ann}}{1300} \,. \qquad (5.14)$$
This hierarchy in typical velocities between the era of thermal dark matter pro-
duction and annihilation and dark matter annihilation today is what will drive our
arguments below.
Only assuming $m_b \ll \sqrt{s}$, the general form of the scalar dark matter annihilation
rate is

$$\begin{aligned}
\sigma v \big|_{SS\to b\bar b} &= \frac{N_c}{2\pi}\, \lambda_3^2\, m_b^2\, \frac{1}{m_S \sqrt{s}}\, \frac{s}{\left(s - m_H^2\right)^2 + m_H^2 \Gamma_H^2} \\
&= \left(1 + \frac{v^2}{8} + \mathcal{O}(v^4)\right) \frac{N_c}{2\pi}\, \lambda_3^2\, m_b^2\, \frac{1}{2m_S^2}\, \frac{4m_S^2}{\left(4m_S^2 - m_H^2 + m_S^2 v^2\right)^2 + m_H^2 \Gamma_H^2} \\
&= \left(1 + \frac{v^2}{8}\right) \frac{N_c}{2\pi}\, \lambda_3^2\, m_b^2\, \frac{1}{2m_S^2}\, \frac{4m_S^2}{\left(4m_S^2 - m_H^2\right)^2 + 2\left(4m_S^2 - m_H^2\right) m_S^2 v^2 + m_H^2 \Gamma_H^2} \\
&\quad + \mathcal{O}(v^4) \,.
\end{aligned} \qquad (5.15)$$
The typical velocity of the dark matter states only gives a small correction for
scalar, s-wave annihilation. It includes two aspects: first, an over-all reduction of
the annihilation cross section for finite velocity v > 0, and second a combined
cutoff of the Breit-Wigner propagator,
$$\max\left[2\left(4m_S^2 - m_H^2\right) m_S^2 v^2 ,\; m_H^2 \Gamma_H^2\right] = m_S^4\, \max\left[8 v^2 \left(1 - \frac{m_H^2}{4m_S^2}\right) ,\; 16\cdot 10^{-10}\right] \,. \qquad (5.16)$$
Close to, but not on, the on-shell pole $m_S = m_H/2$ the modification of the Breit-Wigner
propagator can be large even for small velocities, while the rate reduction
can clearly not account for a large boost factor describing the galactic center excess.
We therefore ignore the correction factor (1 + v 2 /8) when averaging the velocity-
weighted cross section over the velocity spectrum. If, for no good reason, we assume
a narrow Gaussian velocity distribution centered around v̄ we can approximate
Eq. (5.15) as [6]
$$\sigma v\big|_{SS\to b\bar b} \approx \frac{N_c}{2\pi}\, \lambda_3^2\, m_b^2\, \frac{1}{2m_S^2}\, \frac{4m_S^2}{\left(4m_S^2 - m_H^2 + \xi\, m_S^2 \bar v^2\right)^2 + 4 m_S^2 \Gamma_H^2} \,, \qquad (5.17)$$
with a fitted $\xi \approx 2\sqrt{2}$. This modified on-shell pole condition shifts the required
dark matter mass slightly below half the Higgs mass, $2m_S \lesssim m_H$. The size of this shift
depends on the slowly dropping velocity, first at the time of dark matter decoupling,
v̄ ≡ vann , and then today, v̄ ≡ v0 vann . This means that during the evolution of
the Universe the Breit-Wigner propagator in Eq. (5.17) is always probed above its
pole, probing the actual pole only today.
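The interplay between the two velocity regimes can be made concrete with a small numeric sketch of Eq. (5.17), taking the velocities of Eqs. (5.13) and (5.14) and the SM value of the Higgs width as rough inputs; these numbers are assumptions of the sketch, not a fit:

```python
import math

# Sketch: Breit-Wigner denominator of Eq. (5.17) for a dark matter mass tuned
# to today's pole condition, compared at freeze-out and today's velocities.
# Width and velocity values are rough assumptions, not a fit.
mH, gammaH = 125.0, 4.1e-3        # GeV, SM-like Higgs mass and width
xi = 2.0 * math.sqrt(2.0)
v_ann = math.sqrt(1.0 / 14.0)     # freeze-out velocity, Eq. (5.13)
v0 = 2.3e5 / 3.0e8                # galactic velocity today in units of c

# today's pole condition: 4 mS^2 - mH^2 + xi mS^2 v0^2 = 0
mS = mH / (2.0 * math.sqrt(1.0 + xi * v0**2 / 4.0))

def bw_denominator(vbar):
    return (4*mS**2 - mH**2 + xi * mS**2 * vbar**2)**2 + 4 * mS**2 * gammaH**2

boost = bw_denominator(v_ann) / bw_denominator(v0)
print(mS, boost)   # mS sits just below mH/2; today's rate is hugely enhanced
```

With the dark matter mass tuned to today's pole, the annihilation rate today exceeds the freeze-out rate by many orders of magnitude, which is exactly the freedom exploited in the argument above.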
We first compute the Breit-Wigner suppression of $\sigma v$ in the early universe,
starting with today's on-shell condition responsible for the galactic center excess,

$$m_S \overset{!}{=} \frac{m_H}{2\sqrt{1+\dfrac{v_0^2}{\sqrt 2}}} \approx \frac{m_H}{2} \;\Rightarrow\; 4m_S^2 - m_H^2 + \xi\, m_S^2 v_\text{ann}^2 = 4m_S^2\left(1+\frac{v_\text{ann}^2}{\sqrt 2}\right) - m_H^2 = \frac{4m_S^2}{\sqrt 2}\left(v_\text{ann}^2 - v_0^2\right) \overset{v_0 \ll v_\text{ann}}{\approx} \frac{m_S^2}{5} \,. \qquad (5.18)$$
This means that the dark matter particle has a mass just slightly below the Higgs
pole. Using Eq. (5.17) the ratio of the two annihilation rates, for all other parameters
constant, then becomes

$$\frac{\sigma_0 v}{\sigma_\text{ann} v} \approx \frac{\left(4m_S^2 - m_H^2 + \xi\, m_S^2 v_\text{ann}^2\right)^2 + 4m_S^2\Gamma_H^2}{4m_S^2\Gamma_H^2} \approx \frac{m_S^4/25}{4m_S^2\Gamma_H^2} \gg 1 \,. \qquad (5.19)$$

Second, we can ask what annihilation rate today corresponds to a relic density fixed on the Higgs pole in the
Higgs portal model. For this purpose we assume that unlike in Eq. (5.18) the pole
condition is fulfilled in the early universe, leading to a Breit-Wigner suppression
today of
$$m_S \overset{!}{=} \frac{m_H}{2\sqrt{1+\dfrac{v_\text{ann}^2}{\sqrt 2}}} \;\Rightarrow\; 4m_S^2 - m_H^2 + \xi\, m_S^2 v_0^2 = 4m_S^2\left(1+\frac{v_0^2}{\sqrt 2}\right) - m_H^2 \overset{v_0 \ll v_\text{ann}}{\approx} -\frac{m_S^2}{5} \,. \qquad (5.20)$$
The dark matter particle now has a mass further below the pole. This means that
we can interpolate between the two extreme ratios of velocity-averaged annihilation
rates using a very small range of mS < mH /2. If we are willing to tune this mass
relation we can accommodate essentially any dark matter annihilation rate today
with the Higgs portal model, close to on-shell Higgs pole annihilation. The key
to this result is that following Eq. (5.16) the Higgs width-to-mass ratio is small
compared to $v_\text{ann}$, so we can decide to assign the on-shell condition to either of
the two relevant annihilation processes. In between, neither of the two processes
will proceed through the on-shell Higgs propagator, which indeed gives $\sigma_0 v \approx
\sigma_\text{ann} v$. The corresponding coupling $\lambda_3$ we can read off Fig. 4.1. Through this
argument it becomes clear that the success of the Higgs portal model rests on the
wide choice of scalings of the dark matter annihilation rate, as shown in Eq. (4.21).
5.2 Supersymmetric Neutralinos
We know that pure wino or higgsino dark matter particles reproducing the observed
relic density are much heavier than the Fermi data suggests. Instead of these pure
states we will rely on mixed states. A major obstacle for all MSSM interpretations
is the mass ranges shown in Fig. 5.3, indicating a clear preference of the galactic
center excess for neutralino masses $m_{\tilde\chi_1^0} \lesssim 60$ GeV. This does not correspond to the
typical MSSM parameter ranges giving us the correct relic density. This means that
in an MSSM analysis of the galactic center excess the proper error estimate for the
photon spectrum is essential.
We start our discussion with the finely tuned annihilation through a SM-like light
Higgs or through a Z-boson, i.e. $\tilde\chi_1^0 \tilde\chi_1^0 \to h^*, Z^* \to b\bar b$. The properties of this
channel are very similar to those of the Higgs portal. On the left y-axis of Fig. 5.4
we show the (inverse) relic density for a bino-higgsino LSP, both for a wide range
of neutralino masses and zoomed into the Higgs pole region. We decouple the wino
to M2 = 700 GeV and vary M1 to give the correct relic density for three fixed,
small higgsino mass values. We see that the bb̄ annihilation channel only predicts
the correct relic density in the two pole regions of the MSSM parameter space,
with $m_{\tilde\chi_1^0} = 46$ GeV and $m_{\tilde\chi_1^0} = 63$ GeV. The width of both peaks is given by the
momentum smearing through the velocity spectrum rather than by the physical Higgs
and Z widths. The enhancement of the two peaks over the continuum is comparable,
with the Z-funnel coupled to the velocity-suppressed axial-vector current and the
Higgs funnel suppressed by the small bottom Yukawa coupling.
On the right y-axis of Fig. 5.4, accompanied by dashed curves, we show the
annihilation rate in the galactic center. The rough range needed to explain the Fermi
excess is indicated by the horizontal line. As discussed for the Higgs portal, the
difference to the relic density is that the velocities are much smaller, so the widths
of the peaks are now given by the physical widths of the two mediators. The scalar
Higgs resonance now leads to a much higher peak than the velocity-suppressed
axial-vector coupling to the Z-mediator. This implies that continuum annihilation
as well as Z-pole annihilation would not explain the galactic center excess, while
the Higgs pole region could.
[Fig. 5.4: inverse relic density (solid, left axis) and galactic center annihilation rate $\langle\sigma v\rangle_\text{GCE}$ (dashed, right axis) as a function of $m_\chi = 40 \ldots 70$ GeV.]
This is why in the right panel of Fig. 5.4 we zoom into the Higgs peak regime. A
valid explanation of the galactic center excess requires the solid relic density curves
to cross the solid horizontal line and at the same time the dashed galactic center
excess lines to cross the dashed horizontal line. We see that there exist finely tuned
regions around the Higgs pole which allow for an explanation of the galactic center
excess via a thermal relic through the process χ̃10 χ̃10 → bb̄. The physics of this
channel is very similar to scalar Higgs portal dark matter.
For slightly larger neutralino masses, the dominant annihilation becomes
χ̃10 χ̃10 → W W , mediated by a light t-channel chargino combined with chargino-
neutralino co-annihilation for the relic density. Equation (4.43) indicates that in
this parameter region the lightest neutralino requires either a wino content or a
higgsino content. In the left panel of Fig. 5.5 we show the bino-higgsino mass
plane indicating the preferred regions from the galactic center excess. The lightest
neutralino mass varies from $m_{\tilde\chi_1^0} \approx 50$ GeV to more than 250 GeV. Again, we
decouple the wino to $M_2 = 700$ GeV, so the LSP is a mixture of higgsino, coupling
to electroweak bosons, and bino. For this slice in parameter space an increase in
|μ| compensates any increase in M1 , balancing the bino and higgsino contents. The
MSSM parameter regions which allow for efficient dark matter annihilation into
gauge bosons are strongly correlated in M1 and μ, but not as tuned as the light Higgs
funnel region with its underlying pole condition. Around M1 = |μ| = 200 GeV a
change in shape occurs. It is caused by the onset of neutralino annihilation to top
pairs, in spite of a heavy Higgs mass scale of 1 TeV.
To trigger a large annihilation rate for χ̃10 χ̃10 → t t¯ we lower the heavy
pseudoscalar Higgs mass to mA = 500 GeV. In the right panel of Fig. 5.5 we show
the preferred parameter range again in the bino-higgsino mass plane and for heavy
winos, $M_2 = 700$ GeV.

Fig. 5.5 Left: lightest neutralino mass based on the Fermi photon spectrum where $\tilde\chi_1^0\tilde\chi_1^0 \to WW$ is a
dominant annihilation channel. Right: lightest neutralino mass based on the Fermi photon spectrum
for $m_A = 500$ GeV, where we also observe $\tilde\chi_1^0\tilde\chi_1^0 \to t\bar t$. The five symbols indicate local best-fitting
parameter points. The black shaded regions are excluded by the Fermi limits from dwarf spheroidal
galaxies

As expected, for $m_{\tilde\chi_1^0} > 175$ GeV the annihilation into top
pairs follows the $WW$ annihilation region in the mass plane. The main difference
between the $WW$ and $t\bar t$ channels is the shift to smaller $M_1$ values around $|\mu| = 200$ GeV.
The reason is that an increased bino fraction compensates for the much larger top
Yukawa coupling. The allowed LSP mass range extends to $m_{\tilde\chi_1^0} \approx 200$ GeV.
The only distinctive feature for mA = 500 GeV in the M1 vs μ plane is the set of
peaks around M1 ≈ 300 GeV. Here the lightest neutralino mass is around 250 GeV,
just missing the A-pole condition. Because on the pole dark matter annihilation
through a 2 → 1 process becomes too efficient, the underlying coupling is reduced
by a smaller higgsino fraction of the LSP. The large-|M1| regime does not appear in
the upper left corner of Fig. 5.5 because at tree level this parameter region features
$m_{\tilde\chi_1^+} < m_{\tilde\chi_1^0}$ and we have to include loop corrections to revive it.
In principle, for $m_{\tilde\chi_1^0} > 126$ GeV we should also observe neutralino annihilation
into a pair of SM-like Higgs bosons. However, the t-channel neutralino diagram
which describes this process will typically be overwhelmed by the annihilation
to weak bosons with the same t-channel mediator, shown in Fig. 4.3. From the
annihilation into top pairs we know that s-channel mediators with mA,H ≈ 2mh
are in principle available, and depending on the MSSM parameter point the heavy
scalar Higgs can have a sizeable branching ratio into two SM-like Higgses. For
comparably large velocities in the early universe both s-channel mediators indeed
work fine to predict the observed relic density. For the smaller velocities associated
with the galactic center excess the CP-odd mediator A completely dominates, while
the CP-even H is strongly velocity-suppressed. On the other hand, only the latter
couples to two light Higgs bosons, so an annihilation into Higgs pairs responsible
for the galactic center excess is difficult to realize in the MSSM.
Altogether we see that the annihilation channels

$$\tilde\chi_1^0 \tilde\chi_1^0 \to b\bar b \ \text{(on the Higgs pole)}, \quad W^+W^-, \quad t\bar t \qquad (5.21)$$

can explain the Fermi galactic center excess and the observed relic density in the
MSSM. Because none of them corresponds to the central values of a combined fit
to the galactic center excess, it is crucial that we take into account all sources of
(sizeable) uncertainties. An additional issue, which we will only come to in Chap. 6,
is that the direct detection constraints, on top of the requirements of the correct relic
density and the correct galactic center annihilation rate, pose a serious challenge to
the MSSM explanations.
The NMSSM extends the MSSM by a singlet superfield, neutral under all Standard Model gauge transformations, with its singlino partner. The singlet state forms a second scalar $H_1$
and a second pseudo-scalar A1 , which will appear in the dark matter annihilation
process. As singlets they will only couple to gauge bosons through mixing with the
Higgs fields, which guarantees that they are hardly constrained by many searches.
The singlino will add a fifth Majorana state to the neutralino mass matrix in
Eq. (4.39),
$$M = \begin{pmatrix}
M_1 & 0 & -m_Z \cos\beta\, s_w & m_Z \sin\beta\, s_w & 0 \\
0 & M_2 & m_Z \cos\beta\, c_w & -m_Z \sin\beta\, c_w & 0 \\
-m_Z \cos\beta\, s_w & m_Z \cos\beta\, c_w & 0 & -\mu & -m_Z \sin\beta\, \tilde\lambda \\
m_Z \sin\beta\, s_w & -m_Z \sin\beta\, c_w & -\mu & 0 & -m_Z \cos\beta\, \tilde\lambda \\
0 & 0 & -m_Z \sin\beta\, \tilde\lambda & -m_Z \cos\beta\, \tilde\lambda & 2\tilde\kappa\mu
\end{pmatrix} .$$
The singlet/singlino sector can be described by two parameters, the mass parameter
κ̃ and the coupling for example to the other neutralinos λ̃ [7]. First, we need to
include the singlino in our description of the neutralino sector. While the wino and
the two higgsinos form a triplet or two doublets under $SU(2)_L$, the singlino just adds
a second singlet under $SU(2)_L$. The only difference to the bino is that the singlino
is also a singlet under $U(1)_Y$, which makes no difference unless we consider co-annihilation
driven by hypercharge interactions. A singlet neutralino will therefore
interact and annihilate to the observed relic density through its mixing with the
wino or with the higgsinos, just like the usual bino.
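For illustration, the mass matrix of Eq. (5.23) can be diagonalized numerically; the parameter point below is purely an assumption for the sketch, not a fit:

```python
import numpy as np

# Sketch: numerically diagonalize the 5x5 neutralino mass matrix of Eq. (5.23)
# for one illustrative NMSSM-like parameter point (all values are assumptions).
mZ, sw = 91.2, np.sqrt(0.23)
cw = np.sqrt(1.0 - sw**2)
beta = np.arctan(10.0)             # tan(beta) = 10
sb, cb = np.sin(beta), np.cos(beta)
M1, M2, mu = 100.0, 700.0, 200.0   # GeV, illustrative gaugino/higgsino masses
lam, kap = 0.3, 0.1                # singlino couplings lambda-tilde, kappa-tilde

M = np.array([
    [M1,        0.0,        -mZ*cb*sw,   mZ*sb*sw,   0.0       ],
    [0.0,       M2,          mZ*cb*cw,  -mZ*sb*cw,   0.0       ],
    [-mZ*cb*sw, mZ*cb*cw,    0.0,       -mu,        -mZ*sb*lam ],
    [ mZ*sb*sw, -mZ*sb*cw,  -mu,         0.0,       -mZ*cb*lam ],
    [0.0,       0.0,        -mZ*sb*lam, -mZ*cb*lam,  2.0*kap*mu],
])

masses, N = np.linalg.eigh(M)      # eigenvalues can be negative (Majorana)
print(np.sort(np.abs(masses)))     # physical neutralino masses in GeV
```

For this point the lightest state is dominantly the singlino with mass near $2\tilde\kappa\mu$, while the heaviest stays close to the decoupled wino mass, matching the qualitative discussion above.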
What is crucial for the explanation of the galactic center excess is the s-channel
dark matter annihilation through the new pseudoscalar,
$$\tilde\chi_1^0 \tilde\chi_1^0 \to A_1 \to b\bar b \quad\text{with}\quad m_{\tilde\chi_1^0} = \frac{m_{A_1}}{2} \approx 50\ \text{GeV} \,, \qquad g_{A_1\tilde\chi_1^0\tilde\chi_1^0} = \sqrt{2}\, g\, \tilde\lambda \left(N_{13} N_{14} - \tilde\kappa N_{15}^2\right) \,. \qquad (5.24)$$
We can search for these additional singlet and singlino states at colliders. One
interesting aspect is the link between the neutralino and the Higgs sector, which
can be probed by looking for anomalous Higgs decays, for example into a pair of
dark matter particles. Because an explanation of the galactic center excess requires
the singlet and the singlino to be light and to mix with their MSSM counterparts,
the resulting invisible branching ratio of the Standard-Model-like Higgs boson can
be large.
The discussion of the dark matter annihilation processes responsible for today’s
dark matter density as well as a possible galactic center excess nicely illustrates
the limitations of the effective theory approach introduced in Sect. 4.4. To achieve
124 5 Indirect Searches
the currently observed density with light WIMPs we have to rely on an efficient
annihilation mechanism, which can be most clearly seen in the MSSM. For example,
we invoke s-channel annihilation or co-annihilation, both of which are not well
captured by an effective theory description with a light dark matter state and a heavy,
non-propagating mediator. In the effective theory language of Sect. 4.4 this means
the mediators are not heavy compared to the dark matter agent,

m_\chi \not\ll m_\text{med} . (5.25)
In addition, the MSSM and the NMSSM calculations illustrate how one full
model extending the Standard Model towards large energy scales can offer several
distinct explanations, only loosely linked to each other. In this situation we can
collect all the necessary degrees of freedom in our model, but ignore additional
states for example predicted by an underlying supersymmetry of the Lagrangian.
This approach is called simplified models. It typically describes the dark matter
sector, including co-annihilating particles, and a mediator coupling the dark matter
sector to the Standard Model. In that language we have come across a sizeable set
of simplified models in our explanation of the Fermi galactic center excess:
– dark singlet scalar with SM Higgs mediator (Higgs portal, SS → bb̄);
– dark fermion with SM Z mediator (MSSM, χ̃10 χ̃10 → f f¯, not good for galactic
center excess);
– dark fermion with SM Higgs mediator (MSSM, χ̃10 χ̃10 → bb̄);
– dark fermion with t-channel fermion mediator (MSSM, χ̃10 χ̃10 → W W );
– dark fermion with heavy s-channel pseudo-scalar mediator (MSSM, χ̃₁⁰χ̃₁⁰ → t t̄);
– dark fermion with light s-channel pseudo-scalar mediator (NMSSM, χ̃₁⁰χ̃₁⁰ → b b̄).
In addition, we encountered a set of models in our discussion of the relic density in
the MSSM in Sect. 4.3:
– dark fermion with fermionic co-annihilation partner and charged s-channel
mediator (MSSM, χ̃10 χ̃1− → t¯b);
– dark fermion with fermionic co-annihilation partner and SM W -mediator
(MSSM, χ̃10 χ̃1− → ūd);
– dark fermion with scalar t-channel mediator (MSSM, χ̃₁⁰χ̃₁⁰ → ττ);
– dark fermion with scalar co-annihilation partner (MSSM, χ̃₁⁰τ̃ → τ∗).
Strictly speaking, all the MSSM scenarios require a Majorana fermion as the dark
matter candidate, but we can replace it with a Dirac neutralino in an extended
supersymmetric setup.
One mediator which is obviously missing in the above list is a new, heavy vector
V or axial-vector. Heavy gauge bosons are ubiquitous in models for physics beyond
the Standard Model, and the only question is how we would link or couple them
to a dark matter candidate. In principle, there exist three distinct kinematic regimes in the
5.4 Simplified Models and Vector Mediator 125
m_χ–m_V mass plane,

m_V \ll 2 m_\chi , \qquad m_V \approx 2 m_\chi , \qquad m_V \gg 2 m_\chi . (5.26)
To allow for a global analysis including direct detection as well as LHC searches,
we couple the vector mediator to a dark matter fermion χ and the light up-quarks,

\mathcal{L} \supset g_u\, \bar u \gamma^\mu V_\mu u + g_\chi\, \bar\chi \gamma^\mu V_\mu \chi . (5.27)

The corresponding mediator width stays moderate over the coupling range we consider,

\frac{\Gamma_V}{m_V} = 0.4 \ldots 10\% \quad\text{for}\quad g_u = g_\chi = 0.2 \ldots 1 . (5.28)

From the s-channel annihilation process

\chi\chi \to V^* \to u\bar u (5.29)
we can compute the predicted relic density or the indirect detection prospects. While
the χ − χ − V interaction also induces a t-channel process χχ → V ∗ V ∗ , its
contribution to the total dark matter annihilation rate is always strongly suppressed
by its 4-body phase space. The on-shell annihilation channel
χχ → V V (5.30)
becomes important for mV < mχ , with a subsequent decay of the mediator for
example to two Standard Model fermions. In that case the dark matter annihilation
rate becomes independent of the mediator coupling to the Standard Model, giving
much more freedom to avoid experimental constraints.
In Fig. 5.6 we observe that for a light mediator the predicted relic density is
smaller than the observed values, implying that the annihilation rate is large. In the
left panel we see the three kinematic regimes defined in Eq. (5.26). First, for small
mediator masses the 2 → 2 annihilation process is χχ → uū. The dependence
on the light mediator mass is small because the mediator is always off-shell and the
position of its pole is far away from the available energy of the incoming dark matter
particles. Around the pole condition 2m_χ ≈ m_V ± Γ_V the model predicts the correct
relic density with very small couplings. For heavy mediators the 2 → 2 annihilation
process rapidly decouples with increasing mediator mass, as follows for example from
Eq. (3.3). In the right panel of Fig. 5.6 we assume a constant mass ratio m_V/m_χ > 1,
finding that our simplified vector model has no problems predicting the correct relic
density over a wide range of model parameters.
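The interplay of these regimes can be sketched numerically. The following toy scan is our own illustration, not the full relic density computation behind Fig. 5.6; couplings, width, and masses are assumed values. It shows the Breit–Wigner enhancement around the pole and the decoupling for heavy mediators:

```python
# Toy sketch: relative annihilation rate for chi chi -> V* -> u ubar at threshold,
# sigma*v ~ g^4 s / ((s - m_V^2)^2 + m_V^2 Gamma_V^2) with s = (2 m_chi)^2.
# All numbers are illustrative assumptions, not fitted model parameters.
def annihilation_rate(m_v, m_chi=50.0, g=0.5, width_frac=0.05):
    s = (2.0 * m_chi) ** 2          # threshold kinematics, v -> 0
    gamma_v = width_frac * m_v      # assumed mediator width
    return g**4 * s / ((s - m_v**2) ** 2 + m_v**2 * gamma_v**2)

for m_v in [10.0, 100.0, 1000.0]:   # below, on, and far above the pole 2 m_chi = 100
    print(m_v, annihilation_rate(m_v))
```

The rate peaks on the pole and drops steeply for a heavy mediator, which is the decoupling behavior described above.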
[Figure: both panels show Ω h² as a function of mass; left panel with curves for m_χ = 10, 50, 100 GeV over m_V = 10²…10⁴ GeV, right panel with curves for m_V/m_χ = 1.5, 3, 10 over m_χ = 10²…10⁴ GeV.]
Fig. 5.6 Relic density for the simplified vector mediator model of Eq. (5.27) as a function of the
mediator mass for constant dark matter mass (left) and as a function of the dark matter mass for a
constant ratio of mediator to dark matter mass (right). Over the shaded bands we vary the couplings
gu = gχ = 0.2, . . . , 1. Figure from Ref. [8]
One issue we can illustrate with this non-MSSM simplified model is a strong
dependence of our predictions on the assumed model features. The Lagrangian of
Eq. (5.27) postulates a coupling to up-quarks, entirely driven by our goal to link
dark matter annihilation with direct detection and LHC observables. From a pure
annihilation perspective we can also define the mediator coupling to the Standard
Model through muons, without changing any of the results shown in Fig. 5.6.
Coupling to many SM fermions simultaneously, as we expect from an extra gauge
group, will increase the predicted annihilation rate easily by an order of magnitude.
Moreover, it is not clear how the new gauge group is related to the U (1)Y × SU (2)L
structure of the electroweak Standard Model. All this reflects the fact that, unlike
the Higgs portal model or supersymmetric extensions, a simplified model is hardly
more than a single tree-level or loop-level Feynman diagram describing dark matter
annihilation. It describes the leading effects for example in dark matter annihilation
based on 2 → 2 or 2 → 1 kinematics or the velocity dependence at threshold.
However, because simplified models are usually not defined on the full quantum
level, they leave a long list of open questions. For new gauge bosons, also discussed
in Sect. 4.2, they include fundamental properties like gauge invariance, unitarity, or
freedom from anomalies.
References
1. Bramante, J., Desai, N., Fox, P., Martin, A., Ostdiek, B., Plehn, T.: Towards the final word on
neutralino dark matter. Phys. Rev. D 93(6), 063525 (2016). arXiv:1510.03460 [hep-ph]
2. Butter, A., Murgia, S., Plehn, T., Tait, T.M.P.: Saving the MSSM from the galactic center excess.
Phys. Rev. D 96(3), 035036 (2017). arXiv:1612.07115 [hep-ph]
Chapter 6
Direct Searches
The experimental strategy for direct dark matter detection is based on measuring a
recoil of a nucleus after scattering with WIMP dark matter. For this process we can
choose the optimal nuclear target based on the largest possible recoil energy. We
start with the non-relativistic relation between the momenta in relative coordinates
between the nucleus and the WIMP, assuming a nucleus composed of A nucleons
and with charge Z. The relative WIMP velocity v0 /2 is defined in Eq. (3.20), so in
terms of the reduced mass mA mχ /(mA + mχ ) we find
2 m_A E_A = |\vec p_A|^2 \approx \frac{m_A^2 m_\chi^2}{(m_A + m_\chi)^2}\, \frac{v_0^2}{4}
\quad\Leftrightarrow\quad
E_A = \frac{m_A m_\chi^2}{(m_A + m_\chi)^2}\, \frac{v_0^2}{8}

\Rightarrow\ \frac{dE_A}{dm_A} = \left[ \frac{1}{(m_A + m_\chi)^2} - \frac{2 m_A}{(m_A + m_\chi)^3} \right] \frac{m_\chi^2 v_0^2}{8} \overset{!}{=} 0
\quad\Leftrightarrow\quad m_A = m_\chi

\Rightarrow\ E_A = \frac{m_\chi^3}{(2 m_\chi)^2}\, \frac{v_0^2}{8} = \frac{m_\chi}{32}\, v_0^2 \approx 10^4\ \text{eV} ,
(6.1)
with v_0 ≈ 1/1300 and for a dark matter mass around 1 TeV. Because of the above
relation, an experimental threshold from the lowest observable recoil can be directly
translated into a lower limit on dark matter masses we can probe with such
experiments. This also tells us that for direct detection all momentum transfers
are very small compared to the electroweak or WIMP mass scale. Similar masses
of WIMP and nuclear targets produce the largest recoil in the 10 keV range.
Remembering that the Higgs mass in the Standard Model is roughly the same as
the mass of the gold atom we know that it should be possible to find appropriate
nuclei, for example Xenon with a nucleus including A = 131 nucleons, of which
Z = 54 are protons.
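The recoil kinematics of Eq. (6.1) can be checked in a few lines of code. This is a sketch; the function name and the 1 TeV benchmark are our own choices:

```python
# Sketch: nuclear recoil energy E_A = m_A m_chi^2/(m_A + m_chi)^2 * v0^2/8
# from Eq. (6.1), maximal for m_A = m_chi. Masses in GeV, velocities in units of c.
def recoil_energy_ev(m_a, m_chi, v0=1.0 / 1300.0):
    e_a_gev = m_a * m_chi**2 / (m_a + m_chi) ** 2 * v0**2 / 8.0
    return e_a_gev * 1e9  # GeV -> eV

m_chi = 1000.0        # 1 TeV WIMP benchmark
m_xe = 131 * 0.931    # xenon nucleus, A = 131 nucleons

print(recoil_energy_ev(m_chi, m_chi))  # optimal target m_A = m_chi: ~2e4 eV
print(recoil_energy_ev(m_xe, m_chi))   # xenon target: ~7 keV, somewhat below the optimum
```

The xenon recoil comes out close to the optimum, confirming that heavy noble-gas targets are a good match for TeV-scale WIMPs.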
Strictly speaking, the dark matter velocity relevant for direct detection is a
combination of the thermal, un-directional velocity v0 ≈ 1/1300 and the earth’s
movement around the sun,
v_\text{earth-sun}\, c = 15000\, \frac{\text{m}}{\text{s}}\, \cos\left( 2\pi\, \frac{t - 152.5\ \text{d}}{365.25\ \text{d}} \right)

\Leftrightarrow\quad v_\text{earth-sun} = 5\cdot 10^{-5}\, \cos\left( 2\pi\, \frac{t - 152.5\ \text{d}}{365.25\ \text{d}} \right) \approx \frac{v_0}{15}\, \cos\left( 2\pi\, \frac{t - 152.5\ \text{d}}{365.25\ \text{d}} \right) .
(6.2)
If we had full control over all annual modulations in a direct detection scattering
experiment we could use this modulation to confirm that events are indeed due to
dark matter scattering.
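The annual modulation of Eq. (6.2) can be sketched as follows; the function name is ours, and the modulation is simply added to the mean halo velocity:

```python
# Sketch: lab-frame WIMP velocity over the year, combining the mean halo
# velocity v0 with the earth-sun modulation of Eq. (6.2). t measured in days.
import math

V0 = 1.0 / 1300.0  # mean WIMP velocity in units of c

def v_lab(t_days):
    return V0 + 5e-5 * math.cos(2.0 * math.pi * (t_days - 152.5) / 365.25)

print(v_lab(152.5))           # beginning of June: maximum, v0 + 5e-5
print(v_lab(152.5 + 182.6))   # half a year later: minimum, v0 - 5e-5
```

The few-percent summer-winter difference is exactly the modulation signature an experiment would try to resolve.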
Given that a dark matter particle will (typically) not be charged under SU (3)c , the
interaction of the WIMP with the partons inside the nucleons bound in the nucleus
will have to be mediated by electroweak bosons or the Higgs. We expect a WIMP
charged under SU (2)L to couple to a nucleus by directly coupling to the partons in
the nucleons through Z-exchange. This means with increased resolution we have to
compute the scattering processes for the nucleus, the nucleons, and the partons:
[Feynman diagrams: Z exchange between the WIMP χ and, with increasing resolution, the nucleus (A, Z), the nucleons p, n, and the partons u, d.]
This gauge boson exchange will be dominated by the valence quarks in the
combinations p ≈ (uud) and n ≈ (udd). Based on the interaction of individual
nucleons, which we will calculate below, we can express the dark matter interaction
with a heavy nucleus as
\sigma^\text{SI}(\chi A \to \chi A) = \frac{1}{16\pi s}\, \left| Z \mathcal{M}_p + (A - Z)\, \mathcal{M}_n \right|^2
= \begin{cases} \dfrac{A^2}{64\pi s}\, \left| \mathcal{M}_p + \mathcal{M}_n \right|^2 & \text{for } Z = A/2 \\[3mm] \dfrac{A^2}{16\pi s}\, \left| \mathcal{M}_n \right|^2 & \text{for } \mathcal{M}_p = \mathcal{M}_n . \end{cases}
(6.3)
We refer to this coherent interaction as spin-independent scattering with the cross
section σ^SI. The scaling with A² appears as long as the exchange particle probes
all nucleons in the heavy nucleus coherently, which means we have to square the
6 Direct Searches 131
sum of the individual matrix elements. The condition for coherent scattering can be
formulated in terms of the size of the nucleus and the wavelength of the momentum
transfer \sqrt{2 m_A E_A},

\sqrt{2 m_A E_A} \approx \sqrt{2 A m_p E_A} < \frac{1}{A^{1/3}\, r_p} \approx \frac{m_p}{A^{1/3}}
\quad\Leftrightarrow\quad
E_A < \frac{10^9\ \text{eV}}{2 A^{5/3}} , (6.4)

with m_p ≈ 1 GeV. This is clearly true for the typical recoils given in Eq. (6.1). In
the next step, we need to compute the interaction with the individual A nucleons in
terms of their partons. Because there are very different types of partons, namely valence
quarks, sea quarks, and gluons, with different quantum numbers, this calculation is
best described in a specific model.
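A quick numerical check of the coherence condition in Eq. (6.4); this is a sketch with our own function name:

```python
# Sketch: maximal recoil energy for coherent scattering,
# E_A < 1e9 eV / (2 A^(5/3)), from Eq. (6.4) with m_p ~ 1 GeV.
def max_coherent_recoil_ev(a):
    return 1e9 / (2.0 * a ** (5.0 / 3.0))

print(max_coherent_recoil_ev(131))  # xenon: ~1.5e5 eV, well above typical ~10 keV recoils
```

For xenon the coherence limit sits two orders of magnitude above the typical recoil energies of Eq. (6.1), so the A² enhancement is fully active.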
One of the most interesting theoretical questions in direct detection is how
different dark matter candidates couple to the non-relativistic nuclei. The general
trick is to link the nucleon mass (operator) to the nucleon-WIMP interaction
(operator). We know that three quarks can form a color singlet state; in addition,
there will be a gluon and a sea quark content in the nucleons, but in a first attempt we
assume that those will play a sub-leading role for the nucleon mass or its interaction
with dark matter, as long as the mediator couples to the leading valence quarks. We
start with the nucleon mass operator evaluated between two nucleon states and write
it in terms of the partonic quark constituents,
\langle N|\, m_N \mathbb{1}\, |N\rangle = m_N\, \langle N|N\rangle = \langle N|\, \sum_q m_q\, \bar q q\, |N\rangle = \sum_q m_q\, \langle N|\bar q q|N\rangle . (6.5)
These two estimates suggest that we can link the nucleon interaction operator to the
nucleon mass operator in the naive quark parton model. Based on the nucleon mass
we define a non-relativistic quark density inside the nucleon as
f_N := \langle N|N\rangle \overset{\text{Eq. (6.5)}}{=} \sum_q \frac{m_q}{m_N}\, \langle N|\bar q q|N\rangle = \sum_q f_q
\quad\Leftrightarrow\quad
f_q := \frac{m_q}{m_N}\, \langle N|\bar q q|N\rangle

\Rightarrow\ \langle N|\, \sum_q \chi\chi\, \bar q q\, |N\rangle = \chi\chi\, \sum_q \frac{m_N}{m_q}\, f_q .
(6.7)
132 6 Direct Searches
The form factors fq describe the probability of finding a (valence) quark inside the
proton or neutron at a momentum transfer well below the nucleon mass. They can
for example be computed using lattice gauge theory.
The issue with Eq. (6.7) is that it neither includes gluons nor any quantum effects.
Things become more interesting with a Higgs-mediated WIMP–nucleon interaction,
as we encounter it in our Higgs portal models. To cover this case we need to compute
both the nucleon mass and the WIMP–nucleon interaction operators beyond the
quark parton level. From the LHC we know that at least for relativistic protons the
dominant Higgs coupling is through the gluon content. In the Standard Model the
Higgs coupling to gluons is mediated by a top loop, which does not decouple for
large top masses. The fact that, in contrast, the top quark does decouple from the
nucleon mass will give us a non-trivial form factor for gluons.
Defining our quantum field theory framework, in proper QCD two terms
contribute to the nucleon mass: the valence quark masses accounted for in Eq. (6.5)
and the strong interaction, or gluons, leading to a binding energy. This view is
supported by the fact that pions, consisting of two quarks, are almost an order of
magnitude lighter than protons and neutrons, with three quarks. We can describe
both sources of the nucleon mass using the energy–momentum tensor T^μν as it
appears for example in the Einstein–Hilbert action in Eq. (1.15).
Scale invariance, or the lack of fundamental mass scales in our theory, implies
that the energy–momentum tensor is traceless. A non-zero trace of the energy–momentum
tensor indicates a change in the Lagrangian with respect to a scale
variation, where in our units a variation of the length scale and a variation of the
energy scale are equivalent. Lagrangians which are symmetric under such a scale
variation cannot include explicit mass terms, because those correspond to a fixed
energy scale.
In addition to the quark masses, for the general form of the nucleon mass given
in Eq. (6.8) we need to consider contributions from the running strong coupling to
the trace of the energy–momentum tensor. At one-loop order the running of αs with
the underlying energy scale p2 is given by
\alpha_s(p^2) = \frac{1}{b_0\, \log\dfrac{p^2}{\Lambda_\text{QCD}^2}}
\quad\text{with}\quad
b_0 = -\frac{1}{4\pi}\, \left( \frac{2 n_q}{3} - \frac{11}{3}\, N_c \right) , (6.9)
The scale Λ_QCD appears as the integration constant of the running; it ensures that we
do not take the logarithm of a dimensionful scale p². Physically, this scale is defined
by the point at which the strong coupling
explodes and we need to switch degrees of freedom. That occurs at positive energy
scales as long as b0 > 0, or as long as the gluons dominate the running of αs .
Because the running of the strong coupling turns the dimensionless parameter
αs into the dimensionful parameter ΛQCD , this mechanism is called dimensional
transmutation.
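We can make dimensional transmutation concrete with the one-loop formula of Eq. (6.9). In this sketch, Λ_QCD ≈ 0.2 GeV and n_q = 5 active flavors are assumed inputs:

```python
# Sketch: one-loop running coupling alpha_s(p^2) = 1 / (b0 log(p^2/Lambda^2))
# with b0 = -(2 nq/3 - 11 Nc/3)/(4 pi), as in Eq. (6.9).
import math

def alpha_s(p, lambda_qcd=0.2, nq=5, nc=3):
    b0 = -(2.0 * nq / 3.0 - 11.0 * nc / 3.0) / (4.0 * math.pi)
    return 1.0 / (b0 * math.log(p**2 / lambda_qcd**2))

print(alpha_s(91.2))  # at the Z mass: roughly 0.13 at one loop
print(alpha_s(1.0))   # near 1 GeV: the coupling grows large
```

Approaching Λ_QCD from above, the logarithm shrinks and the coupling explodes, which is where we have to switch degrees of freedom.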
The contribution of the running strong coupling to the nucleon mass is given
through the kinetic gluon term in the Lagrangian, combined with the momentum
variation of the strong coupling. Altogether we find
m_N\, \langle N|N\rangle = \sum_q m_q\, \langle N|\bar q q|N\rangle + \frac{1}{2\alpha_s}\, \frac{d\alpha_s}{d\log p^2}\, \langle N|\, G^a_{\mu\nu} G^{a\,\mu\nu}\, |N\rangle

= \sum_q m_q\, \langle N|\bar q q|N\rangle - \frac{\alpha_s b_0}{2}\, \langle N|\, G^a_{\mu\nu} G^{a\,\mu\nu}\, |N\rangle

= \sum_q m_q\, \langle N|\bar q q|N\rangle + \frac{\alpha_s}{8\pi}\, \left( \frac{2 n_q}{3} - \frac{11}{3}\, N_c \right) \langle N|\, G^a_{\mu\nu} G^{a\,\mu\nu}\, |N\rangle ,
(6.10)
again written at one loop and neglecting the anomalous dimension of the quark
fields. One complication in this formula is the appearance of all six quark fields in
the sum, suggesting that all quarks contribute to the nucleon mass. While this is true
for the up and down valence masses, and possibly for the strange mass, the three
heavier quarks hardly appear in the nucleon. Instead, they contribute to the nucleon
mass through gluon splitting or self energy diagrams in the gluon propagator. We
can compute this contribution in terms of a heavy quark effective theory, giving us
the leading contribution per heavy quark
\langle N|\bar q q|N\rangle = -\frac{\alpha_s}{12\pi\, m_q}\, \langle N|\, G^a_{\mu\nu} G^{a\,\mu\nu}\, |N\rangle + \mathcal{O}\left( \frac{1}{m_q^3} \right) \qquad\text{for } q = c, b, t . (6.11)
We can insert this result in the above expression and find the complete expression
for the nucleon mass operator
m_N\, \langle N|N\rangle = \sum_{u,d,s} m_q\, \langle N|\bar q q|N\rangle - \sum_{c,b,t} \frac{\alpha_s}{12\pi}\, \langle N|\, G^a_{\mu\nu} G^{a\,\mu\nu}\, |N\rangle
+ \frac{\alpha_s}{8\pi}\, \left( \frac{2\times 6}{3} - \frac{11}{3}\, N_c \right) \langle N|\, G^a_{\mu\nu} G^{a\,\mu\nu}\, |N\rangle

= \sum_{u,d,s} m_q\, \langle N|\bar q q|N\rangle + \frac{\alpha_s}{8\pi}\, \left( \frac{2\times 3}{3} - \frac{11}{3}\, N_c \right) \langle N|\, G^a_{\mu\nu} G^{a\,\mu\nu}\, |N\rangle .
(6.12)
Starting from the full beta function of the strong coupling this result implies that we
only need to consider the running due to the three light-flavor quarks and the gluon
itself for the nucleon mass prediction,
m_N\, \langle N|N\rangle = \sum_{u,d,s} m_q\, \langle N|\bar q q|N\rangle - \frac{\alpha_s\, b_0^{(u,d,s)}}{2}\, \langle N|\, G^a_{\mu\nu} G^{a\,\mu\nu}\, |N\rangle

\text{with}\quad b_0^{(u,d,s)} = -\frac{1}{4\pi}\, \left( \frac{2 n_\text{light}}{3} - \frac{11}{3}\, N_c \right) . (6.13)
This reflects a full decoupling of the heavy quarks in their contribution to the
nucleon mass. From the derivation it is clear that the same structure appears for
any number of light quarks defining our theory.
Exactly in the same way we now describe the WIMP–nucleon interaction in
terms of six quark flavors. The light quarks, including the strange quark, form
the actual quark content of the nucleon. Virtual heavy quarks occur through gluon
splitting at the one-loop level. In addition to the small Yukawa couplings of the light
quarks we know from LHC physics that we can translate the Higgs-top interaction
into an effective Higgs–gluon interaction. In the limit of large quark masses the
loop-induced coupling defined by the Feynman diagram
[Feynman diagram: the Higgs coupling to two gluons through a closed top loop]

is given by

\mathcal{L}_{ggH} \supset \frac{g_{ggH}}{v_H}\, H\, G_{\mu\nu} G^{\mu\nu} \quad\text{with}\quad g_{ggH} = \frac{\alpha_s}{12\pi} . (6.14)
In terms of an effective field theory the dimension-5 operator scales like 1/v and
not 1/mt . The reason is that the dependence on the top mass in the loop and on the
Yukawa coupling in the numerator cancel exactly in the limit of small momentum
transfer through the Higgs propagator. Unlike for the nucleon mass operator this
means that in the Higgs interaction the Yukawa coupling induces a non-decoupling
feature in our theory. Using this effective field theory language we can successively
6.1 Higgs Portal 135
compute the ggH n+1 coupling from the ggH n coupling via
g_{ggH^{n+1}} = m_q^{n+1}\, \frac{\partial}{\partial m_q}\, \frac{1}{m_q^n}\, g_{ggH^n} . (6.15)
This relation also holds for n = 0, which means it formally links the Higgs–
nucleon coupling operator to the nucleon mass operator in Eq. (6.13). The only
difference between the effective Higgs-gluon interaction at LHC energies and at
direct detection energies is that in direct detection all three quarks c, b, t contribute
to the effective interaction defined in Eq. (6.14).
Keeping this link in mind we see that the Higgs-mediated WIMP interaction
operator again consists of two terms
\sum_{u,d,s} \langle N|\, m_q\, H\, \bar q q\, |N\rangle - \sum_{c,b,t} \frac{\alpha_s}{12\pi}\, \langle N|\, H\, G^a_{\mu\nu} G^{a\,\mu\nu}\, |N\rangle . (6.16)
[Feynman diagrams: t-channel Higgs exchange between the dark scalar S and the light quarks, and between S and gluons through a heavy quark loop]
The Yukawa interaction, described by the first term in Eq. (6.16), has a form
similar to the nucleon mass in Eq. (6.13). Comparing the two formulas for light
quarks only we indeed find
\langle N|\, \sum_{u,d,s} m_q\, H\, \bar q q\, |N\rangle = H\, \sum_{u,d,s} m_q\, \langle N|\bar q q|N\rangle \overset{\text{Eq. (6.13)}}{=} H\, m_N\, \langle N|N\rangle \Big|_{u,d,s} .
(6.17)
This reproduces the simple recipe for computing the light-quark-induced WIMP–nucleon
interaction as proportional to the nucleon mass. The remaining, numerically
dominant gluonic term is defined in the so-called chiral limit m_{u,d,s} = 0. Because
of the non-decoupling behavior this contribution is independent of the heavy quark
masses.
The contribution to the nucleon mass comes from the gluon and the n_light light quark
loops, while the gluonic contribution to the nucleon–Higgs coupling is driven by
the n_heavy heavy quark loops. The boundary condition is n_light + n_heavy = 6. At
the energy scale of direct detection we can compensate for this mismatch between the
Higgs–nucleon coupling and the naive scaling of the nucleon mass with the nucleon
Yukawa interaction shown in Eq. (6.7). We simply include an additional factor
\sum_{c,b,t} \frac{m_q}{m_N}\, \langle N|\, H\, \bar q q\, |N\rangle = \frac{\dfrac{2 n_\text{heavy}}{3}}{\dfrac{11}{3}\, N_c - \dfrac{2 n_\text{light}}{3}}\; H\, \langle N|N\rangle \Big|_{c,b,t,g} , (6.19)
which we can estimate at leading order and at energy scales relevant for direct dark
matter searches to be
\frac{\dfrac{2 n_\text{heavy}}{3}}{\dfrac{11}{3}\, N_c - \dfrac{2 n_\text{light}}{3}} \;\overset{n_\text{light} = 3}{=}\; \frac{\dfrac{2\times 3}{3}}{11 - \dfrac{2\times 3}{3}} = \frac{2}{9} . (6.20)
This effect leads to a suppression of the already small Higgs–nucleon interaction at
low momentum transfer. The exact size of the suppression depends on the number of
active light quarks in our effective theory, which in turn depends on the momentum
transfer.
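The suppression factor of Eq. (6.20) only depends on flavor counting, which we can verify directly; this is a sketch using exact fractions:

```python
# Sketch: heavy-quark factor of Eqs. (6.19)-(6.20) as a function of the number
# of light and heavy flavors, (2 n_heavy/3) / (11 Nc/3 - 2 n_light/3).
from fractions import Fraction

def heavy_quark_factor(n_light, n_heavy, n_c=3):
    return Fraction(2 * n_heavy, 3) / (Fraction(11 * n_c, 3) - Fraction(2 * n_light, 3))

print(heavy_quark_factor(3, 3))  # 2/9, the value quoted in Eq. (6.20)
print(heavy_quark_factor(4, 2))  # counting the charm as light instead: 4/25
```

The second call illustrates the dependence on the number of active light quarks mentioned in the text.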
At the parton level, the weakly interacting part of the calculation of the nucleon–
WIMP scattering rate closely follows the calculation of WIMP annihilation in
Eq. (4.7). In the case of direct detection the valence quarks in the nucleons couple
through a t-channel Higgs to the dark matter scalar S. We account for the parton
nature of the three relevant heavy quarks by writing the nucleon Yukawa coupling
as fN mN × 2/9,
\mathcal{M} = \bar u(k_2)\; \frac{-2 i f_N m_N}{9\, v_H}\; u(k_1)\; \frac{-i}{(k_1 - k_2)^2 - m_H^2}\; \left( -2 i \lambda_3 v_H \right) . (6.21)
For an incoming and outgoing fermion the two spinors are ū and u. As long as the
Yukawa coupling is dominated by the heavy quarks, it will be the same for neutrons
and protons, i.e. Mp = Mn . We have to square this matrix element, paying attention
to the spinors v and u, and then sum over the spins of the external fermions. In this
case we already know that we are only interested in scattering in the low-energy
limit, i.e. |(k_1 - k_2)^2| \ll m_N^2 \ll m_H^2,
\sum_\text{spin} |\mathcal{M}|^2
= \frac{16}{81}\, \lambda_3^2 f_N^2 m_N^2\; \text{Tr}\left[ \left( \sum_\text{spin} u(k_2) \bar u(k_2) \right) \left( \sum_\text{spin} u(k_1) \bar u(k_1) \right) \right] \frac{1}{\left[ (k_1 - k_2)^2 - m_H^2 \right]^2}

= \frac{16}{81}\, \lambda_3^2 f_N^2 m_N^2\; \text{Tr}\left[ (\slashed{k}_2 + m_N \mathbb{1})(\slashed{k}_1 + m_N \mathbb{1}) \right] \frac{1}{\left[ (k_1 - k_2)^2 - m_H^2 \right]^2}

= \frac{32}{81}\, \lambda_3^2 f_N^2 m_N^2\, \left( 2 k_1\cdot k_2 + 2 m_N^2 \right) \frac{1}{\left[ (k_1 - k_2)^2 - m_H^2 \right]^2}

= \frac{32}{81}\, \lambda_3^2 f_N^2 m_N^2\, \left( -(k_1 - k_2)^2 + 4 m_N^2 \right) \frac{1}{\left[ (k_1 - k_2)^2 - m_H^2 \right]^2}

\approx \frac{128}{81}\, \lambda_3^2 f_N^2\, \frac{m_N^4}{m_H^4}
\quad\Rightarrow\quad
\overline{|\mathcal{M}|^2} = \frac{64}{81}\, \lambda_3^2 f_N^2\, \frac{m_N^4}{m_H^4} , (6.22)
where in the last step we assume m_S \gg m_N. For WIMP–Xenon scattering this gives
us
The two key ingredients to this expression can be easily understood: the suppression
1/m4H appears after we effectively integrate out the Higgs in the t-channel, and the
high power of m_N^4 occurs because in the low-energy limit the Higgs coupling to
fermions involves a chirality flip and hence one power of m_N for each coupling. The
angle-independent matrix element in the low-energy limit can easily be translated
into a spectrum of the scattering angle, which will then give us the recoil spectrum,
if desired. We limit ourselves to the total rate, assuming that the appropriate WIMP
mass range ensures that the total cross section gets converted into measurable
recoil. This approach reflects the fact that we consider the kinematics of scattering
processes and hence the existence of phase space a topic for experimental lectures.
Next, we can ask which range of Higgs portal parameters with the correct
relic density, as shown in Fig. 4.1, is accessible to direct detection experiments.
According to Eq. (6.24) the corresponding cross section first becomes small when
λ_3 \ll 1, which means m_S \approx m_H/2 with the possibility of explaining the Fermi
galactic center excess. Second, the direct detection cross section is suppressed for
heavy dark matter and leads to a scaling λ_3 \propto m_S.
From Eq. (4.21) we know that a constant annihilation rate leading to the correct
relic density also corresponds to λ3 ∝ mS . However, while the direct detection rate
features an additional suppression through the nucleon mass mN , the annihilation
rate benefits from several subleading annihilation channels, like for example the
annihilation to two gauge bosons or two top quarks. This suggests that for large mS
the two lines of constant cross sections in the λ3 -mS plane run almost in parallel,
with a slightly smaller slope for the annihilation rate. This is exactly what we
observe in Fig. 4.1, leaving heavy Higgs portal dark matter with m_S \gtrsim 300 GeV
a viable model for all observations related to cold dark matter. This minimal dark
matter mass constraint rapidly increases with new direct detection experiments
coming online. On the other hand, from our discussion of the threshold behavior
in Sect. 3.5 it should be clear that we can effectively switch off all direct detection
constraints by making the scalar Higgs mediator a pseudo-scalar.
Finally, we can modify our model and the quantitative link between the relic
density and direct detection, as illustrated in Fig. 4.1. The typical renormalizable
Higgs portal includes a scalar dark matter candidate. However, if we are willing
to include higher-dimensional terms in the Lagrangian we can combine the Higgs
portal with fermionic and vector dark matter. This is interesting in view of the
velocity dependence discussed in Sect. 3.4. The annihilation of dark matter fermions
is velocity-suppressed at threshold, so larger dark matter couplings predict the
observed relic density. Because direct detection is not sensitive to the annihilation
threshold, it will be able to rule out even the mass peak region for fermionic dark
matter (Fig. 6.1).
Fig. 6.1 Relic density (labelled PLANCK) vs direct dark matter detection constraints. The dark
matter agent is switched from a real scalar (left) to a fermion (right). Figure from Ref. [1]
6.2 Supersymmetric Neutralinos 139
\mathcal{M} = \frac{g_{NN\tilde\chi_1^0\tilde\chi_1^0}}{\Lambda^2}\; \bar v_{\tilde\chi_1^0} v_{\tilde\chi_1^0}\; \bar u_N u_N

\sum_\text{spins} |\mathcal{M}|^2 = \frac{g^2_{NN\tilde\chi_1^0\tilde\chi_1^0}}{\Lambda^4}\; \text{Tr}\left[ (\slashed{p}_2 - m_{\tilde\chi_1^0} \mathbb{1})(\slashed{p}_1 - m_{\tilde\chi_1^0} \mathbb{1}) \right] \times \text{Tr}\left[ (\slashed{k}_2 + m_N \mathbb{1})(\slashed{k}_1 + m_N \mathbb{1}) \right]

\approx 64\, g^2_{NN\tilde\chi_1^0\tilde\chi_1^0}\, \frac{m^2_{\tilde\chi_1^0}\, m_N^2}{\Lambda^4}
\quad\Rightarrow\quad
\overline{|\mathcal{M}|^2} = 16\, g^2_{NN\tilde\chi_1^0\tilde\chi_1^0}\, \frac{m^2_{\tilde\chi_1^0}\, m_N^2}{\Lambda^4} .
(6.25)
\sigma^\text{SI}(\tilde\chi_1^0 N \to \tilde\chi_1^0 N) \approx \frac{g^2_{NN\tilde\chi_1^0\tilde\chi_1^0}\, m_N^2}{\pi\, m_{h^0}^4} . (6.26)
As for the Higgs portal case in Eq. (6.24) the rate is suppressed by the mediator
mass to the fourth power. The lower power of m_N^2 appears only because we absorb
the Yukawa coupling and the form factor into the effective coupling,

g_{NN\tilde\chi_1^0\tilde\chi_1^0} = \frac{2 f_N m_N}{9}\; g_{h^0\tilde\chi_1^0\tilde\chi_1^0}
\propto \frac{2 f_N m_N}{9}\, \left( g N_{11} - g N_{12} \right) \left( \sin\alpha\, N_{13} + \cos\alpha\, N_{14} \right) , (6.27)
following Eq. (4.43). We see that this scaling is identical to the Higgs portal case
in Eq. (6.24), but with an additional suppression through the difference in mixing
angles in the neutralino and Higgs sectors.
However, in supersymmetric models the dark matter mediator will often be the
Z-boson, because the interaction g_{NN\tilde\chi_1^0\tilde\chi_1^0} is then not suppressed by a factor of the kind
m_N/v. In this case we need to describe a (transverse) vector coupling between
the WIMP and the nucleon in our four-fermion interaction. Following exactly
the same argument as for the scalar exchange we can look at a Z-mediated interaction
between a (Dirac) fermion χ and the nucleons,
\mathcal{M} = \frac{g_{NN\chi\chi}}{\Lambda^2}\; \bar v_\chi \gamma_\mu v_\chi\; \bar u_N \gamma^\mu u_N

\sum_\text{spins} |\mathcal{M}|^2 = \frac{g^2_{NN\chi\chi}}{\Lambda^4}\; \text{Tr}\left[ (\slashed{p}_2 - m_\chi)\gamma_\mu (\slashed{p}_1 - m_\chi)\gamma_\nu \right] \times \text{Tr}\left[ (\slashed{k}_2 + m_N)\gamma^\mu (\slashed{k}_1 + m_N)\gamma^\nu \right]

\approx \frac{g^2_{NN\chi\chi}}{\Lambda^4}\, (8 m_\chi^2)(8 m_N^2) \approx 64\, g^2_{NN\chi\chi}\, \frac{m_\chi^2 m_N^2}{\Lambda^4}

\Rightarrow\quad \sigma^\text{SI}(\chi N \to \chi N) \approx \frac{4\, g^2_{NN\chi\chi}\, m_N^2}{\pi\, \Lambda^4} .
(6.28)
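The threshold limit quoted above, Tr[...] × Tr[...] → (8 m_χ²)(8 m_N²), can be verified numerically with explicit Dirac matrices. This is our own cross-check, not part of the original derivation:

```python
# Verification sketch: contract the two traces of the vector-mediated matrix
# element at threshold (all particles at rest) and compare to 64 m_chi^2 m_N^2.
import numpy as np

# Dirac matrices in the Dirac basis, mostly-minus metric
I2 = np.eye(2)
sig = [np.array([[0, 1], [1, 0]]),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]
gamma = [np.block([[I2, 0 * I2], [0 * I2, -I2]])] + \
        [np.block([[0 * I2, s], [-s, 0 * I2]]) for s in sig]
metric = np.diag([1.0, -1.0, -1.0, -1.0])

def slash(p):
    # p_mu gamma^mu with lowered index via the metric
    return sum(metric[m, m] * p[m] * gamma[m] for m in range(4))

m_chi, m_n = 100.0, 0.94  # GeV, threshold kinematics
p = np.array([m_chi, 0.0, 0.0, 0.0])
k = np.array([m_n, 0.0, 0.0, 0.0])

total = 0.0
for mu in range(4):
    for nu in range(4):
        t_chi = np.trace((slash(p) - m_chi * np.eye(4)) @ gamma[mu]
                         @ (slash(p) - m_chi * np.eye(4)) @ gamma[nu])
        t_n = np.trace((slash(k) + m_n * np.eye(4)) @ gamma[mu]
                       @ (slash(k) + m_n * np.eye(4)) @ gamma[nu])
        total += metric[mu, mu] * metric[nu, nu] * t_chi * t_n  # index contraction

print(total.real, 64 * m_chi**2 * m_n**2)  # the two numbers agree
```

Only the μ = ν = 0 component survives at rest, which is the statement that the vector coupling counts dark matter particles rather than their spins.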
For the Majorana neutralino the vector coupling vanishes, and we are left with the
axial vector current,

\sum_\text{spins} |\mathcal{M}|^2 = \frac{g^2_{NN\tilde\chi_1^0\tilde\chi_1^0}}{\Lambda^4}\; \text{Tr}\left[ (\slashed{p}_2 - m_{\tilde\chi_1^0})\, \gamma_5\gamma_\mu\, (\slashed{p}_1 - m_{\tilde\chi_1^0})\, \gamma_5\gamma_\nu \right] \times \text{Tr}\left[ (\slashed{k}_2 + m_N)\, \gamma_5\gamma^\mu\, (\slashed{k}_1 + m_N)\, \gamma_5\gamma^\nu \right] . (6.29)
For axial vector couplings the current is defined by γμ γ5 . This means it depends
on the chirality or the helicity of the fermions. The spin operator is defined in terms
of the Dirac matrices as s = γ5 γ 0 γ . This indicates that the axial vector coupling
[Figure: spin-independent cross section σ^SI(χ̃₁⁰n → χ̃₁⁰n) in the M₁–M₂ plane (both in TeV), binned as <10⁻⁵⁰ | 10⁻⁴⁹ | 10⁻⁴⁸ | 10⁻⁴⁷ | 10⁻⁴⁶ | 10⁻⁴⁵ | >10⁻⁴⁴ cm², for tan β = 10 (left) and tan β = 2 (right), with projected exclusions from XENON1T and LZ.]
Fig. 6.2 Left: spin-independent nucleon-scattering cross-section for relic neutralinos. Right: relic neutralino exclusions from XENON100 and LUX and
prospects from XENON1T and LZ. The boxed out area denotes the LEP exclusion. Figure from Ref. [2]
Fig. 6.3 Spin-independent WIMP–nucleon cross section limits and projections (solid, dotted,
dashed curves) and hints for WIMP signals (shaded contours) and projections (dot and dot-dashed
curves) for direct detection experiments. The yellow region indicates dangerous backgrounds from
solar, atmospheric, and diffuse supernova neutrinos. Figure from Ref. [3]
actually is a coupling to the spin of the nucleon. This is why the result is called a
spin-dependent cross section, which for each nucleon reads
\sum_\text{spins} |\mathcal{M}|^2 \approx 16 \times 4\, g^2_{NN\tilde\chi_1^0\tilde\chi_1^0}\, \frac{m^2_{\tilde\chi_1^0}\, m_N^2}{\Lambda^4}

\Rightarrow\quad \sigma^\text{SD}(\tilde\chi_1^0 N \to \tilde\chi_1^0 N) \approx \frac{4\, g^2_{NN\tilde\chi_1^0\tilde\chi_1^0}\, m_N^2}{\pi\, \Lambda^4} . (6.30)
Again, we can read off Eq. (4.43) that for the light quarks q = u, d, s the effective
coupling should have the form
g_{NN\tilde\chi_1^0\tilde\chi_1^0} = g_{Z\tilde\chi_1^0\tilde\chi_1^0}\; g_{Zqq} \propto g^2\, \left( |N_{13}|^2 - |N_{14}|^2 \right) . (6.31)
References
1. Arcadi, G., Dutra, M., Ghosh, P., Lindner, M., Mambrini, Y., Pierre, M., Profumo, S., Queiroz,
F.S.: The waning of the WIMP? A review of models, searches, and constraints. Eur. Phys. J. C
78(3), 203 (2018). arXiv:1703.07364 [hep-ph]
2. Bramante, J., Desai, N., Fox, P., Martin, A., Ostdiek, B., Plehn, T.: Towards the final word on
neutralino dark matter. Phys. Rev. D 93(6), 063525 (2016). arXiv:1510.03460 [hep-ph]
3. Feng, J.L., et al.: Planning the Future of U.S. Particle Physics (Snowmass 2013): Chapter 4:
Cosmic Frontier. arXiv:1401.6085 [hep-ex]
Chapter 7
Collider Searches
Collider searches for dark matter rely on two properties of the dark matter particle:
first, the new particles have to couple to the Standard Model. This can be either
a direct coupling, for example to the colliding leptons and quarks, or an indirect
coupling through a mediator. Second, we need to measure traces of particles which
interact with the detectors as weakly as for example neutrinos do. And unlike
dedicated neutrino detectors, their collider counterparts do not include hundreds
of cubic meters of interaction material. Under those boundary conditions collider
searches for dark matter particles will benefit from several advantages:
1. we know the kinematic configuration of the dark matter production process.
This is linked to the fact that most collider detectors are so-called multi-purpose
detectors which can measure a great number of observables;
2. the large number of collisions (parametrized by the luminosity L ) can give us a
large number of dark matter particles to analyze. This allows us to for example
measure kinematic distributions which reflect the properties of the dark matter
particle;
3. all background processes and all systematic uncertainties can be studied, under-
stood, and simulated in detail. Once an observation of a dark matter particle
passes all conditions the collider experiments require for a discovery, we will
know that we discovered such a new particle. Otherwise, if an anomaly turns out
to not pass these conditions we have, at least in my lifetime, always been able to
identify what the problem was.
One weakness we should always keep in mind is that a particle which does not decay
while crossing the detector and which interacts weakly enough to not leave a trace
does not have to be stable on cosmological time scales. To make this statement we
need to measure enough properties of the dark matter particle to for example predict
its relic density the way we discuss it in Chap. 3.
The key observable we can compute and analyze at colliders is the number of events
expected for a certain production and decay process in a given time interval. The
number of events is the product of the luminosity L measured for example in
inverse femtobarns, the total production cross section measured in femtobarns, and
the detection efficiency measured in per-cent,¹

N_\text{events} = \sigma \cdot L \cdot \epsilon . (7.1)
This way the event rate is split into a collider-specific number describing the initial
state, a process-specific number describing the physical process, and a detector-
specific efficiency for each final state particle. The efficiency includes for example
phase-space dependent cuts defining the regions of sensitivity of a given experiment,
as well as the so-called trigger requirements defining which events are saved and
looked at. This structure holds for every collider.
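The product structure of the event rate can be made concrete in a short numerical sketch; the luminosity, cross section, and efficiency values below are invented round numbers for illustration, not measurements.

```python
# Expected number of events: N = L * sigma * product of efficiencies.
# Illustrative inputs only: a 139/fb dataset, a 10 fb signal cross
# section, and two per-final-state-particle detection efficiencies.
luminosity = 139.0          # integrated luminosity in fb^-1
cross_section = 10.0        # total production cross section in fb
efficiencies = [0.9, 0.7]   # per final-state-particle efficiencies

n_events = luminosity * cross_section
for eff in efficiencies:
    n_events *= eff

print(f"expected events: {n_events:.0f}")  # 139 * 10 * 0.9 * 0.7 = 876
```

The same three factors appear for every collider; only the numbers change.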
When it comes to particles with electroweak interactions the most influential
experiments were ALEPH, OPAL, DELPHI, and L3 at the Large Electron-Positron
Collider (LEP) at CERN. It ran from 1989 until 2000, first with an e+ e− energy
right on the Z pole, and then with energies up to 209 GeV. Its lifetime-integrated
luminosity is 1 fb−1. The results from running on the Z pole are easily summarized:
the SU (2)L gauge sector shows no hints for deviations from the Standard Model
predictions. Most of these results are based on an analysis of the Breit–Wigner
propagator of the Z boson which we introduce in Eq. (4.11),
$$\sigma(e^+e^- \to Z) \propto \frac{E_{e^+e^-}^2}{\big(E_{e^+e^-}^2 - m_Z^2\big)^2 + m_Z^2\,\Gamma_Z^2}\ . \qquad (7.2)$$
If we know the energy of the incoming e+ e− system, we can plot the cross
section as a function of E_{e+e−} and measure the Z mass and the Z width.
From this Z mass measurement in relation to the W mass and the vacuum
expectation value vH = 246 GeV we can extract the top quark and Higgs masses,
because these particles contribute to quantum corrections of the Z properties. The
total Z width includes a partial width from the decay Z → ν ν̄, with a branching
ratio around 20%. It comes from three generations of light neutrinos and is much
larger than for example the 3.4% branching ratio of the decay Z → e+ e− . Under
the assumption that only neutrinos contribute to the invisible Z decays we can
translate the measurement of the partial width into a measurement of the number
1 Cross sections and luminosities are two of the few observables which we do not measure in eV.
7.1 Lepton Colliders 147
of light neutrinos, giving 2.984 ± 0.008. Alternatively, we can assume that there are
three light neutrinos and use this measurement to constrain light dark matter with
couplings to the Z that would lead to an on-shell decay, for example Z → χ̃10 χ̃10
in our supersymmetric model. If a dark matter candidate relies on its electroweak
couplings to annihilate to the observed relic density, this limit means that any WIMP
has to be heavier than
$$m_{\tilde\chi_1^0,S} > \frac{m_Z}{2} = 45~\text{GeV}\ . \qquad (7.4)$$
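The neutrino-counting argument itself is a one-line estimate: the measured invisible Z width divided by the Standard Model width per neutrino species. The two widths below are the measured LEP/PDG-style values, quoted here only as rough inputs.

```python
# Number of light neutrino species from the invisible Z width,
# N_nu = Gamma_inv / Gamma(Z -> nu nubar).  Inputs in MeV are the
# measured invisible width and the SM width per neutrino species.
gamma_inv = 499.0        # measured invisible width of the Z
gamma_nunu_sm = 167.2    # SM prediction per light neutrino species

n_nu = gamma_inv / gamma_nunu_sm
print(f"N_nu = {n_nu:.2f}")   # close to the LEP result of 2.984
```

Any additional light state with a Z coupling, like a light neutralino, would push this ratio above three.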
The results from the higher-energy runs are equally simple: there is no sign of new
particles which could be singly or pair-produced in e+ e− collisions. The Feynman
diagram for the production of a pair of new particles, which could be dark matter
particles, is
[Feynman diagram: e+ e− annihilation to a χχ pair through an s-channel mediator]
The experimental results mean that it is very hard to postulate new particles
which couple to the Z boson or to the photon. The Feynman rules for the
corresponding f f¯Z and f f¯γ couplings are
$$-i\gamma^\mu\big(\ell\, P_L + r\, P_R\big) \qquad\text{with}\qquad
\begin{aligned}
&\ell = \frac{e}{s_w c_w}\left(T_3 - 2Q s_w^2\right), \quad r = \ell\big|_{T_3=0} && (Zf\bar f)\\
&\ell = r = Q e && (\gamma f\bar f)\ ,
\end{aligned} \qquad (7.5)$$
with the isospin quantum number T_3 = ±1/2 and s_w² ≈ 1/4. Obviously, a pair of
charged fermions will always be produced through an s-channel photon. If a particle
has SU (2)L quantum numbers, the Z-coupling can be cancelled with the help of
the electric charge, which leads to photon-induced pair production. Dark matter
particles cannot be charged electrically, so for WIMPs there will exist a production
process with a Z-boson in the s-channel. This result is important for co-annihilation
in a more complex dark matter sector. For example in our supersymmetric model
the charginos couple to photons, which means that they have to be heavier than
$$m_{\tilde\chi_1^\pm} > \frac{E_{e^+e^-}^{\max}}{2} = 104.5~\text{GeV}\ , \qquad (7.6)$$
in order to escape LEP constraints. The problem of producing and detecting a pair
of dark matter particles at any collider is that if we do not produce anything else
148 7 Collider Searches
those events with ‘nothing visible happening’ are hard to identify. Lepton colliders
have one big advantage over hadron colliders, as we will see later: we know the
kinematics of the initial state. This means that if, for example, we produce one
invisibly decaying particle we can reconstruct its four-momentum from the initial
state momenta and the final-state recoil momenta. We can then check whether for the
majority of events the on-shell condition p2 = m2 with a certain mass is fulfilled.
This is how OPAL managed to extract limits on Higgs production in the process
e+ e− → ZH without making any assumptions about the Higgs decay, notably
including a decay to two invisible states. Unfortunately, because this analysis did
not reach the observed Higgs mass of 126 GeV, it does not constrain our dark matter
candidates in Higgs decays.
The pair production process
$$e^+e^- \to \gamma^* Z^* \to \chi\chi \qquad (7.7)$$
is invisible by itself, so we search for it with an additional photon radiated off the
initial state, e+ e− → χχγ.
Experimentally, this photon recoils against the two dark matter candidates, defining
the signature as a photon plus missing momentum. A Feynman diagram for the
production of a pair of dark matter particles and a photon through a Z-mediator is
[Feynman diagram: e+ e− → χχγ with the photon radiated off the incoming electron and the dark matter pair produced through an s-channel Z]
Because the photon can only be radiated off the incoming electrons, this
process is often referred to as initial state radiation (ISR). Reconstructing the four-
momentum of the photon allows us to also reconstruct the four-momentum of the
pair of dark matter particles. The disadvantage is that a hard photon is only present
in a small fraction of all e+ e− collisions for example at LEP. This is one of the few
instances where the luminosity or the size of the cross section makes a difference
at LEP. Normally, the relatively clean e+ e− environment allows us to build very
efficient and very precise detectors, which altogether allows us to separate a signal
from a usually small background cleanly. For example, the chargino mass limit in
Eq. (7.6) applies to a wide set of new particles which decay into leptons and missing
energy and is hard to avoid.
7.2 Hadron Colliders and Mono-X 149
We should mention that for a long time people have discussed building another
e+ e− collider. Searching for new particles with electroweak interactions is one of
the main motivations. Proposals range from a circular Higgs factory with limited
energy due to energy loss in synchrotron radiation (FCC-ee/CERN or CEPC/China)
to a linear collider with an energy up to 1 TeV (ILC/Japan), to a multi-TeV linear
collider with drive-beam technology (CLIC/CERN).
Historically, hadron colliders have had great success in discovering new, massive
particles. This included UA1 and UA2 at SPS/CERN discovering the W and Z
bosons, CDF and D0 at the Tevatron/Fermilab discovering the top quark, and most
recently ATLAS and CMS at the LHC with their Higgs discovery. The simple
reason is that protons are much heavier than electrons, which makes it easier to
store large amounts of kinetic energy and release them in a collision. On the other
hand, hadron collider physics is much harder than lepton collider physics, because
the experimental environment is more complicated, there is hardly any process with
negligible backgrounds, and calculations are generically less precise.
This means that at the LHC we need to consider two kinds of processes. The first
involves all known particles, like electrons or W and Z bosons, or the top quark, or
even the Higgs boson. These processes we call backgrounds, and they are described
by QCD. The Higgs boson is in the middle of a transition to a background; only a
few years ago it was the most famous example of a signal. By definition, signals
are very rare compared to backgrounds. As an example, Fig. 7.1 shows that at the
LHC the production cross section for a pair of bottom quarks is larger than 105 nb
or 1011 fb, the typical production rate for W or Z bosons ranges around 200 nb or
2 · 108 fb, the rate for a pair of 500 GeV supersymmetric gluinos would have been
4 · 104 fb.
One LHC aspect we have to mention in the context of dark matter searches is
the trigger. At the LHC we can only save and study a small number of all events.
This means that we have to decide very fast if an event has the potential of being
interesting in the light of the physics questions we are asking at the LHC; only these
events we keep. For now we can safely assume that above an energy threshold we
will keep all events with leptons or photons, plus, if at all possible, events with
missing energy, like neutrinos in the Standard Model and dark matter particles in
new physics models, and events with high-energy jets coming from resonance decays.
When we search for dark matter particles at hadron colliders like the LHC,
these analyses cannot rely on our knowledge of the initial state kinematics. What
we know is that in the transverse plane the incoming partons add to zero three-
momentum. In contrast, we are missing the necessary kinematic information in the
beam direction. This means that dark matter searches always rely on production
with another particle, leading to un-balanced three-momenta in the plane transverse
to the beam direction. This defines an observable missing transverse momentum
Fig. 7.1 Production rates for signal and background processes at hadron colliders. The disconti-
nuity is due to the Tevatron being a proton–antiproton collider while the LHC is a proton–proton
collider. The two colliders correspond to the x-axis values of 2 TeV and something between 7 and
14 TeV. Figure from Ref. [1]
three-vector with two relevant dimensions. The missing transverse energy is the
absolute value of this two-dimensional vector. The big problem with missing
transverse momentum is that it relies on reconstructing the entire recoil. This causes
several experimental problems:
1. there will always be particles in events which are not observed in the calorime-
ters. For example, a particle can hit a support structure of the detector, generating
fake missing energy;
for all x. The proton consists of uud quarks, plus quantum fluctuations either
involving gluons or quark–antiquark pairs. The expectation values for up- and down-
quarks have to fulfill
$$\int_0^1 dx\,\big(f_u(x) - f_{\bar u}(x)\big) = 2\ , \qquad \int_0^1 dx\,\big(f_d(x) - f_{\bar d}(x)\big) = 1\ . \qquad (7.10)$$
Finally, the proton momentum has to be the sum of all parton momenta, defining the
QCD sum rule
$$\sum_i \langle x_i \rangle = \int_0^1 dx\; x\left(\sum_q f_q(x) + \sum_{\bar q} f_{\bar q}(x) + f_g(x)\right) = 1\ . \qquad (7.11)$$
Evaluating the quark and antiquark contributions to this sum alone gives roughly
1/2, which means that half of the proton momentum is carried by gluons.
Using the parton densities we can compute the hadronic cross section,
$$\sigma_{\text{tot}} = \sum_{ij} \int_0^1 dx_1 \int_0^1 dx_2\; f_i(x_1)\, f_j(x_2)\, \hat\sigma_{ij}(x_1 x_2 S)\ , \qquad (7.12)$$
where i, j are the incoming partons. The partonic energy of the scattering process
is s = x₁x₂S, with the current LHC proton–proton collision energy of √S = 13 TeV.
The partonic cross section includes energy–momentum conservation.
On the parton level, the analogy to photon radiation in e+ e− production will be
dark matter production together with a quark or a gluon. Two Feynman diagrams
for this mono-jet signature with an unspecified mediator are
[Feynman diagrams: dark matter pair production q q̄ → χχ with an additional quark or gluon radiated off the initial state, through an unspecified mediator]
The propagator of the intermediate quark, which radiates a massless parton with momentum k₂ off an incoming parton with momentum k₁, scales like
$$\frac{1}{(k_1 - k_2)^2} = \frac{1}{2}\,\frac{1}{|\vec k_1||\vec k_2|\cos\theta_{12} - k_1^0 k_2^0} = \frac{1}{2k_1^0 k_2^0}\,\frac{1}{\cos\theta_{12} - 1}\ . \qquad (7.13)$$
This propagator diverges when the radiated parton is soft (k20 → 0) or collinear
with the incoming parton (θ12 → 0). Phenomenologically, the soft divergence is
less dangerous, because the LHC experiments can only detect any kind of particle
above a certain momentum or transverse momentum threshold. The actual pole in
the collinear divergence gets absorbed into a re-definition of the parton densities
fq,g (x), as they appear for example in the hadronic cross section of Eq. (7.12). This
so-called mass factorization is technically similar to a renormalization procedure
for example of the strong coupling, except that renormalization absorbs ultraviolet
divergences and works on the fundamental Lagrangian level [2]. One effect of this
re-definition of the parton densities is that relative to the original definition the quark
and gluon densities mix, which means that the two Feynman diagrams shown above
cannot actually be separated on a consistent quantum level.
Experimentally, the scattering or polar angle θ12 is not the variable we actually
measure. The reason is that it is not boost invariant and that we do not know
the partonic rest frame in the beam direction. Instead, we can use two standard
kinematic variables,
$$t = -s\left(1 - \frac{m_{\chi\chi}^2}{s}\right)\frac{1-\cos\theta_{12}}{2} \qquad \text{(Mandelstam variable)}$$
$$p_T^2 = s\left(1 - \frac{m_{\chi\chi}^2}{s}\right)^2 \frac{1-\cos\theta_{12}}{2}\,\frac{1+\cos\theta_{12}}{2} \qquad \text{(transverse momentum)}\ . \qquad (7.14)$$
Comparing the two forms we see that the transverse momentum is symmetric
under the switch cos θ12 ↔ − cos θ12 , which in terms of the Mandelstam variables
corresponds to t ↔ u. From Eq. (7.14) we see that the collinear divergence appears
as a divergence of the partonic transverse momentum distribution,
$$\frac{d\sigma_{\chi\chi j}}{dp_{T,j}} \propto |\mathcal{M}_{\chi\chi j}|^2 \propto \frac{1}{t} \propto \frac{1}{p_{T,j}^2}\ . \qquad (7.15)$$
The resulting total rate includes a collinear logarithm, schematically
$$\sigma_{\chi\chi j} \propto \int_{p_{T,j}^{\min}}^{p_{T,j}^{\max}} \frac{dp_{T,j}}{p_{T,j}} \propto \log\frac{p_{T,j}^{\max}}{p_{T,j}^{\min}}\ . \qquad (7.16)$$
For an integration over the full phase space with the lower limit p_{T,j}^min = 0 this
logarithm is divergent. When we apply an experimental cut to generate for example
a value of p_{T,j}^min = 10 GeV, the logarithm gets large, because p_{T,j}^max ∼ 2m_χ is given
by the typical energy scales of the scattering process. When we absorb the collinear
divergence into re-defined parton densities and use the parton shower to enforce and
simulate the correct behavior
$$\frac{d\sigma_{\chi\chi j}}{dp_{T,j}} \;\xrightarrow{\;p_{T,j}\to 0\;}\; 0\ , \qquad (7.17)$$
the large collinear logarithm in Eq. (7.16) gets re-summed to all orders in pertur-
bation theory. However, over a wide range of values the transverse momentum
distribution inherits the collinearly divergent behavior. This means that most jets
radiated from incoming partons appear at small transverse momenta, and even after
including the parton shower regulator the collinear logarithm significantly enhances
the probability to radiate such collinear jets. The same is (obviously) true for the
initial state radiation of photons. The main difference is that for the photon process
we can neglect the amplitude with an initial state photon due to the small photon
parton density.
Once we know that at the LHC we can generally look for the production of dark
matter particles with an initial state radiation object, we can study different mono-X
channels. Some example Feynman diagrams for mono-jet, mono-photon, and mono-
Z production are
[Feynman diagrams: mono-jet q q̄ → χχ g, mono-photon q q̄ → χχ γ, and mono-Z q q̄ → χχ Z(→ f f̄) production]
For the radiated Z-boson we need to specify a decay. While hadronic decays
Z → q q̄ come with a large branching ratio, we need to ask what they add to the
universal mono-jet signature. Leptonic decays like Z → μμ can help in difficult
experimental environments, but are suppressed by a branching ratio of 3.4% per
lepton generation. Mono-W events can occur through initial state radiation when
we use a q q̄′ initial state to generate a hard q q̄ scattering. Finally, mono-Higgs
signatures obviously make no sense for initial state radiation. From the similarity of
the above Feynman diagrams we can first assume that at least in the limit mZ → 0
the total rates for the different mono-X processes relative to the mono-jet rate scale
like
$$\frac{\sigma_{\chi\chi\gamma}}{\sigma_{\chi\chi j}} \approx \frac{\alpha\, Q_q^2}{\alpha_s C_F} \approx \frac{1}{40}$$
$$\frac{\sigma_{\chi\chi\mu\mu}}{\sigma_{\chi\chi j}} \approx \frac{\alpha\, Q_q^2\, s_w^2}{\alpha_s C_F}\,\text{BR}(Z\to\mu\mu) \approx \frac{1}{4000}\ . \qquad (7.18)$$
The actual suppression of the mono-Z channel is closer to 10−4 , once we include
the Z-mass suppression through the available phase space. In addition, the similar
Feynman diagrams also suggest that the kinematic distributions of the different
mono-X channels scale in the same universal way, up to the detector acceptances.
Here, the suppression of the mono-photon is stronger, because the rapidity coverage
of the detector for jets extends to |η| < 4.5, while photons rely on an efficient
electromagnetic calorimeter with |η| < 2.5. On the other hand, photons can be
detected to significantly smaller transverse momenta than jets.
Note that the same scaling as in Eq. (7.18) applies to the leading mono-X
backgrounds, namely Z → νν̄ production in association with a jet, a photon, or a
leptonically decaying Z,
possibly with the exception of mono-Z production, where the hard process and the
collinear radiation are now both described by Z-production. This means that the
signal scaling of Eq. (7.18) also applies to the backgrounds,
$$\frac{\sigma_{\nu\nu\gamma}}{\sigma_{\nu\nu j}} \approx \frac{\alpha\, Q_q^2}{\alpha_s C_F} \approx \frac{1}{40}\ . \qquad (7.21)$$
If our discovery channel is statistics limited, the significances nσ for the different
channels are given in terms of the luminosity, efficiencies, and the cross sections
$$n_{\sigma,j} = \epsilon_j \sqrt{L}\,\frac{\sigma_{\chi\chi j}}{\sqrt{\sigma_{\nu\nu j}}} \qquad\Rightarrow\qquad n_{\sigma,\gamma} = \epsilon_\gamma \sqrt{L}\,\frac{\sigma_{\chi\chi\gamma}}{\sqrt{\sigma_{\nu\nu\gamma}}} \approx \frac{1}{40}\,\epsilon_\gamma\sqrt{L}\,\frac{\sigma_{\chi\chi j}}{\frac{1}{\sqrt{40}}\sqrt{\sigma_{\nu\nu j}}} = \frac{1}{6.3}\,\frac{\epsilon_\gamma}{\epsilon_j}\,n_{\sigma,j}\ . \qquad (7.22)$$
Unless the efficiency correction factors, including acceptance cuts and cuts rejecting
other backgrounds, point towards a very significant advantage of the mono-photon
channel, the mono-jet channel will be the most promising search strategy. Using the
same argument, the factor between the expected mono-jet and mono-Z significances
will be around √6000 ≈ 77.
This estimate might change if the uncertainties are dominated by systematics or
a theory uncertainty. These errors scale proportional to the number of background
events in the signal region, again with a signature-dependent proportionality factor
κ_X describing how well we know the background distributions. This means for the
significances
$$n_{\sigma,\gamma} = \frac{\sigma_{\chi\chi\gamma}}{\kappa_\gamma\,\sigma_{\nu\nu\gamma}} = \frac{\kappa_j}{\kappa_\gamma}\,\frac{\sigma_{\chi\chi j}}{\kappa_j\,\sigma_{\nu\nu j}} = \frac{\kappa_j}{\kappa_\gamma}\, n_{\sigma,j}\ . \qquad (7.23)$$
Typically, we understand photons better than jets, both experimentally and theoreti-
cally. On the other hand, systematic and theory uncertainties at the LHC are usually
limited by the availability and the statistics in control regions, regions which we can
safely assume to be described by the Standard Model.
We can simulate mono-X signatures for vector mediators, described in Sect. 5.4.
In that case the three mono-X signatures are indeed induced by initial state radiation.
The backgrounds are dominated by Z-decays to neutrinos. The corresponding
LHC searches are based on the missing transverse momentum distribution and the
transverse momentum pT ,X of the mono-X object. There are (at least) two strategies
to control for example the mono-jet background: first, we can measure it for example
using Z → μ+ μ− decays or hard photons produced in association with a hard jet.
Second, if the dark matter signal is governed by a harder energy scale, like the mass
of a heavy mediator, we can use the low-pT region as a control region and only
extrapolate the pT distributions.
Figure 7.2 gives an impression of the transverse momentum spectra in the
mono-jet, mono-photon, and mono-Z channels. Comparing the mono-jet and mono-
photon rates we see that the shapes of the transverse momentum spectra of the
jet or photon, recoiling against the dark matter states, are essentially the same in
both cases, for the respective signals as well as for the backgrounds. The signal
and background rates follow the hierarchy derived above. Indeed, the mono-photon
hardly adds anything to the much larger mono-jet channel, except for cases where
in spite of advanced experimental strategies the mono-jet channel is limited by
systematics. The mono-Z channel with a leptonic Z-decay is kinematically almost
identical to the other two channels, but with a strongly reduced rate. This means
that for mono-X signatures induced by initial state radiation the leading mono-jet
channel can be expected to be the most useful, while other mono-X analyses
will only become interesting when the production mechanism is not initial state
radiation.
Fig. 7.2 Transverse momentum spectrum for signals and backgrounds in the different mono-X
channels for a heavy vector mediator with m_{Z′} = 1 TeV. Figure from Ref. [3]
7.3 Higgs Portal 157
Finally, one of the main challenges of mono-X signatures is that by definition the
mediator has to couple to the Standard Model and to dark matter. In the simple
model of Eq. (5.27) this means for example that the mediator can decay both to
quarks and to dark matter.
The relative size of the branching ratios is given by the ratio of couplings g_χ²/g_u².
Instead of the mono-X signature we can constrain part of the model parameter
space through resonance searches with the same topology as the mono-X search
and without requiring a hard jet,
[Feynman diagrams: di-jet resonance production q q̄ → V → q q̄, without and with an additional radiated gluon]
On the other hand, for the parameter space g_u ≪ g_χ at constant product g_u g_χ and
constant mediator mass, the impact of resonance searches is reduced, whereas mono-X
searches remain relevant.
In addition to the very general mono-jet searches for dark matter, we will again
look at our two specific models. The Higgs portal model only introduces one more
particle, a heavy scalar with m_S ≫ m_H and only coupling to the Higgs. This
means that the Higgs has to act as an s-channel mediator not only for dark matter
annihilation, but also for LHC production,
pp → H ∗ → SS + jets . (7.25)
The Higgs couples to gluons in the incoming protons through a top loop, which
implies that its production rate is very small. The Standard Model predicts an on-
shell Higgs rate of 50 pb for gluon fusion production at a 14 TeV LHC. Alternatively,
we can look for weak-boson-fusion off-shell Higgs production, i.e. production in
association with two forward jets,
[Feynman diagram: weak boson fusion qq → qq H* → qq SS, with the Higgs radiated off t-channel W exchange]
These so-called tagging jets will allow us to trigger the events. For an on-shell
Higgs boson the weak boson fusion cross section at the LHC is roughly a factor 1/10
below gluon fusion, and its advantages are discussed in detail in Ref. [2].
In particular in this weak-boson-fusion channel ATLAS and CMS are conducting
searches for invisibly decaying Higgs bosons. The main backgrounds are invisible
Z-decays into a pair of neutrinos, and W -decays where we miss the lepton and are
only left with one neutrino. For high luminosities around 3000 fb−1 and assuming
an essentially unchanged Standard Model Higgs production rate, the LHC will be
sensitive to invisible branching ratios around the per-cent level.
The key to this analysis is to understand not only the tagging jet kinematics, but also
the central jet radiation between the two forward tagging jets.
Following the discussion in Sect. 4.1 the partial width for the SM Higgs boson
decays into light dark matter is
$$\Gamma(H\to SS) = \frac{\lambda_3^2\, v_H^2}{32\pi\, m_H}\sqrt{1 - \frac{4m_S^2}{m_H^2}} \qquad\Leftrightarrow\qquad \frac{\Gamma(H\to SS)}{m_H} \approx \frac{\lambda_3^2}{8\pi}\left(1 - \frac{2m_S^2}{m_H^2}\right) < \frac{\lambda_3^2}{8\pi}\ . \qquad (7.27)$$
This value has to be compared to the Standard Model prediction ΓH /mH = 4·10−5 .
For example, a 10% invisible branching ratio BR(H → SS) into very light scalars
m_S ≪ m_H/2 corresponds to a portal coupling
$$\frac{\lambda_3^2}{8\pi} = 4\cdot 10^{-6} \qquad\Leftrightarrow\qquad \lambda_3 = \sqrt{32\pi}\cdot 10^{-3} \approx 10^{-2}\ . \qquad (7.28)$$
The light scalar reference point in agreement with the observed relic density of
Eq. (4.14) has λ₃ = 0.3 for roughly m_S ≈ 50 GeV. This is well above
the approximate final reach for the invisible Higgs branching ratio at the high-
luminosity LHC.
7.4 Supersymmetric Neutralinos 159
For larger dark matter masses above mS = 200 GeV the LHC cross section for
pair production in weak boson fusion is tiny, namely
$$\sigma(SSjj) \approx \frac{\lambda_3^2}{10}\,\text{fb} \;\overset{\lambda_3 = 0.1}{=}\; 10^{-3}\,\text{fb}\ . \qquad (7.29)$$
Without going into much detail this means that heavy scalar dark matter is unlikely
to be discovered at the LHC any time soon, because the final state is heavy and the
coupling to the Standard Model is strongly constrained through the observed relic
density.
$$\text{BR}(h\to\tilde\chi_1^0\tilde\chi_1^0) = (10\ldots 50)\% \qquad\text{for}\qquad m_{\tilde\chi_1^0} = (35\ldots 40)~\text{GeV and } (50\ldots 55)~\text{GeV}\ , \qquad (7.30)$$
Similarly, Tevatron and LHC experiments have for a long time used an effective
transverse mass scale which is usually evaluated for jets only, but can trivially be
extended to leptons,
$$H_T = \sum_{\ell,j} E_T = \sum_{\ell,j} p_T\ , \qquad (7.32)$$
This effective mass is known to trace the mass of the heavy new particles decaying
for example to jets and missing energy. This interpretation relies on the non-
relativistic nature of the production process and our confidence that all jets included
are really decay jets.
In the Standard Model the neutrino produces such missing transverse energy,
typically through the decays W → ℓ⁺ν and Z → νν̄. In W + jets events we can
learn how to reconstruct the W mass from one observed and one missing particle.
We construct a transverse mass in analogy to an invariant mass, but neglecting the
longitudinal momenta of the decay products,
$$m_T^2 = \big(E_{T,\text{miss}} + E_{T,\ell}\big)^2 - \big(\vec p_{T,\text{miss}} + \vec p_{T,\ell}\big)^2 = m_\ell^2 + m_{\text{miss}}^2 + 2\big(E_{T,\ell} E_{T,\text{miss}} - \vec p_{T,\ell}\cdot\vec p_{T,\text{miss}}\big)\ , \qquad (7.34)$$
The leptons in the decay of the heavier neutralinos and charginos can be replaced
by other fermions. Kinematically, the main question is if the fermions arise from
an intermediate on-shell Z-boson or an on-shell slepton,
[Feynman diagrams: neutralino pair production with χ̃₂⁰ → Z χ̃₁⁰ → ℓ⁺ℓ⁻ χ̃₁⁰ and with χ̃₂⁰ → ℓ⁺ℓ̃⁻ → ℓ⁺ℓ⁻ χ̃₁⁰]
The question which decay topologies of the heavier neutralino dominate depends
on the point in parameter space. The first of the two diagrams predicts dark matter
production in association with a Z-boson. This is the same signature as found to be
irrelevant for initial state radiation in Sect. 7.2, namely mono-Z production.
The second topology raises the question how many masses we can extract
from two observed external momenta. Endpoint methods rely on lower (threshold)
and upper (edge) kinematic endpoints of observed invariant mass distributions.
The art is to identify distributions where the endpoint is probed by realistic phase
space configurations. The most prominent example is m_ℓℓ in the heavy neutralino
decay in Eq. (7.35), proceeding through an on-shell slepton. In the rest frame of the
intermediate slepton the 2 → 2 process corresponding to the decay of the heavy
neutralino resembles the Drell–Yan process. Because of the scalar in the s-channel, angular
correlations do not influence the m_ℓℓ distribution, so it will have a triangular shape.
Its upper limit or edge can be computed in the slepton rest frame. The incoming and
outgoing three-momenta have the absolute values
$$|\vec p| = \frac{\big|m^2_{\tilde\chi^0_{1,2}} - m^2_{\tilde\ell}\big|}{2m_{\tilde\ell}}\ , \qquad (7.37)$$
assuming m_ℓ = 0. The invariant mass of the two leptons reaches its maximum if the
two leptons are back-to-back and the scattering angle is cos θ = −1, giving the edge position
$$m_{\ell\ell}^{2,\max} = \frac{\big(m^2_{\tilde\chi^0_2} - m^2_{\tilde\ell}\big)\big(m^2_{\tilde\ell} - m^2_{\tilde\chi^0_1}\big)}{m^2_{\tilde\ell}}\ . \qquad (7.38)$$
[Feynman diagram: slepton pair production q q̄ → γ* → ℓ̃⁺ℓ̃⁻ with decays ℓ̃± → ℓ± χ̃₁⁰]
Again, the question arises how many masses we can extract from the measured
external momenta. For this topology the variable mT 2 generalizes the transverse
mass known from W decays to the case of two massive invisible particles, one from
each leg of the event. First, we divide the observed missing transverse momentum
in the event into two vector fractions, p⃗_{T,miss} = q⃗₁ + q⃗₂. Then, we construct the
transverse mass for each side of the event, assuming that we know the invisible
particle's mass or scanning over hypothetical values m̂_miss.
Inspired by the transverse mass in Eq. (7.34) we are interested in a mass variable
with a well-defined upper endpoint. For this purpose we construct some kind of
minimum of mT ,j as a function of the fractions qj . We know that maximizing the
transverse mass on one side of the event will minimize it on the other side, so we
define
$$m_{T2}(\hat m_{\text{miss}}) = \min_{\vec p_{T,\text{miss}} = \vec q_1 + \vec q_2}\,\Big[\max_j\, m_{T,j}(\vec q_j;\hat m_{\text{miss}})\Big]\ . \qquad (7.41)$$
For the correct value of m_miss the m_{T2} distribution has a sharp edge at the mass of
the decaying particle. In favorable cases m_{T2} allows the measurement of both the
decaying particle and the invisible particle masses. These two aspects for the correct
value m̂_miss = m_miss we can see in Fig. 7.3: the lower threshold is indeed given by
m_{T2} − m_{χ̃₁⁰} = m_π, while the upper edge of m_{T2} − m_{χ̃₁⁰} coincides with the dashed
line for m_{χ̃₁±} − m_{χ̃₁⁰}.
An interesting aspect of mT 2 is that it is boost invariant if and only if m̂miss =
mmiss . For a wrong assignment of mmiss the value of mT 2 has nothing to do with the
actual kinematics and hence with any kind of invariant (and house numbers are not
boost invariant). We can exploit this aspect by scanning over m̂_miss and looking for
so-called kinks, defined as points where different event kinematics all return the
same value for m_{T2}.
Finally, we can account for the fact that supersymmetry predicts new strongly
interacting particles. These are the scalar partners of the quarks and the fermionic
partner of the gluon. For dark matter physics the squarks are more interesting,
appearing both as t-channel mediators in dark matter pair production and in their
own pair production with decays to dark matter,
[Feynman diagrams: q q̄ → χ̃₁⁰ χ̃₁⁰ with a t-channel squark and a radiated jet, and squark pair production with decays q̃ → q χ̃₁⁰]
Note that these two squark-induced signatures cannot be separated, because they
rely on the same two couplings, the quark-squark-neutralino coupling and the QCD-
induced squark coupling to a gluon. Kinematically, they add nothing new to the
above arguments: the first diagram will contribute to the mono-jet signature, with
the additional possibility to radiate a gluon off the t-channel mediator, and to pair-
production of neutralinos and charginos; the second diagram asks for a classic mT 2
analysis. Moreover, the production process of Eq. (7.43) is QCD-mediated, and the
100% branching fraction gives us no information about the mediator interaction
with dark matter. In other words, for this pair-production process there exists no link
between LHC observables and dark matter properties.
The non-negligible effect of the t-channel squark mediator adding to the s-
channel Z-mediator for neutralino pair production q q̄ → χ̃⁰ χ̃⁰
has to do with the couplings. From Eq. (4.43) we know that for neutralinos the
higgsino content couples to the Z-mediator while the gaugino content couples to
light-flavor squarks. In addition, the s-channel and t-channel diagrams typically
interfere destructively, so we can tune the squark mass to significantly reduce
the neutralino pair production cross section. The largest cross section for direct
neutralino-chargino production is usually
$$pp \to \tilde\chi_1^+\tilde\chi_2^0 \to \big(\ell^+\nu\,\tilde\chi_1^0\big)\big(\ell^+\ell^-\tilde\chi_1^0\big) \qquad\text{with}\qquad \sigma(\tilde\chi_1^\pm\tilde\chi_2^0) \lesssim 1~\text{pb} \qquad (7.45)$$
for m_χ ≳ 200 GeV. This decay leads to a tri-lepton signature with off-shell gauge
bosons in the decay. The backgrounds are pair production of weak bosons and hence
comparably rare.
First, we need to remove top pair production as the main background for such
signatures. The key observation is that in cascade decays the leptons are flavor-
locked, which means the combination e+ e− + μ+ μ− − e− μ+ − e+ μ− is roughly
twice μ+ μ− for the signal, while it cancels for top pairs. In addition, such cascade
decays are an opportunity to search for kinematic endpoints in many distributions,
like m_ℓℓ, m_{qℓ}, or three-body combinations. Unfortunately, the general interest in the
kinematics of supersymmetric cascade decays is for now postponed.
One thing we know for example from the di-lepton edge is that invariant masses
can just be an invariant way of writing angular correlations between outgoing
particles. Those depend on the spin and quantum numbers of all particles involved.
While measuring for example the spin of new particles is hard in the absence of fully
reconstructed events, we can try to probe it in the kinematics of cascade decays. The
squark decay chain was the first case where such a strategy was worked out [5]:
1. Instead of measuring individual spins in a cascade decay we assume that cascade
decays radiate particles with known spins. For radiated quarks and leptons the
spins inside the decay chain alternate between fermions and bosons. Therefore,
we contrast supersymmetry with another hypothesis, where the spins in the decay
chain follow the Standard Model assignments. An example for such a model
are Universal Extra Dimensions, where each Standard Model particle acquires
a Kaluza–Klein partner from the propagation in the bulk of the additional
dimensions;
2. The kinematical endpoints are completely determined by the masses and cannot
be used to distinguish between the spin assignments. In contrast, the distributions
between endpoints reflect angular correlations. For example, the m_{jℓ} distribution
in principle allows us to analyze spin correlations in squark decays in a Lorentz-
invariant way. The only problem is the link between the ℓ± and their ordering in the
decay chain;
3. As a proton–proton collider the LHC produces considerably more squarks than
anti-squarks in the squark–gluino production process. A decaying squark radiates
a quark while an antisquark radiates an antiquark, which means that we can
define a non-zero production-side asymmetry between m_{jℓ⁺} and m_{jℓ⁻}. Such
an asymmetry we show in Fig. 7.4, for the SUSY and for the UED hypotheses.
Provided the masses in the decay chain are not too degenerate we can indeed
distinguish the two hypotheses.
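The production-side asymmetry of item 3 is simply the bin-wise normalized difference of the two jet-lepton invariant-mass distributions. A minimal sketch, with purely hypothetical toy histograms standing in for measured $m_{j\ell^+}$ and $m_{j\ell^-}$ spectra:

```python
def production_asymmetry(n_plus, n_minus):
    """Bin-wise asymmetry between the m_{jl+} and m_{jl-} histograms.

    n_plus, n_minus: event counts per invariant-mass bin for positive
    and negative leptons (hypothetical toy inputs, not real data).
    """
    asym = []
    for p, m in zip(n_plus, n_minus):
        # (N+ - N-) / (N+ + N-), guarding against empty bins
        asym.append((p - m) / (p + m) if (p + m) > 0 else 0.0)
    return asym

# toy histograms: spin correlations would distort one of the two shapes
plus = [10.0, 40.0, 80.0, 60.0, 20.0]
minus = [10.0, 50.0, 70.0, 50.0, 30.0]
asym = production_asymmetry(plus, minus)
```

A flat, vanishing asymmetry would indicate no charge-dependent spin correlation; the SUSY and UED hypotheses predict different shapes of this curve.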
166 7 Collider Searches
[Fig. 7.4: production-side asymmetry as a function of the rescaled mass $\hat{m} \in [0, 1]$, for the SUSY and UED hypotheses.]
7.5 Effective Field Theory

In Sect. 4.4 we introduced an effective field theory of dark matter to describe dark
matter annihilation in the early universe. If the annihilation process is the usual
2 → 2 WIMP scattering process it is formulated in terms of a dark matter mass mχ
and a mediator mass mmed , where the latter does not correspond to a propagating
degree of freedom. It can hence be identified with a general suppression scale Λ
in an effective Lagrangian, like the one illustrated in Eq. (4.59). All experimental
environments discussed in the previous sections, including the relic density, indirect
detection, and direct detection, rely on non-relativistic dark matter scattering. This
means they can be described by a dark matter EFT if the mediator is much heavier
than the dark matter agent,
$m_\chi \ll m_\text{med}\,.$  (7.47)
In contrast, LHC physics is entirely relativistic and neither the incoming partons nor
the outgoing dark matter particles in the schematic diagram shown in Sect. 1.3 are
at low velocities. This means we have to add the partonic energy of the scattering
process to the relevant energy scales,
$\{\, m_\chi,\ m_\text{med},\ \sqrt{s} \,\}\,.$  (7.48)
In the case of mono-jet production, described in Sect. 7.2, the key observables
are the $\slashed{E}_T$ and $p_{T,j}$ distributions. For simple hard processes the two transverse
momentum distributions drop rapidly and are strongly correlated. This defines
the relevant energy scales as $\{\, m_\chi,\ m_\text{med},\ \slashed{E}_T^\text{min} \,\}$.
For the total rate, different phase-space regions, which individually agree poorly
between the effective theory and some underlying model, might combine into a decent
rate. For the main distributions this is no longer possible.
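This point can be illustrated with a toy calculation, where all numbers are invented for the sake of the argument: in two phase-space bins the effective theory mis-models each region, yet the summed rate looks fine.

```python
# Toy cross sections (arbitrary units) in two phase-space bins.
# The hypothetical EFT overshoots the low-energy bin and undershoots
# the high-energy bin, yet the totals are deceptively identical.
full_model = {"low_ET": 8.0, "high_ET": 4.0}
eft = {"low_ET": 10.0, "high_ET": 2.0}

total_full = sum(full_model.values())
total_eft = sum(eft.values())

# bin-by-bin ratio exposes the failure that the total rate hides
ratio_per_bin = {k: eft[k] / full_model[k] for k in full_model}
```

Here the totals agree exactly while the individual bins are off by 25% and 50%, which is exactly why distributions are the more stringent test of EFT validity.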
Finally, the hadronic LHC energy of 13 TeV, combined with reasonable parton
momentum fractions, defines an absolute upper limit, above which for example a
particle in the s-channel cannot be produced as a propagating state,

$\{\, m_\chi,\ m_\text{med},\ \slashed{E}_T^\text{min},\ \sqrt{s}_\text{max} \,\}\,.$  (7.51)
This fourth scale is not the hadronic collision energy of 13 TeV. From the typical LHC
reach for heavy resonances in the s-channel we expect it to be in the range $\sqrt{s}_\text{max} =
5 \ldots 8$ TeV, depending on the details of the mediator.
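The quoted range follows from the partonic center-of-mass energy $\sqrt{\hat{s}} = \sqrt{x_1 x_2\, s}$. A rough numerical sketch, where the chosen momentum fractions are illustrative values rather than fitted numbers:

```python
import math

def sqrt_s_partonic(x1, x2, sqrt_s_hadronic=13.0e3):
    """Partonic center-of-mass energy in GeV for momentum fractions x1, x2."""
    return math.sqrt(x1 * x2) * sqrt_s_hadronic

# reasonable momentum fractions for heavy s-channel resonance production
low = sqrt_s_partonic(0.4, 0.4)   # about 5.2 TeV
high = sqrt_s_partonic(0.6, 0.6)  # about 7.8 TeV
```

Large momentum fractions are strongly suppressed by the parton densities, which is why the effective reach stays well below the nominal 13 TeV.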
From what we know from these lecture notes, establishing a completely general
dark matter EFT approach at the LHC is not going to work. The Higgs portal
results of Sect. 7.3 indicate that the only way to systematically search for its dark
matter scalar is through invisible Higgs decays. By definition, those will be entirely
dominated by on-shell Higgs production, not described by an effective field theory
with a non-propagating mediator. Similarly, in the MSSM a sizeable fraction of the
mediators are either light SM particles or s-channel particles within the reach of the
LHC. Moreover, we need to add co-annihilation partners as propagating degrees of
freedom, more or less close to the dark matter sector.
On the other hand, the fact that some of our favorite dark matter models are not
described well by an effective Lagrangian does not mean that we cannot use such
an effective Lagrangian for other classes of dark matter models. One appropriate
way to test the EFT approach at the LHC is to rely on specific simplified models, as
introduced in Sect. 5.4. Three simplified models come to mind for a fermion dark
matter agent [6]:
1. tree-level s-channel vector mediator, as discussed in Sect. 5.4;
2. tree-level t-channel scalar mediator, realized as light-flavor scalar quarks in the
MSSM, Sect. 7.4;
3. loop-mediated s-channel scalar mediator, realized as heavy Higgses in the
MSSM, Sect. 7.4.
For the tree-level vector the situation at the LHC already becomes obvious in
Sect. 5.4. The EFT approach is only applicable when the vector mediator is produced
away from its mass shell also at the LHC, requiring roughly $m_V > 5$ TeV. The
problem in this parameter range is that the dark matter annihilation cross section
will be typically too small to provide the observed relic density. This makes the
parameter region where this mediator can be described by global EFT analyses very
small.
We start our more quantitative discussion with a tree-level t-channel scalar ũ.
Unlike for the vector mediator, the t-channel mediator model only makes sense in
the half plane with mχ < mũ ; otherwise the dark matter agent would decay. At the
LHC we have to consider different production processes. Beyond the unobservable
process $u\bar{u} \to \chi\chi$, the two relevant topologies leading to mono-jet production,
Eq. (7.52), are dark matter pair production with t-channel mediator exchange and a
radiated jet, and single-resonant mediator production. They are of the same order in
perturbation theory and experimentally indistinguishable. The second process can be
dominated by on-shell mediator production, $ug \to \chi\tilde{u} \to \chi(\bar{\chi}u)$. We can cross
its amplitude to describe the co-annihilation
process $\chi\tilde{u} \to ug$. The difference between the (co-)annihilation and LHC
interpretations of the same amplitude is that the former only contributes to the relic
density for $m_{\tilde{u}}$ within roughly 10% of $m_\chi$, while the latter dominates mono-jet
production for a wide range of mediator masses.
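The strong degeneracy requirement on the relic-density side can be made plausible with the standard Boltzmann suppression of the heavier co-annihilation partner at freeze-out, $n_{\tilde{u}}/n_\chi \sim e^{-\Delta m/T}$ with $T \approx m_\chi/25$. A rough sketch, where the freeze-out value $x_f = 25$ and the mass splittings are illustrative numbers:

```python
import math

def coannihilation_weight(delta_m_over_m, x_f=25.0):
    """Relative Boltzmann abundance of a partner heavier by delta_m,
    evaluated at a typical WIMP freeze-out temperature T = m_chi / x_f."""
    return math.exp(-delta_m_over_m * x_f)

w10 = coannihilation_weight(0.10)  # ~10% splitting: still sizeable
w50 = coannihilation_weight(0.50)  # ~50% splitting: negligible
```

A 10% mass splitting leaves a partner abundance of order a few percent, while a 50% splitting suppresses it by more than five orders of magnitude, so only nearly degenerate partners affect the relic density.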
Following Eq. (7.43) we can also pair-produce the necessarily strongly interacting
mediators, with a subsequent decay to two jets plus missing energy.
The partonic initial state of this process can be quarks or gluons. For a wide range
of dark matter and mediator masses this process completely dominates the χχ+jets
process.
When the t-channel mediator becomes heavy, mono-jet production with the partonic
processes given in Eq. (7.52) can for example be described by an effective four-
fermion operator,
$\mathcal{L} \supset \dfrac{c}{\Lambda^2}\, (\bar{u}_R \chi)(\bar{\chi} u_R)\,.$  (7.54)
The natural matching scale will be around Λ = mũ . Note that this operator mediates
the t-channel as well as the single-resonant mediator production topologies and the
pair production process induced by quarks. In contrast, pair production from two
gluons requires a higher-dimensional operator involving the gluon field strength,
like for example
$\mathcal{L} \supset \dfrac{c}{\Lambda^3}\, (\bar{\chi}\chi)\, G_{\mu\nu} G^{\mu\nu}\,.$  (7.55)
This leads to a much faster decoupling pattern of the pair production process for a
heavy mediator.
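The "much faster decoupling" can be quantified with simple scaling arithmetic, setting all Wilson coefficients to one: the dimension-six operator of Eq. (7.54) enters the rate as $1/\Lambda^4$, the dimension-seven operator of Eq. (7.55) as $1/\Lambda^6$.

```python
def rate_suppression(scale_ratio, operator_dim):
    """Relative rate when Lambda -> scale_ratio * Lambda.

    A dimension-d operator enters the amplitude as 1/Lambda^(d-4),
    so the rate scales as 1/Lambda^(2*(d-4)).
    """
    return scale_ratio ** (-2 * (operator_dim - 4))

dim6 = rate_suppression(2.0, 6)  # four-fermion operator, Eq. (7.54)
dim7 = rate_suppression(2.0, 7)  # gluon operator, Eq. (7.55)
```

Doubling the matching scale costs a factor 16 for the quark-induced process but a factor 64 for the gluon-induced pair production, which is why the latter switches off first.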
Because the t-channel mediator carries color charge, LHC constraints typically
force us into the regime $m_{\tilde{u}} \gtrsim 1$ TeV, where an EFT approach can be viable. In
addition, we again need to generate a large dark matter annihilation rate, which
based on the usual scaling can be achieved by requiring $m_{\tilde{u}} \approx m_\chi$. For heavy
mediators, pair production decouples rapidly, leaving a parameter region where
single-resonant production plays an important role. It is described by the same
effective Lagrangian as the generic t-channel process, and decouples more rapidly
than the t-channel diagram for $m_{\tilde{u}} \gtrsim 5$ TeV. These actual mass values unfortunately
imply that the remaining parameter regions suitable for an EFT description typically
predict very small LHC rates.
The third simplified model we discuss is a scalar s-channel mediator. To generate
a sizeable LHC rate we do not rely on its Yukawa couplings to light quarks,
but on a loop-induced coupling to gluons, in complete analogy to SM-like light
Higgs production at the LHC. The situation is slightly different for most of
the supersymmetric parameter space for heavy Higgses, which have reduced top
Yukawa couplings and are therefore much harder to produce at the LHC. Two
relevant Feynman diagrams for mono-jet production are
[Feynman diagrams: gluon fusion through a top-quark loop produces the scalar $S$ decaying to $\chi\bar{\chi}$, with an additional gluon radiated from the initial state or off the top loop.]
Coupling the scalar only to the top quark, we define the Lagrangian for the
simplified scalar mediator model as
$\mathcal{L} \supset -\,\dfrac{y_t\, m_t}{v}\, S\, \bar{t}t \,+\, g_\chi\, S\, \bar{\chi}\chi\,.$  (7.56)
The factor mt /v in the top Yukawa coupling is conventional, to allow for an easier
comparison to the Higgs case. The scalar coupling to the dark matter fermions can
be linked to $m_\chi$, but does not have to be. We know that the SM Higgs is a very narrow
resonance, while in this case the total width is bounded by the partial width from
scalar decays to the top quark,
$\dfrac{\Gamma_S}{m_S} \;>\; \dfrac{3 G_F m_t^2 y_t^2}{4\sqrt{2}\,\pi} \left( 1 - \dfrac{4 m_t^2}{m_S^2} \right)^{3/2} \;\overset{m_S \gg m_t}{\longrightarrow}\; \dfrac{3 G_F m_t^2 y_t^2}{4\sqrt{2}\,\pi} \;\approx\; 5\%\,.$  (7.57)
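Plugging in numbers confirms the quoted size of the asymptotic width-to-mass ratio in Eq. (7.57). A quick cross-check, taking $y_t = 1$, $m_t \approx 173$ GeV and $G_F \approx 1.166 \times 10^{-5}\,\text{GeV}^{-2}$:

```python
import math

G_F = 1.166e-5  # GeV^-2, Fermi constant
m_t = 173.0     # GeV, top-quark mass
y_t = 1.0       # Yukawa rescaling factor

# asymptotic Gamma_S / m_S for m_S >> m_t
ratio = 3 * G_F * m_t**2 * y_t**2 / (4 * math.sqrt(2) * math.pi)
```

The result comes out at roughly 6%, of the order of the quoted 5%, confirming that such a scalar mediator is a much broader resonance than the SM Higgs.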
To get a rough idea what kind of parameter space might be interesting, we can
look at the relic density. The problem in this prediction is that for mχ < mt the
annihilation channel $\chi\chi \to t\bar{t}$ is kinematically closed. Going through the same
amplitude as the one for LHC production, very light dark matter will annihilate
to two gluons through a top loop. If we allow for the corresponding Yukawa
coupling of the mediator to charm quarks, the tree-level process $\chi\chi \to c\bar{c}$ will
dominate for slightly heavier dark matter. If there also exists a Yukawa coupling of
the mediator to bottom quarks, the annihilation channel $\chi\chi \to b\bar{b}$ will then take
over. Even heavier dark matter will annihilate into off-shell top quarks, $\chi\chi \to
(W^+ b)(W^- \bar{b})$, and for $m_\chi > m_t$ the tree-level $2 \to 2$ annihilation process
$\chi\chi \to t\bar{t}$ will provide very efficient annihilation. None of the aspects
determining the correct annihilation
channels are well-defined within the simplified model. Moreover, in the Lagrangian
of Eq. (7.56) we can easily replace the scalar S with a pseudo-scalar, which will
affect all non-relativistic processes.
For our global EFT picture this means that if a scalar s-channel mediator is
predominantly coupled to up-quarks, the link between the LHC production rate and
the predicted relic density essentially vanishes. The two observables are only related
if the mediator is very light and decays through the one-loop diagram to a pair of
gluons. This is exactly where the usual dark matter EFT will not be applicable.
If we only look at the LHC, the situation becomes much simpler. The dominant
production process, gluon fusion through the top loop, defines the mono-jet signature
through initial-state radiation and through gluon radiation off the top loop. The
mono-jet rate will factorize into $\sigma_{S+j} \times \mathrm{BR}_{\chi\chi}$. The
production process is well known from Higgs physics, including the phase space
region with a large jet and the logarithmic top mass dependence of the transverse
momentum distribution,
$\dfrac{d\sigma_{Sj}}{dp_{T,j}} = \dfrac{d\sigma_{Sj}}{dp_{T,S}} \;\propto\; \log^4 \dfrac{p_{T,j}^2}{m_t^2}\,.$  (7.59)
Based on the Lagrangian given in Eq. (7.56) and the transverse momentum depen-
dence given in Eq. (7.59), the mono-jet signal at the LHC depends on the four energy
scales,
$\{\, m_\chi,\ m_S,\ m_t,\ \slashed{E}_T = p_{T,j} \,\}\,.$  (7.60)
This effective theory will retain all top mass effects in the distributions.
3. Finally, we can decouple the top as well as the mediator,
$\dfrac{c}{\Lambda^3} \;\overset{m_S \approx m_t \gg m_\chi}{=}\; \dfrac{\alpha_s\, y_t\, g_\chi}{12\pi\, m_S^2}\, \dfrac{1}{v}\,.$  (7.69)
Fig. 7.5 Total mono-jet rate in the loop-mediated s-channel scalar model as a function of the
mediator mass, for $m_\chi = 10$ GeV (left) and $m_\chi = 100$ GeV (right). We show all three effective
field theories alongside the simplified model. For the shaded regions the annihilation cross section
reproduces the observed relic density within $\Omega_\chi^\text{obs}/3$ and $\Omega_\chi^\text{obs} + 10\%$ for a mediator coupling only
to up-type quarks (red) or to both types of quarks (green). Figure from Ref. [6]
We show the predictions for the total LHC rate based on these three effective
theories and the simplified model in the left panel of Fig. 7.5. The decoupled-top
ansatz $\mathcal{L}^{(1)}$ of Eq. (7.62) indeed reproduces the correct total rate for $m_S < 2m_t$.
Above that threshold it systematically overestimates the cross section. The effective
Lagrangian $\mathcal{L}^{(2)}$ with a decoupled mediator, Eq. (7.65), reproduces the simplified
model for $m_S \lesssim 5$ TeV. Beyond this value the LHC energy is not sufficient
to produce the mediator on-shell. Finally, the effective Lagrangian $\mathcal{L}^{(3)}$ with a
simultaneously decoupled top quark and mediator, Eq. (7.68), does not reproduce
the total production rate anywhere.
In the right panel of Fig. 7.5 we show the mono-jet rate for heavier dark matter
and the parameter regions where the simplified model predicts a roughly correct
relic density. In this range only the EFT with the decoupled mediator, defined in
Eq. (7.65), makes sense. Because the model gives us this freedom, we also test what
happens to the combination with the relic density when we couple the mediator to
all quarks, rather than up-quarks only. Altogether, we find that in the region of heavy
mediators the EFT is valid for LHC observables. This is similar to the range of EFT
validity for the s-channel vector model.
References
1. Campbell, J.M., Huston, J.W., Stirling, W.J.: Hard interactions of quarks and gluons: a primer
for LHC physics. Rep. Prog. Phys. 70, 89 (2007). arXiv:hep-ph/0611148
2. Plehn, T.: Lectures on LHC Physics. Lect. Notes Phys. 886 (2015). arXiv:0910.4182 [hep-ph].
https://www.thphys.uni-heidelberg.de/~plehn/?visible=review
3. Bernreuther, E., Horak, J., Plehn, T., Butter, A.: Actual physics behind mono-X. SciPost.
arXiv:1805.11637 [hep-ph]
4. Barr, A., Lester, C., Stephens, P.: m(T2): the truth behind the glamour. J. Phys. G 29, 2343
(2003). arXiv:hep-ph/0304226
5. Smillie, J.M., Webber, B.R.: Distinguishing spins in supersymmetric and universal extra
dimension models at the large hadron collider. J. High Energy Phys. 0510, 069 (2005).
arXiv:hep-ph/0507170
6. Bauer, M., Butter, A., Desai, N., Gonzalez-Fraile, J., Plehn, T.: Validity of dark matter effective
theory. Phys. Rev. D 95(7), 075036 (2017). arXiv:1611.09908 [hep-ph]
Chapter 8
Further Reading
First, we would like to emphasize that our list of references is limited to the legally
required sources of figures and to slightly more advanced material providing more
details about the topics discussed in these lecture notes.
Our discussion on the general relativity background and cosmology is a very
brief summary. Dedicated textbooks include the classics by Kolb and Turner [1],
Bergström and Goobar [2], Weinberg [3], as well as the more modern books by
Dodelson [4] and Tegmark [5]. More details on the role of dark matter in the history
of the universe are given in the book by Bertone and Hooper [6] and in the notes
by Tanedo [7] and Mambrini [8]. Jim Cline’s TASI lectures [9] serve as an up-to-
date discussion on the role of dark matter in the history of the Universe. Further
details on the cosmic microwave background and structure formation are also in the
lecture notes on cosmological perturbation theory by Hannu Kurki-Suonio that are
available online [10], as well as in the lecture notes on Cosmology by Rosa [11] and
Baumann [12].
For models of particle dark matter, Ref. [13] provides a list of consistency tests.
For further reading on WIMP dark matter we recommend the didactic review
article Ref. [14]. Reference [15] addresses details on WIMP annihilation and the
resulting constraints from the cosmic microwave background radiation. A more
detailed treatment of the calculation of the relic density for a WIMP is given in
Ref. [16]. Felix Kahlhöfer has written a nice review article on LHC searches for
WIMPs [17]. For further reading on the effect of the Sommerfeld enhancement, we
recommend Ref. [18].
Extensions of the WIMP paradigm can result in a modified freeze-out mecha-
nism, as is the case of the co-annihilation scenario. These exceptions to the most
straightforward dark matter freeze-out have originally been discussed by Griest and
Seckel in Ref. [19]. A nice systematic discussion of more recent research along these
lines can be found in Ref. [20].
For models of non-WIMP dark matter, the review article Ref. [21] provides many
details. A very good review of axions is given in Roberto Peccei’s notes [22].
while axions as dark matter candidates are discussed in Ref. [23]. Mariangela
Lisanti’s TASI lectures [24] provide a pedagogical over these different dark matter
candidates. Details on light dark matter, in particular hidden photons, can be found
in Tongyan Lin’s notes for her 2018 TASI lecture [25].
Details on calculations for the direct search for dark matter can be found in
the review by Lewin and Smith [26]. Gondolo and Silk provide details for dark
matter annihilation in the galactic center [27], as do the TASI lecture notes of
Hooper [28]. For many more details on indirect detection of dark matter we refer to
Tracy Slatyer’s TASI lectures [29].
Note that the one aspect these lecture notes are still missing is the chapter on the
discovery of WIMPs. We plan to add an in-depth discussion of the WIMP discovery
to an updated version of these notes.
References
1. Kolb, E.W., Turner, M.S.: The early universe. Front. Phys. 69, 1 (1990)
2. Bergstrom, L., Goobar, A.: Cosmology and Particle Astrophysics. Wiley, Chichester (1999)
3. Weinberg, S.: Cosmology. Oxford University, Oxford (2008)
4. Dodelson, S.: Modern Cosmology. Academic, Amsterdam (2003)
5. Tegmark, M.: Measuring space-time: from big bang to black holes. In: The Early Universe and
Observational Cosmology. Lect. Notes in Phys. 646, 169 (2004). arXiv:astro-ph/0207199
6. Bertone, G., Hooper, D.: A History of Dark Matter (2016). arXiv:1605.04909
7. Tanedo, F.: Defense Against the Dark Arts. http://www.physics.uci.edu/~tanedo/files/notes/
DMNotes.pdf
8. Mambrini, Y.: Histories of Dark Matter in the Universe. http://www.ymambrini.com/My_
World/Physics_files/Universe.pdf
9. Cline, J.M.: TASI Lectures on Early Universe Cosmology: Inflation, Baryogenesis and Dark
Matter (2018). arXiv:1807.08749 [hep-ph]
10. Kurki-Suonio, H.: Cosmology I and II. http://www.helsinki.fi/~hkurkisu
11. Rosa, J.G.: Introduction to Cosmology. http://gravitation.web.ua.pt/cosmo
12. Baumann, D.: Cosmology. http://www.damtp.cam.ac.uk/people/d.baumann
13. Taoso, M., Bertone, G., Masiero, A.: Dark matter candidates: a ten-point test. J. Cosmol.
Astropart. Phys. 0803, 022 (2008). arXiv:0711.4996 [astro-ph]
14. Arcadi, G., Dutra, M., Ghosh, P., Lindner, M., Mambrini, Y., Pierre, M., Profumo, S., Queiroz,
F.S.: The waning of the WIMP? A review of models, searches, and constraints. Eur. Phys. J. C
78(3), 203 (2018). arXiv:1703.07364 [hep-ph]
15. Slatyer, T.R., Padmanabhan, N., Finkbeiner, D.P.: CMB constraints on WIMP annihila-
tion: energy absorption during the recombination epoch. Phys. Rev. D 80, 043526 (2009).
arXiv:0906.1197 [astro-ph.CO]
16. Steigman, G., Dasgupta, B., Beacom, J.F.: Precise relic WIMP abundance and its impact on
searches for dark matter annihilation. Phys. Rev. D 86, 023506 (2012). arXiv:1204.3622 [hep-
ph]
17. Kahlhoefer, F.: Review of LHC dark matter searches. Int. J. Mod. Phys. A 32, 1730006 (2017).
arXiv:1702.02430 [hep-ph]
18. Arkani-Hamed, N., Finkbeiner, D.P., Slatyer, T.R., Weiner, N.: A theory of dark matter. Phys.
Rev. D 79, 015014 (2009). arXiv:0810.0713 [hep-ph]
19. Griest, K., Seckel, D.: Three exceptions in the calculation of relic abundances. Phys. Rev. D
43, 3191 (1991)
20. D’Agnolo, R.T., Pappadopulo, D., Ruderman, J.T.: Fourth exception in the calculation of relic
abundances. Phys. Rev. Lett. 119(6), 061102 (2017). arXiv:1705.08450 [hep-ph]
21. Baer, H., Choi, K.Y., Kim, J.E., Roszkowski, L.: Dark matter production in the early universe:
beyond the thermal WIMP paradigm. Phys. Rep. 555, 1 (2015). arXiv:1407.0017 [hep-ph]
22. Peccei, R.D.: The strong CP problem and axions. Lect. Notes Phys. 741, 3 (2008).
arXiv:hep-ph/0607268
23. Arias, P., Cadamuro, D., Goodsell, M., Jaeckel, J., Redondo, J., Ringwald, A.: WISPy cold
dark matter. J. Cosmol. Astropart. Phys. 1206, 013 (2012). arXiv:1201.5902 [hep-ph]
24. Lisanti, M.: Lectures on Dark Matter Physics. arXiv:1603.03797 [hep-ph]
25. Lin, T.: Dark Matter Models and Direct Searches, Lecture at TASI 2018. Lecture Notes. https://
www.youtube.com/watch?v=fQSWMsOfOcc
26. Lewin, J.D., Smith, P.F.: Review of mathematics, numerical factors, and corrections for dark
matter experiments based on elastic nuclear recoil. Astropart. Phys. 6, 87 (1996)
27. Gondolo, P., Silk, J.: Dark matter annihilation at the galactic center. Phys. Rev. Lett. 83, 1719
(1999). arXiv:astro-ph/9906391
28. Hooper, D.: Particle Dark Matter (2009). arXiv:0901.4090 [hep-ph]
29. Slatyer, T.R.: Indirect Detection of Dark Matter (2017). arXiv:1710.05137 [hep-ph]
Index

Acoustic oscillations, 22, 31
Angular diameter distance, 11
Anti-matter, 49
Axion, 46
Axion-like particle, 48

Baryon number violation, 50
Boltzmann equation, 61, 63, 80

Cascade decay, 164
Co-annihilation, 67
Collider, 145
Collinear divergence, 152
Co-moving distance, 10
Continuity equation, 27
Cosmic microwave background, 17
Cosmological constant, 2
CP violation, 52
Critical density, 6
Curvature, 4

Dark matter
  annihilation, 58, 69, 87, 105, 111
  asymmetric, 53
  cold, 34
  fuzzy, 35, 49
  hot, 40
  light, 35, 40
  mixed, 35
  secluded, 96
  self-interacting, 35
  supersymmetric, 100
  warm, 34
Degrees of freedom, 12, 13, 38, 59, 105
Dimensional transmutation, 133
Direct detection, 129

Effective field theory, 105, 107, 166
Einstein equation, 5
Einstein-Hilbert action, 2
Endpoint methods, 161
Entropy, 9, 39, 49
Euler equation, 27

FIMP, 80
Freeze in, 79, 82
Freeze out, 37, 50, 54, 55, 59, 70, 92
Friedmann equation, 5, 12, 27
Friedmann–Lemaitre–Robertson–Walker model, 6

Galactic center excess, 115

Halo profile, 114
  Burkert, 113
  Einasto, 113
  Navarro-Frenk-White, 113
Heavy quark effective theory, 133
Hidden photon, 94
Higgs
  funnel, 104
  portal, 85, 116, 132, 157
  potential, 85
Hubble constant, 1

Kinetic mixing, 92

Mandelstam variables, 62, 153
Matter-radiation equality, 11
Mean free path, 16
Mediator, 55, 67, 80, 85, 95, 145
  s-channel, 70, 104, 157, 169
  t-channel, 70, 167
Misalignment mechanism, 42
Missing energy, 148
Mono-X, 149, 154
MSSM, 97

Nambu-Goldstone boson, 44, 91
N-body simulations, 31
Neutralino, 97, 139, 159
  bino, 99, 100, 120
  higgsino, 98, 102
  singlino, 123
  wino, 98, 99, 101
Neutrino-electron scattering, 38
NMSSM, 122
Nucleosynthesis, 16
Number density, 13, 16, 39, 60, 64, 81

Sachs–Wolfe effect, 22
Sakharov conditions, 50
Scale factor, 3, 5
Schrödinger equation, 74
Simplified model, 124, 126, 159, 169
Sommerfeld enhancement, 71, 102
Speed of sound, 23, 34
Sphaleron, 51
Spin-dependent cross section, 140
Spin-independent cross section, 130
Stefan–Boltzmann scaling, 13
Sterile neutrinos, 35
Structure formation, 26
Systematics, 155

Vector portal, 92, 123

WIMP, 58, 61, 62, 68, 85, 92, 111, 129
WMAP, 26

Xenon, 137