From LTE To LTE-Advanced Pro and 5G
Moe Rahnema
Marcin Dryjanski
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the U.S. Library of Congress.
All rights reserved. Printed and bound in the United States of America. No part of this book
may be reproduced or utilized in any form or by any means, electronic or mechanical, including
photocopying, recording, or by any information storage and retrieval system, without permission
in writing from the publisher.
All terms mentioned in this book that are known to be trademarks or service marks have been
appropriately capitalized. Artech House cannot attest to the accuracy of this information. Use of
a term in this book should not be regarded as affecting the validity of any trademark or service
mark.
Contents
Preface xv
1 Introduction 1
References 8
3.14 UE Categories 79
References 79
14 Toward 5G 315
Index 345
Preface
The demand for broadband mobile services has brought new challenges for op-
erators to provide higher-capacity and higher-quality networks with lower per
bit cost. The Long-Term Evolution (LTE) fourth generation (4G) broadband
mobile communication systems and LTE-Advanced bring higher capacities
and spectral efficiencies for supporting high data rate services and the flexibility
for mixed media all-Internet Protocol (IP) communication with flexible spec-
trum allocation and utilization. The orthogonal frequency division multiple
access (OFDMA) concept underlying the air interface of LTE-based systems is
also what is being considered as the technology choice for implementation of
the fifth generation (5G) mobile communication system, which is still under
standardization. This book provides a detailed description of the LTE and LTE
advanced air interface at the physical and link layer protocol levels, the evolved
packet core network architecture and protocols, the network planning, and op-
timization with the many design trade-offs, performance analysis and results,
and the self-organizing network (SON) feature specifications and realization,
as well as the new features under LTE-Advanced Pro, the transition to 5G,
the requirements and the technology options, the standardization efforts, and
related issues and challenges. The book also provides a detailed chapter on the
end-to-end data transfer optimization mechanisms based on the Transmission
Control Protocol (TCP).
Chapter 1 introduces the basics of the LTE 4G broadband mobile com-
munication systems air interface (E-UTRAN) and the evolved packet core net-
work architecture. It explains the advantages that the new air interface design, based on the multicarrier transmission scheme of OFDMA, and the simplified distributed radio access network with the all-IP system architecture provide over the earlier third generation (3G) cellular networks in terms of increased
layers, and covers the capacity-based dimensioning and its integration with the
coverage based dimensioning to derive the final site counts based on the traffic
model. The chapter also derives and presents the trends in the optimum sys-
tem bandwidth for coverage and provides an analysis of the trade-offs involved
between network load and coverage performance and presents the trends in
graphical format.
Chapter 5 discusses the three important prelaunch planning tasks: the
allocation of physical cell identities (PCI), the uplink reference signal sequence
planning, and the physical random access channel (PRACH) parameter plan-
ning. Thereafter, it presents related algorithms. The chapter also discusses deri-
vation of the 64 PRACH preambles in each cell based on cyclic shifting of
Zadoff-Chu sequences, the PRACH capacity planning and optimization in
terms of configuration parameters such as the RACH physical resources, the
RACH preamble allocation, the RACH persistence level and backoff control,
and the RACH transmission power control.
Chapter 6 covers the main services and functions of the RRC sublayer
such as the cell selection and reselection in the idle state and the handover func-
tions and performance in the connected state with the transition between the
states. The related measurement events, the filtering schemes, and the triggering
mechanisms and related parameters are defined and optimization-related con-
siderations are explained. The chapter also provides the detailed description of
the paging mechanisms and the related parameters for optimization, defines the
discontinuous reception (DRX) cycle and parameters, and explains the uplink
power control on the data and control channels along with the related param-
eters and optimization considerations.
Chapter 7 discusses the 3GPP proposed mechanisms for dealing with in-
tercell interference on both the uplink and the downlink in LTE systems. These
mechanisms are based on interference cancellation, interference randomization,
and interference avoidance through intercell interference coordination (ICIC)
measures in the resource assignment processes and are all discussed in detail.
Both the static and the dynamic variants of ICIC schemes such as those based
on resource sharing between the inner and outer cell edge users in a self-config-
ured centralized or distributed scheme and the fractional or partial frequency
reuse are explained. This chapter provides the detailed implementation mecha-
nisms to adequately cover this important feature of LTE systems.
Chapter 8 covers the subject of self-organizing network (SON), which
is a feature specification of LTE networks. The SON feature specifications in-
clude self-configuration, self-healing, and self-optimization and various 3GPP
use cases for each. These are all discussed in detail with sample algorithms and
mechanisms for implementing each use case. The chapter also covers a review
of the related standardization history and the status and provides some key ref-
erences for further study on related algorithms and implementation means for
various use cases. This chapter also provides the stimulus background for those
who are involved in ongoing research and development of new algorithms for
the implementation of the SON concepts and use cases.
Chapter 9 covers the Evolved Packet Core (EPC) network architecture
and defines and explains the various EPC network elements, their functions,
and interfaces with the protocols used and the UE mobility and connection
management states within the EPC. The chapter also discusses the quality of
service (QoS) classes, parameters, their mapping to various Evolved Packet Sys-
tem (EPS) bearers, and service samples and the performances achieved, discuss-
es the parameter mapping strategies for QoS realization, the flow control pa-
rameters, and the charging parameters, and provides the necessary background
and the guidelines for the design of the core network topology and the dimen-
sioning of the network elements and their interfaces based on well characterized
traffic models and related engineering formulas. This chapter complements the
rest of the book in providing a thorough understanding of the Evolved System
Architecture (ESA) of the 4G mobile communication systems.
In Chapter 10, the main key enhancements introduced in LTE-Advanced
Releases 10, 11, and 12 are explained and discussed, which include carrier ag-
gregation and the associated signaling and scheduling means, the improvements
in multiantenna techniques on downlink (DL) and uplink (UL) and the new
UE-specific demodulation reference symbols, relaying mechanisms to extend
cell coverage and the required extra functions and channels, various mecha-
nisms for IP traffic offload, and PDN connectivity with IP flow mobility, the
enhanced PDCCH for handling the required signaling for new features, and the
various scenarios for implementing coordinated multipoint transmission and
reception and the achievable performance benefits. This chapter also provides
a detailed discussion of the increasingly important area of machine-to-machine
(M2M) communication, issues involved, and the provisions made or under de-
velopment in LTE and LTE-Advanced to facilitate its use for efficient machine
type communication from both the access side and the core network side.
Chapter 11 provides the reader with a thorough detailed discussion of the
important TCP and its many variations and important parameters. The chapter
presents several solutions and their performance based on research results from
the literature for optimization of TCP operation and performance in wireless
networks, particularly for LTE systems, Universal Mobile Telecommunications
System (UMTS), and General Packet Radio Service (GPRS) networks. The
solutions include parameter tuning and feature selection at the radio link layer
protocols, automatic repeat request (ARQ) proxies involving no changes to the
TCP, parameter tuning and optimization within the transport protocol itself,
and possible modifications of the protocol to help it to best suit the charac-
teristics of the lossy radio links. The optimization of the TCP performance in
wireless networks, where packet losses on the radio link are a common setback, is
very crucial for optimum use of the limited radio resources and maximization
of the goodput while preventing unnecessary delays.
Chapter 12 covers the implementation of voice over LTE (VOLTE) based
on voice over IP technology. This chapter discusses the semipersistent sched-
uling for voice packets on the air interface to reduce the signaling overhead,
explains the IP multimedia system element functions and interfaces in VOLTE
architecture, and provides a comprehensive discussion of the signaling proto-
cols and naming and addressing, as well as a rather simple means for estimating
the voice capacity of LTE networks. The chapter also presents typical voice
capacity estimates obtained from simulation results in the literature for com-
parison purposes.
Chapter 13 serves as an overview of the newly standardized features in
the LTE area including Rel-12 and Rel-13. It covers multilink aggregation with
the use of dual connectivity, unlicensed LTE, and tight radio-level interwork-
ing with WiFi, and outlines the further carrier aggregation enhancements. The
chapter addresses the device-to-device communications, explains the architec-
tural and air interface details, and concludes with a section that compares the
features and performance of the key LTE evolution steps.
Chapter 14 offers an overview of the potential 5G technologies and out-
lines the standardization approach and the main requirements and spectrum
aspects. The chapter presents the novel radio and architecture concepts includ-
ing multiple numerologies and network slicing, which serve as an input to the
overall system view and migration paths including the tight interworking with
the evolved LTE system.
This book is an outgrowth of the authors’ independent research and study
in the field for more than five years and involvement in the planning, design,
and analysis of the first LTE networks in Latin America, the design of a holistic
framework for the implementation of the LTE-Advanced Pro features includ-
ing licensed-assisted access (LAA), LTE-WLAN aggregation (LWA), dual con-
nectivity (DC), and carrier aggregation (CA), narrowband internet of things
(NB-IoT) implementation evaluation within the LTE system framework, the
design of 5G scheduler within the 5G radio access network (RAN) architecture
in a tier 1 vendor's research and development (R&D) office, implementation of the L2/L3 protocol stack on the LTE radio interface, and providing technical training in the area of LTE, LTE-Advanced, and UMTS for operators, vendors, and R&D institutions in the United States, Mexico, Asia, the Netherlands, Swe-
den, Spain, and Poland. The authors have more than 20 years of professional
engineering experience and consulting in the wireless telecommunication field
starting from low Earth orbit satellites at Motorola to GSM, to 3G planning
in the United States and Asia, and LTE network planning and analysis in Mex-
ico and Sweden with several related technical publications and patents in the
field, and have been working with 5G since its early research phase from 2012.
Acknowledgments
The authors would like to thank the many pioneering researchers in the indus-
try and the academia, the results of whose efforts helped to create the ground-
work for composing this book. Without those efforts, it would never have been
possible to put together this book. We would also like to thank the many clients
who provided the consulting opportunities to us in this industry, which helped
us with the knowledge, experience, and insight to be able to put together a book
of this scope in the field.
Special thanks are due to the review and production teams at Artech
House Publishers who provided excellent support in the production of this
book and helped to meet a reasonable schedule. In particular, we would like to
thank Aileen Storry, the senior acquisitions editor, for coordinating the initial
review of the scope and content of the proposal for publication approval and
her subsequent kind efforts in the arrangement of review and refinement of the
material. We would also like to thank Sarah O’Rourke and her team for their
kind cooperation in the copyediting and production of the book.
1
Introduction
The third generation (3G) Long-Term Evolution (LTE) is the technology specification from the 3GPP standards group that provides cellular operators with the capability to offer wireless broadband services to their subscribers with increased capacity, coverage, and mobile data speeds over the earlier wireless systems. The LTE-Advanced from Release 10, which fulfills the requirements of
IMT-Advanced, is referred to as the fourth generation (4G) mobile network-
ing technology. With increased data applications, content, and video services,
broadband is a significant part of today’s mobile user experience. This has been
demonstrated by the rapid increase in the uptake of wideband code division
multiple access (WCDMA) and high speed packet access (HSPA) networks
worldwide. Mobile video streaming and social networking now make up the
largest segment of data traffic in networks. Video makes up approximately 45%
of mobile data traffic and it is expected to increase to 55% by 2020.¹
This trend in the demand for broadband mobile services has brought new
challenges for operators to provide higher capacity and higher quality networks
with reduced delays and lower per bit cost. The GSA report² in May 2016
stated that 503 operators have commercially launched LTE networks across 167
countries. The LTE networks are already dominating the world’s mobile infra-
structure markets. The technology is one of the fastest network migrations ever
seen and is making it possible for operators to either compete with fixed service
providers or to provide broadband services in areas where the fixed infrastruc-
1. These trends are the kind that have been reported, for instance, in the Cisco white paper, “VNI Forecast and Methodology 2015-2020,” http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/complete-white-paper-c11-481360.pdf.
2. According to http://gsacom.com/
ture does not exist and would be too expensive to deploy. Many industry ana-
lysts have forecasted that LTE’s global momentum will continue between now
and 2020. By the end of 2015, global LTE and LTE-Advanced connections
reached 1.068 billion, according to a GSMA report. By 2020, LTE is expected to
account for nearly 30% of global connections.
In the LTE and LTE-Advanced radio access network architecture, the
control and management of the radio resources, as implemented in the central-
ized RNCs in UMTS, have been distributed to the evolved Node Bs (eNodeBs), which intercommunicate through the X2 interface. The X2 interface
has a key role in the intra-LTE handover operation. These measures result in
speedier response to real-time channel conditions for resource scheduling. It
also helps to reduce the protocol processing overhead in the control plane and
along the user data paths, which, in turn, reduces the latency in setting up calls
and data connections as well as the network deployment costs. The extension
of an IP mode of transmission to the radio access for all communications has
helped to reduce the time it takes to access the radio and the core network resources, significantly reducing, for instance, a data session setup time
that can take anywhere from 1 to 15 seconds in the current wireless mobile
networks.
LTE introduces a new air interface based on orthogonal frequency divi-
sion multiple access (OFDMA) and multiple antenna techniques to achieve
high-data-rate and low-latency data services. The OFDMA technology is avail-
able freely on the market and has been used in digital audio satellite broadcast
systems as well as in the implementation of WiMAX as specified in the IEEE
802.16 standards for wireless metropolitan area networks. The OFDMA removes the intracell interference
that exists in 3G CDMA-based networks and hence allows for much higher
data rates, increased capacity, and larger cell sizes. Speeds beyond 100 Mbps are made possible, whereas the highest speed achieved in 3G networks using the HSDPA feature is around 43 Mbps. In heavily loaded networks, the highest speed observed with advanced 3G networks has been around 2 Mbps, whereas LTE has allowed for speeds of up to 20 Mbps, a factor of 10
higher. Moreover, the mobility across the cellular network can be maintained
at speeds from 120 km/h to 350 km/h or even up to 500 km/h depending
on the frequency band for usage on high-speed trains, as the number of high-
speed rail lines increases and train operators aim to offer an attractive working
environment to their passengers. Moreover, the use of dedicated channels at the
transport level has been eliminated in LTE, which simplifies the MAC (elimi-
nating MAC-d entity) and improves resource utilization. The radio interface is
based on shared and broadcast channels, and there is no longer any fixed level of
resources dedicated to each user independent of the real-time data requirement.
This increases efficiency of the air interface, as the network can control the use
of the air interface resources according to the real-time demand from each user,
LTE networks had been launched of which 106 networks are VOLTE capable,
according to GSA reports.³
LTE provides security in a similar way to its predecessors UMTS and
GSM. Because of the sensitivity of signaling messages exchanged between the
eNodeB and the terminal, or between the MME (a core network element) and
the terminal, all the information is protected against eavesdropping and altera-
tion. The implementation of security architecture of LTE is carried out by two
functions: ciphering of both control plane (RRC) data and user plane data, and
integrity protection, which is used for control plane (RRC) data only. Cipher-
ing is used in order to protect the data streams from being received by a third
party, while integrity protection allows the receiver to detect packet insertion
or replacement. RRC always activates both functions together, either following
connection establishment or as part of the handover to LTE.
The structural and functional simplifications and unifications made possible with LTE networks make them well suited to self-optimization features, and hence they are sometimes classified as self-organizing or self-optimizing networks (SONs). SON helps to reduce operating costs and automatically tunes the network in real time to changing network conditions such as load and interference. This is achieved through three groups of use cases: self-configuration, self-optimization, and self-healing, with centralized, distributed, and hybrid architectures, all covered in detail in Chapter 8.
The 3GPP standards for LTE are provided in 3GPP documents with the
numbering format of 36.SSS-xyz (e.g., TS 36.300-800). The TS stands for the
technical specification, 36 is the series number, SSS is the number of the speci-
fication within that series, x is the release number, y is the technical version
number within that release, and the final z is an editorial version number that is
incremented to track nontechnical changes. The standardization work on LTE
has been going on since 2004, building on the GSM/UMTS family of standards
that dates from 1990. Stage 1 specifications define the service from the user’s
point of view and lie exclusively in the 22 series. Stage 2 of the standard, covering the system's high-level architecture and operation, was completed in 2007. The most useful Stage 2 specifications for LTE are TS 23.401 [1] and TS 36.300 [2], which cover the evolved packet core and the air interface and RAN architecture and protocols, respectively. The Stage 3 specifications define all the functional details and were completed at the end of 2008 and are mainly provided in the 36 series.
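To make the numbering scheme concrete, the short sketch below splits a document number of the 36.SSS-xyz form into its fields; the function name is illustrative, and it only handles the simple single-digit case shown in the text (actual 3GPP file names denote releases from 10 onward with letters, so a real parser would need to allow for that).

```python
def parse_spec_number(spec: str) -> dict:
    """Illustrative split of a 3GPP document number of the form 36.SSS-xyz
    (e.g., '36.300-800') into series, specification number, release,
    technical version, and editorial version, per the convention in the text."""
    series_spec, version = spec.split("-")
    series, number = series_spec.split(".")
    x, y, z = version  # one character each in this simple single-digit example
    return {"series": series, "spec": number,
            "release": x, "technical_version": y, "editorial_version": z}

print(parse_spec_number("36.300-800"))
# {'series': '36', 'spec': '300', 'release': '8', 'technical_version': '0', 'editorial_version': '0'}
```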
The specifications for LTE are now encapsulated in 3GPP Release 8,
which is the earliest set of standards that defines the technical evolution of the
3GPP 3G mobile network systems. Release 7, which includes specifications for HSPA+ (the missing link between HSPA and LTE), also allows for the introduction of a simpler, flat, IP-oriented network architecture extending IP transmission to the radio access network. Further evolution of LTE is specified in
3GPP Release 9 and then into IMT-advanced (LTE-Advanced) as Release 10
with further enhancements in Releases 11 and 12 towards data rates of 1 Gbps.
UE enhancements, such as the new signaling to support advanced interfer-
ence cancellation receivers in the UE, have been a focus for Release 12. Release
12 also introduces the dual connectivity feature that enables aggregation of
component carriers from different enhanced NodeBs (eNodeBs), and the abil-
ity to support carrier aggregation (CA) between frequency division duplexing
(FDD) and TDD component carriers. Other enhancements in Release 12 in-
clude Web Real Time Communication (WebRTC) and other multimedia such
as Enhanced Voice Services (EVS), enhanced multicast and broadcast services
and video, along with continued work from Release 11 on Policy and Charging Control (PCC). The core Rel-12 specifications were frozen in
September 2014 and the ASN.1 was completed in March 2015.
Moving forward to Release 13, 3GPP has provided a new marker for the
LTE brand, namely LTE-Advanced Pro, to mark the point where the
system has been significantly enhanced with respect to LTE-Advanced. Among
others, this is achieved through the dual connectivity enhancements, carrier ag-
gregation enhancements with the use of up to 32 component carriers, SON for
Advanced Antenna Systems (AAS) and full-dimension MIMO (FD-MIMO).
Another set of features under Rel-13 consideration includes the aggregation of licensed and unlicensed spectrum. On one side, WiFi is exploited, with LTE-WLAN Aggregation (LWA), LTE-WLAN Radio Level Integration with IPsec Tunnel (LWIP), and RAN-Controlled LTE-WLAN Interworking (RCLWI). These three features address the same aspect (i.e., the tight
WiFi interworking with RAN), but require different upgrades to the WiFi net-
work and to the UE. Therefore, all of the options are included in the standard,
so that an operator can select a suitable option with respect to the required
upgrades. At the other end of the unlicensed spectrum usage modes, resources from licensed and unlicensed spectrum can be aggregated using the CA framework under Licensed-Assisted Access (LAA), where a tailored version of the LTE radio has been designed to support, for example, the Listen-Before-Talk (LBT) mechanism. Narrowband Internet-of-Things (NB-IoT) addresses
the need for low-power machine-type communications (MTC) through the
lean air interface using 180-kHz carriers and network optimizations enabling
the transmission of the short packets over NAS. The Device-to-Device (D2D)
and sidelink (SL) design enables direct communication between UEs within LTE coverage or out of coverage to support both public safety
References
[1] 3GPP TS23.401, v. 8.0.0, “General Packet Radio Service (GPRS) Enhancements for
Evolved Universal Terrestrial Radio Access Network (E-UTRAN) Access,” December,
2007.
[2] 3GPP TS36.300, v. 8.0.0,“Evolved Universal Terrestrial Radio Access (E-UTRA) and
Evolved Universal Terrestrial Radio Access Network (E-UTRAN); Overall Description;
Stage 2,” April, 2007.
[3] 3GPP 38-series: http://www.3gpp.org/ftp/Specs/html-info/38-series.htm.
[4] 3GPP RP-170741, “Way Forward on the Overall 5G-NR eMBB Workplan,” March
2017.
[5] “Recommendation ITU-R M.2083: IMT Vision – Framework and Overall Objectives of
the Future Development of IMT for 2020 and Beyond,” September 2015.
2
The Underlying DFT Concepts and
Formulations
The use of DFT for the implementation of OFDM modulation was first pro-
posed in 1971 by Weinstein and Ebert [1]. However, the multicarrier com-
munication concept used in LTE today was invented in the 1960s but did
not become popular, because of the high level of complexity involved in the
generation of the transmitted signal and the processing in the receiver. Some
decades later, rapid advance in integrated circuit design and the advent of digi-
tal signal processing provided a breakthrough for the efficient implementation
of multicarrier systems based on the discrete Fourier transform (DFT) and the
related fast Fourier transform (FFT) realization of DFT [2–4]. Today, multicar-
rier systems based on OFDM are used in digital audio and video broadcasting
(DAB and DVB), the WLAN standard, and in the asymmetrical digital sub-
scriber line modems (ADSL). This chapter provides a comprehensive review of
the DFT and FFT concepts and mechanisms as used in the implementation
of OFDM in LTE and is expected to answer many of the usually unanswered
questions often faced by the deeper thinking readers in this regard. We will start
the discussions by first introducing the Fourier transform of discrete time func-
tions known as discrete-time Fourier transform (DTFT).
of n and a sampling time interval T. The DTFT is then defined following the
Fourier transform concept as
$$ F\left[\sum_k s(t)\,e^{-j2\pi kt/T}\right] = \sum_k F\left[s(t)\,e^{-j2\pi kt/T}\right] = \sum_k S(f - k/T) \qquad (2.4) $$
In terms of the normalized frequency, f/fs (cycles per sample), the period is
1. In terms of radians per sample, the period is 2π, which also follows directly
from the periodicity of $e^{-j\omega n}$. That is,

$$ e^{-j(\omega + 2\pi k)n} = e^{-j\omega n}, \qquad S(\omega) = S(\omega + 2\pi k) $$

$$ \omega = 2\pi f/f_s = 2\pi fT \qquad (2.5) $$

$$ \omega_k = \frac{2\pi}{L}k, \qquad k = 0, 1, \ldots, L-1 $$
We obtain
$$ S(k)/LT = S(\omega_k)/LT = \frac{1}{L}\sum_{n=0}^{L-1} s(n)\,e^{-j2\pi\frac{k}{L}n}, \qquad k = 0, 1, \ldots, L-1 \qquad (2.7) $$

$$ S_f(k) = F\left[\sum_{n=-\infty}^{\infty} s(n)\,\delta(t-nT)\right] = \frac{1}{LT}\int_{LT} \sum_{n=0}^{L-1} s(n)\,\delta(t-nT)\,e^{-j2\pi\frac{k}{LT}t}\,dt $$
$$ = \frac{1}{LT}\sum_{n=0}^{L-1} s(n)\int_{LT}\delta(t-nT)\,e^{-j2\pi\frac{k}{LT}t}\,dt \qquad (2.8) $$
$$ = \frac{1}{L}\sum_{n=0}^{L-1} s(n)\,e^{-j2\pi\frac{k}{L}n}, \qquad k = 0, 1, \ldots, L-1 $$
where the $e^{-j2\pi\frac{1}{LT}t}$ term is the fundamental frequency as defined by the sequence period LT, and the $e^{-j2\pi\frac{k}{LT}t}$, k = 0, 1, …, are the harmonics of that fundamental. Substituting (2.8) into the inverse expansion of the sequence, we can write

$$ \sum_{k=0}^{L-1} S_f(k)\,e^{j2\pi\frac{k}{L}n} = \frac{1}{L}\sum_{k=0}^{L-1}\sum_{m=0}^{L-1} s(m)\,e^{-j2\pi\frac{k}{L}m}\,e^{j2\pi\frac{k}{L}n} $$
$$ = \frac{1}{L}\sum_{m=0}^{L-1} s(m)\sum_{k=0}^{L-1} e^{-j2\pi\frac{k}{L}m}\cdot e^{j2\pi\frac{k}{L}n} \qquad (2.9a) $$
but the vectors $e^{-j2\pi\frac{k}{L}m}$ form an orthogonal basis over the L-dimensional space of such sequences, so that

$$ \sum_{k=0}^{L-1} e^{-j2\pi\frac{k}{L}m}\cdot e^{j2\pi\frac{k}{L}n} = L\,\delta_{mn} $$

where $\delta_{mn}$ is the Kronecker delta function, which is 1 when m = n and zero otherwise.
With this, (2.9a) becomes

$$ \sum_{k=0}^{L-1} S_f(k)\,e^{j2\pi\frac{k}{L}n} = \frac{1}{L}\sum_{m=0}^{L-1} s(m)\,L\,\delta_{mn} = s(n), \qquad n = 0, 1, \ldots, L-1 \qquad (2.9b) $$
showing that the sequence s(n) can be fully recovered from its DFT, through the
inverse DFT process. The DFT and the DTFT can be viewed as the results ob-
tained by applying the standard continuous Fourier transform to discrete data.
From that perspective, if the input data are discrete, the transform becomes a
DTFT. If the input data are periodic and continuous, the Fourier transform be-
comes a Fourier series, whereas if the input data are both discrete and periodic,
the Fourier transform becomes a DFT. One may view the DFT as a transform
for Fourier analysis of finite-domain, discrete-time functions.
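As a minimal numerical sketch of (2.8) and (2.9b), the following Python/NumPy fragment computes the DFT coefficients by direct correlation with the 1/L scaling used here, checks them against NumPy's unscaled FFT, and recovers the original sequence through the inverse expansion; the sequence length and values are arbitrary examples.

```python
import numpy as np

L = 8
rng = np.random.default_rng(0)
s = rng.standard_normal(L)                    # a real length-L sequence s(n)

# Forward DFT with the 1/L scaling of (2.8): S_f(k) = (1/L) sum_n s(n) e^{-j2*pi*k*n/L}
n = np.arange(L)
k = n.reshape(-1, 1)
S_f = (np.exp(-2j * np.pi * k * n / L) @ s) / L

# Inverse expansion of (2.9b): sum_k S_f(k) e^{+j2*pi*k*n/L} recovers s(n)
s_rec = np.exp(2j * np.pi * k * n / L) @ S_f

print(np.allclose(S_f, np.fft.fft(s) / L))    # agrees with NumPy's FFT up to the 1/L factor
print(np.allclose(s_rec, s))                  # the original sequence is fully recovered
```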
Note that the frequencies represented for k > L/2 are just shifted (dislocated) versions of the negative frequencies in the Fourier mapping of the real signal s(n) and are redundant in that they do not present any new information. This artifact occurred when we defined the frequency index in the definition of the DFT, in (2.8), from k = 0 to L − 1 for convenience instead of from something like −L/2 to +L/2 − 1. In fact, a real signal such as s(n) or s(t) can only be expanded in terms of either sinusoidal or cosinusoidal harmonics depending on whether the signal is odd or even with respect to t = 0. But to represent a sinusoid or cosinusoid in the Fourier series or Fourier transform expansion, it takes a pair of positive and negative frequencies, that is, $e^{-j2\pi nk/L}$ and $e^{+j2\pi nk/L}$, to form the function. We can further back up this notion by considering that the frequency amplitudes S(k) are complex conjugate sym-
metric with respect to the Nyquist frequency calculated for k = L/2 (half the
sampling rate), as shown below:
$$ S(L/2 + m) = \sum_n s(n)\,e^{-j2\pi n(L/2 + m)/L} = \sum_n s(n)\,e^{-j\pi n}\,e^{-j2\pi nm/L} = \sum_n s(n)\,\cos(\pi n)\,e^{-j2\pi nm/L} $$

Similarly,

$$ S(L/2 - m) = \sum_n s(n)\,e^{-j2\pi n(L/2 - m)/L} = \sum_n s(n)\,e^{-j\pi n}\,e^{+j2\pi nm/L} = \sum_n s(n)\,\cos(\pi n)\,e^{+j2\pi nm/L} $$
Since the cosine term and the sequence s(n) are both real, we see that
$$ S(L/2 + m) = S^{*}(L/2 - m), \qquad \text{for } m = 1, \ldots, L/2 - 1 $$
This shows that the DFT coefficients S(k) for k > L/2 are just the same values of S(k) obtained by replacing the frequency index k with −k in the values calculated for k < L/2. This also stems from the well-known symmetric nature of
the Fourier transform of a real signal. That is, the spectrum magnitude is sym-
metric with respect to the zero frequency DC line, which is represented by the
mid-sample corresponding to k = L/2 in the conventional frequency indexing
of the DFT. Thus, the highest frequency extracted by the DFT on a sampled
version of a continuous time signal s(t), corresponds to k = L/2 which translates
to a frequency of ω = π rad/sample, or 1/2T Hz, using the definition in (2.5)
for normalized frequency. This is just half the sampling rate fs/2, as expected by
the Nyquist sampling rate theory. The DFT coefficients corresponding to the
frequency index k beyond the Nyquist rate L/2 simply calculate the spectral
amplitudes for an aliased version of these higher frequencies into the actual
negative frequency half (i.e., the image frequencies) of the spectrum.
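The conjugate symmetry discussed above can be checked numerically; the short sketch below uses NumPy's unscaled FFT of a real test sequence (length and seed are arbitrary) to verify S(L/2 + m) = S*(L/2 − m) and, equivalently, that the bins above L/2 are the aliased negative frequencies.

```python
import numpy as np

L = 16
s = np.random.default_rng(1).standard_normal(L)   # real test sequence
S = np.fft.fft(s)                                  # unscaled DFT: S(k) = sum_n s(n) e^{-j2*pi*k*n/L}

# Conjugate symmetry about the Nyquist index k = L/2 for a real input
for m in range(1, L // 2):
    assert np.isclose(S[L // 2 + m], np.conj(S[L // 2 - m]))

# Equivalently, bins above L/2 are the aliased negative frequencies: S(k) = S*(L - k)
assert np.allclose(S[1:], np.conj(S[::-1][:L - 1]))
print("conjugate symmetry verified")
```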
In passing, we will further note that the above conclusions would not
change when applied to complex signals in engineering if they are viewed as
simply representing real data in two orthogonal independent dimensions. For
instance, the QAM complex modulation symbols consist of real data sequences
that will be carried on the in-phase and the quadrature components of a carrier
signal (the cosine and sine). Hence, each component represents real data with
DFT and FFT of the same exact properties.
The spacing between successive DFT frequency samples is

$$ \omega_s = 2\pi\frac{1}{L} $$

which, using the definition in (2.5), translates to a frequency spacing of $\frac{1}{LT}$ Hz.
This shows that the frequency resolution of DFT is inversely proportional to
the period LT of the sampled waveform. However, the sequence period LT can
be made larger by choosing a period N that is actually larger than the actual
nonzero portion of s(n), and setting the rest to zero over the period. That means
defining a new sequence by zero padding the old sequence s(n) over the L non-
zero portions to bring its period to NT, that is,
$$ s^{(zp)}(n) = \begin{cases} s(n), & n = 0, 1, \ldots, L-1 \\ 0, & n = L, L+1, \ldots, N-1 \end{cases} $$

The DFT of the zero-padded sequence is then

$$ S^{(zp)}(k) = \frac{1}{N}\sum_{n=0}^{N-1} s^{(zp)}(n)\,e^{-j2\pi\frac{k}{N}n}, \qquad k = 0, 1, 2, \ldots, N-1 \qquad (2.10) $$
with N much larger than L. This results in a smoother spectral profile by filling
in for samples between the estimated frequencies through an interpolated DFT.
Similarly, the inverse DFT (IDFT) is given by [in analogy with (2.9b)],
$$ s^{(zp)}(n) = \sum_{k=0}^{N-1} S^{(zp)}(k)\,e^{j2\pi\frac{k}{N}n} \qquad (2.11) $$
It is important to note that zero padding does not add any new informa-
tion to the data to increase the frequency resolution. It only helps to increase the
FFT resolution (i.e., reducing the FFT binwidth) and provides a denser sam-
pling or interpolation of the estimated frequency spectrum. However, the FFT
resolution or binwidth has nothing to do with the frequency resolution in the
sense of either resolving closely spaced frequencies or covering the full frequency
span of the continuous waveform s(t) that has been represented by its sampled
version s(n). Appending zeros does not change the input sampling rate, and
hence it would not affect the frequency span of the FFT output. The length of the sampled signal s(n) determines how closely spaced frequencies within the original continuous waveform (represented by its sampled version) can be differentiated; that resolution is the inverse of the observation time of the signal, given by 1/LT. This is simply because it is the nonzero portion of the padded sequence that
determines how closely spaced sinusoidal harmonics can be differentiated over
the duration provided [in the Fourier series expansion of the sequence through
the correlation process as given in (2.10)].
Zero padding changes the intersample spacing in the FFT output and
results in a denser sampling of the frequency spectrum that would result from
a DFT on the nonzero-padded data sequence. The zero padding would not
help to distinguish closely spaced spectral peaks when the original input signal
lacks sufficient frequency-domain resolution. In other words, the zero padding
results in a sequence with a larger period and hence a smaller fundamental
frequency whose harmonics are estimated in the Fourier series expansion of
the periodic sequence. The zero padding results in a frequency interpolation
of the spectrum conveyed by the nonzero-padded sequence and provides finer
FFT binwidth and improved amplitude estimation. It does not increase the fre-
quency resolution in the sense of either introducing new spectral information,
or allowing to resolve closely spaced frequencies contained within the underly-
ing continuous waveform s(t). It is possible to have very fine FFT resolution, yet
not be able to resolve two closely separated frequencies contained within the
original continuous waveform.
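The following sketch illustrates this point with arbitrary example numbers: zero padding a 64-sample record makes the FFT bins 16 times denser, but two tones spaced more closely than 1/(LT) still merge into a single peak in both spectra.

```python
import numpy as np

fs = 1000.0                       # sampling rate (Hz)
T = 1.0 / fs
L = 64                            # short observation: resolution ~ 1/(L*T) = 15.6 Hz
t = np.arange(L) * T
s = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 210 * t)   # two tones only 10 Hz apart

S_short = np.abs(np.fft.rfft(s))               # binwidth fs/L      = 15.6 Hz
S_padded = np.abs(np.fft.rfft(s, n=16 * L))    # binwidth fs/(16*L) = ~1 Hz after zero padding

# The padded spectrum is only a denser interpolation of the same underlying spectrum;
# both show a single merged peak because 1/(L*T) = 15.6 Hz exceeds the 10-Hz tone spacing.
print(len(S_short), len(S_padded))
```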
The frequency resolution can be increased only by sampling the data more finely (sampling at or above the Nyquist rate, that is, at least twice the highest frequency, to cover the full frequency spectrum of the continuous waveform s(t)) or by taking more data points (i.e., observing the signal over a longer time) to be able to differentiate between closely spaced frequencies with distinct peaks. Generally, the original data
sequence s(n) should provide enough samples to provide a frequency resolution
that is smaller than the minimum spacing between the frequencies of interest.
The FFT resolution should at least support the same resolution (binwidth) as
the frequency resolution provided by the nonpadded data. Zero padding also shifts the frequency bins relative to those of the DFT computed on the nonpadded data sequence. This shift can cause problems if it alters the estimated samples
relative to a known frequency of interest. In order to provide a bin at a fre-
quency of interest, the FFT length should be set to cover an integer number
of cycles at that frequency. Otherwise, the amplitude of the desired frequency
may be estimated through further interpolation between the two adjacent bins
covering the frequency.
However, as far as these discussions would relate to what we need as
the background for LTE, the frequencies of interest within the modulated
waveforms are the known subcarriers spaced at 15 kHz. In the case of 5G, the
baseline subcarrier spacing of 15 kHz has been scaled by an integer number to
support a maximum of 480 kHz (see Chapter 14). The spacing of the spectral
samples in the FFT from zero padding in the time domain is given by fs/N, where fs is the sampling frequency (1/T) and N is the number of FFT points (the length of the new zero-padded sequence). With this spacing, slot n of the FFT output array represents the frequency n × (fs/N). Ideally, one may choose
values for fs and N such that the subcarriers contained in the modulated wave-
form will end up at the computed bin locations in the FFT for precise ampli-
tude estimation (demodulation of the subcarriers in LTE, for example). This
happens if you apply an FFT to a record length that covers an integer num-
ber of cycles at the frequency of interest. However, the reduced FFT binwidth
(higher density bins) resulting from zero padding the time sequence makes it
more likely that a contained subcarrier frequency is more closely represented by
one of the bins within the FFT, hence providing for a more accurate amplitude
estimation through for instance a simple parabolic interpolation between the
two adjacent bins containing the subcarrier frequency. For instance, the 3GPP
assignment of an FFT size of 1,536 for the bandwidth of 15 MHz results in a
binwidth of 15/1,536 = 0.0098 MHz or 9.8 kHz, which is considerably smaller than the LTE subcarrier spacing of 15 kHz, whereas if an FFT size of 1,024 were to be assigned, it would result in a binwidth of 15/1,024 = 0.0146 MHz or 14.6 kHz,
which does not provide enough margin in the accurate amplitude estimation
of the 15-kHz spaced subcarriers (through, for instance, interpolation of the
closely spaced adjacent bins) carrying the information transmitted.
References
[1] Weinstein, S. B., and P. M. Ebert, “Data Transmission by Frequency-Division Multiplex-
ing Using the Discrete Fourier Transform,” IEEE Trans. on Commun. Technol., Vol. 19,
No. 5, October 1971, pp. 628–634.
[2] Bracewell, R., The Fourier Transform and Its Applications, 2nd ed., New York: McGraw-
Hill, 1986.
[3] Oppenheim, A. V., and R. W. Schafer, Discrete-Time Signal Processing, 2nd ed., Upper
Saddle River, NJ: Prentice Hall, 1999.
[4] Brigham, E. O., The Fast Fourier Transform and Its Applications, Englewood Cliffs, N.J.:
Prentice Hall, 1988.
[5] Arfken, G. B., and H. J. Weber, Mathematical Methods for Physicists, 5th ed., Boston, MA:
Academic Press, 2000.
[6] Córdoba, A., “La formule sommatoire de Poisson,” C.R. Acad. Sci. Paris, Series I, Vol. 306,
1988, pp. 373–376.
[7] http://www.dspguru.com/dsp/howtos/how-to-interpolate-in-time-domain-by-zero-padding-in-frequency-domain.
3
Air Interface Architecture and Operation
LTE is built on an all new air interface based on orthogonal frequency divi-
sion multiplexing (OFDM) technology. The air interface, which is termed Evolved UTRA (E-UTRA), uses the orthogonal frequency division multiple access
(OFDMA) scheme on the downlink and a slight variation of it referred to as
single-carrier frequency division multiple access (SC-FDMA) on the uplink.
SC-FDMA helps to lower the peak-to-average power ratio (PAPR) in the transmitter, which is critical for prolonging handset battery life.
The LTE physical layer specifications [1, 2] allow for the allocation of channel bandwidths of 1.4, 3, 5, 10, 15, and 20 MHz and support
both frequency division duplex (FDD) and the time division duplex (TDD)
modes for transmission on paired and unpaired spectrum. An advantage of
TDD is that the allocated system bandwidth can be dynamically proportioned
between the uplink and downlink to meet the load conditions on each link.
However, the focus of this chapter will be on the FDD mode, which offers the advantages of self-interference protection and user-to-user interference mitigation through the duplex frequency spacing and duplex gap, while it does not have the time synchronization complexities associated with the TDD mode. Furthermore, the E-band duplex spectrum at 71–76 and 81–86 GHz is being deployed as FDD for terrestrial point-to-point links and for high-throughput low Earth orbit satellites (HTS), so this in itself is a good reason to favor FDD.
plenty of radio spectrum at the 1,800-MHz band, and a rollout at 1,800 MHz
will be less costly than at the 2,600-MHz band.
The OFDMA channels are allocated within an operator’s licensed spec-
trum allocation. The center frequency is identified by an EARFCN (E-UTRA
Absolute Radio Frequency Channel Number). The precise location of the
EARFCN is an operator decision, but it must be placed on a 100-kHz raster
and the transmission bandwidth must not exceed the operator’s licensed spec-
trum. Separate EARFCNs are required to describe an uplink and a downlink
frequency pair in an FDD channel.
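As an illustration of how a downlink EARFCN maps to a carrier frequency on the 100-kHz raster, the sketch below applies the standard E-UTRA relation F_DL = F_DL_low + 0.1 MHz × (N_DL − N_Offs-DL); the band 3 entries in the small table are quoted from memory and should be verified against 3GPP TS 36.101 before any real use.

```python
# Hypothetical band table; the band 3 entries are illustrative and should be
# verified against 3GPP TS 36.101 before use.
BANDS_DL = {
    3: {"f_dl_low_mhz": 1805.0, "n_offs_dl": 1200, "n_dl_range": (1200, 1949)},
}

def earfcn_to_dl_freq_mhz(band: int, n_dl: int) -> float:
    """F_DL = F_DL_low + 0.1 MHz * (N_DL - N_Offs-DL), placed on the 100-kHz raster."""
    entry = BANDS_DL[band]
    lo, hi = entry["n_dl_range"]
    if not lo <= n_dl <= hi:
        raise ValueError("EARFCN outside the band's downlink range")
    return entry["f_dl_low_mhz"] + 0.1 * (n_dl - entry["n_offs_dl"])

print(earfcn_to_dl_freq_mhz(3, 1300))   # 1815.0 MHz for this example EARFCN
```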
Figure 3.1 OFDM waveforms (assumes 1-bit symbols, as each waveform carries one symbol).
code rate adjustment (to channel conditions) and interleaving is divided into
a number of lower rate parallel bit streams, which are used to modulate each
subcarrier assigned for the connection using the spectrally efficient modulation
schemes of QPSK, 16QAM, or 64QAM. The modulation order and the final
coding rate from the process will depend on the indicated channel conditions
(i.e., the signal-to-noise ratio) at the time on the subcarrier. Turbo codes were
adopted rapidly in UMTS after their publication in 1993, with the benefits of their
near-Shannon limit performance outweighing the associated costs of memory
and processing requirements.
Since each subcarrier of 15 kHz is narrow enough, the channel response
is made flat (nonfrequency selective) over the subcarrier modulation frequency
range. In other words, the symbol time of 66.7 µs on each subcarrier (i.e., the
OFDM symbol time) is made much longer than the typical channel dispersion
time (delay spread) of 0.5 to 15 µs, as observed in cellular communication,
making the channel resistant to multipath delays. By changing the frequency-
selective wideband channel to a flat fading condition on each subcarrier, chan-
nel estimation and compensation are done easily in the frequency domain in
the receiver. The subcarriers are allocated to a connection in units of 12 adja-
cent subcarriers (180 kHz), called physical resource blocks (PRBs), which last
for a minimum time of 1 ms, where PRB is the smallest element of resource
allocation assigned by the base station scheduler.
The number of PRBs and the number of subcarriers that LTE has pro-
vided for various channel bandwidths are given in Table 3.1. Note that the
downlink has an unused central subcarrier.
When some data are lost occasionally due to channel error conditions
on some of the subcarriers, they can be recovered through the error correcting
properties of the convolutional-Turbo codes used. Since each subcarrier is able
to carry data at a maximum symbol rate of 15 ksps (kilo symbols per second),
this results in a raw symbol rate of 18 megasymbols per second over the maximum 20-MHz channel band that can be allocated to the system. This translates into 108
Mbps of raw data when the modulation order of 64 QAM is used. The actual
Air Interface Architecture and Operation 25
Table 3.1
Number of Resource Blocks with Channel Bandwidth
Available channel bandwidth (MHz)    1.4    3      5      10     15     20
Number of occupied subcarriers       72     180    300    600    900    1,200
Number of PRBs                       6      15     25     50     75     100
peak data rates are obtained by subtracting from the theoretical peak rates the
error coding and protocol overheads and adding the gains arising from any spatial
multiplexing such as achieved via MIMO.
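The raw-rate figures quoted above follow from simple arithmetic on the Table 3.1 entries, as the short check below shows.

```python
# Raw symbol and bit rates over a 20-MHz LTE carrier, reproducing the figures in the text.
subcarriers = 1200               # occupied subcarriers for 20 MHz (Table 3.1)
symbol_rate_per_sc = 15e3        # 15 ksps per subcarrier
bits_per_symbol = 6              # 64QAM

raw_symbol_rate = subcarriers * symbol_rate_per_sc   # 18e6 symbols per second
raw_bit_rate = raw_symbol_rate * bits_per_symbol     # 108e6 bits per second

print(raw_symbol_rate / 1e6, "Msps;", raw_bit_rate / 1e6, "Mbps before coding and overhead")
```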
The perfect mutual orthogonality between user channels (the subcarriers)
is preserved all the way within the detection process with the OFDM technol-
ogy, making the radio planning more flexible with LTE. The coverage shrinkage
due to cell load phenomenon observed in CDMA-based systems (cell breath-
ing) as a result of intracell mutual interference between users is no longer the
case. Nevertheless, intercell interference, particularly at cell edges, will exist
since LTE is also based on frequency reuse. Specifically, the intercell interfer-
ence will arise in OFDM systems when the same physical resources (PRBs) are
used simultaneously in neighboring cells. To deal with this problem, 3GPP [4]
has been investigating a number of alternative interference mitigation mecha-
nisms based on intercell resource usage coordination, intercell interference av-
eraging and intercell interference cancellation, which are discussed further in
Chapter 7. However, with the absence of intracell interference, LTE can deliver
optimum performance in cells up to 5 km in radius, while still capable of delivering
effective performance in cell sizes of up to 30-km radius. Both the OFDMA
used on downlink and the SC-FDMA scheme used on the uplink are imple-
mented through discrete Fourier transform (DFT) mechanisms as discussed in
more detail in later sections.
LTE extends the packet-mode IP transport for all services to the radio
access and into the handset. Voice that is transmitted in the circuit mode in
UMTS will be transmitted over IP in LTE. Thus, no channels are dedicated to
any one user at the transport level, as all user information will be transported
over shared channels. For each transmission time interval of 1 ms, which de-
fines the size of a transport block in LTE, a new scheduling decision is taken
regarding which users are assigned to a frequency resource block (RB) for the
time interval. This results in more resource sharing over the air and hence a
more efficient use of the scarce radio resources. The short transmission interval
used also results in quicker response to changing channel conditions and more
real-time, multiuser resource sharing over the radio link. For the same user, the
frequency sets (RBs) can be changed on a per 0.5-ms basis, which is the slot
duration defined in LTE.
Figure 3.2 Frequency-time grid for one FDD mode subframe; the OFDM symbol positions are numbered from 0 to 6 in each of the two time slots within the 1-ms subframe (with the downlink single antenna reference symbol structure on antenna port 0).
are further split into five subframes, each being 1 ms long. The special subframes may consist of three fields: the downlink pilot time slot (DwPTS), the guard period (GP), and the uplink pilot time slot (UpPTS). These fields are individually configurable in terms of length, but the total length of the three must sum to 1 ms. The subframes can be dynamically configured as special subframes and switched to uplink or downlink within a frame to
dynamically allocate the channel for UL or DL transmission depending on the
load on each link. In TDD the allocated spectrum, say, 20 MHz, is unpaired in
that it is one piece of 20 MHz which is shared between the DL and UL.
places within the PRBs. There are different types of reference symbols that are
used in the downlink and uplink frames for timing synchronization with the
network and coherent demodulation and are each positioned differently within
the PRBs.
On the downlink, there are two types of physical signals that are transmit-
ted. There are the cell-specific reference signals or symbols and the synchroniza-
tion signals or symbols. The reference symbols are a set of known symbols that
are transmitted during the first and fifth symbol location (resource element)
within each 0.5-ms slot when the short CP is used and during the first and
fourth OFDM symbol (resource element) when the long CP is used. The refer-
ence symbols are transmitted on every sixth subcarrier and are staggered in time
as illustrated in Figure 3.2 for the case of a single Tx antenna (on antenna port
0). LTE supports a number of transmission modes based on multiple Tx and
Rx antenna configurations. Up to 4 antennas can be configured in the eNodeB,
each designated with port numbers 0 to 3. In the downlink, multiple antenna
transmissions are organized using antenna ports, each of which has its own copy
of the resource grid that we introduced earlier. Ports 0 to 3 are used for single
antenna transmission, transmit diversity, and spatial multiplexing, while port
5 is reserved for beamforming. There are reference symbols also provided for
antenna ports 1, 2, and 3 as will be discussed in Section 3.11. Antenna ports
0 and 1 use eight reference symbols per PRB, while antenna ports 2 and 3 use
only four. This is because a cell is likely to use four antenna ports when it is
dominated by slowly moving mobiles, for which the amplitude and phase of the
received signal will only vary slowly with time.
The reference signals have two functions: (1) they provide an amplitude and phase reference in support of channel estimation and demodulation, and (2) they provide a power reference in support of channel quality measurements and fre-
quency-dependent scheduling. The cell-specific reference signals support both
of these functions. The channel response on subcarriers that contain the refer-
ence symbols is estimated directly. Then interpolation is used to estimate the
channel response on the remaining subcarriers.
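A simplified sketch of the cell-specific reference symbol placement described above, for antenna port 0 with the normal cyclic prefix, is given below; the cell-specific frequency shift of PCI mod 6 and the 3-subcarrier stagger between the two symbols follow the pattern illustrated in Figure 3.2, but TS 36.211 remains the authoritative definition.

```python
def crs_positions_port0(n_prb: int, pci: int):
    """Resource elements (symbol index, subcarrier index) carrying cell-specific RS
    on antenna port 0 in one slot with normal CP: OFDM symbols 0 and 4, every sixth
    subcarrier, staggered by three subcarriers and shifted by v_shift = PCI mod 6."""
    v_shift = pci % 6
    positions = []
    for symbol, v in ((0, 0), (4, 3)):
        for k in range((v + v_shift) % 6, 12 * n_prb, 6):
            positions.append((symbol, k))
    return positions

# One PRB of a cell with PCI 7: two symbols x two RS each = four REs per slot,
# i.e., eight reference symbols per PRB per 1-ms subframe, as stated above.
print(crs_positions_port0(1, 7))   # [(0, 1), (0, 7), (4, 4), (4, 10)]
```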
Besides the reference symbols used in channel estimation for the coherent
demodulation of the downlink channels, there are the synchronization signals
or symbols that are used for deriving the network timing information in the
UE as well as cell search and neighbor cell monitoring as they carry the cell
identity. E-UTRA uses a hierarchical cell search scheme similar to WCDMA.
This means that the synchronization acquisition (frequency and time synchro-
nization) and the cell group identifier are obtained from a combination of a
primary synchronization signal (P-SCH) and a secondary synchronization sig-
nal (S-SCH) defined with a predefined structure. They are transmitted on the
72 center subcarriers (around the DC subcarrier) in predefined slots twice per
frame in the first and sixth subframes, as shown in Figure 3.4. It is to be noted
that the accurate clock references and clock distribution both become more
critical as throughput rates increase. The link http://www.rttonline.com/tt/TT2016_004.pdf provides a summary of information from Chronos on this topic and relevant ITU timing standards documents.
The primary synchronization signal (PSS) is used to discover the symbol
timing and obtain some information about the physical cell identity. The sec-
ondary synchronization signal (SSS) is then used to obtain the frame timing,
the physical cell identity, the transmission mode (FDD or TDD), and the cyclic prefix length. The mobile can then receive the cell-specific reference signals. These provide an amplitude and phase reference
for the channel estimation process, so they are essential for everything that fol-
lows. The mobile then receives the physical broadcast channel and reads the
master information block. With this, it finds the number of transmit antennas
at the base station, the downlink bandwidth, the system frame number and a
quantity called the PHICH configuration that describes the physical hybrid
ARQ indicator channel. The mobile can then start reception of the physical
control format indicator channel (PCFICH), so as to read the control format
indicators. These indicate how many symbols are reserved at the start of each
downlink subframe for the physical control channels to read the information.
Each synchronization signal sequence is generated as a symbol-by-symbol product of an orthogonal sequence (3 of them existing) and a pseudo-random sequence (168 of them existing). Each cell is identified by a unique combination of one orthogonal sequence and one pseudorandom sequence, allowing 504 different cell identities.
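A minimal sketch of how the cell identity is composed from the two sequence indices is shown below (the function name is illustrative).

```python
def pci(n_id_1: int, n_id_2: int) -> int:
    """Physical cell identity from the SSS group index (0..167) and PSS index (0..2)."""
    assert 0 <= n_id_1 <= 167 and 0 <= n_id_2 <= 2
    return 3 * n_id_1 + n_id_2      # 3 * 168 = 504 distinct identities

print(pci(167, 2))                  # 503, the largest physical cell identity
```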
On the uplink there are two types of reference symbols. These are:
Figure 3.4 PSS and SSS frame and slot structure in the time-domain FDD case.
Air Interface Architecture and Operation 31
with the PUSCH and PUCCH. On the PUSCH, one DRS symbol
is transmitted per slot in the fourth symbol position (symbol number
3). On the PUCCH, from 2 to 3 DRS symbols per slot may be config-
ured. The DRS symbols occupy the same allocated uplink bandwidth
as for the user data. Therefore, the length of the reference symbol
sequence will be the same as the number of allocated subcarriers in
the uplink transmission bandwidth and hence a multiple of 12. Mul-
tiple symbol sequences have been designed to accommodate different
bandwidth allocations. There are 30 base sequences for bandwidth
allocations from 1 to 3 resource blocks, whereas more than 30 base
sequences have been defined for bandwidth allocations of more than
three resource blocks. These symbol sequences have been organized
into 30 sequence groups. Each sequence group contains one base DRS
sequence of a length up to that suitable for bandwidth allocations up
to five resource blocks, and two base DRS sequences for bandwidth
allocations above five resource blocks. Each cell is allocated one se-
quence group. The DRS sequences are based on the Constant Amplitude Zero Auto-Correlation (CAZAC) prime-length Zadoff-Chu sequences, which are cyclically extended to the desired length and are discussed in [5] (a small generation sketch is given after this list). Multiple orthogonal DRS sequences are created from
a single base sequence using cyclic shifts resulting in 12 orthogonal
sequences for each base sequence. These orthogonal sequences are as-
signed to different UEs in the same cell and are carried in the PRBs
allocated for both the uplink traffic and the uplink control channels.
2. Channel sounding reference symbols: When there is no uplink trans-
mission taking place, the eNodeB cannot take measurements on the
channel to perform, for example, frequency-selective scheduling. In
these circumstances, UE may be instructed to perform uplink sound-
ing. This will involve the UE transmitting a sounding reference signal
(SRS) within an uplink resource allocation specifically set aside for the
purpose. The eNodeB then performs channel estimation on the re-
ceived SRS signals to choose the resource blocks that contain the best
performing set of subcarriers for a UE. This is similar to the downlink
scheduling in which the UE reported CQI is used for the purpose.
The SRS symbols are allocated over multiples of four resource blocks
and always transmitted in the last symbol of a subframe. The SRS
transmissions can be set as periodic, with variable bandwidth, the con-
figuration of which is set using higher-layer signaling.
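As referenced in the discussion of the demodulation reference signals above, the sketch below generates a prime-length Zadoff-Chu sequence from the standard formula x_u(n) = exp(−jπ u n(n+1)/N_ZC) and checks its CAZAC properties; the root and length used are arbitrary examples rather than values taken from the LTE tables.

```python
import numpy as np

def zadoff_chu(u: int, n_zc: int) -> np.ndarray:
    """Zadoff-Chu sequence of odd length n_zc with root u coprime to n_zc:
    x_u(n) = exp(-j*pi*u*n*(n+1)/n_zc)."""
    n = np.arange(n_zc)
    return np.exp(-1j * np.pi * u * n * (n + 1) / n_zc)

x = zadoff_chu(u=25, n_zc=139)            # 139 is prime; root 25 is just an example

# CAZAC properties: constant amplitude and zero periodic autocorrelation off lag 0
print(np.allclose(np.abs(x), 1.0))
acorr = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(x)))   # periodic autocorrelation
print(np.allclose(np.abs(acorr[1:]), 0.0, atol=1e-9))
```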
LTE timing is defined in terms of a basic time unit

$$ T_s = \frac{1}{2048 \times 15000} \text{ seconds} \qquad (3.1) $$
in which 1/15,000 second is the reciprocal of the 15-kHz subcarrier spacing, making the OFDM
symbol in the time domain equal to 2,048 × Ts. The value 2,048, as will be
seen in later sections, is the FFT size for the system bandwidth of 20 MHz
in the OFDM transceiver implementation. That is, Ts can also be considered
the sampling time for the OFDM signal in LTE where an FFT size of 2,048 is
used in the modulation/demodulation process. However, with the definition in
(3.1), the LTE frame of 10 ms will equal exactly 307,200 × Ts. A time slot, of which 20 form a frame, will be exactly 15,360 × Ts. The short and long cyclic
prefixes can also be shown to be multiples of this basic time unit, that is, 144 ×
Ts, and 512 × Ts, respectively, except for the first symbol of each slot, which has
a slightly longer cyclic prefix equal to 160 × Ts.
One other significance of this basic time unit, Ts, is the fact that it is also
an exact multiple of the UMTS and 1xEV-DO chip rates. The chip rate in
UMTS is 3.84 Mcps, and for 1x-based technologies it is 1.2288 Mcps. These,
if expressed as chip periods, become exactly 8 × Ts for UMTS and 25 ×
Ts for 1xEV-DO. Such integer relationships play an important role in reducing
the chipset complexity when the same chipset has to support both UMTS and
1xEV-DO technologies simultaneously in the handset. That is, the basic time
unit Ts in LTE will allow the same clock source to be used for all.
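The integer relationships quoted in this section can be verified with a few lines of arithmetic, as sketched below.

```python
# The basic LTE time unit and its integer relationships to frames, slots, and legacy chip rates.
Ts = 1.0 / (2048 * 15000)                        # seconds, per (3.1)

frame_in_Ts = round(10e-3 / Ts)                  # 10-ms frame         -> 307200 Ts
slot_in_Ts = round(0.5e-3 / Ts)                  # 0.5-ms slot         -> 15360 Ts
umts_chip_in_Ts = round((1 / 3.84e6) / Ts)       # UMTS chip period    -> 8 Ts
evdo_chip_in_Ts = round((1 / 1.2288e6) / Ts)     # 1xEV-DO chip period -> 25 Ts

print(frame_in_Ts, slot_in_Ts, umts_chip_in_Ts, evdo_chip_in_Ts)   # 307200 15360 8 25
```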
Table 3.2
Relationship of FFT Size and Sampling Frequency Used with the Transmission Bandwidth in
LTE OFDM Modulation and Demodulation Processes
Transmission BW (MHz) 1.4 3 5 10 15 20
No. of Resource Blocks (PRBs) 6 15 25 50 75 100
No. of Subcarriers 72 180 300 600 900 1200
FFT Size 128 256 512 1024 1536 2048
Effective Sampling Frequency (MHz) 1.92 3.84 7.68 15.36 23.04 30.72
over a time duration equal to the channel impulse response (maximum delay
spread) is called adding a cyclic prefix to each OFDM symbol waveform at the
transmitter. The basic idea is to replicate part of the OFDM symbol waveform
from the back to the front and make the OFDM symbol look cyclic, that is,
periodic over the observation time of the channel’s worst-case delay spread in
the intended environment. Thus, by repeating the signal cyclically, we make the
linear convolution that actually takes place look circular. Since the addition of
cyclic prefix consumes bandwidth (i.e., reduces the data rate), its length should
be minimized and set to no more than the expected worst-case delay spread of
the multipath channel in the operating environment. The 3GPP has defined
three different size categories for the cyclic prefix and they are referred to as the
normal, long, and the extended cyclic prefixes whose lengths and applications
were discussed in Section 3.2. It is important to note that the function of the
cyclic prefix as explained above is wider than the guard band used in traditional
TDMA or CDMA systems, which simply compensates for intersymbol and
intercarrier interference. In traditional systems, a guard band or alternatively
special pulse shape filtering may be used to prevent adjacent symbol overlap-
ping at either symbol or chip rate sampling points and hence prevent signal-
to-noise degradation caused by intersymbol interference (ISI). The cyclic prefix
in OFDM systems completely removes any ISI while it also turns the linear
convolution that takes place between the channel impulse response and the
modulation symbol blocks into a circular convolution.
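The following minimal NumPy sketch (an illustrative example, not the book's implementation; the FFT size, CP length, and channel taps are assumed) shows how prepending a cyclic prefix longer than the channel impulse response makes the channel act as a circular convolution, so that a single complex tap per subcarrier recovers the transmitted symbols exactly in the noiseless case.

import numpy as np

rng = np.random.default_rng(0)
N, cp = 64, 8                              # FFT size and CP length (illustrative)
h = np.array([1.0, 0.5, 0.25])             # toy multipath channel, shorter than CP

# Random QPSK symbols on the N subcarriers
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(X)                         # time-domain OFDM symbol
tx = np.concatenate([x[-cp:], x])          # prepend the cyclic prefix

rx = np.convolve(tx, h)                    # linear convolution with the channel
rx = rx[cp:cp + N]                         # receiver discards the CP samples

Y = np.fft.fft(rx)                         # back to the frequency domain
H = np.fft.fft(h, N)                       # channel frequency response
X_hat = Y / H                              # one-tap equalization per subcarrier

print(np.allclose(X_hat, X))               # True: symbols recovered without ISI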
With the cyclic prefix added, the symbols are converted into a continuous
signal using two digital-to-analog converter (DAC) modules for the real and
imaginary parts of the signal. The continuous signal is then upconverted to the
allocated carrier frequency by a local oscillator for transmission over the air. The
conceptual block diagram for the transmission process is shown in Figure 3.5.
Figure 3.5 The conceptual block diagram for the DL OFDM transmitter (in eNodeB).
µs and is flat over the narrow modulated subcarrier width of 15 kHz. With this
consideration, the channel frequency response for subcarriers and at symbol
positions containing the reference symbols which are transmitted on every sixth
subcarrier twice per transmission slot (of 0.5 ms) within each PRB is obtained
directly. Then interpolation is used to estimate the channel frequency response
on the remaining subcarriers and symbol positions in the PRB. The conceptual
block diagram for the OFDM receiver within the UE is shown in Figure 3.6.
Figure 3.6 Conceptual block diagram of the OFDM receiver within the UE.
receiver frequency tracking loops are critical factors to the proper operation of
the OFDM system. In fact, 3GPP specifications [8] have set frequency stability
requirements of 0.1 ppm error for the UE modulation frequency observed over
a period of one subframe (1 ms). Furthermore, to prevent the transmitter and receiver local oscillators from drifting apart over time, the base station periodically sends synchronization signals (discussed in previous sections), which are used by the UE to track the transmitter frequency.
transmission over the air. The DFT processing is the fundamental difference
between SC-FDMA and OFDMA signal generation, as seen from a comparison of the block diagrams in Figures 3.5 and 3.7. The signal thus formed for transmission will occupy the same bandwidth as the DL OFDMA but results in a single-carrier-like signal rather than the composite multicarrier signal of the DL OFDMA, hence the name SC-FDMA. As a result, the uplink signal will have much lower amplitude variation and a lower PAPR. The
spectral representation of the user symbol sequences will have a much lower
variation than the random symbol sequence itself used to modulate the subcar-
riers as in the DL OFDM. Therefore, the SC-FDMA time waveform obtained
in the inverse DFT will also experience a much smaller variation and hence
result in lower PAPR. The analysis provided in [9] has shown that the LTE UE
power amplifier can be operated about 2 dB closer to the 1-dB compression
point than would be possible if OFDMA were used on the uplink.
Since the subcarriers are basically used here to spread (or shift) the fre-
quency-domain representation of the time-domain symbols, the name discrete
Fourier transform spread OFDM (DFT-SOFDM) is also used for SC-FDMA.
Figure 3.7 Conceptual block diagram for the UL SC-FDMA transmitter (in UE).
The L discrete Fourier terms at the output of the DFT block are then used as
the modulators of the assigned L subcarriers before being converted back into
the time domain using the IFFT process. An N-point IFFT where N > L (L be-
ing the number of symbols or subcarriers), is performed as in OFDM, followed
by addition of the cyclic prefix. By choosing N larger than the maximum num-
ber of occupied subcarriers, we obtain efficient oversampling and sinc (sin(x)/x)
pulse-shaping. There are two cyclic-prefix lengths defined on the uplink. These
are referred to as the normal CP and extended cyclic prefix corresponding to
seven and six SC-FDMA symbols per slot. The cyclic prefix provides OFDMA’s
fundamental robustness against multipath. The extended CP is beneficial for
deployments with large channel delay-spread characteristics, and for large cells.
SC-FDMA uses either contiguous subcarrier tones or uniformly spaced tones
(distributed). In LTE, localized subcarrier mapping is used. This decision was based on the consideration that the
localized mapping would make it possible to exploit frequency-selective gain via
channel-dependent scheduling (assigning uplink frequencies to UE based on
favorable propagation conditions).
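The sketch below (illustrative only; the IFFT size, number of occupied tones, and QPSK data are assumptions, not values from the book) generates one SC-FDMA symbol by DFT-spreading the data before the subcarrier mapping and IFFT, and compares its PAPR against plain OFDMA carrying the same data.

import numpy as np

rng = np.random.default_rng(1)
N, L = 512, 120                                   # IFFT size and occupied tones

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband waveform, in dB."""
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

d = (rng.choice([-1, 1], L) + 1j * rng.choice([-1, 1], L)) / np.sqrt(2)

# OFDMA: data symbols modulate the L subcarriers directly (localized mapping)
X_ofdma = np.zeros(N, complex); X_ofdma[:L] = d
ofdma = np.fft.ifft(X_ofdma)

# SC-FDMA: an L-point DFT spreads the symbols before the subcarrier mapping
X_scfdma = np.zeros(N, complex); X_scfdma[:L] = np.fft.fft(d) / np.sqrt(L)
scfdma = np.fft.ifft(X_scfdma)

print(f"OFDMA PAPR:   {papr_db(ofdma):.1f} dB")
print(f"SC-FDMA PAPR: {papr_db(scfdma):.1f} dB")  # typically a few dB lower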
The receiver functions within the eNodeB are basically the reverse of the
processing performed within the transmitter and is conceptually illustrated in
Figure 3.8. After time and frequency synchronization is achieved with the trans-
mitter, a number of samples corresponding to the length of the CP are removed, such that only the intersymbol-interference-free block of samples is passed to the DFT. Note that frequency domain (FD) equalization is performed after transforming the received baseband signal into the frequency domain through the FFT block, using the reference symbols for channel estimation.
Figure 3.8 Conceptual block diagram of the UL SC-FDMA receiver within the eNodeB.
data is transferred over the radio interface such as the channel modulation, cod-
ing scheme, and antenna mapping. The number of transport channels has been
reduced compared to UTRAN since LTE does not use dedicated channels for
specific UEs. The transport channels organize the data into transport blocks
with a TTI of 1 ms (that is, the duration of the LTE subframe). The TTI also defines the minimum time interval over which link adaptation is performed and scheduling decisions for transmission to different UEs are carried out.
The logical channels provide the interface between the MAC and the
RLC protocol sublayers. The MAC uses the logical channels to provide services
to the RLC. Thus, the logical channels are the SAPs between MAC and RLC
sublayers. A logical channel is defined by the type of information that it carries,
traffic or control data and system broadcasts. A logical channel is identified by a
channel ID, which is a field within the MAC header. This ID is used for multi-
plexing the logical channels within the transport channels and specifies to what
higher layer entity the information should be transmitted. The physical chan-
nels and the details of their mappings to resource elements are discussed in [5],
and the transport and logical channels and the mappings are explained in [13].
The following sections provide a short description of the various channel types, their formats, and their mappings to the different protocol layers.
the center frequency as possible. A PBCH message is repeated every 40 ms, that
is, with a transmit time interval of four LTE frames. The PBCH transmissions
consist of 14 information bits, 10 spare bits, and 16 CRC bits.
The physical multicast channel (PMCH) carries multicast/broadcast in-
formation for the MBMS service and uses the same modulation formats as
the PDSCH. The PMCH is transmitted in the MBSFN region of an MBSFN
subframe.
The physical control format indicator channel (PCFICH) informs the
UE about the format of the data being received. It indicates the number of
OFDM symbols used for the PDCCHs, which can range from 1 to 3 and
are located within the first three OFDM symbols of the 1-ms subframes. The
number of symbols setting will impact the available capacity and therefore it
is a parameter that needs proper setting or may be dynamically adjusted in a
SON implementation. The PCFICH is transmitted on the first symbol of every
subframe and carries a Control Format Indicator (CFI) field. The CFI contains
a 32-bit code word that represents one of the numbers 1, 2, or 3. A CFI of 4
is reserved for possible future use. The channel uses the robust QPSK modulation, with the 2-bit CFI value protected by a highly redundant block code to form the 32-bit code word.
The physical downlink control channel (PDCCH) carries downlink control information (DCI), which consists of uplink power control commands, downlink resource scheduling, uplink resource grants, and the indications for paging or system information. The DCI format consists of several different types, which
are defined with different sizes. The different format types include: Type 0,
1, 1A, 1B, 1C, 1D, 2, 2A, 2B, 2C, 3, 3A, and 4. Type 0 contains scheduling
grants for the mobile’s uplink transmissions. DCI types 1 to 1D and 2 to 2A
are used for scheduling commands for downlink transmissions. The DCI type
1 schedules data that the base station will transmit using one antenna, open
loop diversity, or beam forming for mobiles that already have been configured
into one of the downlink transmission modes 1, 2, or 7. Further information
on DCI types and formats is given in [5].
The physical hybrid ARQ indicator channel (PHICH) carries a 1-bit hybrid automatic repeat request (HARQ) indicator reporting the uplink HARQ status, with a 1 for ACK and a 0 for NACK. The PHICH is transmitted within the control region
of the subframe.
All the control signaling transmitted on the PDCCH, the PCFICH, and
the PHICH are located in the first n OFDM symbols within each subframe
with n ≤ 3.
The transport channels on the downlink consist of the following:
The broadcast channel (BCH) maps to the PBCH. All UEs within the
cell must receive BCH information error-free. The reception of the BCH is
mandatory for accessing any service of a cell.
The downlink shared channel (DL-SCH) maps to the PDSCH and is
used for transmitting the downlink traffic and the higher layer control-plane
information and hence supports both the logical control and traffic channels. It
supports adaptive modulation and coding and various transmission modes to make efficient use of the radio channel under varying conditions. It also supports the
DRX operation, explained in Chapter 6.
The paging channel (PCH) maps to dynamically allocated resources on the PDSCH, indicated on the PDCCH via its own identifier (P-RNTI), and must be received within the
entire cell coverage area. The PCH supports DRX in order to increase the bat-
tery operating life cycle.
The multicast channel (MCH) maps to the PMCH and is used to trans-
mit the same information from multiple synchronized base stations to mul-
tiple UEs. MCH transmissions occur in subframes configured by the upper layers for MCCH or MTCH transmission. For each such subframe, the upper layers indicate whether the signaling modulation and coding scheme (MCS) or the data MCS applies. The transmission of an MCH occurs in a set of subframes defined by the
PMCH-Configuration.
The transport channels on the uplink consist of the following:
The uplink shared channel (UL-SCH) maps to the PUSCH, is the uplink counterpart of the DL-SCH, and supports the same basic functions such as adaptive modulation and coding, HARQ, and spatial multiplexing.
The random access channel (RACH) maps to the PRACH and is used by the UE for initial access when it is not yet synchronized with the network. It supports
both the collision-based and collision-free modes as will be discussed in later
sections.
The MAC PDU consists of a MAC header, zero or more MAC Service Data
Units (MAC SDU), zero or more MAC control elements, and optionally pad-
ding. Both the MAC header and the MAC SDUs are of variable sizes. The
MAC header contains a logical channel ID field (LCID), which identifies the
logical channel instance of the corresponding MAC SDU or the type of the
corresponding MAC control element or padding for the DL-SCH, UL-SCH,
or MCH. The MAC header size can range from 2 to 3 bytes depending on
whether a 7-bit or 15-bit length field is used. The MAC controls what to send
at a given time and a number of other functions. These include the following:
The HARQ is similar to the one used in HSDPA and performs continuous transmissions; instead of using a status message containing a sequence number, it uses a single-bit HARQ feedback (ACK/NACK) with a fixed timing relation to the transmission it acknowledges.
The PDCP uses the services provided by the RLC sublayer. The PDCP
header size can range from 1 to 3 bytes depending on the length of the sequence
number. There is one PDCP instance per radio bearer. The radio bearer is simi-
lar to a logical channel for user and control data.
The header compression is particularly important for VoIP, which is a
critical application in LTE. Since there is no circuit switching in LTE, all voice
signals must be carried over IP and there is a need for efficiency. Various stan-
dards are specified for use in robust header compression (ROHC), which provides tremendous savings in the amount of header overhead that would otherwise have to go over the air. These protocols are designed to work with the packet loss
that is typical in wireless networks with higher error rates and longer round-
trip time. There are multiple header compression algorithms, called profiles,
defined for the ROHC framework. Each profile is specific to the particular net-
work layer, transport layer, or upper layer protocol combination (e.g., TCP/IP
and RTP/UDP/IP), with the details given in RFCs as referenced in the 3GPP
specifications [15].
The ciphering including encryption and decryption has to occur below
the ROHC because the ROHC can only operate on unencrypted packets. It
cannot understand an encrypted header. The ciphering protects user plane data,
radio resource control (RRC) data and non-access stratum (NAS) data. The
ciphering algorithm and key to be used by the PDCP entity are configured by
upper layers [16] and the ciphering method is applied as specified in [17].
terminals and equipment do not support 64 QAM on uplink and 256 QAM
on downlink. The QAM-type modulations used in LTE provide higher spectral efficiencies compared to the constant-envelope schemes such as the Gaussian minimum shift keying (GMSK) and PSK modulations used by single-carrier systems, as in WCDMA, in which the signal amplitude remains constant. On the downside, the higher PAPR of the QAM modulation results in a higher dynamic range requirement for the A/D and D/A converters and, even worse, reduces the efficiency of the transmitter RF power amplifier (RFPA).
However, this drawback would happen in any case in a multicarrier OFDMA
system such as LTE due to the fact that the OFDMA symbol is made up of a
combination of many subcarriers in which signals can add when in phase and
attenuate each other when out of phase, resulting in high PAPR. The MCSs with the lowest modulation order, such as QPSK, result in lower data rates but require less SINR to operate and hence allow a larger coverage area and opera-
tion in poor channel conditions. The selection of the modulation scheme from
the specified set takes place adaptively under the control of eNodeB based on
measurements it collects on the UL and the CQI (channel quality indicator)
signaled by the UE for the DL. It is therefore important that the channel measurements are processed and reported properly by the receivers and acted on correctly by the transmitters at each end for efficient data transmission in ac-
cordance with channel conditions. The CQI measurements are a measure of the
SINR of the channel and indicate the downlink channel quality to the eNodeB,
whereas the eNodeB makes its own estimate of the supportable uplink data rate
directly from UE demodulation reference symbols or otherwise from channel
sounding reference symbols. The CQI, which conveys the downlink channel
quality to the eNodeB, also incorporates the quality of the UE’s receiver. A
UE with a better receiver can report better CQI for the same downlink chan-
nel conditions and thus receive downlink data with a higher MCS order. The
standard proposes to use 15 different CQI levels to indicate the channel quality
and SINR. The mobile can report the CQI to the base station either periodi-
cally or aperiodically. The periodic reporting is carried out at regular intervals
between 2 and 160 ms for the CQI (and PMI) and are up to 32 times greater
for the channel rank indicator (RI). The information is usually transmitted on
the PUCCH, but otherwise is carried on the PUSCH if the mobile is sending
uplink data in the same subframe. The maximum number of bits in each pe-
riodic report is 11, to accommodate the lower data rate that is available on the
PUCCH. The aperiodic reporting is carried on the PUSCH when transmitting
user data that are requested using a field in the mobile’s scheduling grant. If
both types of reporting are scheduled in the same subframe, then the aperiodic
report takes the priority. Since the channel fading may vary across the subcar-
riers within the assigned system bandwidth, the base station can configure the
mobile to report the CQI via:
In (3), the mobile selects the subbands that have the best channel quality
and reports their locations, together with one CQI that spans them and a sepa-
rate wideband CQI. If the mobile is receiving more than one transport block,
then it can also report a different CQI value for each to reflect the fact that dif-
ferent layers can reach the mobile with different values of the SINR. The base
station uses the received CQI to select the modulation scheme and coding rate
and for frequency-dependent scheduling. However, the base station only uses
one frequency-independent modulation scheme and coding rate per transport
block for transmitting the downlink data.
The channel coding scheme for transport blocks (i.e., on traffic channels)
in LTE is Turbo coding with a coding rate of R = 1/3, two 8-state constitu-
ent encoders, and a contention-free quadratic permutation polynomial (QPP)
Turbo code internal interleaver. Trellis termination is used for the turbo coding.
The Turbo-coder internal interleaver used is only defined for a limited number
of code-block sizes, with a maximum block size of 6,144 bits. If the transport
block, including the transport-block CRC, exceeds this maximum code-block
size, code-block segmentation is applied before the Turbo coding.
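A simplified sketch of this segmentation rule, elaborated in the following paragraph, is given below; the exact filler-bit handling and block-size selection of 3GPP TS 36.212 are omitted, the function name is ours, and the example transport block sizes are arbitrary.

import math

MAX_CB = 6144   # maximum Turbo code block size in bits
CB_CRC = 24     # per-code-block CRC added when segmentation is used

def num_code_blocks(tb_with_crc_bits):
    """Number of code blocks for a transport block (incl. its 24-bit TB CRC)."""
    if tb_with_crc_bits <= MAX_CB:
        return 1
    # each segment must fit within MAX_CB including its own 24-bit CRC
    return math.ceil(tb_with_crc_bits / (MAX_CB - CB_CRC))

print(num_code_blocks(6144))    # 1  -> fits in a single code block
print(num_code_blocks(75376))   # 13 -> a large transport block is segmented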
Code-block segmentation means that the transport block is segmented
into smaller code blocks, the sizes of which should match the set of code-block
sizes supported by the Turbo coder. Therefore, before the Turbo coding, the
transport blocks are segmented into byte-aligned segments with a maximum
information block size of 6,144 bits where 24-bit CRC error detection is also
used [2]. The CRC allows for the receiver side to detect errors in the decoded
transport block. The error indication can, for example, be used by the downlink
HARQ protocol as a trigger for requesting retransmissions. The Turbo coder
adds redundancy bits to the data, which enables some level of error correc-
tion. The basic coding rate is 1/3 where one information bit is encoded into
3 bits, but a wide range of other coding rates is achieved from the adaptive
changing of the coding operation. The coding rate is a trade-off between error
protection and data transfer efficiency. This is realized through puncturing the
native coded bit stream to a higher code rate (less protection) or by repeating
coded bits if a smaller code rate is desired to provide more protection based
on reported CQIs. The code rate is the ratio of the source data to the resulting
protected data. However, the overall coding rate achieved defines how many
data bits are present out of every 1,024 bits after puncturing, repetition, and
rate matching. There is normally an effective coding rate defined that is the
Table 3.3
Modulation and TBS Index Table for
PDSCH
MCS Index (IMCS) | Modulation Order (Qm) | TBS Index (ITBS)
0 2 0
1 2 1
2 2 2
3 2 3
4 2 4
5 2 5
6 2 6
7 2 7
8 2 8
9 2 9
10 4 9
11 4 10
12 4 11
13 4 12
14 4 13
15 4 14
16 4 15
17 6 15
18 6 16
19 6 17
20 6 18
21 6 19
22 6 20
23 6 21
24 6 22
25 6 23
26 6 24
27 6 25
28 6 26
29 2 Reserved
30 4 Reserved
31 6 Reserved
Source: [18]. (© 2008. 3GPP™ TSs and TRs are
the property of ARIB, ATIS, CCSA, ETSI, TTA,
and TTC.)
ted, depending on the number of PRBs assigned, using a second much longer
table, Table 7.1.7.2.1-1 of [18]. The 3GPP table numbers for these mappings
in the case of PDSCHs and PUSCHs are provided in Table 3.4.
Table 3.4
The Mapping Tables in 3GPP TS36.213 for
Determination of TBS Transmitted
Mapping | Table Number in 3GPP TS 36.213 [18]
MCS index to MCS order and TBS index on DL | Table 7.1.7.1-1
TBS index and number of PRBs to TBS on DL and UL | Table 7.1.7.2.1-1
MCS index to MCS order and TBS index on UL | Table 8.6.1-1
where TBS is the transport block size as calculated from Table 7.1.7.2.1-1 of
3GPP TS 36.213 [18], CRC is the cyclic redundancy check, that is, the num-
ber of bits appended for error detection (24 bits), NRE is the number of useful
resource elements (that is excluding reference and control channel symbols)
contained in the assigned PDSCH or PUSCH, Bits per RE is 2 for QPSK, 4 for
16 QAM, and 6 for 64 QAM. NRE is obtained as the number of useful REs within each PRB multiplied by the number of PRBs assigned for the connection, where the number of useful REs per PRB for two transmit antennas (using one for a transmit diversity scheme, for example), as seen from Figure 3.12, is calculated as:
Number of useful REs per PRB for two transmit antennas = 12 * 14 − 3 * 12 − (16 − 4) = 120
It is noted that symbols 0 through 2 in each 1-ms subframe of a PRB are
assumed to be occupied by the PDCCHs (excluding those used for reference
symbols).
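The short sketch below (ours, not the book's) reproduces this resource-element bookkeeping for two transmit antennas and three PDCCH symbols per subframe, and uses it both for the effective code rate described above and for an approximate PDSCH data rate; the helper names and the example transport block size are illustrative assumptions.

def useful_res_per_prb(pdcch_symbols=3, rs_per_prb=16, rs_in_control_region=4):
    """Useful (data) REs per PRB pair per subframe for two Tx antenna ports."""
    total = 12 * 14                              # 12 subcarriers x 14 symbols
    control = pdcch_symbols * 12                 # REs taken by the control region
    return total - control - (rs_per_prb - rs_in_control_region)

def effective_code_rate(tbs_bits, n_prb, bits_per_re, crc_bits=24):
    """(TBS + CRC) / (useful REs x bits per RE) over the assigned PRBs."""
    n_re = useful_res_per_prb() * n_prb
    return (tbs_bits + crc_bits) / (n_re * bits_per_re)

def approx_rate_mbps(n_prb, bits_per_re, coding_rate):
    """Useful REs per ms x bits per RE x code rate, in Mbps."""
    return useful_res_per_prb() * n_prb * bits_per_re * coding_rate / 1000.0

print(useful_res_per_prb())                      # 120, as computed above
print(round(effective_code_rate(36696, 100, 6), 2))   # ~0.51 for an example TBS
print(round(approx_rate_mbps(100, 6, 0.94), 1))  # ~67.7 Mbps: 20 MHz, CQI 15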
is kept below 10%. The UE calculates the CQI based on its measurements on
the received signal and transmits the corresponding index to the eNodeB. The
CQI in LTE (as in HSDPA) does not reflect the SINR only. The measurement
corresponds to the highest MCS allowing the UE to decode the transport block
with error rate probability not exceeding 0.1. For the mapping of the CQI
to a proper MCS (indicated by the MCS index), each vendor may develop a
scheduling algorithm that takes the CQI value, the ACK/NACK rate, the data
to transmit, the UE category, the available network resources (such as PRBs),
and any other applicable factors, in order to provide efficient trade-offs between
spectral efficiency and the consumed resources (power, PRBs). The criteria used
by a vendor may not be equally important to every operator depending on what
their priorities are. Therefore, the vendor may incorporate weighting for each
criterion within implemented algorithms to provide the flexibility to meet dif-
ferent operators’ priorities. This is one instance where the standards leave room
for product differentiation.
However, for the purpose of presenting examples of achievable data rates,
we have provided in Table 3.5 two different recommendations for mapping the
CQI to a modulation order and an effective coding rate. One is based on 3GPP
TS 36.213 [18] and the other is taken from [19] where the results are derived
Table 3.5
CQI to MCS Mapping
CQI | Modulation Order (3GPP TS 36.213) | Coding Rate (3GPP TS 36.213) | SINR, dB (Typical from Vendors) | Modulation Order (Proposed in [19]) | Coding Rate (Proposed in [19])
1 QPSK 0.078 −4.46 QPSK 0.096
2 QPSK 0.12 −3.75 QPSK 0.096
3 QPSK 0.19 −2.55 QPSK 0.24
4 QPSK 0.30 −1.15 QPSK 0.24
5 QPSK 0.44 1.75 QPSK 0.37
6 QPSK 0.60 3.65 QPSK 0.438
7 16QAM 0.37 5.2 QPSK 0.438
8 16QAM 0.49 6.1 QPSK 0.438
9 16QAM 0.61 7.55 16QAM 0.369
10 64QAM 0.46 10.85 16QAM 0.42
11 64QAM 0.56 11.55 16QAM 0.47
12 64QAM 0.66 12.75 16QAM 0.54
13 64QAM 0.77 14.55 16QAM 0.54
14 64QAM 0.87 18.15 64QAM 0.45
15 64QAM 0.94 19.25 64QAM 0.45
Source: [18, 19]. (© 2008. 3GPP™ TSs and TRs are the property of ARIB, ATIS, CCSA, ETSI,
TTA, and TTC.)
Table 3.6
Calculated PDSCH Data Rates in Megabits per Second Excluding Overheads from Reference Symbols, Control Channels, and Coding (Based on the 3GPP CQI to MCS Mapping)
CQI | Modulation Order | Coding Rate | Data Rate (Mbps) at Transmission Bandwidths of 1.4, 3, 5, 10, 15, and 20 MHz
1 QPSK 0.078 0.08832 0.2568 0.444 0.912 1.38 1.848
2 QPSK 0.12 0.1488 0.408 0.696 1.416 2.136 2.856
3 QPSK 0.19 0.2496 0.66 1.116 2.256 3.396 4.536
4 QPSK 0.3 0.408 1.056 1.776 3.576 5.376 7.176
5 QPSK 0.44 0.6096 1.56 2.616 5.256 7.896 10.536
6 QPSK 0.6 0.84 2.136 3.576 7.176 10.776 14.376
7 16QAM 0.37 1.0416 2.64 4.416 8.856 13.296 17.736
8 16QAM 0.49 1.3872 3.504 5.856 11.736 17.616 23.496
9 16QAM 0.61 1.7328 4.368 7.296 14.616 21.936 29.256
10 64QAM 0.46 1.9632 4.944 8.256 16.536 24.816 33.096
11 64QAM 0.56 2.3952 6.024 10.056 20.136 30.216 40.296
12 64QAM 0.66 2.8272 7.104 11.856 23.736 35.616 47.496
13 64QAM 0.77 3.3024 8.292 13.836 27.696 41.556 55.416
14 64QAM 0.87 3.7344 9.372 15.636 31.296 46.956 62.616
15 64QAM 0.94 4.0368 10.128 16.896 33.816 50.736 67.656
C = w * log2(1 + SNR) (3.4)

in which C is the maximum theoretical bit rate in megabits per second, w is the effective channel bandwidth in megahertz (i.e., 6 * 12 * 0.015 MHz for the 1.4-MHz case), and SNR is the signal power to noise power ratio at the input to the receiver in linear scale. We will assume that the interference can be represented as white Gaussian noise and added to the thermal noise, and hence the SNR in the above formula becomes the SINR (signal power to noise-plus-interference power ratio).
To estimate the theoretical bit rates, we will need the SINR values that
would represent the channel conditions defined by each CQI value. The SINR
must reflect the actual eNodeB transmit configuration such as transmitter di-
versity and whatever will result in the effective SINR at the input to the receiver
(i.e., the UE) for inputting to the Shannon capacity formula. For this study, we
will use SINR values typically reported by vendors for a site with Tx diversity
configuration, which is what is normally the minimum case on which the data
derived previously were based. These CQI to SINR mappings were also given
before in Table 3.5.
Substituting (3.4) for various system transmission bandwidths, and tabu-
lating the results, we obtain Table 3.7, the contents of which are also presented
in Figures 3.10 and 3.11.
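The following sketch evaluates (3.4) for a given number of PRBs and a CQI-mapped SINR; the function name is ours, and the SINR values used below are the vendor-typical figures listed in Table 3.5.

import math

def shannon_capacity_mbps(n_prb, sinr_db):
    """Shannon bound C = w*log2(1+SINR), w = n_prb x 12 x 15 kHz."""
    w_mhz = n_prb * 12 * 0.015                 # effective bandwidth in MHz
    sinr = 10 ** (sinr_db / 10.0)              # dB -> linear
    return w_mhz * math.log2(1 + sinr)

# 20 MHz (100 PRBs) at the SINR mapped to CQI 15 (19.25 dB)
print(round(shannon_capacity_mbps(100, 19.25), 1))   # ~115.4 Mbps (Table 3.7)
# 1.4 MHz (6 PRBs) at the SINR mapped to CQI 1 (-4.46 dB)
print(round(shannon_capacity_mbps(6, -4.46), 3))     # ~0.477 Mbps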
The actual data rates presented in Table 3.6 are considerably smaller
than the theoretical channel capacity values. This is expected since, as shown
in Chapter 4, at least 25% of the channel capacity is taken by the physical
layer overheads due to transmission of reference symbols, and the control chan-
nels. However, there still remains some significant difference between the chan-
nel full capacity and the actual achieved data rate even if we assume that the
channel overheads from reference symbols and the control signaling take up
to 28% of the channel capacity. For instance, from the tabulated results, it
can be seen that the percentage differences between the full channel capacity
reduced by the physical layer overhead of 28% and the actual data rate under
Table 3.7
Theoretical Maximum Channel Capacities in Megabits per Second Versus Channel
Quality (Includes All Overheads from Reference Symbols, Control, Coding)
CQI | Channel Capacity (Mbps) at Transmission Bandwidths of 1.4, 3, 5, 10, 15, and 20 MHz
1 0.477 1.3192 1.987 3.974 5.961 7.949
2 0.548 1.371 2.284 4.569 6.853 9.137
3 0.689 1.722 2.870 5.740 8.610 11.480
4 0.887 2.218 3.697 7.394 11.092 14.789
5 1.425 3.563 5.939 11.878 17.817 23.756
6 1.868 4.671 7.785 15.570 23.356 31.141
7 2.277 5.692 9.487 18.973 28.460 37.946
8 2.531 6.326 10.544 21.088 31.631 42.175
9 2.961 7.403 12.338 24.675 37.013 49.350
10 4.016 10.039 16.732 33.465 50.197 66.929
11 4.249 10.623 17.705 35.410 53.115 70.819
12 4.655 11.637 19.395 38.791 58.186 77.582
13 5.274 13.184 21.974 43.948 65.922 87.897
14 6.535 16.338 27.230 54.461 81.691 108.922
15 6.925 17.312 28.853 57.706 86.559 115.412
Figure 3.10 Graphical display of data presented in Tables 3.6 and 3.7.
Figure 3.11 Graphical display of data presented in Tables 3.6 and 3.7.
channel condition of a CQI of 8 and 15 are around 25% and 12%, respectively.
That is, the data rates achieved are about 75% to 88% of the achievable theo-
retical capacity values. The differences may be explained by considering that
the channel assumed by the Shannon capacity formula is a Gaussian channel
that experiences white Gaussian noise and this is not always the case in cel-
lular mobile communication channels due to fading and interference patterns.
Added to that will be margins of error in practical implementation, limitations of non-ideal error-correcting codes, and possibly inaccuracies in the mapping
of the CQI to SINR. The considerably smaller differences at CQI of 15 may
be because the SINRs used for this condition (the CQI to SINR mapping) and
the choice of the detailed coding (MCS) more accurately represented the chan-
nel conditions and required modulation-coding. Nevertheless, these results show that with LTE and single-stream transmission, we can achieve a capacity within about 15% of the theoretical capacity of the allocated bandwidth, and
that reflects on the spectral efficiency of the multiple access and the modula-
tion coding schemes used. Our data shows that with 20-MHz, single stream
transmission and a CQI of 15, we achieve a spectral efficiency of 67.66/20 =
3.38 bps/Hz, and a data rate of 67.66 Mbps. This data rate is quadrupled to
around 270 Mbps if a MIMO 4 × 4 is used, which should be feasible given the
channel conditions stated (CQI of 15). Besides, Release 10 LTE-Advanced (see
Chapter 10) introduces transmission modes with up to 8 antennas allowing 8
× 8 MIMO in the downlink direction. This theoretically enables up to a factor of 8 increase in efficiency by transmitting 8 parallel data streams. The receiver can retrieve all eight individual data streams, achieving MIMO gain over spatially diverse multipath channels. UEs inform the base station with the rank indicator
about their DL reception conditions, so that the base station can apply the best
precoding of the DL MIMO signals.
It should be noted that what we have shown here as achievable peak rates
and peak spectral efficiency is on a channel level based on the channel modula-
tion and coding scheme. We have ignored the overall system interference im-
pact by estimating the peak performance based on the best channel quality
scenario (CQI = 15) and with a single user in mind. It may not be possible
to achieve these peak channel performances when several users and adjacent
cells share the same frequencies. Moreover, the actual capacity of a cell and the
spectral efficiencies achieved will depend also on the distribution of the users
within the cell as well as site antenna configurations and interference mitiga-
tion features used in the network. Obviously, if users are located in better radio
propagation spots within the cell, higher capacity is achieved. However, mul-
tiple simulation-based evaluations have been carried out for different scenarios, to provide a certain degree of diversity in the evaluations, with intersite distances of 500m and 1,732m, and the averaged results are reported in [20]. Compared to UTRA (HSPA), the results indicate that spectral efficiencies of up to 2 to 3
times in uplink and 3 to 4 times in downlink in the sense of bits per second per
hertz per cell are achievable.
Figure 3.12 Position of the reference symbols within the PRB. (Rx denotes the position for reference symbols corresponding to antenna ports 2 and 3, which are used alternately.)
of antenna ports. The precoding is intended to achieve the best possible signal
quality at the receiver. The precoding matrices for LTE support MIMO and
beamforming. Four codebook entries are used for 2 × 2 SU-MIMO and 16
entries for 4 × 4 SU-MIMO.
In addition to cell-specific reference symbols, LTE also defines UE specif-
ic reference symbols and MBSFN reference symbols. These are transmitted on
other antenna ports referred to as antenna ports 5 and 4, respectively, and their
existence is signaled to the UE by higher-layer signaling. The user specific refer-
ence symbols are used for channel estimation in the vendor-specific beamform-
ing based on multiple antennas. LTE Release 8 supports rank-1 precoding (or
beamforming) using predefined 3GPP codebook for 2 and 4 antennas and any
vendor-specific beamforming when using UE-specific reference signals (with an
arbitrary number of base station antennas). In beamforming, the transmissions
are directed to a specific UE for improved gain and reduced interference to us-
ers in neighboring cells.
In the downlink, there are nine different transmission modes, where TMs
1 to 7 were introduced in Release 8 of 3GPP standards which will be defined
below, TM8 was introduced in Release 9 for dual-layer beam forming on two
antenna arrays with different polarization, and TM9 was introduced in Release
10 (LTE-Advanced), which we will discuss in detail in Chapter 10. For the uplink, TM1 and TM2 are specified, where TM1, the default, was introduced in Release 8 and TM2 was introduced in Release 10. The different trans-
mission modes differ in the number of layers (also known as streams or ranks)
and the number of antenna ports used.
The RRC signaling is used to convey to the UE which transmission mode
to use. These modes are defined and discussed below, and we will cover the
MIMO spatial multiplexing operation in some detail subsequently.
in the same way as the reference signals that are used for single-layer
beamforming.
9. Transmission Mode 9. This mode was introduced in Release 10 LTE-
Advanced, and uses UE-specific RSs with up to eight spatial layers,
and is discussed further in Chapter 10.
and indicates the maximum number of layers that the mobile can successfully
receive. The rank indication is calculated jointly with the PMI, by choosing the
combination that maximizes the expected downlink data rate.
where the channel coefficients h00, h01, h10, and h11, referred to collectively as the channel coupling matrix, are estimated using the reference symbols for antenna ports 0 and 1. The applicability of the MIMO operation is largely dependent on the char-
acteristics of the channel and the receiver’s ability to allow the recovery of the
channel coupling matrix, which is highly impacted by noise, interference, and
channel correlation properties. Correlation in the channel matrix coefficients,
which can result from inadequate antenna spacing, common antenna polariza-
tion, and narrow angular spread created by the propagation environment, can
lead to an ill-conditioned matrix. This makes the system prone to errors and it
is highly sensitive to noise and interference. In an ill-conditioned matrix, small
errors in the coefficients may have a large detrimental effect on the solution
[13–24]. For that reason, minimization of round-off errors may require the use
of double or triple precision in order to reach sufficient accuracy in the solution.
However, the measurements of the matrix coefficients when using the known
reference symbols are also impacted by high levels of noise and interference, in
low SNR cases.
There are several techniques to quantify the channel matrix properties,
which include a key parameter referred to as the matrix condition number.
The condition number is formed by taking the ratio of the maximum to the
minimum singular values of the instantaneous channel matrix. Small values
for the condition number imply a well-conditioned channel matrix while large
values indicate an ill-conditioned channel matrix. For example, a condition
number close to the ideal value of 0 dB would imply perfect channel conditions
for the application of MIMO spatial multiplexing, while values greater than
10 dB would point to the need for a decibel per decibel improvement in the
relative SNR in order to properly realize the benefits of a MIMO system. With
a large channel matrix condition number, a small error in the received signal
may result in large errors in the recovered data. However, when the condition
number is small and approaching an ideal value of 0 dB, the system is no more
affected by noise than a traditional SISO system. The channel matrix condition
number can be calculated using vector signal analyzers available from vendors.
The condition number indicates how SNR needs to improve in order to allow
spatial multiplexing and is a useful static test that can be easily implemented by
the receiver designer. In reality, the SNR would vary for each transmission layer
in a MIMO system resulting in asymmetry between the layers at the receivers,
thus requiring higher SNRs for a MIMO system. It is possible to equalize the
layer performance at the receiver using a technique called precoding that is
included as part of the LTE specification [5]. The precoding cross-couples the
layers prior to transmission into the wireless channel with the goal of equalizing
the performance across the multiple receive antennas.
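A minimal example of this condition-number calculation is sketched below (the channel matrices are made up for illustration); it computes the condition number in decibels as 20·log10 of the ratio of the largest to the smallest singular value of the estimated 2 × 2 coupling matrix.

import numpy as np

def condition_number_db(H):
    """Channel-matrix condition number in dB (20*log10 of singular-value ratio)."""
    s = np.linalg.svd(H, compute_uv=False)      # singular values, descending
    return 20 * np.log10(s[0] / s[-1])

H_good = np.array([[1.0, 0.1], [0.1, -1.0]])    # nearly orthogonal columns
H_bad = np.array([[1.0, 0.95], [0.9, 0.88]])    # highly correlated columns

print(round(condition_number_db(H_good), 1))    # close to 0 dB: good for spatial multiplexing
print(round(condition_number_db(H_bad), 1))     # tens of dB: ill-conditioned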
Finally, we will note that to design a MIMO system with low correlation
between channel coefficients, widely spaced antenna elements or antenna elements that are cross-polarized are often used.
3.13.1 In the UE
In the physical layer, three key measurements are made in the UE [25]. These
consist of the following:
order to prevent outages caused by high interference situations, RSRQ was also
introduced for RRC_IDLE state in Release 9. This gives the network the option
to configure the UE to use RSRQ as a metric for performing cell reselection,
at least in the cases of cell reselection within E-UTRAN, from UTRAN FDD
to E-UTRAN and from GSM to E-UTRAN. The RSRQ and RSRP together
have been shown to be particularly beneficial for performing interfrequency
quality-based handover. The RSRQ is inherently a relative quantity which to
some extent eliminates absolute measurement errors and leads to better accu-
racy than is possible for RSRP. The RSRQ accuracy requirements are down to
−6 dB (based on a measurement bandwidth equivalent to the central 6 RBs),
where the measurement sampling rate is UE implementation-dependent. The
UE is required to measure RSRQ from the same number of intrafrequency and
interfrequency cells as for the RSRP.
used for the receiver interference power measurement. The reference point is
the Rx antenna connector.
For frame type 1, the timing advance (TA) is defined as the time difference
• eNodeB-RX is the eNB received timing of uplink radio frame #i, de-
fined by the first detected path in time. The reference point for eNodeB-
RX is the Rx antenna connector.
• eNodeB-TX is the eNodeB transmit timing of downlink radio frame #i.
The reference point for eNodeB-TX is the Tx antenna connector.
Table 3.8
UE Categories in Release 8
UE Category | Maximum DL Bit Rate (Mbps) | Maximum UL Bit Rate (Mbps) | Maximum Number of Layers Supported for Multiplexing on DL | 64 QAM Supported in UL
1 | 10.296 | 5.160 | 1 | No
2 | 51.024 | 25.456 | 2 | No
3 | 102.048 | 51.024 | 2 | No
4 | 150.752 | 51.024 | 2 | No
5 | 299.552 | 75.376 | 4 | Yes
3.14 UE Categories
The 3GPP specification on UE radio access capabilities, 3GPP TS 36.306 Release 8, specifies five categories of user equipment as given in Table 3.8.
Each category is specified by a number of downlink and uplink physical layer
parameter values as listed in the table. TS 36.101 specified one power class, UE
power class 3, that has a maximum output power of 23 dBm.
References
[1] 3GPP TS 36.300, “E-UTRA and E-UTRAN Overall Description; Stage 2, v8.0.0.”
[2] 3GPP TS 36.201, “LTE Physical Layer – General Description, v1.0.0.”
[3] 3GPP TS 36.212, “Multiplexing and Channel Coding.”
[4] 3GPP TR 25.814, “Physical Layer Aspects for Evolved Universal Terrestrial Radio Access.”
[5] 3GPP TS 36.211, v1.0.0, “Physical Channels and Modulation.”
[6] Smith, S. W., “The Discrete Fourier Transform,” Ch. 8 in The Scientist and Engineer’s
Guide to Digital Signal Processing, 2nd ed., San Diego, CA: California Technical Publish-
ing, 1999.
[7] Benvenuto, N., and S. Tomasin, “On the Comparison Between OFDM and Single Car-
rier Modulation with a DFE Using a Frequency Domain Feed Forward Filter,” IEEE
Trans. on Communication, Vol. 50, No. 6, June 2002, pp. 947–955.
[8] 3GPP TS 36.803, “UE Radio Transmission and Reception.”
[9] Van Nee, R., and R. Prasad, OFDM for Wireless Multimedia Communications, Norwood,
MA: Artech House, 2000.
[10] Chase, D., “Code Combining – A Maximum Likelihood Decoding Approach for
Combining an Arbitrary Number of Noisy Packets,” IEEE Trans. on Communications,
Vol. 33, May 1985, pp. 385–393.
[11] Sandrasegaran, K., et al., “Analysis of Hybrid ARQ in 3GPP LTE Systems,” 16th Asia-
Pacific Conference on Communications (APCC), November 2011, pp. 418–423.
[12] Larmo, A., et al., “The LTE Link-Layer Design,” IEEE Communications Magazine, April
2009.
[13] 3GPP TS 36.321, V12.0.0, “Medium Access Control (MAC) Protocol Specification,
Release 2013-12.”
[14] 3GPP TS 36.322, “Radio Link Control Protocol (RLC) Specifications, V12.0.0,” 2014.
[15] 3GPP TS 36.323, “Packet Data Convergence Protocol (PDCP) Specification, V12.0.0,”
2014.
[16] 3GPP TS 36.331, “Evolved Universal Terrestrial Radio Access (E-UTRA) Radio Resource
Control (RRC); Protocol Specification.”
[17] 3GPP TS 33.401, “3GPP System Architecture Evolution: Security Architecture.”
[18] 3GPP TS 36.213, “Physical Layer Procedures,” Release 8.
[19] Salman, M. I., et al., “CQI-MCS Mapping for Green LTE Downlink Transmission,”
Proceedings of the Asia-Pacific Advanced Network, Vol. 36, 2013, pp. 74–82.
[20] 3GPP TR 25.912, “Feasibility Study for Evolved Universal Terrestrial Radio Access
(UTRA) and Universal Terrestrial Radio Access Network (UTRAN), Release 11,” 2012.
[21] Alamouti, S. M., “A Simple Transmit Diversity Technique for Wireless Communications,”
IEEE Journal on Selected Areas in Communications, Vol. 16, No. 8, October 1998, pp.
1451–1458.
[22] Gesbert, D., et al., “From Theory to Practice: An Overview of Space-Time Coded MIMO
Wireless Systems,” IEEE Journal on Selected Areas in Communications, April 2003, special
issue on MIMO systems.
[23] Agilent Application Note, “MIMO Channel Modeling and Emulation Test Challenges,”
Literature Number 5989-8973EN, October 2008.
[24] Kreyszig, E., Advanced Engineering Mathematics, 6th Edition, Englewood Cliffs, NJ:
Prentice Hall 1988, pp. 1025–1026.
[25] 3GPP TS 36.214, “Physical Layer-Measurements, V8.7.0,” 2009.
[26] 3GPP TS 36.459, “Evolved Universal Terrestrial Radio Access (E-UTRA); SLm
Application Protocol (SLmAP).”
[27] 3GPP TS 36.111, “Evolved Universal Terrestrial Radio Access (E-UTRA); Location
Measurement Unit (LMU) Performance Specification; Network Based Positioning
Systems in E-UTRAN.”
4
Coverage-Capacity Planning and
Analysis
Generally, the first step in the process of planning a network is the coverage
planning. This gives an estimate of the resources needed to provide adequate
SINR between the user equipment and the radio access node (eNodeB) for
service deployment in the area without detailed capacity concerns. Coverage
planning uses link budgeting for the DL and UL radio links to calculate the
maximum allowed path loss based on the required SINRs to achieve certain
operator specified bit rates (throughputs) at the cell edge with a certain amount
of PRBs (radio transmission resources). This process assumes an overall level of interference load caused by the traffic, based on the expected capacity requirement of the network. The maximum allowed path loss is then converted into a cell radius using an appropriate semi-empirical propagation model for the area. With a rough estimate of the number of cells and sites obtained in this way, subsequent checking is made to see whether, with the given site density, the network can handle the specified traffic load or new sites need to be added. The
main indicator for assessment of the system capacity in LTE is the SINR distri-
bution within the cell obtained through system level simulation. The SINR dis-
tributions are mapped into supported bit rates at each simulated point (pixel).
The system-level simulation will take into account realistic terrain geometries for signal propagation and loss assessment, as well as the antenna configurations, interference levels from neighboring cells, supported MCSs, and vendor resource scheduler algorithms. Capacity-based site counts are then compared
with the results from coverage analysis, and the greater of the two is selected as
the final required site count to provide adequate coverage and system capacity.
Nf,B = 10 * log[Nf,TMA + (Nf,RU * Lf − 1)/GTMA] (4.1)
expressed in decibels, in which all the quantities in the bracket must be convert-
ed from decibels to linear scale first before the computation. Note that the base
station noise figure calculated by (4.1) has already accounted for the feeder cable loss, Lf, and this loss should not be considered again in the UL link budgeting equation. The TMA is a low-noise amplifier (LNA), which is mounted
as close as practical to the antenna, and helps to reduce the radio receiver noise
figure and improves its overall sensitivity. The improved sensitivity enables
the base station to receive weaker signals.
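A small numerical sketch of (4.1) is given below; the function name and the example noise figures, gain, and feeder loss are illustrative assumptions, not vendor data.

import math

def db_to_lin(x):
    return 10 ** (x / 10.0)

def bs_noise_figure_db(nf_tma_db, gain_tma_db, feeder_loss_db, nf_ru_db):
    """Composite base-station noise figure with a TMA, per (4.1), in dB."""
    f_tma = db_to_lin(nf_tma_db)
    g_tma = db_to_lin(gain_tma_db)
    l_f = db_to_lin(feeder_loss_db)
    f_ru = db_to_lin(nf_ru_db)
    return 10 * math.log10(f_tma + (f_ru * l_f - 1) / g_tma)

# e.g., 2-dB TMA NF, 12-dB TMA gain, 3-dB feeder loss, 2.5-dB radio-unit NF
print(round(bs_noise_figure_db(2.0, 12.0, 3.0, 2.5), 2))   # ~2.4 dB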
The effective thermal noise per subcarrier at the base station receiver refer-
ence point, Nt,B, is then calculated as
Nt,B = Nt + 10 * log(15,000) + Nf,B (4.2)
and the effective thermal noise per subcarrier at the UE receiver input is
Likewise, the power transmitted per subcarrier by the base station on the
DL, PB,SC, is given by
It is noted that (4.4) and (4.5) can be easily modified to accommodate for
any power boosting option on the reference symbols.
NR,ul = 10 * log[(Nt,B + ρPUSCH (1 − β) ISC,ul) / Nt,B] (4.6)
where all quantities in the bracket must be converted from dB or dBm to lin-
ear scale before the computation. The uplink interference per subcarrier, ISC,ul,
defines the interference on PUSCH when the network is fully loaded and de-
pends on the cell size, the cell layout geometry, and the uplink power control
target parameter P0 (see Chapter 5) with typical values in the low −100-dBm
ranges (−140 to −110). More specific values may be provided from system
level simulation results provided by vendors. The uplink load parameter, ρpusch, defines the fraction of the PUSCHs carrying user data. The operator may
specify a load level for which the network is to be dimensioned. Otherwise,
different values of load level may be tested for achieving acceptable trade-offs
between coverage and capacity. The parameter β accounts for the positive im-
pact of any interference cancellation algorithm in the base station receiver and
ranges between 0 and 1, with the value 0 used when no interference rejection is
implemented and 1 for perfect interference cancellation. The reduced noise from interference rejection algorithms can be used to either increase the coverage (the cell range) or obtain higher capacity and throughput. The value of the noise rise
is used as an estimated value for interference margin in the link budgeting and
recommended values are usually available in tabulated format versus the cell
load factor ρpusch for typical site configurations from vendors.
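Combining (4.2) and (4.6), the sketch below computes the uplink noise rise used as the interference margin; the thermal noise density of −174 dBm/Hz is a standard assumption, and the example inputs (noise figure, uplink interference level, load, and β) are illustrative only.

import math

def ul_noise_rise_db(nf_b_db, i_sc_ul_dbm, load_pusch, beta):
    """Uplink noise rise per (4.6), using the per-subcarrier noise of (4.2)."""
    nt = -174.0                                        # thermal noise density, dBm/Hz (assumed)
    nt_b_dbm = nt + 10 * math.log10(15000) + nf_b_db   # (4.2), per 15-kHz subcarrier
    nt_b = 10 ** (nt_b_dbm / 10.0)                     # dBm -> mW
    i_sc = 10 ** (i_sc_ul_dbm / 10.0)
    return 10 * math.log10((nt_b + load_pusch * (1 - beta) * i_sc) / nt_b)

# e.g., 2.4-dB BS noise figure, -120-dBm UL interference per subcarrier,
# 50% PUSCH load, no interference cancellation (beta = 0)
print(round(ul_noise_rise_db(2.4, -120.0, 0.5, 0.0), 2))   # ~7.6 dB noise rise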
A cell load is also defined in LTE for both the UL and DL, but differently from the WCDMA system. The cell load in LTE is defined as the average PRB
resource utilization with the averaging defined over long periods of the order of
minutes or even hours. The higher the cell load, the higher will be the interfer-
ence from neighboring cells. With this definition, the cell load can be set to
100%. The cell load on uplink and downlink is what will determine the respec-
tive interference margins that will be required in the link budgeting process and
is the alternative means to the noise rise calculation for this purpose. The calcu-
lation of UL noise rise and the DL noise rises as described in the above and in
the following section for the DL are complicated and require parameters whose
values cannot be easily estimated for each network design scenario or require
iterative calculations. Therefore, the load factor is often the simpler, more practical means for setting the interference margins. Normally, based on system simu-
lations on various network models and measurements obtained from practical
networks, vendors provide tabulated data that recommend the necessary link
margins for various targeted network loads. However, we will proceed with the analysis and formulation of the DL noise rise in the next section for the sake of exactness and to provide insight into the interacting factors and the impact of the network layout, site configuration, and traffic load on the noise rise and the required interference margins.
where the weighting in the bracket accounts for the fact that when a control
channel is the interferer, it is always transmitting data (i.e., control informa-
tion), whereas when a PDSCH from a neighboring cell is the interferer, it transmits power only for a fraction ρpdsch of the occasions, that is, its load factor. The PDSCH load may be set as a design goal to achieve a certain coverage requirement. The path loss factor at the cell edge, Lce, is the same as the linearly
scaled version of the downlink path loss obtained from the link budgeting and
hence ties the two values into an iterative loop until convergence starting with
the best estimates (using, for instance, the path loss obtained from the UL) for
Lce. The Fc factor reflects the cell design quality and depends on the cell layout
geometry and the site configuration such as antenna height and tilts. Values for
this parameter in various scenarios for cell-edge areas can be obtained via system
level simulations. Typical values can fall in the range from around 1.3 to 3. The
parameter γCCH defines the fraction of times when PDSCHs are interfered with by control channels, and its value depends on the allocated system transmission bandwidth and on whether the network is time-synchronized (i.e., whether adjacent cells are time-synchronized). Typical values can range from a few percent to the low 20s of percent, with the higher end applicable to non-time-synchronized networks, and may be obtained from the equipment vendor. The
value of the noise rise is used as an estimated value for interference margin
in the link budgeting. However, recommended values are usually available in
tabulated format versus the cell load factor ρpdsch for typical site configurations
from vendors.
Table 4.1
Worst-Case PDSCH Overheads Due to Reference Symbols and
Control Signaling
Antenna Configuration | Number of DL REs Taken Up by the Reference + Control Channels per PRB | Percentage Overhead
Single-stream transmission with one TX and one TX diversity antenna | 48 | 28.6%
2 × 2 MIMO | 52 | 31%
4 × 4 MIMO | 52 | 31%
Table 4.2
Average Number of PRBs Consumed by the Uplink
Control Channels
Transmission Bandwidth (MHz) | Total Number of PRBs | PRBs Taken on Average by the Uplink Control Channels
1.4 | 6 | 2
3 | 15 | 4
5 | 25 | 4
10 | 50 | 6
15 | 75 | 8
20 | 100 | 10
Table 4.3
Average Total Overhead from the Reference Symbols and the
UL Control Channels in PUSCH
Overhead | 1.4 MHz | 3 MHz | 5 MHz | 10 MHz | 15 MHz | 20 MHz
Average number of symbols (REs) taken per PRB | 80 | 69 | 51 | 44 | 42 | 41
Percentage overhead | 48% | 41% | 30% | 26% | 25% | 24%
sections. These protocols consist of MAC, RLC, PDCP, IP, TCP, or UDP, and
the application layer protocols such as RTP for VoIP and FTP/http for data.
The header sizes for these protocols are given in Table 4.4. To calculate the total
overheads from the stated protocols, we note that at the bottom are the 1-ms
radio subframes, which are the entity that contains the transport block. Within
the transport block is the MAC header and any extra space filled by padding.
Then follows the RLC header, and then within the RLC PDU (protocol data
unit) there can be one or a number of PDCPs that take up the IP packets that
form the service data units (SDUs) coming from the top of the protocol stack.
Table 4.4
Typical Layer 2 Protocol Header Sizes
Protocol | MAC | RLC | PDCP | IP (v6) | TCP (or UDP) | RTP (for VoIP)
Header Size (bytes) | 2 to 3* | 2* | 2* | 40 | 20 (or 8) | 12
*The size of the MAC and RLC headers can range from 2 to 3 bytes and from 1 to 2 bytes, respectively, depending on the size of the length field in the case of the MAC and of the sequence number in the case of the RLC. Likewise, the size of the PDCP header can vary from 1 to 2 bytes depending on whether a short (5-bit) or a long (12-bit) sequence number is used. These are all configurable by the operator.
Multiple PDCPs are used within one RLC if more than one user or application
is multiplexed within the 1 subframe.
The RLC PDU size is not fixed because it is based on the transport block size, which depends on the channel conditions and on the resources that the eNodeB assigns to the UE on the downlink. The transport block size can vary based on the
bandwidth allocation and the modulation-coding scheme. The RLC PDU size
will also depend on the size of the packets (e.g., large packets for video or small
packets for voice over IP). If one RLC SDU cannot accommodate the data
packet, or the available radio data rate is low resulting in small transport blocks,
the RLC SDU may be split among several RLC PDUs. If the RLC SDU is
small, or the available radio data rate is high, several RLC SDUs may be packed
into a single PDU. In many cases, both splitting and packing may be present.
As a simple example, we will assume a single PDCP layer per RLC PDU and an average IP packet size of 300 bytes, which is based on the statistics often indicated for Internet traffic. A 300-byte packet from the application layer will then incur 20 bytes for the TCP header, 40 bytes for IPv6, 1 byte for the PDCP header, 2 bytes for the RLC header, and 2 bytes for the MAC header, resulting in a total of 65 bytes of protocol overhead, or 100 × 65/300 ≈ 21.7%. Thus, the desired user data rate from the application layer for which the link budgeting should be performed must be boosted by a factor of about 1.22. This is assuming that no header compressions of any kind are applied (the worst case).
In the case of VoIP, with the 3GPP AMR codec rate of 12.2 kbps, voice
frames are collected every 20 ms, which results in 32-byte frames. In the case
of voice, the ROHC protocol (described in RFC 3095) is used to compress the
protocols above PDCP to only 3 bytes in total header size. Adding to that 1
byte from RLC (for unacknowledged mode operation using a 5-bit sequence
number), 2 bytes from MAC and a PDCP header of 1 byte using short sequence
number result in a total frame size of 39 bytes, of which the percentage header
overhead is 100 × 7/39 = 18%. In other words, the link layer protocol headers
add 100 × 7/32 = 22% to the voice frames. Thus, each voice connection will
bear a data rate of 1.22 × 12.2 kbps = 14.88 kbps in the link budgeting. Simula-
tion results presented in [3] show the LTE capacity for voice over IP using the
AMR codec at a rate of 12.2 kbps and a cell bandwidth of 5 MHz varies from
289 to 317 calls on DL and from 123 to 241 calls on UL under various voice
scheduling, control channel limitations, and channel conditions with averages
obtained such as at around 320 calls on downlink and 240 calls on uplink per
sector in [4], indicating that the voice service is limited by the uplink. The VoIP
based on VOLTE (voice over LTE) is covered in detail in Chapter 12.
allowed path loss, Lpmax , from the UE to the eNodeB receiver that will meet
an operator specified uplink bit rate (throughput) at the cell edge. The path
loss, Lpmax , obtained from the uplink budgeting is also the starting point of
the downlink calculations and is used to obtain the downlink noise rise esti-
mate formulated in previous section. The required cell-edge bit rate and the
transmission resources (number of physical resource blocks (PRBs)] that may
be allocated to cell-edge users translate into the MCS needed to support the
specified bit rate. This process was basically described in Chapter 3. The uplink
bit rates at the physical layer is first found by adding all relevant protocol layers
at L1 and above (excluding reference symbols and control channels) such as
MAC, RLC, PDCP, TCP, or UDP to the desired net user data rate in bps. Then
this is scaled down by 1,000 to find the necessary transport block size (TBS) in
bits (the bits transmitted in 1 ms). With the TBS thus found and the number
of PRBs allocated for the connection, the 3GPP tables mentioned in Chapter
3 are used as explained there to find the TBS index associated with the TBS.
Then other tables mentioned in Chapter 3 are used to map the TBS index into
the MCS index. From the MCS index, the coding rate is found with the help
of the relevant table given in Chapter 3.
The MCS and the coding rate thus found translate into the required
SINR using link level simulation data from the vendor. The higher the allocated
transmission bandwidth, that is the number of PRBs allocated, the more robust
will be the required MCS (more coding and lower order modulation) and hence
a smaller SINR that will be needed. The smaller SINR will help to increase
the coverage by allowing higher path losses. The allocation of the transmission
resources (resource blocks) to the cell-edge users which then determines the
necessary MCS provides the trade-offs between coverage and resource usage.
The more robust MCS, requiring less SINR, will consume more transmission
resources and hence leave less for other scheduled users. However, the cell-edge
criteria may be set to be the cell-edge throughput meaning the bit rate that
could be achieved by a single UE placed within the cell at the cell edge loca-
tion and consuming the resources. In that case, there is still a similar trade-off
between coverage (the cell range) and the coverage quality that is the cell edge
throughput that can be achieved. The more robust MCS will increase the cell
range but incur more overheads (due to coding and lower modulation order)
in the resource usage and thus leaving less for transmitting the actual user data.
The SINR requirement and the uplink noise rise due to interference al-
low the calculation of the minimum signal power required per subcarrier at the
base station receiver reference point. The uplink maximum allowed path loss
is then obtained by subtracting from the UE transmitted power per subcarrier
the receiver minimum signal power required and then adding all diversity and
antenna gains and subtracting the various losses due to feeder, building or car
Coverage-Capacity Planning and Analysis 95
penetration, body loss if any, and the margin for lognormal slow fading. The re-
ceiver sensitivity per subcarrier basis excluding the noise rise is calculated from
SB = SINRul + N t ,B
where Nt,B is the effective total thermal noise at the input to the eNode B re-
ceiver given by (4.2) as
N t ,B = N t + 10 * log(15000) + N f ,B
In which Nf,B is the TMA accounted noise figure of the base station re-
ceiver given by (4.1), substituting in the above expression for the eNodeB sen-
sitivity, we obtain
N f ,RU L f − 1
SB = SINRul + N t + 10 * log(15000) + 10 * log N f ,TMA + (4.8)
GTMA
The maximum allowed path loss on the uplink is then obtained by the
following expression
Alternatively, instead of using the noise rise NRul in the above equation,
one may set an interference margin based on the targeted UL load. Tabulated
data from vendors based on system simulations performed on typical network
layouts and measurements collected from practical networks are usually avail-
able to guide set the interference margin required for the targeted load. In that
case, (4.9) takes the form:
in which IMul is the interference margin for the targeted load as obtained from
vendors. Next, the maximum allowed path loss from link budgeting the down-
link side is obtained and compared with the above value from UL. The small-
est of the two is then used to dimension the cell range and thereby obtain the
number of sites necessary to meet the specified coverage requirements.
resource blocks). The required cell-edge bit rate is translated into the equivalent
bit rate in bps at the physical layer by incorporating protocol overheads from
MAC, RLC, PDCP, TCP, or UDP (excluding reference symbols and control
channel overheads) as applicable. Then the result is scaled down by 1,000 to
find out the transport block size in bits. The TBS thus found and the transmis-
sion bandwidth in terms of number of PRBs are mapped into the TBS index
and then into the MCS index in the same manner that was explained for uplink
link budgeting (and as explained in Chapter 3).
The MCS and the coding rate required then determines the necessary
SINR at input to the user equipment receiver using the tabulated link level
simulation results provided by the vendor. The tradeoffs involved between
transmission bandwidth resources that may be allocated to cell edge users and
the resulting coverage and overall cell capacity are similar to what was explained
for the Uplink case. From the SINR, the UE receiver sensitivity excluding the
interference caused noise is obtained by subtracting the thermal noise calcu-
lated in (4.3). The maximum allowed path loss is then obtained by subtracting
from the base station transmitted subcarrier power, PB,SC, given by (4.5), the
UE sensitivity, the noise rise due to DL interference, the various losses and add-
ing and subtracting the various gains and losses. The UE sensitivity excluding
noise rise due to interference is
Then the maximum allowed path loss, Lpmax,dl, is calculated from the
expression
L p max,dl = PB ,sc − Sue ,sc − N Rdl ,ce − N LNF − L f
−L J − LBP − LB + GTx ,B + G Rx ,ue (4.12)
Alternatively, instead of using the noise rise NRdl,ce in (4.12), one may set
an interference margin based on the targeted DL cell edge load. Tabulated data
from vendors based on system simulations performed on typical network lay-
outs and measurements collected from practical networks are usually available
to guide the interference margin required for the targeted load. In that case,
(4.12) takes the form:
L p max,dl = PB ,sc − Sue ,sc − M dl − N LNF − L f
−L J − LBP − LB + GTx ,B + G Rx ,ue (4.14)
in which IMdl is the interference margin for the targeted load as obtained from
vendors.
We notice from the expression for the noise rise, NRdl,ce, in the formula
for the maximum allowed path loss that it itself depends on the path loss to be
calculated (the Lce term). Hence, the above two equations tie each other into
an iterative loop for convergence of an initial best estimate for Lpmax,dl. The
best starting value for this iteration will be the maximum allowed path loss Lp-
maxul obtained from the uplink budgeting from (4.9). On convergence, the final
value obtained for Lpmax,dl is compared with the value obtained on the uplink
and the smallest of the two is used as input to an RF propagation model suited
to the environment to estimate the cell range. Figure 4.1 provides the schematic
illustration of this process of network dimensioning for coverage using link
budgeting on UL and DL. A simplified template for calculation of the uplink
and downlink path losses in simplified Excel format is given in Table 4.5.
The control channels performance against the cell range decided on this
basis, should also be checked, and should make sure that the required SINRs for
them are satisfied at the cell edge. Otherwise, the cell edge should be reduced
until the control channel performance is not a limitation. The final value decid-
ed is then used to obtain the number of sites required using site configuration
and geometry as discussed in later sections. Simulation results in [4] indicate
that for low bit rate services uplink random access channel coverage may be the
limiting factor, instead of the PDSCH/PUSCH. The following two sections
present the analysis and formulations for checking the coverage on the down-
link and uplink control channels.
4.1.1.6 Downlink Control Channel Coverage Verification
In order to be able to meet the cell edge performance obtained on the DL and
UL data channels, the UE must be able to properly receive and decode the sig-
naling information transmitted on the control channels as well. On the down-
link, the PDCCH is the limiting control channel. The PDCCH is interfered
by both the PDSCH data channels and the control channels from neighbor-
98 From LTE to LTE-Advanced Pro and 5G
Figure 4.1 Flowchart for coverage based radio access network dimensioning (using the
noise rise estimation approach).
Table 4.5
Simplified Link Budgeting Template
Parameter or Catogory type naming Uplink Downlink Comment
Channel PUSCH PDSCH Formulat used when a value is
calculated
User Environment Indoor Indoor Indoor
System Bandwidth (MHz), a 10.0 10.0
Cell Edge Rate (kbps) 512.00 2048.00 512.00
MCS QPSK QPSK 0.30
0.36
Tx
Max Total Tx Power (dBm), b 23.00 49.00
Allocated RB 6 30
RB to Distribute Power, c 6 50
Subcarriers to Distribute Power, d 72 600 c*12
Subcarrier Power (dBm), e 4.43 21.2 b–10*log(d)
Tx Antenna Gain (dBi), f 0.00 18.00
Tx Cable Loss (dB), g1 0.00 0.50
TMA insersion and jumper loss, g2 0.00 0.00
Tx Body loss (dB), h 0.00 0.00
EIRP per Subcarrier (dBm), i 4.43 38.72
Rx
SINR (dB), j –4.83 –1.84
Rx Noise Figure (dB), k 2.00 7.00 On the base station side, this
figure should account for any
TMA gains and feeder cable loss
using forumla 4.1
TMA gain 0.00 0.00
Receiver Sensitivity (dBm), l –135.06 –127.08 j–174+10*log(15000)+k
Rx Antenna Gain (dBi), m 18.00 0.00
Rx Cable Loss (dB), n 0.00 0.00 On base station side this has
already been taken care of by
calculation of the noise figure
(formula 4.1)
Rx Body Loss (dB), o 0.00 0.00
Noise rise (dB0 due to interference, or 0.50 5.00 Formula 4.9 or 4.10 for UL and
the interference margin (estimated from 4.12 or 4.13 for DL
assumed load parameters), p
Min Signal Reception Strength (dBm), q –134.56 –122.08 1+p
Path Loss & Cell Radius
Penetration Loss (dB), r 19.00 19.00
Std. of Shadow Fading (dB), s 11.70 11.70
Area of Coverage Probability, t 95.00% 95.00^ 95.00%
Shadow Fading Margine (dB), u 9.43 9.43 Func(s,t) obtained from tables
Path Loss (dB), v 128.56 132.36 i–g1–g2+m–q–r-u
100 From LTE to LTE-Advanced Pro and 5G
The control channel SINR at the cell edge must therefore be calculated
and checked against the vendor recommended values to ensure adequate per-
formance. If the calculated cell-edge SINR is lower, the cell-edge throughput
will be degraded. If the SINR is considerably lower than the recommended
value, the site-to-site distance cell range needs to be reduced. To calculate the
PDCCH SINR at cell edge, we will first calculate the noise rise on the control
channel at the cell edge as follows:
in which all the quantities must be converted into linear scale (from dB or
dBm) before the calculation. The ρPDCCH was defined earlier at the beginning
of this chapter as the PDCCH load, that is, the fraction of occasions a PDCCH
resource is used. By setting this parameter to 100%, the noise rise for the ex-
treme case of fully loaded PDCCH can be calculated.
The control channel SINR at the cell edge is then calculated by
where Lpmx is just the maximum allowed path loss result from link budgeting
the UL and DL data channels. The parameter Lce in the noise rise equation is
the linear scale version of Lpmax.
{ }
SINRPUCCH , Ack /Nack = Minimum P0, pucch ,sc , PUE ,SC − Lce − N t ,B
(4.17)
−10 * log ( µ + Fu ) ρPUCCH , A /N .10
P0,PUCCH ,SC /10
Coverage-Capacity Planning and Analysis 101
The PUCCH power control algorithm adjusts the received signal strength
towards the target P0,pucch,SC defined here on a per subcarrier basis (on 15-kHz
bandwidth) in decibels. The parameter ρpucch A/N is the number of simultane-
ously transmitted ACK/NACK on PUCCH in a cell. A value of 2 is recom-
mended. The parameter µ as mentioned earlier conveys the nonorthogonality
factor to model intracell PUCCH interference. A value of 0.2 is recommended
for dimensioning.
Table 4.6
Site Area Calculation Formulas
Site Configuration Omnidirectional Bisector Trisector Six-Sector
Site Area formula 2.6 (cell radius)2 1.3 (cell radius) 2 1.95 (cell radius) 2 2.6 (cell radius) 2
102 From LTE to LTE-Advanced Pro and 5G
where n is the path-loss distance exponent (how the loss varies with distance),
and L–p(d0) is the mean path loss at a reference distance d0, which captures an-
tenna heights and the frequency and environmental dependencies.
There are basically three different categories of path loss models, referred
to as statistical or empirical, deterministic, and semideterministic. The statisti-
cal models are formulas that describe the path loss versus distance on an average
scale. They are derived from statistical analysis of a large number of measure-
ments obtained in typically distinct environment such as urban, suburban, and
rural, which are incorporated in the form of tabulated data or best-fit formulas
for average path loss calculation versus distance in the particular environment.
They do not require detailed site morphological information. Examples are
the Okumura-Hata models. The deterministic models such as those based on
ray tracing apply RF signal propagation techniques to a detailed site morphol-
ogy description (using building and terrain databases) to estimate the signal
strength resulting from multiple reflections and line of site at various pixels in
the area. The simple, rather idealized free-space and two-path ground reflec-
tion models may also be classified under the deterministic models. Because
of the complexities involved in such models, they are only developed mainly
for simple environments such as indoor and small micro cell. The ray-tracing
examples of such models are used for modeling small micro cells in urban and
dense urban areas. The semideterministic or semistatistical models are based
on a mixture of deterministic method of following individual signal propaga-
tion effects from site-specific morphology and a statistical generalization and
calibration of model parameters based on collected path loss measurements.
These models require more information than the empirical models but less than
the deterministic models. Examples are COST 231 Walfisch-Ikegami and the
generalized tuned Hata models.
where 150 MHz < f < 1,500 MHz carrier frequency, hb = 30m to 200m is the
base station antenna height, hm = 1m to 10m mobile antenna height, d = 1
km to 20 km distance from the transmitter, and a(hm) depends on whether the
model is used on the size scale of the city given as follows.
Coverage-Capacity Planning and Analysis 103
Qr = 4.78 ( log10 f ) 2 − 18.33 log10 f + 40.94 all frequencies (in MHz) (4.21d)
data to capture the propagation effects due to reflections and rooftop diffrac-
tions. Therefore, the model has deterministic elements than being purely sta-
tistical. It is more complex than the Okumura-Hata models and is more suited
to smaller macro cells in urban areas or microcells where the antenna is placed
rather at the rooftop levels. The model presents different formulas for cases
when there is a line of site (LOS) signal component, and for when there is no
LOS component and is discussed further in [6, 8].
(
PRx = PTx + K 1 + K 2 log (d ) + K 3 log H eff )
(4.24)
( ) ( )
+ K 4.D + K 5log H eff log (d ) + K 6 log hmeff + K clutter
where K1, K2, K3, K5, and K6 are coefficients that are also present in the origi-
nal Hata formula, but are left here unspecified for further tuning for the area.
Their values should not be changed much from the original values used in the
Hata formula to keep the model structure reliable. The coefficient K4, which
multiplies a diffraction loss D, is included to adjust diffraction losses, which
are caused by buildings or terrain irregularities in the line of site of the receiver.
The effective antenna heights for the base station and mobiles, Heff, and hmeff,
are calculated by considering the terrain profile and the area to be covered. The
effective antenna height values can vary, for instance, whether the entire area is
to be modeled or just a hilly road with antennas placed along it. For an oscilla-
tory hilly area, the effective antenna heights are normally calculated relative to
the average terrain height as referenced to the sea level.
A new variable clutter correction parameter, Kclutter, is included to adapt
the equation to each morphological class. This parameter allows the same for-
mula structure to be used for each different environment and land usage class
such as urban, suburban, open, and rural by simply shifting the basic curve to
fit the particular clutter type.
The model resolution defines the smallest bin size over which RF propa-
gation predictions and signal coverage validations can be performed. The work
reported in [9] concludes that the optimum model prediction resolution de-
pends on the intended cell coverage radius in the following way
Coverage-Capacity Planning and Analysis 105
L p (r ) = L−p (r ) + X σ all in dB
Table 4.7
Typical Values for Lognormal Fade
Margin for a Single-Cell Coverage
Area Probability of 95%
Propagation Path Constant
σLNF, dB 3 4
6 6.25 5.5
9 11 9.8
12 15 14
Coverage-Capacity Planning and Analysis 107
the reduced likelihood that signals from all the overlapping cells go into fading
at the same time. Assuming for simplicity that coverage is provided through
two overlapping cells, we can evaluate the handover gain to the lognormal fade
margin as follows.
From the sum probability formula, we can write:
where p1 and p2 are the coverage probabilities provided by each of the two cells
for a given lognormal fade margin for the propagation environment. Assuming
p1 = p2, which is reasonable, we obtain cell coverage probability from either of
the two overlapping cells
2 p − p . p = pp (2 − p ) (4.26)
Since p < 1, then 2 − p > 1, and hence the coverage probability from the
overlapping area at cell edge will be larger than the coverage probability of one
single cell by a factor of 2 − p. This will translate to a lower fade margin require-
ment for the same original coverage probability requirement of p. That is, it
would require the fade margin for what would be required for a coverage prob-
ability of p/(2 – p). As an example, for an environment with a standard devia-
tion of 8 dB, and for a cell-edge coverage probability of 95%, the fade margin
required is around 5 dB without the handover gain. With the handover gain
based on two overlapping cells, the coverage probability requirement of 0.95%
with one cell would translate into a coverage probability requirement of 90%
only and hence the fade margin required would reduce to 3 dB (corresponding
to the coverage probability of 90%), achieving a gain of 3 dB.
system-level simulation, and further iterations for the cell radius and hence the
number of sites to be performed if necessary.
The capacity quality requirement of the radio network is expressed in
terms of the traffic handling capability either on a per cell (for instance, 10
Mbps per cell) or per unit area. In LTE, the main indicator of capacity is the
SINR distribution in the cell, which is obtained by carrying out system-level
simulations. The SINR distribution is then directly mapped into system capac-
ity that is the achievable data rate distribution within the area. Therefore, the
cell capacity in LTE is impacted by several factors such as the supported MCSs,
the antenna configurations, the detailed signal propagation environment, and
the site geometry, as well as specific resource and interference management al-
gorithms implemented within the network nodes. The capacity analysis based
on detailed system-level simulation will result in new updates of the values of
load and interference-related parameters assumed in the link budgeting dimen-
sioning. Then the updated values can be used to reiterate the coverage-based di-
mensioning and obtain new values for the cell radius and the iteration repeated
until convergence is achieved.
The cell range obtained from final coverage dimensioning and capacity
analysis can be used to calculate the number of sites necessary for coverage re-
quirement in the service area. Then with the traffic model, and the cell capacity,
the number of sites needed based on the traffic requirement can be calculated.
The larger of the two values provides the final output for the number of sites
necessary to meet both the coverage and the capacity requirements. Normally,
the necessary site count based on capacity requirement will eventually exceed
what is obtained in the coverage dimensioning in the initial phase of the net-
work deployment when the number of users is still small. As the demand in-
creases and more users are added to the service, the capacity based site count
takes the lead resulting in smaller cell size. The flowchart for the combined
coverage and capacity dimensioning is provided in Figure 4.2.
into the best MCS that can be supported using tabulated link-level simulation
results and hence the achievable bit rate. The average cell throughput is then
the probability weighted summation of the achievable bit rates at various pixels
within the cell and is calculated by
110 From LTE to LTE-Advanced Pro and 5G
The mean cell throughput obtained from (4.25) is then checked against
the expected mean traffic demand per cell. If the capacity requirement is not
met, additional sites are added until the network capacity provided can meet
the traffic demand. The term under the summation sign in (4.27) may be mul-
tiplied by the local traffic density normalized by the expected cell traffic to get a
more realistic estimate of the cell capacity. This is because the cell capacity will
vary depending on where in the cell the users use the services from, their mobil-
ity activity (which can takes up more signaling channels) and the actual service
profile and geographic distribution of users within the cell. For example, the
capacity will be greater if some of the mobiles are in better propagation condi-
tions than in less favorable radio conditions.
service, the number of sites based on the overall services throughput thus ob-
tained and the cell capacity indicated in the system simulations is estimated by
The site count estimation should be performed for each type of service
area, as was done in the case of coverage dimensioning. The larger of the two
site counts is then chosen as the final output from the radio access dimension-
ing task.
Figure 4.3 Optimum system bandwidth analysis for coverage of 1 Mbps on DL.
112 From LTE to LTE-Advanced Pro and 5G
Figure 4.4 Optimum system bandwidth analysis for coverage of 2 Mbps on DL.
the data derived in Chapter 3 for downlink bit rates of 1 Mbps and 2 Mbps.
The figures show that for a downlink bit rate of 1 Mbps, the optimum system
bandwidth is 3 MHz, and for a bit rate of 2 Mbps the optimum system band-
width required is 5 MHz.
There is an optimum system bandwidth for the uplink also. However, the
trends are less complex as the uplink data rate is almost unaffected by changes
in the uplink transmission bandwidth. As the uplink transmission bandwidth
increases, it does reduce the SINR requirement for the same bit rate because
of the lower coding rate that can be used. However, the increase in the uplink
bandwidth results also in increased noise bandwidth and hence more total noise
power while the signal power stays the same. The difference to downlink is that
the symbols are detected in time-domain (SC-FDMA) and not on a per sub-
carrier basis as in the downlink. Hence, on the uplink the interplaying factors
are the gains achieved through the increased uplink bandwidth against the loss
resulting from the increased total noise power.
S
SINR = (4.29)
I +N
Coverage-Capacity Planning and Analysis 113
In which S denotes the average received signal power, and I and N denote
the interference and noise at the input to the receiver, respectively. The inter-
ference averaging is assumed over small scale fading and many TTIs and the
impact of the interplay of HARQ and the interference bursts is neglected in the
overall considerations made here. The interference term I is generally composed
of an intracell component and intercell part. In LTE, the intracell contribution
is assumed insignificant due to the subcarriers mutual orthogonality, although
excess phase noise, transmitter nonlinearity measured in error vector magni-
tude, and excess delay spread beyond the cyclic prefix length can result in non-
negligible interference from in-cell users.
At the cell edge, the interference from K neighboring cells can be written
as
I = ∑ γk .I max,k
K (4.30)
where Imax,k is the maximum interference received from cell k when it is fully
loaded, and γk is the subcarrier activity factor (i.e., load) of cell k. Assuming
the average cell load is the same for all cells and equal to γ, (4.30) simplifies to
S S
SINR = = (4.31)
γ.I max + N γ.S .Fc + N
where
I max
Fc =
S
as was defined earlier in the chapter and N is the total thermal noise power in
the receiver and in dB is given by
and Nf is the UE receiver noise figure normally set to around 7 dB. All powers
in (4.31) are expressed on per subcarrier basis.
114 From LTE to LTE-Advanced Pro and 5G
Tx ,eff
S= (4.32)
M .L
where M is the fade margin and L is the path loss all in linear scale. Tx,eff is
the effective transmitted power (after antenna gains) in watts. Substituting in
(4.31) and solving for the path loss L gives
Figure 4.5 Cell-edge SINR versus Rx power for various average network load parameter γ.
A value of 3 was used for the intercell interference parameter Fc.
Coverage-Capacity Planning and Analysis 115
Figure 4.6 Cell range versus network load for various values of cell-edge throughputs.
For the path loss L, we will use the model provided by 3GPP in [14] for
the 2-GHz band which is
References
[1] Pozar, D. M., Microwave Engineering, New York: John Wiley & Sons, 1998.
[2] Rahnema, M., UMTS Network Planning, Optimization and Inter-Operation with GSM,
New York: John Wiley & Sons, 2007.
[3] 3GPP R1-072570, “Performance Evaluation Checkpoint: VoIP Summary,” 2007.
116 From LTE to LTE-Advanced Pro and 5G
[4] Holma, H., et al., (eds.), LTE for UMTS, New York: John Wiley & Sons, 2009.
[5] Sklar, B., “Rayleigh Fading Channels in Mobile Digital Communication Systems Part
I: Characterization,” IEEE Communications Magazine, Vol. 35, No. 7, July 1997, pp.
90–100.
[6] Hata, M., “Empirical Formulae for Propagation Loss in Land Mobile Radio Services,”
IEEE Trans. Vehic. Tech., Vol. VT-29, No. 3, 1980, pp. 317–325.
[7] COST 231, Damasso, E., and L. M. Correia, (eds.), Digital Mobile Radio Towards Future
Generation Systems, Final Report, COST Telecom Secretariat, Brussels, Belgium, 1999.
[8] Walfisch, J., and H. L. Bertoni, “A Theoretical Model of UHF Propagation in Urban En-
vironment,” IEEE Transactions on Antennas and Propagation, Vol. 36, No. 12, December
1988, pp. 1788–1796.
[9] Mckown, J., and R. Hamilton, “Ray Tracing as a Design Tool for Radio Networks,” IEEE
Networks Magazine, Vol. 5, No. 6, November 1991, pp. 27–30.
[10] Wireless World Initiative New Radio, “WINNER II Channel Models, WINNER II
Deliverable D1.1.2, Version 1.2,” http://www.ist-winner.org/WINNER2Deliverables/
D1.1.2.zip, 2008.
[11] Jakes, W. C., (ed.), Microwave Mobile Communications, New York: John Wiley and Sons,
1974.
[12] Hämäläinen, J., “Cellular Network Planning and Optimization Part II: Fading,”
Communications and Networking Department, TKK, 17.1, 2007, www.comlab.hut.fi/
studies/3275/Cellular_network_planning.
[13] Salo, J., M. Nur-Alam, and K. Chang, “Practical Introduction to LTE Radio Planning,”
web listed white paper, November 2010.
[14] 3GPP TR 25.814, “Physical Layer Aspects for Evolved UTRA Release 7, V2.0.0,” 2006.
5
Prelaunch Parameter Planning and
Resource Allocation
The three important tasks in the LTE prelaunch parameters planning are allo-
cation of physical cell identities (PCIs), uplink reference signal sequence plan-
ning, and the physical random access channel (PRACH) parameters planning.
These are discussed in the following sections.
117
118 From LTE to LTE-Advanced Pro and 5G
boring sites are not frame-synchronized or the frame timing offset among the
neighboring cells are random, PCI coordination among sites is not practical.
A PCI-tied reference symbol allocation scheme can result in an overlap-
ping of RS with the control and data resource elements of the neighboring cells.
Therefore, a design choice must be made between RS to RS interference and RS
to PDSCH/PDCCH interference. The current engineering consensus seems
to be that the latter choice is favored on the consideration that a PCI module
3 allocation of reference symbols helps to avoid interference with the primary
synchronization signals and hence avoid problems in cell search and handover
measurements.
1. These are complex valued mathematical sequences whereby cyclically shifted versions of the
sequence result in zero autocorrelation.
Prelaunch Parameter Planning and Resource Allocation 119
tion and periodicity of PRACH resources in the uplink subframes are found by
checking the information broadcast on the BCCH channel.
Table 5.1
FDD Preamble Burst Formats
Sequence, Guard Cell
Format CP, ms ms Subframes Time, ms Radius, km
0 103.125 800 1 96.875 ~14
1 684.375 800 2 515.625 ~75
2 206.25 1,600 2 193.75 ~28
3 684.375 1,600 3 715.625 ~103
n (n + 1)
x u (n ) = exp − j πu , n = O N ZC − 1 (5.2)
N ZC
where NZC denotes the length of the uth root sequence. From the uth root
Zadoff-Chu sequence, random access preambles with the zero correlation zone
are defined by cyclic shifts of multiples of Ncs according to
Table 5.2
NCS for Preamble Generation (Preamble Formats 0 to 3)
ZeroCorrelation NCS value
ZoneConfig Unrestricted Set Restricted Set
0 0 15
1 13 18
2 15 22
3 18 26
4 22 32
5 26 38
6 32 46
7 38 55
8 46 68
9 59 82
10 76 100
11 93 128
12 119 158
13 167 202
14 279 237
15 419 —
122 From LTE to LTE-Advanced Pro and 5G
The degree of cyclic shifting of the ZC root sequences, NCS, should guar-
antee distinct sequences for the cell’s radio propagation geometry. Therefore, a
ZC root sequence may be shifted by an integer multiple of the cell’s maximum
round-trip delay plus the delay spread, 64 times to generate the set of 64 dis-
tinct orthogonal sequences for the cell’s radio environment. The relationship
between the cyclic shift and the cell size is given by
in which RTD stands for round-trip delay given by RTD = 2R/c, where R and
C are the cell radius and the speed of the light in free space, respectively. In
the case of remote sites’ deployment, the length of the fiber to the remote cells
must be considered as part of the cell radius based on the speed of light in fibers
(about 2/3 the speed of light in free space). In the case in which microwave links
are used, the speed of light in free space is used for the calculation. The delay
spread is typically different for rural, suburban, urban, and dense urban envi-
ronments and its value should be obtained through drive test measurements in
the cell.
Another factor which determines the degree of cyclic shifts required is
the Doppler shift experienced by the UE’s mobility. In the presence of Doppler
shifts, the CS-ZC sequences lose their zero auto-correlation properties. Indeed,
high Doppler shifts induce offsets in the receiver’s bank of correlators from
the desired peak, and hence can lead to preamble confusions. To avoid this, a
1-bit flag from the eNodeB signals whether the current cell is a high-speed cell
or not so that proper ranges for cyclic shifts can be selected in the generation
of the preamble sequences for the cell. Hence, the cyclic shifts can either be
restricted or unrestricted. The restricted cyclic shifts limit the available cyclic
shifts for preambles that are used in high-mobility scenarios. The upper bound
for PRACH performance without applying cyclic shift restriction is within the
range of 150 to 200 km/hour.
As a result, the cyclic shift and corresponding number of root sequences
used in a cell are a function of the cell size and the UE speed. The cyclic shift is
indirectly given to the UE by a parameter called ZeroCorrelationZoneConfig,
as shown in Table 5.2 [1]. A small value for cyclic shift (NCS) allows for more
sequences to be derived from the root ZC sequence and higher values produce
less but are more resistant to the effects of delay spread and Doppler in the
preamble. Based on simulation results discussed in [2], the upper bound for
PRACH performance for the unrestricted cyclic shifts given in Table 5.2 are in
the range of 150 ~ 200 km/hour for typical AWGN environments [2].
The root sequences assigned are signaled to the cell by a single base logical
root sequence index (RootSequence Index parameter), regardless of the actual
number of root sequences required in a cell to derive the 64 preambles. The
Prelaunch Parameter Planning and Resource Allocation 123
logical root sequence index order is cyclic, that is, the logical index 0 is consecu-
tive to 837. The UE then derives the subsequent root sequence indexes in accor-
dance with a predefined ordering. The parameter RootSequenceIndex informs
the UE via SIB2 which root sequence is to be used. The UE starts with the
broadcast root index and applies cyclic shifts to generate the 64 preambles. The
cyclic shift value (or increment) is taken from among 16 predefined values. The
parameter called ZeroCorrelationZoneConfig points to a table from which the
cyclic shift is obtained. The smaller the cyclic shift, the more preambles can be
generated from a root sequence. Hence, the number of root sequences needed
to generate the 64 preambles in a given cell is:
( (
No. of root sequences = ceiling 64 integer (sequence length cycle shift ) )) (5.5)
For example, if the RootSequenceIndex is 300 and the cyclic shift is 119,
then the number of root sequences needed to generate the 64 preambles in a
cell is:
( (
No. of root sequences = ceiling 64 integer (839 119) = 10 ))
This means that if we allocated RootSequenceIndex 300 to sector 1, then
sector 2 must have RootSequenceIndex 310 and sector 3 must have RootSe-
quenceIndex 320.
cell. This results in a sequence reuse cluster of at least 839/5 ≈ 167 cells, which
allows for easy planning process. The design criterion to consider here is that
root sequence indices of cells must not overlap within the reuse distance. As
for typical scenarios, there are plenty of ZC sequences available. However, if
root sequences used in neighboring cells, which can hear each other overlap,
the transmitted preamble can be detected in multiple cells and result in ghost
preambles. The preamble planning process therefore consists of determining
the cell range desired and then using equations 1 and 2 to determine the proper
the root sequence reuse cluster and then the proper assignment of cells to the
reuse clusters.
for a random access response (RAR). The RAR will be identified with a random
access radio network temporary identifier (RA-RNTI) that is related to the pre-
amble transmission, as shown. The UE continues to monitor the PDCCH for a
number of subframes given by the parameter ra-responseWindowSize. If it does
not receive a RAR in this time with the corresponding RA-RNTI, then it will
initiate a retransmission with a power increment through the open loop power
control. This process continues until either a successful preamble transmission
or the number of retransmissions reaches preambleTransMax. If this occurs,
then the procedure will fail and an indication is provided to the higher layer.
Upon receipt of a successful uplink PRACH preamble, the eNodeB calcu-
lates the power adjustment and timing advance parameters for the UE based on
the strength and delay of the received signal and transmits an uplink capacity
grant to the UE to enable it to send further details of its request. This will take
the form of the initial layer 3 message. If necessary, the eNodeB will also assign
a temporary cell radio network temporary identifier (C-RNTI) for the UE to
use for ongoing communication. Once received, the eNodeB reflects the initial
layer 3 message back to the UE in a subsequent uplink resource grant message
to enable unambiguous contention resolution. After this, further resource al-
locations may be required for signaling or traffic exchange, which will be ad-
dressed to the C-RNTI.
check if the random access is overloaded in any of the preamble ranges. If one
of them is overloaded, RACH preambles are reallocated among the ranges. If all
of them are overloaded, more physical resources need to be reserved for RACH.
If none of them is overloaded, other parameters need to be adjusted, such as the
power ramping step, the preambleTransmax, the ContentionResolutionTimer,
and the PreambleReceivedTargetPower [4].
References
[1] 3GPP TS 36.211 v 1.0.0 (2007-03), “Technical Specification Group Radio Access Net-
work; Physical Channels and Modulation, Release 8.”
[2] Panasonic and NTT DoCoMo, “R1-073624: Limitation of RACH Sequence Allocation
for High Mobility Cell,” 3GPP TSG RAN WG1 Meeting, No. 50, Athens, Greece, August
2007.
[3] Salo, J., M. Nur-Alam, and K. Chang, “Practical Introduction to LTE Radio Planning,”
White Paper, November 2010.
[4] 3GPP TS36.300, “Technical Specification Group Radio Access Network; Evolved Univer-
sal Terrestrial Radio Access Overall description; Stage 2 Release 13,” V13.0.0, 2015.
6
Radio Resource Control and Mobility
Management
The radio resource control (RRC) performs the overall control of radio resourc-
es in each cell and is responsible for collecting and managing all relevant infor-
mation related to the active user equipment (UE) in its area. The RRC protocol
layer [1] works very closely with the layer 2 protocol medium access control
(MAC) inside the UE and the eNodeB. It is part of the LTE air interface control
plane and the brain of the radio access network. The main services and func-
tions of the RRC sublayer include the UE power state management, broadcast
of system information related to the nonaccess and the access stratum, paging,
setup, maintenance, and release of an RRC connection between the UE and
the E-UTRAN, UE measurement reporting and the control of the reporting,
resource scheduling, mobility functions, security, and the QoS management.
127
128 From LTE to LTE-Advanced Pro and 5G
downlink paging delay and NAS signaling delay) to the connected state should
be less than 100 ms according to 3GPP TS 36.913. The state transitions be-
tween these two states are shown in Figure 6.1.
These two states define the RRC state machine implemented in the UE
and the eNodeB. Likewise, the EPC [2] maintains two different contexts for the
UE known as the EPC mobility management (EMM) (mobility management
context) and EPC connection management (ECM) (connected management
context), each of which is handled by state machines located in the UE
and the MME. The EMM ensures that the MME maintains the location data
necessary to offer service to the UE when required. The two EMM states main-
tained by the MME are EMM-deregistered and EMM-registered. The ECM
states describe a UE’s current connectivity status with the EPC, for example,
whether an S1 connection exists between the UE and EPC. There are two ECM
states, ECM-idle and ECM-connected. However, these are not the subject of
discussion in this chapter.
R s = Qmeas ,s + Q hysts
Rn = Qmeas ,n − Qoffsets ,n (6.1)
where Qmeas,s and Qmeas,n are either the RSRP or the RSRQ measured by UE for
the serving cell and the neighbor cells, respectively. Qhysts specifies the hysteresis
value for the ranking criteria. Qoffsets,n specifies the offset between the serving cell
and the neighbor cell. If Qhysts is changed, it will impact the selection relation
between the serving cell and all the neighbor cells. So if only reselection be-
tween one pair of cell needs to be adjusted, the related offset parameter Qoffsets,n.
should be tuned.
The UE also monitors the serving cell system information messages and
paging or notification messages. The system information messages convey all
the cell and system parameters. Any changes in these parameters that may affect
the service level provided by the cell, or access rights to the cell can provoke a
cell reselection or a PLMN reselection. Paging or notification messages result in
connection establishment. The information that the UE uses for cell reselection
includes the frequencies, technologies, and cells that it should consider for rese-
lection. This information is provided in system information blocks (SIBs) 3 to
8 [1]. The specific SIBs used depend on the frequency and technology options
130 From LTE to LTE-Advanced Pro and 5G
that will be considered. The information regarding which SIBs are used for each
technology and neighbor cell types are provided in [1]. However, the informa-
tion relating to intrafrequency LTE cells is split between SIB Type 3 and SIB
Type 4, and that relating to LTE interfrequency cells are provided in SIB Type
5. Such information may optionally include a neighbor list. For each neighbor
in the neighbor cell list, the physical layer ID and a cell-specific reselection
offset are provided. Note that since the UE is required to scan and detect neigh-
bors on a given frequency, the operator may choose not to include the neighbor
cell list. Additionally, a black cell list may be included as well. Each entry in the
black cell list is either a single physical cell ID or a range of physical cell IDs.
This list can be used by an operator to prevent reselection to cells detected by
the UE on the given interfrequency. SIB Type 7 carries the information for
GSM cells, and it consists of a list of absolute radio frequency channel numbers
(ARFCNs) and an accompanying bit map identifying allowed values for the
NCC element in the BSIC. Thus, unlike a standard GSM neighbor cell list, it
does not list specific BSICs. The SIB Type 8 includes information specific to
frequency layers and reselection parameters for non-3GPP technology.
In the strategy for cell reselection, LTE allows for the use of RAT/fre-
quency prioritization. Each frequency layer belonging to either E-UTRA or any
other radio access technology (RAT) that the UE may be required to measure is
assigned a priority. Priority levels are allocated a value between 0 and 7, where 7
is the highest priority. This priority information is cell-specific and is conveyed
to the UEs via system information messages. The different frequency layers
within LTE may be assigned the same priority, but priorities may not be equal
for different radio access technologies. Additionally, UE-specific values can be
assigned by the user, which then take priority over the values provided in the
system information messages. The UE will then not consider any frequency
layers that do not have a priority in the cell reselection process. These measure-
ment rules are utilized for reducing unnecessary neighbor cell measurements.
The measurements defined depend on the RAT; for example, this would be
RSRP or RSRQ for LTE and RSCP or Ec/No for UMTS. The measurement re-
porting can be set as either periodical or event-based and the setting will include
the details of appropriate events such as thresholds and timers. Each defined
reporting configuration is tagged with a report configuration ID.
The frequency/RAT priority level and system thresholds are used to en-
sure that the UE measures cells in a higher priority layer in the cell reselection
process unless the quality of the currently selected layer becomes unacceptably
poor. The UE will also apply scaling to Treselection, hysteresis, and offset values
dependent on an assessment of its mobility state, which may be high, medium,
or low. The Treselection parameter defines the delay in the preselection to a
better ranked cell to avoid instability. This is based on consideration of resent
reselection frequency through operator specified thresholds.
Radio Resource Control and Mobility Management 131
Fn = (1 − a ) ∗ Fn −1 + (a ∗ Measn ) (6.2)
where
The filter coefficient can be configured independently for the RSRP and
the RSRQ. The five events defined as the criteria to choose for intrasystem
handovers are as follows:
• Event A1: Event A1 is triggered when the serving cell becomes better
than a threshold. The event is triggered when the following condition
is true:
• Event A5: Event A5 is triggered when the serving cell becomes worse
than threshold 1 while a neighboring cell becomes better than threshold
2. The event is triggered when both of the following conditions are true:
Radio Resource Control and Mobility Management 135
In the above listed criteria, the following notations have been defined.
• Event B2: This event is triggered when the serving cell becomes worse
than threshold 1 while a neighboring intersystem cell becomes better
than threshold 2. The event is triggered when both of the following
conditions are true:
Fn = (1 − a ) ∗ Fn −1 + (a ∗ Measn ) (6.3)
where Measn is the measured quantity (RSRP or RSRQ) after the L1 filtering at
measurement instant n, and Fn is its L3 filtered value at measurement instant
n, as also discussed in Section 6.1.3.1. The parameter a (filter coefficient) de-
termines the relative influence of the recent and the older measurements and
138 From LTE to LTE-Advanced Pro and 5G
is also called the forgetting factor and determines the filter length. This coef-
ficient can be adaptively chosen depending on the degree of correlation present
in successive measurement samples to average out the fast fading and follow
only the lognormal shadowing. At high speed, for example, the samples are not
highly correlated; therefore, it would be more accurate to have a shorter filtering
length than for slow-speed users in order to follow the lognormal shadowing.
In this way the ping-pong handovers (i.e., the unnecessary handovers) can be
eliminated through sufficient filtering of the handover trigger measurements.
Similar effects are achieved in case of increased downlink bandwidth, which
results in increased layer 1 filtering. In fact, the simulation results presented
in [7] have shown a 30% decrease in the average number of handovers when
the downlink bandwidth is increased from 1.25 MHz to 5 MHz at the mobile
speed of 3 km/hr while with a negligible change in the number of handovers at
mobile speed of 120 km/hr. The results also show that the gain of using larger
measurement bandwidth at 3 km/hr can also be achieved by using longer L3
filtering period. This improvement in reduced number of handovers has also
resulted in a small reduction, around 0.5 dB in the downlink signal quality (due
to delayed handovers). Nevertheless, the study presented in [7] confirm that
the amount of filtering as set by the filter coefficient a can affect the number of
handovers and the probability of unnecessary as well as delayed handovers and
hence must be tuned up taking into account the mobile speed and the propaga-
tion environment.
The L3 filtering can be performed in either the decibel or the linear do-
main. The filtering is said to be done in decibel or linear domain when the
measurements used are expressed in decibels or linear units, respectively. How-
ever, the results in [7] showed that there is negligible difference observed in the
number of handovers between linear and decibel filtering at slow mobile speeds
of around 3 km/h, but with a small reduction achieved in the number of han-
dovers with linear filtering at the higher speeds of 120 km/hr. The results also
show that better gains are achieved in reducing the number of handovers when
the signal strength RSRP is used instead of the signal quality for the handover
trigger measurements.
eters as they also determine when the handover is initiated. When the handover
threshold decreases, the probability of a late handover decreases and the ping-
pong effect increases. It can be varied according to different scenarios and prop-
agation conditions to make these trade-offs and obtain a better performance.
The hysteresis margin is the main parameter that governs the HO algo-
rithm between two eNodeBs. The handover is initiated if the link quality of a
neighbor cell is better than that of the serving cell by the hysteresis value. It is
used to avoid ping-pong effects. However, it can increase handover failure since
it can also prevent timely necessary handovers. The TTT acts to reduce the
number of unnecessary handovers and effectively avoid ping-pong effects, but
it can also delay the handover, which then increases the probability of hando-
ver failures. These handover parameters need to be optimized for good perfor-
mance. Too low hysteresis and TTT values in fading conditions result in back
and forth ping-pong handovers between the cells. Too high values can cause call
drops during handovers as the radio conditions get too bad for transmission in
the serving cell. Therefore, the optimal setting depends on UE speed, cell plan,
propagation conditions, and the system load.
The impact of the hysteresis and TTT parameters on the handover
performance have been investigated for the most common 3GPP scenarios
through simulations based on the downlink RSRP measurements in [8, 9]. In
the simulations, the optimal settings for each scenario have been investigated
and performance evaluation carried out using the number of handovers, SINR,
throughput, delay, and packet loss. The results have confirmed that with rela-
tively large cells, smaller hysteresis and a larger TTT trigger more handovers.
This is expected as the larger hysteresis margin would be harder to meet for
bigger cells. For slow-moving UEs, the setting with small hysteresis and long
TTT is easier to trigger handovers since it takes a long time for the slow UE to
meet the large hysteresis condition. The results in [8] have shown that for UE
speeds of 3 km/hr and 30 km/hr, the setting of 3 dB and 960 ms for the hys-
teresis and TTT and a cell radius of 500m provide good performance, whereas
this changes to 6 dB and 960 ms for the UE speed of 120 km/hr. However, the
study concludes that some medium triggering settings, such as 3 dB and 960
ms, usually result in good performances for all the scenarios and can be widely
used as a simple way to improve the handover performance. Nevertheless, a very
important factor that influences the handover performance is the UE speed.
For high-speed railway networks, trains go through the cells frequently and
will perform handover frequently. That can increase call drops and handover
failure rate. A SON-based algorithm is presented in [10], which picks the best
hysteresis and time-to-trigger combination for the changing radio propagation
condition due to UE speed and the environment. The results show an improve-
ment from the static value settings.
140 From LTE to LTE-Advanced Pro and 5G
where Tsearch is the time required to search the target cell when the target cell is
not already known when the handover command is received by the UE. If the
target cell is known, then Tsearch = 0 ms. If the target cell is unknown and signal
quality is sufficient for successful cell detection on the first attempt, then Tsearch
= 80 ms. Regardless of whether DRX is in use by the UE, Tsearch is still based on
non-DRX target cell search times.
TIU is the interruption uncertainty in acquiring the first available PRACH
occasion in the new cell. TIU can be up to 30 ms. The actual value of TIU will
depend upon the PRACH configuration used in the target cell.
In the above, a cell is known if it has been meeting the relevant cell iden-
tification requirement during the last 5 seconds; otherwise, it is unknown [12].
Tinterrupt = Tiu + Tsynch + 150 + 10 ∗ Fmax if target cell is not known (6.6)
The three parameters T, nB, and UE_ID, equal to the UE IMSI modulo
1,024 are used in creating time diversity for the sending of paging messages.
In other words, they spread out in time the opportunities for paging and in
this way limit the scheduling conflicts while allowing the UEs to go into DRX
mode and reduce their power consumption. The UE_ID splits the UE popula-
tion into groups with identical paging occasions. All UEs with the same UE_ID
(defined as IMSI modulo 1,024) are paged within the same unique paging
occasion. However a given paging occasion is always shared by at least four
UE_ID groups, and generally much more (in the worst case, all UEs share a
single paging occasion).
The paging occasion is determined by an index called i_s, which points to
PO from the subframe pattern defined in the following tables (Tables 6.1 and
6.2) for LTE FDD and LTE TDD.
In these tables, i_s, N, and Ns are calculated from the following formulas:
N = min (T ,nB )
Ns = max (1,nB T )
(6.7)
i _ s = floor (UE _ ID N ) mod Ns
UE _ ID = IMSI mod 1,024
The IMSI is given as a sequence of digits of type Integer (0..9) and in the
formula above is interpreted as a decimal integer number, where the first digit
given in the sequence represents the highest order digit. The System Informa-
tion DRX (discontinuous reception mode) parameters, T and nB, stored in
the UE are updated locally in the UE whenever they are changed in the system
information messages. If the UE has no IMSI, for instance, when making an
Table 6.1
Subframe Pattern to Determine PO in LTE FDD Mode
Table 6.2
Subframe Pattern to Determine PO in the LTE TDD Mode
6.4 DRX
In addition to the DRX for UEs in the idle mode, LTE also supports DRX
for UEs in the RRC connected state. The DRX function is characterized by a
DRX cycle, an on-duration period, and an inactivity timer. The UE wakes up
and monitors the PDCCH at the beginning of every DRX cycle for the entire
on-duration period. If no scheduling assignment is received, the UE falls asleep
again. Whenever the UE receives an assignment from the network, it starts (or
restarts) the inactivity timer and continues to monitor the PDCCH until the
timer expires. This process is controlled by the MAC and RRC protocols. The
parameters are set by RRC but it is the MAC layer that operates the process
itself. The onDurationTimer defines the length of time that the UE is active
and monitoring the downlink control channels when DRX is running. This
operates in conjunction with a DRX cycle that defines the amount of time that
the UE can be off. It is to be noted that the HARQ operation overrides the
DRX function. Thus, the UE wakes up for possible HARQ feedback, as well as
for possible retransmissions during a configurable amount of time as soon as a
retransmission can be expected.
There are two DRX cycles defined for a UE known as the long DRX-Cy-
cle and the short DRX-Cycle. The long DRX-Cycle is the default value. When
a period of activity is started through the scheduling of resources for the UE’s
C-RNTI, the UE starts the DRX-InactivityTimer. If the UE remains active
long enough for the DRX-InactivityTimer to expire, or if it receives a MAC CE
on which it may have to act, then when the activity stops, the UE uses the short
DRX-Cycle period and start also the DRXShortCycleTimer. If no further activ-
ity takes place before the DRXShortCycleTimer expires, then the UE reverts to
the long DRX-Cycle period.
terminals to arrive at the base station with a timing misalignment less than the
length of the cyclic prefix and hence result in the loss of orthogonality between
subcarriers and cause some interuser interference. Such interuser interference
will worsen when the different users transmission are received at the base sta-
tion with different power levels due to different paths losses from terminals in
different locations and with different radio conditions. If two terminals with
different radio link conditions transmit with the same power, the received sig-
nal strengths may thus differ significantly, causing a potentially significant in-
terference from the stronger signal to the weaker signal unless the subcarrier
orthogonality is perfectly retained. To avoid this, at least some degree of uplink
transmission-power control is needed on the uplink, to help, for instance, re-
duce the transmission power of user terminals close to the base station and
ensure all user signals are received at the base station with approximately the
same level as needed. Hence, LTE provides for uplink power control on both
the PUSCH and PUUCCH.�
where Pmax is the maximum UE power, which is 23 dBm, and P0 is the required
power in the Node B in a single resource block for a reference modulation cod-
ing scheme, which is broadcast to the UE and used for initial power setting;
this is cell-defined and will generally depend on the instantaneous uplink noise/
interference level and therefore vary with time. M is the number of resource
blocks, α is a cell specific parameter between 0 and 1 that enables use of frac-
tional power control and is broadcast to the UE, PL is the estimated downlink
path loss calculated at the UE, and δMCS is an MCS-dependent offset applied relative to the P0 value for the actual MCS being used, which also depends
on the transport format selected.
ƒ(∆i) is a function that applies relative (cumulative) or absolute corrections to the UE power. The UE calculates the transmit power to be used in each
subframe in which it has a resource allocation according to the above formula.
The use of Pmax − Ppucch reflects the fact that the transmit power available
for PUSCH on a carrier is the maximum allowed per-carrier transmit power
after power has been assigned to any PUCCH transmission on that carrier [14].
This ensures priority of L1/L2 signaling on PUCCH over data transmission on
PUSCH in the power assignment. The term 10·log10(M) scales the controlled power per resource block, P0,PUSCH, to the power required to transmit
the total number of resource blocks contained within the PUSCH. The initial
open loop path loss (PL) is estimated using the transmit reference symbols,
measured at the UE. The closed loop component is contained in ∆i, where ∆i
= (SINRTarget − SINREstimated). At the eNodeB, SINR is measured and the UE
transmit power adjusted to meet the target SINR. Adjustments are sent to the
UE via the transmit power control (TPC) commands.
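A minimal numeric sketch of the per-subframe PUSCH power setting in (6.8) follows. The function name and the example parameter values are illustrative assumptions, not anything defined by 3GPP; the point is simply how the open-loop terms and the closed-loop correction combine and how the result is capped by the maximum UE power.

```python
import math

def pusch_power_dbm(M, P0, alpha, PL, delta_mcs, f_i, Pmax=23.0):
    """PUSCH transmit power per (6.8), in dBm.
    M: allocated resource blocks; P0: target power per RB (dBm);
    alpha: fractional path loss compensation factor (0..1); PL: estimated DL path loss (dB);
    delta_mcs: MCS-dependent offset (dB); f_i: accumulated closed-loop TPC correction (dB)."""
    return min(Pmax, 10 * math.log10(M) + P0 + alpha * PL + delta_mcs + f_i)

# Example: 10 RBs, P0 = -100 dBm, full compensation, 120-dB path loss
# -> 10 + (-100) + 120 = 30 dBm, capped at the 23-dBm UE maximum
print(pusch_power_dbm(M=10, P0=-100.0, alpha=1.0, PL=120.0, delta_mcs=0.0, f_i=0.0))
```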
The variation of the parameters P0 and α provides a trade-off between
absolute cell performance and overall system performance. The initial power
setting P0 is composed of an 8-bit cell specific nominal component P0Nominal
and a 4-bit UE specific value P0UE. These two parameters and the path loss
compensation parameter α are key RF optimization parameters. Higher set-
tings of the P0 parameters improve PUSCH reception but result in higher
UE transmit power, leading to more interference to neighboring cells, and vice
versa. The parameter α can take a value of 1 or smaller; with values smaller than 1, the power
control operates with partial path loss compensation. With partial path loss
compensation, an increased path loss is not fully compensated for by a cor-
responding increase in the uplink transmit power. That would then result in
inadequate SINR at the receiver, which, in turn, causes the receiver to vary the
scheduled modulation-coding scheme accordingly. This means that the partial
PL compensation favors more the UEs that are in good radio propagation con-
dition and allows them to obtain a higher spectral efficiency, whereas less power
is spent on UEs in less favorable conditions, thereby mitigating the interference to UEs in neighbor cells [15]. In the case of partial path loss compensation, the ∆MCS term should then be disabled to prevent a further reduction
of the terminal power, which would otherwise occur as the base station would
try to reduce the offset to a value consistent with the reduced modulation-
coding scheme.
The reduced modulation-coding scheme with partial path loss compensation results in
reduced data rates in places such as the cell border. However, the benefit is a relatively lower transmit power for terminals close to the cell border and hence less
Ppucch = min{Pmax, P0,pucch + PLDL + ∆format + σ}        (6.9)
Table 6.3
The CQI and Its Mapping to MCS
CQI Index   Modulation     Code Rate × 1,024   Efficiency
0           Out of range
1           QPSK           78                  0.1523
2           QPSK           120                 0.2344
3           QPSK           193                 0.3770
4           QPSK           308                 0.6016
5           QPSK           449                 0.8770
6           QPSK           602                 1.1758
7           16QAM          378                 1.4766
8           16QAM          490                 1.9141
9           16QAM          616                 2.4063
10          64QAM          466                 2.7305
11          64QAM          567                 3.3223
12          64QAM          666                 3.9023
13          64QAM          772                 4.5234
14          64QAM          873                 5.1152
15          64QAM          948                 5.5547
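For reference, Table 6.3 can be captured directly as a lookup, for example when post-processing CQI reports in link-level analysis scripts. The helper below is a simple illustrative utility, not part of any standardized API.

```python
# CQI-to-MCS mapping taken from Table 6.3 (CQI 0 means "out of range")
CQI_TABLE = {
    1: ("QPSK", 78, 0.1523),    2: ("QPSK", 120, 0.2344),   3: ("QPSK", 193, 0.3770),
    4: ("QPSK", 308, 0.6016),   5: ("QPSK", 449, 0.8770),   6: ("QPSK", 602, 1.1758),
    7: ("16QAM", 378, 1.4766),  8: ("16QAM", 490, 1.9141),  9: ("16QAM", 616, 2.4063),
    10: ("64QAM", 466, 2.7305), 11: ("64QAM", 567, 3.3223), 12: ("64QAM", 666, 3.9023),
    13: ("64QAM", 772, 4.5234), 14: ("64QAM", 873, 5.1152), 15: ("64QAM", 948, 5.5547),
}

def cqi_to_mcs(cqi: int):
    """Return (modulation, code rate x 1,024, efficiency), or None for CQI 0."""
    return CQI_TABLE.get(cqi)

print(cqi_to_mcs(7))   # ('16QAM', 378, 1.4766)
```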
References
[1] 3GPP TS 36.331, “Evolved Universal Terrestrial Radio Access (E-UTRA); Radio Resource
Control (RRC); Protocol Specification, Release 8, version 8.4.0,” December 2008.
[2] 3GPP TS 23.401, “Evolved Universal Terrestrial Radio Access Network Access, Architec-
ture Description, Release 12, V11.0.0,” 2012.
[3] 3GPP TS 36.300, “Evolved Universal Terrestrial Radio Access (E-UTRA) and Evolved
Universal Terrestrial Radio Access Network (E-UTRAN); Overall Description; Stage 2,
Version 8.7.0,” December 2008.
[4] 3GPP TS 36.304, “Evolved Universal Terrestrial Radio Access (E-UTRA), UE Procedures
in Idle Mode, Release 12, V12.5.0,” 2015.
[5] 3GPP TS 36.214, “Evolved Universal Terrestrial Radio Access (E-UTRA) Physical Layer
- Measurements, Version 8.5.0,” December 2008.
[6] Dimou, K., et al., “Handover Within 3GPP LTE: Design Principles and Performance,”
Ericsson Research/IEEE, 2009.
[7] Anas, M., et al., “Performance Analysis of Handover Measurements and Layer 3 Filtering
for UTRAN LTE,” Proc. of PIMRC, September 2007, pp. 1–5.
[8] Iñiguez Chavarría, J. B., “LTE Handover Performance Evaluation Based on Power Bud-
get Handover Algorithm,” Master’s thesis, Universitat Politècnica de Catalunya, February
2014.
[9] Luan, L., et al., “Optimization of Handover Algorithms in LTE High-Speed Railway Net-
works,” March, 2012, www.aicit.org/JDCTA/ppl/JDCTAVol6No5_part10.pdf.
[10] Jansen, T., and I. Balan, “Handover Parameter Optimization in LTE Self-Organizing
Networks,” Vehicular Technology Conference Fall, Fall 2010.
[11] Racz, A., A. Temesvary, and N. Reider, “Handover Performance in 3GPP Long Term
Evolution (LTE) Systems,” 16th IST Proc. of Mobile and Wireless Communications Summit
2007, July 2007, pp. 1–5.
[12] 3GPP TS 36.133, “Requirements for Support of Radio Resource Management, Release 8,
V8.9.0,” 2010.
[13] Linked CRs to TS 25.214, TS 25.331, and TS 25.133 for Faster L1 DCH Synchronization,
TSG RAN WG1, WG2, and WG4, TSG RAN Meeting #28, Quebec, Canada, Release 6,
Category B, June 1–3, 2005.
[14] Dahlman, E., S. Parkvall, and J. Sköld, 4G: LTE/LTE-Advanced for Mobile Broadband,
New York: Academic Press, 2011.
[15] Salo, J., M. Nur-Alam, and K. Chang, “Practical Introduction to LTE Radio Planning,”
web listed white paper, November 2010.
[16] 3GPP TS 36.213, “Evolved Universal Terrestrial Radio Access (E-UTRA) Physical Layer
Procedures, Release 8,” 2009.
7
Intercell Interference Management in
LTE
Because LTE is also based on the frequency reuse concept, it is subject to inter-
cell interference. Intercell interference arises when users in neighboring cells are
assigned the same resource blocks in the same time period, that is, the same set
of subcarrier frequencies over the same transmission time interval (TTI), and fundamentally can happen for all user locations on both the uplink and the downlink.
However, the interference on the downlink side is not expected to be as critical
as it could be on the uplink side. The intercell interference is most critical on
the uplink and particularly for outer cell (cell edge) located UEs. On the up-
link, an outer UE transmits with a significantly higher power than an inner UE
(closer to the antenna) due to the fact that power control is applied to overcome
the larger path loss. The outer cell UEs are also the closest to the interfered neighboring base station, and the situation can become even worse when there is a high
degree of asymmetry in the radio frequency (RF) geometry of neighboring cells.
The situation is not as bad on the downlink. On the downlink, the interference
is more evenly spread out over the cell than on the uplink as the base stations
transmit with a rather evenly distributed power to all the UEs. This means any
cell border area UE experiences interference from base stations that are typically
at least a cell radius away. Also, the interference that a UE experiences in any
given location is the same regardless of whether the interfering base station is transmitting to an interior or a cell-edge UE in the interfering cell. Nevertheless, a basic
idea that has been considered in dealing with multicell interference at cell edges
on the downlink side is to allow multicell transmissions. In this scheme, the cells
interfering with each other at the border areas are all used for transmission of
the same information to a UE, as in soft handover in UMTS. This is what is
done in the case of the MBMS and results in a spatial diversity gain through
soft combining, which helps to reduce the necessary transmit power. Multicell
transmission requires tight coordination and intercell synchronization, with timing errors substantially smaller than the cyclic prefix, and hence is most easily realized
for cells belonging to the same eNodeB. However, the use of beam-forming antenna
solutions at the base station is also a general method that can be seen as a means
for downlink intercell interference mitigation.
The 3GPP standards have specified and investigated three different
mechanisms for dealing with multicell interference on both the uplink and
the downlink in LTE systems. These are based on interference cancella-
tion, interference randomization, and interference avoidance through intercell
coordination measures in the resource assignment processes [1]. Each of these
mechanisms can require a different design and the signaling of different infor-
mation between the nodes and/or between the nodes and the UEs depending
on if they are to be implemented for the uplink or the downlink side.
a new load information message carrying an update. The HII, OI, and the
RNTP indicators would be updated about every 100 ms [1] to prevent an ex-
cess signaling load.
Centralized dynamic schemes for interference management are not encouraged in LTE due to the absence of the RNC, although the
mechanisms could be implemented in the OSS/network management center
at the expense of backhaul transmissions and the reduced reliability of a single
centralized architecture.
The distributed dynamic ICIC schemes implement the dynamic resource
scheduling algorithms within the nodes without the use of a central control
entity. This is done in either an autonomous manner by each node or by a co-
ordinated manner using an internode exchange of load and resource usage sta-
tus within neighboring nodes. The distributed dynamic schemes are also more
in line with the 3GPP consensus that dynamic (event-triggered) schemes are
superior to static schemes that limit the applied power level on a subset of the
resource blocks via planning, irrespective of the momentary usage of the same
resource blocks in the neighbor cells.
In the autonomous distributed interference avoidance schemes as inves-
tigated in [11], each node (eNodeB) aims to reduce the interference that it ex-
periences without regard to other nodes. In this scheme, each node periodically
measures the interference across all PRBs on the uplink transmission on a small
time scale of a subframe (1 ms) with no need for signaling with the UE or other
node Bs. The measurements are processed through some exponential averaging
process (i.e., short memory averaging) to derive a measure of the latest interfer-
ence present on each resource block. Then the results are used to replace existing assigned PRBs with cleaner ones, as long as the interference reduction achieved on the
new resource exceeds a preset hysteresis threshold. The latter check helps
to bring some stability to the process against small-scale fluctuations. In order
to reduce the possibility that multiple nodes simultaneously decide to replace a
resource with the same, less interfered PRB and hence generate a severe intercell
interference case, each node makes its decision to update an allocated
resource only with a small probability of, say, 10%. This probability introduces an
implicit measure of collision avoidance into the resource selections of neighboring nodes and, if chosen properly, results, on average, in only one node in
a neighborhood switching to a better resource, thus helping to avoid conflicts
with high interference.
The autonomous interference coordination scheme cannot guarantee
convergence to the system-wide global minimum due to the simple fact that
interference impacts are not symmetric between cells. Specifically, switching
one UE in one cell of a node from an interfered PRB to a better one might lead
to lower interference in that cell, but the total system-wide interference can
increase, depending on UE positions and resource usage patterns in the cells of
the neighboring nodes. This will happen if the reduced interference in the cell
under consideration does not outweigh any increase in the sum global interfer-
ence experienced by all users. Nevertheless, the scheme as shown by simulation
results in [11] does help to significantly reduce the intercell interference.
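The autonomous distributed scheme just described can be sketched as a small per-cell update routine. The following is only an illustrative sketch under assumed parameter values (averaging factor, hysteresis, switching probability) and simplified data structures; a real eNodeB scheduler would maintain far richer state.

```python
import random

def icic_update(assigned, measured, averaged, beta=0.2,
                hysteresis_db=1.0, switch_prob=0.1):
    """One update of the autonomous distributed ICIC scheme.
    assigned: set of PRB indices currently used by the cell;
    measured: dict PRB -> latest uplink interference measurement (dB);
    averaged: dict PRB -> exponentially averaged interference (dB)."""
    # short-memory exponential averaging of the per-PRB interference
    for prb, x in measured.items():
        averaged[prb] = (1 - beta) * averaged.get(prb, x) + beta * x
    # attempt a replacement only with a small probability (implicit collision avoidance)
    if random.random() >= switch_prob:
        return assigned
    free = [p for p in averaged if p not in assigned]
    if not free or not assigned:
        return assigned
    worst = max(assigned, key=lambda p: averaged[p])
    best = min(free, key=lambda p: averaged[p])
    # switch only if the interference reduction exceeds the hysteresis threshold
    if averaged[worst] - averaged[best] > hysteresis_db:
        assigned.remove(worst)
        assigned.add(best)
    return assigned
```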
the nodes rather than being configured in a planned manner via the network O&M,
for instance. The HII bit map can be used to specify the resource portion that
is intended for cell-edge user scheduling in neighboring nodes and hence avoid
similar usage in the receiving node and help to avoid highly interfering colli-
sions. Similarly, the OI can be used to request neighbor nodes to refrain from
scheduling UEs on the highly interfered resources at all, or at least to refrain from
placing the cell-edge users on those RBs, depending on the load level indicated on
them by the neighbors.
7.5 Conclusions
It should be evident that intercell interference coordination and avoidance
is a complex issue with a wide range of possible solutions. Finding a single solution that would be optimum for all possible situations is not within the scope of 3GPP
standardization. Nevertheless, the 3GPP has extensively investigated a range of
intuitively appealing and feasible interference coordination algorithms using
advanced system simulations [12]. The outcome of these efforts is expected to
provide a deep understanding of the trade-offs involved and a broad consensus
regarding the time scale at which practical ICIC schemes should operate. Moreover, the identification of interfaces and a flexible suite of signaling information
elements should allow the continued evolution of the technology. From a system design and commercial feasibility perspective, ICIC mechanisms that
build on markedly low-complexity heuristics, with or without intercell communication, should be particularly attractive.
References
[1] 3GPP TR 25.814 V2.0.0, “Physical Layer Aspects of Evolved UTRAN, Rel 7 Physical
Layer Aspects for Evolved UTRA, Release 7.”
[2] 3GPP TS 36.423, “Evolved Universal Terrestrial Radio Access Network (EUTRAN); X2
Application Protocol (X2AP),” June 2008.
[3] Koutsimanis, C., G. Fodor, et al., “Intercell Interference Coordination in OFDMA Networks and in
the 3GPP Long Term Evolution System,” Ericsson Research.
[4] 3GPP R1-050738, “Interference Mitigation - Considerations and Results on Frequency
Reuse,” Siemens, September 2005.
[5] 3GPP R1-060291, “OFDMA Downlink Inter-Cell Interference Mitigation,” Nokia, Feb-
ruary 2006.
[6] 3GPP TSG-RAN WG1, “On Inter-Cell Interference Coordination Schemes without/
with Traffic Load Indication,” R1-074444.
[7] Rahman, M., H. Yanikomeroglu, and W. Wong, “Interference Avoidance with Dynamic
Inter-Cell Coordination for Downlink LTE System,” Department of Systems and Com-
puter Engineering Carleton University, Ottawa, Canada, Communications Research Cen-
tre of Canada (CRC) Ottawa, Canada.
[8] 3GPP R1-050507, “Soft Frequency Reuse Scheme for UTRAN LTE,” Huawei, May
2005.
[9] 3GPP R1-050841, “Further Analysis of Soft Frequency Reuse Scheme,” Huawei, Septem-
ber 2005.
[10] Rahman, M., and H. Yanikomeroglu, “Interference Avoidance Through Dynamic
Downlink OFDMA Subchannel Allocation Using Intercell Coordination,” Proc. IEEE
VTC, May 2008, pp. 1630–1635.
[11] Ellenbeck, J., C. Hartmann, and L. Berlemann, “Decentralized Inter-Cell Interference
Coordination by Autonomous Spectral Reuse Decisions,” Proc. European Wireless
Conference, June 2008.
[12] 3GPP R1-074444, “On ICIC Schemes Without/With Traffic Load Indication,” October
2007.
8
SON Technologies in LTE
The self-organizing network (SON) is a feature specification of LTE networks
to provide the model for next-generation operations and business support
systems (OSS/BSS). The SON can help telecom carriers to reduce operating
expenses (OPEX) by automating the manual steps needed to configure and
operate their networks efficiently in an environment of increasingly high-volume, low-cost
services. The SON concept embraces both automated network configuration
in the deployment phases as well as automated tuning and self-healing in the
network operation phase [1]. In the deployment phases, for instance, when new
radio cells are dropped in, they are able to automatically structure and config-
ure themselves relative to their neighboring cells for proper integration into
the network. This is particularly handy in the flattened and distributed radio
access network of E-UTRAN where the manual configuration of many more
capable radio sites (eNodeBs) can be cumbersome and costly. The SON tech-
nology can also improve the user experience by optimizing the network more
rapidly and mitigating outages as they occur. These are very important capabilities because the time to repair is a very critical factor for every network operator.
Furthermore, through automated network performance tuning, the technology
can dynamically respond to changing network traffic distribution and inter-
ference geometry by tuning the equipment parameters and configurations in
real time and result in improved user experience, service quality, and improved
network capacity and resource utilization. The SON concept thus includes a
set of capabilities grouped into the functional domains of self-configuration,
self-optimization, and self-healing; the 3GPP-agreed SON use cases and
capabilities with respect to these functional areas are defined in [1]. There has
been, and still is, a significant amount of research and development carried out in
defining SON-based solutions for the E-UTRAN. However, as a guiding
reference, any SON solutions and methods for E-UTRAN should adhere to the
3GPP architectural reference model in [2, 3], which lists the requirements and
objectives for management systems.
8.2 Self-Configuration
The self-configuration process is defined as the process in which newly de-
ployed nodes (eNodeBs) are configured by automatic installation procedures to
get the necessary basic configuration for system operation [4, 5]. The SON self-
configuration specifications were completed in Release 8 [4]. The self-configuration
process works in the preoperational state, which lasts from when the eNodeB is
powered up and has backbone connectivity until the RF transmitter is switched
on. The self-configuration capability allows a newly deployed network element
to automatically establish the necessary security control channel with the servers
in the network to download the available version of the software release. Then
the element performs a self-test to determine that all is working as intended.
Finally, the network element is taken into service, but the configuration may
still be improved using self-optimization. The self-configuration also includes
the processes in which radio planning parameters are assigned to a newly deployed network node. The parameters in scope of self-planning include the tracking area code, cell global identity, neighbor cell relations, maximum transmit power values
of the UE and the base stations (eNodeBs in LTE), antenna tilts and azimuths,
and HO parameters such as the trigger thresholds, hysteresis, and cell offsets. The
parameter setting in the self-configuration phase may require negotiating with
the neighboring cells to adapt the new cell to the environment and possibly
also changing of the settings in the neighbors. As part of the self-configuration
process, the base station also runs a procedure known as S1 setup [6] to estab-
lish communications with each of the MMEs to which it is connected. In this
procedure, the base station informs the MME about the tracking area code and
PLMN identities of each of its cells, as well as any closed subscriber groups to
which it may belong. The MME replies with a message that indicates its globally unique identity, which the base station then uses in subsequent communication with the MME over
the S1 interface.
The network self-configuration features provided by SON dramatically
reduce the manual steps required when adding new or expanding existing network
elements by enabling plug-and-play hardware that is self-locating and self-con-
figuring. The self-configuration can be based on similarity detection, parameter
retrieval, and algorithmic postprocessing.
8.3 Self-Optimization
The self-optimization process is defined as the process where the UE and the
eNodeB load and performance measurements are used to auto-tune the net-
work. This process works in the operational state, which starts when the RF
interface is switched on. The tuning actions can include changing parameters,
thresholds, and neighborhood relationships. The self-optimizing mechanisms
include the radio resource management (RRM) processes of traffic and load
balancing, scheduling techniques, and intercell interference coordination. The
intercell interference coordination is a process in LTE where the UE measure-
ment reports can be used to vary power levels between eNodeBs to reduce in-
terference to cell-edge users. The self-optimizing features contribute to further
increasing the spectral efficiency of LTE networks because they can be used to
allocate capacity where it is needed. The degree of self-optimization that is de-
ployed determines the residual tasks that remain for network operators. In an
ideal case, the operator merely needs to feed the self-optimization methods with
a number of policies, which define its desired balance in the trade-offs that exist among the conflicting coverage, capacity, quality, and cost targets.
8.4 Self-Healing
Self-healing is automated fault management [7]. It monitors and analyzes
fault management data, alarms, notifications, and self-test results to automati-
cally trigger corrective action on the affected network node(s) when necessary.
The self-healing processes create models for self-diagnosis to learn and identify
trends from past experience, using, for instance, statistical naïve Bayesian classifiers and regression techniques on a database of historically solved problems from
past optimization experience. Then, when the most likely cause of a given
set of symptoms is identified in this way, the necessary corrective actions are triggered
automatically to solve the problem in near real time.
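As a toy illustration of the kind of statistical self-diagnosis mentioned above, a naïve Bayesian classifier over a database of previously solved problems could look like the sketch below. The alarm names and cause labels are hypothetical, and the code is an educational sketch rather than a product implementation.

```python
import math
from collections import Counter, defaultdict

def train(history):
    """history: list of (set_of_alarm_symptoms, root_cause) pairs from solved cases."""
    cause_counts = Counter(cause for _, cause in history)
    symptom_counts = defaultdict(Counter)
    for symptoms, cause in history:
        symptom_counts[cause].update(symptoms)
    return cause_counts, symptom_counts

def diagnose(symptoms, cause_counts, symptom_counts, alpha=1.0):
    """Return the most likely root cause (Laplace-smoothed naive Bayes)."""
    total = sum(cause_counts.values())
    best, best_score = None, float("-inf")
    for cause, n in cause_counts.items():
        score = math.log(n / total)
        for s in symptoms:
            score += math.log((symptom_counts[cause][s] + alpha) / (n + 2 * alpha))
        if score > best_score:
            best, best_score = cause, score
    return best

history = [({"rach_failures", "low_rsrp"}, "antenna_fault"),
           ({"s1_alarms", "paging_loss"}, "transport_fault"),
           ({"rach_failures", "low_rsrp"}, "antenna_fault")]
print(diagnose({"low_rsrp"}, *train(history)))   # -> 'antenna_fault'
```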
the UE). The detected cells, as indicated by their reported frequencies, can be
an LTE cell on the same frequency, an LTE cell on a different frequency,
or even a cell belonging to another RAT. To detect the interfrequency cells or
inter-RAT cells, the eNodeB needs to instruct the UE to do the measurement
on that frequency. The ANR functionality is divided into three areas: the
neighbor removal, neighbor detection, and neighbor relation table
management functions. The first two functions decide whether to remove an
existing neighbor relation or to add a new neighbor relation. The cell removals
are based on statistical analysis of handover behavior and the success rate. The
third function is responsible for updating the neighbor relation table according
to the input of the previous two functions and the OAM.
SON algorithms. This indicates that not all the SON use cases may be able to
run in parallel, and thus there is a need for a SON coordinator/controller [16]
to effectively handle the triggering of the SON use cases.
References
[1] 3GPP Specifications for SON 3GPP TS 36.902, “Evolved Universal Terrestrial Radio Ac-
cess Network (E-UTRAN); Self-Configuring and Self‐Optimizing Network (SON) Use
Cases and Solutions,” Release 9, V9.3.1, 2013.
[2] 3GPP TS 32.101, “Telecommunication Management; Principles and High Level Require-
ments,” V8.1.0, 2007.
[3] 3GPP TS 32.500, “Telecommunication Management; Self-Organizing Networks (SON);
Concepts and Requirements.”
[4] 3GPP TS 32.501, “Telecommunication Management; Self‐Configuration of Network El-
ements; Concepts and Integration Reference Point (IRP) Requirements.”
[5] 3GPP TS 32.502, “Telecommunication management; Self‐Configuration of Network El-
ements Integration Reference Point (IRP); Information Service (IS).”
[6] 3GPP TS 36.413, “Evolved Universal Terrestrial Radio Access Network (E-UTRAN); S1
Application Protocol (S1AP), Release 11,” 2013.
[7] 3GPP TS 32.541, “Telecommunication Management; Self-Organizing Networks (SON);
Self-Healing Concepts and Requirements,” V12.0.0, Release 12.
[8] 3GPP TS 32.511, “Telecommunication Management; Automatic Neighbour Relation
(ANR) Management; Concepts and Requirements,” 3GPP TS 36.300.
[9] 3GPP TS 32.521, “Telecommunication Management; Self-Organizing Networks (SON)
Policy Network Resource Model (NRM) Integration Reference Point (IRP),” V11.1.0,
2012.
[10] TR 32.836, “Study on Network Management (NM) Centralized Coverage and Capacity
Optimization (CCO) Self-Organizing Networks (SON) Function,” http://www.3gpp.
org/DynaReport/32836.htm.
[11] Bandh, T., G. Carle, and H. Sanneck, “Graph Coloring Based Physical-Cell-ID Assignment
for LTE Networks,” 2009 International Conference on Wireless Communications and Mobile
Computing: Connecting the World Wirelessly, June 2009.
[12] Feng, S., and E. Seidel Nomor, “Self-Organizing Networks (SON) in 3GPP Long Term
Evolution,” Research GmbH, Munich, Germany, May 20, 2008.
[13] 3GPP TS 36.423, “Evolved Universal Terrestrial Radio Access Network (E-UTRAN); X2
Application Protocol (X2AP), Release 11,” Sections 8.3.9-8.3.10, September 2013.
[14] R3-081165, “Solutions for the Mobility Robustness Use Case,” http://www.3gpp.org/ftp/
tsg_ran/WG3_Iu/TSGR3_60/Docs/R3-081165.
[15] Kitagawa, K., et al., “A Handover Optimization Algorithm with Mobility Robustness for
LTE Systems,” IEEE 22nd International Symposium on Personal Indoor and Mobile Radio
Communications (PIMRC), September 2011.
[16] Socrates Research Project, “FP7 SOCRATES Final Workshop on Self Organization in
Mobile Networks,” February 2011, http://www.fp7-socrates.org/files/Presentations/
SOCRATES_2010_NGMN%20OPE%20workshop%20presentation.pdf.
[17] 3GPP TR 36.805, “Technical Specification Group Radio Access Network Study on
Minimization of Drive-Tests in Next Generation Networks, Release 9,” V2.0.0 2009.
9
EPC Network Architecture, Planning,
and Dimensioning Guideline
The evolved packet core (EPC) refers to the long-term evolution (LTE) packet
core network, which is responsible for the overall control of the UE and the
establishment of the bearers. It provides session, mobility, and quality-of-ser-
vice (QoS) management and enables the operators to connect users to applica-
tions in their service delivery environment, on the Internet, and on corporate
networks. The EPC thus provides the interface to packet data networks such
as the Internet and other Internet Protocol (IP)-based packet communication
networks and was introduced by 3GPP in Release 8 of the standard. The EPC,
together with the LTE radio access network (E-UTRAN), which is made up of the
eNodeBs, constitutes what is referred to as the system architecture evolution
(SAE) in LTE. The SAE distributes the RRM functions to the eNodeBs and
removes the RNC and SGSN from the equivalent 3G network architecture
to make a simpler mobile network. This allows the network to be built as an
all-IP-based network architecture. The EPC, unlike the core network in GPRS
and UMTS, is a purely packet-switched network and uses the IP for all services
[1]. The protocols running between the UE and the EPC are known as the
Non-Access Stratum (NAS) Protocols. The evolved core network consists of the
mobility management entity (MME), the serving gateway (SGW), the home
subscriber server (HSS), the packet data network gateway (PGW), and the pol-
icy control and charging rules function (PCRF). These functional blocks are
specified independently by 3GPP, but in practice vendors may combine some of
them in a single box, such as a combined MME and HSS or a combined SGW
and PGW as an edge interface. The MME is homed to a number of SGWs
that are the data connection between the towers and the network. SGWs create
tunnels for a call to a packet gateway (PGW) that connects to the local Internet
service providers (ISPs). A call to a fixed network moves from handset to tower
via LTE radio access and from tower to packet gateway via a bearer tunnel that
passes through the serving gateways. The EPC can include two other functional
blocks that are related to the location positioning of the UEs. These are referred
to as the evolved serving mobile location center (E-SMLC) and the gateway
mobile location center (GMLC). Moreover, the EPC may include an IMS (IP
multimedia subsystem), which comprises all the necessary elements for provi-
sion of IP multimedia services comprising audio, video, text, and chat. How-
ever, the IMS is not an element of LTE, but we will briefly discuss it later in this
chapter. The LTE EPC network supports interfaces that allow interoperation
with various access technologies, in particular, the earlier 3GPP technologies
(GSM/EDGE and UMTS) as well as non-3GPP technologies such as WiFi,
CDMA2000, and WiMAX.
An objective of this chapter will be to provide a reasonable understanding
of the functions of each of these network elements and their relationships/inter-
faces and how the end-to-end connections are established and managed in LTE.
This will also provide a good background for help in modeling the traffic for di-
mensioning, which will be covered as well. We will also discuss the QoS classes
in LTE and their characterization in terms of treatment and performance.
and the Internet (PGW). The SGW also acts as the mobility anchor for inter-
3GPP communication through the S4 interface with SGSN, using the GTP-U
protocol. That is, it routes user data between the 2G/3G SGSN and the PGW
of the EPC. Moreover, it maintains information about the bearers when the UE
is idle and acts as a buffer for the downlink data when the MME is initiating
paging of the UE to reestablish the bearer.
In a way, the MME and SGW separate the control plane and the user
plane functions of the UMTS RNC (except that the RRM functions have moved
to the eNodeBs). By separating the signaling plane from the user data, a more
scalable and flexible architecture is obtained, which is easier for expansion with
the growth of traffic and signaling.
• Per-user based packet filtering (by, for example, deep packet inspection);
• Lawful interception;
• UE IP address allocation;
• Transport-level packet marking in the uplink and downlink (e.g., setting
the DiffServ code point), based on the QoS class identifier (QCI) of the
associated EPS bearer, for QoS control;
• UL and DL service-level charging based on PCRF rules, gating control,
rate enforcement as defined in 3GPP TS 23.203;
• UL and DL rate enforcement based on APN-AMBR;
• DL rate enforcement based on the accumulated MBRs of the aggregate
of SDFs with the same GBR QCI (e.g., by rate policing/shaping);
UE-associated signaling is assigned one SCTP stream and the stream is not
changed during the communication of the UE-associated signaling. In the ap-
plication layer, the S1-C interface is specified in 3GPP TS 36.41x series [5] and
is responsible for E-RAB management function of setting up, modifying, and
releasing E-RABs, which are triggered by the MME via a secure ciphered con-
nection. The release and modification of E-RABs may be triggered by the eNB
as well. The S1 signaling application also handles the initial context establish-
ment in the eNodeB, setting up the default IP connectivity, transfer of NAS-re-
lated signaling to the eNodeB, and the mobility functions for UEs in LTE such
as a change of eNodeB within the SAE/LTE and inter-MME/serving SAE-GW
handovers, a change of RAN nodes between different RATs (inter-3GPP RAT
handovers) via the S1 interface with EPC involvement, and a load-balancing
function to ensure equally loaded MMEs within an MME pool area. The S1-C thus serves as the reference
point for the control plane protocol between the E-UTRAN and the MME.
The S1-U also referred to as S1-SGW is the interface that connects the
eNodeB and the SGW for user plane traffic (i.e., bearers’ tunneling, inter-eNB
handover) and is used to provide IP-based transport of data streams from the
E-UTRAN towards the EPC (SGW). It uses the GTP, specified in TS 29.281,
at the transport layer over UDP over IP. This is described further in 3GPP TS
36.414 [6].
9.8.2 S5 Interface
This interface is between the SGW and PGW and provides for transport of
packet data towards end users during roaming and nonroaming cases (i.e., S8
is the inter-PLMN variant of S5) using the GPRS tunneling protocol GTP-U
over UDP over IP. This is further described in 3GPP TS 29.274 and 3GPP TS
29.281.
9.8.5 Gx Interface
This interface (not shown in Figure 9.1) provides transfer of (QoS) policy information from the PCRF to the PGW; the Gxc variant, used only in the case of PMIP-based S5/S8, runs between the PCRF and the SGW. This interface is specified in TS 29.212. PMIP (proxy mobile IP)
refers to PMIPv6 as defined in IETF RFC5213.
from the mobile station. The signaling on this interface uses the Diameter S13
Application in TS 29.272.
9.8.9 X2 Interface
This is an interface in the E-UTRAN and not in the EPC, but will be ex-
plained here. The X2 interface provides the capability to support radio interface
The MME state transition diagrams in the UE and MME are shown in
Figures 9.2 and 9.3.
the UE in the ECM-Idle state, and neither an S1-MME (i.e., S1-C) nor an S1-U
connection exists for the UE. It is noted that when the UE is in the ECM-Idle state, the
UE and the network may be unsynchronized; that is, the UE and the network
may have different sets of established EPS bearers.
The location of a UE in ECM-IDLE state is known by the network on
a tracking area level. The UE may be registered in multiple tracking areas in
which case all the tracking areas in a tracking area list to which a UE is regis-
tered are served by the same serving MME. In the ECM-Idle state, if the UE
happens to be in the EMM-Registered context, that is, if the UE is in the combined EMM-Registered and ECM-Idle state, the UE performs tracking area
updates, responds to paging commands from the MME, and establishes radio
bearers when uplink user data is to be sent.
plications, and the casual Internet browsing type services. Each of these can
involve users with different service quality experience requirements. Therefore,
it is important to have a flexible QoS framework that can withstand future chal-
lenges. Advanced LTE QoS allows priorities for certain customers or services
during congestion. In the LTE broadband network, QoS is implemented be-
tween the CPE and the PDN gateway and is applied to a set of bearers. A bearer
is basically a virtual concept: a set of network configurations that provides
special treatment to a set of traffic streams, for example, so that VoIP packets are prioritized
by the network over Web browsing traffic. In LTE, QoS is applied on the radio
bearer, S1 bearer, and S5/S8 bearer, collectively called an EPS bearer. An EPS
bearer is not restricted to carrying a single data stream. Instead, it can carry
multiple data streams. That is, each EPS bearer comprises one or more bidirec-
tional service data flows (SDFs), each of which carries packets for a particular
service such as a streaming video application.
The EPS bearer is broken down into three lower-level bearers, namely,
the radio bearer, the S1 bearer, and the S5/S8 bearer. Each of these is itself as-
sociated with a QoS profile and receives a share of the EPS bearer’s maximum
error rate and maximum delay. The EPS bearer QoS profile is specified by a set
of four parameters that are referred to as the QCI, ARP, GBR, and MBR. Each
QCI (QoS class indicator) is characterized by the priority, packet delay budget
(allowed packet delay with values ranging from 50 ms to 300 ms), and packet
error loss rate (with allowed values ranging from 10−2 to 10−6). The packet
delay budget is an upper bound, with 98% confidence, for the delay that a
packet receives between the mobile and the PDN gateway. The QCI specifies
the treatment IP packets will receive on a specific bearer. The QCI label for a
bearer determines how it is handled in the eNodeB. Priority level 1 is the highest priority level. That is, a congested network meets the packet delay budget
of bearers with priority 1 before moving on to bearers with priority 2, for
instance. The 3GPP has defined a series of standardized QCI types [8], which
are summarized in Table 9.1. The QCI is a single integer that ranges from 1 to
9 and serves as reference in determining QoS level for each EPS bearer. It repre-
sents node-specific parameters that give the details of how an LTE node handles
packet forwarding in the sense of the scheduling weights, admission thresholds,
queue thresholds, and link layer protocol configuration. The network operator
can preconfigure the LTE nodes in how to handle packet forwarding according
to the QCI value. However, the QCI values are expected to be mostly used by
eNodeBs in controlling the priority of packets delivered over radio links. As
for the SGW and PGW nodes within the EPC, the QCIs may be mapped to
QoS mechanisms implemented by the respective vendors such as differentiated
service code points (DSCP) explained in RFC 2475 or MPLS mechanisms to
give appropriate priorities in the processing and routing of the packets within
these nodes and meet certain delay and packet loss requirements [1, 9]. In fact,
3GPP specifications mandate DiffServ on the S1-U and X2 interfaces, and the
protocol is commonly used within the EPC. In the DiffServ protocol, the in-
gress router examines the incoming packets, groups them into classes that are
Table 9.1
3GPP QoS Class Indicators and Characteristics
QCI   Bearer Resource Type   Packet Delay Budget (ms)   Packet Error/Loss Rate   Priority   Example Services
1     GBR                    100                        10−2                     2          Conversational voice
2     GBR                    150                        10−3                     4          Conversational video
3     GBR                    50                         10−3                     3          Real-time gaming
4     GBR                    300                        10−6                     5          Nonconversational video (buffered streaming)
5     Non-GBR                100                        10−6                     1          IMS signaling
6     Non-GBR                300                        10−6                     6          TCP based (e-mail, www, FTP, chat, progressive video)
7     Non-GBR                100                        10−3                     7          Voice, video (live streaming), interactive gaming
8     Non-GBR                300                        10−6                     8          TCP based (e-mail, www, FTP, chat, progressive video)
9     Non-GBR                300                        10−6                     9          TCP based (e-mail, www, FTP, chat, progressive video)
known as per-hop behaviors (PHBs), and labels them using a 6-bit differenti-
ated services code point (DSCP) field in the IP header. Inside the network, in-
ternal routers use the DSCP field to support the algorithms for queuing, packet
dropping, and packet forwarding.
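As noted, the mapping of QCIs onto DSCP values within the EPC is an operator/vendor choice rather than a 3GPP-mandated table. The sketch below uses one plausible, purely illustrative mapping; the specific DSCP values are assumptions, not values taken from the text or a specification.

```python
# Purely illustrative operator-specific QCI-to-DSCP mapping (values are assumptions)
QCI_TO_DSCP = {
    1: 46,  # EF: conversational voice
    2: 36,  # AF42: conversational video
    3: 38,  # AF43: real-time gaming
    4: 28,  # AF32: buffered streaming video
    5: 34,  # AF41: IMS signaling
    6: 18,  # AF21: TCP-based services
    7: 20,  # AF22: live streaming, interactive gaming
    8: 10,  # AF11
    9: 0,   # best effort (default bearer traffic)
}

def mark_packet(qci: int) -> int:
    """Return the DSCP value to write into the IP header for this bearer's packets."""
    return QCI_TO_DSCP.get(qci, 0)
```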
In Table 9.1, the GBR and MBR are defined only in the case of GBR type
EPS bearers. An EPS bearer is referred to as a guaranteed bit rate (GBR) bearer
where a bit rate is guaranteed, by the parameter GBR. The parameter MBR is
used for a GBR type bearer to indicate the maximum bit rate allowed on the
bearer within the LTE network. Any packets arriving at the bearer after the
specified MBR is exceeded will be discarded. These can be used for applications
such as VoIP and conversational video. For these bearers, bandwidth resources
are allocated permanently to the bearer at bearer establishment/modification.
The non-GBR bearers do not guarantee any particular bit rate. These can be
used for applications such as Web browsing or FTP transfer. For these bearers,
bandwidth resources are not allocated permanently to the bearer. Note that a
UE can be connected to more than one PDN (e.g., PDN 1 for Internet, PDN
2 for VoIP using IMS) through different PGWs, and it has one unique IP address for each of its PDN connections. In that case, a parameter referred to
as UE-AMBR (UL/DL) indicates per UE aggregate maximum bit rate allowed
over the aggregated non-GBR EPS bearers associated to the UE no matter how
many PDN connections the UE has. Likewise, each APN access is associated
with an aggregate QOS parameter referred to as the APN AMBR (the aggregate
maximum bit rate) [10]. The APN AMBR is a subscription parameter stored
per APN in the HSS. It limits the aggregate bit rate that can be expected to be
provided across all of the mobile's non-GBR bearers that use the same APN
(e.g., excess traffic may get discarded by a rate shaping function). Each of those
non-GBR bearers could potentially utilize the entire APN AMBR, for instance
when the other non-GBR bearers do not carry any traffic. The PGW enforces
the APN AMBR in downlink. The enforcement of APN AMBR in uplink is
done in the UE and additionally in the PGW.
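A trivial sketch of the downlink APN-AMBR check described above follows: the PGW would shape or discard the aggregate non-GBR traffic above the limit. The function name and units are assumptions made for illustration.

```python
def enforce_apn_ambr_dl(bearer_rates_kbps, apn_ambr_kbps):
    """Return (aggregate rate, excess to shape/discard) over one APN's non-GBR bearers."""
    aggregate = sum(bearer_rates_kbps)
    excess = max(0, aggregate - apn_ambr_kbps)
    return aggregate, excess

# Three non-GBR bearers on one APN checked against a 10-Mbps APN-AMBR
print(enforce_apn_ambr_dl([4000, 3000, 5000], 10000))   # (12000, 2000)
```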
The QCI 5 to 9 (non-GBR) can be assigned to what is referred to as
the default bearers in LTE. The default bearer in LTE depends on the service
subscribed and remains connected until service is changed or terminated. Each
default bearer comes with an IP address. An LTE subscriber may be assigned
more than one default bearer in which case each of the bearers will have a sepa-
rate IP address. There are also the dedicated bearers, which are created on top
of an existing default bearer. The dedicated bearer shares the IP address previously established by the default bearer and therefore does not require an
additional IP address. The dedicated bearers are mostly used for GBR services
although they can also be used for a non-GBR service. The dedicated bearer
may be used, for instance, for VoIP service to provide high-quality service and
improve the user experience. It is to be noted that whether a service is realized
based on GBR or non-GBR bearers would depend on the policy of the service
provider and the anticipated traffic load versus the dimensioned capacity. For
example, if ample capacity is provided in view of the anticipated traffic,
any service, whether real time or nonreal time, can be realized based on a non-
GBR bearer. However, a service provider may leverage GBR bearers to imple-
ment service blocking rather than service downgrading. For most carriers this
may be a preferred user experience in which network carriers block a service re-
quest rather than enabling all services with degraded quality and performance.
The parameter ARP in Table 9.1 is used to decide whether to refuse a new
bearer request or remove an existing one in favor of admitting a higher priority
bearer when there are insufficient resources in the LTE network such as within
an eNodeB, SGW, or PGW. The allocation and retention priority (ARP) con-
tains three fields. The ARP priority level determines the order in which a con-
gested network should satisfy requests to establish or modify a bearer, with level
1 receiving the highest priority. Note that this parameter is different from the
QCI priority level defined before. The preemption capability field determines
whether a bearer can grab resources from another bearer with a lower priority
and might typically be set for emergency services. Similarly, the preemption
vulnerability field determines whether a bearer can lose resources to a bearer
with a higher priority.
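The ARP-based decision just described can be sketched as a simple admission check. The field names below are assumptions for illustration; a real admission controller would also account for the requested GBR and the actual resources available in the node.

```python
def admit_or_preempt(new_bearer, existing, resources_available):
    """Return (admitted, victim) using the three ARP fields: priority level
    (1 = highest), preemption capability, and preemption vulnerability."""
    if resources_available:
        return True, None
    if not new_bearer["arp"]["preemption_capability"]:
        return False, None
    victims = [b for b in existing
               if b["arp"]["preemption_vulnerability"]
               and b["arp"]["priority_level"] > new_bearer["arp"]["priority_level"]]
    if not victims:
        return False, None
    # release the lowest-priority (largest level value) preemptable bearer
    victim = max(victims, key=lambda b: b["arp"]["priority_level"])
    return True, victim
```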
An EPS bearer can be made of one or more bidirectional service data flows
(SDF), each of which carries packets for a particular service such as a stream-
ing video application. An EPS bearer is identified using a traffic flow template
(TFT), which is the set of SDF templates that make it up. The SDFs in an EPS
bearer should share the same quality of service, specifically the same QCI and
ARP to ensure that they can be transported in the same way. For example, a user
may be downloading two separate video streams, with each one implemented as
a service data flow. The network can transport the streams using one EPS bearer
if they share the same QCI and ARP, but has to use two EPS bearers otherwise.
In turn, each service data flow may comprise one or more unidirectional packet
flows, such as the audio and video streams that make up a multimedia service.
The service data flow is identified using an SDF template, which is the set of
packet filters that make it up. Each packet flow is identified using a packet filter,
which contains information such as the IP addresses of the source and destina-
tion devices, and the UDP or TCP port numbers of the source and destina-
tion applications. Packet flows are known to the application, and the mapping
between packet flows and service data flows is under the control of a network
element such as the PCRF. The packet flows in each service data flow have to
also share the same QCI and ARP. However, in case two packet flows need to
be assigned different priorities, they have to be implemented using two service
data flows on two different EPS bearers with each assigned a different ARP. For
instance, in a video telephony service, the network may assign a lower ARP to
the video stream, so that it can drop the video stream in a congested cell but
retain the audio.
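The mapping of packets onto EPS bearers via the TFT packet filters can be sketched as below. The field names are illustrative simplifications; real packet filters also carry precedence values, protocol identifiers, address masks, and port ranges.

```python
def bearer_for_packet(packet, bearers):
    """Return the id of the first bearer whose TFT packet filters match the packet."""
    for bearer in bearers:
        for f in bearer["packet_filters"]:
            if all(packet.get(k) == v for k, v in f.items()):
                return bearer["id"]
    return "default"   # unmatched traffic goes to the default bearer

bearers = [{"id": "voip", "packet_filters": [{"dst_port": 5060}, {"dst_port": 7078}]},
           {"id": "video", "packet_filters": [{"dst_ip": "203.0.113.7"}]}]
print(bearer_for_packet({"dst_ip": "203.0.113.7", "dst_port": 443}, bearers))  # 'video'
```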
between the cells. The EPC bearer channels are essentially tunnels that create
an independent set of data paths that must be traversed before traffic is subject
to normal IP routing. Thus, the location of the PGW, relative to the SGWs,
is another important consideration in EPC design. Where the two are close in
a network topology sense, traffic jumps off the EPC and onto the IP service
network close to the cell sites. Smart traffic handling at or near the eNodeB can
segregate Internet traffic from wireless service traffic and move it immediately
onto the best-effort infrastructure used for broadband Internet connectivity.
That will reduce traffic in the more expensive EPC components and improve
service performance and operations costs. Offloading is especially critical for
Internet video traffic, which can load the SGW/PGW and the associated tun-
nels significantly. This offloading of internet traffic from the EPC can also be
very efficient if there are many intra-LTE network servers and service elements
that need to be accessed internally. Such a more distributed topology will require
more packet gateway points and results in less efficient 4G traffic aggregation. However,
centralizing packet gateway locations will mean managing longer bearer channel paths, with more routing along the way, and carrying the traffic over a longer distance before it can be connected to any server, which affects the user experience.
These considerations require appropriate trade-offs to be made in distributing
and locating the core network elements to achieve a balance between costs and
the gains achieved in reducing congestion and improving user service. Another
aspect impacting the service performance and user experience is the speedy and
reliable transmission of signaling exchanges with the MME, SGW, and HSS, which is
hence an important consideration in the design of the EPC. Signaling
path failures will compromise services, and latency in managing handoffs will
disrupt the bearer channel and interfere with user conversations or experiences.
These signaling channels need not be carried on the same routes as the bearer
channels, but where diverse routing may not be available, it may help to provide
for the priority transmission of signaling messages over the links.
VoIP is supported, the number of busy hour voice call attempts, BHVCA. The
BHDSA is calculated for each user by the following formula
The ASUB parameter represents the number of LTE subscribers that are
able to have a successful connection with the PGW along with a successfully
established default EPS bearer and successfully allocated IP address in busy
hour. The busy hour, BH, is known to be the busiest 60-minute period of the
day, in which the total traffic is the maximum throughout the day. Moreover,
an estimate of the number of simultaneously established bearers, simultaneous
evolved packet system bearers (SEPSB), at busy hours is needed to provide the
bandwidth required for the network element data interconnect links such as the
S1-U, S5 and SGi interfaces. This parameter should be estimated for applica-
tions requiring the Internet and the intranet services and is obtained by
The signaling for attachment and detachment, as well as EPS bearer es-
tablishment and management along with authentication requests and respons-
es, is what will impact the load on the S1-C interfaces.
In the most formal scenario, the resulting traffic profile, the QoS require-
ments such as delay, and throughputs along with the core network elements
capacity parameters and cost constraints can be translated into a mathemati-
cal linear programming formulation. This will include an objective function
for optimization and a set of decision variables such as the link capacities and
number of each network element type. This has been carried out in [11] with heuristic
solutions developed on the basis of the CPLEX methodology, to which the
reader may refer for more information and solution examples. However, here
we will continue on with providing the general guidelines for the dimensioning
process. With a detailed traffic profile and parameters and the capacity con-
straints of the vendor core network elements, one should be able to identify the
type and the needed number of network nodes and the bandwidth for the inter-
connect links. In general, the capacity constraints are characterized by four dif-
ferent basic types of capacity constraints parameters. These are the throughput,
transactions, subscribers, and bearers. The throughput is the total amount of
data load that a node can handle, and it is considered as a data plane limitation,
whereas the transactions parameter relates to the signaling traffic and signaling
messages processing capacity in the control plane. The subscribers parameter
represents the number of subscribers that can be handled by a node and includes
both active and idle (i.e., without an ongoing media session) subscribers. The bearer param-
eters include the default and the dedicated type bearers explained in previous
sections. The default bearer is best effort and is mandatory for any attached
user, whereas the dedicated bearer is established based on a needed bit rate. The
number of transactions and subscribers (attached subscribers) in busy hour play
a major role in the determination of the capacity requirements for the MMEs.
Another factor is the number of established bearers in the busy hour, which impacts
the control- and connection-related management signaling load placed on the
MMEs. The HSS is a database for subscribers' data and is involved in handling
control-plane signaling. The S6a link, which provides the interface between the
MME and the HSS, is dimensioned by the number of transactions and the number
of attached subscribers; the connection between the MME and the HSS is control
plane only. The throughput in busy hour is the major capacity determinant
for the SGW which works as a mobility anchor for the traffic being carried on
different eNodeBs and forwards packets between the eNodeB and the PGW.
With respect to signaling and transactions, the SGW is affected by the setup
and teardown of bearers, mobility (inter-eNB handover), and the idle to con-
nected transition states of the UE. The throughput is also a major determinant
in the capacity requirement for the PGW. For the PGW, the control-plane load
is affected by the setup and teardown of bearers, the QoS negotiation with the
PCRF, and inter-SGW mobility.
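A back-of-the-envelope sketch of the node-count calculation along the lines of the reasoning above is given below. All vendor capacity figures and per-subscriber busy-hour assumptions are hypothetical inputs chosen for illustration, not values taken from the text.

```python
import math

def dimension_epc(subs, bh_transactions_per_sub, bh_mbps_per_sub,
                  mme_cap_transactions, mme_cap_subs,
                  sgw_cap_gbps, pgw_cap_gbps):
    """Return the number of MME/SGW/PGW nodes needed for the busy hour."""
    bh_transactions = subs * bh_transactions_per_sub
    bh_gbps = subs * bh_mbps_per_sub / 1000.0
    n_mme = max(math.ceil(bh_transactions / mme_cap_transactions),
                math.ceil(subs / mme_cap_subs))
    n_sgw = math.ceil(bh_gbps / sgw_cap_gbps)
    n_pgw = math.ceil(bh_gbps / pgw_cap_gbps)
    return {"MME": n_mme, "SGW": n_sgw, "PGW": n_pgw}

# 2M subscribers, 10 busy-hour transactions and 0.05 Mbps sustained per subscriber
print(dimension_epc(2_000_000, 10, 0.05, 5_000_000, 1_500_000, 40, 80))
```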
The dimensioning of the EPC network element interconnect and inter-
faces requires a more detailed estimate and classification of the traffic at busy
hours. The traffic may be classified into two major types, called the elastic and
the stream traffic [11, 14, 16]. The elastic traffic includes Web browsing and
FTP, and is generated by nonreal-time applications and carried over the TCP.
The TCP employs feedback control mechanisms to adapt the transfer rate to
the instantaneous network conditions, and allows flows on the link to share the
available capacity. The TCP and its optimal adaptation to wireless networks are
discussed in detail in [13]. Applications that generate elastic traffic require
reliable packet delivery, where every piece of data needs to be transferred. In case
of packet losses, the respective packets are retransmitted. In terms of quality of
service, the emphasis is placed on the actual throughput. However, stream traf-
fic refers to traffic flows whose packets need to be delivered in a timely manner,
with packet delay and delay variation being the most important quality mea-
sures. The stream traffic is usually generated by (near) real-time applications
such as audio/video communications or streaming applications, and carried via
the RTP/UDP. In [12] three capacity planning methods were investigated based
on: (1) a dimensioning formula-based approach, (2) an overbooking factor-
based approach, and (3) a delay-based approach. The formula-based approach
calculates the necessary link bandwidth to handle the expected peak aggregated
References
[1] 3GPP TS 23.002, “Technical Specification 3rd Generation Partnership Project; Techni-
cal Specification Group Services and System Aspects; Network architecture, Release 8.
V8.7.0,” 2010.
[2] 3GPP TS 36.410, “Technical Specification 3rd Generation Partnership Project; Technical
Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access
Network (E-UTRAN); S1 General Aspects and Principles, Release 12, V12.1.0,” 2014.
[3] 3GPP TS 36.411, “Evolved Universal Terrestrial Radio Access Network (E-UTRAN); S1
Layer 1,” 2014.
[4] 3GPP TS 36.412, “Evolved Universal Terrestrial Radio Access Network (E-UTRAN); S1
Signaling Transport,” Release 12, 2011.
[5] 3GPP TS 36.413, “Evolved Universal Terrestrial Access (E-UTRA); S1 Application Proto-
col (S1 AP),” Release 12, 2011.
[6] 3GPP TS 36.414, “Evolved Universal Terrestrial Radio Access Network (E-UTRAN); S1
Data Transport,” 2014.
[7] 3GPP TS 29.061, “Technical Specification Group Core Network and Terminals; Inter-
working Between the Public Land Mobile Network (PLMN) Supporting Packet Based
Services and Packet Data Networks (PDN), V12.10.0, Release 12,” 2011.
[8] 3GPP TS 23.203, “Policy Control and Charging Architecture (Stage 2), Section 6.1.7.2.”
[9] 3GPP TS 29.213, “Technical Specification Group Core Network and Terminals; Policy
and Charging Control Signaling Flows and Quality of Service (QoS) Parameter Mapping,
Release 13, sec. 6.4, 2012.
[10] 3GPP TS 23.401 V13.4.0 (2015-09), “General Packet Radio Service (GPRS) enhance-
ments for Evolved Universal Terrestrial Radio Access Network, Release 13,” 2012.
[11] Dababneh, D., “LTE Traffic Generation and Evolved Packet Core (EPC) Network
Planning,” Thesis submitted to Ottawa-Carleton Institute for Electrical and Computer
Engineering (OCIECE) Department of Systems and Computer Engineering Carleton
University Ottawa, Ontario, Canada, K1S 5B6, March 2013.
[12] Checko, A., L. Ellegaard, and M. Berger, “Capacity Planning for Carrier Ethernet LTE
Backhaul Networks,” IEEE Wireless Communications and Networking Conference (WCNC),
April 2012.
[13] Rahnema, M., UMTS Network Planning and Inter-Operation with GSM, New York: John
Wiley & Sons, 2008.
[14] Riedl, A., et al., “Investigation of the M/G/R Processor Sharing Model for Dimensioning
of IP Access Networks with Elastic Traffic,” Institute of Communication Networks,
Munich University of Technology (TUM), Siemens AG, Munich, 2011.
[15] Lindberger, K., “Balancing Quality of Service, Pricing and Utilisation in Multiservice
Networks with Stream and Elastic Traffic,” Proc. of the International Teletraffic Congress
(ITC 16), Edinburgh, Scotland, 1999.
[16] Núñez-Queija, R., J. L. van den Berg, and M. R. H. Mandjes, “Performance Evaluation of Strategies for Integration of Elastic and Stream Traffic,” Centrum voor Wiskunde en Informatica (CWI), PNA-R9903, February 28, 1999.
10
LTE-Advanced Main Enhancements
The LTE capabilities have been enhanced in Release 10 to provide increased
peak rates with up to 3 Gbps on downlink and 1.5 Gbps on uplink, higher
channel spectral efficiencies from a maximum of 16 bps/Hz in Release 8 to 30 bps/Hz in Release 10, and improved performance at the cell edge. These higher peak
spectral efficiencies can be achieved using up to 8-layer spatial multiplexing in
the downlink and up to 4-layer spatial multiplexing in the uplink, according to
the Release 10 specifications for single user MIMO (SU-MIMO). Release 10
LTE introduces new enhancements to make the technology compliant with the
International Telecommunication Union’s requirements for IMT-Advanced,
with the resulting system known as LTE-Advanced. A useful summary of
the features of the new system is provided in [1]. The Release 10 enhancements
are designed to be, for the most part, backwards-compatible with Release 8. A
Release 10 base station can control a Release 8 mobile, normally with no loss
of performance, while a Release 8 base station can control a Release 10 mobile.
In the few cases where there is a loss of performance, the degradation has been
kept to a minimum.
The main features of Release 10 include carrier aggregation, enhancements to multiple-antenna transmission, and relaying functions, along with some of their impacts on other aspects of the system, which will be covered in this chapter. There are further features and enhancements to LTE-Advanced that have
been provided or proposed in Releases 11 and 12, which include coordinated
multipoint transmissions, enhanced PDCCH, proximity services for machine-
to-machine communications, IP flow mobility or seamless WLAN offload, se-
lective IP traffic offload, certain enhancements for machine-type communica-
tions, and improved interoperation with wireless local area networks. In this
chapter, we will also discuss the coordinated multipoint transmissions and the
quency band that may be involved. However, the RRC connection signaling
is handled by one cell that is referred to as the primary serving cell (PCell), which is said to use the primary component carrier (PCC) on the DL and UL. The other component carriers are all referred to as secondary component carriers (SCC), on the DL and possibly the UL, and the cells in which they are used are called the secondary serving cells (SCells). The primary cell contains one component carrier in the TDD mode, or one component carrier on the UL and one on the DL in the FDD mode, and is used in exactly the same way as a cell in Release 8. In RRC_IDLE
state, the mobile performs cell selection and reselection using one cell at a time.
The RRC connection setup procedure is unchanged as the mobile only com-
municates with a primary cell. The secondary cells are only used by mobiles in
RRC_CONNECTED and are added or removed by means of mobile-specific
RRC Connection Reconfiguration messages. Each secondary cell contains one
component carrier in the TDD mode, and in the case of the FDD mode, it has
one component carrier on downlink and optionally one on uplink. The carrier
aggregation only affects the physical layer, the MAC protocol on the air inter-
face, and the RRC, the S1-AP, and the X2-AP signaling protocols. There is no
impact on the RLC or PDCP [4]. The mobile can transmit the uplink signaling
control information, PUCCH, at the same time that it transmits data on the
PUSCH. The PUCCH is only transmitted from the primary cell. However,
the mobile can send the data on PUSCH on either the primary cell or on any
of the secondary cells. There are no significant changes in the procedures for
UL transmission and reception. The base station sends the PHICH acknowledgments on the same cell that the mobile used for its uplink transmissions (primary or secondary).
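As a rough illustration of how the component carriers add up, the following Python sketch (not taken from the specifications; the 20-MHz carrier bandwidth and 30-bps/Hz peak spectral efficiency are assumed, illustrative values) sums per-carrier peak rates over a PCell and several SCells:

```python
# Illustrative sketch: aggregate peak rate over configured component carriers.
# The PCell carries PUCCH/RRC signaling; SCells add capacity.

from dataclasses import dataclass

@dataclass
class ComponentCarrier:
    bandwidth_mhz: float             # assumed carrier bandwidth
    spectral_eff_bps_per_hz: float   # assumed peak spectral efficiency
    is_primary: bool = False         # PCell carries PUCCH and RRC signaling

def aggregate_peak_rate_mbps(carriers):
    """Sum the per-carrier peak rates; assumes ideal aggregation."""
    return sum(c.bandwidth_mhz * c.spectral_eff_bps_per_hz for c in carriers)

# Example: one PCell plus four SCells of 20 MHz each at an assumed 30 bps/Hz
cc_set = [ComponentCarrier(20.0, 30.0, is_primary=True)] + \
         [ComponentCarrier(20.0, 30.0) for _ in range(4)]
print(f"Aggregate peak rate ~ {aggregate_peak_rate_mbps(cc_set):.0f} Mbps")
```

With five 20-MHz carriers at the assumed 30 bps/Hz, the sketch reproduces the 3-Gbps order of magnitude quoted for the Release 10 downlink peak.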
blocks that are assigned to the target mobile. As a result, they do not cause any
overhead or interference for the other mobiles in the cell. The demodulation
reference symbols are transmitted on antenna ports 7 to 14. The signals on
ports 7 and 8 are the same ones used by dual-layer beamforming, while those
on ports 9 to 14 support single-user MIMO with a maximum of eight antenna
ports. With this, each individual reference symbol is actually shared among four
antenna ports by means of orthogonal code division multiplexing.
However, the UE-specific reference symbols are unsuitable for channel
quality measurements, which should be performed across the entire downlink
system bandwidth. This is handled by the transmission of channel state information (CSI) reference symbols on eight more antenna ports numbered from
15 to 22 [8]. These symbols are not precoded, so the antenna ports are differ-
ent from the ones used by the UE-specific reference signals stated above. The
functions of the CSI reference symbols are similar to those of the sounding
reference symbols used on uplink in Release 8, in the support of channel qual-
ity measurements and frequency-dependent scheduling. A cell can transmit the
reference symbols using two, four, or eight resource elements per resource block
pair, depending on the number of antenna ports that it has available. The cell
chooses the resource elements from a larger set of 40, with nearby cells choosing
different resource elements so as to minimize the interference between them.
The base station then supplies each mobile with the reference signal configura-
tion. This defines the subframes in which the mobile should measure the signal
and the resource elements that it should inspect, with a measurement interval
of 5 to 80 ms that depends on the mobile’s speed (see Table 6.10.5.3-1 of [7],
for instance). The long transmission intervals help to considerably reduce the
overheads that they incur. To avoid interference caused to Release 8 mobiles
that do not recognize these reference symbols, the base station can schedule the
Release 8 mobiles in different resource blocks.
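The overhead argument can be made concrete with a small sketch. The following Python fragment is illustrative only; it assumes 168 resource elements per resource block pair (12 subcarriers by 14 symbols, normal cyclic prefix) and averages the 2, 4, or 8 CSI reference symbols over the configured measurement period:

```python
# Rough estimate of CSI-RS overhead as a fraction of downlink resource elements.
# One resource block pair spans 12 subcarriers x 14 OFDM symbols = 168 REs
# over a 1-ms subframe (normal cyclic prefix).

RES_PER_RB_PAIR = 12 * 14

def csi_rs_overhead(res_used_per_rb_pair, period_ms):
    """Fraction of REs consumed by CSI-RS, averaged over the period."""
    return res_used_per_rb_pair / RES_PER_RB_PAIR / period_ms

for n_re in (2, 4, 8):               # 2-, 4-, or 8-antenna-port configurations
    for period in (5, 20, 80):       # configured measurement periodicity (ms)
        print(f"{n_re} REs every {period:2d} ms -> "
              f"{100 * csi_rs_overhead(n_re, period):.3f}% overhead")
```

Even the densest case (eight reference symbols every 5 ms) stays below 1% of the downlink resource elements, which is why the long transmission intervals keep the overhead small.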
edges, and hot-spot areas can also be used to connect to remote areas without
fiber connection.
The relay node connects to the Donor eNodeB (DeNB) via a radio in-
terface, Un, which is a modification of the E-UTRAN air interface Uu, and
can be implemented via radio or point-to-point microwave link to increase its
range. The Un and Uu interfaces can use either the same or different carrier
frequencies. If the carrier frequencies are different, then the Un interface can
be implemented in exactly the same way as a normal air interface, where, for
instance, the relay node acts like a base station on the Uu interface towards the
mobile and independently acts like a mobile on the Un interface towards the
donor eNodeB. If the carrier frequencies are the same, then the Un interface
requires some extra functions to share the resources of the air interface with
Uu. The extra functions include enhancements to the physical layers and some
extra RRC signaling [9–11]. The resource sharing between the Un and Uu interfaces is managed by allocating individual subframes to either the Un or the Uu and is implemented in two stages. The donor eNodeB tells the
relay node about the allocation using an RRC RN Reconfiguration message,
and then the relay node configures the Un subframes as MBSFN subframes on
Uu, but without transmitting any downlink MBSFN data in them and without
scheduling any data transmissions on the uplink. However, since the start of
an MBSFN subframe is used by PDCCH transmissions on the Uu interface,
typically for scheduling grants for uplink transmissions, this prevents the use
of the PDCCH on the Un interface. Instead, the specification introduces the
relay physical downlink control channel (R-PDCCH), as a substitute for the
PDCCH on Un. The R-PDCCH transmissions look much the same as normal
PDCCH transmissions, but occur in reserved resource element groups in the
part of the subframe that is normally used by data. The first slot of a subframe is used for transmitting downlink scheduling commands, while the second slot is used for transmitting the uplink scheduling grants. The Un interface does
not use the physical hybrid ARQ indicator channel. Instead, the donor eNodeB
acknowledges the relay node’s uplink transmissions implicitly, using scheduling
grants on the R-PDCCH. This also eliminates the need for the physical control
format indicator channel.
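A minimal sketch of the two-stage subframe split can illustrate the constraint. The allocation below is hypothetical; it only assumes the standard FDD rule that subframes 0, 4, 5, and 9 carry synchronization, PBCH, or paging and therefore cannot be configured as MBSFN subframes for the Un link:

```python
# Sketch of the resource split between the Un (backhaul) and Uu (access)
# links when they share one carrier. In FDD, only subframes 1, 2, 3, 6, 7, 8
# of a radio frame may be configured as MBSFN subframes, so only those can
# be handed to the Un interface.

MBSFN_CAPABLE = {1, 2, 3, 6, 7, 8}   # FDD; 0, 4, 5, 9 carry sync/PBCH/paging

def allocate_un_subframes(requested):
    """Return the Un allocation (signaled via RRC RN Reconfiguration) and the
    remaining subframes left for Uu access traffic."""
    un = set(requested) & MBSFN_CAPABLE        # only MBSFN-capable subframes
    uu = set(range(10)) - un                   # everything else stays on Uu
    return sorted(un), sorted(uu)

un_sf, uu_sf = allocate_un_subframes({1, 2, 6, 7})
print("Un (backhaul) subframes:", un_sf)       # configured as MBSFN on Uu
print("Uu (access) subframes:  ", uu_sf)
```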
The relay node’s access stratum is controlled by the donor node in which
the radio resources are shared between UEs served directly by the donor eNo-
deB and the relay nodes. The relay node’s nonaccess stratum is controlled by an
MME selected by the donor node. The donor eNodeB incorporates the func-
tions of a PDN gateway and a serving gateway, which allocate an IP address
for the relay node and handle its traffic. The donor node also incorporates a
relay gateway that shields the core network and the other base stations from the
need to know anything about the relay nodes directly. The relay node acts as an
eNodeB towards the mobile and controls the mobile’s access stratum. For this,
the relay node has one or more physical cell identities of its own, broadcasts its
own synchronization signals and system information, and schedules its trans-
missions on the Uu interface.
this selection to the home eNodeB. The local traffic can then travel directly be-
tween the home eNodeB and its incorporated local gateway and avoid passing
through the serving gateway. In case data arrives on the downlink while the user
is in RRC_IDLE, the local gateway sends the first downlink packet over the
S5 interface to the serving gateway, which triggers the usual paging procedure
and moves the mobile into RRC_CONNECTED. The local gateway can then
deliver subsequent downlink packets directly to the home eNodeB.
Figure 10.1 Coordinated transmission from two cells to device located near cell edges (one
scenario of CoMP).
and 3). However, the operator can configure different eNodeBs to cooperate either across the X2 interface or by using proprietary techniques, whereas remote radio heads homed to a centralized eNodeB may be configured to communicate via a high-speed link [8].
The Release 11 specifications support uplink and downlink CoMP using
coordinated scheduling/beamforming, dynamic point selection, and noncoher-
ent joint transmission and reception [8, 18]. In coordinated scheduling/beam-
forming, nearby points coordinate their uplink scheduling and beamforming
decisions to minimize the interference that they receive from other mobiles. In
dynamic point selection, the network actually transmits and receives from only
one point at a time, with the selection potentially changing from one subframe
to the next. In joint reception (JR), the network receives data at multiple points
and combines them to improve the quality of the received signal. The CoMP
cooperating antenna sets for the uplink and downlink can be different from
each other, and the CoMP reception points can be different from the CoMP transmission points, where a point that transmits in a particular subframe is referred to as a CoMP transmission point. The downlink CoMP is supported
through the CSI reference symbols. The CoMP has more impact on cell-edge
mobiles on the uplink than on the downlink. The simulation results carried out
by 3GPP [18] for joint transmission and reception in a heterogeneous network
without enhanced intercell interference coordination show improvements in
cell-edge data rates of 24% in the downlink and 40% in the uplink and with a
resulting cell capacity rise of 3% in the downlink and 14% in the uplink.
3G system in that it also allows the application server to contact a device and
trigger it into action. In the architectural baseline, the MTC device communi-
cates with an MTC server or other MTC devices via the 3GPP bearer services
consisting of the SMS and IMS as provided by the PLMN.
The MTC server, also referred to as the application server, is an entity that connects to the 3GPP network either via a service capability server (SCS) or directly. The application server (AS) may be owned by a third-party service
provider or the operator and may communicate with the device either directly
over the SGi interface [24] or indirectly through an SCS. The SCS, in turn, can
reach the device either directly or send a device trigger request over the Tsp in-
terface [25] to the MTC-IWF. The MTC-IWF looks up the user’s subscription
details in the home subscriber server, decides the delivery mechanism that it will
use, and triggers the device over the control plane of LTE. Release 11 provides only the SMS-based mechanism, in which the MTC-IWF contacts the SMS service center over the T4 interface [26] and has it trigger the device using a mobile-terminated SMS, as illustrated in Figure 10.2. However, 3GPP is planning a new interface, referred to as T5b, which will allow the MTC-IWF to communicate directly with the MME.
The MTC device accesses the 3GPP network through the MTCu interface, which can be based on the LTE Uu interface (in the case of LTE) for the transport of user plane and control plane traffic. In MTC applications, the MTC devices may initiate communication with the server, or there may also be occasions where there is a need for the MTC server to poll data from MTC devices. For cases where MTC devices are not continuously attached to the network or have no always-on PDP/PDN connection, the MTC server may send a trigger indication to the device and thereby cause it to attach and/or establish a PDP/PDN connection.
Figure 10.2 MTC architecture showing 3GPP-defined interfaces with EPC elements.
The 3GPP study for Release 12 [27, 28] focused on developing solutions
to handle massive numbers of MTC devices expected to communicate either
directly with each other or with the backend server in M2M communication.
The key issues are network congestion in the random access process and, in the case of LTE, also the efficient use of the PRBs and transport block sizing for M2M communication. The capacity of a PRB under normal channel con-
ditions can be significantly higher than the need for a typical M2M message,
which can range from around 4 to 8 bytes. This means allocating even a single
PRB for an M2M exchange can result in the inefficient utilization of the radio
spectrum. Therefore, alternative resource allocation strategies, such as group communication using data aggregation from colocated M2M devices, can be helpful. The grouping of M2M devices helps to facilitate the access
control and decrease the redundant signaling to avoid congestion. The MTC
devices within the same group can be in the same area and/or have the same
MTC features and/or belong to the same MTC user.
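The inefficiency can be quantified with a simple sketch. The per-PRB capacity figures below are assumed, illustrative values rather than 3GPP numbers; the point is the utilization ratio between a single small M2M message and an aggregate formed by a gateway or relay:

```python
# Illustrative utilization of a single PRB when it carries one small M2M
# message versus an aggregate of messages multiplexed by a gateway/relay.
# The per-PRB capacity figures are assumed examples, not specification values.

def prb_utilization(message_bytes, prb_capacity_bytes):
    return min(1.0, message_bytes / prb_capacity_bytes)

M2M_MSG_BYTES = 8                 # typical small M2M report (4-8 bytes)
PRB_CAPACITY  = {"conservative MCS": 16, "mid MCS": 36, "high MCS": 72}

for mcs, cap in PRB_CAPACITY.items():
    single = prb_utilization(M2M_MSG_BYTES, cap)
    # A gateway aggregating ten devices into one transport block:
    aggregated = prb_utilization(10 * M2M_MSG_BYTES, cap)
    print(f"{mcs:17s}: single device {single:5.1%}, "
          f"10 devices aggregated {aggregated:5.1%}")
```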
The grouping and aggregation/multiplexing of small M2M messages can
be achieved through the use of intermediate gateway nodes placed between
the devices and the eNodeB. The intermediate node can also potentially in-
crease the RA success probability, reduce power consumption, and result in
smaller delays. As was discussed earlier in the chapter, relay nodes are one of the
features of LTE-Advanced networks and can be used to aggregate traffic from
multiple M2M devices. In this way, the messages from multiple M2M applica-
tions can be added to make a comparatively larger packet and thus maximize
the utilization of radio resources. This will also help to reduce the congestion
in the radio access process due to the fact that it will allow a single relay node
to communicate with the eNodeB on behalf of multiple M2M devices in one
access attempt.
The concept is realized by introducing two new network elements re-
ferred to as the MTCD (MTC device) and MTCG (MTC gateway) as elabo-
rated in [28, 29] for LTE-Advanced networks. The MTCD is the UE designed
for MTC, which communicates through the LTE network with an MTC
server and/or other MTCDs. The MTCG facilitates efficient communication
among a large number of MTCDs and provides the connection to the EPC
through the eNodeBs for communication with the MTC servers. The MTCD
is able to establish a direct link with its donor eNodeB, just as a UE. There-
fore, the link between the MTCD and the base station is similar to the link
between a UE and the eNodeB. The eNodeB-to-MTCG wireless link is based
on LTE specifications, whereas the MTCG-to-MTCD and MTCD-to-MTCD
that are considered more tolerant to access restrictions than others in order to
prevent overload of the access or the core network without the need to intro-
duce any new access classes. The operator can select UEs for EAB through the
information elements broadcast from a new system information block, SIB 14.
A UE configured for EAB uses its allocated access class, as defined under access class barring, when evaluating the EAB information broadcast by the network to determine whether its access to the network is barred. The EAB is part of the 3GPP
specification now and is defined in [31].
The details of parameter configuration scenarios for different access bar-
ring implementations are discussed in [31]. To evaluate the performance of different RACH parameter configuration scenarios, 3GPP [30] uses two different traffic models: one in which MTC devices access the network uniformly over a period of time, in a nonsynchronized manner, and one in which a large number of MTC devices access the network in a highly synchronized manner, such as after a power outage. The simulation results
presented conclude in favor of the EAB as performing best in terms of such
performance metrics as the access collision and access success probabilities.
Slotted Access
In this method, dedicated access cycle/slots (similar to paging cycle/slots) are
defined for MTC devices. The access slots are synchronized with the corre-
sponding system frames, and an MTC device is associated with an access slot
through its ID (IMSI). The access slot could be the paging frame for the MTC
device in the simplest scenario.
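A possible realization of the ID-based mapping is sketched below. It mirrors the LTE paging-frame rule as one plausible illustration; the cycle length, the number of access frames, and the use of IMSI mod 1024 are assumptions for the example, not values taken from a specification:

```python
# Sketch of ID-based slotted access: each MTC device derives its access frame
# from its IMSI, in the spirit of the LTE paging-frame rule
# SFN mod T = (T div N) * (UE_ID mod N), with UE_ID = IMSI mod 1024.

def access_frame(imsi: int, cycle_T: int = 128, n_frames: int = 32) -> int:
    """Return the frame (within one access cycle) the device may use."""
    ue_id = imsi % 1024
    return (cycle_T // n_frames) * (ue_id % n_frames)

# Two devices with different IMSIs land in different access frames,
# spreading their random access attempts over the cycle.
for imsi in (262011234567890, 262011234567891):
    print(f"IMSI ...{str(imsi)[-4:]}: access frame {access_frame(imsi)} "
          f"of a 128-frame cycle")
```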
Pull-Based Scheme
In the pull-based schemes, the MTC server is assumed to be aware of when
MTC devices have data to send or the MTC server needs information from the
MTC devices. In these cases, the server can have the network page the MTC device, which will then perform an RRC connection establishment to send the data to the server. The network paging of the MTC device is performed
taking into account the network load condition.
dBm, and the Cat-200 kHz with a receiver bandwidth of 200 kHz (known as
the narrowband LTE) and peak uplink bit rate of 0.144 Mbps, and downlink
bit rate of 0.2 Mbps. Both of these will reduce the modem complexity further
down to about 20% and 15% of the Release 8 categories, respectively. The cat-
egory-M devices have a single receiver chain and work in the half-duplex mode.
References
[1] 3GPP TR 36.912, “Feasibility Study for Further Advancements for E-UTRA (LTE-Ad-
vanced), Release 11,” September 2012.
[2] 3GPP TS 36.300, “Evolved Universal Terrestrial Radio Access (E-UTRA) and Evolved
Universal Terrestrial Radio Access Network (E-UTRAN); Overall Description; Stage 2,
Release 11,” 2010.
[3] 3GPP TS 36.101, “User Equipment (UE) Radio Transmission and Reception, Release
11,” 2010.
[4] 3GPP TS 36.322, “Evolved Universal Terrestrial Radio Access (E-UTRA) and Evolved
Universal Terrestrial Radio Access Network, Radio Link Control (RLC) Protocol Specifi-
cation, Release 12,” 2011.
[5] 3GPP TS 36.212, “Multiplexing and Channel Coding, Release 11.”
[6] 3GPP TS 36.213, “Physical Layer Procedures, Release 11,” 2010.
[7] 3GPP TS 36.211, “Physical Channels and Modulation, Release 10,” 2009.
[8] Cox, C., An Introduction to LTE, 2nd ed., New York: John Wiley & Sons, 2014.
[9] 3GPP TS 36.216, “Evolved Universal Terrestrial Radio Access (E-UTRA); Physical Layer
for Relaying Operation, Release 11,” 2010.
[10] 3GPP TS 36.331, “Radio Resource Control (RRC); Protocol Specification, Release 11,”
2013.
[11] 3GPP TS 36.306, “User Equipment (UE) Radio Access Capabilities, Release 11,” 2010.
[12] 3GPP TS 23.401, “General Packet Radio Service (GPRS) Enhancements for Evolved
Universal Terrestrial Radio Access Network (E-UTRAN) Access, Release 12,” 2011.
[13] 3GPP TR 23.861, “Multi Access PDN Connectivity and IP Flow Mobility, Release 12,
Annex A,” 2011.
[14] 3GPP TS 23.402, “Architecture Enhancements for Non-3GPP Accesses, Release 12,”
2011
[15] 3GPP TS 23.261, “IP Flow Mobility and Seamless Wireless Local Area Network (WLAN)
Offload, Release 11,” 2010.
[16] 3GPP TS 24.303, “Mobility Management Based on Dual-Stack Mobile IPv6, Release
11,” 2010.
[17] IETF RFC 5555, “Mobile IPv6 Support for Dual Stack Hosts and Routers,” June 2009.
[18] 3GPP TR 36.819, “Coordinated Multi-point Operation for LTE Physical Layer Aspects,
Release 11,” 2010.
[19] Atzori, L., A. Iera, and G. Morabito, “The Internet of Things: A Survey,” International Journal of Computer and Telecommunications Networking, Vol. 54, No. 15, October 2010, pp. 2787–2805.
[20] Gonçalves, V., and P. Dobbelaere, “Business Scenarios for Machine-to-Machine Mobile
Applications,” Proc. International Conference on Mobile Business and Global Mobility
Roundtable (ICMB-GMR), 2010, pp. 394–401.
[21] 3GPP TS 22.368, v1.0, “Service Requirements for Machine-Type Communications
(MTC) Stage 1, Releases 10 through 13,” 2009.
[22] ETSI TS 102 689, v1.1.1, “Machine-to-Machine Communications (M2M): M2M
Service Requirements,” August 2010.
[23] 3GPP TR 23.888, “System Improvements for Machine-Type Communications, V10.0.0,
Release 10,” 2009.
[24] 3GPP TS 29.061, “Interworking Between the Public Land Mobile Network (PLMN)
Supporting Packet Based Services and Packet Data Networks (PDN), Release 12,” 2011.
[25] 3GPP TS 29.368, “Tsp Interface Protocol Between the MTC Interworking Function
(MTC-IWF) and Service Capability Server (SCS), Release 12,” 2011.
[26] 3GPP TS 29.337, “Diameter-Based T4 Interface for Communications with Packet Data
Networks and Applications, Release 12,” 2011.
[27] 3GPP TR 22.888, “Study on Enhancements for Machine Type Communication, V12.0.0,
Release 12.”
[28] 3GPP TR 23.887, “Study on Machine-Type and Other Mobile Data Applications
Communications Enhancements, V2.0.0, Release 12,” 2011.
[29] Zheng, K., et al., “Radio Resource Allocation in LTE-Advanced Cellular Networks with
M2M Communications,” IEEE Communications Magazine, July 2012.
[30] 3GPP TR 37.868, “Study on RAN Improvements for Machine-Type Communications,
Release 11, Annex B,” October 2011.
[31] 3GPP TS 22.011, “Technical Specification Group Services and System Aspects; Service
Accessibility, Release 11,” 2010.
[32] 3GPP TS 24.368, “Non-Access Stratum (NAS) Configuration Management Object
(MO), Release 11,” September 2012.
[33] Mehmood, Y., et al., “Mobile M2M Communication Architectures, Upcoming Challenges,
Applications, and Future Directions,” EURASIP Journal on Wireless Communications and
Networking, November 2015.
[34] 3GPP TR 37.869, “Study on Enhancements to Machine-Type Communications (MTC)
and Other Mobile Data Applications; Radio Access Network (RAN) Aspects, Release 12,”
2011.
11
Optimization for TCP Operation in 4G and
Other Networks1
Data services have overtaken and continue to outgrow voice applications on wireless mobile networks. This is particularly the case with LTE networks where, at least in the earlier stages of deployment, data services of various kinds such as Web services, video applications, and heavy data downloads will continue to make up the majority of the traffic. Therefore, the
existence of well-tuned protocols for the efficient reliable end-to-end transfer of
data becomes very critical. The Transmission Control Protocol (TCP) is an end-
to-end transport protocol that is commonly used by network applications that
require reliable guaranteed delivery. The most typical network applications that
use TCP are File Transfer Protocol (FTP), Web browsing, and TELNET. TCP
is a sliding window protocol with timeouts and retransmissions. Outgoing data must
be acknowledged by the far-end TCP. Acknowledgments can be piggybacked
on data. Both receiving ends can flow control the far end, thus preventing a buf-
fer overrun. As is the case with all sliding window protocols, TCP has a window
size. The window size determines the amount of data that can be transmitted
in one round-trip time (RTT) before an acknowledgment is required. The time
between the submission and receipt of an ACK packet is generally referred to as
the RTT. For TCP, this amount is not a number of TCP segments but a number
1. Most of the material in this chapter on TCP performance in other networks appears in Chap-
ter 16, “The TCP Protocols, Issues, and Performance Tuning over Wireless Links,” in UMTS
Network Planning, Optimization, and Inter-Operation with GSM, by Moe Rahnema (John Wiley and Sons, November 2007).
of bytes. However, the reliable TCP variants such as TCP Reno are tuned to perform well in traditional fixed networks, where transmission errors are relatively rare and packet losses are assumed to occur mostly because of congestion. In such a case the TCP sender will throttle back its sending rate, as TCP responds to all packet losses by invoking congestion control and avoidance. This results in degraded throughput performance and delays in networks and on links where packet losses mostly occur due to bit errors, signal fades, and hard handovers (which occur in LTE networks) rather than congestion, as is the case in the mobile communication environment.
A number of different strategies and protocol variations have been pro-
posed to provide efficient reliable transport protocol functionality and meet the
challenges on wireless networks. Some of these schemes rely on specialized link
layer protocols, specified by the 3GPP standards, which enhance the reliability
of the wireless link. These are called link layer solutions, which when combined
with certain options or versions of available TCP implementations help greatly
to address and resolve the wireless problems, as we will discuss in detail. Other
schemes require special measures to be taken such as splitting the TCP connec-
tion in the base station, providing sniffing proxies, or implementing nonstan-
dard modifications to the TCP stack within the end systems (both the fixed
and the mobile hosts). We will also discuss the latter category of solutions and
their merits to provide the reader with a broad knowledge base and insight for
tackling specific problems and issues in this particular field as they arise. This
chapter is based on an extension of the material presented in a previous book
[1] by the author and will include material specific to the optimization for TCP
in LTE-type networks.
Discovery procedure [RFC 1191], which allows the sender to check the maxi-
mum routable packet size in the network that can be used. Then the MSS is set
as a parameter. However, the MSS may interact with some TCP mechanisms.
For example, since the congestion window is counted in units of segments, large MSS values allow the TCP congestion window to increase faster.
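A small sketch can make the MSS interaction concrete. It assumes IPv4 with no TCP options, so 40 bytes of headers are subtracted from the discovered path MTU, and it compares the bytes per RTT that the same segment-counted congestion window yields for two MTU values:

```python
# Sketch: deriving the MSS from the discovered path MTU and showing how a
# larger MSS lets a congestion window counted in segments carry more bytes.
# 40 bytes = 20-byte IPv4 header + 20-byte TCP header (no options).

IP_TCP_HEADERS = 40

def mss_from_path_mtu(path_mtu: int) -> int:
    return path_mtu - IP_TCP_HEADERS

for mtu in (576, 1500):                      # small MTU vs. Ethernet MTU
    mss = mss_from_path_mtu(mtu)
    cwnd_segments = 10                       # same window counted in segments
    print(f"MTU {mtu:4d} -> MSS {mss:4d} bytes; "
          f"cwnd of {cwnd_segments} segments = {cwnd_segments * mss} bytes/RTT")
```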
TCP operates in acknowledged mode. That is, the segments that are re-
ceived error-free are acknowledged by the receiver. The TCP receiving end uses
cumulative acknowledgment messages in the sense that one single ACK mes-
sage may acknowledge the receipt of more than one segment, indicated by the
sequence number of the last in-sequence, error-free, received packet. However,
the receiver does not have to wait for its receiver buffer to fill before sending
an ACK. In fact, many TCP implementations send an ACK for every two seg-
ments that are received. This is because when a segment is received, the TCP starts a delayed ACK, waiting for a segment to send with which the ACK can be piggybacked. If another segment arrives before the 200-ms timer fires (TCP has a timer that fires every 200 ms, starting from the time the host TCP was up and running), the receiver sends an ACK covering both segments. If the timer tick occurs before another segment is received, or the receiver has no data of its own to send, the pending ACK is sent on its own. Packets that are not acknowledged in due time are retransmitted until they are properly received.
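The delayed-ACK behavior described above can be sketched as simple receiver-side logic. This is a simplification of what real stacks do and is meant only to illustrate the every-second-segment and timer-driven cases:

```python
# Simplified receiver-side delayed-ACK logic: acknowledge every second
# in-order segment immediately, otherwise hold the ACK until the ~200-ms
# delayed-ACK timer fires (or data is available to piggyback the ACK on).

class DelayedAckReceiver:
    def __init__(self, delayed_ack_ms=200):
        self.delayed_ack_ms = delayed_ack_ms
        self.unacked_segments = 0

    def on_segment(self):
        """Called per in-order segment; returns True if an ACK is sent now."""
        self.unacked_segments += 1
        if self.unacked_segments >= 2:       # ACK every second segment
            self.unacked_segments = 0
            return True
        return False                          # wait for timer or piggyback

    def on_timer(self):
        """Called when the delayed-ACK timer fires."""
        if self.unacked_segments:
            self.unacked_segments = 0
            return True
        return False

rx = DelayedAckReceiver()
print([rx.on_segment() for _ in range(5)])   # [False, True, False, True, False]
print(rx.on_timer())                         # True: flush the pending ACK
```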
The TCP is a connection-oriented protocol meaning that the two applica-
tions using TCP must establish a TCP connection with each other before they
can exchange data. It is a full-duplex protocol in that each TCP connection
supports a pair of byte streams, one flowing in each direction. The acknowledgments of TCP segments for one direction are piggybacked on transmissions in the reverse direction or sent alone when there are no reverse data transmissions. The TCP includes a flow-control mechanism for each of these byte streams that allows the receiver to limit how much data the sender can transmit. The TCP
also implements a congestion-control mechanism to avoid congestion within
the network nodes.
• Step 1: The client end system sends TCP SYN control segments to the
server and specifies the initial sequence number.
• Step 2: The server end system receives the SYN and replies with a SYN ACK control segment. In the meantime, it allocates buffers and specifies its own initial sequence number.
• Step 3: The client also allocates buffers and variables to the connection and sends an ACK segment back to the server, which signals that the connection is established.
• Step 1: The client end system sends TCP FIN control segment to server.
• Step 2: The server receives FIN and replies with ACK. If it has no data
to transmit, it sends a FIN segment back with the ACK. Otherwise, it
keeps sending data that is then acknowledged by the receiving end (the
client end). Then, after it finishes, it sends the FIN segment.
• Step 3: The client receives FIN and replies with ACK. It enters a timeout
“wait” before it closes the connection from its end.
• Step 4: The server receives the ACK and closes the connection.
The TCP congestion control starts with probing for usable bandwidth
when the data transmission starts following connection setup. This is called the
slow start congestion control phase, in which the congestion window (CWND) starts from a small initial value and grows rapidly at an exponential rate. Then, when congestion is detected, the CWND is restarted from a lower value and is incremented only linearly. This phase is called congestion avoidance. The boundary between the initial slow start and the congestion avoidance phases is controlled by the two important variables of CWND and a slow start threshold, SS-Thresh. The TCP sender thus uses the slow start and congestion avoidance procedures to control the amount of outstanding data that is being injected into the network. The two phases of slow start and congestion avoidance are discussed in the following two sections.
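The two phases can be illustrated with a simplified per-RTT model. The sketch below follows a Tahoe-style restart after a loss; the initial window, threshold, and loss position are arbitrary example values:

```python
# Simplified per-RTT model of slow start and congestion avoidance: cwnd
# doubles each RTT while below ssthresh, then grows by one segment per RTT;
# on a loss, ssthresh is set to half the current cwnd and cwnd restarts.

def evolve_cwnd(rtts, loss_at=None, init_cwnd=1, init_ssthresh=64):
    cwnd, ssthresh, trace = init_cwnd, init_ssthresh, []
    for t in range(rtts):
        if loss_at is not None and t == loss_at:
            ssthresh = max(cwnd // 2, 2)     # multiplicative decrease
            cwnd = init_cwnd                 # Tahoe-style restart after timeout
        elif cwnd < ssthresh:
            cwnd *= 2                        # slow start: exponential growth
        else:
            cwnd += 1                        # congestion avoidance: linear growth
        trace.append(cwnd)
    return trace

# Window in segments per RTT, with a loss event at the eighth RTT:
print(evolve_cwnd(12, loss_at=7))
```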
g = 1/8 = 0.125, h = 1/4 = 0.25
where RTT is the round-trip-time of the path between the transmitter and
receiver. To optimize the TCP throughput, the receive window should be set to ≥ (path bandwidth × RTT), which is the BDP.
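As a quick illustration of the BDP sizing rule, the following sketch computes the receive-window floor for two assumed link profiles (the bandwidth and RTT figures are examples only):

```python
# Bandwidth-delay product (BDP) and the minimum receive window needed to keep
# the pipe full, per the relation: receive window >= path_bandwidth * RTT.

def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    return bandwidth_bps * rtt_s / 8.0

links = {
    "LTE link, 50 Mbps, 50 ms RTT": (50e6, 0.050),
    "GEO satellite, 10 Mbps, 600 ms RTT": (10e6, 0.600),
}
for name, (bw, rtt) in links.items():
    print(f"{name}: BDP ~ {bdp_bytes(bw, rtt) / 1024:.0f} KB "
          f"-> receive window should be at least this large")
```

The second profile is the long fat network case discussed next, where the very large BDP makes RTO calculation more delicate.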
In networks with very high BDP, also known as long fat networks (LFN)
such as the geostationary satellite links, care must be taken in the calculation of
the RTO. This is because in such networks, the window size is very big and the
standard RTT measurements do not measure the RTT of every segment, just a
window at a time [7].
However, in networks with a very small BDP such as this one, the effect of the slow start phase is negligible because the BDP is already covered by the initial value of the congestion window, which is set to one segment.
a new data segment is ACKed, the congestion window is halved and transmis-
sion continues in the congestion avoidance phase. The fast retransmit and fast
recovery options are usually implemented together [8], in which case they help to achieve better utilization of the connection's available bandwidth.
such as on the HS-DSCH, and channel switching in the radio resource control
(RRC) state transitions for providing bandwidth on demand to the users in real
time, which may take up to 500 ms. Delay spikes also result from cell reselec-
tions in UMTS and GPRS, and lossless handovers in LTE, which can result in
degraded performance even when no packets are lost (no flush-outs). Moreover, delay spikes can also be caused by reordering and RLC ARQ retransmissions within the LTE link layer protocols. Voice call preemptions in GPRS are another source of delay spikes. The voice calls can take time slots away from an
existing data call thus reducing the available bandwidth to the data call. The
sudden sharp increase in RTT delays caused by delay spikes is not captured by
the RTO estimation algorithms. That causes the TCP sender to time out and
perform spurious (unnecessary) retransmissions and multiplicative reductions
of the transmission window, resulting in reduced utilization of the allocated
bandwidth [15, 16].
11.4.4 Asymmetry
Asymmetry with respect to TCP performance occurs if the achievable through-
put is not solely a function of the link and traffic characteristics of the forward
direction, but is also significantly dependent on those of the reverse direction [17]. The asymmetry can affect the performance of TCP because TCP relies on feedback through ACK messages. Asymmetry in network bandwidth can result in variability in the ACK feedback returning to the sender on the reverse path. A very simplified example is given to show the effect of
asymmetry on TCP performance. Assume we are downloading a file using a link
with 384 kbps in the DL and 8 kbps in the UL. Further, assume that 1,500-byte
segments are used in the DL while the only traffic in the UL is 40-byte ACKs.
The time required for transmission of one data packet is 1,500*8/384 ms, that
is, 31.25 ms while the time required for one ACK is 40*8/8 = 40 ms. Since TCP
is ACK-clocked, that means at the steady state it cannot send more than it can
receive ACKs. Therefore, the effective data rate of the link is reduced from 384
kbps to 300 kbps (384*31.25/40) [15].
Generally, a normalized bandwidth ratio, defined as the ratio of the trans-
mission time required for ACK packets to that of data packets [17], can be used
to quantify the asymmetry measure. When this ratio is greater than one, the asymmetry will reduce the effective bandwidth available to TCP. A compromise can be made by using delayed ACKs or another cumulative ACK scheme in order to stop the reverse link from being the bottleneck, but this will further slow down the TCP's slow start phase.
the results presented in [22] show that, compared with that of the AM bearer, the utilization of the UM bearer is 75% with a loss probability on the order of 10^-4, 25% with a loss probability on the order of 10^-3, and at worst only 12% with a loss probability on the order of 10^-2. Since the RLC PDU loss probability after the HARQ in LTE is on the order of 10^-3, the UM bearer is thus not appropriate for TCP applications.
When the packet loss rates over the link stay below 10%, TCP goodput
suffers about 10% (TCP Reno or Vegas). The degradation in the TCP goodput
becomes extremely high when the packet losses exceed 10% [23]. The com-
bined HARQ and RLC AM sublayers in LTE and HSPA networks result also in
overall reduced delays as the erroneous packets are retransmitted locally over the
UE and the base station (ENodeB) radio link, which, in turn, results in further
improvement in the TCP performance.
Since the link layer solutions are normally TCP unaware, it is important
to configure the parameters to provide as much as possible of the reliability required for TCP operation and to prevent inefficient interactions between the two protocols. The optimal performance is achieved when the bandwidth provided by the RLC, or the HARQ-RLC combination in the case of LTE and HSPA networks, which can vary over time due to the local retransmissions, is fully utilized by TCP. Nonoptimal utilization results when the TCP sender leaves the link idle or retransmits the same data that the RLC is already retransmitting. The link layer tuning for TCP performance optimization requires consideration of a number of parameters and measures, which are discussed in the following sections.
a smaller interleaving depth in the case of the more bursty residue errors result
in reduced end-to-end data transmission delays, as confirmed by simulations
presented in [10]. The reduced end-to-end delays can help to reduce the num-
ber of TCP packet retransmission timeouts, which would trigger unnecessary
congestion control and retransmission timer back-offs. Therefore, choosing the
optimum interleaving configuration for a given throughput and the achievable error rates is one area of experimentation within the vendor's implementation
flexibility. The trade-offs are the reduced delays achieved with shorter interleav-
ing depths, and the achievable packet error rates in each scenario.
Timer Parameters
These are parameters within the link layer protocols that basically control the
retransmission timing of protocol data units not yet received or received in er-
ror. In the 3G UMTS networks, where only the RLC is implemented, this is
referred to as the retransmission timer, which controls the ARQ retransmission timeout for retransmitting lost RLC blocks in acknowledged mode. This
timer determines the number of radio frames waited before a dropped block is
retransmitted and impacts the retransmission delay. The retransmission delays
at the RLC level, in turn, are reflected in the end-to-end transfer delays seen by
the TCP, which can result in the triggering of congestion avoidance measures
and loss of throughput. Therefore, it is important to pay attention to the retransmission timeout values that are configured for the RLC and make sure that they are configured to properly reflect the expected delay on the radio link under
normal conditions. The RLC ARQ RTT is influenced by many factors such
as t_Reordering, status prohibit timer (t_StatusProhibit), and UL scheduling
(e.g., scheduling request, buffer status reporting). This timer is used by the
receiving side of an AM RLC entity in order to detect the loss of RLC PDUs
at the MAC layer. If the RLC receiver detects a gap in the sequence of received
PDUs based on the RLC sequence number, it starts a reordering timer and
assumes that the missing packet is still being retransmitted by the HARQ protocol sublayer, as is the case with LTE. In the rare case that the reordering timer
expires, usually in a HARQ failure case, an RLC acknowledged-mode (AM)
receiver sends a status message comprising the sequence number of the missing
RLC PDU(s) to its transmitting peer entity and requests a retransmission that
way. The ARQ function of RLC AM sender then performs retransmissions to
fill up the gap and reassemble and deliver the RLC SDUs to the upper layer in
sequence. The value of this reordering timer has a significant impact on the RTT seen by the end-to-end TCP and can result in spurious timeouts, which trigger unnecessary throttling of the sender and loss of throughput. Measurements carried
out on LTE testbeds and presented in [22] show that a setting of 0 ms for the reordering timer almost always achieves better throughput than larger t_Reordering settings for every PDU loss probability case over the radio link. For example, the results show that the 0-ms t_Reordering case results in 4%, 9%, and 32% gains compared with a setting of 30 ms for PDU loss probabilities on the order of 10^-4, 10^-3, and 10^-2, respectively. These results basically
point out that decreasing the RLC ARQ RTT helps to increase the throughput
of TCP applications.
Figure 11.1 The positive influence of RLC retransmissions on TCP goodput. (Reproduced
from [10].)
acknowledgments that could trigger the fast recovery procedures. The RLC re-
transmissions are particularly beneficial for TCP-based applications when the
delays over the wireless link are small compared to the delays incurred within
the fixed core network. Therefore, the number of maximum allowed link level
retransmissions should be set to the highest possible value. Also for effective re-
transmissions at the link layer, it is important to have the granularity of the link
layer timers much finer compared to the TCP timers. Otherwise, contention
between the two timers can reduce the throughput.
However, this can potentially become problematic if the wireless delays
are predominant. If the wireless delays are predominant, a high value for al-
lowed number of retransmissions can result in high RTT and RTT variations
as seen by the TCP. The high RTT variations are caused by the changing radio channel conditions, which result in a varying number of retransmissions until a packet is successfully transmitted over the radio link. The high variations in RTT can result in unstable RTO estimates and trigger TCP retransmission timeouts, which result in competing TCP recovery; that is, the TCP starts its congestion control mechanisms and retransmits packets that are already under retransmission by the RLC. However, for low residue error rates on the radio
channel, the RTT jitter due to retransmissions will be small, and hence there
will be less likelihood of high competing interactions between TCP and RLC.
In fact, the simulation results presented in [10] confirm a high degree of ro-
bustness of the TCP timeout mechanisms and a nonexistence of competing
retransmissions even on wireless links with high delays. This is particularly the
case when the RTO estimation algorithm considers several consecutive RTT
samples as well as their variances.
The problem of competing error recovery between TCP and other reliable
link layer protocols resulting from spurious timeouts at the TCP sender is also
investigated in [24]. The same conclusion as in [25] is reached by performing measurements during bulk data transfers, with very few timeouts due to link layer delays. The measurements performed in [25] indicate most TCP timeouts
are due to the radio link protocol resets resulting from unsuccessful transmis-
sion of data packets under weak signal strengths; TCP was seen to provide a
very conservative retransmission timer allowing for extra delays due to link layer
error recovery beyond 1,200 ms. This is the upper range of typical round trip
time measurements observed in some of the existing UMTS networks. Mea-
surements performed on a major provider's UMTS network [26] over different
times of the day have indicated that only 10% of the measured RTTs exceeded
1 second. Therefore, the number of RLC retransmissions can be set to a high
value to make the link more reliable, without causing excessive link delays that
would be seen by TCP. In the case of LTE as was discussed in Chapter 3, the
more effective combined HARQ and RLC within the eNodeB helps to reduce
the retransmission delays and the error rates further and allow proper setting
between the number of retransmissions at the MAC HARQ level and the more
costly RLC ARQ retransmissions to provide a more reliable radio link for im-
proved TCP performance.
Setting the effective number of link layer retransmissions to an adequately
large number will also help to prevent link resets. Link resets can have a major
impact on TCP performance when TCP header compression is implemented to
reduce the per segment overhead. Header compression uses differential encod-
ing schemes which rely on the assumption that the encoded “deltas” are not lost
or reordered on the link between the compressor and the decompressor. Lost
deltas lead to generation of TCP segments with false headers at the decompres-
sor. Thus, once a delta is lost, an entire window worth of data is lost and has to
be retransmitted. This effect is worsened as the TCP receiver will not provide
feedback for erroneous TCP segments and forces the sender into timeout. The
RLC link resets result in a failure of the TCP/IP header decompression, which,
in turn, causes the loss of an entire window of TCP data segments. Therefore,
making the link layer retransmissions persistent has multiple benefits when
transmitting TCP flows. Finding a reasonable value for the link layer retrans-
missions is a challenge of TCP optimization over wireless links.
On the upper side, any packets transmitted beyond the BDP limit in one RTT are buffered within the network nodes, particularly in the RLC buffer. Since the wireless link is the bottleneck link in the network, buffering
within the fixed part of the network is assumed to be negligible. This implies
that all packets transmitted over the connection capacity (BDP) in one RTT are
buffered within the RLC buffer. Therefore, to prevent overflow of the network
buffering capacity, we should ensure that the receiver window is upper limited
by the following relation
in which the value of BDP can be estimated based on typical measured RTTs
in the network.
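A minimal sketch of these bounds is given below. It assumes that the upper limit is the BDP plus the RLC buffer available to the flow, which is the natural reading of the relation referred to above; the link rate, RTT, and buffer size are illustrative values:

```python
# Sketch of the receive-window bounds discussed above, under the assumption
# that the upper limit is the BDP plus the RLC buffer available per flow
# (packets in excess of the BDP queue in the RLC buffer at the eNodeB).

def receive_window_bounds(bandwidth_bps, rtt_s, rlc_buffer_bytes):
    bdp = bandwidth_bps * rtt_s / 8.0
    lower = bdp                       # keep the radio link fully utilized
    upper = bdp + rlc_buffer_bytes    # avoid overflowing the RLC buffer
    return lower, upper

low, high = receive_window_bounds(20e6, 0.060, 150_000)
print(f"BDP ~ {low/1024:.0f} KB; receive window should lie between "
      f"{low/1024:.0f} KB and {high/1024:.0f} KB")
```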
The above is based on a static setting of the TCP receiver window. How-
ever, in the radio environment the effective BDP and packet queuing incurred
within the network can change over the course of a connection due to changing
radio channel conditions, mobility, and changing delays caused by rate varia-
tions and channel scheduling in 3G. Then, for small RLC buffer sizes, setting the receiver window statically to the buffer size results in significant underutilization of the link, since the full buffer of packets is not able to absorb the
variations over the wireless link. The Next Generation TCP/IP stack supports
Receive Window Auto-Tuning. Receive Window Auto-Tuning continually de-
termines the optimal receive window size by measuring the BDP and the ap-
plication retrieve rate and adjusts the maximum receive window size based on
changing network conditions.
A dynamic window regulation approach is presented in [29], for improv-
ing the end-to-end TCP performance over CDMA 2000-1X based 3G wireless
links. Since TCP inserts data into the network up to the minimum of the TCP transmission window (i.e., the congestion window) and the receiver window, it is possible to regulate the flow to the maximum link utilization while
preventing excess network buffering by dynamically adjusting the TCP window
in real time. However, TCP adjusts its transmission window in response to the
ACK messages received from the receiver, but basically regardless of the BDP;
TCP is not aware of the connection bandwidth-delay product. The dynamic
window regulator algorithm proposed in [29] uses a strategy based on regulat-
ing the release of the ACK messages, in response to packet buffering status for
instance in the base station, to adaptively adjust the TCP transmit window to
the optimum range given by (11.6). The algorithm provides the advantage of
dynamically adjusting the window size to the connection’s varying BDP. The
algorithm is shown to result in highest TCP goodput and performance for even
small amounts of buffer per TCP flow (at the RLC link) and minimize packet
losses from buffer overflows.
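The idea can be sketched as follows. This is an illustration of the general principle rather than the exact algorithm of [29]: the regulator advertises, via the released ACKs, a window that tracks the current BDP plus a small buffering allowance, so the sender's in-flight data follows the minimum of its congestion window and that advertised window:

```python
# Illustrative sketch (not the exact algorithm of [29]): a regulator at the
# base station releases ACKs advertising a window that tracks the current BDP
# plus a small per-flow buffer allowance, so the sender's in-flight data
# (min(cwnd, advertised window)) follows the varying link capacity.

def regulated_window(link_bps, rtt_s, buffer_allowance_bytes):
    bdp = link_bps * rtt_s / 8.0
    return int(bdp + buffer_allowance_bytes)

def in_flight(cwnd_bytes, advertised_window_bytes):
    return min(cwnd_bytes, advertised_window_bytes)

cwnd = 500_000                                 # sender's congestion window
for link in (5e6, 15e6, 2e6):                  # varying radio link rate (bps)
    adv = regulated_window(link, 0.080, 30_000)
    print(f"link {link/1e6:4.1f} Mbps -> advertised window {adv//1024:4d} KB, "
          f"in flight {in_flight(cwnd, adv)//1024:4d} KB")
```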
total, whereas the hash value can come down to as little as 16 bits long. The
generated TCP ACK in the base station Proxy is not immediately released into
the core network, but stored locally in a hash table at the eNodeB. Meanwhile,
the original TCP data packet continues its path towards the UE where a similar
hash algorithm is executed to associate the TCP packet with an identical hash
index. The client proxy is implemented as a stand-alone module on top of
the MAC sublayer and can be introduced at the driver level or inside network
interface firmware. In this way, the TCP ACK from the UE is replaced with a
simple hash index, which not only saves bandwidth over the radio link, but also
reduces the error probability due to its much shorter length. Using the hash
index as a short MAC layer request, the client proxy within the UE can request
eNodeB to release a given TCP ACK into the network core in a way that is
consistent with the employed acknowledgment strategy (delayed or selective
acknowledgment). The UE transmits the computed hash index to the eNo-
deB using an interface between the ARQ client and the ARQ proxy defined at
the MAC layer. This reduces the RTT for the time associated with TCP ACK
transmission over the radio link, including uplink bandwidth reservation delay,
which depends on the UE’s state (being active or idle), and the framing used.
This reduction is typically on the order of tens of milliseconds [32]. However, in case the MAC layer does not succeed in passing the hash value and the TCP ACK arrives at the physical layer, it is transmitted as in the legacy implementation (i.e., with no ARQ proxy enabled). It is seen that the proxy solution does not violate the end-to-end TCP semantics, since it is the UE that triggers the TCP ACK transmissions from the eNodeB.
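For illustration, a short hash index over the fields that identify a TCP ACK could be computed as below; the CRC-based 16-bit hash and the chosen key fields are assumptions for the example, since [31] does not prescribe a particular hash function:

```python
# Illustrative 16-bit hash index for a TCP ACK, keyed on the connection
# 4-tuple and the acknowledgment number. The hash function is an assumed
# choice for illustration only.

import zlib

def ack_hash_index(src_ip, dst_ip, src_port, dst_port, ack_number) -> int:
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{ack_number}".encode()
    return zlib.crc32(key) & 0xFFFF            # fold to 16 bits

# The eNodeB proxy stores the generated ACK under its hash; the UE later sends
# only this 16-bit index to request release of the full ACK into the core.
idx = ack_hash_index("10.0.0.5", "172.16.1.9", 443, 53012, 1_234_567)
print(f"hash index: 0x{idx:04x} (2 bytes instead of a full TCP ACK packet)")
```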
Each TCP ACK located in the hash table in the eNodeB has a lifetime that
is assigned at the moment of TCP ACK generation. In case the lifetime, which
can be set to the TCP timeout, is exceeded, the packet is silently dropped from
the hash table. This mechanism ensures that the eNodeB memory resources are freed for TCP ACKs not requested by the UE. This happens when the TCP data packet arrives out of order or when the UE implements a delayed-ACK scheme.
Other scenarios are handover cases, in which the hash table stored by the ARQ
proxy in the old eNodeB is not transferred to the new eNodeB and is deleted.
In this way, after a location update, the UE will send the hash values for only
those TCP ACKs that correspond to packets received from the new eNodeB.
Performance evaluation results presented in [31] demonstrate that the
ARQ proxy approach is able to provide an uplink capacity increase, sustain high error rates, and bring a TCP performance increase due to the reduced RTT. Furthermore, it can be incrementally deployed in already operational networks where UEs and eNodeBs implementing the ARQ proxy approach coexist with those that do not. For example, in case a UE does not include the ARQ client module, none of the TCP ACKs generated at the eNodeB are requested using their hash values, and they are simply deleted after their lifetimes expire. However, if an
eNodeB does not implement the ARQ proxy solution, all bandwidth allocation
requests sent by UEs will be rejected and the original TCP ACK packets will
transit over the radio channel. This follows from the fact that whichever comes first at the physical layer (the TCP ACK or its hash value) will be transmitted over the radio channel.
An Extension to SACK
An extension to the Selective Acknowledgment (SACK) option for TCP, defined in RFC 2883, allows the receiver to indicate up to four noncontiguous
blocks of received data. RFC 2883 defines an additional use of the fields in the
SACK TCP option to acknowledge duplicate packets. This allows the sender
of the TCP segment containing the SACK option to determine when it has
retransmitted a segment unnecessarily and adjust its behavior to prevent future
retransmissions. The reduced unnecessary retransmissions result in better over-
all throughput.
compared to for instance the more commonly used TCP Reno. TCP Vegas has
been implemented in the Linux kernel and FreeBSD. The conventional TCP implementations include TCP Tahoe, TCP SACK, TCP Reno, and New
Reno. The options provided by these implementations are shown in Table 11.1
for reference purposes.
Table 11.1
Major TCP Implementations and Features

Feature                              TCP Tahoe   TCP Reno   New Reno   TCP SACK
Slow start                           Yes         Yes        Yes        Yes
Congestion avoidance                 Yes         Yes        Yes        Yes
Fast retransmit                      Yes         Yes        Yes        Yes
Fast recovery                        No          Yes        Yes        Yes
Enhanced fast retransmit-recovery    No          No         Yes        No
SACK                                 No          No         No         Yes
An example of this kind of solution for LTE is presented in [39], which inves-
tigates the performance of a split connection TCP proxy deployed in LTE’s
S-GW node. Numerical results show significant performance improvements for file downloading, Web browsing, and video-streaming applications in the case of noncongested transport networks.
11.5.6.2 Mobile TCP
Although it uses a split connection, Mobile TCP (M-TCP) preserves the end-
to-end semantics [40] and aims to improve throughput for connections that
exhibit long periods of disconnection. M-TCP adopts a three-level hierarchy.
At the lowest level, mobile hosts communicate with mobile support stations
in each cell, which are, in turn, controlled by a supervisor host. The supervi-
sor host is connected to the wired network and serves as the point where the
connection is split. A TCP client exists at the supervisor host. The TCP client
receives the segment from the TCP sender and passes it to an M-TCP client to
send it to the wireless device. Thus, between the sender and the supervisor host,
standard TCP is used, while M-TCP is used between the supervisor host and
the wireless device. M-TCP is designed to recover quickly from wireless losses
due to disconnections and to eliminate serial timeouts. In the case of disconnec-
tions, the sender is forced into the persist state by receiving persist packets from
M-TCP. While in the persist state, the sender will not suffer from retransmit
timeout, it will not exponentially back off its retransmission timer, and it will
preserve the size of its congestion window. Hence, when the connection re-
sumes following the reception of a notification from M-TCP, the sender is able
to transmit at its previous speed. TCP on the supervisor host does not transmit
ACK packets it receives until the wireless device has acknowledged them. This
preserves the end-to-end semantics and preserves the sender timeout estimate
based on the whole round trip time.
it is recommended [45] not to use URL switching unless there is a clear need
for it. Some load balancers do not allow HTTP 1.1 when URL switching is
involved.
References
[1] Rahnema, M., UMTS Network Planning, Optimization and Inter-Operation with GSM,
New York: John Wiley & Sons, 2008.
[2] Allman, M., V. Paxson, and W. Stevens, “TCP Congestion Control,” RFC 2581, April
1999.
[3] Postel, J., “Transmission Control Protocol – DARPA Internet Program Protocol Specifica-
tion,” RFC 793, September 1981.
[4] Karn, P., and C. Partridge, “Improving Round-Trip Time Estimation in Reliable Transport
Protocols,” ACM Transactions on Computer Systems, 1991.
[5] Paxson, V., and M. Allman, “Computing TCP’s Retransmission Timer,” RFC 2988, No-
vember 2000.
[6] Jacobson, V., and M. J. Karels, “Congestion Avoidance and Control,” ACM SIGCOMM,
November 1988.
[7] Stevens, W. R., TCP/IP Illustrated, Volume 1: The Protocols, Reading, MA: Addison-Wesley
Professional Computing Series, 1994.
[8] Stevens, W., “TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms,” RFC 2001, January 1997.
[9] Kumar, A., “Comparative Analysis of Versions of TCP in a Local Network with Lossy
Link,” IEEE/ACM Transactions on Networking, August 1998.
[10] Lefevre, F., and G. Vivier, “Optimizing UMTS Link Layer Parameters for a TCP
Connection,” Proc. IEEE Conf. on Vehicular Technology, 2001.
[11] Chan, M. C., and R. Ramjee, “TCP/IP Performance over 3G Wireless Links with Rate
and Delay Variation,” Proc. ACM MobiCom, 2002, pp. 71–78.
[12] Chakravorty, R., J. Cartwright, and I. Pratt, “Practical Experience with TCP over GPRS,”
Proc. of IEEE Globecom, November 2002.
[13] 3GPP TS 36.133, “Requirements for Support of Radio Resource Management, Release 8,
V8.9.0,” 2010.
[14] Nguyen, B., et al., “Towards Understanding TCP Performance on LTE/EPC Mobile
Networks,” School of Computing, University of Utah, AT&T Labs – Research.
[15] Teyeb, O., and J. Wigard, “Future Adaptive Communications Environment,” Department
of Communication Technology, Aalborg University, June 11, 2003.
[16] Inamura, H., et al., “TCP Over Second (2.5G) and Third (3G) Generation Wireless
Networks,” RFC 3481, February 2003.
[17] Balakrishnan, H., et al., “The Effects of Asymmetry on TCP Performance,” ACM Mobile
Networks and Applications, 1999.
[18] Fahmy, S., et al., TCP over Wireless Links: Mechanisms and Implications, Technical report
CSD-TR-03-004, Purdue University, 2003.
[19] Alcaraz, J. J., F. Cerdan, and J. García-Haro, “Optimizing TCP and RLC Interaction in
the UMTS Radio Access Network,” IEEE Network, March/April 2006.
[20] 3GPP TS 25.322, “RLC Protocol Specifications,” 1999.
[21] Sandrasegaran, K., et al., “Analysis of Hybrid ARQ in 3GPP LTE Systems,” 16th Asia-
Pacific Conference on Communications (APCC), November 2011, pp. 418–423.
[22] Park, H. -S., J. -Y. Lee, and B. -C. Kim, “TCP Performance Issues in LTE Networks,”
International Conference on ICT Convergence (ICTC), September 2011, pp. 493–496.
[23] Pavilanskas, L., “Analysis of TCP Algorithms in the Reliable IEEE 802.11b Link,”
Proceedings 12th International Conference ASMTA, 2005.
[24] Balakrishnan, H., et al., “A Comparison of Mechanisms for Improving TCP Performance
over Wireless Links,” Proc. SIGCOMM’96, August 1996.
[25] Kojo, M., et al., “An Efficient Transport Service for Slow Wireless Telephone Links,” IEEE
JSAC, Vol. 15, No. 7, pp. 1337–1348.
[26] Vacirca, F., F. Ricciato, and R. Pilz, “Large-Scale RTT Measurements from an Operational UMTS/GPRS Network,” IEEE WICON 2005, Budapest, Hungary, June 2005.
[27] Ludwig, R., and B. Rathonyi, “Multi-Layer Tracing of TCP over a Reliable Wireless Link,”
Proceedings of ACM Sigmetrics, 1999.
[28] Braden, B., et al., “Recommendation on Queue Management and Congestion Avoidance
in the Internet,” RFC 2309, April 1998.
[29] Chan, M. C., and R. Ramjee, “Improving TCP/IP Performance over Third Generation
Wireless Networks,” Proc. IEEE INFOCOM’04, 2004.
[30] Sanchez, R., J. Martinez, and R. Jarvela, “TCP/IP Performance over EGPRS
Networks,” December, 2002, europe.nokia.com/downloads/aboutnokia/.../mobile_
networks/MNW16.pdf.
[31] Kliazovich, D., et al., “Cross Layer Error Control Optimization in 3G LTE,” IEEE Global
Telecommunications Conference (GLOBECOM), December 2007, pp. 2525–2529.
[32] 3GPP, R2-061866, “Non-Synchronized Random Access in E-UTRAN,” Ericsson,
www.3gpp.org.
[33] Fall, K., and S. Floyd, “Simulation Based Comparison of Tahoe, Reno, and Sack TCP,”
Computer Communication Review, Vol. 26, No. 2, July 1996, pp. 5–21.
[34] Sinha, P., et al., “WTCP: A Reliable Transport Protocol for Wireless Wide-Area Networks,”
ACM Mobicom ’99, Seattle, WA, August 1999.
[35] Benko, P., G. Malicsko, and A. Veres, “A Large-Scale, Passive Analysis of End-to-End TCP
Performance over GPRS,” Proc. IEEE INFOCOM, March 2004, pp. 1882–1892.
[36] Ludwig, R., and H. Katz, “The Eifel Algorithm: Making TCP Robust Against Spurious
Retransmissions,” ACM Computer Communications Review, January 2000.
[37] Abed, G., M. Ismail, and K. Jumari, “Appraisal of Long Term Evolution System with
Diversified TCP’s,” 5th Asia Modelling Symposium (AMS), May 2011, pp. 236–239.
[38] Bakre, A., and B. R. Badrinath, “Handoff and System Support for Indirect TCP/IP,”
Second USENIX Symposium on Mobile and Location-Independent Computing Proceedings,
Ann Arbor, MI, April 1995.
[39] Farkas, V., B. Héder, and S. Nováczki, “A Split Connection TCP Proxy in LTE Networks,”
Information and Communication Technologies, Volume 7479 of the series Lecture Notes in
Computer Science, 2012, pp. 263–274.
[40] Brown, K., and S. Singh, “M-TCP: TCP for Mobile Cellular Networks,” ACM Computer
Communication Review, Vol. 27, No. 5, October 1997, pp. 19–43.
[41] Wang, K. Y., and S. K. Tripathi, “Mobile-End Transport Protocol: An Alternative to TCP/
IP over Wireless Links,” INFOCOM, San Francisco, CA, March/April 1998, p. 1046.
[42] Tsaoussidis, V., and H. Badr, “TCP-Probing: Towards an Error Control Schema with
Energy and Throughput Performance Gains,” Proceedings of ICNP, 2000.
[43] Parsa, C., and J. J. Garcia-Luna-Aceves, “Improving TCP Congestion Control over
Internets with Heterogeneous Transmission Media,” Proc. of the 7th Annual International
Conference on Network Protocols, Toronto, Canada, November 1999.
[44] Fielding, R., et al., “Hypertext Transfer Protocol-HTTP 1.1, RFC 2616,” 1999.
[45] Kopparapu, C., Load Balancing Servers, Firewalls, and Caches, New York: John Wiley &
Sons, 2002.
12
Voice over LTE
LTE was originally regarded as a broadband IP-based cellular system for carry-
ing data services. It was expected that the operators would be able to carry voice
either by reverting to circuit switching over 2G/3G systems or by using VoIP
in one form or another. However, this would necessitate the preservation of the
existing circuit-based networks of 2G/3G in the longer term and prevent the
simplicity and cost-effectiveness of having a single network to handle all servic-
es. Therefore, it was thought that LTE should eventually be able to handle voice
calls through Voice over IP given that it is an all-IP network. Voice over IP is an
ideal application for IP multimedia subsystem (IMS) to provide a rich multime-
dia solution. Hence, the Groupe Speciale Mobile Association (GSMA) [1] chose the
IMS as a standardized means for providing the signaling functionality to sup-
port voice as well as SMS over LTE, which is briefly referred to as VOLTE. This
uses a reduced version of IMS, which provides the required functionality and
simplicity acceptable to the operators. The most important IMS service is the
multimedia telephony service (MMTel) and its support is mandated in VOLTE
specifications. An MMTel voice device can support any number of codecs, but
the list must include the adaptive multirate (AMR) codec that is also used in
3G networks, as well as the wideband adaptive multirate codec (AMR-WB)
if the optional wideband voice communication is supported. The AMR-WB
codec is a G.722.2 ITU-T speech codec, which is also specified in 3GPP TS
26.194 and uses a sampling rate of 16,000 samples per second with a resulting
compressed data rate of 23.85 kbps. This chapter will provide an overview of
the IMS architecture and its components and interfaces as used in VOLTE, the
IMS signaling protocols used, and will also show how some typical capacity
estimates are obtained for supporting Voice over IP in LTE. However, we will
begin the discussion by first explaining how efficient radio resource scheduling
can be carried out for packet voice communication in LTE.
Figure 12.1 A reduced version of the IMS main signaling architectural elements. (© 2014.
3GPP™ TSs and TRs are the property of ARIB, ATIS, CCSA, ETSI, TTA, and TTC.)
12.2.1 P-CSCF
The proxy CSCF (P-CSCF) is the mobile’s first point of contact with the IMS,
and secures the signaling messages for transmitting across the IP connectivity
access network by means of encryption and integrity protection. The P-CSCF
behaves like a proxy as defined in IETF RFC 3261 [10], which relays all SIP signal-
ing to and from the user whether in the home or a visited network. The P-
CSCF determines the serving CSCF in the home or visiting network, which is
then used to process the VOLTE calls.
12.2.2 S-CSCF
The S-CSCF performs a variety of functions as detailed in [4], which includes
user registration for access to services such as the voice calls. This module is
similar to the MME but is always located in the mobile’s home network and en-
sures that the user receives a consistent set of IMS services, even while roaming.
It is the home subscriber’s S-CSCF that processes the subscriber’s VOLTE calls
and is reached via the P-CSCF. The S-CSCF routes the SIP request via normal
IMS routing principles towards, for example, a server in the home or visited
access network using the ISC or Mx interfaces. The S-CSCF may exhibit user
agent-like behavior and maintain states for certain applications where the exact
behavior will depend on the SIP messages being handled, on their context, as
well as on the S-CSCF capabilities needed to support the services. The Stage 3 design is expected to determine the more detailed modeling in the S-CSCF.
Multiple service profiles can be defined in the HSS for a subscription. Each user is associated with
an IMS user profile, which is stored in the home subscriber server. The user
profile contains a set of service profiles that define the services such as multi-
media telephony and SMS. The service profile is downloaded from the HSS to
the S-CSCF, where it is associated with at least one public user identity and is
activated upon registration. The service profile may also be associated with a set
of initial filters, which define how the serving CSCF interacts with application
servers to invoke the service. Each public user identity can be associated with
only one service profile. Each service profile is associated with one or more pub-
lic user identities, which are registered at a single S-CSCF at one time. However, all public user identities of an IMS subscription should be registered at the same S-CSCF. The IM CN subsystem is capable of using the public service identity
for routing of IMS messages.
The application server (AS) executes the service specific logic as identified
by the public service identity. The AS can support IP voice services including
multimedia telephony, voicemail, and SMS. The users are able to manipulate
their application data across the signaling interface Ut. These application serv-
ers are stand-alone devices that are outside the LTE network and can also act
as interfaces to other application environments. For instance, the service ca-
pability server (SCS) type provides the open service access (OSA) across the
application programming interface (API) to third-party application developers.
There is also the IP multimedia service switching function (IM-SSF), which
provides access to a service framework for developing customized applications for the mobile environment based on CAMEL (customized applications for mobile networks enhanced logic), as explained in [6]. The AS can generally
provide complex and dynamic service logic that can, for instance, make use of
additional information that is not directly available via SIP messages such as
location, date, and time. The reader can refer to 3GPP TS 23.228 [4] for more
details on the functions of the IMS application server.
used for routing of SIP messages. The IP multimedia services identity module
(ISIM) within the UE universal integrated circuit card (UICC) should securely
store one private user identity, which cannot be modified by the UE. The
private user identity is a unique global identity that is defined and permanently
allocated by the home network operator, for identifying the user’s subscription
(IM service capability) and not the user. The private user identity specifies the
user’s authentication information, which is used, for instance, during registra-
tion and may be present in charging records based on operator policies. The
private user identity must be stored in the HSS from where it is obtained and
stored by the S-CSCF upon registration.
The public user identity is what is used in the request for communications
to other users and may be included, for instance, on a user’s business card. Every
IM CN subsystem user should be assigned one or more public user identities, which
should include one taking the form of a SIP URI [10]. The user public identity
is usually a SIP uniform resource identifier (URI), which identifies the user,
the network operator, and application using a format such as sip:username@
domain. The user public identity can also support traditional phone numbers
in two ways. These consist of an SIP URI that includes a phone number using
a format such as sip: +phone number@domain; and a Tel URI that describes a
stand-alone phone number [11]. The ISIM application within the UE should
securely store at least one public user identity, which should not be modifiable
by the UE. A public user identity should be registered either explicitly or im-
plicitly before originating or terminating IMS sessions. The public user identi-
ties may be used to identify the user’s information within the HSS during, for
example, mobile terminated session setup, but are not authenticated by the
network during a registration. It should be possible to register globally through, for instance, a single UE request from a user who has more than
one public identity via a mechanism within the IP multimedia CN subsystem,
for example, by using an implicit registration set. The public and the private
user identities are used during user registration with the IMS services. In the
registration process, the mobile sends its IP address and private user identity to
the serving CSCF and quotes one of its public identities. The serving CSCF
then contacts the home subscriber server, retrieves the other public identities,
and sets up a mapping between each of these fields. The user can then receive
incoming calls that are directed towards any of those public identities as well as
make outgoing calls.
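As an illustration of the identity formats discussed above, the short Python snippet below distinguishes SIP URIs from Tel URIs. The identities shown are hypothetical placeholders, and the private-identity layout is only one possible operator-defined format, not a value taken from the specifications.

```python
# Illustrative only: example IMS user identity formats. The identities below are
# hypothetical placeholders, not real subscriptions.
import re

public_identities = [
    "sip:alice@operator.example",         # SIP URI form (user@domain)
    "sip:+15551234567@operator.example",  # SIP URI embedding a phone number
    "tel:+15551234567",                   # Tel URI (stand-alone phone number)
]
private_identity = "user001@ims.operator.example"  # identifies the subscription and its authentication data

SIP_URI = re.compile(r"^sip:[^@\s]+@[^@\s]+$")
TEL_URI = re.compile(r"^tel:\+?[0-9]+$")

for identity in public_identities:
    if SIP_URI.match(identity):
        kind = "SIP URI"
    elif TEL_URI.match(identity):
        kind = "Tel URI"
    else:
        kind = "unrecognized"
    print(f"{identity} -> {kind}")

print(f"private user identity: {private_identity}")
```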
More details on user identity schemes and requirements can be found in
3GPP references [4, 12]. The mechanisms for addressing and routing for access
to IM CN subsystem services and issues of general IP address management are
discussed in 3GPP TS 23.221 [13]. Finally, it is noted that the VoLTE specifica-
tions insist that every network operator use the IMS well-known access point name, “IMS,” to ensure that a mobile can access the IMS while roaming [6].
The voice media is carried over a dedicated bearer with QoS class identifier (QCI) 1, which is set up at call origination and torn down when
the call is terminated. The voice packets are transported using IP, UDP, and the
Real-Time Transport Protocol (RTP). RTP supports the delivery of real-time
media over an IP network, by carrying out tasks such as labeling packets with
sequence number and timestamps. The RTP is described in IETF RFC 3550.
The description of the media content carried within the SIP signaling
messages are handled by the Session Description Protocol (SDP) originally
specified in [18]. The original version of SDP basically defined a media stream,
using session information such as the device’s IP address and media information
such as the media types, data rates, and the codecs used. Later on, the proto-
col was enhanced to an offer-answer model that allowed two or more parties
to negotiate the media and codecs to use [19]. This is the version used in the
IMS where the SIP requests and responses are used to carry the SDP offers and
answers.
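To give a feel for what such an offer looks like, the snippet below assembles a minimal SDP body of the kind carried in a SIP request. The IP address, port, and dynamic payload type number are placeholder values chosen for this sketch, and real IMS offers carry additional attributes negotiated per operator policy.

```python
# Illustrative only: a minimal SDP offer for an AMR voice stream of the kind
# exchanged in the SIP/SDP offer-answer model. The IP address, port, and
# dynamic payload type (97) are placeholder values.
sdp_offer = "\r\n".join([
    "v=0",                                               # protocol version
    "o=alice 2890844526 2890844526 IN IP4 192.0.2.10",   # origin
    "s=VoLTE call",                                      # session name
    "c=IN IP4 192.0.2.10",                               # connection address (device IP)
    "t=0 0",                                             # timing
    "m=audio 49170 RTP/AVP 97",                          # media line: audio over RTP
    "a=rtpmap:97 AMR/8000/1",                            # dynamic payload 97 = AMR at 8 kHz
    "a=ptime:20",                                        # one voice frame every 20 ms
])
print(sdp_offer)
```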
IMS uses the Extensible Markup Language (XML) Configuration Access
Protocol (XCAP) described in IETF RFC 4825 [20] on the Ut interface be-
tween the mobile and the application server. The client uses the XCAP to map
XML-formatted data stored on a server to HTTP uniform resource identifiers (URIs) for access using HTTP [21]. This allows devices to use XCAP to read
and modify information such as voicemail settings and supplementary service
configurations that are stored on the application servers. The Diameter applica-
tion protocols [22] are used for providing the communication means among
the HSS, the CSCFs, and the application servers.
Table 12.1
Estimated Downlink and Uplink Voice Capacities for Various Channel Conditions with AMR 12.2-kbps Voice Codec and 10-MHz Transmission Bandwidth

Channel Condition (CQI) | Modulation | Actual Number of Bits Used | Coding Rate | Actual Coding Rate Used | Voice Calls Supported on DL | Voice Calls Supported on UL
1 | QPSK | 2 | 0.078 | 0.078 | 56 | 52
2 | QPSK | 2 | 0.12 | 0.12 | 87 | 80
3 | QPSK | 2 | 0.19 | 0.19 | 138 | 127
4 | QPSK | 2 | 0.3 | 0.3 | 218 | 201
5 | QPSK | 2 | 0.44 | 0.44 | 320 | 295
6 | QPSK | 2 | 0.6 | 0.6 | 436 | 403
7 | 16QAM | 4 | 0.37 | 0.37 | 538 | 497
8 | 16QAM | 4 | 0.49 | 0.49 | 712 | 658
9 | 16QAM | 4 | 0.61 | 0.61 | 887 | 819
10 | 64QAM | 4 | 0.46 | 0.61 | 887 | 819

Comments: For channel conditions with CQIs of 10 to 15, the same modulation and coding scheme as for the channel condition with CQI of 9 was used. Moreover, the uplink capacities were calculated for the same channel conditions as given by the CQIs on the DL.
The capacity figures do not account for any reduction in the required resources due to voice silence statistics, but the full shared channels are assumed available for only the voice traffic.
The narrowband AMR 12.2 codec generates a voice packet every 20 ms
at 12.2 kbps when active, which results in 20 × 12.2 = 244 bits. This voice pay-
load is placed in an RTP/UDP/IP packet with 3 bytes of overhead after using
the IP header compression ROHC, which brings the packet size to 244 + 24 =
268 bits. This would pass through the three sublayers of the air interface proto-
col stack of MAC, RLC, and PDCP. The VoIP packet passes through these lay-
ers of the air interface protocol stack. We will assume an RLC header of 1 byte
(for unacknowledged mode operation using a 5-bit sequence number), a MAC
header of 2 bytes, and a PDCP header of 1 byte using short sequence number
resulting in 4 additional bytes of overhead, bringing the IP voice packet to 268 + 32
= 300 bits in total. Now as explained in Section 3.10.2, a downlink PRB con-
tains 120 resource elements that are modulation symbols assuming single-layer
transmission with a Tx diversity antenna. This assumes that all the first three
symbols in each subframe are taken for control channel and is a conservative
estimate, under the assumption of semipersistent scheduling. Next we will cal-
culate the number of PRBs needed to handle a voice call on the downlink side
for each channel condition (CQI) shown in column 1 of Table 12.1, but first
we do it for one channel condition instance to illustrate the procedure. Take,
for instance, the case for CQI of 1, which requires the QPSK modulation with
a coding rate of 0.078. The QPSK results in 2 bits for each resource element
(OFDMA symbol), and with the code rate of 0.078 provides 120 symbols/
PRB × 2 bits/symbol × 0.078 = 18.72 bits payload. This means with the 300
bits per voice packet, we will require 300/18.72 = 16.025 PRBs per 20 ms for
each voice call on the downlink side. This is true if every single packet with
the modulation scheme and coding assumed is received without any errors all
the time. However, in practice, some packets would be received in error and
would have to be retransmitted locally at the link level using the HARQ fast
retransmission to bring down the error rate at the receiving end. Note that
acceptable voice quality needs to have 98% to 99% of packets received with
no error after decoding. Assuming a reasonable 10% retransmissions at HARQ
level, the number of PRBs required per VoIP call per 20 ms would increase to
16.025 × 1.10 = 17.62. Now in the 1-ms subframe, there are 50 PRBs with the
10-MHz transmission bandwidth assumed, allowing 50 PRBs/17.62 PRBs per
user = 2.84 users. Since the AMR speech codec generates a new speech frame
only every 20 ms, we can have as many as 20 times more voice calls resulting
in 20 × 2.84 = 56.73, or roughly 56 users (we kept the fractions to simplify the
repeat of calculations for other cases). This procedure was repeated for other
channel conditions given in Table 12.1 except that for channel CQI of 10:15,
we used the same modulation and coding scheme as for CQI 9, that is, 16 QAM with a coding rate of 0.61, due to the fact that only UE category 5 can support 64 QAM, and such UEs may not be widely available. The results are shown in Table
12.1, column 6.
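The downlink procedure above can be reproduced with a few lines of Python. The constants are exactly the assumptions made in the text (300-bit voice packets, 120 usable resource elements per PRB, 10% HARQ retransmissions, 50 PRBs in 10 MHz, one packet per 20 ms), and the modulation and coding pairs are those of Table 12.1.

```python
# Reproduces the downlink estimate of Table 12.1 under the assumptions in the
# text: 300-bit VoIP packet (244-bit AMR 12.2 frame plus ROHC/PDCP/RLC/MAC
# overhead), 120 usable resource elements per PRB, 10% HARQ retransmissions,
# 50 PRBs (10 MHz), and one voice packet per 20 ms (20 subframes).
PACKET_BITS = 300
RE_PER_PRB = 120
HARQ_OVERHEAD = 1.10
PRBS_PER_SUBFRAME = 50
SUBFRAMES_PER_PACKET = 20

# (CQI, bits per modulation symbol actually used, actual coding rate used)
mcs_per_cqi = [
    (1, 2, 0.078), (2, 2, 0.12), (3, 2, 0.19), (4, 2, 0.3), (5, 2, 0.44),
    (6, 2, 0.6), (7, 4, 0.37), (8, 4, 0.49), (9, 4, 0.61), (10, 4, 0.61),
]

for cqi, bits_per_symbol, code_rate in mcs_per_cqi:
    payload_per_prb = RE_PER_PRB * bits_per_symbol * code_rate   # bits per PRB
    prbs_per_call = PACKET_BITS / payload_per_prb * HARQ_OVERHEAD
    calls = PRBS_PER_SUBFRAME / prbs_per_call * SUBFRAMES_PER_PACKET
    print(f"CQI {cqi:2d}: {int(calls)} voice calls on DL")
# CQI 1 gives 56 calls and CQI 9 and above give 887 calls, matching column 6.
```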
For the uplink side, the calculation of the signaling channel overhead
within each PRB is not as straightforward. As discussed in Chapter 4, we will
assume for the 10-MHz transmission bandwidth (i.e., 50 PRBs), a total of 6
PRBs are consumed for the uplink control channels. This might be less in the
case of VoIP due to the fact that semipersistent scheduling can be used, but it provides a more conservative estimate. That leaves a total of 50 – 6 = 44
PRBs for carrying the voice payload. There are two symbols per subframe per
subcarrier within each of these voice-carrying PRBs used for the demodulation
reference symbols and one for the sounding reference symbols. That leaves 168
– 36 = 132 symbols per PRB for carrying the voice payloads. Repeating then
the calculations as we did for the downlink side for the same channel conditions
(CQIs) given in column 1 of Table 12.1, we obtain the voice call capacities supported on the uplink, which are displayed in column 7 of Table 12.1. The uplink
voice capacity is seen to be somewhat lower than the downlink side, showing
that the voice capacity is limited by the uplink. The results are also plotted in
Figure 12.2.
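The uplink variant of the same calculation is sketched below using the figures quoted in the text (44 payload PRBs and 132 usable resource elements per PRB). With these inputs the script lands a few percent above the values listed in column 7 of Table 12.1, which appear to assume a slightly larger per-PRB overhead; the procedure, however, is the one described above.

```python
# Uplink counterpart of the downlink sketch, using the assumptions in the text:
# 44 of the 50 PRBs carry voice (6 reserved for UL control channels) and
# 168 - 36 = 132 resource elements per PRB remain after the demodulation and
# sounding reference symbols. With these inputs the results land slightly above
# column 7 of Table 12.1 (e.g., 54 versus 52 calls for CQI 1).
PACKET_BITS = 300
HARQ_OVERHEAD = 1.10
SUBFRAMES_PER_PACKET = 20
UL_RE_PER_PRB = 168 - 36      # usable resource elements per PRB
UL_PRBS = 50 - 6              # PRBs left for the voice payload

mcs_per_cqi = [
    (1, 2, 0.078), (2, 2, 0.12), (3, 2, 0.19), (4, 2, 0.3), (5, 2, 0.44),
    (6, 2, 0.6), (7, 4, 0.37), (8, 4, 0.49), (9, 4, 0.61), (10, 4, 0.61),
]

for cqi, bits_per_symbol, code_rate in mcs_per_cqi:
    payload_per_prb = UL_RE_PER_PRB * bits_per_symbol * code_rate   # bits per PRB
    prbs_per_call = PACKET_BITS / payload_per_prb * HARQ_OVERHEAD
    calls = UL_PRBS / prbs_per_call * SUBFRAMES_PER_PACKET
    print(f"CQI {cqi:2d}: {int(calls)} voice calls on UL")
```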
Assuming users experience all channel conditions given in the table with equal likelihood, and with no category 5 UEs available to support 64 QAM, a simple statistical averaging of the results in columns 6 and 7 of Table 12.1 indicates an average voice call capacity of 580 and 536 calls on the downlink and uplink, respectively, with the 10-MHz transmission bandwidth and the AMR 12.2 voice codec. However, higher capacities than indicated here can be expected if a voice activity factor of about 50% is accounted for and category 5 UEs are available to better take advantage of channel conditions for CQIs of 10 and higher.
Figure 12.2 Voice call capacities on DL and UL with AMR 12.2 kbps assuming no category 5 UEs available to support 64 QAM (these results assume 100% voice activity) in 10-MHz bandwidth (based on results of Table 12.1).
Simulation results presented in [23] show that the LTE capacity for voice over IP using the AMR codec at a rate of 12.2 kbps and a cell bandwidth of 5 MHz varies from 289 to 317 calls on the DL and from 123 to 241 calls on the UL under various voice scheduling schemes, control channel limitations, and channel conditions. Similar results are obtained via the simulations presented in [24] for a 5-MHz transmission bandwidth, which indicate averages of around 320 calls on the downlink and 240 calls on the uplink per sector. This is much higher than the UMTS voice call capacity, which is estimated at 50 voice calls under average channel conditions with AMR 12.2 kbps [25]. The referenced simulation results from [24] should basically be doubled to provide results for the 10-MHz bandwidth.
References
[1] Russell, N., “Official Document IR.92 - IMS Profile for Voice and SMS,” GSMA, March
2013.
[2] 3GPP TS 36.321, “Medium Access Control (MAC) Protocol Specification,” V10.10.0,
Release 10, 2009.
[3] 3GPP TS 36.331, “Radio Resource Control (RRC) Protocol Specification,” V10.0.0, Re-
lease 10, 2009.
[4] 3GPP TS 23.228, “IP Multimedia Subsystem (IMS); Stage 2,” Release 11, 2013.
[5] Camarillo, G., and M. -A. Garcia-Martin, The 3G IP Multimedia Subsystem (IMS): Merg-
ing the Internet and the Cellular Worlds, 3rd ed., New York: John Wiley & Sons, 2008.
[6] Noldus, R., et al., IMS Application Developer’s Handbook: Creating and Deploying Innova-
tive IMS Applications, New York: Academic Press, 2011.
[7] Cox, C., An Introduction to LTE, 2nd ed., New York: John Wiley & Sons, 2014.
[8] 3GPP TS 24.229, “IP Multimedia Call Control Protocol Based on Session Initiation Pro-
tocol (SIP) and Session Description Protocol (SDP); Stage 3,” Release 11, 2010.
[9] 3GPP TS 23.218, “IP Multimedia (IM) Session Handling; IM Call Model; Stage 2,”
Release 11, 2010.
[10] IETF RFC 3261, “SIP: Session Initiation Protocol,” June 2002.
[11] IETF RFC 3986, “Uniform Resource Identifier (URI): Generic Syntax,” January 2005.
[12] IETF RFC 3966, “The tel URI for Telephone Numbers,” December 2004.
[13] 3GPP TS 23.003, “Technical Specification Group Core Network; Numbering, Addressing
and Identification,” Release 10, 2009.
[14] 3GPP TS 23.221, “Architectural Requirements.”
[15] 3GPP TS 31.103, “Characteristics of the IP Multimedia Services Identity Module (ISIM)
Application,” Release 11, 2010.
[16] IETF RFC 3455, “Private Header (P-Header) Extensions to the Session Initiation Protocol
(SIP) for the 3rd-Generation Partnership Project,” January 2003.
[17] 3GPP TS 24.229, “IP Multimedia Call Control Protocol Based on Session Initiation
Protocol (SIP) and Session Description Protocol (SDP), Stage 3,” Release 11, 2010.
[18] IETF RFC 4566, “SDP: Session Description Protocol,” July 2006.
[19] IETF RFC 3264, “An Offer/Answer Model with the Session Description Protocol (SDP),”
June 2002.
[20] IETF RFC 4825, “The Extensible Markup Language (XML) Configuration Access
Protocol (XCAP),” May 2007.
[21] 3GPP TS 24.623, “Extensible Markup Language (XML) Configuration Access Protocol
(XCAP) over the Ut Interface for Manipulating Supplementary Services,” Release 11,
December 2012.
[22] 3GPP TS 29.229, “Cx and Dx Interfaces Based on the Diameter Protocol; Protocol
Details,” Release 11, 2013.
[23] 3GPP R1-072570, “Performance Evaluation Checkpoint: VoIP Summary,” 2007.
[24] Holma, H., et al., (eds.), LTE for UMTS, New York: John Wiley & Sons, 2009.
[25] Rahnema, M., UMTS Network Planning, Optimization and Inter-Operation with GSM,
New York: John Wiley & Sons, 2007.
13
LTE-Advanced Pro: Enhanced LTE
Features
The LTE system is still continuously being developed and enhanced with features that improve system performance and introduce new services, such as smart metering and vehicular communications, which pose significantly different requirements compared to mobile broadband, for which this system was originally designed. This chapter goes through the latest LTE standard developments and covers aspects such as link aggregation between two LTE nodes [i.e., dual connectivity (DC)], between licensed LTE and an unlicensed version of the LTE air interface [i.e., licensed-assisted access (LAA)], and tight interworking with WiFi [i.e., LTE-WiFi aggregation (LWA)1], along with massive CA, which aim at improvements in system capacity, higher throughputs, and connectivity robustness. Another set of functionalities cov-
ered in this chapter relates to machine-type communications, including nar-
rowband Internet-of-Things (NB-IoT) and device-to-device (D2D) commu-
nications. This chapter starts with the introduction to LTE-Advanced Pro and
outlines its features, which are discussed in detail later. Finally, a comparison
between the main evolutionary steps of LTE is provided together with through-
put calculations of LTE, LTE-Advanced, and LTE-Advanced Pro systems.
1. Besides LWA, LWIP and RCLWI features will also be covered so that the full scope of Release
13 interworking mechanisms is presented and evaluated.
• DC:2 This feature enables aggregation of two radio links over a nonideal backhaul without a low-latency requirement. To allow this, the links are aggregated at the PDCP level, where PDCP PDUs are combined, rather than the MAC-layer transport blocks that are aggregated under the CA feature. The links of a single macro cell and small cell are combined,
where the macro cell acts as a mobility and signaling anchor.
• LTE-WiFi aggregation (LWA): Like in DC, this feature provides link
aggregation. However, in this case, the secondary link is provided via
Carrier WiFi at the 2.4-GHz or 5-GHz ISM band, thus enabling tight
interworking between LTE and WLAN. The point of aggregation is
also at the PDCP level of the LTE anchor carrier. In Release 13, this
is only possible in the DL direction (i.e., the WiFi link serves as a sup-
plemental DL carrier). Release 14 specifies the UL transmission within
this framework under the enhanced LWA (eLWA) feature. Addition-
ally, complementary features for interworking with WiFi standardized
within Release 13 include RAN Controlled LTE-WLAN Interworking
(RCLWI)—switching the UP between LTE and WiFi, and LTE-WLAN
Radio Level Integration with IPsec Tunnel (LWIP)—link aggregation
with legacy WLAN.
• Licensed-Assisted Access (LAA): This is another option for aggregation of
radio links for capacity and throughput improvements. LAA aggregates
2. This feature has actually been standardized within Release 12, but we put it here as it fits
within the overall framework of the main LTE improvements within LTE-Advanced Pro.
the licensed primary LTE carrier with the secondary link using the new
LTE radio frame format 3 suited for unlicensed operation and fulfilling
the fair coexistence requirement with WiFi at the 5-GHz ISM band.
Like in LWA, the original proposal of LAA is for the DL aggregation
only. However, similar to eLWA, Release 14 proposes enhanced LAA
(eLAA) that adds UL support for the unlicensed spectrum link aggrega-
tion.
• Massive carrier aggregation (massive CA): This extends the regular CA
feature towards a larger number of component carriers, including the li-
censed and unlicensed bands. This feature, as per Release 13, enables
combination of up to 32 CCs, which theoretically provides up to 640
MHz of aggregated bandwidth (BW). Each of the CCs complies with
the LTE Release 8 channel BWs (and LAA frame type) and supports
backward compatibility.
Small cells are deployed as full-fledged eNodeBs using low transmit power, thus providing much low-
er coverage. In the initial design, the UE could be connected to either a macro
cell or a small cell; thus, a handover (HO) procedure needs to be involved for
the cell change in the RRC_CONNECTED state. In the case of outdoor mo-
bile use, this may result in frequent handovers and requires fine optimization of
the handover parameters between those cells; otherwise, a lot of RLFs could be
experienced, especially in the case of interfrequency deployments (i.e., macro
cells deployed on different frequencies than small cells). However, if they are
both deployed on the same frequency, as small cells are within the macro sites’
coverage, interference is a major issue. This was addressed by the enhanced
intercell interference coordination (eICIC) feature provided within Release 10,
by decreasing the macro-cell power or even blanking the subframes for the cell-
edge users served by the small cells. Yet another aspect comes with the require-
ment of improving user data rates, which was initially covered by CA (Release
10) and CoMP (Release 11). In an intersite deployment, both require a very
tight backhaul latency (the ideal backhaul) as the multiple transmission points
must be in sync. This is due to the fact that resource aggregation is provided on
the MAC layer; thus, a single scheduler is responsible for scheduling resources
within the same TTI on different component carriers (CA) or the same carrier
(CoMP).
All the above issues were taken into account within Release 12 when dual
connectivity3 was introduced. The goals that DC is addressing include the fol-
lowing [2]:
3. As mentioned, DC is in fact a Release 12 feature; however, it fits into the overall framework of the LTE-A Pro advancements, and many enhancements are delivered within Release 13; therefore, we treat and describe it within the LTE-Advanced Pro context.
Figure 13.1 DC: (a) CP, (b) UP, and (c) protocol architecture. [Figures 13.1(a, b) modified from [3]. Figure 13.1(c) © 2016. 3GPP™ TSs and TRs are the property of ARIB, ATIS, CCSA, ETSI, TTA, and TTC.]
Table 13.1
SCG Bearer Versus Split Bearer*

SCG bearer, advantages: No need for the MeNodeB to buffer or process packets of the SeNodeB bearer; no need to route traffic to the MeNodeB; no need for flow control functionality.
SCG bearer, disadvantages: SeNodeB mobility visible to the CN; reconfiguration needs to go via the MME, thus cannot be very dynamic; utilization of radio resources between MeNodeB and SeNodeB for the same bearer not possible; at SeNodeB change, handover-like interruption time.
Split bearer, advantages: SeNodeB mobility hidden from the CN; utilization of radio resources between MeNodeB and SeNodeB for the same bearer possible (flexible resource allocation); reconfiguration is performed at the RAN level (dynamic); at SeNodeB change, no interruption time (the PDUs can be steered via the MeNodeB at SeNodeB change).
Split bearer, disadvantages: Need to buffer and process the packets of the split bearer at the MeNodeB; need to route traffic via the MeNodeB; need for flow control (i.e., an additional "scheduler" at PDCP).

*Based on input from [2].
• There are independent C-RNTIs allocated to the UE: one for MCG
and one for SCG (the SeNodeB configures the SCG C-RNTI, but as it
does not have the RRC signaling connection towards UE, the configura-
tion goes via MCG bearer).
• There are separate DRX configurations that can be applied to MCG
and SCG (i.e., CG-specific DRX operation applies to all configured and
activated serving cells in the same CG).
• Frame timing and system frame number (SFN) are aligned among the
CCs of the same CG and may or may not be aligned among different
CGs.
• There is one MAC entity and thus MAC scheduler per CG.
• PUCCH is transmitted only in PCell (MCG) and PSCell (SCG).
• The Timing Advance Group (TAG) is configured per CG, and expiry
of one TAG from one CG does not imply expiry of the TAG in the other
CG.
However, there is also an aspect that binds these two links together, that is, the measurement gap, which is configured as common, covering both the MeNodeB and the SeNodeB (i.e., when the measurement gap is configured, the UE cannot receive from either the MCG or the SCG).
• In the split bearer case, DC requires flow control (i.e., additional RRM
algorithm, scheduling the PDCP PDUs to different links).
• The scheduling of individual packets at each link is performed sepa-
rately and independently of each other and thus can be optimized at
each link according to individual channel characteristics.
• The advantage of this design (i.e., single RRC connection at macro site
and possibility to flexibly assign UP data to different links) is that the
signaling overhead can be decreased by means of reduction of number
of handovers, as the CP/UP split concept can be realized by keeping
the user context at MeNodeB, while flexibly allocating radio resources
among MeNodeB and different SeNodeBs.
• There is a single link for signaling and it is always at MeNodeB, so
there is a risk of dropping the connection when MeNodeB link deterio-
rates, even if SeNodeB has a good channel quality. Additionally, because
of this, there is additional signaling ongoing between SeNodeB and
MeNodeB for encapsulation of the RRC signaling related to SCG (even
the measurements related to SCG must go through MCG).
• The mobility framework is expanded from a handover between two
cells, incorporating the following procedures: single connectivity-to-
DC, DC-to-DC, DC-to-single connectivity, and SeNodeB change.
• The traffic steering/mobility load balancing framework is expanded
from the intrafrequency/interfrequency/inter-RAT handover, towards a
more holistic framework with multiple options (i.e., should the con-
nection be moved from macro cell to small cell, should the small cell be
4. The Release 13 LAA is a CA-type feature; however, Release 14 WI discusses eLAA that en-
ables non-ideal-backhaul between licensed LTE-PCell and unlicensed LTE-LAA SCell.
5. All the features provide interworking with WiFi at existing 2.4-GHz and 5-GHz bands; how-
ever, the enhancements are envisioned for 60-GHz band WiFi (e.g., 802.11ad and 802.11ay).
without any WLAN network upgrade, that is, the IPsec tunnel is es-
tablished between the UE and the eNodeB [see Figure 13.2(b)]. In this
case, the links are aggregated above PDCP, that is, on the IP level (IPsec
tunneling); thus, if a bearer would be split and provided over two links,
the packets could arrive out of order due to lack of in-order delivery
function provided by PDCP, therefore split bearer is not used in LWIP.
Contrary to LWA, in LWIP, both directions (i.e., DL and UL) are sup-
ported via the secondary WiFi link. In both cases, LWA and LWIP, the
EPS bearer is mapped to the radio bearer, that is, distributed and con-
trolled by the eNodeB.
• RAN Controlled LTE WLAN Interworking (RCLWI): This is an offload-
ing mechanism, compared to LWA and LWIP, which are aggregation
schemes. RCLWI is a RAT switch, being an evolution from ANDSF-
based and RAN-assisted interworking schemes. In this case, there is
no UP connectivity between LTE eNodeB and WLAN AP, but a total
offload of the data flow from the core network; both DL and UL directions are supported by the WiFi link. However, the architecture on
the RAN level is the same as with LWA, that is, Xw interface and WT
node are present to exchange configuration and measurements for the
traffic-steering decision [see Figure 13.2(c)].
The following sections provide specifics on each of the above features that
are then summarized in Section 13.3.4.
[3]). Additionally, PDCP is equipped with LWA status report control PDU to
provide information on the received sequence numbers from LWA bearer; this
is particularly important as WiFi links are assumed to be less reliable than LTE
transmission.
Designing LWA based on DC framework has the following advantages:
• In the scenario of split bearer, as in the case of DC, a flow control func-
tion is required to decide on the individual PDCP-PDU transmission
path, either through LTE or WLAN link. Since, as already mentioned,
the RAN can either configure a UE with DC or LWA, a common frame-
work for management of the resources can be designed, that is, a unified
RRM decision for selecting either DC or LWA, and then a single algo-
rithm for flow control (i.e., PDCP-PDU scheduler).
• Additionally, this design for resource aggregation does not require
WLAN-specific CN nodes, interfaces, and CN signaling, as the second-
ary WiFi link is transparent to the EPC.
6. However, this limitation is addressed by Release 14 eLWA that also treats the UL (together
with other improvements, for example, support for 60-GHz WiFi version or SON for LWA).
the same DRB simultaneously through LTE and WLAN links). In terms of
mobility, the same measurement types, measurement reporting and mobility
framework are used as in LWA (with the difference that the eNodeB does not
have the CP connectivity to the WLAN, as WT does not exist in LWIP).
The initial phase of the signaling procedure for LWA and LWIP is the
same and includes: UE Capability Information (where the UE indicates which
features it supports), RRC Connection Reconfiguration (setting up the mea-
surements for WLAN), Measurements Report (where the UE provides mea-
surements on WLAN that meet the thresholds), and RRC Connection Recon-
figuration (where the eNodeB provides the UE with WLAN mobility set and
configures either LWA or LWIP). At establishment, LWIP-SeGW IP address
is sent to the UE together with WLAN mobility set and bearer configuration
(using RRCConnectionReconfiguration message with lwip-Configuration). After
WLAN association (where UE selects specific WLAN AP and authenticates
using EAP/AKA), UE establishes IPsec tunnel, where the IPsec keys are derived
based on LTE KeNodeB, and there is one IPsec tunnel for all the data bearers that
are configured for LWIP. In terms of UP upgrades, at the eNodeB and UE side
for the UL,7 LWIP uses LWIPEP (LWIP Encapsulation Protocol) to encap-
sulate the IP packets with GRE header and transfer them through the LWIP
tunnel8 [3]. The LWIP tunnel management (i.e., establishment and release) is
independent from the data bearer management (i.e. configuration and resource
release) through the LWIP procedures. Through this, the WLAN mobility set
updates can be decoupled from the actual data transmission and the UE can
have tunnels ready, even if the data is not there at the moment.
7. For the DL, the packets received from the IPsec tunnel are directly forwarded to the upper
layers [3].
8. More specifically, LWIP tunnel is a tunnel between eNodeB LWIPEP entity and UE LWIPEP
entity, whereas IPsec tunnel is established between the LWIP-SeGW and UE.
surements, and RAN decides on offloading, giving more control to the MNO
(however, the RALWI mechanism with RAN-rules can be used in parallel as it
is applied for RRC_IDLE). What is common to LWA, is the architecture (Xw
interface and WT entity) and deployment scenarios, that is, co-located (WLAN
AP and LTE Small Cell node) and non-co-located (with nonideal backhaul).
The mobility measurements are the same as for LWA, and WLAN mobility set
management is common for both. However, the actual mobility decision is dif-
ferent in the following manner: in LWA the mobility decision is to add second-
ary link for throughput improvement and the radio bearer is forwarded or split
from the eNodeB (i.e., EPS bearer is mapped to the radio bearer), whereas in
RCLWI the mobility decision is to switch the UP link from LTE to WLAN or
vice versa, being rather a handover-like mechanism (thus, there is no UP inter-
face defined between eNodeB and WT). In order to be able to switch back from
WLAN to LTE, the signaling radio bearer (and thus RRC signaling) is kept at
the LTE side, and the WLAN measurements and intra-WLAN (but inter-WLAN mobility set) mobility are handled by the LTE anchor cell. Both steering decisions are
provided to the UE using the RRC Connection Reconfiguration procedure
with mobilityControlInfo providing the RCLWI-specific IEs. Similar to LWA,
the UE is able to move freely among the WLAN APs that are under the WT of
the currently configured WLAN mobility set (i.e., the AP switch/association is
based on the UE decision, without informing the serving eNodeB). However,
UE is required to provide the feedback to the eNodeB about the WLAN con-
nection status, so that upon failure, the eNodeB can quickly act and switch the
UP connection back to LTE.
LWIP does not require new entities and interfaces, whereas LWA is more flexible but requires upgrading the network with the Xw interface, LWAAP, and the WT logical node. Table 13.2 compares these three features.
Table 13.2
Release 13 LTE-WiFi Interworking Features Comparison

Feature | LWA | LWIP | RCLWI
Type/purpose | Aggregation/improved throughput | Aggregation/improved throughput | Offload/load balancing
Deployment | Co-located (ideal/internal backhaul), non-co-located (non-ideal backhaul) | Non-co-located (using legacy WLAN) | Co-located or non-co-located
Aggregation level | PDCP (tight interworking with flow control/PDCP scheduler) | IP | Core network
Bearer type | Switched bearer, split bearer (with in-order delivery support) | Switched bearer | CN offload
WLAN type | Upgraded WLAN | Legacy WLAN | Upgraded WLAN
Network upgrade | Xw-C, Xw-U, WT (for non-collocated scenario), LWAAP | LWIPEP, LWIP-SeGW | Xw-C, WT
WiFi link | DL (Release 13), DL and UL (Release 14) | DL and UL | DL and UL
Flexibility/performance | Highest (dynamic bearer aggregation and fast bearer switching) | Medium (fast bearer switching) | Lowest (only one link and handover-like switch)
ferences, as there was a huge debate on allowing the LTE system (i.e., a system
operated by big MNOs) to use the free-for-all spectrum, where IEEE 802.11
had a significant voice in evaluating the LTE in Unlicensed scheme (to make
sure that the fairness is achieved).
Figure 13.3 LAA deployment scenarios. (© 2015. 3GPP™ TSs and TRs are the property of ARIB, ATIS, CCSA, ETSI, TTA, and TTC.)
Thus, for LAA, the Listen Before Talk (LBT) with exponential back-off mecha-
nism is used. Specifically, in LBT, the transmitter senses the channel to verify if
it is occupied or free for transmission. If it is free, the transmission is possible;
if it is not free, the transmitter performs a back-off procedure and tries
again [i.e., a random number N is selected within contention window and the
channel is sensed to be idle during the predefined time multiplied by N before
transmitting (the detailed procedure is described in [10])].
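The back-off behavior just described can be sketched as follows. This is an illustrative simplification: the slot duration, contention window bounds, and the idealized clear channel assessment model are placeholders chosen for the example, not the parameters standardized for LAA in [10].

```python
# Illustrative sketch of Listen Before Talk with exponential back-off. The slot
# duration, contention window bounds, and the idealized clear channel assessment
# (a fixed 70% chance of an idle slot) are placeholders, not the LAA parameters
# standardized in [10].
import random

CW_MIN, CW_MAX = 15, 63     # example contention window bounds
SLOT_US = 9                 # example CCA slot duration in microseconds

def channel_idle() -> bool:
    """Placeholder clear channel assessment."""
    return random.random() < 0.7

def lbt_wait_for_channel() -> int:
    """Sense the channel and return the time (in us) spent before transmission."""
    cw = CW_MIN
    waited = 0
    while True:
        if channel_idle():            # channel free: transmission is possible
            return waited
        # Channel busy: back off by counting down N idle sensing slots,
        # with N drawn at random from the current contention window.
        n = random.randint(0, cw)
        idle_slots = 0
        while idle_slots < n:
            waited += SLOT_US
            if channel_idle():
                idle_slots += 1       # only idle slots count towards N
        cw = min(2 * cw + 1, CW_MAX)  # exponential growth of the contention window

print(f"Transmission allowed after sensing for {lbt_wait_for_channel()} us")
```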
Timing rules and energy threshold-related parameters important for the LBT configuration are also defined. The major changes in the PHY layer come from the need for LBT; that is, the PHY layer needs to support the sensing procedure (as defined above by
the operational parameters), discontinuous transmission (DTX) and limited
maximum transmission duration. In LAA, the licensed signaling anchor (PCell)
uses the standard frame type, whereas the unlicensed secondary carriers (SCells)
use the new LTE frame format. The format 3 radio frame type (applicable to
LAA SCell operation only) is a modified version of the standard LTE radio
frame. It is also (like regular LTE frame) 10 ms long with 10 subframes, while
the DL transmission can occupy one or more consecutive subframes (specifi-
cally, ranging from 2 to 10 consecutive subframes depending on the Channel
Access Priority Class). DL burst duration can start in the first or second slot
of the subframe (according to the subframeStartPosition IE signaled via RRC within
DedicatedPhysicalConfiguration) and ends with the last subframe in the specific
transmission being a full subframe or a part subframe (i.e., any of the TDD
DwPTS configuration: 3 to 12 OFDM symbols, signaled via DCI Format 1C)
[11].
The UE capabilities for this purpose are signaled differently than for regular LTE operation, through the tm9-LAA-r13 and tm10-LAA-r13 IEs within the UE Capability procedure [7], together with downlinkLAA-r13 indicating if the UE supports LAA in general.
Table 13.3
Comparison of the LTE Operation in Unlicensed Spectrum

System | LAA | LTE-U | MulteFire
Specification body | 3GPP | LTE-U Forum | MulteFire Alliance
Operating spectrum | 5 GHz | 5 GHz | 3.5 GHz (GAA for the United States), 5 GHz
Operation | Aggregation: CA SCell (SDL) (Rel-13), DC SCG (Rel-14 eLAA) | Aggregation: CA SCell (SDL) | Stand-alone (based on LAA and eLAA)
Coexistence mechanism | LBT | CSAT | LBT
Deployment possibility | Worldwide (compliant with regulations of most countries) | China, Korea, India, United States | Worldwide (supports LBT)
Transmission direction | DL only (Release 13), DL and UL (Release 14 eLAA) | DL only | DL (LAA based) and UL (eLAA based)
Licensed anchor cells | Yes, FDD or TDD | Yes, FDD | No (stand-alone)
Unlicensed frame structure | Frame type 3 (LAA) | Frame type 1 (FDD) with enhancements | Frame type 3 (LAA)
Changes to licensed LTE | High (LAA PHY) | Low (fast to deploy, regular LTE PHY) | High (stand-alone RAN, LAA PHY)
Support for neutral host | No (bound to specific MNO due to licensed anchor) | No (bound to specific MNO due to licensed anchor) | Yes (MNO agnostic, connected to EPC)
13.4.2.1 LTE-U
LTE-Unlicensed is specified by the LTE-U Forum [12] as a proprietary tech-
nology. The high-level concept of accessing the unlicensed resources is similar
to LAA (i.e., it is based on CA with SCell anchored at LTE-licensed PCell).
However, the SCell design and unlicensed channel access is different to LAA.
First, it is not using any special radio frame structure, but is based on regu-
lar LTE FDD DL radio frame (i.e., frame type 1) for LTE-U SCell, with the
small cell ON/OFF scheme and Discovery Reference Signals to support DTX.
Second, instead of LBT, it is using the Channel Selection and Carrier Sensing
Adaptive Transmission (CSAT) scheme. The mechanism works as follows: the
eNodeB senses the available unlicensed channels to choose the empty channel
to avoid interference to/from WiFi and does that on an ongoing basis, that is,
constantly measures channels and switches when the currently used one is oc-
cupied. Only if there are no empty/clean channels, it enters into the coexistence
CSAT scheme. This is based on adaptive duty cycle, where the LTE is on (i.e.,
transmitting regular DL frames) during a specified percentage of time, and then
it is off for the rest of the period where the WiFi is allowed to use this time.
The eNodeB senses the chosen channel for up to 200 ms and based on channel
occupancy, the percentage of LTE ON time is adjusted (i.e., the duty cycle is
subject to WiFi activity). As the channel gets more or less congested, the duty
cycle is adaptively changed to support fair sharing. The major problem with
this approach (that some stakeholders raised) is that it is the LTE system that
decides on how much time it occupies, and WiFi needs to follow. Thus, this is
not considered to be fair. Also, this solution can then only be applied in several countries, such as the United States, Korea, China, and India, where there are no
regulatory restrictions on such an approach. The regulatory bodies in the other
countries require the LBT mechanism for assuring fair sharing, thus LTE-U
cannot be used there.
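The adaptive duty cycle idea behind CSAT can be illustrated with a small sketch. The measurement threshold, step size, and duty-cycle bounds below are invented placeholder values for the example, not parameters defined by the LTE-U Forum.

```python
# Illustrative sketch of CSAT-style adaptive duty cycling: the fraction of time
# LTE-U transmits is adjusted according to the measured channel occupancy.
# The threshold, step size, and bounds are placeholder values.

def update_duty_cycle(current: float, wifi_occupancy: float,
                      step: float = 0.05, low: float = 0.2, high: float = 0.8) -> float:
    """Return the new LTE-ON fraction of the CSAT cycle.

    wifi_occupancy is the fraction of the sensing period (e.g., up to 200 ms)
    during which the channel was observed busy with other transmissions.
    """
    if wifi_occupancy > 0.5:
        current -= step          # channel congested: give WiFi more air time
    else:
        current += step          # channel lightly used: LTE may transmit more
    return max(low, min(high, current))

duty = 0.5
for occupancy in [0.8, 0.7, 0.3, 0.2, 0.6]:   # example per-period measurements
    duty = update_duty_cycle(duty, occupancy)
    print(f"WiFi occupancy {occupancy:.0%} -> LTE ON for {duty:.0%} of the cycle")
```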
13.4.2.2 MulteFire
The MulteFire system is being specified by the MulteFire Alliance [13], where the basic
approach is to support LTE in unlicensed spectrum as a stand-alone opera-
tion (i.e., without being anchored to licensed PCell counterpart). Thus, this
technology aims at enabling small cell operation solely in unlicensed spectrum.
Similar to LTE-U, it is also a non-3GPP-standardized system and therefore may
be considered as a stand-alone RAT (connected to EPC); however, it is solely
based on Release 13 LAA for DL and Release 14 eLAA for UL operation, to
enable global reach (i.e., MulteFire fulfills regulatory requirements on using
LBT). Due to lack of the licensed anchor, MulteFire allows neutral host con-
cept, where multiple operators share the MulteFire resources. However, due
to the stand-alone operation, certain enhancements for the signaling support
are added to the regular LAA operation including mobility, signaling, paging,
and system information support. An additional difference to other unlicensed
spectrum access schemes is that MulteFire also aims at supporting the 3.5-GHz band in the United States under the Citizens Broadband Radio Service (CBRS) framework as General Authorized Access (GAA), allowing the use of up to 80-MHz BW if it is not occupied by incumbents. The first release of the MulteFire specification was published in April 2017 [13].
13.5.1 Massive CA
Within LTE-Advanced Pro, a further extension for CA has been provided,
namely, massive CA with up to 32 aggregated CCs. In this, different frame
structures can be combined, that is, Frame Structure Type 1 (FDD), Type 2
(TDD), and Type 3 (LAA), so that licensed CCs can be combined with unli-
censed CCs. A total BW of 640 MHz can be utilized in a combined fashion,
while still assuring backwards compatibility with LTE Release 8 channel BWs
and frame structures (as well as LAA). Massive CA still requires a single PCell
that can be accompanied with up to 31 SCells. This provides a higher degree of
flexibility for resource handling with fast adaptation via MAC scheduling and
helps to achieve higher throughputs. Additionally, it reduces the need for costly
HO (in terms of signaling load and service interruption) for the purpose of
load balancing between carriers. This reduction is possible due to the scheduler
that takes over the responsibility for distributing traffic among carriers, for the
maximum efficiency and congestion avoidance. On the downside, the schedul-
ing complexity, PHY feedback signaling, and RF complexity increase. To sup-
port non-co-located CA, that is, to distribute the provisioning of the different
serving cells through different TRPs (Transmission-Reception Points), Release
11 specified the Timing Advance Group (TAG), which defines a group of serving cells that use the same timing reference, use the same TA value, and use a single RA procedure to establish timing alignment. Massive CA inherited from Release 11 the maximum number of TAGs, that is, one pTAG (primary TAG, used for the serving cell group associated with the PCell) and up to 3 sTAGs (secondary TAGs, used for serving cell groups associated with SCells), thus enabling the distribution of
the 32 CCs onto 4 TRPs [3].
13.5.3 UE Support
In order to utilize the full potential of the massive CA, the UE should be ca-
pable of supporting the features that are associated with it [15]. The param-
eters that need to be supported are: support of massive CA and accordingly the
number of supported CCs and their configuration. The UE may optionally
support: PUCCH transmission on SCell (pucch-SCell-r13), cross-carrier sched-
uling (crossCarrierScheduling-B5C-r13) for beyond 5 DL CCs (as the CIF specified before Release 13 provided the possibility to address the CCs on only 3
bits), and multiple TAGs (multipleTimingAdvance) for each band combination.
However, from a practical point of view, within Release 13 still only a maximum of 5 DL CCs can be aggregated, as defined by RAN WG4 (the 3GPP group specifying the allowed band combinations for CA). With 32 component carriers, there is a huge impact on the RF front-end complexity and power consumption, which specifically affects UE devices. The details are out of
scope of this chapter.
• Better resource utilization, where the users between themselves have bet-
ter direct channel conditions, compared to the individual links towards
eNodeB;
• Extending the coverage of the cell; decreasing power consumption of
the low-end devices that can communicate with a gateway close to them;
• Gains achieved due to trunking where a single UE is responsible for
communicating with the eNodeB on behalf of multiple devices (e.g.,
RACH occupancy decrease due to single RRC connection to the net-
work).
• In-coverage: Where both UEs are located within network coverage and
have a communication link between themselves and another communica-
10. D2D has been originally specified within Release 12, with major updates within Release 13.
11. Direct D2D communications must be operational in the absence of network infrastructure.
tion link to the RAN, thus the RAN controls the resource usage between
them for ProSe communications. In this case, the synchronization is
provided to both devices from the eNodeB. The nonpublic safety (i.e.,
commercial) ProSe applications can operate only in this mode.
• Out-of-coverage:12 Where all UEs having active D2D communication
are located outside network coverage (i.e., none of them has active RRC
connection to an eNodeB). In this case, one of the UEs provides a signal
reference to the others for synchronization purposes. As the RAN is not
able to communicate the resource to be used by the UEs, the UEs are
preconfigured with the available resources, to be used for transmission.
This operation is available only for the public-safety use cases.
• Partial coverage: Where one of the devices is located within network
coverage (and has the active connection to the network) and the other
is located outside the cell’s coverage. The synchronization to the out-
of-coverage UE is relayed by the in-coverage UE. The resources of the
in-coverage UE are controlled by the RAN, whereas the out-of-coverage
UE uses the preconfigured resources. This operation is available only for
the public-safety use cases.
12. Note that in-coverage or out-of-coverage relates only to the lack of coverage of the carriers/
frequencies that are available for ProSe/D2D communications (so the UE may still be in
coverage of the general cellular communications cells).
• PC5-D, using the L3 ProSe Protocol, is used solely for sending and receiving discovery messages, operating above the MAC layer as an application-level protocol (NAS-like).
• PC5-C, using the L3 PC5 Signaling Protocol (on top of PDCP), is used for CP signaling between devices, including establishment, modification, and release of the logical sidelink connection. It is exchanged between devices on a unicast L2 ID (i.e., MAC PDU IEs).
• PC5-C is used for broadcasting system information and synchroniza-
tion signals for the out-of-coverage and partial coverage scenarios, using
RRC.
Figure 13.5 ProSe architecture and PC5 Protocol stacks. (© 2015. 3GPP™ TSs and TRs are the property of ARIB, ATIS, CCSA, ETSI, TTA, and TTC.)
• Physical SL Shared Channel (PSSCH) for the D2D data, where the data
is mapped from the SL Transport Channel over the SL Shared Channel;
• Physical SL Control Channel (PSCCH) for scheduling assignments, us-
ing SL Control Information (SCI);
• Physical SL Broadcast Channel (PSBCH) for broadcasting system in-
formation related to D2D communication, with the system information
provided through SL-BCCH over SL-BCH;
• Physical SL Discovery Channel (PSDCH) for providing discovery
signals.
specific resource pools, from which they can select resources autonomously.
The important point in this discussion is that there is no new RRC state for D2D communication; thus, as can be expected, Mode 1 is only possible when the UE is in the RRC_CONNECTED state, while Mode 2 can be used in both the RRC_CONNECTED and RRC_IDLE states, meaning that the operation can be either bound to the regular cellular connection or independent (thus, both SL communication and SL discovery procedures are independent of the RRC state). This also means that no RRC connection needs to be established between the UEs; rather, an L2 association is used (where the UEs are identified using MAC PDUs with source and destination IDs) together with direct data transmission. In the case of joint operation (dual connectivity with Uu and PC5), UL and SL transmissions using the same UL carrier have to use different subframes (i.e., it is not possible for the UE to transmit UL and SL at the same time). As already mentioned, at a particular time there is always one UE in transmit mode and the other in receive mode (and these roles may be exchanged). For the transmitting UE to define the transmission of the data transport block, the Sidelink Control Information 0 (SCI0) is standardized (specifying, for example, the MCS, resource block assignment, RV order, and timing advance). Note that, in the case of Mode 1, the eNodeB schedules the resources using DCI format 5, but this scheduling grant is for the PSCCH (whereas the SCI0 configures the resources of the PSSCH). Therefore, the PSCCH transmission always precedes the PSSCH transmission in the allocated resources. The PSSCH uses HARQ, but without feedback (i.e., no ACK/NACK is provided), making it a blind transmission. Thus, for robustness reasons, the initial transmission is always followed by three consecutive retransmissions using the HARQ redundancy versions. Therefore, a single transport block transmission occupies four consecutive subframes. This is important with respect to the dimensioning of the resource pool allocation. The resources of the UL carrier are always configured as resource pools, where a resource pool defines two sets of consecutive PRBs (accompanied by a selected set of subframes), from which the users select resources for a particular transmission (in Mode 2), or from which the eNodeB selects the resources for a particular D2D transmission (in Mode 1) [3, 10, 11].
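Because every PSSCH transport block occupies four consecutive subframes (one initial transmission plus three blind retransmissions), the time dimension of a resource pool directly bounds how many transport blocks a UE can send per pool period. The following minimal Python sketch illustrates this dimensioning relationship; the pool size used is an assumed example value, not a figure from the specifications.

```python
# Illustrative dimensioning of a Mode 2 sidelink resource pool (the pool size
# below is an assumed example value, not a figure from the specifications).

HARQ_TRANSMISSIONS_PER_TB = 4   # initial PSSCH transmission + 3 blind retransmissions

def max_transport_blocks_per_period(pool_subframes: int) -> int:
    """Upper bound on the number of transport blocks one UE can send in a pool
    period, given that each transport block occupies four consecutive subframes."""
    return pool_subframes // HARQ_TRANSMISSIONS_PER_TB

print(max_transport_blocks_per_period(40))   # e.g., 40 SL subframes -> 10 transport blocks
```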
creates its own timing reference and provides it to the associated second UE. For this purpose, SL synchronization channels are specified [including the primary and secondary SL synchronization signals, PSSS and SSSS, respectively, and the SL Broadcast Channel (SL-BCH)]. Their configuration may be provided by SIB18 from the eNodeB or may be preconfigured in the UE UICC for the out-of-coverage scenario. Compared to the DL synchronization signals, the SL SS use two different sets of cell identifiers: the first one is called id_net and ranges between 0 and 167, and the second one is called id_oon and ranges between 168 and 335. id_net is a set dedicated to in-coverage UEs providing the reference to out-of-coverage users, whereas id_oon is for out-of-coverage UEs sourcing the timing reference for other out-of-coverage users. Due to this design, the receiving UEs know the status of the UE providing the synchronization. The SLSS-transmitting UE also sends the MIB-SL (within the SL-BCH) including the SL system BW, direct subframe, and direct frame number, as well as the inCoverage flag, stating whether the UE is in coverage or not. The subframe/frame numbers are present if the UE relays the synchronization from the eNodeB, to tell the recipients how the MIB-SL transmission is located with respect to the original EUTRAN timing [3, 7, 11].
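Since the two SLSS identifier ranges directly encode whether the synchronization source is in coverage, a receiving UE can infer the source status from the identifier alone. A minimal illustrative sketch (the function name is ours, not a specification term):

```python
def slss_source_status(slss_id: int) -> str:
    """Classify a sidelink synchronization source from its SLSS identifier:
    id_net (0-167) marks an in-coverage source, id_oon (168-335) an
    out-of-coverage source generating its own timing reference."""
    if 0 <= slss_id <= 167:
        return "in-coverage source (id_net)"
    if 168 <= slss_id <= 335:
        return "out-of-coverage source (id_oon)"
    raise ValueError("SLSS identifiers range from 0 to 335")

print(slss_source_status(25))     # -> in-coverage source (id_net)
print(slss_source_status(200))    # -> out-of-coverage source (id_oon)
```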
• Support for low-throughput and sporadic transmission and limited mobility, at the same time enabling the network to serve a massive number of devices (e.g., up to 50,000 devices per cell);
• Achieve very low device cost (by limiting device functionality) and improve energy efficiency (thus enabling a battery life of up to 10 years);
• Provide enhanced coverage (up to 20 dB better compared to the regular LTE system, to reach problematic places like basements, achieving 164 dB of MCL; see the illustrative link-budget sketch after this list);
• Reuse existing infrastructure (thus enabling low-cost deployment).
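To illustrate how the 164-dB MCL target relates to basic link-budget terms, the sketch below reproduces a rough uplink coupling-loss calculation with assumed example values (23-dBm UE transmit power, a single 15-kHz tone, a 3-dB base station noise figure, and a required SINR of about -11.8 dB after repetitions); the exact assumptions used in the 3GPP evaluations are given in [17].

```python
import math

# Rough, illustrative NB-IoT uplink coupling-loss budget (assumed example values).
tx_power_dbm = 23.0            # UE transmit power
bandwidth_hz = 15e3            # single 15-kHz subcarrier transmission
noise_figure_db = 3.0          # assumed base station noise figure
required_sinr_db = -11.8       # assumed SINR target after repetitions

thermal_noise_dbm = -174.0 + 10 * math.log10(bandwidth_hz)   # about -132.2 dBm
receiver_sensitivity_dbm = thermal_noise_dbm + noise_figure_db + required_sinr_db
max_coupling_loss_db = tx_power_dbm - receiver_sensitivity_dbm

print(round(max_coupling_loss_db, 1))   # -> roughly 164.0 dB
```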
To fulfill these requirements, the key solutions for massive MTC include reduced BW; extended coverage by, for example, TTI bundling; reduced maximum transmit power; reduced support for DL transmission modes and number of antenna ports (to SISO and TxDiversity in the DL), together with the use of a single receive RF chain; reduced TB size; and half-duplex FDD operation. The remainder of this section discusses NB-IoT, the new radio added to the LTE framework (eMTC and EC-GSM are enhancements to the existing LTE and GSM systems, respectively, and are not covered in this chapter).
The operating bands include 700 MHz, 800 MHz, 900 MHz, 1,800 MHz, and 2,000 MHz, thus mostly low frequencies, to provide the largest coverage. The main simplifications of NB-IoT with respect to LTE are the lack of connected-mode mobility support (i.e., handover and the accompanying measurements), the reduced system BW, and system optimizations for efficient data transmission (with signaling reduction being the key requirement). The full set of simplifications and system features is described in [3].
14. Lasting in this case 32 ms, as the symbol duration is 4 times longer, because the subcarrier spacing is 4 times smaller than the regular 15 kHz.
15. Lasting 8, 4, 2, and 1 ms, respectively.
16. Where the system frame number spans from 0 to 1,023 using 10-bit SFN in the BCH.
17. Where in legacy LTE, the context in the RAN is deleted in RRC_IDLE.
Figure 13.6 NB-IoT UP Protocol Stack for Control Plane CIoT EPS Optimization. (© 2016. 3GPP™ TSs and TRs are the property of ARIB, ATIS, CCSA, ETSI, TTA, and TTC.)
get back to idle state, without the cumbersome process of establishing the AS
security, DRBs, and connections in the core network. It is achieved through the
new RRC procedures: RRC Connection Suspend/Resume, as well as S1 pro-
cedures: UE Context Suspend/Resume. As the CP solution (presented above)
is the default one, during the initial attach, the SRB1bis is used, and can be
switched to SRB1 later on (with AS security activated and PDCP in regular
mode), upon the MME decision to use the UP solution. After that, the DRB and
UE context in the serving eNodeB are established. Once the UE is done with the
data transmission, the eNodeB may decide to suspend the RRC connection,
where the UE keeps the AS context and suspends SRBs and DRB(s) moving
to RRC_IDLE, and the eNodeB stores AS context of the idle UE under the
ResumeID. Next time, when there is UL data to be transmitted (or when UE
is paged), the UE uses RRCConnectionResumeRequest message (instead of RRC-
ConnectionRequest as in regular LTE case) providing the ResumeID, so that
the eNodeB restores the context and the bearers’ configuration. This enables a significant decrease in the connection setup time. As already mentioned in this section, in NB-IoT there is no handover procedure (i.e., no connected-mode mobility). This is due to the fact that the majority of NB-IoT applications assume stationary UE behavior, or that the data in those applications is transmitted very infrequently. Thus, it is better for the UE to be kept in idle mode and only wake up for a short while, send a packet, and go back to sleep (to save battery power). Therefore, even if the UE moves between sending packets, this can be handled in idle mode (as the time for resuming the connection and sending the packet is very short). In the case when the UE moves from one cell to another and tries to resume the RRC connection there, the context can be provided to the target eNodeB via the new X2-AP procedure, Retrieve UE Context Request.
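The following minimal Python sketch models the control flow described above (all class and variable names are invented for illustration; it is not an implementation of the actual signaling): a ResumeID lets the network restore a stored access-stratum context instead of rebuilding it, including the case where the UE has moved to a new cell.

```python
# Illustrative model of the NB-IoT suspend/resume idea (all names are invented).

class ENodeB:
    def __init__(self, name):
        self.name = name
        self.stored_contexts = {}   # ResumeID -> stored AS context of idle UEs

    def suspend(self, resume_id, as_context):
        """Suspend the RRC connection: keep the UE's AS context under ResumeID."""
        self.stored_contexts[resume_id] = as_context

    def retrieve_ue_context(self, resume_id):
        """Counterpart of the X2-AP Retrieve UE Context procedure."""
        return self.stored_contexts.pop(resume_id, None)

    def resume(self, resume_id, anchor=None):
        """Handle an RRCConnectionResumeRequest carrying a ResumeID."""
        context = self.stored_contexts.pop(resume_id, None)
        if context is None and anchor is not None:
            context = anchor.retrieve_ue_context(resume_id)  # UE moved cells in idle
        if context is None:
            return "fallback: regular RRC connection establishment"
        return f"resumed on {self.name} with restored context {context}"

cell_a, cell_b = ENodeB("cell-A"), ENodeB("cell-B")
cell_a.suspend("resume-42", {"drb": ["DRB1"], "security": "AS keys kept"})
print(cell_b.resume("resume-42", anchor=cell_a))
```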
The choice of which CIoT data transfer solution is used for a particular UE is made by the MME. The comparison of the two solutions is summarized in Table 13.4.
Table 13.4
Comparison of the Data Transfer Modes for NB-IoT

Aspect | Data over NAS (CP Solution) | Data over DRB (UP Solution)
Standard reference name | CP CIoT EPS Optimization | UP CIoT EPS Optimization
Key signaling optimization | No need to set up a DRB (UP piggybacking within signaling) | No need to establish the RRC connection and set up the UE context every time (RRC context stored in RRC_IDLE)
Support by UE | Mandatory | Optional
Data transfer | Data transfer over NAS using NAS PDU | Data transfer over the standard UP path with the use of a DRB
Radio bearers | SRB0 and SRB1bis | SRB0, SRB1, and DRB
Security | Only NAS security | NAS and AS security (legacy PDCP used)
Header compression | RoHC in the CN (the MME provides RoHC) | RoHC at PDCP
UE context in RRC_IDLE | Not available (regular RRC_IDLE) | Available (suspend/resume operation)
Removed RRC messages | Totally removed (no need, as the DRB is not established): RRC Security Mode Command, RRC Security Mode Complete, RRC Connection Reconfiguration, RRC Connection Reconfiguration Complete | Removed during the suspend/resume operation (but present for the initial establishment of the DRB): RRC Connection Setup Complete, RRC Security Mode Command, RRC Security Mode Complete, RRC Connection Reconfiguration, RRC Connection Reconfiguration Complete
Table 13.5
Comparison of LTE System Evolution Steps

System Branding | LTE | LTE-Advanced | LTE-Advanced Pro
3GPP Release | Release 8 | Release 10* | Release 13 and beyond
Freezing date | March 2009 | June 2011 | March 2016
Main purpose | Provide high throughput for MBB, prepare mobile system for evolution towards 4G | Fulfill IMT-Advanced requirements for a 4G system | Mark an evolution point with significant improvements to LTE-Advanced
Key features | OFDMA, DL MIMO (4 × 4), modulation with up to 64QAM, flat architecture (eNodeB), flexible system BW (1.4–20 MHz) | CA (extending system BW up to 100 MHz), enhanced DL MIMO (8 × 8), UL MIMO (4 × 4), small cells, HetNet, eICIC, SON, CoMP, ePDCCH | DC, LAA, LWA, modulation 256QAM, EB/FD-MIMO, D2D, V2X, NB-IoT, eMTC
*LTE-Advanced is defined in 3GPP as Release 10, but in fact it also covers the enhancements from Release 11 and Release 12.
18. Note that these calculations are simplified to show the principles of features’ impacts on the system performance. More accurate calculations are provided in Chapter 3.
Table 13.6
Comparison of the Systems’ Key Parameters and Throughputs

System | LTE | LTE-Advanced | LTE-Advanced Pro
Max. system BW | 20 MHz | 100 MHz | 100 MHz, 640 MHz*
Max. DL modulation | 64QAM | 64QAM | 256QAM
Max. DL number of spatial layers | 4 | 8 | 8
Max. DL spectral efficiency | 15 bps/Hz | 30 bps/Hz | 40 bps/Hz
Max. DL throughput | 300 Mbps | 3,000 Mbps (3 Gbps) | 4,000 Mbps (4 Gbps), 25,600 Mbps (25.6 Gbps)**
*In case of 32 CCs. **In case of 32 CCs. As discussed above, this is not likely to happen.
Therefore, we take only the first aspect for the calculation of the maximum reasonable throughput for LTE-Advanced Pro: 40 bps/Hz × 100 MHz = 4 Gbps.
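A short sketch of the simplified peak-rate arithmetic behind Table 13.6 (system bandwidth times maximum spectral efficiency, consistent with the approximation noted in footnote 18):

```python
# Simplified peak DL throughput: system bandwidth x maximum spectral efficiency.
systems = {
    "LTE":              {"bw_mhz": 20,  "spectral_eff_bps_per_hz": 15},
    "LTE-Advanced":     {"bw_mhz": 100, "spectral_eff_bps_per_hz": 30},
    "LTE-Advanced Pro": {"bw_mhz": 100, "spectral_eff_bps_per_hz": 40},
}

for name, p in systems.items():
    peak_mbps = p["bw_mhz"] * p["spectral_eff_bps_per_hz"]   # MHz x bps/Hz = Mbps
    print(f"{name}: {peak_mbps} Mbps")
# LTE: 300 Mbps, LTE-Advanced: 3000 Mbps, LTE-Advanced Pro: 4000 Mbps
```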
References
[1] 3GPP RP-151569, “Release 13 Analytical View Version,” September 2015.
[2] 3GPP TR 36.842, v12.0.0, “Study on Small Cell Enhancements for E-UTRA and E-
UTRAN; Higher Layer Aspects,” December 2013.
[3] 3GPP TS 36.300, v14.0.0, “EUTRA and EUTRAN: Overall Description,” September
2016.
[4] 3GPP TS 36.321, v14.0.0, “EUTRA: Medium Access Control Protocol Specification,”
September 2016.
[5] Dryjanski, M., and M. Szydelko, “A Unified Traffic Steering Framework for LTE Radio
Access Network Coordination,” IEEE Communications Magazine, July 2016, pp. 84–92.
[6] 3GPP TR 37.843, v12.0.0, “Study on WLAN-3GPP Radio Interworking,” December
2013.
[7] 3GPP TS 36.331, v14.0.0, “EUTRA: Radio Resource Control Protocol Specification,”
September 2016.
[8] 3GPP TS 36.304, v14.0.0, “EUTRA: UE Procedures in Idle Mode,” September 2016.
[9] 3GPP TR 36.889, v13.0.0, “Study on Licensed-Assisted Access to Unlicensed Spectrum,”
June 2015.
[10] 3GPP TS 36.213, v14.0.0, “EUTRA: Physical Layer Procedures,” September 2016.
[11] 3GPP TS 36.211, v14.0.0, “EUTRA: Physical Channels and Modulation,” September
2016.
[12] www.lteuforum.org.
[13] www.multefire.org.
[14] Bhamri, A., K. Hooli, and T. Lunttila, “Massive Carrier Aggregation in LTE-Advanced
Pro: Impact on Uplink Control Information and Corresponding Enhancements,” IEEE
Communications Magazine, May 2016, pp. 92–97.
[15] 3GPP TS 23.303, v14.1.0, “Proximity-Based Services (ProSe),” December 2016.
[16] 3GPP TR 36.888, v12.0.0, “Study on Provision of Low-Cost Machine-Type
Communications (MTC) User Equipments (UEs) Based on LTE,” June 2013.
[17] 3GPP TR 45.820, v13.1.0, “Cellular System Support for Ultra-Low Complexity and Low
Throughput Internet of Things (CIoT),” November 2015.
[18] 3GPP TS 36.306, v14.0.0, “UE Radio Access Capabilities,” September 2016.
[19] 3GPP TS 23.401, v14.2.0, “General Packet Radio Service (GPRS) Enhancements for
Evolved Universal Terrestrial Radio Access Network (E-UTRAN) Access,” December
2016.
[20] 3GPP TR 23.720, v13.0.0, “Study on Architecture Enhancements for Cellular Internet of
Things,” March 2016.
14
Toward 5G
As the standardization of the enhancements for LTE progresses with LTE-Ad-
vanced Pro within Releases 13 and 14, the 3GPP has recently kicked off the
work on the next-generation system, namely, 5G, in parallel. The seeds of 5G
were sown within Release 13 by working on defining potential use cases for fu-
ture networks in the “Feasibility Study on New Services and Markets Technol-
ogy Enablers (SMARTER)” [1]. This is now being followed up by the Release
14 architecture and RAN solution studies and is moving to work items within
the Release 15 and 16 timeframe. The next-generation system being specified
within 3GPP should fulfill the ITU-R requirements for IMT-2020, a successor
of IMT-Advanced. The key requirement for 5G (beyond the regularly expected improvements of higher throughputs, higher capacity, and lower latencies) is to specify a flexible system that natively supports a large variety of use cases that may have very different requirements (e.g., from low-end MTC to high-end virtual-reality MBB). This chapter outlines the main use cases, requirements, and spectrum aspects related to 5G, followed by the potential solutions for the air interface and system architecture. Tight interworking with evolved LTE (eLTE1) and the migration path toward the full 5G system are also discussed later on. Compared
to previous chapters, this chapter is slightly different, as the 5G standardiza-
tion has just started and is in its study phase (and early normative work) at the
time of writing. Therefore, instead of discussing well-defined functionalities,
planning, or optimization guidelines, this chapter serves as an overview of the
potential solutions and scope of 5G. The references for this chapter are mostly
3GPP technical reports (TRs), presenting the requirements and proposed solu-
tions to capture the recent status of the 5G standardization.
foundation for the work items in this area. The 3GPP’s 5G proposal that will be submitted to ITU-R will include both the new, non-backward-compatible radio technology [called New Radio (NR)] and the evolution of LTE, called eLTE. eLTE is going to be developed to support the next-generation core network [called Next Generation Core (NGC)] in parallel to the development of the NR. The set of recent 3GPP documentation related to the above discussion is presented in Table 14.1 and is used throughout the rest of this chapter.
Table 14.1
5G-Related 3GPP Standardization Documents

Document Number | Document Title | Responsible TSG | Reference
TR 22.891 | “Feasibility Study on New Services and Markets Technology Enablers (SMARTER)” | SA | [1]
TR 22.861 | “Feasibility Study on New Services and Markets Technology Enablers for Massive Internet of Things” | SA | [6]
TR 22.862 | “Feasibility Study on New Services and Markets Technology Enablers for Critical Communications” | SA | [7]
TR 22.863 | “Feasibility Study on New Services and Markets Technology Enablers for Enhanced Mobile Broadband” | SA | [8]
TR 22.864 | “Feasibility Study on New Services and Markets Technology Enablers - Network Operation” | SA | [9]
TR 23.799 | “Study on Architecture for Next Generation System” | SA | [10]
TR 33.899 | “Study on the Security Aspects of the Next Generation System” | SA | [11]
TS 22.261 | “Service Requirements for Next Generation New Services and Markets” | SA | [5]
TR 38.900 | “Study on Channel Model for Frequency Spectrum Above 6 GHz” | RAN | [12]
TR 38.801 | “Study on New Radio Access Technology; Radio Access Architecture and Interfaces” | RAN | [13]
TR 38.802 | “Study on New Radio (NR) Access Technology; Physical Layer Aspects” | RAN | [14]
TR 38.803 | “Study on New Radio (NR) Access Technology; RF and Coexistence Aspects” | RAN | [15]
TR 38.804 | “Study on New Radio (NR) Access Technology; Radio Interface Protocol Aspects” | RAN | [16]
TR 38.805 | “Study on New Radio (NR) Access Technology: 60GHz Unlicensed Spectrum” | RAN | [17]
TR 38.912 | “Study on New Radio Access Technology” | RAN | [18]
TR 38.913 | “Study on Scenarios and Requirements for Next Generation Access Technologies” | RAN | [19]
• eMBB: Requiring support for high network capacity, high user density,
and uniform user experience;
• mMTC: Calling for massive connectivity and highly efficient small-packet transmission;
• URLLC: Requiring ultralow latency and/or ultrahigh reliability trans-
mission.
3. The critical communications and ultrareliable low-latency communication use cases have been grouped together. However, within this group the two aspects may be separated, with URC focusing on reliability and low latency (e.g., industrial control, gaming, remote control of UAVs) and ULLC focusing on ultralow latency (e.g., the tactile Internet).
4. The initial document [1] with high-level use case definitions evolved into four feasibility studies on the specific system requirements per group, namely, [6–9].
Table 14.2
Key Performance Requirements from Different Services*

KPI | eMBB | mMTC | URLLC
Data rate | Very high, ~10 Gbps | Low, ~kbps | Low
Spectral efficiency | High, ~30 bps/Hz | — | —
Latency | Low latency | — | Very low latency/real time (1 ms end-to-end, 0.5 ms for DL, 0.5 ms for UL)
Reliability | — | — | High (10−5 in 1 ms)
Mobility | From 0 up to 500 km/hr | Low/none | From low to high
Mobility interruption time | Low | — | Very low, 0 ms (make-before-break)
Coverage | — | High, 164 dB (maximum coupling loss) | —
UE battery life | — | Long, 10 years | —
Traffic density | High, resulting from high traffic per connection | Medium/high, resulting from large number of devices | —
Communication efficiency | — | High (efficient signaling and resource utilization) | —
Connection density | — | Very high (1,000,000/km2) | —
*The blank fields (—) mean that the particular KPI is not crucial for the particular service vertical.
5. This discussion focuses mostly on performance and service characteristics’ type of require-
ments, whereas the requirements for system capabilities are described in [5].
and user density], or on the actual service [e.g., in the URLLC type, the latency requirement can span from 0.5 ms (in tactile interaction applications) to 10 ms (in industrial automation applications), where it is more about stable latency than about extremely low latency values].6 Those internal differences within the group call for even more flexibility of the system, also in the deployment and connectivity aspects, that is, to vary the deployment with respect to the use cases (e.g., macro sites, small cells, on-demand relay nodes, mobile relay nodes, direct D2D communication) and to optimize the mobility support to the use case (e.g., no mobility for factory robots; very high mobility for vehicular communication).
6. See [5] for detailed performance requirements for those two groups.
7. That is, frequencies above 30 GHz, where the wavelength is smaller than 1 cm.
8. NR in Rel-15 should support up to 52.6 GHz according to the NR WID [24].
9. The super-6-GHz band is sometimes referred to as millimeter wave. However, the millimeter-wave spectrum actually starts at 30 GHz, the boundary where the wavelength equals 1 cm and enters the millimeter range.
10. These are the spectrum bands to be considered to be allocated for mobile usage during World
Radio Conference 2019 (WRC 2019).
the system BW in the millimeter wave can be much larger compared to sub-6 GHz (due to spectrum availability), the noise power is much larger. Finally, as diffraction is not dominant in this range, the multipath profile of the channel in the millimeter-wave region is composed of a small number of multipath components (i.e., sparse channels) [27].
The advantages of millimeter-wave frequencies are the following [27, 28]:
• Due to the large attenuation in free space and blockage by walls, the spectrum is highly reusable.
• Large spectrum chunks are available in the millimeter-wave range, enabling high transmission rates and capacity.
• They are well suited for massive MIMO applications, as the antenna size (and the separation between antennas) can be small; thus, a large number of antennas can be packed in a single array of a reasonable size.
• They allow for narrow beamwidths and precise beamforming.
• Due to the pencil beams and the high directivity needed for their operation, they improve security, as it is harder to eavesdrop when direct visibility is needed between transmitter and receiver.
Table 14.3
Spectrum Suitability [29]

Spectrum/Licensing | Characteristic | eMBB | mMTC | URLLC
Sub-6 GHz | High coverage, low spectrum availability | Yes | Yes | Yes
Above 6 GHz | Low coverage, high spectrum availability | Yes (as extra capacity) | Possible | N/A (not reliable)
Licensed spectrum | Exclusive use | Yes | Yes | Yes
Licensed shared spectrum | Shared exclusive use | Yes | Possible | Possible
Unlicensed spectrum | Shared use | Yes | Possible | N/A (no QoS)
• The transmission directions of DL, UL, SL, and backhaul link;
• Multiple duplexing schemes: FDD paired, FDD unpaired for either direction (similar to the supplemental DL concept in LTE CA), TDD with semistatic DL/UL direction configuration, and TDD with dynamic DL/UL resources configuration change;
• Different PHY layer numerologies: slot size, subcarrier separation;
• Component carrier bandwidth spanning from narrowband up to very
wideband that is going to be available in the millimeter-wave region;
• Forward compatibility to ensure smooth introduction of new features
via reserved blank resources.
11. This configuration is currently considered for at least up to 40 GHz for eMBB and URLLC.
The discussion for the waveform for mMTC and frequencies beyond 40 GHz was still ongo-
ing at the time of writing.
is kept. The assumption is that the numerology is not tied to the frequency band. However, it is reasonable to assume that the low-frequency bands (sub-6 GHz) will use the smaller subcarrier separations (as they typically operate in NLOS scenarios) to overcome the multipath causing high frequency selectivity. High frequencies (millimeter wave) operate in LOS and do not experience strong multipath (as elaborated in Section 14.3.1.1); therefore, they can cope with larger subcarrier separations. The larger subcarrier separation is also recommended at millimeter wave for mobility scenarios, as the higher the frequency, the higher the Doppler shift at the same speed (detailed values can be found in [30]).
Speaking of the time domain, there is currently an assumption of using a single fixed subframe duration of 1 ms. However, as the slot can be either 7 or 14 OFDM symbols for subcarrier spacings of up to 60 kHz, and 14 OFDM symbols for higher subcarrier spacings, the duration of the slots (which define the schedulable units) will differ. The duration of a slot can range from 1 ms (for 14 OFDM symbols and a subcarrier spacing of 15 kHz) down to 31.25 µs (for 14 OFDM symbols and a subcarrier spacing of 480 kHz). A single slot can be used fully for DL or UL, or can cover a DL transmission part, a gap period, and an UL transmission part. This can be changed dynamically from slot to slot to allow for capturing the immediate traffic changes. The data can be allocated to a single slot or to multiple slots. Independent of the subcarrier separation, the physical resource block (PRB) is composed of 12 consecutive subcarriers and 1 time slot. The different numerologies can be multiplexed in a single NR carrier bandwidth both in the frequency domain (FDM) and in the time domain (TDM); for example, one subframe can use 15 kHz, while the next one can use 60 kHz. Multiplexing can be done, for example, for the reliability/latency requirements of different services, where, for instance, in eMBB the UP latency can be set to 4 ms, while for URLLC it can be configured to be 0.5 ms. Shortening the slot by increasing the subcarrier spacing is one way in which the URLLC short latency can be achieved. The other is the use of mini-slots, which are the smallest schedulable resource. The mini-slots are located within the slot and are much shorter than 0.5 ms to meet the 1-ms E2E latency requirement of some of the URLLC applications [14, 30]. Additionally, a flexible channel bandwidth should be supported for NR (as for LTE), with the widest single component carrier’s channel bandwidth being not smaller than 80 MHz (compared to the LTE maximum CC BW of 20 MHz) [14].
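The slot durations quoted above follow from the OFDM symbol duration being inversely proportional to the subcarrier spacing. A short sketch reproducing the 1-ms to 31.25-µs range for 14-symbol slots (the set of spacings listed is chosen for illustration):

```python
# Duration of a 14-symbol NR slot: scales inversely with the subcarrier spacing.
REFERENCE_SCS_KHZ = 15      # LTE-like numerology: 14 symbols span 1 ms
REFERENCE_SLOT_MS = 1.0

for scs_khz in (15, 30, 60, 120, 240, 480):
    slot_ms = REFERENCE_SLOT_MS * REFERENCE_SCS_KHZ / scs_khz
    print(f"{scs_khz:>3} kHz subcarrier spacing -> 14-symbol slot = {slot_ms * 1000:.2f} us")
# 15 kHz -> 1000.00 us ... 480 kHz -> 31.25 us
```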
mission; and spatial diversity for control channels and CoMP-type schemes. Additionally, an important aspect of multiantenna operation in NR is beamforming, which is a particularly good match for millimeter wave (however, it can also be used in the sub-6-GHz bands, as standardized in LTE). Beamforming plays an important role because the higher the frequency, the shorter the wavelength; thus, antenna sizes and the separation between individual antennas can be very small, making it possible to place a large number of antenna elements in a panel and achieve very precise and narrow beams. Also, the larger the number of antennas (massive MIMO), the more stable the channel and the less fading is experienced, resulting in the channel-hardening effect [31].
porated set of functionalities in the NR design to enable establishing and main-
taining the transmission/reception point’s (TRP12) and UE’s beams for DL and
UL transmission. In NR, the cell is defined by a single set of synchronization
signals and can be composed of multiple beams; thus, switching between the
beams in this scenario is not considered as a handover, but is managed at the
L1/L2 level. With respect to this, NR supports both, cell-level mobility with
RRC involvement and beam-level mobility with minimum or without RRC
involvement; see Figure 14.1 [32]. Therefore, both DL and UL designs incor-
porate the beam management functionality, by introducing a set of procedures,
like beam determination (operation of TRP or UE to select transmit/receive
beam), measurement, reporting, and beam sweeping [14]. TRP sweeps through
the cell coverage in time domain by activating certain beams. In the DL, the
synchronization signals (NR-SS) and NR-PBCH can be transmitted in the dif-
ferent time intervals for the UE to determine which beam is the best one (for
initial access). In the UL, the TRP can obtain knowledge of the best UL beam
acquired by RACH procedure where for each beam the RAP is repeated [14].
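The relationship between carrier frequency, wavelength, and achievable array size mentioned above can be quantified with a simple calculation (the frequencies are chosen purely as examples): with half-wavelength element spacing, an 8 × 8 panel at 28 GHz fits within roughly 4 cm × 4 cm, whereas at 2 GHz it would be about 60 cm on a side.

```python
# Wavelength, half-wavelength element spacing, and 8x8 panel size vs. carrier frequency.
C = 3e8   # speed of light, m/s

for freq_ghz in (2.0, 28.0, 70.0):       # assumed example carrier frequencies
    wavelength_cm = C / (freq_ghz * 1e9) * 100
    spacing_cm = wavelength_cm / 2        # typical half-wavelength element spacing
    panel_side_cm = 8 * spacing_cm        # rough side length of an 8 x 8 element panel
    print(f"{freq_ghz:5.1f} GHz: lambda = {wavelength_cm:5.2f} cm, "
          f"lambda/2 = {spacing_cm:5.2f} cm, 8x8 panel side ~ {panel_side_cm:5.1f} cm")
```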
14.3.3.1 Layer 2
Some modifications include a study on whether the segmentation function should be configurable per service and whether the concatenation function should be moved to a lower L2 sublayer. At the MAC level, there are considerations to use only asynchronous HARQ in both UL and DL, with multiple HARQ processes. Speaking of the different numerologies on the PHY layer, discussed in Section 14.3.2.1, the NR should support mapping of the logical channels to the new numerology and TTI duration. In terms of flow aggregation, both CA and DC are being considered in NR in a similar fashion as in LTE [16].
for each service. Some slices can have common functions (e.g., mobility management). The slice selection function handles the initial UE attach or a new session establishment request to select the proper slice for a UE depending on UE capabilities (e.g., slice support), subscription, and service type. It acts as a routing function to link the UE with the proper CN part of the NW slice based on the slice ID [10]. By customizing the NFs and their location, the operation of the network can be optimized for each service type with respect to its requirements. For example, services requiring low latency can have the critical NFs placed close to the network edge, or stationary mMTC services that may not require mobility support (or only limited capabilities) could do away with mobility-related NFs. In MOCN scenarios, where multiple operators use the same network for the same service type, the slicing should assure a level of independence (also known as isolation) in terms of, for example, congestion, where congestion in one slice should not have an impact on another slice. The other domain requiring isolation may be security, where one tenant (being an MNO) uses security mechanisms independent of the other. The UE can be preconfigured (or not) with the supported slice IDs, but it is always up to the network to decide to which slices the UE can be connected. The slice ID should reflect the tenant ID (i.e., operator) and service type (e.g., eMBB, URLLC, mMTC). The UE may or may not support connectivity to multiple network slices at the same time (the slices in this case should belong to the same operator).
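A minimal Python sketch of the slice-selection idea described above (the slice IDs, service types, and selection policy are invented purely for illustration; the actual procedure is specified in [10]):

```python
# Illustrative slice selection: route a UE's session to a CN slice based on the
# requested service type, the subscription, and the UE-supported slice IDs.
# All identifiers and the policy below are invented examples.

NETWORK_SLICES = {
    "slice-embb":  {"service": "eMBB",  "functions": ["MM", "SM", "UP"]},
    "slice-urllc": {"service": "URLLC", "functions": ["SM", "UP-edge"]},   # critical NFs at the edge
    "slice-mmtc":  {"service": "mMTC",  "functions": ["SM"]},              # no mobility-related NFs
}

def select_slice(requested_service, subscribed_slices, ue_supported_slices=None):
    """Return the slice ID serving the requested service, honouring the subscription
    and (optionally) the slice IDs preconfigured in the UE."""
    for slice_id, profile in NETWORK_SLICES.items():
        if profile["service"] != requested_service:
            continue
        if slice_id not in subscribed_slices:
            continue
        if ue_supported_slices and slice_id not in ue_supported_slices:
            continue
        return slice_id
    return None   # network decides on a default slice or rejects the request

print(select_slice("URLLC", subscribed_slices={"slice-embb", "slice-urllc"}))
# -> 'slice-urllc'
```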
The RAN should support the network slicing in terms of [13]:
• Being aware of the support of the slices in the neighboring cells (not all
slices may be available in all cells in the NW);
• Being able to support the UE association with multiple slices simultane-
ously with one signaling (RRC) connection.
13. eLTE is the name used in 3GPP documents during the study item phase; thus, it is used herein.
14. The SA2 [10] uses different naming for these interfaces; for example, NG-C is referred to as
NG2 and NG-U to as NG3.
Figure 14.3 New RAN architecture. (© 2016. 3GPP™ TSs and TRs are the property of ARIB, ATIS, CCSA, ETSI, TTA, and TTC.)
15. The LTE protocol stack is taken as a basis for the table as NR protocol stack was still under
development at the time of this writing.
Table 14.4
Functional Split Options for NR

Split Option [13] | Split Point | CU Functionality | DU Functionality | Transport Characteristics
Option 1 | Above-PDCP split | RRC | PDCP, RLC, MAC, PHY, RF | High latency, small bandwidth
Option 2 | Below-PDCP split (DC-like architecture, allows for traffic aggregation of NR and E-UTRA) | RRC, PDCP | RLC, MAC, PHY, RF | High latency, small bandwidth
Option 3 | Intra-RLC split | RRC, PDCP, asynchronous part of RLC (e.g., segmentation) | Synchronous part of RLC (e.g., ARQ), MAC, PHY, RF | High latency, small bandwidth
Option 4 | RLC-MAC split | RRC, PDCP, RLC | MAC, PHY | Medium latency, medium bandwidth (increased by RLC headers)
Option 5 | Intra-MAC split (allows, e.g., centralized scheduling schemes such as CoMP) | RRC, PDCP, RLC, upper MAC (e.g., priority handling) | Lower MAC (e.g., HARQ), PHY, RF | Low latency, medium bandwidth (increased by MAC headers and padding)
Option 6 | MAC-PHY split | RRC, PDCP, RLC, MAC | PHY, RF | Low latency, medium bandwidth (increased by MAC headers and padding)
Option 7 | Intra-PHY split | RRC, PDCP, RLC, MAC, upper PHY (e.g., channel coding) | Lower PHY (e.g., FFT, CP), RF | Very low latency, high bandwidth (increased by PHY-layer overhead)
Option 8 | PHY-RF split (CPRI-like) | RRC, PDCP, RLC, MAC, PHY | RF | Very low latency, high bandwidth
the UE is dual-radio capable) through tight interworking by means of dual connectivity. In this case, one RAT is configured as providing the master CG and the other is configured as the secondary CG. Once the UE is connected, the RAN should be able to select the radio access for each data flow depending on, for example, the service, traffic, radio characteristics, UE mobility, and device type [9, 13]. For the tight-interworking dual connectivity, the following bearer configurations are considered: MCG bearer, split bearer via MCG, and SCG bearer (and these are similar to the legacy DC). The split bearer via SCG is also being evaluated with Option 3; however, it requires further justification. Figure 14.4 presents the protocol architectures with regard to the specific options from Table 14.5 for the tight interworking between LTE-NR and eLTE-NR [13].
Table 14.5
Architecture Options (Selected Subset Considered Currently in 3GPP)

Architecture Option | Definition | Description | LTE Role | NR Role
Option 1 | Stand-alone LTE, EPC connected | LTE eNodeB connected to EPC (baseline, legacy) | Single connectivity | N/A
Option 2 | Stand-alone NR, NGC connected | NR gNB connected to NGC | N/A | Single connectivity
Option 3 | Non-stand-alone/LTE-assisted, EPC connected | Data-flow aggregation across LTE eNodeB and NR gNB via EPC. S1-U and S1-C connection to LTE, Xx interface for DC | Legacy LTE as master node in DC, signaling anchor | NR for UP only as secondary node in DC, capacity booster
Option 3a | Non-stand-alone/“LTE-assisted,” EPC connected | Data-flow aggregation across LTE eNodeB and NR gNB via EPC. S1-U and S1-C connection to LTE, S1-U connection to NR | Legacy LTE as master node in DC, signaling anchor | NR for UP only as secondary node in DC, capacity booster
Option 4 | Non-stand-alone/NR-assisted, NGC connected | Data-flow aggregation across NR gNB and eLTE eNodeB via NGC. NG-C and NG-U connection to NR, Xn interface for eLTE UP | eLTE for UP only as secondary node in DC, capacity booster | NR as master node in DC, signaling anchor
Option 4a | Non-stand-alone/NR-assisted, NGC connected | Data-flow aggregation across NR gNB and eLTE eNodeB via NGC. NG-C and NG-U connection to NR, NG-U connection to eLTE | eLTE for UP only as secondary node in DC, capacity booster | NR as master node in DC, signaling anchor
Option 5 | Stand-alone eLTE, NGC connected | eLTE eNodeB connected to NGC | Single connectivity | N/A
Option 7 | Non-stand-alone/LTE-assisted, NGC connected | Data-flow aggregation across eLTE eNodeB and NR gNB via NGC. NG-C and NG-U connection to eLTE, Xn interface for NR UP | eLTE as master node in DC, signaling anchor | NR for UP only as secondary node in DC, capacity booster
Option 7a | Non-stand-alone/LTE-assisted, NGC connected | Data-flow aggregation across eLTE eNodeB and NR gNB via NGC. NG-C and NG-U connection to eLTE, NG-U connection to NR | eLTE as master node in DC, signaling anchor | NR for UP only as secondary node in DC, capacity booster
(During the production of this book, the standardization progressed, and for NR a new sublayer above PDCP has been proposed.) Option 3 enables the possibility of introducing NR while still using the EPC and legacy LTE, by reusing the DC baseline from legacy LTE (the EPC is not impacted, as NR is hidden from the CN). The Xx interface in this option allows the provision of the split bearer and CP functions (it will be very similar to X2), while Option 3a requires the NR SgNB to be connected
Figure 14.4 Protocol architecture options for tight interworking. (After: [13].)
directly to the EPC via S1-U for the SCG bearer. Options 7 and 7a are based on a similar configuration (i.e., the eNodeB serves as master); the difference from Options 3 and 3a is that the eNodeB is connected to the NGC, thus enhancements to LTE are required to support the NG interface (hence, eLTE). Options 4 and 4a are based on the opposite configuration (i.e., the NR gNB serves as master, while the eLTE eNodeB is connected either through Xn or NG-U, for the split bearer or the SCG bearer, respectively). eLTE requires an upgraded PDCP (on the UP stack), while the lower layers are the same as for legacy LTE. In terms of RRC signaling, the UE is assumed to have one RRC state machine related to the MCG, while the RAN can have two RRC entities (one for LTE and one for NR). They can both generate ASN.1, but the ASN.1 provided by the secondary node/RAT is transported via the master [16].
• The starting point is the legacy LTE connected to the EPC. This can serve as a coverage layer, while NR is introduced in the higher frequencies for 5G Phase 1 (with NR providing higher capacity in the hotspots). For this scenario, the aim is to utilize the existing configuration as much as possible; thus, Option 3/3a is a reasonable approach, where the NR gNBs are added using the legacy DC configurations. This option minimizes the impact on the EPC, while the NR is hidden from the CN.
• In the second step, Options 7/7a enable the introduction of the NGC, while still keeping LTE as the anchor and NR as the secondary node. This requires an upgrade on the LTE side to support the NG interface toward the NGC. One potential solution for a smooth transition from EPC to NGC is to encapsulate the EPC as a slice within the NGC.
• In the final step, the NR can be connected to the NGC in stand-alone mode (Option 2). The other configurations, with eLTE on equal terms with NR, also supporting stand-alone mode (Option 5) and dual connectivity in both directions (Options 4/4a and 7/7a), then become possible.
This approach helps to keep the impact on S1 and the EPC minimal, to enable a smooth introduction of NR from Release 15, and to achieve certain economies of scale before introducing the full-blown NGC and the other use cases that can be enabled by Release 16. For those operators who are interested in being the first to upgrade the CN to the NGC, another migration path is possible, where they go from Option 1 to Option 7/7a (phase 1) and finally to Option 2, Option 4, and Option 5 (phase 2). Another, more aggressive deployment approach is to evolve the network from Option 1 directly to Option 2 and Option 4; this requires an upgrade of LTE to eLTE and of the EPC to the NGC in a single step [13].
References
[1] 3GPP TR 22.891, v14.2.0, “Feasibility Study on New Services and Markets Technology
Enablers (SMARTER),” September 2016.
[2] ITU-R WP 5D, “Workplan, Timeline, Process and Deliverables for the Future Develop-
ment of IMT.”
[3] 3GPP RWS-150073 “RAN Workshop on 5G: Chairman Summary,” September 2015.
[4] www.3gpp.org/specifications/work-plan.
[5] 3GPP TS 22.261, v1.0.0, “Service Requirements for Next Generation New Services and
Markets,” December 2016.
[6] 3GPP TR 22.861, v14.1.0, “Feasibility Study on New Services and Markets Technology
Enablers for Massive Internet of Things,” September 2016.
[7] 3GPP TR 22.862, v14.1.0, “Feasibility Study on New Services and Markets Technology
Enablers for Critical Communications,” September 2016.
[8] 3GPP TR 22.863, v14.1.0, “Feasibility Study on New Services and Markets Technology
Enablers for Enhanced Mobile Broadband,” September 2016.
[9] 3GPP TR 22.864, v15.0.0, “Feasibility Study on New Services and Markets Technology
Enablers - Network Operation,” September 2016.
[10] 3GPP TR 23.799, v14.0.0, “Study on Architecture for Next Generation System,”
December 2016.
[11] 3GPP TR 33.899, v0.6.0, “Study on the Security Aspects of the Next Generation System,”
November 2016.
[12] 3GPP TR 38.900, v14.2.0, “Study on Channel Model for Frequency Spectrum Above 6
GHz,” December 2016.
[13] 3GPP TR 38.801, v1.0.0, “Study on New Radio Access Technology; Radio Access
Architecture and Interfaces,” December 2016.
[14] 3GPP TR 38.802, v1.0.0, “Study on New Radio (NR) Access Technology; Physical Layer
Aspects,” November 2016.
[15] 3GPP TR 38.803, v1.0.0, “Study on New Radio Access Technology; RF and Coexistence
Aspects,” December 2016.
[16] 3GPP TR 38.804, v0.4.0, “Study on New Radio Access Technology; Radio Interface
Protocol Aspects,” November 2016.
[17] 3GPP TR 38.805, v0.0.2, “Study on New Radio Access Technology; 60 GHz Unlicensed
Spectrum,” December 2016.
[18] 3GPP TR 38.912, v0.0.2, “Study on New Radio Access Technology,” September 2016.
[19] 3GPP TR 38.913, v14.1.0, “Study on Scenarios and Requirements for Next Generation
Access Technologies,” December 2016.
[20] Recommendation ITU-R M.2083, “IMT Vision – Framework and Overall Objectives of
the Future Development of IMT for 2020 and Beyond,” September 2015.
[21] 3GPP R1-162153, “Overview of Non-Orthogonal Multiple Access for 5G,” April 2016.
[22] 3GPP RP-160324, “Text Proposal to TR 38.913 for Mapping of KPIs to Scenarios,”
March 2016.
[23] 3GPP RP-160671, “New SID Proposal: Study on New Radio Access Technology,” March
2016.
[24] 3GPP RP-170855, “Work Item on New Radio (NR) Access Technology.”
[25] 3GPP RPa-160064, “Technical Requirements for Next Generation Radio Access
Technologies,” January 2016.
[26] 3GPP RP-162213, “Considerations on New Band Structure for 5G,” December 2016.
[27] U.S. FCC, “Millimeter Wave Propagation: Spectrum Management Implications,” July
1997.
[28] Yu, Y., et al., “Integrated 60GHz RF Beamforming in CMOS,” Analog Circuits and Signal
Processing, 2011.
[29] METIS D5.4, “Future Spectrum System Concept,” April 2015, https://www.metis2020.
com.
[30] 3GPP R1-1611366, “NR Time Domain Structure: Slot and Mini-Slot and Time Interval,”
November 2016.
[31] Hochwald, B., T. Marzetta, and V. Tarokh, “Multi-Antenna Channel-Hardening and
Its Implications for Rate Feedback and Scheduling,” IEEE Transactions on Information
Theory, September 2004.
[32] 3GPP R2-163437, “Beam Terminology,” May 2016.
[33] 3GPP R2-162520, “General Principles and Paging Optimization in Light Connection,”
April 2016.
[34] Open Networking Foundation, “Software-Defined Networking (SDN) Definition,”
https://www.opennetworking.org/software-defined-standards/.
[35] ETSI, “Network Functions Virtualization – Introductory White-Paper,” October 2012.
[36] NGMN Alliance, “5G White-Paper,” February 2015.