
From LTE To LTE-Advanced Pro and 5G

This document provides an overview of the evolution of wireless technologies from LTE to 5G. It discusses the underlying concepts of OFDM and frame structure used in LTE. Key aspects of LTE architecture are described, including modulation schemes, data rates, MIMO operation, and physical layer measurements. The document also covers coverage and capacity planning, resource allocation, mobility management, interference coordination, self-organizing networks, and the EPC network architecture.


From LTE to LTE-Advanced Pro and 5G

For a complete listing of titles in the


Artech House Mobile Communications Series,
turn to the back of this book.
From LTE to LTE-Advanced Pro and 5G

Moe Rahnema
Marcin Dryjanski
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the U.S. Library of Congress.

British Library Cataloguing in Publication Data


A catalogue record for this book is available from the British Library.

Cover design by John Gomes

ISBN 13: 978-1-63081-453-3

© 2017 ARTECH HOUSE


685 Canton Street
Norwood, MA 02062

All rights reserved. Printed and bound in the United States of America. No part of this book
may be reproduced or utilized in any form or by any means, electronic or mechanical, including
photocopying, recording, or by any information storage and retrieval system, without permission
in writing from the publisher.
All terms mentioned in this book that are known to be trademarks or service marks have been
appropriately capitalized. Artech House cannot attest to the accuracy of this information. Use of
a term in this book should not be regarded as affecting the validity of any trademark or service
mark.

10 9 8 7 6 5 4 3 2 1
Contents

Preface xv

1 Introduction 1
References 8

2 Underlying DFT Concepts and Formulations 9

2.1 Discrete-Time Fourier Transform 9

2.2 Discrete Fourier Transform 11

2.3 Zero-Padding for Efficient FFT Implementation 15

2.4 Frequency Resolutions and Impact of Zero Padding 16


References 18

3 Air Interface Architecture and Operation 21

3.1 Spectrum Allocation 22

3.2 OFDM Multiuser Access Mechanism 23

3.3 Framing and Physical Synchronization Signals 26


3.3.1 Type 1 FDD Frame Structure 26
3.3.2 Type 2 TDD Frame Structure 27
3.3.3 Physical Reference Signals 28


3.3.4 Basic Unit of Time in LTE 32

3.4 Timing Advance Function 32

3.5 MBMS Transmission 33

3.6 DL OFDMA and Implementation 34


3.6.1 Modulation within eNodeB 34
3.6.2 Demodulation within UE 36
3.6.3 Susceptibility to Frequency Offsets 38

3.7 Uplink SC-FDMA and Implementation 40


3.7.1 Implementation within the UE and eNodeB 41

3.8 Channels in LTE 43


3.8.1 Physical Channels 45
3.8.2 Transport Channels 47
3.8.3 Logical Channels 49

3.9 Layer 2 Protocol Sublayers 50


3.9.1 MAC Sublayer 50
3.9.2 RLC Sublayer 53
3.9.3 PDCP Sublayer 55

3.10 Modulation and Coding Schemes and Mapping Tables 56


3.10.1 MCS Index Mapping to Transport Block Size 59
3.10.2 Effective Channel Coding Estimation 61
3.10.3 CQI Mapping to MCS 61

3.11 Data Rates and Spectral Efficiencies 63

3.12 Transmission Modes and MIMO Operation in LTE 69


3.12.1 MIMO Operation 73

3.13 LTE Physical Layer Measurements 75


3.13.1 In the UE 75
3.13.2 In the eNodeB 77

3.14 UE Categories 79
References 79

4 Coverage-Capacity Planning and Analysis 81

4.1 Radio Link Budgeting for Coverage Dimensioning 82


4.1.1 Link Budgeting Formulations 83

4.2 Capacity Quality Analysis and Verification 107


4.2.1 Capacity Estimation 108
4.2.2 Traffic Demand Estimation and Site Calculation 110

4.3 Optimum System Bandwidth for Coverage 111

4.4 Trade-Offs Between Network Load and Coverage Performance 112

References 115

5 Prelaunch Parameter Planning and Resource Allocation 117

5.1 PCI Allocation 117

5.2 Uplink Reference Signal Allocation 118


5.2.1 Using Cyclic Shifts of the ZC Sequences 118
5.2.2 Defining u Independently from PCI 119
5.2.3 Pseudo-Random u-Hopping 119

5.3 Random Access Planning 119


5.3.1 PRACH Preamble Format Selection and Parameter Setting 120
5.3.2 Derivation and Assignment of Preamble Sequences 120
5.3.3 RootSequenceIndex Planning 123
5.3.4 PRACH Capacity Planning 124

5.4 PRACH Procedure 124

5.5 PRACH Optimization 125


References 126

6 Radio Resource Control and Mobility Management 127

6.1 RRC State Model in LTE 127


6.1.1 RRC Idle State 128

6.2 Handovers in LTE 131


6.2.1 Intrasystem Handover 132
6.2.2 Inter-RAT Handover 135
6.2.3 Handover Performance and Optimization 137
6.2.4 Impact of Measurement Filtering 137
6.2.5 Impact of Measurement Hysteresis and Time-to-Trigger Parameters 138

6.2.6 Impact of RLC/MAC Protocol Resets and Packet Forwarding 140
6.2.7 E-UTRAN to E-UTRAN Handover Delay (FDD) 140
6.2.8 E-UTRAN to UTRAN Handover Delay 141

6.3 Paging and Resource Allocation 143

6.4 DRX 145

6.5 Uplink Power Control and Optimization Parameters 145


6.5.1 Power Control on PUSCH 146
6.5.2 Power Control on PUCCH 148

6.6 CQI Measurement and Reporting 150


References 151

7 Intercell Interference Management in LTE 153

7.1 Interference Cancellation 154

7.2 Interference Randomization 154

7.3 Intercell Interference Avoidance or Coordination 155

7.4 Intercell Interference Coordination Mechanisms 156


7.4.1 Static/Quasi-Static ICIC Schemes 156
7.4.2 Dynamic ICIC Schemes 157
7.4.3 Dynamic ICIC Based on the PFR and SFR Concepts 159

7.5 Conclusions 160


References 160

8 SON Technologies in LTE 163

8.1 SON Standardization History and Status 164

8.2 Self-Configuration 165

8.3 Self-Optimization 165

8.4 Self-Healing 166

8.5 SON Architectures 166

8.6 SON Use Cases 167



8.6.1 Automatic Neighbor Relation Setting 167


8.6.2 Coverage and Capacity Optimization 168
8.6.3 PCI Configuration 168
8.6.4 Mobility Robust Optimization 169
8.6.5 Load-Balancing Optimization 170
8.6.6 Cell Outage Compensation 170
8.6.7 Dynamic Multiantenna Configuration 171
8.6.8 Interference Reduction 171
8.6.9 Energy Saving 171
8.6.10 RACH Parameter Optimization 172

8.7 Utilization of Automated UE Measurements 173


References 174

9 EPC Network Architecture, Planning, and Dimensioning Guideline 177

9.1 Mobility Management Entity 178

9.2 Serving Gateway 178

9.3 Packet Data Network Gateway 179

9.4 Home Subscriber Server 180

9.5 Policy Control and Charging Rules Function 180

9.6 E-SMLC and GMLC 180

9.7 IP Multimedia Subsystem 181

9.8 EPC Element Interconnection and Interfaces 181


9.8.1 S1-C and S1-U Interfaces 181
9.8.2 S5 Interface 182
9.8.3 S6a Interface 182
9.8.4 SGi Interface 183
9.8.5 Gx Interface 183
9.8.6 S13 Interface 183
9.8.7 S10 Interface 184
9.8.8 S12 Interface 184
9.8.9 X2 Interface 184

9.9 UE Mobility and Connection Management within EPC 185

9.9.1 EPC Mobility Management State 186


9.9.2 EPC Connection Management State 187

9.10 QoS Classes, Parameters, and Mapping Strategies 188

9.11 Charging Parameters 193

9.12 EPC Planning and Dimensioning Guidelines 193


9.12.1 Network Topology Considerations 193
9.12.2 EPC Dimensioning 194
References 197

10 LTE-Advanced Main Enhancements 199

10.1 Carrier Aggregation 200


10.1.1 Connection Signaling 200
10.1.2 Resource Scheduling 201

10.2 Enhanced Multiantenna Transmissions 202


10.2.1 Spatial Multiplexing on Uplink 202
10.2.2 Spatial Multiplexing on Downlink 202
10.2.3 Downlink Reference Symbols 203

10.3 Relay Nodes 204

10.4 IP Flow Mobility and Seamless WLAN Offload 206


10.4.1 Local and Selective IP Traffic Offload 206
10.4.2 Multiaccess PDN Connectivity and IP Flow Mobility 207

10.5 Enhanced PDCCH 208

10.6 Coordinated Multipoint Transmissions and Reception 209

10.7 Machine-Type Communication 210


10.7.1 Architectural Enhancements 211
10.7.2 Managing the Network Access 215
10.7.3 LTE MTC Devices 217
References 218

11 Optimization for TCP Operation in 4G and Other Networks 221

11.1 TCP Fundamentals 222


11.1.1 TCP Connection Setup and Termination 223

11.1.2 Congestion and Flow Control 224


11.1.3 Slow Start Congestion Control Phase 225
11.1.4 Congestion Avoidance Phase 226
11.1.5 TCP RTO Estimation 226
11.1.6 Bandwidth-Delay Product and Throughput 228

11.2 TCP Enhanced Loss Recovery Options 229


11.2.1 Fast Retransmit 229
11.2.2 Fast Recovery 229
11.2.3 Selective Acknowledgment 230
11.2.4 Timestamp Option 230

11.3 TCP Variations as Used on Fixed Networks 230


11.3.1 TCP Tahoe 231
11.3.2 TCP Reno 231
11.3.3 TCP New Reno 231
11.3.4 TCP SACK 231
11.3.5 TCP Vegas 231

11.4 Characteristics of Wireless Networks, Particularly 3G 232


11.4.1 Packet Losses, Delays, and Impacting Parameters 232
11.4.2 Delay Spikes 233
11.4.3 Dynamic Variable Bit Rate 234
11.4.4 Asymmetry 234

11.5 TCP Performance Optimization for Wireless Networks 235


11.5.1 Link Layer Solutions 235
11.5.2 TCP Parameter Tuning 243
11.5.3 Proxy Solutions 245
11.5.4 Selecting the Proper TCP Options 247
11.5.5 TCP Implementation Types and Impacts 249
11.5.6 Split TCP Solutions 250
11.5.7 TCP End-to-End Solutions 252

11.6 Application-Level Optimization 253


References 254

12 Voice over LTE 257

12.1 Radio Resource Scheduling 258

12.2 IMS Architecture in VOLTE 259


12.2.1 P-CSCF 260
12.2.2 S-CSCF 261

12.2.3 I-CSCF and IBCF 261


12.2.4 Application Server and the HSS 261
12.2.5 Naming and Addressing 262
12.2.6 UE Application Software 264

12.3 IMS Signaling Protocols in VOLTE 264

12.4 Voice Capacity Analysis 265


References 269

13 LTE-Advanced Pro: Enhanced LTE Features 271

13.1 LTE-Advanced Pro Introduction and Main Features Overview 272

13.2 Dual Connectivity 273


13.2.1 DC Design, Operation, and Configuration 275

13.3 LTE-Advanced Pro Interworking with WiFi 280


13.3.1 LTE-WLAN Aggregation 282
13.3.2 LTE-WLAN Radio Level Integration with IPsec Tunnel 284
13.3.3 RAN Controlled LTE-WLAN Interworking 285
13.3.4 Summary of the LTE-WLAN Interworking Schemes 286

13.4 LTE Operation in Unlicensed Spectrum 287


13.4.1 Licensed-Assisted Access 288
13.4.2 Nonstandardized Unlicensed LTE Access Schemes 292

13.5 Carrier Aggregation Enhancements 294


13.5.1 Massive CA 295
13.5.2 Uplink Enhancements 295
13.5.3 UE Support 296

13.6 Device-to-Device Communications 297


13.6.1 D2D Scenarios and Architecture 297
13.6.2 Architecture and Protocols 299
13.6.3 D2D Radio Interface: Side Link (PC5) 301
13.6.4 D2D Direct Communication 301
13.6.5 Synchronization and Broadcast 302

13.7 Evolution of Machine-Type Communications 303


13.7.1 Narrowband IoT 304

13.8 From LTE over LTE-Advanced to LTE-Advanced Pro: A Summary 308
13.8.1 LTE Release 8 310
13.8.2 LTE-Advanced Release 10 311
13.8.3 LTE-Advanced Pro Release 13 311
References 312

14 Toward 5G 315

14.1 Standardization Timeline and 5G Phases 316

14.2 5G Use Cases and System Performance Requirements 318

14.3 5G Air Interface: New Radio 321


14.3.1 Spectrum Considerations 321
14.3.2 NR PHY Layer 324
14.3.3 Protocol Stack Consideration 328

14.4 5G Architecture Concepts 329


14.4.1 5G Network Flexibility 329
14.4.2 System Architecture 332
14.4.3 Tight Interworking with eLTE 334
14.4.4 Migration Toward the 5G System 338
References 340

About the Authors 343

Index 345
Preface
The demand for broadband mobile services has brought new challenges for op-
erators to provide higher-capacity and higher-quality networks with lower per
bit cost. The Long-Term Evolution (LTE) fourth generation (4G) broadband
mobile communication systems and LTE-Advanced bring higher capacities
and spectral efficiencies for supporting high data rate services and the flexibility
for mixed media all-Internet Protocol (IP) communication with flexible spec-
trum allocation and utilization. The orthogonal frequency division multiple
access (OFDMA) concept underlying the air interface of LTE-based systems is
also what is being considered as the technology choice for implementation of
the fifth generation (5G) mobile communication system, which is still under
standardization. This book provides a detailed description of the LTE and LTE
advanced air interface at the physical and link layer protocol levels, the evolved
packet core network architecture and protocols, the network planning, and op-
timization with the many design trade-offs, performance analysis and results,
and the self-organizing network (SON) feature specifications and realization,
as well as the new features under LTE-Advanced Pro, the transition to 5G,
the requirements and the technology options, the standardization efforts, and
related issues and challenges. The book also provides a detailed chapter on the
end-to-end data transfer optimization mechanisms based on the Transmission
Control Protocol (TCP).
Chapter 1 introduces the basics of the LTE 4G broadband mobile com-
munication systems air interface (E-UTRAN) and the evolved packet core net-
work architecture. It explains the advantages that the new air interface design
based on the multicarrier transmission scheme of OFDMA and the simplified
distributed radio access network with the all-IP system architecture provide
over the earlier third generation (3G) cellular networks in terms of increased
throughput and spectral efficiencies, reduced network delays, and increased
coverage ranges. The chapter also provides a brief description of the evolution
of the LTE releases, highlights the main features introduced in each release,
and provides some statistics on the commercialization status of the 4G cellular
networks.
Chapter 2 provides a well-detailed and comprehensive coverage of the
discrete Fourier transform (DFT) concepts and formulations that are used as
the underlying implementation mechanisms for the orthogonal frequency divi-
sion multiplexing (OFDM) technology of the air interface in LTE and derives
and explains the various frequency and DFT resolutions. This chapter provides
the background and helps to clarify and explain some of the commonly asked
questions about such things as the sampling rates, and fast Fourier transform
(FFT) sizes used in the implementation of the air interface in practical devices,
and can also be used as a stand-alone chapter for those who are interested in
getting a deeper understanding of FFT and the DFT concepts and technologies
used in spectral analysis.
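The interplay among sampling rate, FFT size, and frequency resolution that the chapter derives can be previewed with a short NumPy sketch. The 30.72-Msps sampling rate and 2,048-point FFT are the standard figures for a 20-MHz LTE carrier; the tone index is purely illustrative:

```python
import numpy as np

fs = 30.72e6              # LTE sampling rate for a 20-MHz carrier
n_fft = 2048              # FFT size used in practical implementations
subcarrier_spacing = fs / n_fft   # DFT bin width = true frequency resolution

# One complex tone placed exactly on subcarrier k, observed for n_fft samples
k = 100                   # illustrative subcarrier index
t = np.arange(n_fft) / fs
x = np.exp(2j * np.pi * k * subcarrier_spacing * t)

# Without zero-padding the tone falls exactly on bin k
peak_bin = int(np.argmax(np.abs(np.fft.fft(x))))

# Zero-padding to 4,096 points interpolates the spectrum onto a finer grid,
# but the true resolution remains fs / n_fft
peak_bin_padded = int(np.argmax(np.abs(np.fft.fft(x, 4096))))

print(subcarrier_spacing)    # 15000.0 -> the familiar 15-kHz spacing
print(peak_bin)              # 100
print(peak_bin_padded)       # 200 (same frequency on a grid twice as dense)
```

Zero-padding thus interpolates the displayed spectrum without adding information; only a longer observation window improves the true resolution.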
Chapter 3 provides a detailed description of the air interface architec-
ture, its operation in terms of the OFDMA and single-carrier frequency mul-
tiple access (SC-FDMA) concepts and the implementation details, the framing
structure, the LTE basic timing units, the synchronization signals, the reference
symbol structure for the uplink and downlink, the physical channel structure
and the definition of the transport and the logical channels, and the layer-2
protocol sublayers including medium access control (MAC), radio link con-
trol (RLC), and the Packet Data Convergence Protocol (PDCP) protocols and
explains the transmission modes in LTE and the multiple input and multiple
output (MIMO) concepts and schemes used. The chapter explains the chan-
nel quality indicators and other channel characterization measures such as the
rank indicator and the precoding matrix indicator and their reporting details
and use. The chapter further derives the practical and theoretical channel data
rates for various system bandwidths, explains the third generation partnership
project (3GPP) mapping tables used for this purpose, and derives the channel
spectral efficiencies achieved in LTE. The various user equipment (UE) catego-
ries are also discussed in the chapter.
Chapter 4 discusses the details of radio access dimensioning based on the
criteria of coverage and capacity requirements, derives the detailed link budget-
ing formulations and flowcharts for this purpose, discusses and presents their
simplifications for practical realization, provides a snapshot of various com-
monly used path loss models and related parameters such as the log normal fade
margins, derives the gains due to hard handover in cell borders, and the various
criteria on which the link budgeting was based, estimates the overheads due to
various protocol layers and the reference symbols to give the designer a feeling
for the percentage of channel capacities used up in supporting the protocol
layers, and covers the capacity-based dimensioning and its integration with the
coverage based dimensioning to derive the final site counts based on the traffic
model. The chapter also derives and presents the trends in the optimum sys-
tem bandwidth for coverage and provides an analysis of the trade-offs involved
between network load and coverage performance and presents the trends in
graphical format.
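As a small taste of the link-budgeting formulations that Chapter 4 derives, the sketch below computes a receiver sensitivity from the thermal noise floor and converts it into a maximum allowed path loss. All numeric inputs are illustrative assumptions, not values taken from the book:

```python
import math

def max_allowed_path_loss_db(tx_power_dbm, tx_gain_db, rx_gain_db,
                             bandwidth_hz, noise_figure_db, required_sinr_db,
                             fade_margin_db, interference_margin_db):
    """Receiver sensitivity from kTB thermal noise, then the largest path
    loss the link can sustain after subtracting the design margins."""
    thermal_noise_dbm = -174 + 10 * math.log10(bandwidth_hz)   # kTB at 290 K
    sensitivity_dbm = thermal_noise_dbm + noise_figure_db + required_sinr_db
    eirp_dbm = tx_power_dbm + tx_gain_db
    return (eirp_dbm + rx_gain_db - sensitivity_dbm
            - fade_margin_db - interference_margin_db)

# Illustrative uplink: 23-dBm UE on 2 PRBs (360 kHz), 18-dBi site antenna,
# 2-dB eNodeB noise figure, -7-dB target SINR, 8-dB log-normal fade margin,
# and a 2-dB interference margin
mapl = max_allowed_path_loss_db(23, 0, 18, 360e3, 2, -7, 8, 2)
print(round(mapl, 1))    # about 154.4 dB of allowed path loss
```

The resulting maximum allowed path loss would then be fed into a path loss model of the kind surveyed in the chapter to obtain a cell range.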
Chapter 5 discusses the three important prelaunch planning tasks: the
allocation of physical cell identities (PCI), the uplink reference signal sequence
planning, and the physical random access channel (PRACH) parameter plan-
ning. Thereafter, it presents related algorithms. The chapter also discusses deri-
vation of the 64 PRACH preambles in each cell based on cyclic shifting of
Zadoff-Chu sequences, the PRACH capacity planning and optimization in
terms of configuration parameters such as the RACH physical resources, the
RACH preamble allocation, the RACH persistence level and backoff control,
and the RACH transmission power control.
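The derivation of preambles by cyclically shifting Zadoff-Chu sequences can be sketched as follows. The length-839 sequence is the standard one for the long PRACH preamble formats; the root index and the N_CS value of 13 are illustrative choices (this N_CS happens to yield exactly 64 shifts from a single root):

```python
import numpy as np

def zadoff_chu(root_u, n_zc=839):
    """Zadoff-Chu sequence of prime length n_zc (839 for PRACH formats 0-3)."""
    n = np.arange(n_zc)
    return np.exp(-1j * np.pi * root_u * n * (n + 1) / n_zc)

def prach_preambles(root_u, n_cs, n_zc=839):
    """Derive preambles from one root sequence by cyclic shifts of n_cs."""
    base = zadoff_chu(root_u, n_zc)
    return [np.roll(base, -v * n_cs) for v in range(n_zc // n_cs)]

seq = zadoff_chu(root_u=25)
# Constant amplitude: every sample has unit magnitude
print(bool(np.allclose(np.abs(seq), 1.0)))                 # True
# Ideal periodic autocorrelation: zero at any nonzero cyclic lag
print(bool(abs(np.vdot(seq, np.roll(seq, 13))) < 1e-6))    # True
# With N_CS = 13, one root yields the 64 preambles a cell needs
print(len(prach_preambles(root_u=25, n_cs=13)))            # 64
```

When the chosen N_CS yields fewer than 64 shifts per root, additional consecutive root indices are consumed, which is what makes RootSequenceIndex planning necessary.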
Chapter 6 covers the main services and functions of the RRC sublayer
such as the cell selection and reselection in the idle state and the handover func-
tions and performance in the connected state with the transition between the
states. The related measurement events, the filtering schemes, and the triggering
mechanisms and related parameters are defined and optimization-related con-
siderations are explained. The chapter also provides the detailed description of
the paging mechanisms and the related parameters for optimization, defines the
discontinuous reception (DRX) cycle and parameters, and explains the uplink
power control on the data and control channels along with the related param-
eters and optimization considerations.
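For reference, the open-loop part of the PUSCH power control described here follows the well-known formula of 3GPP TS 36.213, P = min{Pcmax, 10 log10 M + P0 + α·PL + ΔTF + f(i)}. A minimal sketch, with illustrative parameter values:

```python
import math

def pusch_tx_power_dbm(p_cmax_dbm, n_prb, p0_dbm, alpha, path_loss_db,
                       delta_tf_db=0.0, f_closed_loop_db=0.0):
    """PUSCH power: bandwidth term + target P0 + fractional path-loss
    compensation (alpha), capped at the UE's maximum power."""
    open_loop = (10 * math.log10(n_prb) + p0_dbm + alpha * path_loss_db
                 + delta_tf_db + f_closed_loop_db)
    return min(p_cmax_dbm, open_loop)

# Illustrative settings: P0 = -100 dBm, fractional compensation alpha = 0.8
print(round(pusch_tx_power_dbm(23, 4, -100, 0.8, 120), 2))   # 2.02 dBm
print(pusch_tx_power_dbm(23, 4, -100, 0.8, 150))             # 23 -> capped at Pcmax
```

An alpha below 1 only partially compensates path loss, trading cell-edge user power against the interference caused to neighbor cells, which is a central tuning consideration in the chapter.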
Chapter 7 discusses the 3GPP proposed mechanisms for dealing with in-
tercell interference on both the uplink and the downlink in LTE systems. These
mechanisms are based on interference cancellation, interference randomization,
and interference avoidance through intercell interference coordination (ICIC)
measures in the resource assignment processes and are all discussed in detail.
Both the static and the dynamic variants of ICIC schemes such as those based
on resource sharing between the inner and outer cell edge users in a self-config-
ured centralized or distributed scheme and the fractional or partial frequency
reuse are explained. This chapter provides the detailed implementation mecha-
nisms to adequately cover this important feature of LTE systems.
Chapter 8 covers the subject of self-organizing network (SON), which
is a feature specification of LTE networks. The SON feature specifications in-
clude self-configuration, self-healing, and self-optimization and various 3GPP
use cases for each. These are all discussed in detail with sample algorithms and
mechanisms for implementing each use case. The chapter also covers a review
of the related standardization history and the status and provides some key ref-
erences for further study on related algorithms and implementation means for
various use cases. This chapter also provides the stimulus background for those
who are involved in ongoing research and development of new algorithms for
the implementation of the SON concepts and use cases.
Chapter 9 covers the Evolved Packet Core (EPC) network architecture
and defines and explains the various EPC network elements, their functions,
and interfaces with the protocols used and the UE mobility and connection
management states within the EPC. The chapter also discusses the quality of
service (QoS) classes, parameters, their mapping to various Evolved Packet Sys-
tem (EPS) bearers, and service samples and the performances achieved, discuss-
es the parameter mapping strategies for QoS realization, the flow control pa-
rameters, and the charging parameters, and provides the necessary background
and the guidelines for the design of the core network topology and the dimen-
sioning of the network elements and their interfaces based on well characterized
traffic models and related engineering formulas. This chapter complements the
rest of the book in providing a thorough understanding of the Evolved System
Architecture (ESA) of the 4G mobile communication systems.
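Among the well-characterized traffic models and engineering formulas used in such dimensioning, the Erlang B formula is the classical one for sizing a pool of channels or server resources against a blocking target. A generic sketch, with illustrative numbers rather than values from the chapter:

```python
def erlang_b(offered_erlangs, channels):
    """Erlang B blocking probability via the numerically stable recursion
    B(n) = A*B(n-1) / (n + A*B(n-1)), starting from B(0) = 1."""
    b = 1.0
    for n in range(1, channels + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b

def channels_needed(offered_erlangs, target_blocking):
    """Smallest channel count whose blocking meets the target."""
    n = 1
    while erlang_b(offered_erlangs, n) > target_blocking:
        n += 1
    return n

# Illustrative: 100 Erlangs of offered traffic at a 1% blocking target
n = channels_needed(100, 0.01)
print(n, round(erlang_b(100, n), 4))
```

The same recursion is used whether the "channels" are trunks on an interface or licensed capacity units on an EPC element.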
In Chapter 10, the main key enhancements introduced in LTE-Advanced
Releases 10, 11, and 12 are explained and discussed, which include carrier ag-
gregation and the associated signaling and scheduling means, the improvements
in multiantenna techniques on downlink (DL) and uplink (UL) and the new
UE-specific demodulation reference symbols, relaying mechanisms to extend
cell coverage and the required extra functions and channels, various mecha-
nisms for IP traffic offload, and PDN connectivity with IP flow mobility, the
enhanced PDCCH for handling the required signaling for new features, and the
various scenarios for implementing coordinated multipoint transmission and
reception and the achievable performance benefits. This chapter also provides
a detailed discussion of the increasingly important area of machine-to-machine
(M2M) communication, issues involved, and the provisions made or under de-
velopment in LTE and LTE advanced to facilitate its use for efficient machine
type communication from both the access side and the core network side.
Chapter 11 provides the reader with a thorough detailed discussion of the
important TCP and its many variations and important parameters. The chapter
presents several solutions and their performance based on research results from
the literature for optimization of TCP operation and performance in wireless
networks, particularly for LTE systems, Universal Mobile Telecommunications
System (UMTS), and General Packet Radio Service (GPRS) networks. The
solutions include parameter tuning and feature selection at the radio link layer
protocols, automatic repeat request (ARQ) proxies involving no changes to the
TCP, parameter tuning and optimization within the transport protocol itself,
and possible modifications of the protocol to help it to best suit the charac-
teristics of the lossy radio links. The optimization of the TCP performance in
wireless networks, where packet losses on the radio link are a common setback, is
very crucial for optimum use of the limited radio resources and maximization
of the goodput while preventing unnecessary delays.
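As one concrete instance of the TCP mechanics the chapter covers, the standard retransmission timeout estimator of RFC 6298 (the Jacobson/Karels smoothing of round-trip-time samples, listed in the contents as TCP RTO estimation) can be sketched as follows:

```python
def make_rto_estimator(alpha=1/8, beta=1/4, min_rto=1.0, granularity=0.001):
    """RFC 6298 RTO estimation: smoothed RTT (SRTT) plus four times the
    smoothed RTT variation (RTTVAR), floored at a minimum RTO."""
    state = {"srtt": None, "rttvar": None}

    def on_rtt_sample(r):
        if state["srtt"] is None:                 # first measurement
            state["srtt"], state["rttvar"] = r, r / 2
        else:                                     # update RTTVAR before SRTT, per the RFC
            state["rttvar"] = (1 - beta) * state["rttvar"] + beta * abs(state["srtt"] - r)
            state["srtt"] = (1 - alpha) * state["srtt"] + alpha * r
        return max(min_rto, state["srtt"] + max(granularity, 4 * state["rttvar"]))

    return on_rtt_sample

rto = make_rto_estimator()
print(rto(0.200))   # 1.0 -- a steady 200-ms RTT sits at the 1-s floor
print(rto(0.200))   # 1.0
print(rto(1.500))   # a delay spike inflates RTTVAR and hence the RTO
```

A single delay spike of the kind common on radio links roughly doubles the timeout here, which illustrates why the chapter's link-layer and proxy solutions aim to shield TCP from such spikes rather than let them trigger spurious retransmissions.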
Chapter 12 covers the implementation of voice over LTE (VOLTE) based
on voice over IP technology. This chapter discusses the semipersistent sched-
uling for voice packets on the air interface to reduce the signaling overhead,
explains the IP multimedia system element functions and interfaces in VOLTE
architecture, and provides a comprehensive discussion of the signaling proto-
cols and naming and addressing, as well as a rather simple means for estimating
the voice capacity of LTE networks. The chapter also presents typical voice
capacity estimates obtained from simulations results in the literature for com-
parison purposes.
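In the same spirit as the chapter's rather simple means of estimating voice capacity, the sketch below divides the PRB budget of a cell by the PRB cost of one VoLTE user; the per-packet PRB cost, control overhead, and voice activity factor are illustrative assumptions, not the book's numbers:

```python
import math

def volte_capacity(system_prbs, prbs_per_voice_packet, packets_per_sec=50,
                   activity_factor=0.5, control_overhead=0.25):
    """Rough VoLTE user count: PRB-pairs available per second (1,000
    subframes/s, derated for control overhead) divided by the PRB-pairs
    one user in a talk spurt consumes. Illustrative model only."""
    prb_pairs_per_sec = system_prbs * 1000 * (1 - control_overhead)
    per_user = prbs_per_voice_packet * packets_per_sec * activity_factor
    return math.floor(prb_pairs_per_sec / per_user)

# 10-MHz carrier (50 PRBs), 2 PRB-pairs per AMR VoIP packet, one packet
# every 20 ms during talk spurts, 50% voice activity
print(volte_capacity(system_prbs=50, prbs_per_voice_packet=2))   # 750
```

Such back-of-envelope counts land in the same range as the simulation-based estimates the chapter cites, which is what makes the simple model useful for planning.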
Chapter 13 serves as an overview of the newly standardized features in
the LTE area including Rel-12 and Rel-13. It covers multilink aggregation with
the use of dual connectivity, unlicensed LTE, and tight radio-level interwork-
ing with WiFi, and outlines the further carrier aggregation enhancements. The
chapter addresses the device-to-device communications, explains the architec-
tural and air interface details, and concludes with a section that compares the
features and performance of the key LTE evolution steps.
Chapter 14 offers an overview of the potential 5G technologies and out-
lines the standardization approach and the main requirements and spectrum
aspects. The chapter presents the novel radio and architecture concepts includ-
ing multiple numerologies and network slicing, which serves as an input to the
overall system view and migration paths including the tight interworking with
the evolved LTE system.
This book is an outgrowth of the authors’ independent research and study
in the field for more than five years and involvement in the planning, design,
and analysis of the first LTE networks in Latin America, the design of a holistic
framework for the implementation of the LTE-Advanced Pro features includ-
ing licensed-assisted access (LAA), LTE-WLAN aggregation (LWA), dual con-
nectivity (DC), and carrier aggregation (CA), narrowband internet of things
(NB-IoT) implementation evaluation within the LTE system framework, the
design of 5G scheduler within the 5G radio access network (RAN) architecture
in the tier 1 vendor’s research and development (R&D) office, implementation
of the L2/L3 protocol stack on the LTE radio interface, and providing technical
trainings in the area of LTE, LTE-Advanced and UMTS for operators, vendors,
R&D institutions in the United States, Mexico, Asia, the Netherlands, Swe-
den, Spain, and Poland. The authors have more than 20 years of professional
engineering experience and consulting in the wireless telecommunication field
starting from low Earth orbit satellites at Motorola to GSM, to 3G planning
in the United States and Asia, and LTE network planning and analysis in Mex-
ico and Sweden with several related technical publications and patents in the
field, and have been working with 5G since its early research phase from 2012,
within the EU-funded project 5G NOW. The book provides a self-sufficient, con-
cise, and adequate system-level coverage of the LTE and LTE-Advanced network
architecture and system design, with a heavy focus on the radio frequency (RF)
planning aspects of LTE networks and the features in LTE-Advanced, but it is
also intended to benefit professionals involved in evolved core network planning
and dimensioning and in end-to-end optimization, as well as those interested in
complementing their knowledge of LTE with the trends, issues, and challenges
in the design and development of the upcoming 5G network.

Acknowledgments
The authors would like to thank the many pioneering researchers in indus-
try and academia, the results of whose efforts helped to create the ground-
work for composing this book. Without those efforts, it would never have been
possible to put together this book. We would also like to thank the many clients
who provided the consulting opportunities to us in this industry, which helped
us with the knowledge, experience, and insight to be able to put together a book
of this scope in the field.
Special thanks are due to the review and production teams at Artech
House Publishers who provided excellent support in the production of this
book and helped to meet a reasonable schedule. In particular, we would like to
thank Aileen Storry, the senior acquisitions editor, for coordinating the initial
review of the scope and content of the proposal for publication approval and
her subsequent kind efforts in the arrangement of review and refinement of the
material. We would also like to thank Sarah O’Rourke and her team for their
kind cooperation in the copyediting and production of the book.
1
Introduction
The third generation (3G) Long-Term Evolution (LTE) is the technology speci-
fication from the 3GPP Standards Group that provides the cellular operators
with the capability to offer wireless broadband services to their subscribers with
increased capacity, coverage, and speed over the earlier wireless sys-
tems. LTE-Advanced, from Release 10 onward, which fulfills the requirements of
IMT-Advanced, is referred to as the fourth generation (4G) mobile network-
ing technology. With increased data applications, content, and video services,
broadband is a significant part of today’s mobile user experience. This has been
demonstrated by the rapid increase in the uptake of wideband code division
multiple access (WCDMA) and high speed packet access (HSPA) networks
worldwide. Mobile video streaming and social networking now make up the
largest segment of data traffic in networks. Video makes up approximately 45%
of mobile data traffic, and it is expected to increase to 55% by 2020.¹
This trend in the demand for broadband mobile services has brought new
challenges for operators to provide higher capacity and higher quality networks
with reduced delays and lower per-bit cost. The GSA report² in May 2016
stated that 503 operators have commercially launched LTE networks across 167
countries. The LTE networks are already dominating the world’s mobile infra-
structure markets. The technology is one of the fastest network migrations ever
seen and is making it possible for operators to either compete with fixed service
providers or to provide broadband services in areas where the fixed infrastructure
does not exist and would be too expensive to deploy. Many industry analysts
have forecasted that LTE’s global momentum will continue between now and
2020. By the end of 2015, global LTE and LTE-Advanced connections had
reached 1.068 billion, according to a GSMA report. By 2020, LTE is expected
to account for nearly 30% of global connections.

1. These trends are the kind that have been reported, for instance, in the Cisco white paper,
“VNI Forecast and Methodology 2015-2020,” http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/complete-white-paper-c11-481360.pdf.
2. According to http://gsacom.com/.
In the LTE and LTE-Advanced radio access network architecture, the
control and management of the radio resources, as implemented in the central-
ized RNCs in UMTS, have been distributed to the evolved Node Bs
(eNodeBs), which intercommunicate through the X2 interface. The X2 interface
has a key role in the intra-LTE handover operation. These measures result in
speedier response to real-time channel conditions for resource scheduling. It
also helps to reduce the protocol processing overhead in the control plane and
along the user data paths, which, in turn, reduces the latency in setting up calls
and data connections as well as the network deployment costs. The extension
of an IP mode of transmission to the radio access for all communications has
helped to reduce the time it takes to access the radio and the core network re-
sources, helping to significantly reduce, for instance, a data session setup time
that can take anywhere from 1 to 15 seconds in the current wireless mobile
networks.
LTE introduces a new air interface based on orthogonal frequency divi-
sion multiple access (OFDMA) and multiple antenna techniques to achieve
high-data-rate and low-latency data services. The OFDMA technology is avail-
able freely on the market and has been used in digital audio satellite broadcast
systems as well as in the implementation of WiMAX as specified in the IEEE
802.16 standards for wireless metropolitan area networks. The OFDMA removes the intracell interference
that exists in 3G CDMA-based networks and hence allows for much higher
data rates, increased capacity, and larger cell sizes. Speeds of beyond 100 Mbps
are made possible, whereas the highest speed achieved in 3G networks using
the HSDPA feature is around 43 Mbps. In heavily loaded networks, the highest
speed observed with the advanced 3G networks has been at around 2 Mbps,
whereas LTE has allowed for speeds of up to 20 Mbps, a factor of 10 times
higher. Moreover, the mobility across the cellular network can be maintained
at speeds from 120 km/h to 350 km/h or even up to 500 km/h depending
on the frequency band for usage on high-speed trains, as the number of high-
speed rail lines increases and train operators aim to offer an attractive working
environment to their passengers. Moreover, the use of dedicated channels at the
transport level has been eliminated in LTE, which simplifies the MAC (elimi-
nating MAC-d entity) and improves resource utilization. The radio interface is
based on shared and broadcast channels, and there is no longer any fixed level of
resources dedicated to each user independent of the real-time data requirement.
This increases efficiency of the air interface, as the network can control the use
of the air interface resources according to the real-time demand from each user,
through resource multiplexing in real time and semipersistent scheduling tech-
niques for certain types of traffic such as voice and streaming video which have
known regularity in traffic (packet) generation.
The LTE RAN, also referred to as Evolved UMTS Terrestrial Radio Ac-
cess Network (E-UTRAN), is designed primarily for full-duplex operation in
paired spectrum. In contrast, WiMAX operates in half duplex over the same
spectrum for uplink and downlink, where information is transmitted in one
direction at a time. LTE can support time division duplexing (TDD) opera-
tion in unpaired spectrum; however, it is not the primary focus of the design
as in UMTS. LTE has been designed to support scalable bandwidths of 1.4, 3,
5, 10, 15, and 20 MHz, with the peak data rate scaling with the system
bandwidth. Peak data rates in the range of 100 Mbps for downlink and 50
Mbps for uplink using 1Tx-1Rx are expected with a 20-MHz spectrum result-
ing in spectral efficiencies of 5 bps/Hz/sector. This will allow a spectrum of 5
MHz to at least theoretically support 200 users per sector. The higher spectral
efficiency allows higher network throughputs in the same amount of spectrum
and hence, assuming the same base station cost, reduces the cost per bit to the end
user. The further advancements made to LTE in Releases 10, 11, and 12, known
as LTE-Advanced, are expected to deliver a peak data rate of 1,000 Mbps in the
downlink and 500 Mbps in the uplink with eventual increase to peak data rates
of 3,000 and 1,500 Mbps on downlink and uplink, respectively, using a total
bandwidth of 100 MHz that is made from five separate components of 20
MHz each. This has been extended to up to 32 components within Release 13
with the Massive CA feature, to be able to aggregate up to 640 MHz, while still
assuring backwards compatibility to Rel-8 numerology.
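As a quick arithmetic check of the figures quoted above (all numbers come from the text; the helper function is purely illustrative, not any 3GPP formula):

```python
# Arithmetic behind the quoted rate and bandwidth figures
# (values taken from the text; the helper name is illustrative).

def spectral_efficiency(peak_rate_bps, bandwidth_hz):
    """Peak spectral efficiency in bps/Hz."""
    return peak_rate_bps / bandwidth_hz

# Release 8 downlink: 100 Mbps over 20 MHz -> 5 bps/Hz/sector, as stated.
assert spectral_efficiency(100e6, 20e6) == 5.0

# LTE-Advanced: 5 components x 20 MHz = 100 MHz total.
assert 5 * 20e6 == 100e6

# Release 13 massive CA: up to 32 components x 20 MHz = 640 MHz total.
assert 32 * 20e6 == 640e6
```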
The OFDM-based modulation and multiaccess technology used over the
air interface will avoid the royalties, which are currently a considerable fraction
of the overall costs in CDMA-based systems. The technology basically splits
the allocated spectrum into thousands of extremely narrowband carriers, each
carrying a fraction of the data to be transmitted. Then the sophisticated con-
volutional turbo-coded data streams (using Rel-6 turbo codes) are divided into
parallel substreams mapped into QPSK, 16 QAM, or 64 QAM symbols (and up
to 256 QAM from Release 13) and used to modulate each narrowband subcar-
rier. The particular QAM format used for modulating the data stream on each
subcarrier will depend on the real-time channel condition. In this way, data
are transmitted in parallel substreams on each subcarrier. The air interface uses
short transmission time intervals (TTI) of 1 ms to rapidly respond to changing
channel conditions over the frequency-selective wideband spectrum by adapt-
ing the modulation order over each subcarrier within the allocated channel re-
sources and thus further enhance the spectral efficiency. The OFDM scheme is
also well suited for the spatial multiplexing techniques of MIMO. The MIMO
has been specified to use up to 4 transmit antennas in Release 8 and extended
up to 8 in Release 10. Due to the nearly perfect orthogonal detection of the
multiuser channels made possible with OFDM, there is virtually no cell
breathing (cell coverage shrinkage with the cell load) phenomenon in LTE as
would exist in CDMA-based systems. This provides for a more flexible radio
planning by decoupling coverage from capacity on a cell basis. LTE is expected
to offer optimum performance in a cell size of up to 5 km while still being
capable of delivering effective performance in cell sizes of up to 30-km radius,
with a more limited performance in cell sizes up to 100-km radius. To maximize
the spectral efficiency, frequency reuse factor of 1 has been proposed for both
downlink and uplink. However, this will result in some intercell and intersector
interference which can be more severe at cell edges. This interference is usually
mitigated through slow power control on uplink, as well as certain interference
coordination measures between cells and sectors. OFDMA is still considered (at
least up to 40 GHz) for the fifth generation (5G) systems currently being
discussed within 3GPP.
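The subcarrier-splitting idea described above can be sketched with a toy IFFT/FFT chain. The sizes and the QPSK-like constellation below are illustrative choices, not LTE parameters (no cyclic prefix, channel, or coding is modeled):

```python
# Toy OFDM chain: one QAM symbol per subcarrier, combined with an IFFT
# at the transmitter and separated again with an FFT at the receiver.
import numpy as np

rng = np.random.default_rng(0)
n_sub = 64                                          # illustrative size

# Simple QPSK-like alphabet (unit-energy complex symbols).
alphabet = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
symbols = rng.choice(alphabet, n_sub)               # one symbol per subcarrier

time_signal = np.fft.ifft(symbols)                  # one OFDM symbol
recovered = np.fft.fft(time_signal)                 # ideal, noiseless channel

# Subcarrier orthogonality: every symbol comes back with no leakage.
assert np.allclose(recovered, symbols)
```

The assertion holds because the FFT exactly inverts the IFFT; it is the discrete form of the subcarrier orthogonality that the surrounding text appeals to.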
While the term LTE encompasses the evolution of the Universal Mo-
bile Telecommunications System (UMTS) radio access through the Evolved
UTRAN (EUTRAN), it is accompanied by an evolution of the non-radio as-
pects under the term System Architecture Evolution (SAE). Together, LTE and
SAE comprise the Evolved Packet System (EPS). This EPS, in turn, includes
the Evolved Packet Core (EPC) on the core side and Evolved UMTS Terrestrial
Radio Access Network (E-UTRAN) on the access side. The SAE/EPC simpli-
fies the internetworking with the fixed and non-3GPP wireless networks and
brings the fixed Internet-like service access and experience to the mobile user.
The simplified network architecture results in a much reduced call setup time,
down to just a few seconds, while reducing the round-trip times to 10 ms or
even less compared to 40 to 50 ms in HSPA leading to better user experience
in data applications. The new architecture provides seamless Internet Protocol
(IP) connectivity between user equipment (UE) and the Packet Data Networks
(PDN), without any disruption to the end users’ applications during mobility
and reduces the connection states in LTE to only two states from the previous
four in UMTS/HSPA. The all-IP transmission also unifies the voice-oriented
environment of today’s mobile networks with the data-centric service possibili-
ties of the fixed Internet. Voice traffic is also supported mainly as Voice over
IP via VoLTE (as explained in Chapter 12), which enables a better integra-
tion with other services to provide a flexible multimedia means of communica-
tion. However, most early rollouts of LTE support voice calls through circuit-
switched fallback (CSFB), in that the network transfers an LTE mobile to a
legacy 2G/3G cell, so that it can place a voice call through the 2G/3G circuit
switched domain in the traditional way. However, as of May 2017, about 596
LTE networks had been launched, of which 106 were VoLTE-capable, according
to GSA reports.3
LTE provides security in a similar way to its predecessors UMTS and
GSM. Because of the sensitivity of signaling messages exchanged between the
eNodeB and the terminal, or between the MME (a core network element) and
the terminal, all the information is protected against eavesdropping and altera-
tion. The implementation of security architecture of LTE is carried out by two
functions: ciphering of both control plane (RRC) data and user plane data, and
integrity protection, which is used for control plane (RRC) data only. Cipher-
ing is used in order to protect the data streams from being received by a third
party, while integrity protection allows the receiver to detect packet insertion
or replacement. RRC always activates both functions together, either following
connection establishment or as part of the handover to LTE.
The structural-functional simplifications and unifications made possible
with the LTE networks make them practically more suitable to self-optimi-
zation features, and hence they are sometimes classified as self-organizing or self-
optimizing networks (SONs). The SONs help to reduce the operating costs and
automatically tune the network in real time to changing network conditions
such as load and interference. This has been brought about with three groups of
use cases: self-configuration, self-optimization, and self-healing, with
centralized, distributed, and hybrid architectures covered in detail in Chapter 8.
The 3GPP standards for LTE are provided in 3GPP documents with the
numbering format of 36.SSS-xyz (e.g., TS 36.300-800). The TS stands for the
technical specification, 36 is the series number, SSS is the number of the speci-
fication within that series, x is the release number, y is the technical version
number within that release, and the final z is an editorial version number that is
incremented to track nontechnical changes. The standardization work on LTE
has been going on since 2004, building on the GSM/UMTS family of standards
that dates from 1990. Stage 1 specifications define the service from the user’s
point of view and lie exclusively in the 22 series. The Stage 2 of the standard,
the system’s high-level architecture and operation was completed in 2007. The
most useful stage two specifications for LTE are TS 23.401 [1] and TS 36.300
[2], which cover both the air interface and RAN architecture and protocols,
respectively. The stage-3 specifications define all the functional details and were
completed at the end of 2008 and are mainly provided in the 36 series.
The specifications for LTE are now encapsulated in 3GPP Release 8,
which is the earliest set of standards that defines the technical evolution of the
3GPP 3G mobile network systems. Release 7 that includes specifications for

3. https://gsacom.com/paper/volte-network-status/ and https://gsacom.com/paper/lte-global-network-status/.

HSPA+, the missing link between HSPA and LTE, also allows for the introduction
of a simpler, flat, IP-oriented network architecture extending IP transmission
to the radio access network. Further evolution of LTE is specified in
3GPP Release 9 and then into IMT-advanced (LTE-Advanced) as Release 10
with further enhancements in Releases 11 and 12 towards data rates of 1 Gbps.
UE enhancements, such as the new signaling to support advanced interfer-
ence cancellation receivers in the UE, have been a focus for Release 12. Release
12 also introduces the dual connectivity feature that enables aggregation of
component carriers from different enhanced NodeBs (eNodeBs), and the abil-
ity to support carrier aggregation (CA) between frequency division duplexing
(FDD) and TDD component carriers. Other enhancements in Release 12 in-
clude Web Real Time Communication (WebRTC) and other multimedia such
as Enhanced Voice Services (EVS), enhanced multicast and broadcast services
and video with also continued work from Release 11 to Policy and Charging
Control (PCC). The core Rel-12 specifications were frozen in
September 2014, and the ASN.1 was completed in March 2015.
Moving forward to Release 13, 3GPP has provided a new marker for the
LTE brand, namely, LTE-Advanced Pro, which marks the point where the
system has been significantly enhanced with respect to LTE-Advanced. Among
others, this is achieved through the dual connectivity enhancements, carrier ag-
gregation enhancements with the use of up to 32 component carriers, SON for
Advanced Antenna Systems (AAS) and full-dimension MIMO (FD-MIMO).
Another set of features under Rel-13 consideration includes the aggregation of
licensed and unlicensed spectrum. On one side, WiFi is exploited, with
the LTE-WLAN Aggregation (LWA), LTE-WLAN Radio Level Integration
with IPsec Tunnel (LWIP) and RAN-Controlled LTE-WLAN Interworking
(RCLWI). Those three features address the same aspect (i.e., tight
WiFi interworking with the RAN), but require different upgrades to the WiFi net-
work and to the UE. Therefore, all of the options are included in the standard,
so that an operator can select a suitable option with respect to the required
upgrades. On the other end of the unlicensed spectrum usage modes, the re-
sources from licensed and unlicensed spectrum can be aggregated by exploiting
the CA framework under Licensed-Assisted Access (LAA), where a tailored
version of the LTE radio has been designed to support coexistence mechanisms
such as Listen-Before-Talk (LBT). Narrowband Internet-of-Things (NB-IoT) is addressing
the need for low-power machine-type communications (MTC) through the
lean air interface using 180-kHz carriers and network optimizations enabling
the transmission of short packets over NAS. The Device-to-Device (D2D)
and sidelink (SL) design enables direct communication between UEs, within
or out of LTE coverage, to support both public safety
and commercial services in this area. Release 13 was completed in December
2015, with the exception of NB-IoT, which was part of Release 13 but was
finalized in June 2016. Vehicular communication is being addressed within
Release 14, which was completed in June 2017. This builds upon D2D and
SL framework with enhancements targeting Vehicular-to-Vehicular (V2V), Ve-
hicular-to-Network (V2N), Vehicular-to-Infrastructure (V2I), and Vehicular-
to-Pedestrian (V2P) services with both collision avoidance and entertainment
types of services, and is most likely to be deployed at 5.9 GHz.
3GPP has, in parallel with LTE developments, started the study phase of
the 5G systems already in Release 14. The set of documents for the 5G RAN within
3GPP is addressed by a separate set of specifications from LTE: the 38-series
[3]. The 5G normative (standardization) work will be split into (at least) two
releases. Release 15 (5G phase 1), targeting the most urgent use cases, is planned
to be completed in Q2 2018 (stage 3), with ASN.1 completion in September
2018, while the full 5G scope in terms of supporting all the IMT-2020 use cases
should be frozen by end of 2019 (5G phase 2). The 5G phase 1 work has been
further split into non-stand-alone (NSA) 5G mode (with the connection being
anchored in LTE) and stand-alone mode (SA). The NSA stage-3 completion
date has been set to December 2017, and SA-mode finalization targets June
2018 [4]. The IMT-2020 [5] (defined by ITU-R) targets the three key usage
scenarios (also known as vertical services), namely, enhanced Mobile Broad-
band (eMBB), massive Machine Type Communications (mMTC), and Ultra-
Reliable and Low Latency Communications (URLLC). The 5G air interface
should be able to cope with these use cases and their divergent requirements
on either high throughput, or low latency, or massive sporadic transmissions.
On top of this, the next generation system should support the large frequency
range, from several hundred megahertz to up to 100 GHz, taking into consider-
ation the millimeter-wave communication. For this purpose, to capture the dif-
ferences in propagation and use case requirements, the scalable numerology has
been proposed for the New Radio (NR), serving as a new air interface. On the
network side, the concept of network slicing has been introduced, to capture
the differences in the various use cases and to enable optimization of the inde-
pendent logical networks built upon a single infrastructure. Taking the recent
Rel-13 and Rel-14 LTE-Advanced Pro improvements described above, it can be
seen that some of the use cases already envisioned for 5G are addressed by LTE
enhancements. One of the reasons for this is that 3GPP is going to submit the
evolved LTE together with the New Radio (NR) as its 5G proposal to ITU-R.
For those systems to be able to coexist, a tight interworking between them is an
important part of the 5G study, together with a smooth migration path from
the current status towards a fully enabled 5G system.

References
[1] 3GPP TS23.401, v. 8.0.0, “General Packet Radio Service (GPRS) Enhancements for
Evolved Universal Terrestrial Radio Access Network (E-UTRAN) Access,” December,
2007.
[2] 3GPP TS36.300, v. 8.0.0,“Evolved Universal Terrestrial Radio Access (E-UTRA) and
Evolved Universal Terrestrial Radio Access Network (E-UTRAN); Overall Description;
Stage 2,” April, 2007.
[3] 3GPP 38-series: http://www.3gpp.org/ftp/Specs/html-info/38-series.htm.
[4] 3GPP RP-170741, “Way Forward on the Overall 5G-NR eMBB Workplan,” March
2017.
[5] “Recommendation ITU-R M.2083: IMT Vision – Framework and Overall Objectives of
the Future Development of IMT for 2020 and Beyond,” September 2015.
2
The Underlying DFT Concepts and Formulations
The use of DFT for the implementation of OFDM modulation was first pro-
posed in 1971 by Weinstein and Ebert [1]. However, the multicarrier com-
munication concept used in LTE today was invented in the 1960s but did
not become popular, because of the high level of complexity involved in the
generation of the transmitted signal and the processing in the receiver. Some
decades later, rapid advance in integrated circuit design and the advent of digi-
tal signal processing provided a breakthrough for the efficient implementation
of multicarrier systems based on the discrete Fourier transform (DFT) and the
related fast Fourier transform (FFT) realization of DFT [2–4]. Today, multicar-
rier systems based on OFDM are used in digital audio and video broadcasting
(DAB and DVB), the WLAN standards, and in asymmetric digital subscriber
line (ADSL) modems. This chapter provides a comprehensive review of
the DFT and FFT concepts and mechanisms as used in the implementation
of OFDM in LTE and is expected to answer many of the questions in this regard
that are often left unanswered for the more inquisitive reader. We will start
the discussions by first introducing the Fourier transform of discrete time func-
tions known as discrete-time Fourier transform (DTFT).

2.1 Discrete-Time Fourier Transform


The DTFT provides a useful frequency-domain transform $S_T(f)$ of a sampled
continuous-time function s(t). The sampled values of the continuous, finite-
valued function s(t) are represented by the sequence s(nT) for integer values
of n and a sampling time interval T. The DTFT is then defined following the
Fourier transform concept as

$$S_T(f) = T \sum_n s(nT)\, e^{-j2\pi f nT} \qquad (2.1)$$

which is an approximation to the continuous-time Fourier transform:


$$S(f) = \int_{-\infty}^{+\infty} s(t)\, e^{-j2\pi f t}\, dt \qquad (2.2)$$

and can be regarded as the Fourier transform of a continuous function obtained
by using the s[n] sequence to modulate a Dirac comb function [5].
Considering that the term $e^{-j2\pi f nT}$ is just the Fourier transform ($\mathcal{F}$) of the
Dirac delta function δ(t − nT), (2.1) can be rewritten as

$$S_T(f) = T \sum_n s(nT)\,\mathcal{F}[\delta(t-nT)]
= T\,\mathcal{F}\Big[\sum_n s(nT)\,\delta(t-nT)\Big]
= T\,\mathcal{F}\Big[\sum_n s(t)\,\delta(t-nT)\Big]
= \mathcal{F}\Big[s(t)\, T \sum_n \delta(t-nT)\Big] \qquad (2.3)$$

Replacing the periodic function $\sum_n \delta(t-nT)$ by its Fourier series expansion
$(1/T)\sum_k e^{j2\pi kt/T}$, (2.3) takes the form

$$\mathcal{F}\Big[\sum_k s(t)\, e^{j2\pi kt/T}\Big]
= \sum_k \mathcal{F}\big[s(t)\, e^{j2\pi kt/T}\big]
= \sum_k S(f - k/T) \qquad (2.4)$$

The result arrived at in (2.4) is known as the Poisson summation formula


[6], and shows that sampling the continuous function s(t) causes its spectrum
(DTFT) to become periodic. The DTFT is thus seen to comprise the addi-
tion of exact copies of the Fourier transform of the continuous function s(t)
shifted by multiples of 1/T (the sampling frequency). In terms of the frequency,
f (cycles per second), the period is the sample rate, 1/T (fs ). For sufficiently large
fs, as set by the Nyquist sampling theorem, the k = 0 term can be observed in
the region [−ƒs /2, ƒs /2] with no distortion (aliasing) from the other repetitions.

In terms of the normalized frequency, f/fs (cycles per sample), the period is
1. In terms of radians per sample, the period is 2π, which also follows directly
from the periodicity of $e^{-j\omega n}$. That is,

$$e^{-j(\omega + 2\pi k)n} = e^{-j\omega n}$$

where both n and k are arbitrary integers. Therefore,

$$S(\omega) = S(\omega + 2\pi k)$$

Thus, with the normalized frequency in radians,

$$\omega = 2\pi f / f_s = 2\pi f T \qquad (2.5)$$

The DTFT expressed in (2.1) takes the form:

$$S_T(\omega) = T \sum_n s(nT)\, e^{-j\omega n} \qquad (2.6)$$
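The periodicity just derived can be checked numerically by evaluating the DTFT sum of (2.1) directly. The sampled tone, length, and sampling rate below are arbitrary illustrative choices:

```python
# Evaluate the DTFT sum of (2.1) directly and confirm it is periodic in f
# with period fs = 1/T, as the Poisson summation result shows.
import numpy as np

T = 1e-3                                  # sampling interval: fs = 1 kHz
n = np.arange(64)
s = np.cos(2 * np.pi * 100.0 * n * T)     # a 100-Hz tone, sampled

def dtft(f):
    """S_T(f) = T * sum_n s(nT) exp(-j 2 pi f n T), per (2.1)."""
    return T * np.sum(s * np.exp(-2j * np.pi * f * n * T))

fs = 1.0 / T
for f in (0.0, 100.0, 437.5):
    assert np.allclose(dtft(f), dtft(f + fs))   # spectrum repeats every fs
```

The check holds exactly because shifting f by fs multiplies each term by e^(-j2πn) = 1.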

2.2 The Discrete Fourier Transform


For the practical computation of the DTFT, the function s(t) and hence the
sequence s(n) must have a finite length L. When the function s(t) has finite dura-
tion, a discrete subset of the values of its continuous Fourier transform, DTFT
normalized by the sequence duration is sufficient to reconstruct/represent the
function s(t) on its support. In that case, the same discrete samples from the
DTFT can be obtained by treating L as if it is the period of a periodic function
and computing the Fourier series coefficients for the periodic sequence s(n),
which will then be referred to as the DFT of the sequence s(n) with a period of
L samples spaced T seconds apart.
With the sampling of the DTFT as expressed in (2.6) at L equally spaced
frequency points of

$$\omega_k = \frac{2\pi}{L}k, \qquad k = 0, 1, \ldots, L-1$$

we obtain

$$S(k)/LT = S_T(\omega_k)/LT = \frac{1}{L}\sum_{n=0}^{L-1} s(n)\, e^{-j2\pi\frac{k}{L}n}, \qquad k = 0, 1, \ldots, L-1 \qquad (2.7)$$

or, in another perspective, by computing the Fourier series coefficients $S_f(k)$ for
the periodic sequence s(n) with the period LT, we have

$$S_f(k) = \frac{1}{LT}\int_{LT} T \sum_{n=0}^{L-1} s(n)\,\delta(t-nT)\, e^{-j2\pi\frac{k}{LT}t}\, dt
= \frac{T}{LT}\sum_{n=0}^{L-1} s(n) \int_{LT} \delta(t-nT)\, e^{-j2\pi\frac{k}{LT}t}\, dt
= \frac{1}{L}\sum_{n=0}^{L-1} s(n)\, e^{-j2\pi\frac{k}{L}n}, \qquad k = 0, 1, \ldots, L-1 \qquad (2.8)$$

where the $e^{-j2\pi\frac{1}{LT}t}$ term corresponds to the fundamental frequency, as defined by the
sequence period LT, and the terms $e^{-j2\pi\frac{k}{LT}t}$, k = 0, 1, …, are the harmonics of the
fundamental frequency.


The result obtained in (2.8) via Fourier series expansion of the periodic
sequence s(n) is the same result as in (2.7) obtained by sampling the DTFT and
is referred to as the DFT of the periodic sequence s(n). It is easily seen from
the above expression that the DFT has the same period L as the input sequence
where one period of it is computed from one period of the input sequence. This
periodicity may be viewed as the time-domain consequence of approximating
the continuous-domain function, ST(ƒ), with the discrete subset, S(k), k = 0,
1, ..., L − 1.
Note that the DFT may also be viewed as the resulting comparison of
the time sequence s(n) against a set of complex sinusoidal basis functions
$e^{j2\pi kn/L}$ (k = 0, 1, 2, ..., L − 1), which all have an integer number of cycles over
the sequence duration LT and are mutually orthogonal. This comparison is
performed through the correlation defined in (2.7) or (2.8). The resulting L
spectral amplitudes Sf(k) will be sufficient to reconstruct the sequence s(n) over
its period L, as shown below from application of the inverse transform to (2.8),

$$\sum_{k=0}^{L-1} S_f(k)\, e^{j2\pi\frac{k}{L}n}
= \frac{1}{L}\sum_{k=0}^{L-1}\sum_{m=0}^{L-1} s(m)\, e^{-j2\pi\frac{k}{L}m}\, e^{j2\pi\frac{k}{L}n}
= \frac{1}{L}\sum_{m=0}^{L-1} s(m) \sum_{k=0}^{L-1} e^{-j2\pi\frac{k}{L}m}\, e^{j2\pi\frac{k}{L}n} \qquad (2.9a)$$

but the vectors $e^{-j2\pi\frac{k}{L}m}$ form an orthogonal basis over the set of the L-dimensional
complex vectors, in that

$$\sum_{k=0}^{L-1} e^{-j2\pi\frac{k}{L}m} \cdot e^{j2\pi\frac{k}{L}n} = L\,\delta_{mn}$$

where δmn is the Kronecker delta function, and is 1 when m = n, and zero
otherwise.
With this, (2.9a) becomes

$$\sum_{k=0}^{L-1} S_f(k)\, e^{j2\pi\frac{k}{L}n} = \frac{1}{L}\sum_{m=0}^{L-1} s(m)\, L\,\delta_{mn} = s(n), \qquad n = 0, 1, \ldots, L-1 \qquad (2.9b)$$

showing that the sequence s(n) can be fully recovered from its DFT, through the
inverse DFT process. The DFT and the DTFT can be viewed as the results ob-
tained by applying the standard continuous Fourier transform to discrete data.
From that perspective, if the input data are discrete, the transform becomes a
DTFT. If the input data are periodic and continuous, the Fourier transform be-
comes a Fourier series, whereas if the input data are both discrete and periodic,
the Fourier transform becomes a DFT. One may view the DFT as a transform
for Fourier analysis of finite-domain, discrete-time functions.
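The DFT pair (2.8)/(2.9b) can be verified numerically with an explicit transform matrix. The sequence length and test data below are arbitrary choices:

```python
# Explicit DFT matrix following (2.8), and its inverse following (2.9b):
# S_f(k) = (1/L) sum_n s(n) e^{-j 2 pi k n / L}
# s(n)   = sum_k S_f(k) e^{+j 2 pi k n / L}
import numpy as np

L = 16
s = np.random.default_rng(1).standard_normal(L)   # arbitrary real sequence

k = np.arange(L)
n = np.arange(L)
W = np.exp(-2j * np.pi * np.outer(k, n) / L)      # forward kernel (symmetric)

Sf = (W @ s) / L              # forward DFT with the 1/L normalization of (2.8)
s_rec = np.conj(W) @ Sf       # inverse: conj(W)[n, k] = e^{+j 2 pi k n / L}

assert np.allclose(s_rec, s)  # perfect reconstruction over one period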
Note that the frequencies represented for k > L/2 are just the shifted (or,
better, dislocated) version of the negative frequencies in the Fourier map-
ping of the real signal s(n) and are redundant in that they do not present any
new information. This artifact occurred when we defined the frequency index
in the definition of the DFT, in (2.8), from k = 0 to L − 1 for convenience
instead of from something like −L/2 to +L/2 − 1. In fact, a real signal such as
s(n) or s(t) can only be expanded in terms of either sinusoidal or cosinusoidal
harmonics, depending on whether the signal is odd or even with respect to t = 0.
But to represent a sinusoid or cosinusoid in the Fourier series or Fourier
transform expansion, it takes a pair of positive- and negative-frequency terms,
$e^{-j2\pi nk/L}$ and $e^{+j2\pi nk/L}$, to form the function. We can further back up this notion
by considering that the frequency amplitudes S(k) are complex conjugate sym-
metric with respect to the Nyquist frequency calculated for k = L/2 (half the
sampling rate), as shown below:

$$S(L/2 + m) = \sum_n s(n)\, e^{-j2\pi n(L/2+m)/L}
= \sum_n s(n)\, e^{-j\pi n}\, e^{-j2\pi nm/L}
= \sum_n s(n)\cos(\pi n)\, e^{-j2\pi nm/L}$$

Similarly,

$$S(L/2 - m) = \sum_n s(n)\, e^{-j2\pi n(L/2-m)/L}
= \sum_n s(n)\, e^{-j\pi n}\, e^{+j2\pi nm/L}
= \sum_n s(n)\cos(\pi n)\, e^{+j2\pi nm/L}$$

Since the cosine term and the sequence s(n) are both real, we see that

$$S(L/2 + m) = S^{*}(L/2 - m), \qquad \text{for } m = 1, \ldots, L/2 - 1$$

This shows that the DFT coefficients S(k) for k > L/2 are just the values
obtained by replacing the frequency index k with −k in the values calcu-
lated for k < L/2. This also stems from the well-known symmetric nature of
the Fourier transform of a real signal. That is, the spectrum magnitude is sym-
metric with respect to the zero frequency DC line, which is represented by the
mid-sample corresponding to k = L/2 in the conventional frequency indexing
of the DFT. Thus, the highest frequency extracted by the DFT on a sampled
version of a continuous time signal s(t), corresponds to k = L/2 which translates
to a frequency of ω = π rad/sample, or 1/2T Hz, using the definition in (2.5)
for normalized frequency. This is just half the sampling rate fs/2, as expected by
the Nyquist sampling rate theory. The DFT coefficients corresponding to the
frequency index k beyond the Nyquist rate L/2 simply calculate the spectral
amplitudes for an aliased version of these higher frequencies into the actual
negative frequency half (i.e., the image frequencies) of the spectrum.
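This conjugate symmetry is easy to verify numerically for any real sequence (here using a standard FFT routine; the sequence itself is arbitrary):

```python
# Verify S(L/2 + m) = conj(S(L/2 - m)) for a real sequence s(n).
import numpy as np

L = 32
s = np.random.default_rng(2).standard_normal(L)   # arbitrary real signal
S = np.fft.fft(s)

for m in range(1, L // 2):
    assert np.isclose(S[L // 2 + m], np.conj(S[L // 2 - m]))
```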
In passing, we will further note that the above conclusions would not
change when applied to complex signals in engineering if they are viewed as
simply representing real data in two orthogonal independent dimensions. For
instance, the QAM complex modulation symbols consist of real data sequences
that will be carried on the in-phase and the quadrature components of a carrier
signal (the cosine and sine). Hence, each component represents real data with
DFT and FFT of the same exact properties.

2.3 Zero-Padding for Efficient FFT Implementation


Zero padding helps to facilitate the efficient computation of the DFT/
IDFT in the modulation and demodulation of OFDM systems via software-
based processor architectures. It allows one to take advantage of the FFT algo-
rithms [3] by bringing the sequence lengths to values that can be represented
as powers of 2 (in the form of 2^M, such as 128, 256, 512). The FFT reduces
the DFT computation for a sequence of size N from O(N^2) to O(N log N).
The FFT algorithms for real-valued input also compute only the first N/2 + 1
samples, up to the Nyquist frequency, when performed on a sequence of length N.
This is because, as was discussed in detail in the previous section, the DFT
outputs beyond the Nyquist frequency (corresponding to the frequency index N/2)
are redundant and just reflect the values for the negative half of the frequency
spectrum. Thus, an N-point DFT translates into an (N/2 + 1)-point FFT output
array. Likewise, an M-point FFT output of this kind basically corresponds to a
DFT computation performed on a sequence of length 2M − 2 samples.
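This redundancy is exactly what real-input FFT routines exploit; for example, NumPy's rfft returns only the N/2 + 1 nonredundant bins (the test sequence below is arbitrary):

```python
# For a real length-N sequence, only the first N/2 + 1 DFT bins are
# nonredundant; NumPy's rfft/irfft pair works with exactly those bins.
import numpy as np

N = 128
s = np.random.default_rng(3).standard_normal(N)

S_half = np.fft.rfft(s)                  # bins k = 0 .. N/2
assert len(S_half) == N // 2 + 1

# The original sequence is fully recoverable from the half spectrum:
assert np.allclose(np.fft.irfft(S_half, N), s)
```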
In the zero padding process, if the sequence length cannot be zero-padded
and extended to an optimal length expressed as a power of 2, zero padding
may still be done to bring up the DFT size (number of subcarriers allocated)
to a product of relatively small prime numbers for efficient implementation
via the FFT. That is, efficient FFT processing can still be performed via relatively
low-complexity non-radix-2 processing, for sizes of the form 2^n × 3^m, where n and
m are integers. For instance, the 3GPP specifications have assigned the FFT size
of 1,536 (= 3 × 2^9) for the bandwidth allocation case of 15 MHz (equivalent to
900 subcarriers).
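A quick check of the factorization cited above (the `factorize` helper is illustrative, not from any specification):

```python
# Check that the 15-MHz FFT size factors into small primes (so a
# mixed-radix FFT applies), and that it is not a pure power of 2.
def factorize(n):
    """Return the prime factorization of n as a dict {prime: exponent}."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

assert factorize(1536) == {2: 9, 3: 1}   # 1,536 = 2^9 * 3
```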
Zero padding can generally be performed both in the frequency domain
(in the inverse DFT process) and in the time domain. Zero padding in the
frequency domain results in a time-domain interpolation of the sequence s(n),
providing for a smoother (filtered) waveform for transmission, for instance. In
the time domain, zero padding will result in a frequency interpolation by reduc-
ing the FFT bin width and providing finer FFT resolution. For example, zero
padding in the time domain is frequently used in audio analysis for picking up
spectral peaks in the waveform.
When performed in the frequency domain, the zero padding must take
place exactly prior to half the sampling rate fs/2, which is in the middle of the
original sequence S(k). Note that because of the way that the index numbering
in S(k) is mapped to the frequency-domain axis, the fs/2 sample is the highest
positive frequency in the spectrum, as mentioned previously. Thus the zeros
should be inserted after the first L/2 spectral samples, where L is the length of
S(k), in order to maintain spectral symmetry. More discussions on this with
example can be found in [7].
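The symmetric insertion described above can be sketched as follows (the signal and lengths are arbitrary choices; the N/L scale factor compensates for the changed transform length):

```python
# Frequency-domain zero padding: insert zeros at the middle of S(k), i.e.,
# after the first L/2 bins, so the interpolated time signal remains real.
import numpy as np

L, N = 16, 64                          # original and zero-padded lengths
n = np.arange(L)
s = np.cos(2 * np.pi * 3 * n / L)      # real test signal, 3 cycles per frame

S = np.fft.fft(s)
S_padded = np.concatenate([S[: L // 2], np.zeros(N - L), S[L // 2 :]])

# Inverse transform yields an interpolated (denser) version of s(n);
# N/L rescales for the longer transform length.
s_interp = np.fft.ifft(S_padded) * (N / L)

assert np.allclose(s_interp.imag, 0, atol=1e-9)   # spectral symmetry kept
assert np.allclose(s_interp[:: N // L].real, s)   # original samples preserved
```

Inserting the zeros anywhere other than around fs/2 would break the conjugate symmetry and produce a complex time-domain result.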

2.4 Frequency Resolutions and Impact of Zero Padding


In the expression derived for the DFT in (2.8), and its inverse in (2.9b), we can
see that the spectral lines represented are spaced by a normalized frequency of

$$\omega_s = 2\pi\frac{1}{L}$$

which, using the definition in (2.5), translates to a frequency spacing of $\frac{1}{LT}$ Hz.
This shows that the frequency resolution of DFT is inversely proportional to
the period LT of the sampled waveform. However, the sequence period LT can
be made larger by choosing a period N that is larger than the actual
nonzero portion of s(n) and setting the rest to zero over the period. That means
defining a new sequence by zero padding the old sequence s(n) beyond its L non-
zero samples to bring its period to NT; that is,

	s(zp)(n) = s(n),  for n = 0, 1, …, L − 1
	         = 0,    for n = L, L + 1, …, N − 1

The DFT of the zero-padded sequence s(zp)(n), following the expression
in (2.8), is obtained from

	S(zp)(k) = (1/N) Σ_{n=0}^{N−1} s(zp)(n) e^(−j2πkn/N),  k = 0, 1, 2, …, N − 1	(2.10)

with N much larger than L. This results in a smoother spectral profile by filling
in for samples between the estimated frequencies through an interpolated DFT.
Similarly, the inverse DFT (IDFT) is given [in analogy with (2.9b)] by

	s(zp)(n) = (1/N) Σ_{k=0}^{N−1} S(zp)(k) e^(j2πkn/N),  n = 0, 1, 2, …, N − 1	(2.11)

It is important to note that zero padding does not add any new information
to the data and so cannot increase the frequency resolution. It only increases
the FFT resolution (i.e., reduces the FFT binwidth) and provides a denser
sampling, or interpolation, of the estimated frequency spectrum. However, the
FFT resolution or binwidth has nothing to do with the frequency resolution in
the sense of either resolving closely spaced frequencies or covering the full
frequency span of the continuous waveform s(t) that has been represented by its sampled
The Underlying DFT Concepts and Formulations 17

version s(n). Appending zeros does not change the input sampling rate and
hence does not affect the frequency span of the FFT output. It is the length
of the sampled signal s(n) that determines how closely spaced frequencies within
the original continuous waveform can be differentiated, that resolution being
the inverse of the observation time of the signal, 1/LT. This is simply because
it is the nonzero portion of the padded sequence that determines how closely
spaced sinusoidal harmonics can be differentiated over the duration provided
[in the Fourier series expansion of the sequence through the correlation process
as given in (2.10)].
Zero padding changes the intersample spacing in the FFT output and
results in a denser sampling of the frequency spectrum that would result from
a DFT on the nonzero-padded data sequence. The zero padding would not
help to distinguish closely spaced spectral peaks when the original input signal
lacks sufficient frequency-domain resolution. In other words, the zero padding
results in a sequence with a larger period and hence a smaller fundamental
frequency whose harmonics are estimated in the Fourier series expansion of
the periodic sequence. The zero padding results in a frequency interpolation
of the spectrum conveyed by the nonzero-padded sequence and provides finer
FFT binwidth and improved amplitude estimation. It does not increase the fre-
quency resolution in the sense of either introducing new spectral information,
or allowing to resolve closely spaced frequencies contained within the underly-
ing continuous waveform s(t). It is possible to have very fine FFT resolution, yet
not be able to resolve two coarsely separated frequencies contained within the
original continuous waveform.
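The distinction can be demonstrated numerically. The sketch below (illustrative only; the tone frequencies and record lengths are arbitrary choices) uses two complex-exponential tones spaced 0.6 Hz apart. A 32-sample record has a resolution 1/(LT) of 2 Hz, and no amount of zero padding separates the tones: the densely sampled spectrum actually peaks between them. A 256-sample record (resolution 0.25 Hz) shows a clear dip between two distinct peaks.

```python
import cmath

fs = 64.0             # sampling rate (Hz)
f1, f2 = 10.0, 10.6   # two tones spaced well below the 2-Hz resolution
                      # of a 32-sample (0.5-s) record

def record(n_samples):
    # Complex-exponential two-tone signal observed for n_samples / fs seconds.
    return [cmath.exp(2j * cmath.pi * f1 * n / fs) +
            cmath.exp(2j * cmath.pi * f2 * n / fs) for n in range(n_samples)]

def spectrum_mag(x, f):
    # Magnitude of the record's spectrum at an arbitrary frequency f.
    # A zero-padded DFT merely samples this same function on a denser grid,
    # so it stands in for an "infinitely zero-padded" FFT of the record.
    return abs(sum(x[n] * cmath.exp(-2j * cmath.pi * f * n / fs)
                   for n in range(len(x))))

mid = (f1 + f2) / 2
short, longer = record(32), record(256)
# 0.5-s record: the spectrum peaks *between* the tones -- zero padding
# cannot separate them, since the underlying spectrum has only one lobe.
merged = spectrum_mag(short, mid) > spectrum_mag(short, f1)
# 4-s record: a clear dip now separates two distinct peaks.
resolved = spectrum_mag(longer, mid) < spectrum_mag(longer, f1)
```

Only the longer observation, not a denser bin grid, creates the dip between the two peaks.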
The frequency resolution can be increased only by sampling the data
more finely (sampling at or above the Nyquist rate, i.e., at least twice the
highest frequency, is required to cover the full frequency spectrum of the
continuous waveform s(t)) or by taking more data points (i.e., observing the
signal over a longer time) so as to differentiate between closely spaced
frequencies with distinct peaks. Generally, the original data sequence s(n)
should provide enough samples to yield a frequency resolution that is smaller
than the minimum spacing between the frequencies of interest.
The FFT resolution should at least match the frequency resolution (binwidth)
provided by the nonpadded data. Zero padding also shifts the frequency bin
locations relative to those of the DFT computed on the non-zero-padded data
sequence. This shift can cause problems if it moves the estimated samples away
from a known frequency of interest. In order to provide a bin at a frequency
of interest, the FFT length should be set to cover an integer number
of cycles at that frequency. Otherwise, the amplitude of the desired frequency
may be estimated through further interpolation between the two adjacent bins
covering the frequency.
However, as these discussions relate to the background needed for
LTE, the frequencies of interest within the modulated
waveforms are the known subcarriers spaced at 15 kHz. In the case of 5G, the
baseline subcarrier spacing of 15 kHz has been scaled by an integer number to
support a maximum of 480 kHz (see Chapter 14). The spacing of the spectral
samples in the FFT from zero padding in the time domain is given by f s /N,
where fs is the sampling frequency (1/T  ) and N is the number of FFT points
(the length of the new zero padded sequence). With this spacing, slot n of the
FFT output array represents the frequency n × (f s /N). Ideally, one may choose
values for fs and N such that the subcarriers contained in the modulated wave-
form will end up at the computed bin locations in the FFT for precise ampli-
tude estimation (demodulation of the subcarriers in LTE, for example). This
happens if you apply an FFT to a record length that covers an integer num-
ber of cycles at the frequency of interest. However, the reduced FFT binwidth
(higher density bins) resulting from zero padding the time sequence makes it
more likely that a contained subcarrier frequency is more closely represented by
one of the bins within the FFT, hence providing for a more accurate amplitude
estimation through, for instance, a simple parabolic interpolation between the
two adjacent bins containing the subcarrier frequency. For instance, the 3GPP
assignment of an FFT size of 1,536 for the bandwidth of 15 MHz results in a
binwidth of 15/1,536 = 0.0098 MHz, or 9.8 kHz, which is considerably smaller
than the LTE subcarrier spacing of 15 kHz. If an FFT size of 1,024 were
assigned instead, it would result in a binwidth of 15/1,024 = 0.0146 MHz, or
14.6 kHz, which does not provide enough margin for accurate amplitude
estimation of the 15-kHz-spaced subcarriers (through, for instance,
interpolation of the closely spaced adjacent bins) carrying the transmitted
information.
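The binwidth arithmetic above is easy to reproduce (the 1,024-point case is the text's hypothetical comparison, not a 3GPP assignment):

```python
# FFT binwidth computed as in the text: channel bandwidth / FFT size.
def binwidth_khz(bandwidth_mhz, fft_size):
    return bandwidth_mhz * 1e3 / fft_size

bw_3gpp = binwidth_khz(15, 1536)   # 3GPP FFT size for 15 MHz -> ~9.8 kHz
bw_alt = binwidth_khz(15, 1024)    # hypothetical alternative -> ~14.6 kHz
subcarrier_spacing = 15.0          # LTE subcarrier spacing in kHz
# Only the 1,536-point FFT leaves a comfortable margin below 15 kHz.
```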

References
[1] Weinstein, S. B., and P. M. Ebert, “Data Transmission by Frequency-Division Multiplex-
ing Using the Discrete Fourier Transform,” IEEE Trans. on Commun. Technol., Vol. 19,
No. 5, October 1971, pp. 628–634.
[2] Bracewell, R., The Fourier Transform and Its Applications, 2nd ed., New York: McGraw-
Hill, 1986.
[3] Oppenheim, A. V., and R. W. Schafer, Discrete-Time Signal Processing, 2nd ed., Upper
Saddle River, NJ: Prentice Hall, 1999.
[4] Brigham, E. O., The Fast Fourier Transform and Its Applications, Englewood Cliffs, N.J.:
Prentice Hall, 1988.
[5] Arfken, G. B., and H. J. Weber, Mathematical Methods for Physicists, 5th ed., Boston, MA:
Academic Press, 2000.
[6] Córdoba, A., “La formule sommatoire de Poisson,” C.R. Acad. Sci. Paris, Series I, Vol. 306,
1988, pp. 373–376.
[7] http://www.dspguru.com/dsp/howtos/how-to-interpolate-in-time-domain-by-zero-padding-in-frequency-domain.
3
Air Interface Architecture and Operation
LTE is built on an all new air interface based on orthogonal frequency divi-
sion multiplexing (OFDM) technology. The air interface, which is termed Enhanced
UTRA (E-UTRA), uses the orthogonal frequency division multiple access
(OFDMA) scheme on the downlink and a slight variation of it, referred to as
single-carrier frequency division multiple access (SC-FDMA), on the uplink.
SC-FDMA helps to lower the peak-to-average power ratio (PAPR) in the
transmitter, which is critical for prolonging handset battery life.
The LTE physical layer specifications [1, 2] allow for channel bandwidth
allocations of 1.4, 3, 5, 10, 15, and 20 MHz and support both the frequency
division duplex (FDD) and time division duplex (TDD) modes for transmission
on paired and unpaired spectrum, respectively. An advantage of TDD is that
the allocated system bandwidth can be dynamically apportioned between the
uplink and downlink to meet the load conditions on each link. However, the
focus of this chapter will be on the FDD mode, which offers the advantages of
self-interference protection and user-to-user interference mitigation through
duplex frequency spacing and the duplex gap, and which avoids the time
synchronization complexities associated with the TDD mode. Furthermore, the
E-band duplex pair of 71–76 and 81–86 GHz is being deployed as FDD for
terrestrial point-to-point links and for high-throughput satellites (HTS) in low
Earth orbit, which in itself is a good reason to favor FDD.

3.1 Spectrum Allocation


The standards have identified 19 bands for FDD operation, ranging from fre-
quencies of approximately 700 MHz to frequencies in the range of 2.7 GHz.
In addition, there are Band 31 at 450 MHz and the new U.S. bands at 600
MHz and aggregated bands at 2 GHz (Bands 70 and 71) defined for LTE.
There are eight bands identified for the TDD operation ranging from approxi-
mately 1,900 MHz to 2.6 GHz. Considerable scope has been left in the stan-
dards to add more frequency bands as global requirements evolve. The two biggest
U.S. mobile operators, Verizon and AT&T Mobility, have already started to
deploy LTE in the new U.S. Digital Dividend band at 700 MHz. The 700-
MHz band is considered very valuable spectrum due to its better propagation
characteristics, which will allow extensive coverage with a much smaller number
of sites. The 2.50–2.69-GHz band (the IMT-2000 extension band) is expected
to be the largest new spectrum resource for mobile broadband services, and this
frequency band has been harmonized by the European Conference of Postal
and Telecommunications Administrations (CEPT). It is large enough to allow
multiple operators to deploy technologies utilizing wide channels, such as the
2 × 20-MHz channels preferred for LTE. The regulator in Sweden, the Swed-
ish Post and Telecom Agency, was the first to adopt the 2.6-GHz frequency
band. Sweden was the first country to auction the 2.6-GHz band in accor-
dance with the CEPT band plan. The FDD is expected to allocate a globally
harmonized frequency band of 2 × 70 MHz, from 2.50 GHz to 2.57 GHz paired
with 2.62 GHz to 2.69 GHz. The allocation should facilitate economies of scale
for operators and secure the availability of standardized terminals and allow
roaming between different countries. A globally harmonized spectrum is key in
deploying affordable mass-market mobile services as using the same frequency
as in other countries would lower LTE handset and infrastructure costs. Nev-
ertheless, spectrum auctions in 700 MHz and 2.5–2.6-GHz bands will have a
direct influence on which bands LTE will be deployed in different countries.
The 2,600-MHz band will be important for very high-capacity environments,
such as for femto, pico, and micro cells, but probably not optimal for macro
cells, while the spectrum now used by analog television, 600–800 MHz, carries
over longer distances and can serve customers in rural areas. Furthermore,
there have also been other spectrum identifications at WRC-07 (450–470 MHz,
2,300–2,400 MHz, 698–862 MHz, and 3,400–3,600 MHz) to help fulfill the
projected need for future bandwidth as well as to facilitate global roaming.
In Europe, operators have been looking at LTE deployments in the equiv-
alent European Digital Dividend band (790–862 MHz) and a refarming of
the existing 900-MHz spectrum. There have also been plans to use the current
GSM1800 band with refarming for LTE1800 services since there is generally
plenty of radio spectrum at the 1,800-MHz band, and a rollout at 1,800 MHz
will be less costly than at the 2,600-MHz band.
The OFDMA channels are allocated within an operator’s licensed spec-
trum allocation. The center frequency is identified by an EARFCN (E-UTRA
Absolute Radio Frequency Channel Number). The precise location of the
EARFCN is an operator decision, but it must be placed on a 100-kHz raster
and the transmission bandwidth must not exceed the operator’s licensed spec-
trum. Separate EARFCNs are required to describe an uplink and a downlink
frequency pair in an FDD channel.

3.2 OFDM Multiuser Access Mechanism


The multiuser access mechanism used in LTE is based on the OFDM concept,
which converts the frequency selective allocated frequency band into multiple
flat fading subcarriers. The allocated frequency band for the network is split
into many adjacent narrowband subcarriers, each carrying a portion of the sig-
nal from the time domain in the case of OFDMA on DL and from the fre-
quency domain in the case of SC-FDMA on UL. The basic subcarrier spacing
is 15 kHz, with a reduced subcarrier spacing of 7.5 kHz specified for MB-SFN
scenarios (as detailed in a later section). The 15 kHz provides adequate subcar-
rier spacing to avoid degradation from phase noise and Doppler (250 km/hr at
2.6 GHz) with a maximum of 64 QAM modulation. The number of available
subcarriers depends on the transmission bandwidth allocated for the network.
However, LTE comprises a maximum of 1,200 subcarriers on the DL (and the
same on the UL in FDD), with a spacing of 15 kHz. Although it is mandatory
for mobiles to be capable of receiving all 1,200 subcarriers on the downlink
(and uplink), not all need to be transmitted by the base station. The exact
number depends on the amount of spectrum allocated to the operator (1.4, 3, 5,
10, 15, or 20 MHz). In this way, all mobiles are able to talk to any base station.
The symbol duration on each subcarrier is chosen as the inverse of the
subcarrier spacing (i.e., 66.7 µs), which results in mutual orthogonality between
any two subcarriers, as long as the subcarriers are placed within the allocated
band such that an integral number of cycles is contained within the symbol
period for each subcarrier. That is, ∫ sin(2πnt/T)sin(2πmt/T)dt = 0 over the
symbol period T, where n and m are distinct integers.
In this way, the spectral peak of each subcarrier falls on the zero crossings
of all the other subcarriers, and hence intercarrier interference is eliminated, as
illustrated in Figure 3.1.
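This orthogonality condition is easy to verify numerically; the sketch below approximates the integral with a midpoint Riemann sum (the subcarrier indices 3 and 4 are arbitrary illustrative choices):

```python
import math

T = 66.7e-6   # OFDM symbol period (s), the inverse of the 15-kHz spacing

def inner_product(n, m, steps=8192):
    # Midpoint-rule approximation of the integral of
    # sin(2*pi*n*t/T) * sin(2*pi*m*t/T) over one symbol period T.
    dt = T / steps
    return sum(math.sin(2 * math.pi * n * (i + 0.5) / steps) *
               math.sin(2 * math.pi * m * (i + 0.5) / steps)
               for i in range(steps)) * dt

cross = inner_product(3, 4)   # distinct integer indices: integral ~ 0
self_ = inner_product(3, 3)   # a subcarrier against itself: integral ~ T/2
```

The cross term vanishes (to within rounding error) while the self term equals T/2, which is the energy normalization the orthogonality relation implies.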
The data stream is Turbo encoded [3] through a tailed, 8-state, rate-1/3
Turbo encoder and, after puncturing or repetition for further

Figure 3.1 OFDM waveforms (assumes 1-bit symbols, as each waveform carries one symbol).

code rate adjustment (to channel conditions) and interleaving, is divided into
a number of lower-rate parallel bit streams, which are used to modulate each
subcarrier assigned for the connection using the spectrally efficient modulation
schemes of QPSK, 16 QAM, or 64 QAM. The modulation order and the final
coding rate from the process will depend on the indicated channel conditions
(i.e., the signal-to-noise ratio) at the time on the subcarrier. Turbo codes were
adopted rapidly in UMTS after their publication in 1993, with the benefits of
their near-Shannon-limit performance outweighing the associated costs of
memory and processing requirements.
Since each subcarrier of 15 kHz is narrow enough, the channel response
is made flat (nonfrequency selective) over the subcarrier modulation frequency
range. In other words, the symbol time of 66.7 µs on each subcarrier (i.e., the
OFDM symbol time) is made much longer than the typical channel dispersion
time (delay spread) of 0.5 to 15 µs, as observed in cellular communication,
making the channel resistant to multipath delays. By changing the frequency-
selective wideband channel to a flat fading condition on each subcarrier, chan-
nel estimation and compensation are done easily in the frequency domain in
the receiver. The subcarriers are allocated to a connection in units of 12 adja-
cent subcarriers (180 kHz), called physical resource blocks (PRBs), which last
for a minimum time of 1 ms, where PRB is the smallest element of resource
allocation assigned by the base station scheduler.
The number of PRBs and the number of subcarriers that LTE has pro-
vided for various channel bandwidths are given in Table 3.1. Note that the
downlink has an unused central subcarrier.
When some data are lost occasionally due to channel error conditions
on some of the subcarriers, they can be recovered through the error correcting
properties of the convolutional-Turbo codes used. Since each subcarrier is able
to carry data at a maximum symbol rate of 15 ksps (kilosymbols per second),
this results in a raw symbol rate of 18 Msps over the maximum 20-MHz
channel band that can be allocated to the system. This translates into 108
Mbps of raw data when the modulation order of 64 QAM (6 bits per symbol)
is used. The actual
Table 3.1
Number of Resource Blocks with Channel Bandwidth

Available channel bandwidth (MHz)   1.4   3     5     10    15    20
Number of occupied subcarriers      72    180   300   600   900   1,200
Number of PRBs                      6     15    25    50    75    100

peak data rates are obtained by subtracting from the theoretical peak rates the
error coding and protocol overheads and adding the gains arising from any
spatial multiplexing, such as that achieved via MIMO.
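The raw-rate arithmetic above can be written out directly:

```python
# Raw-rate arithmetic for the maximum 20-MHz allocation (Table 3.1).
subcarriers = 1200        # occupied subcarriers at 20 MHz
symbol_rate = 15_000      # symbols per second per subcarrier (1 / 66.7 us)
bits_per_symbol = 6       # 64 QAM carries 6 bits per symbol

raw_symbol_rate = subcarriers * symbol_rate       # 18 Msps
raw_bit_rate = raw_symbol_rate * bits_per_symbol  # 108 Mbps, before coding
                                                  # and protocol overheads
```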
The perfect mutual orthogonality between user channels (the subcarriers)
is preserved all the way through the detection process with the OFDM
technology, making radio planning more flexible with LTE. The coverage
shrinkage with cell load observed in CDMA-based systems (cell breathing),
which results from intracell mutual interference between users, is no longer the
case. Nevertheless, intercell interference, particularly at cell edges, will exist
since LTE is also based on frequency reuse. Specifically, the intercell interfer-
ence will arise in OFDM systems when the same physical resources (PRBs) are
used simultaneously in neighboring cells. To deal with this problem, 3GPP [4]
has been investigating a number of alternative interference mitigation mecha-
nisms based on intercell resource usage coordination, intercell interference av-
eraging and intercell interference cancellation, which are discussed further in
Chapter 7. However, with the absence of intracell interference, LTE can deliver
optimum performance in cells of up to 5 km in radius, while still being capable
of delivering effective performance in cell sizes of up to a 30-km radius. Both the OFDMA
used on downlink and the SC-FDMA scheme used on the uplink are imple-
mented through discrete Fourier transform (DFT) mechanisms as discussed in
more detail in later sections.
LTE extends the packet-mode IP transport for all services to the radio
access and into the handset. Voice that is transmitted in the circuit mode in
UMTS will be transmitted over IP in LTE. Thus, no channels are dedicated to
any one user at the transport level, as all user information will be transported
over shared channels. For each transmission time interval of 1 ms, which de-
fines the size of a transport block in LTE, a new scheduling decision is taken
regarding which users are assigned to a frequency resource block (RB) for the
time interval. This results in more resource sharing over the air and hence a
more efficient use of the scarce radio resources. The short transmission interval
used also results in quicker response to changing channel conditions and more
real-time, multiuser resource sharing over the radio link. For the same user, the
frequency sets (RBs) can be changed on a per 0.5-ms basis, which is the slot
duration defined in LTE.
3.3 Framing and Physical Synchronization Signals


As in UMTS, framing is used to maintain synchronization and to manage
the different types of information, such as user traffic and reference
synchronization signals, between the user equipment (UE) and the base station,
the evolved node B (eNodeB). The LTE specifications [5] have defined different frame struc-
tures for the TDD and the FDD modes as there are different requirements on
segregating the transmitted data. The FDD frame structure is referred to as type
1, and the type 2 frame structure has been defined for the TDD mode. Both
have a duration of 10 ms. The LTE frame and subframe structures are linked
to the system timings and can be derived as multiples of the clock frequencies
within the system.

3.3.1 Type 1 FDD Frame Structure


The FDD frame consists of 20 slots of 0.5 ms each. Two adjacent slots form a
subframe of 1-ms duration. Each 0.5-ms slot within the subframe contains 7
OFDM symbols (with the normal CP), and each OFDM symbol location on a
subcarrier within the slot is referred to as a resource element (RE). The 1-ms
subframe on each subcarrier defines the transmission time interval (TTI),
which is the user resource scheduling interval. In the frequency domain, the
smallest number of subcarriers that may be allocated to a UE is 12. The 12
subcarriers over a subframe are referred to as the PRB, which is the smallest
unit of time-frequency resource that can be allocated at a time to a UE for
transmission. The PRB is thus a time-frequency matrix that consists of 12 × 14
REs, as shown in Figure 3.2. Each of the two 0.5-ms slots in a subframe on 12
subcarriers is referred to as the RB size.
The 1-ms subframe on a subcarrier carries one transport block. A transport
block consists of the medium access control (MAC) header, followed by the
radio link control (RLC) header, then potentially a number of PDCP PDUs
(each consisting of a header and the SDU), and finally any padding to fill the
remaining space. Each slot consists of seven, six, or three OFDM symbols,
depending on whether a short (normal) cyclic prefix (CP), a long CP, or an
extended CP is used in the slot at the time, respectively. The cyclic prefix
provides resilience to multipath channel dispersion for frequency-domain
equalization in the receiver, as discussed in detail in Section 3.3. The cyclic
prefix is formed by copying a small portion from the end of each symbol and
attaching it to the beginning of the symbol.
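The copy-and-prepend operation can be sketched as follows (the 2,048-sample symbol and the 144-sample prefix correspond to the 20-MHz FFT size and the ~4.7-µs normal CP at a 30.72-MHz sample rate; the numbers here are illustrative):

```python
def add_cyclic_prefix(symbol, cp_len):
    # Copy the last cp_len samples of the symbol and attach them to its front.
    return symbol[-cp_len:] + symbol

ofdm_symbol = list(range(2048))                  # stand-in for one IFFT output
tx_symbol = add_cyclic_prefix(ofdm_symbol, 144)  # normal CP (symbols 1-6)
# The receiver can discard the first 144 samples and still recover the
# original symbol, provided the multipath delay spread is shorter than
# the prefix duration.
```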
The normal cyclic prefix has a length of 5.2 µs for the first symbol and a
length of 4.7 µs for the remaining symbols. The long cyclic prefix has a length
of 16.7 µs and is defined to be used for large cell scenarios with large multipath
delay spreads. The extended cyclic prefix with a length of 33.3 µs is used when
the reduced subcarrier spacing of 7.5 kHz is used (to provide a much longer

Figure 3.2 Frequency-time grid for one FDD-mode subframe; the REs are numbered from 0
to 6 in each of the two time slots within the 1-ms subframe (with the downlink single-antenna
reference symbol structure on antenna port 0).

symbol time) in the case of Multimedia Broadcast Multicast Services (MBMS)


when provided in the multicell mode.
Each slot can be dynamically allocated an RB consisting of 12 adjacent
subcarriers of 15 kHz each (or 7.5 kHz in the case of the MB-SFN multicell
mode). Each of these subcarriers can then carry 7, 6, or 3 OFDM symbols
over the slot, depending on whether a short, long, or extended cyclic prefix is
used over the slot at the time. The 12 adjacent subcarriers assigned for one slot
duration define a PRB, which is the smallest unit of resource allocation that
can be made by the base station scheduler. PRBs thus have both a time and a
frequency dimension and can be represented by a resource grid of 12 times the
number of OFDM symbols. The allocation of PRBs is handled by a scheduling
function at the base station (eNodeB). Depending on the required data rate,
each application (user) can be assigned one or more PRBs in each transmission
time interval of 1 ms. Thus, users are scheduled every TTI of 1 ms and are
allocated a minimum of two consecutive resource blocks in time at every
scheduling instance.
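The resource-grid arithmetic for the normal-CP case can be summarized as:

```python
# Resource-grid bookkeeping for the normal (short) cyclic prefix.
subcarriers_per_rb = 12
symbols_per_slot = 7      # normal CP
slots_per_subframe = 2

re_per_slot_rb = subcarriers_per_rb * symbols_per_slot     # 84 REs per slot
re_per_subframe_prb = re_per_slot_rb * slots_per_subframe  # the 12 x 14 grid
                                                           # of Figure 3.2
```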

3.3.2 Type 2 TDD Frame Structure


The type 2 frame structure used on LTE TDD is somewhat different. The 10
ms frame is made up of two half-frames, each 5 ms long. The LTE half-frames
are further split into five subframes, each being 1 ms long. The subframes may
consist of three fields:

1. DwPTS: Downlink Pilot Time Slot;


2. GP: Guard Period;
3. UpPTS: Uplink Pilot Time Slot.

These fields are individually configurable in length, but the three together
must sum to 1 ms. The subframes can be dynamically configured as special
subframes or switched to uplink or downlink within a frame, so as to allocate
the channel for UL or DL transmission depending on the load on each link. In
TDD, the allocated spectrum, say, 20 MHz, is unpaired, in that it is one piece
of 20 MHz shared between the DL and the UL.

3.3.2.1 Uplink-Downlink Frame Timing


The transmission of the uplink radio frame number i from the UE starts (NTA
+ NTA offset) × Ts seconds before the start of the corresponding downlink radio
frame at the UE, where 0 ≤ NTA ≤ 20,512, and NTA offset = 0 for frame structure
type 1 and NTA offset = 624 for frame structure type 2. This timing relationship
is illustrated in Figure 3.3. The variable NTA is the timing adjustment value sent
to UE by the eNodeB and is discussed in Section 3.4.
Note that not all slots in a radio frame may be transmitted. An example
of such a case is in the TDD mode, where only a subset of the slots in a radio
frame is transmitted [5].
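The timing relation above can be evaluated directly; this sketch assumes only the quantities given in the text (Ts is defined in Section 3.3.4 as 1/(2,048 × 15,000) seconds):

```python
TS = 1.0 / (2048 * 15_000)   # basic LTE time unit (s), i.e., 1/30.72 MHz

def uplink_advance_us(n_ta, frame_type):
    # Advance of uplink frame i relative to the downlink frame at the UE:
    # (N_TA + N_TA_offset) * Ts, expressed here in microseconds.
    assert 0 <= n_ta <= 20_512
    n_ta_offset = 0 if frame_type == 1 else 624   # FDD vs. TDD frame structure
    return (n_ta + n_ta_offset) * TS * 1e6

max_advance = uplink_advance_us(20_512, 1)   # ~667.7 us at the maximum N_TA
```

The maximum advance of roughly 667.7 µs of round-trip delay corresponds to on the order of 100 km of one-way propagation distance at the speed of light.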

3.3.3 Physical Reference Signals


LTE does not provide a fixed preamble for timing synchronization and chan-
nel estimation. Instead, it places special reference symbols in fixed recurring

Figure 3.3 The uplink-downlink frame timing relation.


places within the PRBs. There are different types of reference symbols that are
used in the downlink and uplink frames for timing synchronization with the
network and coherent demodulation and are each positioned differently within
the PRBs.
On the downlink, there are two types of physical signals that are transmit-
ted. There are the cell-specific reference signals or symbols and the synchroniza-
tion signals or symbols. The reference symbols are a set of known symbols that
are transmitted during the first and fifth symbol location (resource element)
within each 0.5-ms slot when the short CP is used and during the first and
fourth OFDM symbol (resource element) when the long CP is used. The refer-
ence symbols are transmitted on every sixth subcarrier and are staggered in time
as illustrated in Figure 3.2 for the case of a single Tx antenna (on antenna port
0). LTE supports a number of transmission modes based on multiple Tx and
Rx antenna configurations. Up to 4 antennas can be configured in the eNodeB,
each designated with port numbers 0 to 3. In the downlink, multiple antenna
transmissions are organized using antenna ports, each of which has its own copy
of the resource grid that we introduced earlier. Ports 0 to 3 are used for single
antenna transmission, transmit diversity, and spatial multiplexing, while port
5 is reserved for beamforming. There are reference symbols also provided for
antenna ports 1, 2, and 3 as will be discussed in Section 3.11. Antenna ports
0 and 1 use eight reference symbols per PRB, while antenna ports 2 and 3 use
only four. This is because a cell is likely to use four antenna ports when it is
dominated by slowly moving mobiles, for which the amplitude and phase of the
received signal will only vary slowly with time.
The reference signals have two functions: (1) to provide an amplitude and
phase reference in support of channel estimation and demodulation, and (2) to
provide a power reference in support of channel quality measurements and
frequency-dependent scheduling. The cell-specific reference signals support
both of these functions. The channel response on subcarriers that contain the
reference symbols is estimated directly; interpolation is then used to estimate
the channel response on the remaining subcarriers.
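A one-dimensional sketch of this estimate-then-interpolate step is shown below. It is illustrative only: a real receiver interpolates complex channel gains in both time and frequency, whereas here the channel is a real-valued, slowly varying gain and pilots sit on every sixth subcarrier as described above.

```python
def interpolate_channel(pilot_idx, pilot_est, n_subcarriers):
    # Linear interpolation between adjacent pilot estimates; the estimate at
    # the last pilot is held for any trailing subcarriers.
    h = [0.0] * n_subcarriers
    for a, b in zip(pilot_idx, pilot_idx[1:]):
        for k in range(a, b + 1):
            t = (k - a) / (b - a)
            h[k] = (1 - t) * pilot_est[a] + t * pilot_est[b]
    for k in range(pilot_idx[-1], n_subcarriers):
        h[k] = pilot_est[pilot_idx[-1]]
    return h

n = 24                                       # two RBs' worth of subcarriers
pilots = list(range(0, n, 6))                # reference symbols on every 6th one
true_h = [1.0 + 0.02 * k for k in range(n)]  # slowly varying channel gain
estimate = interpolate_channel(pilots, {k: true_h[k] for k in pilots}, n)
# Between pilots this linear channel is recovered exactly; past the last
# pilot (k > 18) the estimate is simply held constant.
```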
Besides the reference symbols used in channel estimation for the coherent
demodulation of the downlink channels, there are the synchronization signals
or symbols that are used for deriving the network timing information in the
UE as well as cell search and neighbor cell monitoring as they carry the cell
identity. E-UTRA uses a hierarchical cell search scheme similar to WCDMA.
This means that the synchronization acquisition (frequency and time synchro-
nization) and the cell group identifier are obtained from a combination of a
primary synchronization signal (P-SCH) and a secondary synchronization sig-
nal (S-SCH) defined with a predefined structure. They are transmitted on the
72 center subcarriers (around the DC subcarrier) in predefined slots twice per
frame, in the first and sixth subframes, as shown in Figure 3.4. It is to be noted
that accurate clock references and clock distribution both become more
critical as throughput rates increase. The link
http://www.rttonline.com/tt/TT2016_004.pdf provides a summary of
information from Chronos on this topic and the relevant ITU timing standards
documents.
The primary synchronization signal (PSS) is used to discover the symbol
timing and to obtain partial information about the physical cell identity. The
secondary synchronization signal (SSS) is then used to obtain the frame timing,
the physical cell identity, the transmission mode (FDD or TDD), and the cyclic
prefix length. The mobile can then receive the cell-specific reference signals.
These provide an amplitude and phase reference for the channel estimation
process, so they are essential for everything that follows. The mobile then
receives the physical broadcast channel and reads the
master information block. With this, it finds the number of transmit antennas
at the base station, the downlink bandwidth, the system frame number and a
quantity called the PHICH configuration that describes the physical hybrid
ARQ indicator channel. The mobile can then start reception of the physical
control format indicator channel (PCFICH) so as to read the control format
indicators, which indicate how many symbols are reserved at the start of each
downlink subframe for the physical control channels.
Each synchronization signal sequence is generated as a symbol-by-symbol
product of an orthogonal sequence (3 of which exist) and a pseudorandom
sequence (168 of which exist). Each cell is identified by a unique combination
of one orthogonal sequence and one pseudorandom sequence, allowing 504
different cell identities.
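In the released 3GPP specifications (TS 36.211) the combination works out as follows: the PSS carries one of 3 identities (N2) and the SSS one of 168 group identities (N1), combined as PCI = 3·N1 + N2:

```python
def pci(n1, n2):
    # Physical cell identity from the SSS group index n1 and PSS index n2.
    assert 0 <= n1 < 168 and 0 <= n2 < 3
    return 3 * n1 + n2

all_ids = {pci(n1, n2) for n1 in range(168) for n2 in range(3)}
# Every (n1, n2) pair maps to a distinct identity in the range 0..503.
```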
On the uplink there are two types of reference symbols. These are:

1. Demodulation reference symbols (DRS): These provide the eNodeB with


known symbols for channel estimation in the coherent demodula-
tion of user and control data. The DRS symbols are thus multiplexed

Figure 3.4 PSS and SSS frame and slot structure in the time-domain FDD case.
with the PUSCH and PUCCH. On the PUSCH, one DRS symbol
is transmitted per slot in the fourth symbol position (symbol number
3). On the PUCCH, from 2 to 3 DRS symbols per slot may be config-
ured. The DRS symbols occupy the same allocated uplink bandwidth
as for the user data. Therefore, the length of the reference symbol
sequence will be the same as the number of allocated subcarriers in
the uplink transmission bandwidth and hence a multiple of 12. Mul-
tiple symbol sequences have been designed to accommodate different
bandwidth allocations. There are 30 base sequences for bandwidth
allocations from 1 to 3 resource blocks, whereas more than 30 base
sequences have been defined for bandwidth allocations of more than
three resource blocks. These symbol sequences have been organized
into 30 sequence groups. Each sequence group contains one base DRS
sequence of a length up to that suitable for bandwidth allocations up
to five resource blocks, and two base DRS sequences for bandwidth
allocations above five resource blocks. Each cell is allocated one se-
quence group. The DRS sequences are based on the Constant Am-
plitude Zero Auto-Correlation (CAZAC) prime-length Zadoff-Chu
sequences, which are cyclically extended to the desired length and are
discussed in [5]. Multiple orthogonal DRS sequences are created from
a single base sequence using cyclic shifts resulting in 12 orthogonal
sequences for each base sequence. These orthogonal sequences are as-
signed to different UEs in the same cell and are carried in the PRBs
allocated for both the uplink traffic and the uplink control channels.
2. Channel sounding reference symbols: When there is no uplink trans-
mission taking place, the eNodeB cannot take measurements on the
channel to perform, for example, frequency-selective scheduling. In
these circumstances, UE may be instructed to perform uplink sound-
ing. This will involve the UE transmitting a sounding reference signal
(SRS) within an uplink resource allocation specifically set aside for the
purpose. The eNodeB then performs channel estimation on the re-
ceived SRS signals to choose the resource blocks that contain the best
performing set of subcarriers for a UE. This is similar to the downlink
scheduling in which the UE reported CQI is used for the purpose.
The SRS symbols are allocated over multiples of four resource blocks
and always transmitted in the last symbol of a subframe. The SRS
transmissions can be set as periodic, with variable bandwidth, the con-
figuration of which is set using higher-layer signaling.
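The cyclic-shift orthogonality that underlies the DRS multiplexing in item 1 above can be illustrated with a toy Zadoff-Chu sequence. This is a sketch of the CAZAC property only, not the exact 3GPP base sequences, whose generation is more involved:

```python
import numpy as np

N = 13   # a prime length; 3GPP cyclically extends such sequences to a multiple of 12
q = 1    # root index with gcd(q, N) = 1
n = np.arange(N)
zc = np.exp(-1j * np.pi * q * n * (n + 1) / N)   # Zadoff-Chu sequence (odd N)

# CAZAC property: constant amplitude and zero cyclic autocorrelation,
# so every nonzero cyclic shift is orthogonal to the base sequence.
assert np.allclose(np.abs(zc), 1.0)
for shift in range(1, N):
    assert abs(np.vdot(zc, np.roll(zc, shift))) < 1e-9
print("all", N - 1, "nonzero cyclic shifts are orthogonal")
```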
32 From LTE to LTE-Advanced Pro and 5G

3.3.4 Basic Unit of Time in LTE


The basic time unit in LTE is denoted by the parameter Ts; the LTE frame
structure and all timing quantities related to it are defined as multiples of this
unit. The time unit is defined by the relation

Ts = 1/(2,048 × 15,000) seconds (3.1)

in which 15,000 Hz is the subcarrier spacing, so that 1/15,000 second is the
useful OFDM symbol duration, equal to 2,048 × Ts. The value 2,048, as will be
seen in later sections, is the FFT size for the system bandwidth of 20 MHz
in the OFDM transceiver implementation. That is, Ts can also be considered
the sampling time for the OFDM signal in LTE where an FFT size of 2,048 is
used in the modulation/demodulation process. However, with the definition in
(3.1), the LTE frame of 10 ms will equal to exactly 307,200 × Ts. A time slot, of
which 20 forms a frame, will be exactly 15,360 × Ts. The short and long cyclic
prefixes can also be shown to be multiples of this basic time unit, that is, 144 ×
Ts, and 512 × Ts, respectively, except for the first symbol of each slot, which has
a slightly longer cyclic prefix equal to 160 × Ts.
One other significance of this basic time unit, Ts, is the fact that it is also
an exact multiple of the UMTS and 1xEV-DO chip rates. The chip rate in
UMTS is 3.84 Mcps, and for 1x-based technologies it is 1.2288 Mcps. These,
if expressed as the chip periods, become exactly as 8 × Ts for UMTS and 25 ×
Ts for 1xEV-DO. Such integer relationships play an important role in reducing
the chipset complexity when the same chipset has to support both UMTS and
1xEV-DO technologies simultaneously in the handset. That is, the basic time
unit Ts in LTE will allow the same clock source to be used for all.
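These integer relationships are easy to verify with exact arithmetic; a short Python check of (3.1) and the multiples quoted above:

```python
from fractions import Fraction

Ts = Fraction(1, 2048 * 15000)            # basic time unit of (3.1), seconds

assert Fraction(10, 1000) / Ts == 307200  # 10-ms frame = 307,200 * Ts
assert Fraction(1, 2000) / Ts == 15360    # 0.5-ms slot =  15,360 * Ts

# integer relation to the legacy chip periods
assert Fraction(1, 3840000) / Ts == 8     # UMTS chip (3.84 Mcps) = 8 * Ts
assert Fraction(1, 1228800) / Ts == 25    # 1xEV-DO chip (1.2288 Mcps) = 25 * Ts
print("Ts =", float(Ts), "s")             # about 32.55 ns
```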

3.4 Timing Advance Function


The OFDMA subcarrier orthogonality within the eNodeB receiver requires
that the uplink transmissions from multiple UEs arrive time-aligned at the
eNodeB. To ensure this, the timing adjustment (TA) function is used to adjust
the transmission instants from UEs to compensate for variations in propagation
delays. The TA is referenced relative to the downlink frame timing. The
eNodeB calculates and transmits to the UE the initial timing adjustment that is
based on the arrival time of the UE’s random access preamble on the PRACH.
Subsequent timing adjustments are commanded to the UE as multiples of 16
Ts (about 0.52 µs) where Ts is the LTE basic standard sampling time discussed
in the previous section. Thus, a single timing adjustment step of 0.52 µs
corresponds to a one-way propagation distance of 156m (i.e., c × 0.52 µs, where


c is the speed of signal propagation in the air). The TA can be used to either
advance or delay the uplink transmission timing. The timing update commands
are transmitted to UEs as MAC control elements carried on the DL-SCH. The
initial timing advance value is indicated via an 11-bit field with the range limited
to 0 to 1,282, while subsequent update commands are 6-bit values giving a range from
0 to 63. The values of less than 31 are used to reduce the timing advance and
the values greater than 31 are used to indicate an increase in the timing advance.
Thus, since the TA compensates the round-trip delay, a single timing step
corresponds to a 78-m change in the UE-eNodeB distance. This means that a UE
moving at the maximum speed of 500 km/h relative to the eNodeB allowed in
LTE (equivalent to 139 m/s) will experience slightly less than two timing
advance changes every second.
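The arithmetic behind these figures can be checked directly (a sketch; c is approximated as 3 × 10^8 m/s):

```python
Ts = 1 / (2048 * 15000)      # LTE basic time unit, seconds
c = 3.0e8                    # approximate propagation speed, m/s

step = 16 * Ts               # TA granularity, about 0.52 us
one_way = c * step / 2       # TA spans the round trip, so one step is ~78 m of range
v = 500 / 3.6                # 500 km/h in m/s (about 139 m/s)

print(round(step * 1e6, 2), "us per step,", round(one_way), "m,",
      round(v / one_way, 2), "steps/s")
```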

3.5 MBMS Transmission


MBMS stands for Multimedia Broadcast Multicast Service, defined in 3GPP
TS 36.201, and can be performed in either a single-cell or a multicell mode. In
single-cell transmissions, MBMS traffic is mapped to the regular shared traffic
channels. In the multicell mode, transmissions from cells are carefully synchro-
nized to form a Multicast/Broadcast–Single Frequency Network (MB-SFN).
MB-SFN is an elegant application of OFDM for cellular broadcast and the
single-frequency term signifies a network plan with a frequency reuse of 1 (the
allocated frequency band is reused in every cell in the network). In this mode,
identical transmissions are broadcast from closely coordinated cells simultane-
ously on a common frequency band to mitigate intercell interference. Signals
from adjacent cells arrive at the receiver and are dealt with in the same manner
as multipath delayed signals. In this manner, the UE can easily combine the
energy from multiple transmitters. If the UE is at a cell boundary, the relative
delay between the signals from the two cells is quite small. However, if the UE
is close to one base station and relatively distant from a second base station,
the delay difference between the two signals can be quite large. For this reason,
MB-SFN transmissions are implemented using a 7.5-kHz subcarrier spacing
which results in doubling of the symbol duration, and with the extended CP
used, this helps to absorb the large delay differences in signals from multiple
nodes. With this MB-SFN transmission scheme, the mobile can move between
cells with no handoff procedure required. Signals received from various cells can
vary in strength and in relative delay, but the received aggregated signal is dealt
with in the same manner as a conventional single-channel OFDM transmis-
sion. MB-SFN networks also use a common reference signal from all transmit-
ters within the network to facilitate channel estimation.

3.6 DL OFDMA and Implementation


As discussed in earlier sections, transmission bandwidth is allocated to UE on
the air interface in units of PRBs. Each PRB consists of 12 subcarriers in the
frequency domain where each subcarrier will carry consecutive symbols whose
modulation can be QPSK, 16 QAM, or 64 QAM from the user data in the
time domain. A basic advantage of OFDMA in addition to what was discussed
in earlier chapters is that the modulation and demodulation processes can be
implemented efficiently using the techniques of digital signal processing based
on discrete Fourier transform (DFT) and its FFT implementation as discussed
in the following two sections.

3.6.1 Modulation within eNodeB


The transmitted signal on the DL is the time superposition of the 12 nar-
rowband modulated subcarriers within each allocated PRB, and is generated
through the IDFT (inverse discrete Fourier transforms) techniques of digital
signal processing [6]. The serial stream of modulation data symbols (QPSK,
16 QAM, or 64 QAM) is collected into the allocated consecutive blocks of
12-symbol sets for the connection, and used as the complex amplitudes of the
15-kHz-spaced discrete subcarriers (frequency tones) over one data symbol
duration, which is 1/15 kHz = 66.7 µs, as required by the orthogonality discussed in Section
3.1. Since no pulse shaping is needed on the modulation symbols due to the
mutual subcarriers’ orthogonality over a symbol duration, each subcarrier will
have the shape of a Sinc function in the frequency domain. Thus, the resulting
symbol weighted tones represent a 2L-point DFT with 15-kHz spaced frequen-
cy tones where L denotes the contained number of subcarriers. The terms for L
+ 1, …, 2L − 1 are just the shifted negative frequencies in the DFT definition
whose complex amplitudes are the complex conjugates of the amplitudes (the
QAM symbols) for the L subcarriers (the concept is well explained in Chapter
2). Then to generate the time-domain waveform for transmission, an inverse
DFT needs to be performed on the sequence that is created through an L-point
inverse fast Fourier transform to increase the efficiency and speed of the com-
putation. Note that a 2L-point inverse DFT requires only an L-point inverse
FFT implementation. Now the more efficient FFT and IFFT algorithms re-
quire the sequence size to be a power of 2 (radix-2 processing) or, with low-
complexity nonradix-2 processing, of the form 2^n × 3^m, where n
and m are integers. Therefore, zero-padding is done to bring the sequence to
the nearest optimal size represented by power of 2 or multiples of powers of 2
and 3. The number of zeros added thus depends on the number of subcarriers
contained within the allocated PRBs for the connection in accordance with the
3GPP specifications [5], as given in Table 3.2. The zeros are added right after
the modulation-weighted subcarriers, which in the DFT definition would be in
the middle of the sequence, that is, after the Nyquist frequency term represented
by the frequency index (subcarrier index) k = L.

Table 3.2
Relationship of FFT Size and Sampling Frequency Used with the Transmission Bandwidth in
LTE OFDM Modulation and Demodulation Processes

Transmission BW (MHz)               1.4    3     5      10     15     20
No. of Resource Blocks (PRBs)       6      15    25     50     75     100
No. of Subcarriers                  72     180   300    600    900    1200
FFT Size                            128    256   512    1024   1536   2048
Effective Sampling Frequency (MHz)  3.84   7.68  15.36  30.72  46.08  61.44
Note that in some related literature the sampling frequencies reported
are half of what is given in Table 3.2. This has caused some readers to question
whether the Nyquist sampling rate is violated.
That is misleading because an N-point inverse FFT processing is the realization
of a 2N point inverse DFT. It is the number of samples in the equivalent DFT
that defines the effective sampling rate, as given by the number of DFT points
divided by the symbol time of 1/15 kHz or 66.67 µs. For the 15-MHz trans-
mission bandwidth, although an FFT size of 1,024 would also bring the 900
subcarriers to a power of 2, 3GPP has assigned an FFT size of 1,536, which is
handled by mixed radix-2 and radix-3 processing (1,536 = 3 × 2^9). This is in order to in-
crease the DFT bin resolution in the demodulation process when interpolating
to separate the subcarriers. In fact, with this the resolution obtained will be 15
MHz/1,536 = 9.8 kHz, which is about the same as for the other five transmis-
sion bandwidths in the table.
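The relationships claimed in Table 3.2 can be verified programmatically. A sketch (the "effective" sampling rate here follows the book's 2N-point DFT convention, i.e., 2N × 15 kHz):

```python
table = {  # transmission BW (MHz): (PRBs, FFT size N)
    1.4: (6, 128), 3: (15, 256), 5: (25, 512),
    10: (50, 1024), 15: (75, 1536), 20: (100, 2048),
}

def only_2_3(n):             # True if n = 2**a * 3**b (FFT-friendly size)
    for p in (2, 3):
        while n % p == 0:
            n //= p
    return n == 1

for bw, (prbs, N) in table.items():
    subc = 12 * prbs                      # 12 subcarriers per PRB
    assert N >= subc and only_2_3(N)      # room for zero-padding, radix-2/3 size
    fs_eff = 2 * N * 15e3 / 1e6           # effective sampling rate, MHz
    res = bw * 1e6 / N / 1e3              # DFT bin resolution, kHz
    print(f"{bw} MHz: {subc} subcarriers, N = {N}, "
          f"fs = {fs_eff:.2f} MHz, resolution = {res:.1f} kHz")
```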
We will refer to the resulting time-domain baseband waveform from the
IFFT process as the OFDM symbol. In the transmission process, the time-
dispersive radio channel will perform a linear convolution on the modulation
signal samples obtained in the above. Ideally, it is desired to have the channel
impact to look like a circular convolution of its impulse response with the signal
to be transmitted. That would make the received signal in the frequency do-
main equal to the product of the channel frequency response and the transmit
signal frequency transform assuming quasi static channel condition over the
modulation symbol. The transmitted symbol can be recovered at the receiver
by calculating and dividing the FFT of the received signal by the FFT of the
channel impulse response. To make this possible, we must make the OFDM
symbol waveform prepared at the transmitting side to look periodic (cyclic)
over at least the channel impulse length or equivalently the channel dispersion
time before it is transmitted. By making the signal at the transmitter look cy-
clic, it will effectively change the linear convolution with the channel impulse
response to a circular (cyclic) convolution. The cyclic repetition of the signal

over a time duration equal to the channel impulse response (maximum delay
spread) is called adding a cyclic prefix to each OFDM symbol waveform at the
transmitter. The basic idea is to replicate part of the OFDM symbol waveform
from the back to the front and make the OFDM symbol look cyclic, that is,
periodic over the observation time of the channel’s worse-case delay spread in
the intended environment. Thus, by repeating the signal cyclically, we make the
linear convolution that actually takes place look circular. Since the addition of
cyclic prefix consumes bandwidth (i.e., reduces the data rate), its length should
be minimized and set to no more than the expected worst-case delay spread of
the multipath channel in the operating environment. The 3GPP has defined
three different size categories for the cyclic prefix and they are referred to as the
normal, long, and the extended cyclic prefixes whose lengths and applications
were discussed in Section 3.2. It is important to note that the function of the
cyclic prefix as explained above is wider than the guard band used in traditional
TDMA or CDMA systems, which simply compensates for intersymbol and
intercarrier interference. In traditional systems, a guard band or alternatively
special pulse shape filtering may be used to prevent adjacent symbol overlap-
ping at either symbol or chip rate sampling points and hence prevent signal-
to-noise degradation caused by intersymbol interference (ISI). The cyclic prefix
in OFDM systems completely removes any ISI while it also turns the linear
convolution that takes place between the channel impulse response and the
modulation symbol blocks into a circular convolution.
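This mechanism is easy to demonstrate numerically: with a cyclic prefix at least as long as the channel, the samples that survive CP removal equal the circular convolution of the OFDM symbol with the channel, so in the frequency domain Y[k] = H[k]X[k] exactly. A toy NumPy sketch (arbitrary sizes, not LTE parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 64, 8                            # toy FFT size and channel length

# one OFDM symbol: QPSK on all N subcarriers, IFFT to the time domain
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(X)
tx = np.concatenate([x[-L:], x])        # prepend cyclic prefix (last L samples)

h = rng.normal(size=L) + 1j * rng.normal(size=L)   # multipath channel taps
rx = np.convolve(tx, h)                 # the channel applies a LINEAR convolution

y = rx[L:L + N]                         # remove the CP (and the tail)
Y = np.fft.fft(y)
H = np.fft.fft(h, N)
assert np.allclose(Y, H * X)            # it now looks CIRCULAR: Y[k] = H[k]X[k]
assert np.allclose(Y / H, X)            # one-tap frequency-domain equalization
```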
With the cyclic prefix added, the symbols are converted into a continuous
signal using two digital-to-analog converters (DAC) modules for the real and
imaginary parts of the signal. The continuous signal is then upconverted to the
allocated carrier frequency by a local oscillator for transmission over the air. The
conceptual block diagram for the transmission process is shown in Figure 3.5.

3.6.2 Demodulation within UE


The receiver is basically the reverse process of the transmitter except for the
channel equalization module, which is based on frequency-domain equalization
(FDE). The FDE operates on the notion that there is a correspondence between
the circular convolution of two sequences in the time domain and the point-
wise product of their DFT (via FFT processing) outputs in the frequency do-
main [7]. The FDE is performed by calculating the DFT (via FFT processing)
of the received frequency downconverted signal (i.e., the baseband signal) and
dividing the value at each frequency bin (subcarrier) by the channel frequency
response at the corresponding subcarrier to compensate for channel distortion
effects. The channel frequency response for each modulated tone (subcarrier) is
simply a complex scaling represented by an amplitude and a phase shift,
considering that the channel is time-invariant over the small symbol duration
of 66.7 µs and is flat over the narrow modulated subcarrier width of 15 kHz.
With this consideration, the channel frequency response for the subcarriers and
symbol positions containing the reference symbols, which are transmitted on
every sixth subcarrier twice per transmission slot (of 0.5 ms) within each PRB,
is obtained directly. Then interpolation is used to estimate the channel frequency
response on the remaining subcarriers and symbol positions in the PRB. The
conceptual block diagram for the OFDM receiver within the UE is shown in
Figure 3.6.

Figure 3.5 The conceptual block diagram for the DL OFDM transmitter (in eNodeB).
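The interpolation step can be sketched as follows. This is a simplified model (noiseless pilots on every sixth subcarrier plus the band edge, a single OFDM symbol, linear interpolation); practical receivers typically filter over both frequency and time:

```python
import numpy as np

n_sc = 72                                   # e.g., six PRBs of 12 subcarriers
pilots = np.append(np.arange(0, n_sc, 6), n_sc - 1)   # every 6th tone + edge
k = np.arange(n_sc)

H_true = np.exp(-1j * 2 * np.pi * k / n_sc)  # toy channel: single delayed path

H_p = H_true[pilots]                         # noiseless pilot estimates
H_est = (np.interp(k, pilots, H_p.real)
         + 1j * np.interp(k, pilots, H_p.imag))

err = np.max(np.abs(H_est - H_true))
print(f"max interpolation error across {n_sc} subcarriers: {err:.3f}")
```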

3.6.3 Susceptibility to Frequency Offsets


Subcarrier mutual orthogonality in OFDM systems and hence zero intercarrier
interference can be maintained if the signal samples in the receiver are taken
precisely at the modulated subcarriers center frequencies. A frequency offset
caused by a difference between the transmitter and the receiver will cause en-
ergy from one subcarrier’s symbol to spill over and interfere with the next. Simi-
larly, a phase noise caused by the random fluctuation of the receiver and trans-
mitter oscillators will cause similar ISI in the adjacent subcarriers on both sides.
In order to minimize the bandwidth loss caused by the use of cyclic prefix, it is
desirable to have long symbols, which resulted then in close subcarrier spacing
of 15 kHz. This, aside from increasing the required processing, results in easier
loss of subcarrier mutual orthogonality due to frequency errors. Such frequency
errors can result from Doppler effects as well as frequency offsets and phase
noise in the local oscillators within the receiver system. The Doppler effect re-
fers to a shift in the frequency spectrum of a signal within the receiver from the
frequency of the transmitted signal when the receiver is moving relative to the
transmitter. This frequency shift is given by fd = v/λ Hz, where v is the
relative velocity of the transmitter and receiver in m/s (in the direction of the
arriving signal), and λ is the wavelength of the transmitted wave in meters when
the signal is a single sine wave. For example, the Doppler frequency of a 2-GHz
carrier wave at a cellular phone in a vehicle moving at 30 km/hr, 350 km/hr,
and 500 km/hr is fd = 0.06, 0.64, and 0.92 kHz, respectively. The Doppler shift
at 30 km/hr where LTE is expected to still provide optimum performance is
therefore only 0.06 kHz, which is a small fraction of the subcarrier bandwidth
and quite negligible. The shift at the maximum LTE specified vehicular speed
limit with the 2-GHz carrier is 0.92 kHz, which is about 6% of the subcarrier
bandwidth.
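The quoted Doppler values follow directly from fd = v/λ; a quick check for the 2-GHz carrier:

```python
c = 3.0e8                     # speed of light, m/s
lam = c / 2.0e9               # wavelength at 2 GHz: 0.15 m

for v_kmh in (30, 350, 500):
    f_d = (v_kmh / 3.6) / lam          # Doppler shift, Hz
    frac = f_d / 15e3                  # fraction of the 15-kHz subcarrier spacing
    print(f"{v_kmh} km/h: fd = {f_d:.0f} Hz ({frac:.1%} of subcarrier spacing)")
```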
The receiver frequency errors can also result from frequency offsets and
phase noise within the receiver local oscillators. To keep the performance on
track, any frequency errors in the receiving side must be estimated and compen-
sated for before the signal sampling for the computation of the DFT; otherwise,
the orthogonality between subcarriers is lost. For these reasons, the stability
and performance of the local oscillators with respect to phase noise and the
receiver frequency tracking loops are critical factors to the proper operation of
the OFDM system. In fact, 3GPP specifications [8] have set a frequency stability
requirement of 0.1 ppm error for the UE modulation frequency observed over
a period of one subframe (1 ms). Furthermore, to prevent the transmitter and
receiver local oscillators from drifting apart over time, the base station periodi-
cally sends synchronization signals (discussed in previous sections), which are
used by the UE to track the transmitter frequency.

Figure 3.6 Conceptual block diagram of the OFDM receiver within the UE.

3.7 Uplink SC-FDMA and Implementation


In the OFDM scheme used on DL, the parallel transmission of several subcar-
riers where each is modulated by one of the data symbols creates an undesirable
high peak-to-average power ratio (PAPR) in the transmitter power amplifier.
This is concluded easily by considering that mathematically the addition of
many subcarrier tones, each modulated by random data symbols, results
in a waveform whose amplitude statistics approach those of Gaussian
noise. High PAPR reduces the power amplifier efficiency at the
transmitter as it would not allow one to operate the power amplifier close to
its saturation point where high power efficiency is achieved. With high PAPR
signals, the power amplifier operating point must be backed off to lower the
signal distortion, thereby lowering its efficiency. While this may not be
as much of a concern in the node B, it will become a critical issue within the
mobile station since the RF power amplifier is the largest single factor affecting
the battery life. To prevent this situation, 3GPP has specified a somewhat dif-
ferent modulation scheme for the uplink which is referred to as the SC-FDMA.
SC-FDMA is a hybrid modulation scheme that cleverly combines the low
PAPR of single-carrier systems with the multipath resistance and the flexible
subcarrier frequency allocation offered by OFDM.
In SC-FDMA, the set of complex quadrature symbols assigned to subcar-
riers within PRBs (12 from each) are treated as a set of discrete time-domain
samples and converted to a set of discrete frequency-domain samples using the
DFT transform. The DFT samples are then used to modulate the subcarri-
ers assigned to the PRBs. Thus, unlike in the OFDM scheme, each subcar-
rier carries information from all the symbols to be transmitted, since the input
data stream is spread by the DFT transform over the available subcarriers. In
contrast to this, each subcarrier of an OFDMA signal only carries information
related to specific modulation symbols. There is always a one-to-one correspondence
between the number of data symbols to be transmitted during one SC-FDMA
symbol period and the number of DFT bins spaced 15 kHz apart.
The spectrally shifted frequency-domain representation of the user sym-
bol sets thus obtained are then mapped to the discrete time domain represen-
tation using an inverse FFT (after zero padding the sequence to a power of
2 representation) and the cyclic prefix inserted to obtain the waveforms for

transmission over the air. The DFT processing is the fundamental difference
between SC-FDMA and OFDMA signal generation, as seen by comparing
the block diagrams in Figures 3.5 and 3.7. The signal thus formed
for transmission will occupy the same bandwidth as the DL OFDMA but re-
sults in a single-carrier-like signal rather than the composite multicarrier signal of
the DL OFDMA, hence the name SC-FDMA. As a result, the uplink signal
will have a much lower amplitude variations and result in a lower PAPR. The
spectral representation of the user symbol sequences will have a much lower
variation than the random symbol sequence itself used to modulate the subcar-
riers as in the DL OFDM. Therefore, the SC-FDMA time waveform obtained
in the inverse DFT will also experience a much smaller variation and hence
result in lower PAPR. The analysis provided in [9] has shown that the LTE UE
power amplifier can be operated about 2 dB closer to the 1-dB compression
point than would be possible if OFDMA were used on the uplink.
Since the subcarriers are basically used here to spread (or shift) the fre-
quency-domain representation of the time-domain symbols, the name discrete
Fourier transform spread OFDM (DFT-SOFDM) is also used for SC-FDMA.
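The PAPR difference can be observed in a small simulation: the same QPSK symbols are either mapped directly to subcarriers (OFDMA) or DFT-precoded first (SC-FDMA), and the peak-to-average power of the resulting time waveforms is compared. A sketch with arbitrary toy sizes, not the LTE parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

N_fft, L, n_sym = 512, 120, 200      # toy IFFT size, allocated tones, symbols
ofdm, scfdma = [], []
for _ in range(n_sym):
    d = (rng.choice([-1, 1], L) + 1j * rng.choice([-1, 1], L)) / np.sqrt(2)

    Xo = np.zeros(N_fft, complex)
    Xo[:L] = d                       # OFDMA: symbols modulate subcarriers directly
    ofdm.append(papr_db(np.fft.ifft(Xo)))

    Xs = np.zeros(N_fft, complex)
    Xs[:L] = np.fft.fft(d)           # SC-FDMA: DFT-precode, then map
    scfdma.append(papr_db(np.fft.ifft(Xs)))

print(f"mean PAPR - OFDMA: {np.mean(ofdm):.1f} dB, SC-FDMA: {np.mean(scfdma):.1f} dB")
assert np.mean(scfdma) < np.mean(ofdm)   # DFT precoding lowers the PAPR
```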

3.7.1 Implementation within the UE and eNodeB


The uplink uses the same generic frame structure as the downlink. It also uses
the same subcarrier spacing of 15 kHz and PRB width of 12 subcarriers. The
data is mapped onto a signal constellation that can be QPSK, 16 QAM, or 64
QAM depending on channel quality. However, rather than using the QPSK/
QAM symbols to directly modulate subcarriers as in OFDM, uplink symbols
are sequentially fed into a serial/parallel converter and then into a DFT block
as shown in Figure 3.7. This DFT processing may also be performed through
an FFT algorithm, but if the radix-2 algorithms are used for better efficiency,
the sequence has to be padded with enough zeros to bring its size to the nearest
power of 2 representation. However, that would change the number of bins in
the spectral output and shift the computed bins with respect to the subcarri-
ers. Then further interpolation and processing will be required to estimate the
complex amplitudes of the spectral lines corresponding to the assigned subcar-
riers for the connection as discussed in Chapter 2. Therefore, whether the DFT
block is implemented through a radix-2 FFT algorithm or not will depend on
the trade-offs between the additional processing and the gain obtained with the
FFT approach and may vary from vendor to vendor.
The result at the output of the DFT block is a discrete frequency-domain
representation of the QPSK/QAM symbol set. In this way, there is no lon-
ger a direct relationship between the amplitude and phase of the individual
DFT bins and the original QPSK data symbols. This is quite different from
the OFDMA example in which data symbols directly modulate the subcarriers.

Figure 3.7 Conceptual block diagram for the UL SC-FDMA transmitter (in UE).

The L discrete Fourier terms at the output of the DFT block are then used as
the modulators of the assigned L subcarriers before being converted back into
the time domain using the IFFT process. An N-point IFFT, where N > L (L be-
ing the number of symbols or subcarriers), is performed as in OFDM, followed
by addition of the cyclic prefix. By choosing N larger than the maximum num-
ber of occupied subcarriers, we obtain efficient oversampling and sinc (sin(x)/x)
pulse-shaping. There are two cyclic-prefix lengths defined on the uplink. These
are referred to as the normal CP and extended cyclic prefix corresponding to
seven and six SC-FDMA symbols per slot. The cyclic prefix provides OFDMA’s
fundamental robustness against multipath. The extended CP is beneficial for
deployments with large channel delay-spread characteristics, and for large cells.
SC-FDMA can use either contiguous subcarrier tones (localized) or uniformly
spaced tones (distributed). LTE adopted localized subcarrier
mapping. This decision was based on the consideration that the
localized mapping would make it possible to exploit frequency-selective gain via
channel-dependent scheduling (assigning uplink frequencies to UE based on
favorable propagation conditions).
The receiver functions within the eNodeB are basically the reverse of the
processing performed within the transmitter and is conceptually illustrated in
Figure 3.8. After time and frequency synchronization is achieved with the trans-
mitter, a number of samples corresponding to the length of the CP are removed,
so that only the intersymbol-interference-free block of samples is passed to
the DFT. Note that frequency-domain (FD) equalization is performed after
transforming the received baseband signal into the frequency domain through the
FFT block using the reference symbols for channel estimation.
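The full uplink chain described above (DFT precoding, localized subcarrier mapping, IFFT and CP at the UE; CP removal, FFT, one-tap FD equalization, demapping, and inverse DFT at the eNodeB) can be sketched end to end with toy parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
N, L, CP, start = 256, 60, 16, 30    # toy IFFT size, tones, CP length, first tone

# --- UE transmitter ---
d = (rng.choice([-1, 1], L) + 1j * rng.choice([-1, 1], L)) / np.sqrt(2)  # QPSK
D = np.fft.fft(d)                    # DFT precoding (the SC-FDMA step)
X = np.zeros(N, complex)
X[start:start + L] = D               # localized subcarrier mapping
x = np.fft.ifft(X)
tx = np.concatenate([x[-CP:], x])    # add cyclic prefix

# --- multipath channel (shorter than the CP) ---
h = rng.normal(size=8) + 1j * rng.normal(size=8)
rx = np.convolve(tx, h)[:len(tx)]

# --- eNodeB receiver ---
Y = np.fft.fft(rx[CP:CP + N])        # remove CP, back to the frequency domain
H = np.fft.fft(h, N)
D_hat = (Y / H)[start:start + L]     # one-tap FD equalization + demapping
d_hat = np.fft.ifft(D_hat)           # inverse DFT recovers the QPSK symbols

assert np.allclose(d_hat, d)
```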

3.8 Channels in LTE


There are three categories of channels referred to in LTE: the physical, the
transport, and the logical channels. The physical channels are defined by a set
of resource elements in the frequency-time grid built over subcarriers and the
14 OFDM symbols of the 1-ms subframe structure (with the normal cyclic
prefix). A resource element is a single subcarrier over one OFDM symbol
duration, per antenna port. The physi-
cal channels use the transport channels to offer services to the MAC protocol
sublayer. There is also what is referred to as the physical signals, which are for
the exclusive use of the physical layer. The physical signals are implemented on
assigned resource elements and are of two types: the reference symbols, which
are used to determine the channel impulse response, and the synchronization
signals, which convey network timing information.
The transport channels provide the service access points (SAPs) between
MAC and the physical layer. A transport channel defines the format with which
data is transferred over the radio interface, such as the modulation scheme, cod-
ing scheme, and antenna mapping. The number of transport channels has been
reduced compared to UTRAN, since LTE does not use dedicated channels for
specific UEs. The transport channels organize the data into transport blocks
with a TTI of 1 ms (the duration of the LTE subframe). The TTI also defines
the minimum time interval over which link adaptations are performed and
scheduling decisions for transmission to different UEs are carried out.

Figure 3.8 Conceptual block diagram of the UL SC-FDMA receiver within the eNodeB.

The logical channels provide the interface between the MAC and the
RLC protocol sublayers. The MAC uses the logical channels to provide services
to the RLC. Thus, the logical channels are the SAPs between MAC and RLC
sublayers. A logical channel is defined by the type of information that it carries,
traffic or control data and system broadcasts. A logical channel is identified by a
channel ID, which is a field within the MAC header. This ID is used for multi-
plexing the logical channels within the transport channels and specifies to what
higher layer entity the information should be transmitted. The physical chan-
nels and the details of their mappings to resource elements are discussed in [5],
and the transport and logical channels and the mappings are explained in [13].
The following sections provide a short description of the various channels
types, the formats, and the mappings to different protocol layers.

3.8.1 Physical Channels


The physical channels are defined on both the downlink and the uplink in
LTE, and provide for the actual implementation of the transport and the logi-
cal channels over which user data and control information flows. The downlink
physical channels include the following channels:

• Physical downlink shared channel (PDSCH);


• Physical broadcast channel (PBCH);
• Physical multicast channel (PMCH);
• Physical control format indicator channel (PCFICH);
• Physical downlink control channel (PDCCH);
• Physical hybrid ARQ Indicator channel (PHICH).

The physical downlink shared channel (PDSCH) is used for downlink


transmission of data and uses adaptive modulation and coding from the set
QPSK, 16 QAM, and 64 QAM. PDSCH may be semistatically configured
with one of several transmission modes, which depend on the site configuration
and the instantaneous radio channel conditions.
The physical broadcast channel (PBCH) carries system information for
UEs wishing to access the network. It carries the Master Information Block
(MIB) messages that convey the cell information. The modulation scheme used
is always QPSK and the information bits are coded and rate matched. The bits
are then scrambled using a scrambling sequence specific to the cell to prevent
confusion with data from other cells. The PBCH is transmitted during sub-
frame 0 of each 10-ms frame on the central 72 subcarriers or six central resource
blocks regardless of the overall system bandwidth and is transmitted as close to

the center frequency as possible. A PBCH message is repeated every 40 ms, that
is, with a transmit time interval of four LTE frames. The PBCH transmissions
consist of 14 information bits, 10 spare bits, and 16 CRC bits.
The physical multicast channel (PMCH) carries multicast/broadcast in-
formation for the MBMS service and uses the same modulation formats as
the PDSCH. The PMCH is transmitted in the MBSFN region of an MBSFN
subframe.
The physical control format indicator channel (PCFICH) informs the
UE about the format of the data being received. It indicates the number of
OFDM symbols used for the PDCCHs, which can range from 1 to 3 and
are located within the first three OFDM symbols of the 1-ms subframes. The
number of symbols setting will impact the available capacity and therefore it
is a parameter that needs proper setting or may be dynamically adjusted in a
SON implementation. The PCFICH is transmitted on the first symbol of every
subframe and carries a Control Format Indicator (CFI) field. The CFI contains
a 32-bit code word that represents one of the numbers 1, 2, or 3. A CFI of 4
is reserved for possible future use. The channel uses the more robust QPSK
modulation; the expansion of the 2-bit CFI into a 32-bit code word corresponds
to a rate 1/16 block coding.
The physical downlink control channel (PDCCH) carries downlink con-
trol information (DCI), which consists of uplink power control commands,
downlink resource scheduling, uplink resource grants, and indications for paging
or system information. The DCI format consists of several different types which
are defined with different sizes. The different format types include: Type 0,
1, 1A, 1B, 1C, 1D, 2, 2A, 2B, 2C, 3, 3A, and 4. Type 0 contains scheduling
grants for the mobile’s uplink transmissions. DCI types 1 to 1D and 2 to 2A
are used for scheduling commands for downlink transmissions. The DCI type
1 schedules data that the base station will transmit using one antenna, open
loop diversity, or beam forming for mobiles that already have been configured
into one of the downlink transmission modes 1, 2, or 7. Further information
on DCI types and formats is given in [5].
The physical hybrid ARQ indicator channel (PHICH) carries a 1-bit indicator
used to report the hybrid automatic repeat request (HARQ) status, with a 0 for
ACK and a 1 for NACK. The PHICH is transmitted within the control region
of the subframe.
All the control signaling transmitted on the PDCCH, the PCFICH, and
the PHICH are located in the first n OFDM symbols within each subframe
with n ≤ 3.
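The dimensioning of this control region can be sketched numerically. The following estimate counts the CCEs left for the PDCCH after the PCFICH (16 REs), the PHICH (12 REs per group), and the cell-specific reference signals are subtracted; it assumes two antenna ports, so cell RS occupy only the first OFDM symbol (4 REs per PRB). This is a simplified back-of-the-envelope sketch, not the exact mapping of TS 36.211.

```python
def pdcch_cces(n_rb, cfi, n_phich_groups):
    """Rough count of PDCCH CCEs in the control region (first `cfi`
    OFDM symbols), assuming 2 antenna ports (cell RS in symbol 0 only)."""
    res = 12 * n_rb * cfi          # all REs in the first cfi symbols
    res -= 4 * n_rb                # cell-specific RS in the first symbol
    res -= 16                      # PCFICH (4 REGs of 4 REs)
    res -= 12 * n_phich_groups     # PHICH (3 REGs per group)
    return res // 36               # one CCE = 9 REGs = 36 REs

# 10-MHz carrier (50 PRBs), CFI = 3, 7 PHICH groups:
print(pdcch_cces(50, 3, 7))  # 41 CCEs
```

The sketch makes visible why the CFI setting matters for capacity: moving from CFI = 1 to CFI = 3 on the same carrier roughly quintuples the number of schedulable CCEs at the cost of PDSCH symbols.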
The uplink physical channels consist of the following:

• Physical uplink shared channel (PUSCH);
• Physical uplink control channel (PUCCH);
• Physical random access channel (PRACH).

The physical uplink shared channel (PUSCH) is the uplink counterpart
of the PDSCH.
The physical uplink control channel (PUCCH) carries uplink control in-
formation, and is never transmitted simultaneously with the PUSCH from the
same UE. The channel is carried over a configurable number of outer RBs con-
veyed to the user through a parameter within the RRC messages. The parameter
is conveyed within the SIB type 2 for the common PUCCH and within the
RRC Connection Setup, RRC Connection-Reconfiguration, and Re-Establish-
ment messages for the dedicated PUCCH to the users. The PUCCH supports
a number of different formats, each of which uses a modulation scheme from
the set (BPSK, QPSK, BPSK+QPSK), and either a normal or an extended
CP. The PUCCH transmits the scheduling requests for uplink transmissions
from the UE and a number of other control signaling information such as the
CQI (channel quality indicator), RI (channel rank indicator), and the precoding
matrix indicator (PMI). The RI and PMI are related to the MIMO operation
modes and are discussed in Section 3.12. The information is transmitted
through different formats efficient for the scenario as explained in [5] and the
modulation used is a mixture of BPSK and QPSK.
The physical random access channel (PRACH) transmits the random
access preambles for accessing the network. The transmission of the random
access preamble is restricted to certain time and frequency resources. These
resources are enumerated in increasing order of the subframe number within
the radio frame and the PRBs in the frequency domain such that index 0 cor-
responds to the lowest numbered physical resource block and subframe within
the radio frame. The location and periodicity of the resources allocated for the
PRACH are indicated in the system information messages broadcast on the
BCCH. Since the uplink and downlink transmission delays are unknown when
this channel is used, the channel operates in a nonsynchronized manner. This is
PRACH instance contains a CP and a guard period. The preamble sequence
may be repeated to enable the eNodeB to decode the preamble when the link
conditions are poor.

3.8.2 Transport Channels


The transport channels map to the physical channels and act as SAPs for the
higher layers. They provide a structure for passing data and control information
to the higher layers. The transport channels organize the data into transport
blocks with a TTI of 1 ms. The TTI is also the minimum interval for link
adaptation and scheduling decisions. Without spatial multiplexing, at most one
transport block is transmitted to a UE in each TTI. The transport channels on
the downlink consist of the following:

• Broadcast channel (BCH);
• Downlink shared channel (DL-SCH);
• Paging channel (PCH);
• Multicast channel (MCH).

The broadcast channel (BCH) maps to the PBCH. All UEs within the
cell must receive BCH information error-free. The reception of the BCH is
mandatory for accessing any service of a cell.
The downlink shared channel (DL-SCH) maps to the PDSCH and is
used for transmitting the downlink traffic and the higher layer control-plane
information and hence supports both the logical control and traffic channels. It
supports adaptive modulation and coding and various transmission modes to
make efficient use of the prevailing radio channel conditions. It also supports the
DRX operation, explained in Chapter 6.
The paging channel (PCH) maps to dynamically allocated resources on
the PDSCH via its own identifier (P-RNTI), and must be received within the
entire cell coverage area. The PCH supports DRX in order to increase the bat-
tery operating life cycle.
The multicast channel (MCH) maps to the PMCH and is used to trans-
mit the same information from multiple synchronized base stations to mul-
tiple UEs. MCH transmissions occur in subframes configured by upper layer
for MCCH or MTCH transmission. For each such subframe, the upper layer
indicates if signaling modulation and coding schemes (MCS) or data MCS ap-
plies. The transmission of an MCH occurs in a set of subframes defined by the
PMCH-Configuration.
The transport channels on the uplink consist of the following:

• Uplink shared channel (UL-SCH);
• Random access channel (RACH).

The uplink shared channel (UL-SCH) maps to the PUSCH and is the
uplink counterpart of the DL-SCH, and supports the same basic function such
as adaptive modulation-coding, HARQ, and spatial multiplexing.
The random access channel (RACH) maps to the PRACH and is used by
the UE for initial access without synchronization with the network. It supports
both the collision-based and collision-free modes as will be discussed in later
sections.

3.8.3 Logical Channels


The 3GPP TS 36.321 [13] defines a set of logical channels for different kinds
of data transfer services offered by the MAC sublayer. A logical channel is de-
fined by the type of information that it transfers such as traffic data, control
channel information, and the system broadcasts. The logical channels are ad-
dressed with a logical channel ID, which is a field within the MAC header
PDU. The logical channels are multiplexed into the transport channels within
the MAC layer using the logical channel ID which specifies where the informa-
tion should be transmitted. The logical channels used to transfer control and
traffic information consist of the following:

• Broadcast control channel (BCCH), (DL);
• Paging control channel (PCCH), (DL);
• Common control channel (CCCH), (DL and UL);
• Dedicated control channel (DCCH), (DL and UL);
• Multicast control channel (MCCH), (DL);
• Dedicated traffic channel (DTCH), (DL and UL);
• Multicast traffic channel (MTCH), (DL).

The broadcast control channel (BCCH) is used on the downlink to broadcast
system control information and is mapped to the transport channel BCH.
The paging control channel (PCCH) is used on the DL and transfers
paging information. This channel is used when the network does not know the
location of the UE. The PCCH is mapped to the PCH transport channel.
The common control channel (CCCH) is defined on both the UL and
the DL and is mapped to the UL-SCH and the DL-SCH transport channels.
It is used for transmitting control information between the network and UEs
when no RRC connection is in place, that is, when the UE is not attached to
the network such as in the idle state. It is most commonly used in the random
access procedure.
The dedicated control channel (DCCH) is defined on both the UL and
the DL and is mapped to the UL-SCH and DL-SCH transport channels. The
DCCH is used to transmit dedicated control information between a UE and
the network. It is used by UEs that have an RRC connection, that is, the UEs
that are attached to the network.
The multicast control channel (MCCH) is a point-to-multipoint down-
link channel and is used for transmitting control information to multiple UEs
in the cell for receiving multicast/broadcast information and is mapped to the
MCH transport channel.
The dedicated traffic channel (DTCH) is defined on both the UL and
the DL and is mapped to the UL-SCH and the DL-SCH transport channels. The
DTCH is dedicated to one UE and is used to transmit the user data. Note that
the word dedicated is not in the sense of dedicated resources. It is used to iden-
tify a specific UE or application sharing a transport traffic channel (UL-SCH
or DL-SCH).
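The logical-to-transport channel mappings described in this section can be collected into a small lookup table. The sketch below simply restates the mappings from the text (in the full specification the BCCH additionally maps to the DL-SCH for the system information blocks, which is omitted here); the helper function is illustrative, not a 3GPP-defined procedure.

```python
# Logical-to-transport channel mappings as described in Section 3.8.3.
LOGICAL_TO_TRANSPORT = {
    "BCCH": ["BCH"],
    "PCCH": ["PCH"],
    "CCCH": ["DL-SCH", "UL-SCH"],
    "DCCH": ["DL-SCH", "UL-SCH"],
    "DTCH": ["DL-SCH", "UL-SCH"],
    "MCCH": ["MCH"],
    "MTCH": ["MCH"],
}

def transport_for(logical, direction):
    """Pick the transport channel for a logical channel and link direction."""
    options = LOGICAL_TO_TRANSPORT[logical]
    if len(options) == 1:
        return options[0]          # downlink-only channels
    return "DL-SCH" if direction == "DL" else "UL-SCH"

print(transport_for("DTCH", "UL"))  # UL-SCH
```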

3.9 Layer 2 Protocol Sublayers


The layer 2 protocols in LTE have been optimized for low delay and low over-
head. The design provides effective cross layer functionality for increased reli-
ability and reduced delays for a wide range of IP data flows involving services
with different bit rates and delay requirements. For instance, voice over IP
(VoIP) flows can tolerate delays on the order of 100 ms and packet losses of up
to 1% (Chapter 12). However, TCP file downloads at high data rates (multiple
megabits per second) in heavy data applications are more sensitive to IP packet
losses than VoIP.
The layer 2 in LTE is structured into three sublayers consisting of the
MAC, the RLC, and the Packet Data Convergence Protocol (PDCP) sublay-
ers, which are intertwined for cross-layer operations. The physical layer, that is,
layer 1, interfaces with the MAC sublayer of layer 2 and the Radio Resource
Control (RRC) layer of layer 3, and provides the transport channels to the MAC.
The physical layer is at the bottom of the protocol stack and carries 1-ms subframes
that contain a transport block. Within the transport block is a MAC header
and any extra space filled by padding. Above the RLC layer there can be one
or a number of PDCP entities. The PDCP takes up the IP packets, which form
the SDUs (service data units) coming from the top of the protocol stack. Multiple
PDCP entities are used above one RLC layer if several users or applications
are multiplexed within the 1-ms subframe. The link layer protocol stack is
illustrated in Figure 3.9. In order to counteract BER variations that occur on the
radio link, the link layer protocols implement error detecting and error cor-
recting techniques such as the combination of forward error correction (FEC)
and automatic repeat request (ARQ) technique known as HARQ on the MAC
layer, as well as ARQ on the RLC layer. The HARQ performs better than the
simple ARQ in poor signal conditions, but it can result in lower throughput in
good signal conditions.

3.9.1 MAC Sublayer


The MAC sublayer is responsible for medium access such as the random ac-
cess process and resource scheduling and implements the HARQ protocol. A
MAC PDU is a bit string that is byte aligned (i.e., multiple of 8 bits) in length.

Figure 3.9 Link layer protocol stack.

The MAC PDU consists of a MAC header, zero or more MAC Service Data
Units (MAC SDU), zero or more MAC control elements, and optionally pad-
ding. Both the MAC header and the MAC SDUs are of variable sizes. The
MAC header contains a logical channel ID field (LCID), which identifies the
logical channel instance of the corresponding MAC SDU or the type of the
corresponding MAC control element or padding for the DL-SCH, UL-SCH,
or MCH. The MAC header size can range from 2 to 3 bytes depending on
whether a 7-bit or 15-bit length field is used. The MAC controls what to send
at a given time and a number of other functions. These include the following:

1. Mapping between logical channels and transport channels;
2. Multiplexing of MAC SDUs from one or different logical channels
onto transport blocks (TBs) to be delivered to the physical layer on
transport channels;
3. Demultiplexing of MAC SDUs from logical channels that are con-
tained in the TBs, which are delivered from the transport channels
carried over the physical layer;
4. Scheduling measurement information reporting;
5. Error correction through HARQ;
6. Priority handling between UEs by means of dynamic scheduling;
7. Priority handling between logical channels of one UE;
8. Transport format selection.

The HARQ is similar to the one used in HSDPA and performs continuous
transmissions. Instead of using a status message containing a sequence number,
it uses a single-bit HARQ feedback ACK/NACK, with a fixed-timing relation
to the corresponding transmission attempt, to provide information about the
successful reception of the HARQ process. In the HARQ scheme implemented
by MAC, the packet is not dropped at the receiver side in case the cyclic redun-
dancy check (CRC) fails. Instead, the packet is stored even in the case when a
retransmitted packet is erroneous and then a combined recovery is attempted.
The techniques used for this are known as chase combining and incremental
redundancy. Chase combining [10] uses frame retransmissions to combine with
the information contained in erroneously received frames to recover the data,
whereas in the incremental redundancy scheme, the transmitter sends only the
parity bits in a retransmission. The receiver can then buffer the parity bits sepa-
rately from the systematic bits and attempt to recover the frame. The HARQ
uses both methodologies depending on which one is selected. However, the
simulation results presented in [11] indicate that incremental redundancy tends
to have a lower overall BLER when compared to chase combining. If a packet
is rescheduled, then the transmission can use either incremental redundancy or
chase combining, depending on which one was selected. Eventually either the
error is resolved or the maximum number of retransmissions is reached. Each
following retransmission contains additional redundancy information (incre-
mental redundancy). LTE uses a stop-and-wait HARQ in the MAC imple-
mentation, which provides the sender with a binary single-bit ACK/NACK
feedback for every transmitted packet with a fixed-timing relation to the cor-
responding transmission attempt. The 1-bit HARQ feedback helps to save on
transmission resources and also results in simplicity, reduced delay, and overhead
compared to a window-based selective repeat protocol. The retransmissions can
be rapidly requested after each packet transmission, thus helping to minimize
the impact on end-user performance from erroneously received packets. The
HARQ retransmission must occur exactly after one round-trip delay of 8 ms (in
LTE FDD mode) in the uplink direction, which is the time from transmission
to receipt of an ACK/NACK by the eNodeB (includes processing time within
the base station). In the downlink, the HARQ has a minimum of 8 ms RTT,
where the scheduler can postpone downlink retransmissions in favor of higher
priority transmissions. The maximum number of HARQ retransmissions is set
by a parameter which is operator reconfigurable. To differentiate among the
possible HARQ processes that may exist within one HARQ entity in the node,
the receiver uses the timing of the ACK/NACK to associate it with a certain
HARQ process.
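This fixed 8-ms timing relation on the FDD uplink means the HARQ process serving a transmission can be derived from the subframe index alone: with eight processes and an 8-ms round trip, a retransmission always lands back in the same process. The sketch below illustrates this synchronous association; the absolute subframe counter is an illustrative simplification.

```python
N_UL_HARQ_PROCESSES = 8   # LTE FDD uplink: synchronous 8-ms round trip

def ul_harq_process(abs_subframe):
    """Associate a 1-ms subframe with a HARQ process. With 8 processes
    and an 8-ms RTT, a retransmission maps to the same process, so no
    explicit process number needs to be signaled on the uplink."""
    return abs_subframe % N_UL_HARQ_PROCESSES

first_tx = 3
retx = first_tx + 8               # NACK -> retransmit one RTT later
print(ul_harq_process(first_tx) == ul_harq_process(retx))  # True
```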
With the HARQ, the probability for misinterpreting a negative acknowl-
edgment as a positive acknowledgment, thereby causing a residual packet loss
is reduced to around 10–4 to 10–3 [12]. Furthermore, errors in other control
signaling, such as scheduling information, can result in HARQ failures. To effi-
ciently reduce the error rate further and ensure the desired very low residual loss
rates required for TCP operation in high speed heavy data applications, the fast
HARQ protocol with low-overhead ACK/NACK feedback and retransmissions
with incremental redundancy is complemented by a highly reliable window-based
selective repeat ARQ protocol that resides in the RLC sublayer. However,
simulation results [11] have shown that the use of HARQ has an advantage
when the BLER is around 10−3 or more, in which case the HARQ helps to
improve the throughput at the expense of system delay.
The details of the MAC header fields and functions are provided in the
3GPP specifications [13].

3.9.2 RLC Sublayer


The RLC sublayer performs the segmentation of the RLC SDUs into RLC
PDUs for delivery to the MAC and the duplicate detection and reassembly
functions for the in-sequence delivery of the RLC SDUs to the next higher
sublayer, that is, the PDCP sublayer. The RLC sublayer uses a sequence number
to detect gaps within the received SDUs and for the reordering and in sequence
delivery to the next sublayer. A reordering timer is started when such gaps
are detected. When the gap is filled by HARQ retransmissions, the RLC re-
ceiver stops the reordering timer and delivers reassembled SDUs from received
PDUs to the PDCP. If the reordering timer expires, which happens typically
in a HARQ failure case, an RLC UM receiver delivers the SDUs to the PDCP
with a certain amount of loss. The RLC SDUs are of variable sizes which are
byte aligned (i.e., multiple of 8 bits). The variable size helps to eliminate the
overheads associated with padding. The RLC PDUs are formed only when a
transmission opportunity has been notified by the lower layer (i.e., by MAC)
for delivery.
The RLC PDU size is based on transport block size which the eNodeB
assigns to every UE on the downlink. The transport block size is dependent on
the assigned transmission bandwidth (i.e., number of resource blocks assigned
for the connection) and the modulation-coding scheme selected for the given
channel condition. Thus, widely scalable transport block sizes are made pos-
sible, which results in a wide range of user-data rates. Since the RLC SDU size
varies with the service (large packets for video or small packets for voice over
IP), the resulting RLC PDU size may turn out to be too large if the available ra-
dio data rate is low (resulting in small transport blocks), or if the available radio
data rate is high, the PDU size may be too small. In those cases, both splitting
and packing are performed to break down the SDU into smaller PDUs or pack
more SDUs into one PDU.
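The splitting-and-packing behavior described above can be sketched as a greedy fill of the transmission opportunity notified by the MAC. The segment headers and length fields of the real RLC PDU format are omitted; sizes are in bytes and the queue contents are illustrative.

```python
def fill_rlc_pdu(sdu_queue, opportunity):
    """Greedily fill one PDU of `opportunity` bytes from queued SDU sizes:
    whole SDUs are concatenated (packing) and the last SDU is split
    (segmentation) if it does not fit. Returns the bytes taken per SDU."""
    pdu, remaining = [], opportunity
    while sdu_queue and remaining > 0:
        take = min(sdu_queue[0], remaining)
        pdu.append(take)
        remaining -= take
        if take == sdu_queue[0]:
            sdu_queue.pop(0)          # whole SDU consumed
        else:
            sdu_queue[0] -= take      # leftover waits for the next PDU
    return pdu

queue = [300, 300, 300]               # three 300-byte SDUs (e.g., video)
print(fill_rlc_pdu(queue, 700))       # [300, 300, 100]: two packed, one split
print(queue)                          # [200]: remainder of the third SDU
```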
The RLC operates in three modes that consist of the transparent mode
(TM), acknowledged mode (AM), and unacknowledged mode (UM). These
are used by different radio bearers to meet different purposes. In the TM RLC,
no segmentation or concatenation of RLC PDUs is done in the transmitting
and receiving sides, and the RLC PDUs formed are just the RLC SDUs.
In the UM RLC, segmentation and reassembly of RLC SDUs into PDUs
and vice versa are performed with reordering and in-sequence delivery of out-
of-sequence received PDUs without accounting for lost PDUs. Duplicate PDUs
are discarded. Hence, the relevant RLC PDU header is inserted.
The AM RLC includes all the functions performed under the UM RLC
with the addition of retransmission request for lost PDUs to the peer RLC en-
tity. This is done through the RLC window-based selective repeat ARQ mecha-
nism. The combined usage of ARQ in the AM RLC and the HARQ in the
MAC sublayer helps to reduce the radio channel errors significantly. The BER
is typically equal to 10−2 with no retransmissions, 10−3 after the HARQ proce-
dure at the MAC layer, and 10−6 after the ARQ at the AM RLC layer [1]. With
RLC UM, the residual errors propagate to higher layers and must be handled by
TCP, whereas with RLC AM, they are recovered by the RLC protocol. Services
that can sustain error rates on the order of 10−3 to 10−2, can be mapped to a
radio bearer running the RLC protocol in unacknowledged mode (UM), that
is, without the second ARQ layer and benefit from reduced delays. In that case,
residual errors on the MAC layer are not recovered, but packet losses propagate
to higher layers. The RLC UM is normally more suited for VoIP streaming traf-
fic and real-time gaming applications.
Since in LTE, the RLC and MAC are both terminated in the same node
(i.e., eNodeB), it results in a tighter integration of the two protocol sublay-
ers. That is, when the HARQ transmission fails, such as when the maximum
number of HARQ transmission attempts has been reached, the HARQ can indicate
a local NACK to the RLC ARQ transmitter instead of waiting for transmission
of a periodic status report. This results in a faster RLC retransmission of the
missing PDU.
The benefit of the local NACK is the shorter detection delay, resulting in
improved performance compared to the gap detection at the ARQ receiver. The
two-layer ARQ design results in low latency and low overhead while providing
the increased reliability. The lightweight HARQ detects and corrects most of
the errors while any residual HARQ errors are detected and resolved by the
more expensive (in terms of latency and overhead) ARQ retransmissions of the
RLC sublayer.
The TM RLC can be configured to deliver/receive RLC PDUs through
the logical channels of the BCCH and the PCCH on the DL, and the CCCH on
UL and DL.
The UM RLC can be configured to deliver and receive RLC PDUs through
the logical channels of DTCH on DL and UL, and the MCCH and MTCH.
The AM RLC can be configured to deliver and receive RLC PDUs through the
logical channels of DCCH and DTCH on UL and DL.
The functions performed by the RLC sublayer consist of:

1. Transfer of upper layer PDUs;
2. Error correction through ARQ (only for AM data transfer);
3. Concatenation, segmentation and reassembly of RLC SDUs (only for
UM and AM data transfer);
4. Resegmentation of RLC data PDUs (only for AM data transfer);
5. Reordering of RLC data PDUs (only for UM and AM data transfer);
6. Duplicate detection (only for UM and AM data transfer);
7. RLC SDU discard (only for UM and AM data transfer);
8. RLC reestablishment;
9. Protocol error detection (only for AM data transfer).

The RLC sublayer is defined in [14].

3.9.3 PDCP Sublayer


In the case of 3G networks, the PDCP was only used for the packet data bear-
ers, but the circuit-switched bearers connected directly from the host to the
RLC layer. However, since LTE provides all-IP packet transmissions, the PDCP
sublayer handles ciphering and deciphering, robust header compression (ROHC),
sequence numbering, and duplicate removal. The PDCP provides its services to the
RRC and the user plane upper layers at the UE or to the relay at the eNodeB.
The maximum supported size of a PDCP SDU is 8,188 octets. The PDCP sub-
layer is responsible mainly for IP header compression and ciphering. However,
it also supports lossless mobility handling in case of inter-eNodeB handovers
and provides integrity protection to higher layer-control protocols. The PDCP
sublayer ensures that no data is lost at handover for RLC AM bearers by retrans-
mitting missing data. It handles duplicate removal and in-sequence delivery of
the PDCP SDUs received from the source and the target eNodeBs based on the
PDCP SN. For RLC UM, no data is retransmitted by the PDCP.
The PDCP services to the upper layers are summarized as:

1. IP header compression and ciphering. The PDCP includes an IETF
protocol known as robust header compression (ROHC). The principle
is that the transmitter sends the full header in the first packet,
but only sends differences in subsequent packets. Most of the header
stays the same from one packet to the next, so the difference fields are
considerably smaller. The protocol can compress the original 40- and
60-byte headers to as little as 1 and 3 bytes, respectively, which greatly
reduces the protocol overhead.
2. Transfer of data (user plane or control plane).
3. Maintenance of PDCP SNs.
4. In-sequence delivery of upper-layer PDUs at the reestablishment of
lower layers.
5. Duplicate elimination of lower-layer SDUs at the reestablishment of
lower layers for radio bearers mapped on RLC AM.
6. Ciphering and deciphering of user plane data and control plane data.
7. Integrity protection and integrity verification of control plane data.
8. For RNs, integrity protection and integrity verification of user plane
data.
9. Timer-based discard.
10. Duplicate discarding.
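The savings quoted in item 1 above are easy to quantify: compressing a 40-byte IPv4/UDP/RTP header to about 3 bytes changes the air-interface overhead of a VoIP packet dramatically. The 32-byte payload below is an illustrative voice-frame size, not a value from the text.

```python
def header_overhead(header_bytes, payload_bytes):
    """Fraction of the packet spent on header bytes."""
    return header_bytes / (header_bytes + payload_bytes)

payload = 32                                  # illustrative voice frame, bytes
uncompressed = header_overhead(40, payload)   # full IPv4/UDP/RTP header
compressed = header_overhead(3, payload)      # after ROHC

print(f"{uncompressed:.0%} -> {compressed:.0%} overhead")  # 56% -> 9% overhead
```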

The PDCP uses the services provided by the RLC sublayer. The PDCP
header size can range from 1 to 3 bytes depending on the length of the sequence
number. There is one PDCP instance per radio bearer. The radio bearer is simi-
lar to a logical channel for user and control data.
The header compression is particularly important for VoIP, which is a
critical application in LTE. Since there is no circuit switching in LTE, all voice
signals must be carried over IP and there is a need for efficiency. Various stan-
dards are specified for use in robust header compression (ROHC), which provide
tremendous savings in the amount of header information that would otherwise have
to go over the air. These protocols are designed to work with the packet loss
that is typical in wireless networks with higher error rates and longer round-
trip time. There are multiple header compression algorithms, called profiles,
defined for the ROHC framework. Each profile is specific to the particular net-
work layer, transport layer, or upper layer protocol combination (e.g., TCP/IP
and RTP/UDP/IP), with the details given in RFCs as referenced in the 3GPP
specifications [15].
Ciphering, including encryption and decryption, has to occur below
the ROHC because the ROHC can only operate on unencrypted packets. It
cannot understand an encrypted header. The ciphering protects user plane data,
radio resource control (RRC) data and non-access stratum (NAS) data. The
ciphering algorithm and key to be used by the PDCP entity are configured by
upper layers [16] and the ciphering method is applied as specified in [17].

3.10 Modulation and Coding Schemes and Mapping Tables


The MCS specifies the modulation and the effective coding used on the chan-
nel. The modulation schemes supported by LTE are QPSK, 16 QAM, and 64
QAM in the uplink and the same with 256 QAM in the downlink. Current
terminals and equipment do not support 64 QAM on uplink and 256 QAM
on downlink. The QAM type modulations used in LTE provide higher spectral
efficiencies compared to the constant envelope schemes such as the Gaussian
minimum phase shift keying and PSK modulations used by single carrier sys-
tems as in WCDMA in which the signal amplitude remains constant. On the
downside, the higher PAPR of the QAM modulation results in higher dynamic
range requirement for the AD and DA convertors and what is even worse is
that they reduce the efficiency of the transmitter RF power amplifier (RFPA).
However, this drawback would happen in any case in a multicarrier OFDMA
system such as LTE due to the fact that the OFDMA symbol is made up of a
combination of many subcarriers in which signals can add when in phase and
attenuate each other when out of phase, resulting in high PAPR. The MCS
with the lowest modulation order such as QPSK results in lower data rates but
requires less SINR to operate and hence allows a larger coverage area and operation
in poor channel conditions. The selection of the modulation scheme from
the specified set takes place adaptively under the control of eNodeB based on
measurements it collects on the UL and the CQI (channel quality indicator)
signaled by the UE for the DL. It is therefore important that the channel mea-
surements are processed and reported properly by the receivers and acted on
correctly by the transmitters at each end for efficient data transmissions in ac-
cordance with channel conditions. The CQI measurements are a measure of the
SINR of the channel and indicate the downlink channel quality to the eNodeB,
whereas the eNodeB makes its own estimate of the supportable uplink data rate
directly from UE demodulation reference symbols or otherwise from channel
sounding reference symbols. The CQI, which conveys the downlink channel
quality to the eNodeB, also incorporates the quality of the UE’s receiver. A
UE with a better receiver can report better CQI for the same downlink chan-
nel conditions and thus receive downlink data with a higher MCS order. The
standard proposes to use 15 different CQI levels to indicate the channel quality
and SINR. The mobile can report the CQI to the base station either periodi-
cally or aperiodically. The periodic reporting is carried out at regular intervals
between 2 and 160 ms for the CQI (and PMI) and are up to 32 times greater
for the channel rank indicator (RI). The information is usually transmitted on
the PUCCH, but otherwise is carried on the PUSCH if the mobile is sending
uplink data in the same subframe. The maximum number of bits in each pe-
riodic report is 11, to accommodate the lower data rate that is available on the
PUCCH. The aperiodic reporting is carried on the PUSCH when transmitting
user data and is requested using a field in the mobile’s scheduling grant. If
both types of reporting are scheduled in the same subframe, then the aperiodic
report takes the priority. Since the channel fading may vary across the subcar-
riers within the assigned system bandwidth, the base station can configure the
mobile to report the CQI via:

1. The wideband reporting that is over the entire downlink band;
2. Via higher layer-configured subband reporting where the base station
divides the downlink band into subbands, and the mobile reports one
CQI value for each;
3. Via mixed wideband and mobile selected subband reporting.

In (3), the mobile selects the subbands that have the best channel quality
and reports their locations, together with one CQI that spans them and a sepa-
rate wideband CQI. If the mobile is receiving more than one transport block,
then it can also report a different CQI value for each to reflect the fact that dif-
ferent layers can reach the mobile with different values of the SINR. The base
station uses the received CQI to select the modulation scheme and coding rate
and for frequency-dependent scheduling. However, the base station only uses
one frequency-independent modulation scheme and coding rate per transport
block for transmitting the downlink data.
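The mobile-selected subband reporting of option (3) can be sketched as follows: the UE picks the best M subbands, reports their indices together with one CQI spanning them, and separately reports a wideband CQI. The per-subband CQI values and the averaging used here are illustrative, not the quantization tables of the 3GPP specifications.

```python
def mobile_selected_report(subband_cqis, m):
    """UE-selected subband feedback: indices of the best M subbands,
    one CQI spanning them, and a separate wideband CQI."""
    best = sorted(range(len(subband_cqis)),
                  key=lambda i: subband_cqis[i], reverse=True)[:m]
    best_cqi = round(sum(subband_cqis[i] for i in best) / m)
    wideband_cqi = round(sum(subband_cqis) / len(subband_cqis))
    return sorted(best), best_cqi, wideband_cqi

cqis = [7, 9, 12, 11, 6, 8]            # illustrative per-subband CQIs
print(mobile_selected_report(cqis, 2)) # ([2, 3], 12, 9)
```

The compact report (subband positions plus two CQI values) is what keeps the periodic feedback within the small payload that the PUCCH can carry.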
The channel coding scheme for transport blocks (i.e., on traffic channels)
in LTE is Turbo coding with a coding rate of R = 1/3, two 8-state constitu-
ent encoders, and a contention-free quadratic permutation polynomial (QPP)
Turbo code internal interleaver. Trellis termination is used for the turbo coding.
The Turbo-coder internal interleaver used is only defined for a limited number
of code-block sizes, with a maximum block size of 6,144 bits. If the transport
block, including the transport-block CRC, exceeds this maximum code-block
size, code-block segmentation is applied before the Turbo coding.
Code-block segmentation means that the transport block is segmented
into smaller code blocks, the sizes of which should match the set of code-block
sizes supported by the Turbo coder. Therefore, before the Turbo coding, the
transport blocks are segmented into byte-aligned segments with a maximum
information block size of 6,144 bits where 24-bit CRC error detection is also
used [2]. The CRC allows for the receiver side to detect errors in the decoded
transport block. The error indication can, for example, be used by the downlink
HARQ protocol as a trigger for requesting retransmissions. The Turbo coder
adds redundancy bits to the data, which enables some level of error correc-
tion. The basic coding rate is 1/3 where one information bit is encoded into
3 bits, but a wide range of other coding rates is achieved from the adaptive
changing of the coding operation. The coding rate is a trade-off between error
protection and data transfer efficiency. This is realized through puncturing the
native coded bit stream to a higher code rate (less protection) or by repeating
coded bits if a smaller code rate is desired to provide more protection based
on reported CQIs. The code rate is the ratio of the source data to the resulting
protected data. However, the overall coding rate achieved defines how many
data bits are present out of every 1,024 bits after puncturing, repetition, and
rate matching. An effective coding rate is normally defined as the number of
information bits transmitted (the transport block size plus the 24-bit CRC
appended to it) divided by the number of bits transmitted (excluding control
and reference symbols) over the 1-ms PDSCH subframe, where the latter depends
on the number of PRBs assigned. Channel coding spreads each bit of the user data over sev-
eral coded bits before transmission. The coded bits are then mapped into the
modulation symbols and transmitted by a set of OFDM subcarriers that are
distributed over the overall transmission bandwidth. In this way, each informa-
tion bit experiences frequency diversity when transmitted over a radio channel
that is frequency-selective. The distribution of the coded bits over the frequency
domain is referred to as frequency interleaving.
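The segmentation arithmetic above can be sketched as follows. This is a simplified illustration: it accounts for the 24-bit CRC attached to each segment when splitting occurs, but omits the rounding of segment sizes to the set of QPP interleaver sizes supported by the Turbo coder (see 3GPP TS 36.212 for the full procedure).

```python
import math

MAX_CODE_BLOCK = 6144  # maximum Turbo-coder code-block size, bits
SEGMENT_CRC = 24       # CRC bits attached to each segment when splitting

def code_block_count(transport_block_bits):
    """Number of code blocks for a transport block (including its 24-bit
    transport-block CRC): no segmentation up to 6,144 bits; otherwise
    each segment must leave room for its own 24-bit CRC."""
    if transport_block_bits <= MAX_CODE_BLOCK:
        return 1
    return math.ceil(transport_block_bits / (MAX_CODE_BLOCK - SEGMENT_CRC))
```

For example, a 6,144-bit transport block fits in one code block, while 6,145 bits already require two.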
The eNodeB selects the MCS so that the modulation order and the trans-
port block size correspond to a code rate that is the closest possible to the error
protection and the coding rate requirements indicated by the CQI index. The
transport block size (TBS) defines the number of bits that are transported to
the user at layer 1 (excluding the 24-bit CRC, physical channel overheads such
as coding, reference symbols, and control channel bits). The 3GPP has defined
an MCS index that ranges from 0 to 31 to capture the combinations of the
modulation order and overall coding details and redundancy to accommodate
the channel qualities as indicated by the CQI index. Thus, it is the MCS which
determines the modulation order and the transport block size. Since there are
only 15 levels defined for the CQI index, each CQI value may map to one
of two possible MCS indexes. However, the specifications have not specified the
mapping of CQI to the appropriate MCS index, and therefore the association
between the CQI index and the MCS is vendor-specific.

3.10.1 MCS Index Mapping to Transport Block Size


With the adaptive changes to the overall protective coding and the adaptive
selection of the modulation order, the amount of useful data bits that are trans-
mitted in each TTI of 1 ms will depend on both the channel conditions and
the number of PRBs assigned to the connection. This results in a wide range
of effective coding rates under various scenarios of PRB assignment (channel
bandwidth). We can then expect to have a mapping from an MCS index and
the number of PRBs assigned for the connection to the amount of useful bits
(information bits) transmitted in each TTI. The amount of useful information
transmitted at the physical layer is referred to as the transport block size (TBS)
by the 3GPP. The 3GPP TS 36.213 [18] provided this mapping through an-
other set of tables whereby the MCS index, (IMCS), is first mapped to another
index referred to as the transport block size index, (ITBS), defined over the range
from 0 to 26, as reproduced from the 3GPP [18] in Table 3.3.

Table 3.3
Modulation and TBS Index Table for
PDSCH
MCS Index Modulation TBS
IMCS Order Qm Index ITBS
0 2 0
1 2 1
2 2 2
3 2 3
4 2 4
5 2 5
6 2 6
7 2 7
8 2 8
9 2 9
10 4 9
11 4 10
12 4 11
13 4 12
14 4 13
15 4 14
16 4 15
17 6 15
18 6 16
19 6 17
20 6 18
21 6 19
22 6 20
23 6 21
24 6 22
25 6 23
26 6 24
27 6 25
28 6 26
29 2 Reserved
30 4
31 6
Source: [18]. (© 2008. 3GPP™ TSs and TRs are
the property of ARIB, ATIS, CCSA, ETSI, TTA,
and TTC.)

From the ITBS index thus obtained, another mapping can be performed to the transport block size transmitted, depending on the number of PRBs assigned, using a second, much longer
table, Table 7.1.7.2.1-1 of [18]. The 3GPP table numbers for these mappings
in the case of PDSCHs and PUSCHs are provided in Table 3.4.
Air Interface Architecture and Operation 61

Table 3.4
The Mapping Tables in 3GPP TS36.213 for
Determination of TBS Transmitted
Table Number in
The Mapping 3GPP TS 36.213 [18]
MCS index to MCS order Table 7.1.7.1-1
and TBS index on DL
TBS index and number of Table 7.1.7.2.1-1
PRBs to TBS on DL and UL
MCS index to MCS order Table 8.6.1-1
and TBS index on UL
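The mapping of Table 3.3 is regular enough to express compactly; the index arithmetic below is read directly off the table and covers only the data MCS indexes 0 through 28 (29 through 31 are reserved):

```python
# Modulation order Qm and TBS index I_TBS versus MCS index I_MCS,
# following Table 3.3 (3GPP TS 36.213 Table 7.1.7.1-1).
MCS_TO_QM_ITBS = {}
MCS_TO_QM_ITBS.update({i: (2, i) for i in range(0, 10)})       # QPSK
MCS_TO_QM_ITBS.update({i: (4, i - 1) for i in range(10, 17)})  # 16QAM
MCS_TO_QM_ITBS.update({i: (6, i - 2) for i in range(17, 29)})  # 64QAM

def lookup(i_mcs):
    """Return (Qm, I_TBS) for a data MCS index in 0..28."""
    return MCS_TO_QM_ITBS[i_mcs]
```

For instance, lookup(10) returns (4, 9): 16QAM with TBS index 9, as in Table 3.3.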

3.10.2 Effective Channel Coding Estimation


The effective channel coding rate over a 1-ms transmit time interval is defined
as the number of information bits (including CRC bits) divided by the number
of physical channel bits on the channel. In other words, it is the ratio of the data
bits transmitted at the physical layer to the maximum amount of bits that could
be transmitted in the 1-ms TTI excluding the reference and control channel
symbols. Thus,

coding rate = (TBS + CRC) / (NRE × bits per RE) (3.2)

where TBS is the transport block size as calculated from Table 7.1.7.2.1-1 of
3GPP TS 36.213 [18]; CRC is the number of cyclic redundancy check bits
appended for error detection (24 bits); bits per RE is 2 for QPSK, 4 for
16QAM, and 6 for 64QAM; and NRE is the number of useful resource elements
(i.e., excluding reference and control channel symbols) contained in the
assigned PDSCH or PUSCH, that is, the number of useful REs within each PRB
multiplied by the number of PRBs assigned for the connection. The number of
useful REs per PRB for two transmit antennas (using one for a transmit
diversity scheme, for example), as seen from Figure 3.12, is calculated as:

Number of useful REs per PRB for two transmit antennas = 12 * 14 − 3 * 12 − (16 − 4) = 120

It is noted that symbols 0 through 2 in each 1-ms subframe of a PRB are
assumed to be occupied by the PDCCHs (excluding the REs used for reference
symbols).
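As a sketch, (3.2) can be evaluated as follows; the default of 120 useful REs per PRB is the two-transmit-antenna, three-PDCCH-symbol figure computed above and would change for other configurations:

```python
def effective_coding_rate(tbs_bits, n_prb, bits_per_re,
                          n_re_per_prb=120, crc_bits=24):
    """Effective channel coding rate of (3.2): information bits (TBS
    plus 24-bit CRC) over the physical-channel bits available in the
    assigned PRBs during one 1-ms subframe."""
    n_re = n_re_per_prb * n_prb           # useful REs in the assignment
    return (tbs_bits + crc_bits) / (n_re * bits_per_re)
```

For example, a 67,656-bit transport block on 100 PRBs with 64QAM (6 bits per RE) gives (67,656 + 24)/72,000 = 0.94, the CQI-15 coding rate of Table 3.5.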

3.10.3 CQI Mapping to MCS


The eNodeB must have algorithms and criteria to map the channel quality indi-
cator (CQI), which is defined over the range from 1 to 15 in 3GPP TS 36.213
to one of the MCS indexes for formatting the transmissions so that the BLER
is kept below 10%. The UE calculates the CQI based on its measurements on
the received signal and transmits the corresponding index to the eNodeB. The
CQI in LTE (as in HSDPA) does not reflect the SINR only. The measurement
corresponds to the highest MCS allowing the UE to decode the transport block
with error rate probability not exceeding 0.1. For the mapping of the CQI
to a proper MCS (indicated by the MCS index), each vendor may develop a
scheduling algorithm that takes the CQI value, the ACK/NACK rate, the data
to transmit, the UE category, the available network resources (such as PRBs),
and any other applicable factors, in order to provide efficient trade-offs between
spectral efficiency and the consumed resources (power, PRBs). The criteria used
by a vendor may not be equally important to every operator depending on what
their priorities are. Therefore, the vendor may incorporate weighting for each
criterion within implemented algorithms to provide the flexibility to meet dif-
ferent operators’ priorities. This is one instance where the standards leave room
for product differentiation.
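Because the standards leave the CQI-to-MCS mapping vendor-specific, any concrete algorithm here is necessarily illustrative. The sketch below shows one plausible shape of such logic: an outer-loop adjustment around the CQI-derived MCS candidates, driven by the observed ACK ratio against the 10% BLER target; the thresholds and clamping are invented for illustration.

```python
def select_mcs(cqi_mcs_candidates, ack_ratio, bler_target=0.10):
    """Illustrative vendor-style MCS selection: start from the MCS
    indexes associated with the reported CQI, then back off if the
    observed block error rate exceeds the BLER target."""
    i_mcs = max(cqi_mcs_candidates)       # most aggressive CQI candidate
    if 1.0 - ack_ratio > bler_target:     # too many NACKs observed
        i_mcs -= 1                        # step down for robustness
    return max(0, min(28, i_mcs))         # clamp to data MCS range
```

A real scheduler would also weigh the UE category, the buffered data, and the available PRBs, as discussed above.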
However, for the purpose of presenting examples of achievable data rates,
we have provided in Table 3.5 two different recommendations for mapping the
CQI to a modulation order and an effective coding rate. One is based on 3GPP
TS 36.213 [18] and the other is taken from [19] where the results are derived

Table 3.5
CQI to MCS Mapping
3GPP TS 36.213 SINRs, dB Proposed in [19]
Modulation Coding (Typical from Modulation
CQI Order Rate Vendors) Order Coding Rate
1 QPSK 0.078 −4.46 QPSK 0.096
2 QPSK 0.12 −3.75 QPSK 0.096
3 QPSK 0.19 −2.55 QPSK 0.24
4 QPSK 0.30 −1.15 QPSK 0.24
5 QPSK 0.44 1.75 QPSK 0.37
6 QPSK 0.60 3.65 QPSK 0.438
7 16QAM 0.37 5.2 QPSK 0.438
8 16QAM 0.49 6.1 QPSK 0.438
9 16QAM 0.61 7.55 16QAM 0.369
10 64QAM 0.46 10.85 16QAM 0.42
11 64QAM 0.56 11.55 16QAM 0.47
12 64QAM 0.66 12.75 16QAM 0.54
13 64QAM 0.77 14.55 16QAM 0.54
14 64QAM 0.87 18.15 64QAM 0.45
15 64QAM 0.94 19.25 64QAM 0.45
Source: [18, 19]. (© 2008. 3GPP™ TSs and TRs are the property of ARIB, ATIS, CCSA, ETSI,
TTA, and TTC.)

based on trade-offs between spectral efficiency and power consumption. The
typical SINRs used by vendors, assuming one Tx antenna diversity at the
eNodeB, are also shown in the table.

3.11 Data Rates and Spectral Efficiencies


The peak channel data rates in releases 10 and 11 of LTE-Advanced are about
1,200 Mbps on the downlink and 600 Mbps on the uplink. These arise through
the use of two component carriers (see Chapter 10), eight layers on the down-
link, and four layers on the uplink and meet the peak data rate requirements
of IMT-Advanced. The peak data rates will eventually increase to 3,000 and
1,500 Mbps, respectively, through the use of five component carriers. This adds
complexity and cost to user devices, particularly given the need for
well-performing, high-resolution A-to-D and D-to-A converters, which is why
practical implementation of the standards lags by at least 5 years. Therefore,
in the early days of LTE, allocations of 5 to 10 MHz will be more common. In
these smaller bandwidths, the peak data rate will be lower by a factor of 2 or
4. However, in this section, we will calculate and present the typical practical
channel data rates that are achievable in LTE (one component carrier) and com-
pare these with theoretically derived channel data rates. We will do this for the
downlink traffic channel PDSCH for all possible channel bandwidths in the
range of 1.4 to 20 MHz and all channel conditions as represented by the CQI
index over the full range from 1 to 15. The CQI mapping to the modulation
order and coding rate will be based on the 3GPP proposal given previously in
Table 3.5. We will also calculate and present the theoretical full channel capaci-
ties based on the Shannon capacity formulation in each case for comparison.
This should give a good sense of how much of the channel capacity is taken
by the physical layer overheads (reference symbols and control channels) and
how close to optimal the OFDMA multiple access and the MCSs used in LTE are.
To obtain the downlink user data rate for each downlink channel condi-
tion defined by the CQI index over the range of 1 to 15, the modulation order
and the coding rate is taken from the 3GPP proposal in Table 3.5, and used in
(3.2) to calculate the TBS. The TBS represents the actual user data, excluding
all physical layer overheads due to reference symbols, control channels, and
coding, transmitted over the transmit time interval of 1 ms; since one TBS is
delivered per 1-ms TTI, dividing it by 1,000 gives the data rate in megabits
per second. The summary results are presented
numerically in Table 3.6. However, in the actual system, the vendors provide a
mapping from the UE indicated CQI to an MCS index. The MCS index along
with the modulation index are then mapped into the transport block index,
ITBS, using the 3GPP table reproduced in Table 3.3. The ITBS index thus found
is then mapped, for each scenario of allocated channel bandwidth (1.4 MHz, or
more precisely 6 PRBs; 3 MHz, or 15 PRBs; and so on, where 6 PRBs translate
into a bandwidth of 6 * 12 * 0.015 MHz = 1.08 MHz), into the transport block
size using Table 7.1.7.2.1-1 of 3GPP TS 36.213, as also mentioned in Table 3.4.

Table 3.6
Calculated PDSCH Data Rates in Megabits per Second Excluding Overheads from Reference
Symbols, Control Channels, and Coding (Based on the 3GPP CQI to MCS Mapping)
Transmission Bandwidth, MHz
Modulation Coding 1.4 MHz 3 MHz 5 MHz 10 MHz 15 MHz 20 MHz
CQI Order Rate LTE LTE LTE LTE LTE LTE
1 QPSK 0.078 0.08832 0.2568 0.444 0.912 1.38 1.848
2 QPSK 0.12 0.1488 0.408 0.696 1.416 2.136 2.856
3 QPSK 0.19 0.2496 0.66 1.116 2.256 3.396 4.536
4 QPSK 0.3 0.408 1.056 1.776 3.576 5.376 7.176
5 QPSK 0.44 0.6096 1.56 2.616 5.256 7.896 10.536
6 QPSK 0.6 0.84 2.136 3.576 7.176 10.776 14.376
7 16QAM 0.37 1.0416 2.64 4.416 8.856 13.296 17.736
8 16QAM 0.49 1.3872 3.504 5.856 11.736 17.616 23.496
9 16QAM 0.61 1.7328 4.368 7.296 14.616 21.936 29.256
10 64QAM 0.46 1.9632 4.944 8.256 16.536 24.816 33.096
11 64QAM 0.56 2.3952 6.024 10.056 20.136 30.216 40.296
12 64QAM 0.66 2.8272 7.104 11.856 23.736 35.616 47.496
13 64QAM 0.77 3.3024 8.292 13.836 27.696 41.556 55.416
14 64QAM 0.87 3.7344 9.372 15.636 31.296 46.956 62.616
15 64QAM 0.94 4.0368 10.128 16.896 33.816 50.736 67.656
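The entries of Table 3.6 can be reproduced by rearranging (3.2) for the TBS: with 120 useful REs per PRB, the user bits carried in one 1-ms TTI are NRE × bits per RE × coding rate, minus the 24-bit CRC. A sketch, with the PRB counts per channel bandwidth taken from the standard LTE channelization:

```python
PRBS_PER_BANDWIDTH = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

def pdsch_rate_mbps(bw_mhz, bits_per_re, coding_rate,
                    n_re_per_prb=120, crc_bits=24):
    """User data rate in Mbps over one 1-ms TTI, excluding reference-
    symbol, control-channel, and coding overheads (as in Table 3.6)."""
    n_re = PRBS_PER_BANDWIDTH[bw_mhz] * n_re_per_prb
    tbs = n_re * bits_per_re * coding_rate - crc_bits
    return tbs / 1000.0  # bits per 1-ms TTI -> Mbps
```

For example, pdsch_rate_mbps(20, 6, 0.94) gives 67.656 Mbps and pdsch_rate_mbps(1.4, 2, 0.078) gives 0.08832 Mbps, matching the corner entries of Table 3.6.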
The theoretical full channel capacities for each channel condition indi-
cated by the CQI index can be obtained from the Shannon capacity formula

C = W · log2(1 + SNR) (3.3)

in which C is the maximum theoretical bit rate in megabits per second, W is the
effective channel bandwidth in megahertz (i.e., 6 * 12 * 0.015 = 1.08 MHz for
the 1.4-MHz channel), and SNR is the signal power to noise power ratio at the
input to the receiver on the linear scale. We will assume that the interference
can be represented as white Gaussian noise and added to the thermal noise, so
that the SNR in the above formula becomes the SINR (signal power to
noise-plus-interference power ratio).

C = W · log2(1 + SINR) (3.4)

To estimate the theoretical bit rates, we will need the SINR values that
would represent the channel conditions defined by each CQI value. The SINR
must reflect the actual eNodeB transmit configuration, such as transmit
diversity, and anything else that determines the effective SINR at the input to
the receiver (i.e., the UE) for input to the Shannon capacity formula. For this study, we
will use SINR values typically reported by vendors for a site with Tx diversity
configuration, which is what is normally the minimum case on which the data
derived previously were based. These CQI to SINR mappings were also given
before in Table 3.5.
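The Shannon figures of Table 3.7 follow from (3.4) using the effective occupied bandwidth, that is, the number of PRBs × 12 subcarriers × 15 kHz, together with the CQI-to-SINR mapping of Table 3.5:

```python
import math

EFFECTIVE_BW_MHZ = {1.4: 1.08, 3: 2.7, 5: 4.5, 10: 9.0, 15: 13.5, 20: 18.0}

def shannon_capacity_mbps(bw_mhz, sinr_db):
    """Theoretical capacity of (3.4) over the effective (occupied)
    bandwidth, with the SINR converted from dB to the linear scale."""
    sinr = 10.0 ** (sinr_db / 10.0)
    return EFFECTIVE_BW_MHZ[bw_mhz] * math.log2(1.0 + sinr)
```

For example, shannon_capacity_mbps(20, 19.25) evaluates to about 115.4 Mbps, the CQI-15, 20-MHz entry of Table 3.7.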
Evaluating (3.4) for the various system transmission bandwidths, and tabu-
lating the results, we obtain Table 3.7, the contents of which are also presented
in Figures 3.10 and 3.11.
The actual data rates presented in Table 3.6 are considerably smaller
than the theoretical channel capacity values. This is expected since, as shown
in Chapter 4, at least 25% of the channel capacity is taken by the physical
layer overheads due to transmission of reference symbols, and the control chan-
nels. However, there still remains some significant difference between the chan-
nel full capacity and the actual achieved data rate even if we assume that the
channel overheads from reference symbols and the control signaling take up
to 28% of the channel capacity. For instance, from the tabulated results, it
can be seen that the percentage differences between the full channel capacity
reduced by the physical layer overhead of 28% and the actual data rate under

Table 3.7
Theoretical Maximum Channel Capacities in Megabits per Second Versus Channel
Quality (Includes All Overheads from Reference Symbols, Control, Coding)
Transmission Bandwidth , MHz
1.4 MHz 3 MHz 5 MHz 10 MHz 15 MHz 20 MHz
CQI Cap Cap Cap Cap Cap Cap
1 0.477 1.3192 1.987 3.974 5.961 7.949
2 0.548 1.371 2.284 4.569 6.853 9.137
3 0.689 1.722 2.870 5.740 8.610 11.480
4 0.887 2.218 3.697 7.394 11.092 14.789
5 1.425 3.563 5.939 11.878 17.817 23.756
6 1.868 4.671 7.785 15.570 23.356 31.141
7 2.277 5.692 9.487 18.973 28.460 37.946
8 2.531 6.326 10.544 21.088 31.631 42.175
9 2.961 7.403 12.338 24.675 37.013 49.350
10 4.016 10.039 16.732 33.465 50.197 66.929
11 4.249 10.623 17.705 35.410 53.115 70.819
12 4.655 11.637 19.395 38.948 58.186 77.582
13 5.274 13.184 21.974 43.948 65.922 87.897
14 6.535 16.338 27.230 54.461 81.691 108.922
15 6.925 17.312 28.853 57.706 86.559 115.412

Figure 3.10 Graphical display of data presented in Tables 3.6 and 3.7.

Figure 3.11 Graphical display of data presented in Tables 3.6 and 3.7.

channel condition of a CQI of 8 and 15 are around 25% and 12%, respectively.
That is, the data rates achieved are about 75% to 88% of the achievable theo-
retical capacity values. The differences may be explained by considering that
the channel assumed by the Shannon capacity formula is a Gaussian channel
that experiences white Gaussian noise and this is not always the case in cel-
lular mobile communication channels due to fading and interference patterns.
Added to that are margins of error in practical implementation, limitations
of non-ideal error-correcting codes, and possibly inaccuracies in the mapping
of the CQI to SINR. The considerably smaller differences at CQI of 15 may
be because the SINRs used for this condition (the CQI to SINR mapping) and
the choice of the detailed coding (MCS) more accurately represented the chan-
nel conditions and required modulation-coding. Nevertheless, these results
show that with LTE and single-stream transmission, we can achieve capacity
within about 15% of the theoretical capacity of the allocated bandwidth, which
reflects the spectral efficiency of the multiple access and the modulation-
coding schemes used. Our data show that with 20-MHz, single-stream
transmission and a CQI of 15, we achieve a spectral efficiency of 67.66/20 =
3.38 bps/Hz, and a data rate of 67.66 Mbps. This data rate is quadrupled to
around 270 Mbps if a MIMO 4 × 4 is used, which should be feasible given the
channel conditions stated (CQI of 15). Besides, Release 10 LTE-Advanced (see
Chapter 10) introduces transmission modes with up to 8 antennas allowing 8
× 8 MIMO in downlink direction. This enables theoretically up to a factor of
8 increase in efficiency by transmitting 8 parallel data streams. The receiver can
retrieve all eight individual data streams, achieving MIMO gain with spatially
diverse multipath channels. UEs inform the base station with the rank indicator
about their DL reception conditions, so that the base station can apply the best
precoding of the DL MIMO signals.
It should be noted that what we have shown here as achievable peak rates
and peak spectral efficiency is on a channel level based on the channel modula-
tion and coding scheme. We have ignored the overall system interference im-
pact by estimating the peak performance based on the best channel quality
scenario (CQI = 15) and with a single user in mind. It may not be possible
to achieve these peak channel performances when several users and adjacent
cells share the same frequencies. Moreover, the actual capacity of a cell and the
spectral efficiencies achieved will depend also on the distribution of the users
within the cell as well as site antenna configurations and interference mitiga-
tion features used in the network. Obviously, if users are located in better radio
propagation spots within the cell, higher capacity is achieved. However, mul-
tiple simulation-based evaluations have been carried out for different scenarios
to provide a certain degree of diversity in the evaluations with intersite distances
of 500m and 1,732m and the averaged results reported in [20]. Compared to
the UTRA (HSPA), the results indicate that spectral efficiencies of up to 2 to 3
times in uplink and 3 to 4 times in downlink in the sense of bits per second per
hertz per cell are achievable.

3.12 Transmission Modes and MIMO Operation in LTE


LTE transmission modes (TM) are based on multiantenna technologies and are
used to improve both the link and system performance in a wide variety of radio
environment scenarios. A multiantenna site can be configured to provide vari-
ous forms of diversity gains through space time and space frequency block codes
[21] and the capacity benefits achieved through MIMO spatial multiplexing
[22]. This has resulted in a number of different transmission modes, depending
on the cell’s antenna resources and the configuration and mode in which they are
operated based on channel conditions. In the Release 8 version of the 3GPP standards, up
to 4 antennas can be configured in the base station and up to 2 antennas in
most UE categories. UE category 5 can be provided with 4 antennas to allow
up to four layers to be transmitted in the downlink in the spatial multiplexing
mode (MIMO 4 × 4). However, this is not likely to be implemented in the near
future and the most common configuration will be 2 × 2 SU-MIMO. LTE
uses the concept of layer in this context to refer to a single distinct user data
stream transmitted to an antenna port after having been encoded and modu-
lated. The four antenna ports are labeled 0, 1, 2, and 3. There are cell-specific
reference symbols provided within the 14 × 12 time-frequency grid of the PRB,
for estimating the channel conditions for each of the four stated antenna ports.
In fact the set of antenna ports supported is defined by the reference symbol
configuration in the cell. These cell-specific reference symbols are transmitted
in all subframes in a cell supporting non-MBSFN (multicast broadcast single
frequency network). The positions of these reference symbols for each antenna
port are shown in Figure 3.12, where Rx denotes reference symbols for antenna
ports 2 and 3, which are used alternately. The channel impulse response for each
antenna port is determined by sequentially transmitting the known reference
signals from each transmitting antenna. When the reference symbol signals are
transmitted over one antenna port, all other antenna ports are silent to avoid in-
terference. The channel estimation based on these reference symbols in the UE
and the feedback to the eNodeB transmitter uses a codebook approach to pro-
vide an index into a predetermined set of precoding matrices. The precoding is
basically a coding based on the receiver feedback on channel quality, which tries
to compensate for the subsequent effect of channel (undo the channel impact
to the signal). Since the channel is continually changing, this information is
provided for multiple points across the channel bandwidth, at regular intervals,
up to several hundred times per second. The type of precoding depends on the
multiantenna technique used as well as on the number of layers and the number

Figure 3.12 Position of the reference symbols within the PRB. (Rx denotes the positions of
reference symbols corresponding to antenna ports 2 and 3, which are used alternately.)

of antenna ports. The precoding is intended to achieve the best possible signal
quality at the receiver. The precoding matrices for LTE support MIMO and
beamforming. Four codebook entries are used for 2 × 2 SU-MIMO and 16
entries for 4 × 4 SU-MIMO.
In addition to cell-specific reference symbols, LTE also defines UE specif-
ic reference symbols and MBSFN reference symbols. These are transmitted on
other antenna ports referred to as antenna ports 5 and 4, respectively, and their
existence is signaled to the UE by higher-layer signaling. The user specific refer-
ence symbols are used for channel estimation in the vendor-specific beamform-
ing based on multiple antennas. LTE Release 8 supports rank-1 precoding (or
beamforming) using predefined 3GPP codebook for 2 and 4 antennas and any
vendor-specific beamforming when using UE-specific reference signals (with an
arbitrary number of base station antennas). In beamforming, the transmissions
are directed to a specific UE for improved gain and reduced interference to us-
ers in neighboring cells.
In the downlink, there are nine different transmission modes: TMs 1 to 7 were
introduced in Release 8 of the 3GPP standards and are defined below; TM8 was
introduced in Release 9 for dual-layer beamforming on two antenna arrays with
different polarizations; and TM9 was introduced in Release 10 (LTE-Advanced),
which we will discuss in detail in Chapter 10. For the uplink, TM1 and TM2 are
specified, where TM1, the default, was introduced in Release 8 and TM2 was
introduced in Release 10. The different transmission modes differ in the number
of layers (also known as streams or ranks) and the number of antenna ports used.
The RRC signaling is used to convey to the UE which transmission mode
to use. These modes are defined and discussed below, and we will cover the
MIMO spatial multiplexing operation in some detail subsequently.

1. Transmission mode 1 uses a single antenna at the eNodeB.
2. Transmission mode 2 uses two antennas at the site for transmit
diversity in the open-loop mode.
3. Transmission mode 3, open-loop SU-MIMO. This mode allows for
open-loop spatial multiplexing MIMO to a single user.
4. Transmission mode 4, closed-loop SU-MIMO. This mode allows for
closed-loop spatial multiplexing to a single user.
5. Transmission mode 5, MU-MIMO spatial multiplexing. This mode
allows for spatial multiplexing MIMO operation to multiple users
(UEs) in the cell at a time.
6. Transmission mode 6, beamforming using closed-loop Rank-1. In
this mode antenna steering is used to direct the RF signal to single
user equipment and hence helps to improve channel performance
and throughput while also reducing interference in the network. It
uses feedback information from the UE on the channel condition in
a closed loop operation to steer the multiantenna radiation to the in-
tended user.
7. Transmission mode 7, beamforming using UE-specific reference
symbols. This mode uses the UE-specific reference symbols to evaluate
channel conditions and, based on that, steers the multiantenna
configuration to direct the RF signal to the intended UE.
8. Transmission mode 8, introduced in Release 9. In this mode, the base
station constructs two independent beams using two antenna arrays
that have different polarizations and sends a different data stream on
each beam based on spatial multiplexing, using either or both of the
two new antenna ports, numbers 7 and 8, and requires sufficiently
high signal to noise plus interference ratio. The antenna ports use a
new set of UE-specific reference signals, which are only transmitted
in the physical resource blocks that the mobile is using and behave
in the same way as the reference signals that are used for single-layer
beamforming.
9. Transmission Mode 9. This mode was introduced in Release 10 LTE-
Advanced, and uses UE-specific RSs with up to eight spatial layers,
and is discussed further in Chapter 10.

Transmission modes 1 and 7 are identical from a UE perspective in that a
single layer is transmitted in both cases. In TM 1, the layer is transmitted
from one antenna port, whereas in TM 7, the layer is transmitted from one or
multiple antenna ports transparent to the UE. In TM 2, a single layer is en-
coded with a space-frequency block code (SFBC) based on the Alamouti code,
and transmitted from multiple antennas [21]. In a two-transmitter antenna ex-
ample, a single stream of data is assigned to two antennas and coded using space
frequency block coding (SFBC). SFBC is a frequency-domain version of the
well-known space time block codes (STBCs), also known as Alamouti codes.
This family of codes is designed so that the transmitted diversity streams are
orthogonal and achieve the optimal SNR with a linear receiver. Such orthogo-
nal codes only exist for the case of two transmit antennas. The SFBC provides
frequency diversity by using different subcarriers for the repeated data on each
antenna. In transmission modes 2 and 3, no feedback is required to select the
precoders. These modes are therefore suitable in scenarios where the timely
channel-dependent feedback cannot be made available from the UE as, for in-
stance, in high-speed mobile scenarios. In transmission modes 4, 5, or 6, when
the mobile is configured for closed loop spatial multiplexing, multiple-user
MIMO or closed loop transmit diversity, the mobile reports a precoding matrix
indicator (PMI), which is selected from a predefined codebook and is what the
base station should apply before transmitting the signal. The PMI indicates one
or more layers to be transmitted. The precoder selections are typically based on
channel measurements provided by the UE and is basically a coding that tries
to undo the subsequent impact of channel on the signal. Since it is based on the
receiver feedback on channel conditions, it is only applicable when the mobile
is moving sufficiently slowly for timely application of the feedback. The precod-
ers are applied to the layers transmitted from the eNodeB with the objective of
maximizing the performance according to the instantaneous channel condi-
tions. Depending on the configured uplink feedback mode, the UE may feed
back multiple preferred precoder matrix indicators (PMIs), where each PMI is
valid for a particular subband (frequency-selective precoding) or one PMI that
is valid for all subbands (wide-band precoding). Transmission mode 5 (multi-
user MIMO) provides for single-layer transmission to several users who simul-
taneously share the same frequency resource allocation. In transmission modes
3 or 4, when the mobile is configured for spatial multiplexing, it reports a rank
indication which lies between 1 and the number of base station antenna ports
and indicates the maximum number of layers that the mobile can successfully
receive. The rank indication is calculated jointly with the PMI, by choosing the
combination that maximizes the expected downlink data rate.
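The joint selection amounts to a search over the allowed combinations; a sketch, where expected_rate is a hypothetical stand-in for the UE’s internal estimate of the downlink data rate for a given (rank, PMI) pair:

```python
def select_rank_and_pmi(expected_rate, ranks, pmis):
    """Pick the (rank, PMI) combination maximizing the expected
    downlink data rate, as described in the text. expected_rate(rank,
    pmi) is assumed to be provided by the UE's channel estimator."""
    return max(((r, p) for r in ranks for p in pmis),
               key=lambda rp: expected_rate(*rp))
```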

3.12.1 MIMO Operation


The MIMO operation uses spatial multiplexing to transmit and receive mul-
tiple streams of data through multiple Tx and Rx antennas. The spatial mul-
tiplexing through MIMO results in an increase in the transmission rate while
using the same bandwidth and total transmit power when compared to a tradi-
tional single-input single-output (SISO) system. The theoretical increase in ca-
pacity is linearly related to the number of transmit/receive antenna pairs. How-
ever, spatial multiplexing requires a higher channel SNR and a richer multipath
environment than an equivalent SISO system. The MIMO concept exploits
multipath arising from multiple reflections to resolve independent spatial data
streams. It uses the spatial correlation (i.e., the signature) induced by the rich
multipath environment to separate the multiple data streams at the receiver.
The MIMO antenna configurations in the LTE specifications [2] include,
for example, two transmit and two receive antennas generally referred as a 2 × 2,
and a four transmit and four receive antenna configuration, or 4 × 4. A MIMO
system can also be configured with an unequal number of antennas between the
transmitter and the receiver, such as the 4 × 2 configuration, which is used to
combine spatial multiplexing with antenna diversity. When an unequal num-
ber of antennas are used in the transmitter and receiver, the MIMO capacity
improvement is constrained by the smaller number of antenna ports. For LTE,
the defined configurations are 1 × 1, 2 × 2, 3 × 2, and 4 × 2. In the latter two
cases with as many as four transmitting antennas, there are only a maximum
of two receivers and thus a maximum of only two spatial multiplexing data
streams. For a 1 × 1 or a 2 × 2 system, there is a simple 1:1 relationship between
the layers and the transmitting antenna ports. However, for the 3 × 2 and 4
× 2 operation, there are still only two spatial multiplexing channels, but with
redundancy on one or both data streams to provide the diversity benefits. The
layer mapping specifies exactly how the extra transmitter antennas are used.
The MIMO concept is illustrated in Figure 3.13 for the basic 2 × 2 spa-
tial multiplexing configuration and the associated four complex channel coef-
ficients. The linear combination of the two data streams at the two receiver an-
tennas results in a set of two equations and two unknowns, which is resolvable
into the two original data streams. That is, the transmitted symbols s0,s1 can be
computed from the received signals r0 and r1 using the equations in (3.5).

r0 = h00 · s0 + h01 · s1
r1 = h10 · s0 + h11 · s1    (3.5)
74 From LTE to LTE-Advanced Pro and 5G

Figure 3.13 The 2 × 2 MIMO antenna configuration and channel coefficients.

where the channel coefficients h00, h01, h10, h11 referred to as the channel cou-
pling matrix are estimated using the reference symbols for antenna ports 0 and
1. The applicability of the MIMO operation is largely dependent on the char-
acteristics of the channel and the receiver’s ability to allow the recovery of the
channel coupling matrix, which is highly impacted by noise, interference, and
channel correlation properties. Correlation in the channel matrix coefficients,
which can result from inadequate antenna spacing, common antenna polariza-
tion, and narrow angular spread created by the propagation environment, can
lead to an ill-conditioned matrix. This makes the system prone to errors and highly sensitive to noise and interference. In an ill-conditioned matrix, small
errors in the coefficients may have a large detrimental effect on the solution
[13–24]. For that reason, minimization of round-off errors may require the use
of double or triple precision in order to reach sufficient accuracy in the solution.
However, the measurements of the matrix coefficients when using the known
reference symbols are also impacted by high levels of noise and interference, in
low SNR cases.
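The linear combination in (3.5) can be inverted directly. The following Python sketch (our own illustration; the function name and channel values are assumptions, not from the standard) recovers the two symbols with Cramer's rule, which is the essence of a zero-forcing receiver:

```python
def recover_symbols(r0, r1, h00, h01, h10, h11):
    """Zero-forcing recovery of the transmitted symbols s0, s1 from the
    received signals r0, r1 by solving the 2 x 2 system of (3.5) with
    Cramer's rule; all arguments may be complex."""
    det = h00 * h11 - h01 * h10
    if det == 0:
        raise ValueError("singular channel matrix: streams not separable")
    s0 = (r0 * h11 - r1 * h01) / det
    s1 = (h00 * r1 - h10 * r0) / det
    return s0, s1

# Example: transmit s0 = 1+1j and s1 = -1 over an illustrative channel.
h00, h01, h10, h11 = 1.0, 0.2, 0.3, 1.0
r0 = h00 * (1 + 1j) + h01 * (-1)
r1 = h10 * (1 + 1j) + h11 * (-1)
s0, s1 = recover_symbols(r0, r1, h00, h01, h10, h11)  # s0 ~ 1+1j, s1 ~ -1
```

A practical receiver must also cope with noise and therefore typically uses more robust detectors (e.g., MMSE), but the structure of the inversion is the same.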
There are several techniques to quantify the channel matrix properties,
which include a key parameter referred to as the matrix condition number.
The condition number is formed by taking the ratio of the maximum to the
minimum singular values of the instantaneous channel matrix. Small values
for the condition number imply a well-conditioned channel matrix while large
values indicate an ill-conditioned channel matrix. For example, a condition
number close to the ideal value of 0 dB would imply perfect channel conditions
for the application of MIMO spatial multiplexing, while values greater than
10 dB would point to the need for a decibel per decibel improvement in the
relative SNR in order to properly realize the benefits of a MIMO system. With
a large channel matrix condition number, a small error in the received signal
may result in large errors in the recovered data. However, when the condition
number is small and approaching an ideal value of 0 dB, the system is no more
affected by noise than a traditional SISO system. The channel matrix condition
number can be calculated using vector signal analyzers available from vendors.
The condition number indicates how SNR needs to improve in order to allow
spatial multiplexing and is a useful static test that can be easily implemented by
the receiver designer. In reality, the SNR would vary for each transmission layer
in a MIMO system resulting in asymmetry between the layers at the receivers,
thus requiring higher SNRs for a MIMO system. It is possible to equalize the
layer performance at the receiver using a technique called precoding that is
included as part of the LTE specification [5]. The precoding cross-couples the
layers prior to transmission into the wireless channel with the goal of equalizing
the performance across the multiple receive antennas.
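The condition-number metric just described can be sketched for the 2 × 2 case as follows; the helper below is our own illustration, computing the singular-value ratio in decibels without any external library:

```python
import math

def condition_number_db(h00, h01, h10, h11):
    """Condition number (dB) of the 2 x 2 channel matrix: the ratio of its
    maximum to minimum singular values, obtained here analytically from
    the eigenvalues of H^H * H (so no linear-algebra library is needed)."""
    trace = abs(h00)**2 + abs(h01)**2 + abs(h10)**2 + abs(h11)**2
    det = abs(h00 * h11 - h01 * h10)**2   # |det(H)|^2 = det(H^H * H)
    disc = math.sqrt(max(trace * trace - 4.0 * det, 0.0))
    lam_max = (trace + disc) / 2.0
    lam_min = (trace - disc) / 2.0
    if lam_min <= 0.0:
        return float("inf")  # rank-deficient: spatial multiplexing unusable
    # 20*log10(s_max/s_min) equals 10*log10(lam_max/lam_min)
    return 10.0 * math.log10(lam_max / lam_min)
```

An orthogonal channel (h00 = h11 = 1, h01 = h10 = 0) returns the ideal 0 dB, while a channel whose weaker singular value is 10 times smaller than the stronger one returns 20 dB, indicating roughly 20 dB of extra SNR would be needed.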
Finally, we will note that to design a MIMO system with low correlation
between channel coefficients, widely spaced antenna elements or antenna ele-
ments that are cross-polarized are often used.

3.13 LTE Physical Layer Measurements


The 3GPP standards define measurements that are made in the UE and in the
eNodeB and reported to the higher layers. These will be discussed next. To
initiate a specific measurement in the connected mode, the network transmits an RRC connection reconfiguration message to the UE, including a measurement
ID and type, a command (setup, modify, release), the measurement objects,
the measurement quantity, the reporting quantities and the reporting criteria
(periodical/event-triggered). When the reporting criteria are fulfilled the UE re-
sponds with a measurement report message to the network including the mea-
surement ID and the results. In the idle mode, the measurement information
elements are broadcast in the system information.

3.13.1 In the UE
In the physical layer, three key measurements are made in the UE [25]. These
consist of the following:

• Reference signal received power (RSRP);
• Received signal strength indicator (RSSI);
• Reference signal received quality (RSRQ).

On the downlink side, these measurements are used by the operator to derive statistics on the network coverage and coverage quality.
The RSRP is a measurement of received signal strength and is indicative
of the cell coverage. The RSRP is defined as the linear average over the power
contributions in watts of the resource elements (REs) that carry cell-specific reference signals within the considered measurement frequency bandwidth.
The power per resource element is determined from the energy received during
the useful part of the symbol, excluding the CP, and is measured on the refer-
ence symbols transmitted on the antenna port 0 (also known as R0 reference
symbols). However, if the UE reliably detects that the reference symbols on
antenna port 1 (the R1 reference symbols) are available, it may also use them
in addition to the R0 reference symbols to determine the RSRP. If receiver
diversity is in use by the UE, the combined RSRP must be at least as large as
the RSRP of any of the individual diversity branches. The reference point for
the measurement is the antenna connector of the UE. The number of resource
elements within the considered measurement frequency bandwidth and within
the measurement period that are used by the UE to determine RSRP is left up
to the UE implementation with the measurement accuracy that is applicable
for a received signal quality down to −6 dB over a measurement bandwidth
equivalent to the central 6 RBs. In the time domain the physical layer measure-
ment periods when no DRX is used (and for short DRX cycles) are 200 ms and
480 ms for intrafrequency and interfrequency, respectively. The interfrequency
measurement naturally takes longer as it can only be performed during the
measurement gaps. Moreover, the interfrequency RSRP physical layer measure-
ment period increases linearly with the number of carrier frequencies that have to be measured during the gaps. The physical layer measurement period also
increases in proportion to the DRX cycle for DRX cycles larger than 40 ms for
intrafrequency RSRP and larger than 80 ms for interfrequency RSRP. The mea-
surement sampling rate is not specified but is left to the UE implementation.
The RSRP is applicable in both the RRC_IDLE and RRC_CONNECTED
states and is used for cell reselection and handover within E-UTRAN (intrafre-
quency and interfrequency) and to E-UTRAN from any of UTRAN FDD or
TDD, GSM, and CDMA2000 1xRTT, and hence the RSRP is considered to
be the most important measurement quantity for EUTRAN.
The RSSI is defined as the linear average of the total received power in
watts observed only in OFDM symbols containing the reference symbols for
antenna port 0, in the measurement bandwidth, over N number of resource
blocks by the UE from all sources, including cochannel serving and nonserving
cells, adjacent channel interference, and thermal noise.
The RSRQ is defined as the ratio N × RSRP/(E-UTRA carrier RSSI),
where N is the number of RBs within the carrier bandwidth that was used for
measuring the RSSI, as the measurements in the numerator and denominator
should be made over the same set of resource blocks. The reference point for the
RSRQ is the antenna connector of the UE. In the first release of LTE (Release
8), RSRQ was applicable in RRC_CONNECTED state. It is therefore used for
handover within E-UTRAN and from other RATs to E-UTRAN. However, in
order to prevent outages caused by high interference situations, RSRQ was also
introduced for RRC_IDLE state in Release 9. This gives the network the option
to configure the UE to use RSRQ as a metric for performing cell reselection,
at least in the cases of cell reselection within E-UTRAN, from UTRAN FDD
to E-UTRAN and from GSM to E-UTRAN. The RSRQ and RSRP together
have been shown to be particularly beneficial for performing interfrequency
quality-based handover. The RSRQ is inherently a relative quantity which to
some extent eliminates absolute measurement errors and leads to better accu-
racy than is possible for RSRP. The RSRQ accuracy requirements are down to
−6 dB (based on a measurement bandwidth equivalent to the central 6 RBs),
where the measurement sampling rate is UE implementation-dependent. The
UE is required to measure RSRQ from the same number of intrafrequency and
interfrequency cells as for the RSRP.
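The RSRP and RSRQ definitions above can be captured in a few lines; this sketch (the function names and example values are ours, not from the specification) treats RSRP as the linear average of the reference-symbol RE powers and forms RSRQ as N × RSRP/RSSI in dB:

```python
import math

def rsrp_dbm(rs_re_powers_mw):
    """RSRP: linear average (here over powers in mW) of the REs carrying
    cell-specific reference signals, returned in dBm."""
    return 10.0 * math.log10(sum(rs_re_powers_mw) / len(rs_re_powers_mw))

def rsrq_db(n_rb, rsrp_dbm_val, rssi_dbm_val):
    """RSRQ = N * RSRP / (E-UTRA carrier RSSI), expressed in dB, where N
    is the number of RBs over which the RSSI was measured."""
    return 10.0 * math.log10(n_rb) + rsrp_dbm_val - rssi_dbm_val
```

For example, with N = 50 RBs, an RSRP of −80 dBm, and an RSSI of −60 dBm, the RSRQ evaluates to about −3 dB.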

3.13.2 In the eNodeB


On the uplink side, the measurements made by eNodeB and reported to the
higher layers consist of the following [25]:

• Downlink reference symbol transmit power (DL RS TX power);
• Uplink received interference power;
• Thermal noise power;
• Timing advance (TA);
• eNB Rx – Tx time difference;
• E-UTRAN GNSS timing of cell frames for UE positioning;
• Angle of arrival (AoA);
• The UL relative time of arrival (TUL-RTOA).

The DL RS TX power is determined for a cell as the linear average over the power contributions in watts of the resource elements that carry cell-specific
reference signals, which are transmitted by the eNodeB within its operating
system bandwidth. The cell-specific reference signals R0 and, if available, R1
can be used for this purpose, and the reference point for the measurement is
the TX antenna connector. The uplink received interference power includes the
thermal noise and is measured and reported over the bandwidth for each PRB
that is assigned for the uplink. In case receiver diversity is present, the reported value is the linear average of the power in the diversity branches. The uplink
thermal noise power measures the thermal noise within the assigned uplink
system bandwidth and the measurement is made over the same time period as
used for the receiver interference power measurement. The reference point is
the Rx antenna connector.
For frame type 1, the timing advance (TA) is defined as the time difference

TA = (eNodeB Rx − Tx time difference) + (UE Rx − Tx time difference)

where the eNodeB Rx – Tx time difference corresponds to the same UE that reports the UE Rx – Tx time difference.
For frame type 2, the timing advance (TA) is defined as the time difference

TA = (eNodeB Rx − Tx time difference)

where the eNodeB Rx – Tx time difference corresponds to a received uplink radio frame containing PRACH from the respective UE.
The eNodeB Rx – Tx time difference is defined as eNodeB-RX – eNo-
deB-TX where:

• eNodeB-RX is the eNB received timing of uplink radio frame #i, de-
fined by the first detected path in time. The reference point for eNodeB-
RX is the Rx antenna connector.
• eNodeB-TX is the eNodeB transmit timing of downlink radio frame #i.
The reference point for eNodeB-TX is the Tx antenna connector.
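Because the frame type 1 TA approximates the round-trip propagation delay between the eNodeB and the UE, it can also provide a coarse distance estimate. A small sketch (the function names and the line-of-sight assumption are ours):

```python
def ta_frame_type1(enb_rx_tx_s, ue_rx_tx_s):
    """Frame type 1 TA: sum of the eNodeB and UE Rx-Tx time differences."""
    return enb_rx_tx_s + ue_rx_tx_s

def distance_from_ta_m(ta_s, c_mps=299_792_458.0):
    """The TA spans the round trip, so the one-way eNodeB-UE distance is
    roughly c * TA / 2 under a line-of-sight assumption."""
    return c_mps * ta_s / 2.0
```

A TA of about 6.67 µs, for instance, corresponds to a UE roughly 1 km from the site.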

The E-UTRAN GNSS Timing of Cell Frames for UE positioning is defined as the time of the occurrence of a specified LTE event according to a
GNSS-specific reference time for a given GNSS (e.g., GPS/Galileo/GLONASS
system time). The specified LTE event is the beginning of the transmission of a
particular frame (identified through its SFN) in the cell. The reference point is
the Tx antenna connector.
The AoA defines the estimated angle of a user with respect to a refer-
ence direction. The reference direction for this measurement is the geographical
North, positive in a counterclockwise direction. The AoA is determined at the eNodeB antenna for an UL channel corresponding to the UE.
The UL Relative Time of Arrival (TUL-RTOA) is the beginning of sub-
frame i containing the sounding reference symbols received in the location mea-
surement unit (LMU) j, relative to the configurable reference time [26, 27].
The reference point for the UL relative time of arrival is the RX antenna con-
nector of the LMU node when LMU has a separate RX antenna or shares RX
antenna with the eNodeB and the eNodeB antenna connector when the LMU
is integrated in the eNodeB.
Table 3.8
UE Categories in Release 8
UE Category | Maximum DL Bit Rate (Mbps) | Maximum UL Bit Rate (Mbps) | Maximum Number of Layers for Spatial Multiplexing on DL | 64QAM Supported in UL
1 | 10.296 | 5.160 | 1 | No
2 | 51.024 | 25.456 | 2 | No
3 | 102.048 | 51.024 | 2 | No
4 | 150.752 | 51.024 | 2 | No
5 | 299.552 | 75.376 | 4 | Yes

3.14 UE Categories
The 3GPP specification TS 36.306 (UE Radio Access Capabilities), Release 8, specifies five categories of user equipment, as given in Table 3.8. Each category is specified by a number of downlink and uplink physical layer parameter values, as listed in the table. TS 36.101 specifies one power class, UE power class 3, which has a maximum output power of 23 dBm.

References
[1] 3GPP TS 36.300, “E-UTRA and E-UTRAN Overall Description; Stage 2, v8.0.0.”
[2] 3GPP TS 36.201, “LTE Physical Layer – General Description, v1.0.0.”
[3] 3GPP TS 36.212, “Multiplexing and Channel Coding.”
[4] 3GPP TR 25.814, “Physical Layer Aspects for Evolved Universal Terrestrial Radio Access.”
[5] 3GPP TS 36.211, v1.0.0, “Physical Channels and Modulation.”
[6] Smith, S. W., “The Discrete Fourier Transform,” Ch. 8 in The Scientist and Engineer’s
Guide to Digital Signal Processing, 2nd ed., San Diego, CA: California Technical Publish-
ing, 1999.
[7] Benvenuto, N., and S. Tomasin, “On the Comparison Between OFDM and Single Car-
rier Modulation with a DFE Using a Frequency Domain Feed Forward Filter,” IEEE
Trans. on Communication, Vol. 50, No. 6, June 2002, pp. 947–955.
[8] 3GPP TS 36.803, “UE Radio Transmission and Reception.”
[9] Van Nee, R., and R. Prasad, OFDM for Wireless Multimedia Communications, Norwood,
MA: Artech House, 2000.
[10] Chase, D., “Code Combining – A Maximum Likelihood Decoding Approach for
Combining an Arbitrary Number of Noisy Packets,” IEEE Trans. on Communications,
Vol. 33, May 1985, pp. 385–393.
[11] Sandrasegaran, K., et al., “Analysis of Hybrid ARQ in 3GPP LTE Systems,” 16th Asia-
Pacific Conference on Communications (APCC), November 2011, pp. 418–423.
[12] Larmo, A., et al., “The LTE Link-Layer Design,” IEEE Communications Magazine, April
2009.
[13] 3GPP TS 36.321, V12.0.0, “Medium Access Control (MAC) Protocol Specification,
Release 2013-12.”
[14] 3GPP TS 36.322, “Radio Link Control Protocol (RLC) Specifications, V12.0.0,” 2014.
[15] 3GPP TS 36.323, “Packet Data Convergence Protocol (PDCP) Specification, V12.0.0,”
2014.
[16] 3GPP TS 36.331, “Evolved Universal Terrestrial Radio Access (E-UTRA) Radio Resource
Control (RRC); Protocol Specification.”
[17] 3GPP TS 33.401, “3GPP System Architecture Evolution: Security Architecture.”
[18] 3GPP TS 36.213, “Release 8.”
[19] Salman, M. I., et al., “CQI-MCS Mapping for Green LTE Downlink Transmission,”
Proceedings of the Asia-Pacific Advanced Network, Vol. 36, 2013, pp. 74–82.
[20] 3GPP TR 25.912, “Feasibility Study for Evolved Universal Terrestrial Radio Access
(UTRA) and Universal Terrestrial Radio Access Network (UTRAN), Release 11,” 2012.
[21] Alamouti, S. M., “A Simple Transmit Diversity Technique for Wireless Communications,”
IEEE Journal on Selected Areas on Communications, Vol. 16, No. 8, October 1998, pp.
1451–1458.
[22] Gesbert, D., et al., “From Theory to Practice: An Overview of Space-Time Coded MIMO
Wireless Systems,” IEEE Journal on Selected Areas on Communications, April 2003, special
issue on MIMO systems.
[23] Agilent Application Note, “MIMO Channel Modeling and Emulation Test Challenges,”
Literature Number 5989-8973EN, October 2008.
[24] Kreyszig, E., Advanced Engineering Mathematics, 6th Edition, Englewood Cliffs, NJ:
Prentice Hall 1988, pp. 1025–1026.
[25] 3GPP TS 36.214, “Physical Layer-Measurements, V8.7.0,” 2009.
[26] 3GPP TS 36.459, “Evolved Universal Terrestrial Radio Access (E-UTRA); SLm
Application Protocol (SLmAP).”
[27] 3GPP TS 36.111, “Evolved Universal Terrestrial Radio Access (E-UTRA); Location
Measurement Unit (LMU) Performance Specification; Network Based Positioning
Systems in E-UTRAN.”
4
Coverage-Capacity Planning and
Analysis
Generally, the first step in the process of planning a network is the coverage
planning. This gives an estimate of the resources needed to provide adequate
SINR between the user equipment and the radio access node (eNodeB) for
service deployment in the area without detailed capacity concerns. Coverage
planning uses link budgeting for the DL and UL radio links to calculate the
maximum allowed path loss based on the required SINRs to achieve certain
operator-specified bit rates (throughputs) at the cell edge with a certain amount of PRBs (radio transmission resources). This process assumes an overall interference load level caused by the traffic, based on the expected capacity requirement of the network. The maximum allowed path loss is then converted into a
cell radius using an appropriate semi-empirical propagation model for the area.
With a rough estimate of the number of cells and sites obtained in this way,
subsequent checking is made to see whether, with the given site density, the network can handle the specified traffic load or new sites need to be added. The
main indicator for assessment of the system capacity in LTE is the SINR distri-
bution within the cell obtained through system level simulation. The SINR dis-
tributions are mapped into supported bit rates at each simulated point (pixel).
The system-level simulation will take into account realistic terrain geometries
for signal propagation and loss assessment, as well as the antenna configurations, interference levels from neighboring cells, supported MCSs, and vendor resource scheduler algorithms. Capacity-based site counts are then compared
with the results from coverage analysis, and the greater of the two is selected as
the final required site count to provide adequate coverage and system capacity.
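To make the final conversion step concrete, the sketch below inverts one commonly used semi-empirical model, the COST-231 Hata urban model, to go from the maximum allowed path loss to a cell radius; the default parameters (1,800 MHz, a 30-m base station antenna, a 1.5-m mobile) are illustrative assumptions rather than values from this chapter:

```python
import math

def cost231_hata_radius_km(l_max_db, f_mhz=1800.0, h_base_m=30.0,
                           h_mobile_m=1.5, metro_corr_db=0.0):
    """Invert the COST-231 Hata urban path-loss model to obtain the cell
    radius (km) from the maximum allowed path loss of the link budget."""
    # Mobile antenna height correction term a(h_m) for urban areas
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m
            - (1.56 * math.log10(f_mhz) - 0.8))
    # Distance-independent part of the model
    fixed_db = (46.3 + 33.9 * math.log10(f_mhz)
                - 13.82 * math.log10(h_base_m) - a_hm + metro_corr_db)
    # Slope of the loss-versus-log(distance) line
    slope_db_per_decade = 44.9 - 6.55 * math.log10(h_base_m)
    return 10.0 ** ((l_max_db - fixed_db) / slope_db_per_decade)
```

With these defaults, a maximum allowed path loss of 140 dB yields a radius of roughly 1.3 km, and a larger allowed loss gives a correspondingly larger cell.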

4.1 Radio Link Budgeting for Coverage Dimensioning


Radio link budgeting (RLB) is the first step in planning the radio access network
to provide service coverage for the intended service area. RLB is performed on both the DL and UL to estimate the power received by the receiver given a specified transmitted power from the transmitter. The link budgeting process accounts for all the gains and losses in the path of the signal from transmitter to
receiver, such as antenna gains, cable losses, propagation losses, building penetration losses, diversity gains such as those stemming from multiple antennas, and
coverage probabilities, which translate into the margins required to compensate
for the effects of signal shadowing. The outcome of link budgeting is the maximum allowed propagation loss that may be experienced by the signal as it travels from the transmitter to the receiver while still meeting the required service SINR level. The maximum allowed path loss is calculated for each link, DL and UL, to meet the necessary SINRs for the specified bit rates at the cell edge. The minimum of the maximum path losses in the UL and DL directions is
then converted into distance (i.e., the cell range) by using a propagation model
appropriate to the area. The calculated cell range is then used to determine the
coverage-limited number of sites. This estimate needs to be verified for the ca-
pacity requirement as discussed later in the chapter.
The criteria behind the link budgeting process are the coverage quality requirements. These specify the bit rates to be achieved on the UL and DL at the cell edge with a certain probability within the assigned transmission bandwidth.
For a given transmission bandwidth, the required bit rate determines the modulation and coding scheme (MCS) that will achieve it, where the bit rates for each MCS are derived from the OFDM parameters for LTE. Each MCS is identified by an index, which specifies the modulation order and the coding
rate used. The code rate here is defined as the ratio between the transport block
size and the total number of physical layer bits per subframe that are available
for transmission of that transport block. The MCS, in turn, will determine the
signal to noise+interference ratio (SINR), which will be required at the input
to the receiver to achieve a certain bit error rate quality. The higher the modu-
lation order or the coding rate, the higher the signal-to-noise ratio required to
achieve the same bit error rate performance. Section 3.10 presented the bit rates
achieved with each MCS (excluding the overheads due to control channels).
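The code rate definition just given can be expressed directly; in this sketch, the figure of 120 data-carrying REs per PRB pair in a subframe (i.e., after control and reference-symbol overhead) is a hypothetical value used only for illustration:

```python
def code_rate(transport_block_bits, n_prb, data_re_per_prb, bits_per_symbol):
    """Code rate per the text's definition: transport block size divided by
    the physical-layer bits available for it in the subframe."""
    available_bits = n_prb * data_re_per_prb * bits_per_symbol
    return transport_block_bits / available_bits
```

For example, a 1,000-bit transport block sent over 10 PRBs with QPSK (2 bits per symbol) corresponds to a code rate of about 0.42.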
The SINR values to support each MCS are derived from look-up tables
that are generated from link-level simulations and normally provided by the
equipment vendors. Therefore, the accuracy of link-level simulation results
mapping the MCSs to the required SINR is central to the validity of the
RLB and the processes of radio access dimensioning. The relationship of the
MCS to the required SINR generally depends on the propagation channel model used and on the antenna configurations, such as those providing diversity gains.
Coverage-Capacity Planning and Analysis 83

However, the effect of antenna configurations, if not incorporated in the SINR tables, can be explicitly taken into account in the link budgeting process through
antenna diversity gain parameters. In the link-level simulation models, the in-
terference is usually modeled as AWGN. The link budgeting generally requires that the antenna configurations, the RBS power class (such as transmitting 20W or 40W), the UE output power, and the transmission bandwidth be specified.
The coverage planning is based on link budgeting to determine the cell
radius and, from that, the site count. However, the cell radius may be determined based on any of three methods. These include user-defined maximum
throughput at the cell edge, maximum coverage with respect to the lowest MCS
(giving the minimum possible site count), and use of a predefined cell radius.
With a predefined cell radius, the parameters can be varied to check the data
rate achieved at the specified cell size. This option provides the flexibility to
optimize the transmitted power to achieve a suitable data rate. The cell-edge
criteria are based on the specification of achievable bit rates on UL and DL at
the cell edge with a certain amount of transmission bandwidth. Most often, the
cell radius obtained in the UL budgeting is the limiting criterion. Based on the
cell radius obtained in the uplink dimensioning process, the downlink coverage
is calculated. If the downlink quality requirements are met, the cell radius calculated in the uplink is the limiting one. If the downlink requirements are not met,
the cell radius must be reduced until the downlink requirements are met. We
will also note in passing that the control channel performance at the cell edge should be verified against the calculated cell radius to make sure that it is not limiting the cell-edge performance. If the control channel performance does not fulfill the quality requirement, the cell size must be reduced until the requirements are met, to provide the final outcome.
The link budgeting process may also be performed as an iterative process
with the specification of the cell-edge performance. It can start out with a small cell radius and a certain power control setting and load level where the
quality requirements are fulfilled. Next, the cell radius is gradually increased as
long as the requirements are still fulfilled. When the coverage or capacity starts
to reduce to below the requirement level, the power control target and load level
are optimized in a second iteration loop, with the purpose of increasing the site-to-site distance further. A suitable load level is in the range of 80% to 100% when no value is specified.

4.1.1 Link Budgeting Formulations


In the following sections, the general formulations that can be used to perform
link budgeting under various scenarios as discussed earlier are provided. Since
in most cases, the UL is the limiting factor, the equations can be used to deter-
mine the UL limited cell range to support a specified UL coverage requirement


in the form of a specified UL bit rate at the cell edge. Then the downlink sup-
ported bit rate is checked at the UL limited cell range obtained in the previous
step. The cell range is then reduced if the downlink requirement is not met and
the DL calculations are repeated until the requirement is satisfied. In other sce-
narios, the operator may specify a cell range and need to determine the coverage
quality in the form of the maximum bit rates that can be supported on the UL
and DL at the cell edge. In either case, the process will include the estimation
of the path loss from the UL and use it in the iterative estimation of the noise
rise on the DL and the receiver sensitivity in the UE. We will first define the
following notations:

GTMA: TMA gain, linear scale;


Isc,ul: uplink interference on PUSCHs per subcarrier, dBm;
Pue,SC: the per subcarrier transmitted power by UE, dBm;
Pnom, Ref: the sum of nominal power from all radio units in a cell, dBm;
Pue: the UE transmit power, dBm;
Lf : the feeder loss between the radio unit connector and the transmitter
reference point, dB;
LJ: jumper and TMA insertion loss, dB;
PB,SC: the per subcarrier transmitted power at the transmitter reference
point by eNodeB, dBm;
WSC: number of subcarriers contained in the deployed transmission band-
width;
NSC,DL: the thermal noise power per subcarrier in the UE receiver, dBm;
Fc: the average ratio between the received power from other cells to that of
its own cell at the cell edge, conveying the cell plan quality;
Fu: the average ratio between the received power from other cells to that of
its own cell on uplink, and conveys the cell plan quality;
Lpmax: the maximum allowed path loss, dB;
Lce: path loss at cell edge (from Node B), in the linear scale, anti-log of
Lpmax;
MLNF: the margin required to compensate for slow lognormal fading, determined by the coverage probability requirement for the propagation environment (the same for UL and DL), dB;
LBP: Building penetration loss, dB;
LB: Body loss (if any);
Gtx B: transmit antenna gain at the base station, dBi;
GRx B: receiver antenna gain at the base station, dBi;


Gtx ue: transmitter antenna gain at the UE, dBi;
GRx ue: receiver antenna gain at the UE, dBi;
Nt: thermal noise power density, dBm/Hz;
Nf,ue: UE receiver noise figure, dB;
Nf,RU: the base station radio unit noise figure (without the TMA), dB;
Nf,B: Base station receiver noise figure (with TMA mounted), dB;
Nt,B: effective thermal noise per subcarrier at the base station receiver input, dBm;
Nt,UE: effective thermal noise per subcarrier at the UE receiver input, dBm;
Rul: the UL cell-edge bit rate, kbps;
Rdl: the DL cell-edge bit rate, kbps;
Sue: the UE sensitivity, dBm;
SB: the eNodeB receiver sensitivity, dBm;
γcch: the fraction of PDSCH resources interfered by control channels;
ρpdsch: the PDSCH load, defined as the fraction of occasions a PDSCH
carries data (transmits power);
ρcch: defines the fraction of occasions when PDSCHs are interfered by
control channels;
ρpusch: the PUSCH load, defined as the fraction of occasions a PUSCH resource is carrying data;
ρpdcch: the PDCCH load, defined as the fraction of occasions a PDCCH
resource is used;
ρpucch: the number of simultaneously transmitted ACK/NACK on
PUCCH in a cell;
β: uplink interference cancellation/rejection factor (ranges between 0 and
1);
α: the nonorthogonality factor to model intracell PUCCH interference;
NRdl,ce: downlink noise rise at the cell edge, dB;
NRul: uplink noise rise at the base station, dB;
SINRue: signal to noise+interference ratio at input to the UE (this can be an output or an input to the calculation depending on what link budgeting scenario is under consideration), dB;
SINRB: signal to noise+interference ratio at input to the base station re-
ceiver (this can be an input to or an output from the calculation depend-
ing on what link budgeting scenario is under consideration), dB.
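With the notation above, the outcome of the link budget can be assembled as a simple sum of gains minus sensitivities, margins, and losses. The following generic sketch is ours, and the figures in the note below it are illustrative rather than a worked budget from this book:

```python
def max_path_loss_db(p_tx_dbm, g_tx_dbi, g_rx_dbi, sensitivity_dbm,
                     noise_rise_db, shadow_margin_db,
                     penetration_loss_db=0.0, body_loss_db=0.0):
    """Maximum allowed path loss: transmit power plus antenna gains, minus
    the interference and shadowing margins and extra losses, down to the
    receiver sensitivity."""
    return (p_tx_dbm + g_tx_dbi + g_rx_dbi - sensitivity_dbm
            - noise_rise_db - shadow_margin_db
            - penetration_loss_db - body_loss_db)
```

For instance, a 23-dBm UE transmit power, an 18-dBi base station receive antenna, a −120-dBm eNodeB sensitivity, a 2-dB uplink noise rise, and an 8-dB shadowing margin give Lpmax = 151 dB.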
The receivers’ signal to noise+interference ratio (SINR) requirements are obtained from vendor-provided link-level simulations, whose values depend on the choice of the modulation and coding scheme (MCS), which, in turn, depends on the desired data rate and the number of subcarriers within the deployed bandwidth. The shadowing margin, Mshad, compensates for the effect of
slow signal fading and its value is determined by the coverage probability and
the standard deviation of fading for the environment. A fast fading margin is
also required in WCDMA due to the frequency selective fading over the chan-
nel band of 5 MHz. However, this is not necessary in LTE as the multicarrier
transmission uses narrow subcarriers of width 15 kHz, which results in symbol durations of 66.7 µs, much larger than the typical delay spreads of a few microseconds arising in the mobile communication environment. The small
delay spread, say, on the order of 3 to 4 µs, results in a coherence bandwidth of roughly 250 to 330 kHz, which is much larger than the subcarrier bandwidth and hence results in flat fading. This fading (due to multipath) is spread over a number of
interleaved coded symbols (code bits), which are then corrected by the channel
error correction coding. The feeder loss, Lf, depends on the length and type of
the cable connecting the base station antenna and the low noise amplifiers in
the base station transmitter, and the frequency band used. This can be offset if
a masthead amplifier (MHA) is used. The interference margin MIul on the UL
accounts for the increase in the base station receiver noise beyond the thermal
level. This is smaller in LTE compared with WCDMA due to the fact that there
is basically no intracell interference due to subcarriers’ orthogonality. However,
the margin for the uplink accounts for the interference from the neighboring
cells from frequency reuse. The interference margin or the noise rise on the
downlink at cell edge NRdl,ce will be larger due to the lower distance to neigh-
boring base stations (at cell edge) and the formulation and the data needed for
its calculation will be given in a later section. The base station noise figure at
the receiver reference point depends on the radio receiver unit noise figure, the
tower-mounted amplifier (TMA) gain if any present, and the feeder loss. It is
calculated by the following formula based on cascaded noise figure analysis (i.e.,
Friis equation [1]):

Nf,B = 10 * log(Nf,TMA + (Nf,RU * Lf − 1) / GTMA)    (4.1)

expressed in decibels, in which all the quantities in the bracket must first be converted from decibels to linear scale before the computation. Note that the base station noise figure calculated by (4.1) already accounts for the feeder cable loss, Lf, so this loss should not be counted again in the UL link budgeting equation. The TMA is a low-noise amplifier (LNA), which is mounted
Coverage-Capacity Planning and Analysis 87

as close as practical to the antenna, and helps to reduce the radio receiver noise
figure and improves its overall sensitivity. The improved sensitivity enables
the base station to receive weaker signals.
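The cascaded noise figure of (4.1) can be sketched as follows (the numerical values in the example are illustrative assumptions, not taken from the text):

```python
import math

def db_to_lin(x_db):
    """Convert a decibel quantity to linear scale."""
    return 10 ** (x_db / 10.0)

def bs_noise_figure_db(nf_tma_db, nf_ru_db, feeder_loss_db, g_tma_db):
    """Cascaded (Friis) noise figure of TMA + feeder + radio unit, per (4.1).
    All inputs are in dB; the result is the base station noise figure in dB."""
    nf = db_to_lin(nf_tma_db) + (db_to_lin(nf_ru_db) * db_to_lin(feeder_loss_db) - 1) / db_to_lin(g_tma_db)
    return 10 * math.log10(nf)

# Assumed example: 1.5-dB TMA NF, 2.5-dB radio-unit NF, 3-dB feeder loss, 12-dB TMA gain
print(round(bs_noise_figure_db(1.5, 2.5, 3.0, 12.0), 2))  # → 1.97
```

Note that with the TMA present, the effective noise figure (1.97 dB here) sits well below the radio-unit noise figure plus feeder loss (5.5 dB), which is exactly the sensitivity improvement described above.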
The effective thermal noise per subcarrier at the base station receiver reference point, Nt,B, is then calculated as

Nt,B = Nt + 10 * log(15000) + Nf,B    (4.2)

and the effective thermal noise per subcarrier at the UE receiver input is

Nt,ue = Nt + 10 * log(15000) + Nf,ue    (4.3)

The per-subcarrier power transmitted by the user equipment, Pue,sc, is obtained from the following equation, assuming that the transmitted power is equally distributed over the allocated number of subcarriers:

Pue,sc = Pue − 10 * log(Wsc)    (4.4)

Likewise, the power transmitted per subcarrier by the base station on the DL, PB,sc, is given by

PB,sc = Pnom,Ref − Lf − 10 * log(Wsc)    (4.5)

It is noted that (4.4) and (4.5) can be easily modified to accommodate for
any power boosting option on the reference symbols.
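Equations (4.2) through (4.5) can be evaluated as in the following sketch (the power and noise-figure values are illustrative assumptions):

```python
import math

THERMAL_NOISE_DBM_HZ = -174.0  # Nt: thermal noise density at ~290 K

def noise_per_subcarrier_dbm(noise_figure_db, subcarrier_hz=15000):
    """Effective thermal noise per 15-kHz subcarrier, per (4.2)/(4.3)."""
    return THERMAL_NOISE_DBM_HZ + 10 * math.log10(subcarrier_hz) + noise_figure_db

def power_per_subcarrier_dbm(total_power_dbm, num_subcarriers, feeder_loss_db=0.0):
    """Per-subcarrier Tx power with equal power distribution, per (4.4)/(4.5)."""
    return total_power_dbm - feeder_loss_db - 10 * math.log10(num_subcarriers)

# Assumed: UE with 23 dBm over 6 PRBs (72 subcarriers);
# eNodeB with 46 dBm over 50 PRBs (600 subcarriers), 0.5-dB feeder loss
print(round(power_per_subcarrier_dbm(23, 72), 2))        # → 4.43
print(round(power_per_subcarrier_dbm(46, 600, 0.5), 2))  # → 17.72
print(round(noise_per_subcarrier_dbm(2.0), 2))           # → -130.24
```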

4.1.1.1 UL Noise Rise Analysis


The UL noise rise models the interference due to users from neighboring cells
and is used to set a margin to compensate for it in the link budgeting. The noise
rise can be calculated by the following equation:

NR,ul = 10 * log[(Nt,B + ρPUSCH * (1 − β) * ISC,ul) / Nt,B]    (4.6)

where all quantities in the bracket must be converted from dB or dBm to lin-
ear scale before the computation. The uplink interference per subcarrier, ISC,ul,
defines the interference on PUSCH when the network is fully loaded and de-
pends on the cell size, the cell layout geometry, and the uplink power control
target parameter P0 (see Chapter 5), with typical values in the range of −140 to −110 dBm. More specific values may be provided from system-
88 From LTE to LTE-Advanced Pro and 5G

level simulation results provided by vendors. The uplink load parameter, ρpusch, defines the fraction of the PUSCHs carrying user data. The operator may specify a load level for which the network is to be dimensioned. Otherwise,
different values of load level may be tested for achieving acceptable trade-offs
between coverage and capacity. The parameter β accounts for the positive im-
pact of any interference cancellation algorithm in the base station receiver and
ranges between 0 and 1, with the value 0 used when no interference rejection is
implemented and 1 for perfect interference cancellation. The noise reduction from interference rejection algorithms can be used to either increase the coverage (the
cell range) or obtain higher capacity and throughput. The value of the noise rise
is used as an estimated value for interference margin in the link budgeting and
recommended values are usually available in tabulated format versus the cell
load factor ρpusch for typical site configurations from vendors.
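A minimal sketch of the UL noise rise calculation in (4.6), with illustrative assumed inputs:

```python
import math

def ul_noise_rise_db(nt_b_dbm, load_pusch, beta, i_sc_ul_dbm):
    """UL noise rise per (4.6); dBm inputs are converted to linear (mW) first."""
    nt_lin = 10 ** (nt_b_dbm / 10.0)
    i_lin = 10 ** (i_sc_ul_dbm / 10.0)
    return 10 * math.log10((nt_lin + load_pusch * (1 - beta) * i_lin) / nt_lin)

# Assumed: 50% PUSCH load, no interference cancellation (beta = 0),
# -125-dBm per-subcarrier UL interference, -130.24-dBm effective noise
print(round(ul_noise_rise_db(-130.24, 0.5, 0.0, -125.0), 2))  # → 4.27
```

The result would be used directly as the interference margin in the UL link budget; raising the load or lowering β increases it.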
A cell load is also defined in LTE for both the UL and DL, but differently from the WCDMA system. The cell load in LTE is defined as the average PRB resource utilization, with the averaging performed over long periods on the order of minutes or even hours. The higher the cell load, the higher will be the interference from neighboring cells. With this definition, the cell load can be set to
100%. The cell load on uplink and downlink is what will determine the respec-
tive interference margins that will be required in the link budgeting process and
is the alternative means to the noise rise calculation for this purpose. The calcu-
lation of the UL and DL noise rises, as described above and in the following section for the DL, is complicated and requires parameters whose values cannot be easily estimated for each network design scenario, or requires iterative calculations. Therefore, the load factor is often the simpler practi-
cal means for setting the interference margins. Normally based on system simu-
lations on various network models and measurements obtained from practical
networks, vendors provide tabulated data that recommend the necessary link
margins for various targeted network loads. However, we will proceed with the analysis and formulation of the DL noise rise in the next section for the sake of exactness and to provide insight into the interacting factors and the impact of the network layout, site configuration, and traffic load on the noise rise and the required interference margins.

4.1.1.2 DL Noise Rise Analysis


Using the basic definition of noise rise [2], the downlink noise rise NRdl,ce due
to interference is calculated from the following equation

NRdl,ce = 10 * log[(NSC,DL + Fc * PB,SC / Lce * (γCCH + (1 − γCCH) * ρPDSCH)) / NSC,DL], dB    (4.7)

where the weighting in the bracket accounts for the fact that when a control
channel is the interferer, it is always transmitting data (i.e., control informa-
tion), whereas when a PDSCH from a neighboring cell is the interferer, it transmits power only for a fraction ρpdsch of the occasions, that is, its load factor. The PDSCH load may be set as a design goal to achieve a certain coverage requirement. The path loss factor at the cell edge, Lce, is the same as the linear-scaled version of the downlink path loss obtained from the link budgeting and hence ties the two values into an iterative loop until convergence, starting with
the best estimates (using, for instance, the path loss obtained from the UL) for
Lce. The Fc factor reflects the cell design quality and depends on the cell layout
geometry and the site configuration such as antenna height and tilts. Values for
this parameter in various scenarios for cell-edge areas can be obtained via system
level simulations. Typical values can fall in the range from around 1.3 to 3. The
parameter γCCH defines the fraction of time during which PDSCHs are interfered with by control channels; its value depends on the allocated system transmission bandwidth and on whether or not the network (i.e., the adjacent cells) is time-synchronized. Typical values can range from a few percent to the low 20s in percent, with the higher end applicable to non-time-synchronized networks, and may be obtained from the equipment vendor. The
value of the noise rise is used as an estimated value for interference margin
in the link budgeting. However, recommended values are usually available in
tabulated format versus the cell load factor ρpdsch for typical site configurations
from vendors.
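The DL noise rise in (4.7) can be sketched as follows (the parameter values used in the example, such as Fc = 1.5 and γCCH = 0.1, are illustrative assumptions):

```python
import math

def dl_noise_rise_db(n_sc_dl_dbm, fc, p_b_sc_dbm, path_loss_db,
                     gamma_cch, load_pdsch):
    """DL noise rise at the cell edge per (4.7); dB/dBm inputs are
    converted to linear scale before the computation."""
    n_lin = 10 ** (n_sc_dl_dbm / 10.0)
    rx_interf = 10 ** ((p_b_sc_dbm - path_loss_db) / 10.0)  # PB,SC / Lce, linear
    weight = gamma_cch + (1 - gamma_cch) * load_pdsch       # control/PDSCH weighting
    return 10 * math.log10((n_lin + fc * rx_interf * weight) / n_lin)

# Assumed: Fc = 1.5, 18.2-dBm subcarrier power, 135-dB cell-edge path loss,
# gamma_CCH = 0.1, 70% PDSCH load, -125.2-dBm effective noise per subcarrier
print(round(dl_noise_rise_db(-125.2, 1.5, 18.2, 135.0, 0.1, 0.7), 2))  # → 9.33
```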

4.1.1.3 Channel and Protocol Overheads


Before presenting the formulations for the link budgeting, we will estimate the
overheads due to the physical channel and the protocols used in layer 2 up to the
application layer. The link budgeting procedure that will be presented will not
require the user knowledge of the physical layer overheads in the transmission
process. However, the user must add layer 1 and the above protocol overheads
before inputting the data rates into the link budgeting process for coverage and
capacity dimensioning. The physical layer overheads will be referred to here as
the channel overheads and will consist of the overheads due to the transmission
of the reference symbols, and the in-band control channels. The overheads at
layer 2 and above will arise from the MAC, the RLC, and the IP, TCP or UDP,
and PDCP, and in the case of VoIP, the RTP, which is used as the application
layer protocol for transmission of real-time data. The application layers for data
will consist of protocols such as FTP and http.

Channel Overheads on the Downlink Side


The DL channel overheads will consist of the overheads from the reference
symbols and the control channels transmitted on the DL side. The number of
downlink reference symbols depends on the antenna configuration. For single-stream transmission with one TX antenna and one TX diversity antenna, the reference symbols for antenna 0 are transmitted on equally spaced
diversity, the reference symbols for antenna 0 are transmitted on equally spaced
subcarriers within the first and third from last OFDM symbol of each 0.5-ms
slot and on the sixth and 12th subcarriers for the first symbol location and on
the third and ninth subcarriers for the third from last symbol location within
each PRB. Likewise, the reference symbols for antenna 1 are transmitted on
equally spaced subcarriers within the first and third from last OFDM symbol
of each 0.5-ms slot but on the third and ninth subcarriers for the first symbol
location and on the sixth and 12th subcarriers for the third from last symbol
location within each PRB as can be seen from Figure 3.11, as given in Chapter
3. Since the UE must get an accurate channel estimate from each transmitting
antenna, when a reference signal is transmitted from one antenna port, the
other antenna ports in the cell are idle. This leads to a total of 16 symbols taken
off from each PRB carrying 168 OFDM symbols. However, up to the first
three symbols of each 1-ms subframe can be occupied by the downlink control
channels, PDCCH, excluding the four symbols occupied by the antenna 0 and
1 reference symbols. This results in another 3 × 12 − 4 = 32 control channel
symbols. Adding these to the 16 reference symbols calculated above for antenna ports 0 and 1 results in a total of 48 OFDM symbols, leaving 168 − 48 = 120
OFDM symbols for the PDSCH.
By similar consideration, using the illustration of the reference symbol locations for antenna ports 0, 1, 2, and 3 given in Figure 3.11, we can find that the 2 × 2 MIMO configuration consumes a total of 24 reference symbols, of which 8 symbols fall in the first three OFDM symbol locations on four subcarriers within each PRB. This leaves up to 36 − 8 = 28 symbols for the PDCCH. Thus, this configuration leaves a total of 168 − (28 + 24) = 116 symbols for the PDSCH. For the 4 × 4 MIMO configuration, which involves the use of antenna ports 2 and 3 as well, used alternately with the other antenna ports (see Figure 3.11), the same result holds as for the 2 × 2 MIMO configuration. These are the worst cases, in which the control symbols take up the first three symbols on each subcarrier; they are summarized in Table 4.1. There are cases when only the first symbol per subcarrier may be taken for the control channel, resulting in lower overheads than indicated in Table 4.1.
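The resource-element bookkeeping above can be reproduced as follows (one PRB pair of 12 subcarriers × 14 OFDM symbols = 168 resource elements per 1-ms subframe, normal cyclic prefix, using the worst-case counts from the text):

```python
def pdsch_res_per_prb(ref_symbols, pdcch_symbols, total=168):
    """REs left for PDSCH after reference-symbol and PDCCH overhead,
    for one PRB pair (12 subcarriers x 14 symbols, normal CP)."""
    overhead = ref_symbols + pdcch_symbols
    return total - overhead, 100.0 * overhead / total

# Single-stream / TX-diversity case: 16 RS + (3*12 - 4) = 32 PDCCH REs
left, pct = pdsch_res_per_prb(16, 3 * 12 - 4)
print(left, round(pct, 1))  # → 120 28.6

# 2x2 MIMO per the text: 24 RS + (36 - 8) = 28 PDCCH REs
left, pct = pdsch_res_per_prb(24, 36 - 8)
print(left, round(pct, 1))  # → 116 31.0
```

The two printed percentages match the 28.6% and 31% overheads listed in Table 4.1.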

Channel Overheads on the Uplink Side


Similarly, the UL channel overheads will consist of the overheads from the refer-
ence symbols and the control channels transmitted on the UL side. The uplink
reference signals consist of the demodulation and the sounding symbols, which
take 2 symbols out of 14 symbols in a subframe on every subcarrier resulting in
an overhead of 24 symbols per PRB. This leaves 144 symbols for data and con-
trol channels when there is signaling information to be transmitted. The uplink

Table 4.1
Worst-Case PDSCH Overheads Due to Reference Symbols and Control Signaling

                                              Number of DL Symbols Taken Up by the      Percentage
Antenna Configuration                         Reference + Control Channels per PRB      Overhead
Single-stream transmission with one TX
and one TX diversity antenna                  48                                        28.6%
2 × 2 MIMO                                    52                                        31%
4 × 4 MIMO                                    52                                        31%

control signaling consists of ACK/NACK, CQI, scheduling request indicator,


and MIMO codebook indicator (see Chapter 3), the amount of which varies with the traffic activity on the downlink and hence with the transmission bandwidth. This signaling information, whenever present, is multiplexed with the user data prior to the DFT operation to preserve the single-carrier nature of the uplink modulation scheme. Otherwise, when no data are present, it is carried in a reserved frequency region at the band edge. Since the uplink control signaling does not permanently occupy any portion of the uplink bandwidth and its presence cannot be predicted, there is no systematic way to estimate the incurred overhead precisely. However, based on experience and historical data, the following average overhead is normally assumed for the uplink control channel, dependent on the transmission bandwidth allocated to the network.
The averaged overhead given in Table 4.2 for the PUCCH translates into
12*14*(2/6, 4/15, 4/25, 6/50, 8/75, 10/100) = (56, 45, 27, 20, 18, 17) sym-
bols out of each PRB for the system bandwidths of 1.4, 3, 5, 10, 15, and 20
MHz, respectively. Combining this with the 24 symbols out of each PRB for
the uplink reference symbols, we get the following average overhead from the
uplink reference symbols and the control channels.
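The figures in Tables 4.2 and 4.3 can be reproduced from the PUCCH PRB allocations and the 24 demodulation/sounding reference symbols per PRB:

```python
# Per-PRB UL overhead: PUCCH share of the 168 REs plus 24 UL reference symbols
bandwidth_mhz = [1.4, 3, 5, 10, 15, 20]
total_prbs    = [6, 15, 25, 50, 75, 100]
pucch_prbs    = [2, 4, 4, 6, 8, 10]   # Table 4.2 values

for bw, n, p in zip(bandwidth_mhz, total_prbs, pucch_prbs):
    pucch_syms = round(12 * 14 * p / n)   # PUCCH symbols averaged per PRB
    total_ovh = pucch_syms + 24           # + UL demodulation/sounding RS
    print(bw, total_ovh, f"{100 * total_ovh / 168:.0f}%")
```

Running the loop reproduces the 80/69/51/44/42/41 symbol counts and the 48% down to 24% overheads of Table 4.3.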
The results summarized in Tables 4.2 and 4.3 give a feeling for how much transmission overhead is involved at the physical layer. It is seen that the channel overhead decreases as the bandwidth increases, which realizes a capacity gain. However, this is achieved with an associated power efficiency cost (A-to-D overhead for wider-bandwidth channels).

Overheads from Protocols at Layer 2 and Above


The overheads from the protocols used in layer 2 and above (starting from the
MAC sublayer) should be added to the user data rates in the form of a per-
centage increase before performing the link budget described in the next two

Table 4.2
Average Number of PRBs Consumed by the Uplink Control Channels

Transmission       Total Number   PRBs Taken on Average by the
Bandwidth (MHz)    of PRBs        Uplink Control Channels
1.4                6              2
3                  15             4
5                  25             4
10                 50             6
15                 75             8
20                 100            10

Table 4.3
Average Total Overhead from the Reference Symbols and the UL Control Channels in PUSCH

                                          Transmission Bandwidth (MHz)
Overhead                                  1.4    3      5      10     15     20
Average number of symbols taken per PRB   80     69     51     44     42     41
% overhead                                48%    41%    30%    26%    25%    24%

sections. These protocols consist of MAC, RLC, PDCP, IP, TCP, or UDP, and
the application layer protocols such as RTP for VoIP and FTP/http for data.
The header sizes for these protocols are given in Table 4.4. To calculate the total
overheads from the stated protocols, we note that at the bottom are the 1-ms radio subframes, which are the entities that carry the transport blocks. Within the transport block is the MAC header and any extra space filled by padding.
Then follows the RLC header, and then within the RLC PDU (protocol data
unit) there can be one or a number of PDCPs that take up the IP packets that
form the service data units (SDUs) coming from the top of the protocol stack.

Table 4.4
Typical Layer 2 Protocol Header Sizes

                      MAC       RLC   PDCP   IP (v6)   TCP (or UDP)   RTP (for VoIP)
Header size (bytes)   2 to 3*   2*    2*     40        20 (or 8)      12

*The sizes of the MAC and the RLC headers can range from 2 to 3 bytes and from 1 to 2 bytes, respectively, depending on the size of the length field in the case of the MAC and of the sequence number in the case of the RLC. Likewise, the size of the PDCP header can vary from 1 to 2 bytes depending on whether a short (5-bit) or a long (12-bit) sequence number is used. These are all configurable by the operator.

Multiple PDCPs are used within one RLC if more than one user or application
is multiplexed within the 1 subframe.
The RLC PDU size is not fixed because it is based on the transport block
size, which depends on the conditions of channels which the eNodeB assigns
to the UE on the downlink. The transport block size can vary based on the
bandwidth allocation and the modulation-coding scheme. The RLC PDU size
will also depend on the size of the packets (e.g., large packets for video or small
packets for voice over IP). If one RLC SDU cannot accommodate the data
packet, or the available radio data rate is low resulting in small transport blocks,
the RLC SDU may be split among several RLC PDUs. If the RLC SDU is
small, or the available radio data rate is high, several RLC SDUs may be packed
into a single PDU. In many cases, both splitting and packing may be present.
As a simple example, we will assume a single PDCP layer per RLC PDU and an average IP packet size of 300 bytes, which is based on statistics often reported for Internet traffic. Then 300 bytes of application data will incur 20 bytes for the TCP header, 40 bytes for IPv6, 1 byte for the PDCP header, 2 bytes for the RLC header, and 2 bytes for the MAC header, resulting in a total of 65 bytes of protocol overhead, or 100 × 65/300 = 21.7%. Thus, the desired user data rate from the application layer for which the link budgeting is performed must be boosted by a factor of about 1.22. This assumes that no header compression of any kind is applied (the worst case).
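This overhead accounting can be sketched as a small helper (header sizes per Table 4.4, single PDCP per RLC PDU, no header compression):

```python
# Uncompressed header stack for a TCP/IPv6 data flow (bytes, per Table 4.4,
# taking the 1-byte PDCP and 2-byte RLC/MAC variants)
HEADERS = {"TCP": 20, "IPv6": 40, "PDCP": 1, "RLC": 2, "MAC": 2}

def overhead_factor(payload_bytes):
    """Factor by which the application-layer data rate must be boosted
    before link budgeting, given an average payload size."""
    total_header = sum(HEADERS.values())
    return (payload_bytes + total_header) / payload_bytes

print(round(overhead_factor(300), 2))  # → 1.22
```

Smaller packets suffer proportionally more: `overhead_factor(100)` gives 1.65, which is why small-packet services such as VoIP rely on header compression.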
In the case of VoIP, with the 3GPP AMR codec rate of 12.2 kbps, voice
frames are collected every 20 ms, which results in 32-byte frames. In the case
of voice, the ROHC protocol (described in RFC 3095) is used to compress the protocol headers above PDCP to only 3 bytes in total. Adding to that 1 byte from the RLC (for unacknowledged-mode operation using a 5-bit sequence number), 2 bytes from the MAC, and a PDCP header of 1 byte using the short sequence number results in a total frame size of 39 bytes, of which the percentage header overhead is 100 × 7/39 = 18%. In other words, the link layer protocol headers add 100 × 7/32 = 22% to the voice frames. Thus, each voice connection will
bear a data rate of 1.22 × 12.2 kbps = 14.88 kbps in the link budgeting. Simula-
tion results presented in [3] show that the LTE capacity for voice over IP using the AMR codec at a rate of 12.2 kbps and a cell bandwidth of 5 MHz varies from 289 to 317 calls on the DL and from 123 to 241 calls on the UL under various voice scheduling schemes, control channel limitations, and channel conditions, with averages of around 320 calls on the downlink and 240 calls on the uplink per sector obtained in [4], indicating that the voice service is limited by the uplink. VoIP based on VoLTE (voice over LTE) is covered in detail in Chapter 12.

4.1.1.4 UL Link Budgeting


Most often the uplink is the limiting link, and hence the link budgeting starts
out with budgeting this link first. The objective will be to obtain the maximum

allowed path loss, Lpmax , from the UE to the eNodeB receiver that will meet
an operator specified uplink bit rate (throughput) at the cell edge. The path
loss, Lpmax , obtained from the uplink budgeting is also the starting point of
the downlink calculations and is used to obtain the downlink noise rise esti-
mate formulated in previous section. The required cell-edge bit rate and the
transmission resources (number of physical resource blocks (PRBs)] that may
be allocated to cell-edge users translate into the MCS needed to support the
specified bit rate. This process was basically described in Chapter 3. The uplink bit rate at the physical layer is first found by adding to the desired net user data rate in bps all relevant protocol overheads above layer 1 (excluding the reference symbols and control channels), such as MAC, RLC, PDCP, and TCP or UDP. This rate is then scaled down by 1,000 to find the necessary transport block size (TBS) in bits (the bits transmitted in 1 ms). With the TBS thus found and the number
of PRBs allocated for the connection, the 3GPP tables mentioned in Chapter
3 are used as explained there to find the TBS index associated with the TBS.
Then other tables mentioned in Chapter 3 are used to map the TBS index into
the MCS index. From the MCS index, the coding rate is found with the help
of the relevant table given in Chapter 3.
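The first step of this chain, from the target user rate to the required TBS, can be sketched as follows; the subsequent TBS-index and MCS-index lookups use the 3GPP tables referenced in Chapter 3 and are omitted here. The 1.22 overhead factor is the illustrative value derived earlier:

```python
def required_tbs_bits(user_rate_bps, protocol_overhead_factor=1.22):
    """Transport block size (bits) needed per 1-ms subframe to carry a
    target net user data rate, after adding protocol overheads."""
    phy_rate_bps = user_rate_bps * protocol_overhead_factor
    return phy_rate_bps / 1000.0  # bits per 1-ms TTI

# Assumed 512-kbps cell-edge target with ~22% protocol overhead
print(round(required_tbs_bits(512_000)))  # → 625
```

The resulting 625 bits per TTI, together with the number of PRBs allocated, would then be mapped to the nearest valid TBS and its MCS index.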
The MCS and the coding rate thus found translate into the required
SINR using link level simulation data from the vendor. The higher the allocated
transmission bandwidth, that is the number of PRBs allocated, the more robust
will be the required MCS (more coding and lower order modulation) and hence
a smaller SINR that will be needed. The smaller SINR will help to increase
the coverage by allowing higher path losses. The allocation of the transmission
resources (resource blocks) to the cell-edge users which then determines the
necessary MCS provides the trade-offs between coverage and resource usage.
The more robust MCS, requiring less SINR, will consume more transmission
resources and hence leave less for other scheduled users. However, the cell-edge criterion may be set to be the cell-edge throughput, meaning the bit rate that could be achieved by a single UE placed at the cell-edge location and consuming the resources. In that case, there is still a similar trade-off between coverage (the cell range) and the coverage quality, that is, the cell-edge throughput that can be achieved. The more robust MCS will increase the cell range but incur more overhead (due to coding and the lower modulation order) in the resource usage, thus leaving less for transmitting the actual user data.
The SINR requirement and the uplink noise rise due to interference al-
low the calculation of the minimum signal power required per subcarrier at the
base station receiver reference point. The uplink maximum allowed path loss
is then obtained by subtracting from the UE transmitted power per subcarrier
the receiver minimum signal power required and then adding all diversity and
antenna gains and subtracting the various losses due to feeder, building or car

penetration, body loss if any, and the margin for lognormal slow fading. The re-
ceiver sensitivity on a per-subcarrier basis, excluding the noise rise, is calculated from

SB = SINRul + Nt,B

where Nt,B is the effective total thermal noise at the input to the eNodeB receiver, given by (4.2) as

Nt,B = Nt + 10 * log(15000) + Nf,B

in which Nf,B is the TMA-accounted noise figure of the base station receiver given by (4.1). Substituting into the above expression for the eNodeB sensitivity, we obtain

SB = SINRul + Nt + 10 * log(15000) + 10 * log(Nf,TMA + (Nf,RU * Lf − 1) / GTMA)    (4.8)

The maximum allowed path loss on the uplink is then obtained by the
following expression

Lpmax,ul = Pue,sc + Gtx,ue − SB − NRul − MLNF − LBP − LB − LJ + GRx,B    (4.9)

Alternatively, instead of using the noise rise NRul in the above equation, one may set an interference margin based on the targeted UL load. Tabulated data from vendors, based on system simulations performed on typical network layouts and measurements collected from practical networks, are usually available to guide setting the interference margin required for the targeted load. In that case, (4.9) takes the form:

Lpmax,ul = Pue,sc + Gtx,ue − SB − IMul − MLNF − LBP − LB − LJ + GRx,B    (4.10)

in which IMul is the interference margin for the targeted load as obtained from
vendors. Next, the maximum allowed path loss from link budgeting the down-
link side is obtained and compared with the above value from UL. The small-
est of the two is then used to dimension the cell range and thereby obtain the
number of sites necessary to meet the specified coverage requirements.
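The UL budget of (4.9)/(4.10) can be sketched as follows, using the illustrative values that also appear in the Table 4.5 template (these are assumptions for the example, not universal figures):

```python
def ul_max_path_loss_db(p_ue_sc_dbm, g_tx_ue_db, sinr_ul_db, nt_b_dbm,
                        interference_margin_db, m_lnf_db, l_bp_db,
                        l_b_db, l_j_db, g_rx_b_db):
    """Maximum allowed UL path loss per (4.9)/(4.10), with the eNodeB
    per-subcarrier sensitivity SB = SINRul + Nt,B from (4.8)."""
    s_b = sinr_ul_db + nt_b_dbm  # eNodeB sensitivity per subcarrier
    return (p_ue_sc_dbm + g_tx_ue_db - s_b - interference_margin_db
            - m_lnf_db - l_bp_db - l_b_db - l_j_db + g_rx_b_db)

# Assumed: 4.43-dBm subcarrier power, 0-dBi UE gain, -4.83-dB SINR target,
# -130.24-dBm effective noise, 0.5-dB interference margin, 9.43-dB shadow
# margin, 19-dB penetration loss, no body/jumper loss, 18-dBi eNodeB gain
print(round(ul_max_path_loss_db(4.43, 0.0, -4.83, -130.24,
                                0.5, 9.43, 19.0, 0.0, 0.0, 18.0), 2))  # → 128.57
```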

4.1.1.5 DL Link Budgeting


The objective of link budgeting on DL will be similarly to obtain the maximum
path loss that can still allow supporting a certain operator specified bit rate
(throughput) at cell edge with a certain amount of transmission bandwidth (or

resource blocks). The required cell-edge bit rate is translated into the equivalent
bit rate in bps at the physical layer by incorporating protocol overheads from
MAC, RLC, PDCP, TCP, or UDP (excluding reference symbols and control
channel overheads) as applicable. Then the result is scaled down by 1,000 to
find out the transport block size in bits. The TBS thus found and the transmis-
sion bandwidth in terms of number of PRBs are mapped into the TBS index
and then into the MCS index in the same manner that was explained for uplink
link budgeting (and as explained in Chapter 3).
The MCS and coding rate required then determine the necessary SINR at the input to the user equipment receiver, using the tabulated link-level simulation results provided by the vendor. The trade-offs between the transmission bandwidth resources that may be allocated to cell-edge users and the resulting coverage and overall cell capacity are similar to those explained for the uplink case. From the SINR, the UE receiver sensitivity excluding the interference-caused noise is obtained by adding the thermal noise calculated in (4.3). The maximum allowed path loss is then obtained by subtracting
from the base station transmitted subcarrier power, PB,SC, given by (4.5), the
UE sensitivity, the noise rise due to DL interference, the various losses and add-
ing and subtracting the various gains and losses. The UE sensitivity excluding
noise rise due to interference is

Sue,sc = SINRdl + Nt + 10 * log(15000) + Nf,ue    (4.11)

Then the maximum allowed path loss, Lpmax,dl, is calculated from the expression

Lpmax,dl = PB,sc − Sue,sc − NRdl,ce − MLNF − Lf − LJ − LBP − LB + GTx,B + GRx,ue    (4.12)

in which the DL noise rise, NRdl,ce, was given in (4.7) as

NRdl,ce = 10 * log[(NSC,DL + Fc * PB,SC / Lce * (γCCH + (1 − γCCH) * ρPDSCH)) / NSC,DL]

where Lce is just the linear-scale version of Lpmax, that is,

Lce = 10^(Lpmax/10)    (4.13)



Alternatively, instead of using the noise rise NRdl,ce in (4.12), one may set an interference margin based on the targeted DL cell-edge load. Tabulated data from vendors, based on system simulations performed on typical network layouts and measurements collected from practical networks, are usually available to guide setting the interference margin required for the targeted load. In that case, (4.12) takes the form:

Lpmax,dl = PB,sc − Sue,sc − IMdl − MLNF − Lf − LJ − LBP − LB + GTx,B + GRx,ue    (4.14)

in which IMdl is the interference margin for the targeted load as obtained from vendors.
We notice from the expression for the noise rise, NRdl,ce, in the formula for the maximum allowed path loss that it itself depends on the path loss to be calculated (the Lce term). Hence, the above two equations tie into an iterative loop that is run to convergence from an initial best estimate for Lpmax,dl. The best starting value for this iteration is the maximum allowed path loss Lpmax,ul obtained from the uplink budgeting in (4.9). On convergence, the final value obtained for Lpmax,dl is compared with the value obtained on the uplink, and the smaller of the two is used as input to an RF propagation model suited to the environment to estimate the cell range. Figure 4.1 provides a schematic illustration of this process of network dimensioning for coverage using link budgeting on the UL and DL. A simplified template for the calculation of the uplink and downlink path losses in Excel format is given in Table 4.5.
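The iterative dependence between (4.12) and (4.7) can be sketched as follows; all parameter values in the example (Fc, γCCH, load, margins) are illustrative assumptions:

```python
import math

def dl_max_path_loss_db(p_b_sc_dbm, s_ue_sc_dbm, losses_db, gains_db,
                        n_sc_dl_dbm, fc, gamma_cch, load_pdsch,
                        lp_init_db, iterations=20):
    """Iterate (4.12) with (4.7): the DL noise rise depends on Lce, which in
    turn depends on the path loss being computed, so the loop is run to
    convergence starting from the UL estimate. A fixed iteration count is
    used here for simplicity."""
    lp = lp_init_db
    for _ in range(iterations):
        n_lin = 10 ** (n_sc_dl_dbm / 10.0)
        interf = 10 ** ((p_b_sc_dbm - lp) / 10.0)  # PB,SC / Lce in linear scale
        weight = gamma_cch + (1 - gamma_cch) * load_pdsch
        nr_db = 10 * math.log10((n_lin + fc * interf * weight) / n_lin)
        lp = p_b_sc_dbm - s_ue_sc_dbm - nr_db - losses_db + gains_db
    return lp

# Assumed: 21.2-dBm subcarrier power, -127.0-dBm UE sensitivity, 9.4 dB of
# lumped margins/losses, 18-dBi antenna gain, 20% PDSCH load, gamma_CCH = 0.1,
# Fc = 1.5, and the UL path-loss estimate 128.6 dB as the starting point
lp = dl_max_path_loss_db(21.2, -127.0, 9.4, 18.0, -125.2, 1.5, 0.1, 0.2, 128.6)
print(round(lp, 1))  # → 156.6
```

Note that for heavily interference-dominated parameter sets the target SINR may be unreachable at any distance, in which case the loop does not settle; in practice this is another reason vendors supply fixed interference margins instead.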
The performance of the control channels at the cell range decided on this basis should also be checked, to make sure that their required SINRs are satisfied at the cell edge. Otherwise, the cell range should be reduced until the control channel performance is no longer a limitation. The final value decided is then used to obtain the number of sites required, using the site configuration and geometry as discussed in later sections. Simulation results in [4] indicate that for low-bit-rate services the uplink random access channel coverage may be the limiting factor instead of the PDSCH/PUSCH. The following two sections present the analysis and formulations for checking the coverage of the downlink and uplink control channels.

4.1.1.6 Downlink Control Channel Coverage Verification
In order to be able to meet the cell edge performance obtained on the DL and
UL data channels, the UE must be able to properly receive and decode the sig-
naling information transmitted on the control channels as well. On the down-
link, the PDCCH is the limiting control channel. The PDCCH is interfered with by both the PDSCH data channels and the control channels from neighboring cells in a nonsynchronized network. In a time-synchronized network, the PDCCH is interfered with only by the control channels from other cells.

Figure 4.1 Flowchart for coverage-based radio access network dimensioning (using the noise rise estimation approach).

Table 4.5
Simplified Link Budgeting Template

Parameter or category                         Uplink      Downlink    Comment (formula used when a value is calculated)
Channel                                       PUSCH       PDSCH
User environment                              Indoor      Indoor
System bandwidth (MHz), a                     10.0        10.0
Cell-edge rate (kbps)                         512.00      2048.00
MCS                                           QPSK 0.30   QPSK 0.36

Tx
Max total Tx power (dBm), b                   23.00       49.00
Allocated RBs                                 6           30
RBs to distribute power, c                    6           50
Subcarriers to distribute power, d            72          600         c * 12
Subcarrier power (dBm), e                     4.43        21.22       b − 10*log(d)
Tx antenna gain (dBi), f                      0.00        18.00
Tx cable loss (dB), g1                        0.00        0.50
TMA insertion and jumper loss (dB), g2        0.00        0.00
Tx body loss (dB), h                          0.00        0.00
EIRP per subcarrier (dBm), i                  4.43        38.72       e + f − g1 − g2 − h

Rx
SINR (dB), j                                  −4.83       −1.84
Rx noise figure (dB), k                       2.00        7.00        On the base station side, this figure should account
                                                                      for any TMA gain and feeder cable loss using (4.1)
TMA gain (dB)                                 0.00        0.00
Receiver sensitivity (dBm), l                 −135.06     −127.08     j − 174 + 10*log(15000) + k
Rx antenna gain (dBi), m                      18.00       0.00
Rx cable loss (dB), n                         0.00        0.00        On the base station side this is already accounted
                                                                      for in the noise figure calculation (4.1)
Rx body loss (dB), o                          0.00        0.00
Noise rise (dB) due to interference, or the   0.50        5.00        (4.9)/(4.10) for UL and (4.12)/(4.14) for DL
interference margin (estimated from
assumed load parameters), p
Min signal reception strength (dBm), q        −134.56     −122.08     l + p

Path loss and cell radius
Penetration loss (dB), r                      19.00       19.00
Std of shadow fading (dB), s                  11.70       11.70
Area coverage probability, t                  95.00%      95.00%
Shadow fading margin (dB), u                  9.43        9.43        Func(s, t), obtained from tables
Path loss (dB), v                             128.56      132.36      i + m − n − o − q − r − u

The control channel SINR at the cell edge must therefore be calculated
and checked against the vendor recommended values to ensure adequate per-
formance. If the calculated cell-edge SINR is lower, the cell-edge throughput
will be degraded. If the SINR is considerably lower than the recommended
value, the site-to-site distance cell range needs to be reduced. To calculate the
PDCCH SINR at cell edge, we will first calculate the noise rise on the control
channel at the cell edge as follows:

N t ,ue + PB ,SC .Fc ρPDCCH / Lce


N R ,dl ,PDCCH = (4.15)
N t ,ue

in which all the quantities must be converted into linear scale (from dB or
dBm) before the calculation. The ρPDCCH was defined earlier at the beginning
of this chapter as the PDCCH load, that is, the fraction of occasions a PDCCH
resource is used. By setting this parameter to 100%, the noise rise for the ex-
treme case of fully loaded PDCCH can be calculated.
The control channel SINR at the cell edge is then calculated by

SINRPDCCH,celledge = PB,SC − Nt,ue − NR,dl,PDCCH − Lpmax − MLNF − Lf
                    − LJ − LBP − LB + GTx,B + GRx,ue    (4.16)

where Lpmax is the maximum allowed path loss resulting from link budgeting of
the UL and DL data channels. The parameter Lce in the noise rise equation is
the linear-scale version of Lpmax.
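Equations (4.15) and (4.16) can be sketched as follows (a minimal illustration; the function names are ours, Fc is passed in linear scale, and the miscellaneous loss terms of (4.16) are lumped into one argument):

```python
import math

def pdcch_noise_rise_db(n_t_ue_dbm, p_b_sc_dbm, f_c_linear, rho_pdcch, l_ce_db):
    """Noise rise on the PDCCH at the cell edge per (4.15).
    dBm/dB inputs are converted to linear scale internally."""
    n_t_ue_mw = 10 ** (n_t_ue_dbm / 10)                      # thermal noise per subcarrier
    intercell_mw = 10 ** ((p_b_sc_dbm - l_ce_db) / 10) * f_c_linear * rho_pdcch
    return 10 * math.log10((n_t_ue_mw + intercell_mw) / n_t_ue_mw)

def pdcch_cell_edge_sinr_db(p_b_sc_dbm, n_t_ue_dbm, noise_rise_db, l_pmax_db,
                            shadow_margin_db, other_losses_db, g_tx_db, g_rx_db):
    """Cell-edge PDCCH SINR per (4.16); other_losses_db lumps the feeder,
    jumper, body, and penetration loss terms (Lf, LJ, LBP, LB)."""
    return (p_b_sc_dbm - n_t_ue_dbm - noise_rise_db - l_pmax_db
            - shadow_margin_db - other_losses_db + g_tx_db + g_rx_db)
```

Setting rho_pdcch = 1.0 reproduces the fully loaded PDCCH case discussed in the text.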

4.1.1.7 Uplink Control Channel Coverage Verification


On the uplink, the limiting control channel is the ACK/NACK signaling mes-
sages transmitted on the PUCCH. If the decoding of these signaling messages
repeatedly fails, the downlink cell-edge throughput is degraded. Since the UEs
multiplex the ACK/NACK messages on the same resource block, the PUCCH
ACK/NACK messages are interfered with by both the intracell and intercell
signaling. The intracell interference is modeled by the nonorthogonality
factor µ.
In order for the performance of the ACK/NACK transmissions not to
limit the downlink cell-edge bit rate, the SINR for the PUCCH ACK/NACK
signaling must meet the requirement recommended by the vendor. This SINR
can be calculated from the following formula:

SINRPUCCH,Ack/Nack = Minimum{P0,pucch,sc , PUE,SC − Lce} − Nt,B
                     − 10·log10[(µ + Fu)·ρPUCCH,A/N·10^(P0,PUCCH,SC/10)]        (4.17)

Coverage-Capacity Planning and Analysis 101

The PUCCH power control algorithm adjusts the received signal strength
towards the target P0,pucch,SC defined here on a per subcarrier basis (on 15-kHz
bandwidth) in decibels. The parameter ρpucch A/N is the number of simultane-
ously transmitted ACK/NACK on PUCCH in a cell. A value of 2 is recom-
mended. The parameter µ as mentioned earlier conveys the nonorthogonality
factor to model intracell PUCCH interference. A value of 0.2 is recommended
for dimensioning.
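A literal transcription of (4.17) might look as follows (names are ours; the recommended values µ = 0.2 and ρPUCCH,A/N = 2 from the text can be passed in):

```python
import math

def pucch_acknack_sinr_db(p0_pucch_sc_dbm, p_ue_sc_dbm, l_ce_db,
                          n_t_b_dbm, mu, f_u, rho_pucch_an):
    """PUCCH ACK/NACK SINR, a literal transcription of (4.17).
    The received level is the power-control target P0 unless the UE is
    power-limited at path loss l_ce_db; mu models the intracell
    nonorthogonality and f_u the uplink intercell interference factor."""
    received_dbm = min(p0_pucch_sc_dbm, p_ue_sc_dbm - l_ce_db)
    interference_db = 10 * math.log10(
        (mu + f_u) * rho_pucch_an * 10 ** (p0_pucch_sc_dbm / 10))
    return received_dbm - n_t_b_dbm - interference_db
```

Once the UE becomes power-limited (PUE,SC − Lce below the target), the result degrades with further path loss; otherwise it is pinned to the power-control target.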

4.1.1.8 Estimating the Number of Sites


With a given cell radius, the actual site coverage area depends on the site
configuration.
Assuming a hexagonal cell geometry, several site models, namely
omnidirectional, bisector, trisector, and six-sector sites, are considered. The
areas for these sites can be calculated from the formulas given in Table 4.6 [2].
The number of sites that need to be deployed is then easily calculated
from the following simple formula:
Number of sites required = Service Area / Site Area        (4.18)
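The site-area factors of Table 4.6 and the site count of (4.18) can be combined in a few lines; a sketch (the dictionary layout and rounding up to whole sites are our own choices):

```python
import math

# Site area factors from Table 4.6 (hexagonal geometry): area = factor * R^2
SITE_AREA_FACTOR = {
    "omni": 2.6,
    "bisector": 1.3,
    "trisector": 1.95,
    "six-sector": 2.6,
}

def sites_required(service_area_km2, cell_radius_km, config="trisector"):
    """Number of sites per (4.18), rounded up to a whole site."""
    site_area_km2 = SITE_AREA_FACTOR[config] * cell_radius_km ** 2
    return math.ceil(service_area_km2 / site_area_km2)
```

For example, a 100-km² area with a 1-km trisector cell radius needs 100/1.95 ≈ 52 sites.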

4.1.1.9 Propagation Path Loss Models


The radio propagation path loss models play a significant role in the planning
of wireless communication systems. The path loss models are used to estimate
the expected (mean) value of the signal path loss at a certain distance from the
transmitter and are used in the link budgeting based dimensioning to calculate
the cell range from the maximum allowed path loss, as well as in system capacity
simulations. The path loss depends on the signal frequency, the antenna heights
of the transmitter and base station, and the specifics of the terrain and the clut-
ter such as urban, dense urban, suburban, rural, vegetation, forestry, water, and
the building types and density at a finer level. Generally, the mean path loss,
L̄p(d), can be expressed with reference to the path loss at a reference distance
d0 (located in the far field of the radiating antenna) and a path-loss distance
exponent (which depends on the environment) in the form [5]:

L̄p(d) = L̄p(d0) + 10n·log10(d/d0)        all in dB        (4.19)

Table 4.6
Site Area Calculation Formulas

Site Configuration    Site Area Formula
Omnidirectional       2.6 × (cell radius)²
Bisector              1.3 × (cell radius)²
Trisector             1.95 × (cell radius)²
Six-sector            2.6 × (cell radius)²

where n is the path-loss distance exponent (describing how the loss varies with
distance), and L̄p(d0) is the mean path loss at the reference distance d0, which
captures the antenna height, frequency, and environmental dependencies.
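A minimal sketch of (4.19) in Python (the function name is ours):

```python
import math

def log_distance_path_loss_db(d_km, d0_km, pl_d0_db, n):
    """Mean path loss per (4.19): reference loss at d0 plus a slope of
    10*n dB per decade of distance."""
    return pl_d0_db + 10 * n * math.log10(d_km / d0_km)
```

With n = 3.5 and an 80-dB reference loss at 100m, the predicted loss at 1 km is 80 + 35 = 115 dB.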
There are basically three different categories of path loss models, referred
to as statistical or empirical, deterministic, and semideterministic. The statisti-
cal models are formulas that describe the path loss versus distance on an average
scale. They are derived from statistical analysis of a large number of measure-
ments obtained in typical distinct environments such as urban, suburban, and
rural, which are incorporated in the form of tabulated data or best-fit formulas
for average path loss calculation versus distance in the particular environment.
They do not require detailed site morphological information. Examples are
the Okumura-Hata models. The deterministic models such as those based on
ray tracing apply RF signal propagation techniques to a detailed site morphol-
ogy description (using building and terrain databases) to estimate the signal
strength resulting from multiple reflections and line of sight at various pixels in
the area. The simple, rather idealized free-space and two-path ground reflec-
tion models may also be classified under the deterministic models. Because
of the complexities involved in such models, they are mainly developed
for simple environments such as indoor and small microcell settings. Ray-tracing
examples of such models are used for modeling small microcells in urban and
dense urban areas. The semideterministic or semistatistical models are based
on a mixture of deterministic method of following individual signal propaga-
tion effects from site-specific morphology and a statistical generalization and
calibration of model parameters based on collected path loss measurements.
These models require more information than the empirical models but less than
the deterministic models. Examples are COST 231 Walfisch-Ikegami and the
generalized tuned Hata models.

Okumura-Hata Path Loss Model


The Okumura-Hata formula [6] for the propagation loss has the following
form:

Lp(d) = 69.55 + 26.16·log10(f) − 13.82·log10(hb)
        + (44.9 − 6.55·log10(hb))·log10(d) − a(hm) − Qr        dB        (4.20)

where f is the carrier frequency with 150 MHz < f < 1,500 MHz, hb = 30m to
200m is the base station antenna height, hm = 1m to 10m is the mobile antenna
height, d = 1 km to 20 km is the distance from the transmitter, and a(hm)
depends on the size of the city in which the model is used, as given next.

For a medium or small city:

a(hm) = (1.1·log10(f) − 0.7)·hm − (1.56·log10(f) − 0.8)        (4.21a)

and for a large city:

a(hm) = 8.29·[log10(1.54·hm)]² − 1.1        f ≤ 200 MHz        (4.21b)

a(hm) = 3.2·[log10(11.75·hm)]² − 4.97        f ≥ 400 MHz        (4.21c)

Qr is a correction factor for open areas given as

Qr = 4.78·(log10 f)² − 18.33·log10 f + 40.94        all frequencies (f in MHz)        (4.21d)

and is zero for all other area types.


In practice, the mobile station antenna height is usually set at 1.5m (hm =
1.5m). In that case, the expressions for a(hm) become very close to zero and are
not very sensitive to variations of the mobile antenna height.
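The Okumura-Hata formulas (4.20) through (4.21d) translate directly into code; a sketch (the function signature and the handling of the unspecified 200-400-MHz range in the large-city correction are our own choices):

```python
import math

def okumura_hata_db(f_mhz, hb_m, hm_m, d_km, city="medium", open_area=False):
    """Okumura-Hata path loss per (4.20)-(4.21d).
    Valid for 150-1500 MHz, hb = 30-200m, hm = 1-10m, d = 1-20 km."""
    if city == "large":
        if f_mhz <= 200:
            a_hm = 8.29 * math.log10(1.54 * hm_m) ** 2 - 1.1
        else:  # the source gives this branch for f >= 400 MHz; we also
               # apply it in the unspecified 200-400-MHz gap
            a_hm = 3.2 * math.log10(11.75 * hm_m) ** 2 - 4.97
    else:  # medium or small city, (4.21a)
        a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * hm_m
                - (1.56 * math.log10(f_mhz) - 0.8))
    q_r = 0.0
    if open_area:  # open-area correction (4.21d)
        q_r = (4.78 * math.log10(f_mhz) ** 2
               - 18.33 * math.log10(f_mhz) + 40.94)
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(hb_m)
            + (44.9 - 6.55 * math.log10(hb_m)) * math.log10(d_km)
            - a_hm - q_r)
```

As expected, the predicted loss grows with distance, drops with base station antenna height, and is substantially lower in open areas.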

Cost 231 Hata Path Loss Model


The Okumura-Hata model was extended in the Cost 231 project [7] to the
frequency bands of 1,500 to 2,000 MHz, which include the band allocated
to the third-generation networks. The modified model is referred to as the Cost
231 Hata model and is given here:

Lp(d) = 46.3 + 33.9·log10(f) − 13.82·log10(hb)
        + (44.9 − 6.55·log10(hb))·log10(d) − a(hm) + Cclutter        dB        (4.22)

1,500 MHz ≤ f ≤ 2,000 MHz

where

a(hm) = (1.1·log10(f) − 0.7)·hm − (1.56·log10(f) − 0.8)        (4.23)

and Cclutter is a clutter loss correction given by Cclutter = 0 dB for medium-sized
cities and suburban centers with moderate tree density, and Cclutter = 3 dB for
metropolitan centers.
The modeler may use measurements for his or her particular environment
to find the best value for Cclutter. Model tuning is discussed in [2].
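The Cost 231 Hata extension (4.22)-(4.23) differs from the Okumura-Hata implementation only in the constants and the clutter term; a sketch under the same assumptions (function name is ours):

```python
import math

def cost231_hata_db(f_mhz, hb_m, hm_m, d_km, c_clutter_db=0.0):
    """Cost 231 Hata path loss per (4.22)-(4.23); valid for 1,500-2,000 MHz.
    c_clutter_db: 0 dB for medium cities/suburban, 3 dB for metropolitan."""
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * hm_m
            - (1.56 * math.log10(f_mhz) - 0.8))
    return (46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(hb_m)
            + (44.9 - 6.55 * math.log10(hb_m)) * math.log10(d_km)
            - a_hm + c_clutter_db)
```

Setting c_clutter_db = 3.0 shifts the entire curve up by exactly 3 dB, which is how the clutter correction is meant to work.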
There is also a variation of Cost 231 Hata model referred to as the Cost
231 Walfisch-Ikegami model [8], which uses more site-specific building clutter

data to capture the propagation effects due to reflections and rooftop diffrac-
tions. The model therefore has deterministic elements rather than being purely
statistical. It is more complex than the Okumura-Hata models and is better suited
to smaller macrocells in urban areas, or to microcells where the antenna is placed
at around rooftop level. The model provides different formulas for the case in
which there is a line-of-sight (LOS) signal component and the case in which there
is no LOS component; it is discussed further in [6, 8].

Generalized Path Loss Models and Model Tuning


The generalized path loss models are the semideterministic models that try to
capture more of the specifics of the terrain and morphology into consideration.
They are particularly useful in the detailed system-level simulation for capacity
analysis where more accurate RF propagation predictions and signal coverage
validations are necessary in various locations within the cells.
A generalized form of the Hata statistical prediction model is usually used
in the form of:

PRx = PTx + K1 + K2·log(d) + K3·log(Heff) + K4·D
      + K5·log(Heff)·log(d) + K6·log(hmeff) + Kclutter        (4.24)

where K1, K2, K3, K5, and K6 are coefficients that are also present in the origi-
nal Hata formula, but are left here unspecified for further tuning for the area.
Their values should not be changed much from the original values used in the
Hata formula to keep the model structure reliable. The coefficient K4, which
multiplies a diffraction loss D, is included to adjust for diffraction losses
caused by buildings or terrain irregularities in the line of sight of the receiver.
The effective antenna heights for the base station and mobiles, Heff, and hmeff,
are calculated by considering the terrain profile and the area to be covered. The
effective antenna height values can vary depending, for instance, on whether the
entire area is to be modeled or just a hilly road with antennas placed along it.
For an undulating hilly area, the effective antenna heights are normally
calculated relative to the average terrain height as referenced to sea level.
A new variable clutter correction parameter, Kclutter, is included to adapt
the equation to each morphological class. This parameter allows the same for-
mula structure to be used for each different environment and land usage class
such as urban, suburban, open, and rural by simply shifting the basic curve to
fit the particular clutter type.
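The tuning step can be illustrated with a least-squares fit. The sketch below fits only the intercept K1 and the distance slope K2 of (4.24) to (hypothetical) drive-test path loss data, holding the remaining K terms at their defaults (absorbed into the measured loss); fitting all coefficients would follow the same idea with a multivariate regression:

```python
import math

def tune_k1_k2(d_km, loss_db):
    """Ordinary least-squares fit of K1 (intercept) and K2 (slope versus
    log10 distance) in the generalized Hata model (4.24)."""
    xs = [math.log10(d) for d in d_km]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(loss_db) / n
    k2 = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, loss_db))
          / sum((x - mean_x) ** 2 for x in xs))
    k1 = mean_y - k2 * mean_x
    return k1, k2
```

Fitting synthetic measurements generated from a 128.1 + 37.6·log10(d) law recovers those coefficients exactly, a useful sanity check before feeding in real drive-test data.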
The model resolution defines the smallest bin size over which RF propa-
gation predictions and signal coverage validations can be performed. The work
reported in [9] concludes that the optimum model prediction resolution de-
pends on the intended cell coverage radius in the following way

Optimum RF prediction resolution = R/40 (4.25)

where R is the intended cell coverage radius.


More on model tuning can be found in [2].

Winner Path Loss Models


The Wireless World Initiative New Radio (WINNER) Consortium has devel-
oped propagation models for several different propagation scenarios in the fre-
quency range of 2 to 6 GHz. The models are ray-based double-directional mul-
tilink models that are antenna-independent and scalable. These models allow
statistical distributions and channel parameters extracted from measurements
in any propagation scenario to be fitted to the model. The latest model, known
as the Phase II model, forms the basis for the ITU IMT-Advanced models. It
extends the frequency range to cover frequencies from 2 to 6 GHz and covers
13 propagation scenarios including indoor, outdoor-to-indoor (and vice versa),
urban micro cell and macro cell, and corresponding difficult urban scenarios,
suburban and rural macro cell. It is possible to vary the number of antennas, the
antenna configurations, the geometry and the antenna beam pattern without
changing the basic propagation model. Further information on these models is
found in [10].

Lognormal Fade Margin and Coverage Probability


Since most path loss models are statistical models that are derived from lim-
ited measurements carried out in the typical environments for which they are
planned, they have limited accuracy as well as limited area resolutions. For the
same typical environment of an urban area, not all locations at a given distance
from a transmitter will have the same exact radio path characteristics. This con-
sideration is in addition to the fact that the statistical path loss models predict
path losses at a certain distance with limited area or distance resolutions.
Measurements have shown that at a given distance r from a transmitter,
the path loss Lp(r) is a random variable having a lognormal distribution about
the mean distance-dependent value predicted by a path loss model [4]. Thus,
the path loss Lp(r) can be expressed in terms of L̄p(r) plus a random variable Xσ:

Lp(r) = L̄p(r) + Xσ        all in dB

in which Xσ is normally distributed in the logarithm domain (in the decibel
scale) with zero mean; models for predicting L̄p(r) were discussed earlier. The
log-normally distributed variation about the mean path loss predicted by a
path loss model, that is, the Xσ component, is referred to as slow fading,
shadowing, or the lognormal fading component. The standard deviation of

this lognormal distribution depends on the environment and the accuracy of


the path loss model used to predict the distance-dependent mean path loss. In
theory, this standard deviation can be reduced to zero if prediction methods
using topographical data bases with unlimited resolution and details are used
to predict the signal mean path losses at each distance from the transmitter. In
practice, the typical value of the lognormal fade standard deviation is about
8 dB in urban areas, 10 dB in dense urban areas, and 6 dB in suburban and
rural areas with the usual Okumura-Hata path loss models. A simple path loss
model based on only distance with no use of the terrain-specific data will result
in very large values for the standard deviation of the lognormal fade component.
The lognormal variations imposed on the mean path losses estimated by a
path loss model require certain margins to be added to the radio link budgets
to ensure the desired level of signal coverage. The required margin depends on
the desired coverage probability, the standard deviation of the lognormal fade
distribution for the area (which depends on the environment, such as typical
urban or suburban), and the path loss distance exponent, which also depends
on the environment as well as on the signal frequency. The coverage
probability is defined as the probability that the signal level in the area is above
a certain threshold. The standard deviation is higher for urban areas than for
simpler rural areas; it typically ranges from 5 to 12 dB, with 8 dB being a
common value. The propagation constant for urban areas varies from 2.7
to 5, with a typical value of 5 for both the 850-MHz and the 1,900-MHz bands.
The derivation of the formulation for the lognormal fade margin is given in [4,
11, 12]; typical values for standard fade deviations of 6, 9, and 12 dB and
propagation path constants of 3 and 4 are taken from [12] and provided in
Table 4.7 for a single-cell coverage area probability of 95%.
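For the simpler cell-edge case, the fade margin is just the lognormal standard deviation scaled by the inverse normal CDF; a sketch (the area-probability margins of Table 4.7 additionally depend on the path loss exponent and are not reproduced by this function):

```python
from statistics import NormalDist

def edge_fade_margin_db(sigma_db, edge_coverage_prob):
    """Lognormal fade margin giving the required cell-EDGE coverage
    probability. The AREA-probability values of Table 4.7 further require
    integrating over the cell area with the path loss exponent [11, 12]."""
    return sigma_db * NormalDist().inv_cdf(edge_coverage_prob)
```

For sigma = 8 dB, a 95% edge probability needs roughly 13.2 dB of margin, illustrating why the (less demanding) area-probability criterion is usually used for dimensioning.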
The lognormal fade margin at the cell edge for link budgeting purposes
may incorporate the gains obtained through handover. This gain can be eval-
uated by considering that at the cell edge there is coverage due to multiple
neighboring cells from the coverage overlap. That gain is basically based on
the reduced likelihood that the signals from all the overlapping cells go into
fading at the same time. Assuming for simplicity that coverage is provided
through two overlapping cells, we can evaluate the handover gain to the
lognormal fade margin as follows.

Table 4.7
Typical Values for Lognormal Fade Margin for a
Single-Cell Coverage Area Probability of 95%

               Propagation Path Constant
σLNF (dB)         3           4
6                 6.25        5.5
9                 11          9.8
12                15          14
From the sum probability formula, we can write:

Cell coverage probability from either of two overlapping cells = p1 + p2 − p1·p2

where p1 and p2 are the coverage probabilities provided by each of the two cells
for a given lognormal fade margin for the propagation environment. Assuming
p1 = p2 = p, which is reasonable, we obtain the cell coverage probability from
either of the two overlapping cells as

2p − p·p = p(2 − p)        (4.26)

Since p < 1, then 2 − p > 1, and hence the coverage probability from the
overlapping area at cell edge will be larger than the coverage probability of one
single cell by a factor of 2 − p. This translates into a lower fade margin require-
ment for the same original coverage probability requirement of p. That is, only
the fade margin corresponding to a coverage probability of p/(2 − p) is required.
As an example, for an environment with a standard deviation of 8 dB and a
cell-edge coverage probability of 95%, the required fade margin is around 5 dB
without the handover gain. With the handover gain based on two overlapping
cells, the coverage probability requirement of 95% with one cell translates into
a coverage probability requirement of only 90%, and hence the required fade
margin reduces to 3 dB (corresponding to the coverage probability of 90%),
achieving a gain of 2 dB.
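The handover-gain argument can be checked numerically. The sketch below uses the cell-edge (not area) coverage probability, so the absolute margins differ from the text's example while the gain is of the same order; names are ours:

```python
from statistics import NormalDist

def relaxed_coverage_prob(p):
    """Per-cell coverage probability needed when two equally strong
    overlapping cells jointly provide p, using the text's relation p/(2 - p)
    derived from (4.26)."""
    return p / (2 - p)

def handover_gain_db(sigma_db, target_prob):
    """Reduction in cell-edge lognormal fade margin due to two-cell overlap."""
    nd = NormalDist()
    margin_single = sigma_db * nd.inv_cdf(target_prob)
    margin_overlap = sigma_db * nd.inv_cdf(relaxed_coverage_prob(target_prob))
    return margin_single - margin_overlap
```

For sigma = 8 dB and a 95% target, the relaxed per-cell requirement is about 90.5% and the margin reduction comes out near 2.7 dB, consistent in magnitude with the roughly 2-dB gain of the worked example.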

4.2 Capacity Quality Analysis and Verification


The coverage dimensioning discussed in the previous section used certain over-
all estimates for the uplink and downlink load parameters based on the capacity
requirements within the network. Link budgeting, the basis of coverage di-
mensioning, assumes uniform landform and path loss and ideal site locations
to calculate a semitheoretical cell radius that can then be used as a starting
point in the detailed system simulations to complete the coverage planning and
verify the desired capacity. The capacity quality dimensioning requires that the
number of sites estimated in the coverage planning and the assumed overall
load estimates be verified against the capacity requirements and the results from

system-level simulation, and further iterations for the cell radius and hence the
number of sites to be performed if necessary.
The capacity quality requirement of the radio network is expressed in
terms of the traffic handling capability either on a per cell (for instance, 10
Mbps per cell) or per unit area. In LTE, the main indicator of capacity is the
SINR distribution in the cell, which is obtained by carrying out system-level
simulations. The SINR distribution is then directly mapped into system capac-
ity that is the achievable data rate distribution within the area. Therefore, the
cell capacity in LTE is impacted by several factors such as the supported MCSs,
the antenna configurations, the detailed signal propagation environment, and
the site geometry, as well as specific resource and interference management al-
gorithms implemented within the network nodes. The capacity analysis based
on detailed system-level simulation will result in new updates of the values of
load and interference-related parameters assumed in the link budgeting dimen-
sioning. Then the updated values can be used to reiterate the coverage-based di-
mensioning and obtain new values for the cell radius and the iteration repeated
until convergence is achieved.
The cell range obtained from final coverage dimensioning and capacity
analysis can be used to calculate the number of sites necessary for coverage re-
quirement in the service area. Then with the traffic model, and the cell capacity,
the number of sites needed based on the traffic requirement can be calculated.
The larger of the two values provides the final output for the number of sites
necessary to meet both the coverage and the capacity requirements. Normally,
the necessary site count based on capacity requirement will eventually exceed
what is obtained in the coverage dimensioning in the initial phase of the net-
work deployment when the number of users is still small. As the demand in-
creases and more users are added to the service, the capacity based site count
takes the lead resulting in smaller cell size. The flowchart for the combined
coverage and capacity dimensioning is provided in Figure 4.2.

4.2.1 Capacity Estimation


To perform the system-level simulation needed for capacity analysis, a more
detailed and accurate radio path loss model suited to the RF propagation envi-
ronment is required to estimate the path losses within various locations in the
cells. So generalized path loss models as discussed earlier should be tuned using
path loss measurements obtained through drive tests in the service area and the
necessary model resolution achieved. The simulation will use the internally cal-
culated path loss distributions to derive the SINR distribution within the cells
and the associated probabilities. The SINR probability is obtained by calculat-
ing the probability of occurrence of the SINR value. Each SINR value translates

Figure 4.2 Flowchart for combined coverage-capacity based dimensioning.

into the best MCS that can be supported using tabulated link-level simulation
results and hence the achievable bit rate. The average cell throughput is then
the probability weighted summation of the achievable bit rates at various pixels
within the cell and is calculated by

Cell Average Throughput = Σ over all SINR values (SINR Occurrence Probability
                          × Average Associated Bit Rate)        (4.27)

The mean cell throughput obtained from (4.27) is then checked against
the expected mean traffic demand per cell. If the capacity requirement is not
met, additional sites are added until the network capacity provided can meet
the traffic demand. The term under the summation sign in (4.27) may be mul-
tiplied by the local traffic density normalized by the expected cell traffic to get a
more realistic estimate of the cell capacity. This is because the cell capacity will
vary depending on where in the cell the users use the services, their mobility
activity (which can take up more signaling channels), and the actual service
profile and geographic distribution of users within the cell. For example, the
capacity will be greater if some of the mobiles are in better propagation condi-
tions rather than less favorable radio conditions.
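The probability-weighted sum of (4.27) is a one-liner; a sketch (the binned SINR-to-rate mapping would come from tabulated link-level simulation results):

```python
def cell_average_throughput(sinr_distribution):
    """Probability-weighted average bit rate per (4.27).
    sinr_distribution: iterable of (occurrence_probability, bit_rate_mbps)
    pairs, one per SINR bin, with probabilities summing to 1."""
    return sum(prob * rate for prob, rate in sinr_distribution)
```

For example, bins with probabilities 0.2/0.5/0.3 mapping to 1/5/10 Mbps give an average cell throughput of 5.7 Mbps.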

4.2.2 Traffic Demand Estimation and Site Calculation


It is important to obtain a reasonable estimate of the service mix and the traffic
demand to properly dimension the network for capacity. The traffic demand
estimate should incorporate a measure of the traffic profile and the demand at
peak hours as well as the near future anticipated growth. Moreover, the sub-
scriber distribution and the traffic per subscriber and service profile must be
estimated to know where the traffic is concentrated and to help guide the
placement of sites to minimize interference in the network. This must be done
separately for different service areas such as dense urban, suburban, and rural,
as was done in the coverage planning phase. The collection and analysis of the
traffic statistics from the operators existing 3G network can also provide addi-
tional input for obtaining a good feeling for the traffic demand.
Normally, the peak-hour traffic may be used as a measure for overbooking
the capacity rather than being used directly to dimension the network. Dimension-
ing the network based on the peak traffic can better ensure quality of service, but
it is a conservative approach that can lead to overdimensioning of the network
and wastage of network resources during other hours of the day, increasing
the network cost and power consumption. Therefore, the degree of capacity
overbooking and the cost implications should be considered in the trade-offs
important to the operator. However, since the actual traffic information and the
varying trends cannot always be well predicted, it is recommended that several
traffic scenarios be generated based upon whatever information is available in
order to test the sensitivity of designs to changes in network traffic distribu-
tions. This should also account for the anticipated service traffic growth for at
least a year. Once a reasonable value for the traffic demand is obtained, and the
overheads due to protocols at various layers are also taken into account for each

service, the number of sites based on the overall services throughput thus ob-
tained and the cell capacity indicated in the system simulations is estimated by

Number of sites for capacity = Overall throughput in service area / Site capacity        (4.28)

The site count estimation should be performed for each type of service
area, as was done in the case of coverage dimensioning. The larger of the two
site counts is then chosen as the final output from the radio access dimension-
ing task.
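Taking the larger of the coverage-based count (4.18) and the capacity-based count (4.28) can be sketched as (names and rounding are our own choices):

```python
import math

def final_site_count(service_area_km2, site_area_km2,
                     area_throughput_mbps, site_capacity_mbps):
    """Final dimensioning output: the larger of the coverage-based (4.18)
    and capacity-based (4.28) site counts, each rounded up."""
    coverage_sites = math.ceil(service_area_km2 / site_area_km2)
    capacity_sites = math.ceil(area_throughput_mbps / site_capacity_mbps)
    return max(coverage_sites, capacity_sites)
```

With a 100-km² area, 5-km² sites, a 900-Mbps area demand, and 30-Mbps sites, capacity dominates (30 sites versus 20 for coverage).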

4.3 Optimum System Bandwidth for Coverage


For a given bit rate on downlink and a fixed amount of total transmit power,
there is a certain system bandwidth that maximizes the coverage. This is expect-
ed on the grounds that with more bandwidth the number of allocated resource
blocks increases, which allows for a reduction in the coding rate (more coding
possible) and hence an increase in the coding gain. The increase in the coding
gain results in a lower required SINR and hence in an increase in the cell range
capable of supporting the bit rate. However, as the system bandwidth increases
(more resource blocks) the power per PRB or the power density reduces also,
which then results in reduced coverage (smaller cell range) for a fixed total
transmit power. Therefore, we can expect an optimum system bandwidth where
the benefits achieved through the increased coding gain start to be offset by the
reduced power density. This is illustrated for the downlink side in Figures 4.3
and 4.4, where the receiver sensitivities were calculated from the required SINR
for various system bandwidths. The data used were based on interpolation of

Figure 4.3 Optimum system bandwidth analysis for coverage of 1 Mbps on DL.

Figure 4.4 Optimum system bandwidth analysis for coverage of 2 Mbps on DL.

the data derived in Chapter 3 for downlink bit rates of 1 Mbps and 2 Mbps.
The figures show that for a downlink bit rate of 1 Mbps, the optimum system
bandwidth is 3 MHz, and for a bit rate of 2 Mbps the optimum system band-
width required is 5 MHz.
There is an optimum system bandwidth for the uplink also. However, the
trends are less complex as the uplink data rate is almost unaffected by changes
in the uplink transmission bandwidth. As the uplink transmission bandwidth
increases, it does reduce the SINR requirement for the same bit rate because
of the lower coding rate that can be used. However, the increase in the uplink
bandwidth results also in increased noise bandwidth and hence more total noise
power while the signal power stays the same. The difference to downlink is that
the symbols are detected in time-domain (SC-FDMA) and not on a per sub-
carrier basis as in the downlink. Hence, on the uplink the interplaying factors
are the gains achieved through the increased uplink bandwidth against the loss
resulting from the increased total noise power.

4.4 Trade-Offs Between Network Load and Coverage Performance


To obtain insight into the relationships between network load, and the cover-
age performance under various scenarios, we will examine a generic form for
the SINR which is a main performance metric in LTE. By definition, the SINR
can be written as

SINR = S / (I + N)        (4.29)

In which S denotes the average received signal power, and I and N denote
the interference and noise at the input to the receiver, respectively. The inter-
ference averaging is assumed over small scale fading and many TTIs and the
impact of the interplay of HARQ and the interference bursts is neglected in the
overall considerations made here. The interference term I is generally composed
of an intracell component and intercell part. In LTE, the intracell contribution
is assumed insignificant due to the subcarriers mutual orthogonality, although
excess phase noise, transmitter nonlinearity measured in error vector magni-
tude, and excess delay spread beyond the cyclic prefix length can result in non-
negligible interference from in-cell users.
At the cell edge, the interference from K neighboring cells can be written
as

I = ∑k γk·Imax,k        (4.30)

where Imax,k is the maximum interference received from cell k when it is fully
loaded, and γk is the subcarrier activity factor (i.e., load) of cell k. Assuming
the average cell load is the same for all cells and equal to γ, (4.30) simplifies to

I = γ·∑k Imax,k = γ·Imax

Substituting into (4.29) and rearranging results in

SINR = S / (γ·Imax + N) = S / (γ·S·Fc + N)        (4.31)

where

Fc = Imax / S

as was defined earlier in the chapter, and N is the total thermal noise power in
the receiver, given in dB by

N = −174 + 10·log10(15,000) + Nf

and Nf is the UE receiver noise figure normally set to around 7 dB. All powers
in (4.31) are expressed on per subcarrier basis.

The value of Fc depends on the network geometry and antenna configuration
but not on the cell range, and in practice it is found from system simulations
or network measurements. Typical values are Fc = 1 to 4 dB, as also concluded
from a somewhat similar analysis given in [13]. The values of both Fc and Nf
must be converted to linear scale and then used in (4.31).
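The saturation behavior of (4.31) can be checked numerically; a sketch (per-subcarrier powers, Fc passed in linear scale, names are ours):

```python
import math

def cell_edge_sinr_db(s_dbm, load, f_c_linear, nf_db=7.0):
    """Cell-edge SINR per (4.31); s_dbm is the per-subcarrier received power,
    load is the average network load gamma in [0, 1]."""
    n_dbm = -174 + 10 * math.log10(15000) + nf_db  # thermal noise, 15-kHz subcarrier
    s_mw = 10 ** (s_dbm / 10)
    n_mw = 10 ** (n_dbm / 10)
    return 10 * math.log10(s_mw / (load * s_mw * f_c_linear + n_mw))
```

As the received power grows, the SINR approaches the interference-limited ceiling 10·log10(1/(γ·Fc)); for example, γ = 0.1 and Fc = 3 caps the cell-edge SINR near 5.2 dB no matter how much power is added, matching the saturation visible in Figure 4.5.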
Equation (4.31) is plotted in decibels for an intercell load factor of
Fc = 3 and for various average network loads γ in Figure 4.5.
From the data plotted in Figure 4.5, it is seen that the intercell interfer-
ence will limit the achievable cell edge SINR no matter how much the power
is increased beyond a limit. It can be seen that even a low network load of 5%
to 10% will saturate the SINR, and hence the throughput. Therefore, intercell
interference coordination and mitigation as discussed in Chapter 7 is required
to optimize the cell-edge throughputs.
To see the impact of network load to cell range for a fixed throughput, the
received power S in (4.31) is expressed as

S = Tx,eff / (M·L)        (4.32)

where M is the fade margin and L is the path loss all in linear scale. Tx,eff is
the effective transmitted power (after antenna gains) in watts. Substituting in
(4.31) and solving for the path loss L gives

Figure 4.5 Cell-edge SINR versus Rx power for various average network load parameter γ.
A value of 3 was used for the intercell interference parameter Fc.

Figure 4.6 Cell range versus network load for various values of cell-edge throughputs.

L = Tx,eff·(1 − SINR·γ·Fc) / (M·N·SINR)        (4.33)

For the path loss L, we will use the model provided by 3GPP in [14] for
the 2-GHz band which is

L = 10^((128.1 + 37.6·log10(R))/10)        (4.34)

where R is the cell range in kilometers and L is in linear scale.


Using a fade margin of 8.7 dB for a standard fade deviation of 8 dB to provide
95% cell area coverage probability, an effective transmit power (EIRP) of
62 dBm, a system bandwidth of 5 MHz, and an Fc value of 3, we have plotted
(4.33) in Figure 4.6 as R versus the network load γ for various cell-edge
throughputs. The data derived in Table 3.6 was used to obtain the mapping
from throughput to SINR for use in (4.33).
It is seen that, as the network load increases, the cell range decreases for a
fixed cell-edge throughput.
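The curves of Figure 4.6 follow from inverting (4.33) with the path loss model (4.34); a sketch (names are ours; all dBm/dB inputs are per-subcarrier quantities, Fc and the target SINR in linear scale):

```python
import math

def cell_range_km(eirp_dbm, sinr_linear, load, f_c_linear, margin_db, n_dbm):
    """Cell range from (4.33), with L inverted through the 3GPP 2-GHz
    model (4.34): L = 10**((128.1 + 37.6*log10(R))/10)."""
    tx_mw = 10 ** (eirp_dbm / 10)
    m = 10 ** (margin_db / 10)
    n_mw = 10 ** (n_dbm / 10)
    headroom = 1 - sinr_linear * load * f_c_linear
    if headroom <= 0:
        return 0.0  # target SINR unreachable at this load, per the saturation effect
    l_linear = tx_mw * headroom / (m * n_mw * sinr_linear)
    return 10 ** ((10 * math.log10(l_linear) - 128.1) / 37.6)
```

Raising the load shrinks the headroom term (1 − SINR·γ·Fc) and therefore the range; once SINR·γ·Fc reaches 1, the target cell-edge SINR cannot be met at any range.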

References
[1] Pozar, D. M., Microwave Engineering, New York: John Wiley & Sons, 1998.
[2] Rahnema, M., UMTS Network Planning, Optimization and Inter-Operation with GSM,
New York: John Wiley & Sons, 2007.
[3] 3GPP R1-072570, “Performance Evaluation Checkpoint: VoIP Summary,” 2007.

[4] Holma, H., et al., (eds.), LTE for UMTS, New York: John Wiley & Sons, 2009.
[5] Sklar, B., “Rayleigh Fading Channels in Mobile Digital Communication Systems Part
I: Characterization,” IEEE Communications Magazine, Vol. 35, No. 7, July 1997, pp.
90–100.
[6] Hata, M., “Empirical Formulae for Propagation Loss in Land Mobile Radio Services,”
IEEE Trans. Vehic. Tech., Vol. VT-29, No. 3, 1980, pp. 317–325.
[7] COST 231, Damasso, E., and L. M. Correia, (eds.), Digital Mobile Radio Towards Future
Generation Systems, Final Report, COST Telecom Secretariat, Brussels, Belgium, 1999.
[8] Walfisch, J., and H. L. Bertoni, “A Theoretical Model of UHF Propagation in Urban En-
vironment,” IEEE Transactions on Antennas and Propagation, Vol. 36, No. 12, December
1988, pp. 1788–1796.
[9] Mckown, J., and R. Hamilton, “Ray Tracing as a Design Tool for Radio Networks,” IEEE
Networks Magazine, Vol. 5, No. 6, November 1991, pp. 27–30.
[10] Wireless World Initiative New Radio, “WINNER II Channel Models, WINNER II
Deliverable D1.1.2, Version 1.2,” http://www.ist-winner.org/WINNER2Deliverables/
D1.1.2.zip, 2008.
[11] Jakes, W. C., (ed.), Microwave Mobile Communications, New York: John Wiley and Sons,
1974.
[12] Hämäläinen, J., “Cellular Network Planning and Optimization Part II: Fading,”
Communications and Networking Department, TKK, 17.1, 2007, www.comlab.hut.fi/
studies/3275/Cellular_network_planning.
[13] Salo, J., M. Nur-Alam, and K. Chang, “Practical Introduction to LTE Radio Planning,”
White Paper, November 2010.
[14] 3GPP TR 25.814, “Physical Layer Aspects for Evolved UTRA Release 7, V2.0.0,” 2006.
5
Prelaunch Parameter Planning and
Resource Allocation
The three important tasks in LTE prelaunch parameter planning are the allo-
cation of physical cell identities (PCIs), uplink reference signal sequence plan-
ning, and the physical random access channel (PRACH) parameters planning.
These are discussed in the following sections.

5.1 PCI Allocation


The PCI allocation is similar to scrambling code allocation in wideband code
division multiple access (WCDMA). The PCI is transmitted within the physi-
cal layer synchronization signal and is used by the UE for neighbor cell han-
dover measurement reports. Thus, as in WCDMA, the PCI should uniquely
identify the neighboring cell to the serving eNodeB, within a certain reuse dis-
tance. Therefore, the PCI reuse distance should be large enough so that the UE
cannot measure and report two cells with the same PCI. This should not pose
a problem since there are 504 PCIs defined. However, an additional level of
complexity can arise if the allocation scheme of the DL reference symbols is tied
to the PCI allocation scheme. The reference signals (RSs) are always transmit-
ted in the same orthogonal frequency division multiplexing (OFDM) symbol,
but in the frequency domain, each cell has a different shift that can be defined
by modulo-3 of the PCI. In that case, the RS of different cells may or may not
overlap in frequency depending on the PCI allocation scheme used. Therefore,
a good coordination of PCI allocation among neighboring cells can help to
facilitate the allocation design for the reference symbols. However, if the neigh-

117
118 From LTE to LTE-Advanced Pro and 5G

boring sites are not frame-synchronized or the frame timing offset among the
neighboring cells are random, PCI coordination among sites is not practical.
A PCI-tied reference symbol allocation scheme can result in an overlap-
ping of RS with the control and data resource elements of the neighboring cells.
Therefore, a design choice must be made between RS to RS interference and RS
to PDSCH/PDCCH interference. The current engineering consensus seems
to be that the latter choice is favored on the consideration that a PCI modulo-3
allocation of reference symbols helps to avoid interference with the primary
synchronization signals and hence avoid problems in cell search and handover
measurements.
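As a small illustration of the modulo-3 coupling described above, the following sketch flags neighbor PCIs whose cell-specific RS would land on the same frequency shift as the serving cell (the helper names are ours, not 3GPP's):

```python
def rs_shift(pci):
    """Downlink cell-specific RS frequency shift tied to the PCI (mod 3)."""
    return pci % 3

def rs_collisions(serving_pci, neighbor_pcis):
    """Neighbors whose RS subcarriers overlap the serving cell's RS in frequency."""
    return [p for p in neighbor_pcis if rs_shift(p) == rs_shift(serving_pci)]

# A three-sector site with PCIs 0, 1, 2 gets mutually shifted RS, while a
# neighbor with PCI 9 collides with PCI 0 (9 mod 3 == 0):
print(rs_collisions(0, [1, 2, 9]))   # [9]
```

With frame-synchronized sites, a planner would aim for an empty collision list among strong neighbors; without synchronization, as noted above, such coordination is not practical.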

5.2 Uplink Reference Signal Allocation


In the case of uplink, the LTE uplink shared data channel (PUSCH) carries
the demodulation reference signal (DM RS). Optionally, the sounding ref-
erence signal is also transmitted in the uplink as was discussed in an earlier
chapter. The uplink DM RS, which are constructed from Zadoff-Chu (ZC)
sequences1, are divided into 30 groups. This means that there are 30 different
base sequences that can be used as the reference signal for any given number of
PRBs allocated in the uplink. Since the cross-correlation between the base se-
quences is, on average, low, the neighboring cell can be allocated different base
sequences to avoid intercell interference. The simplest method is to allocate the
base sequences according to a PCI mod30 scheme, by setting the DM RS base
sequence index, u, equal to PCI mod30, where u = 0 ... 29, and make sure that
the potentially interfering cells are allocated different sequences. However, in a
practical network deployment, this simple criterion cannot always be ensured,
as the same base sequence is reused in every 30th cell which cannot ensure
sufficient cell separation in all situations. As alternative solutions, 3GPP [1, 2]
has defined additional sequence allocation schemes which consist of defining
u independently from PCI, cyclic shift planning, and pseudo-random base se-
quence hopping (u-hopping) as discussed in the following.

5.2.1 Using Cyclic Shifts of the ZC Sequences


This scheme utilizes the orthogonality property of the ZC sequences by assign-
ing a different cyclic shift between two cells that use the same base sequence
u. The cell-specific static cyclic shift is broadcast on the BCCH channel. This
scheme can particularly be applied to the cells of one site, which can be easily
frame-synchronized. It should be noted that cyclic shift planning can also be
combined with the other two schemes discussed next.

1. These are complex-valued mathematical sequences whereby cyclically shifted versions of the
sequence result in zero autocorrelation.

5.2.2 Defining u Independently from PCI


This scheme bypasses the simple PCI-based sequence allocation scheme by ex-
plicitly defining the base sequence used in the cell. This brings additional flex-
ibility to base sequence allocation and effectively decouples the PCI planning
from uplink DM RS base sequence planning. The simplest implementation of
this concept introduces an offset Dss to the former modulo-30 of PCI assign-
ment scheme. Thus the base sequence is assigned to a cell as u = (PCI + Dss)
mod 30, where Dss = 0 ... 29 is an offset parameter signaled on BCCH. In the
simple PCI-based scheme Dss = 0. With Dss offset, it is possible to avoid col-
lisions in cells that would otherwise use the same u due to PCI-tied allocation.

5.2.3 Pseudo-Random u-Hopping


In this scheme, the index of the base sequence as obtained in the previous sec-
tion, (PCI + Dss) mod 30, changes at every time slot according to a pseudo-random
hopping pattern. The index of the base sequence in time slot n is then ob-
tained from

un = (vn + PCI + Dss) mod 30 (5.1)

where vn = 0 ... 29 is a pseudo-random integer defined by the hopping-pattern.


The hopping pattern used in a cell is defined by the index s = floor(PCI/30). Since
there are 504 different PCIs defined in LTE, there will be 17 u-hopping
patterns, that is, s = 0 ... 16. If in the planning process nearby cells are assigned
nearby PCIs, the hopping pattern defined by the index s = floor(PCI/30) can end up
being the same, leading to systematic collisions in the hopping process. To pre-
vent this, the static part of the base sequence assignment as obtained by the Dss
parameter should be set differently for adjacent cells that are particularly frame
synchronized.
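The static and hopping parts of the base-sequence assignment in (5.1) can be tied together in a short sketch; the function names are illustrative:

```python
def dmrs_base_sequence(pci, dss=0, v_n=0):
    """Base-sequence index per (5.1): u_n = (v_n + PCI + Dss) mod 30.
    v_n = 0 corresponds to the static (non-hopping) case."""
    return (v_n + pci + dss) % 30

def hopping_pattern_index(pci):
    """u-hopping pattern group s = floor(PCI/30); 504 PCIs give s = 0 ... 16."""
    return pci // 30

# Two cells whose PCIs differ by 30 collide on u unless Dss decouples them:
print(dmrs_base_sequence(31), dmrs_base_sequence(61))          # 1 1 -> same u
print(dmrs_base_sequence(31), dmrs_base_sequence(61, dss=5))   # 1 6 -> different u
```

This mirrors the planning rule above: when PCI-tied allocation would reuse the same u in cells that interfere, a per-cell Dss offset (or a different hopping group) restores the separation.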

5.3 Random Access Planning


The random access procedure is handled by the MAC protocol sublayer and
the physical layer. It uses a combination of the PRACH on the uplink and the
PDCCH channels on the downlink. The UEs are informed of the range of
random access preambles available in system information and the related con-
tention management parameters. The PRACH configurations such as the loca-
tion and periodicity of PRACH resources in the uplink subframes are found by
checking the information broadcast on the BCCH channel.

5.3.1 PRACH Preamble Format Selection and Parameter Setting


On accessing the network, the UE needs to transmit a preamble to the eNodeB.
The UE randomly selects a preamble from a set of 64 preamble sequences, some
of which may be assigned to UEs based on reservation and some based on a
contention basis. When users have a reserved signature (assigned PRACH in-
dexes) to access the system, they are said to be using contention-free random ac-
cess (CFRA). On the contrary, when users do not have a reserved signature for
access, they are said to use contention-based random access. CFRA is typically
used during handover. Normally, a subset of the 64 preambles is reserved for
handover only. The random access transmission uses 6 central resource blocks
with a total bandwidth of 1.08 MHz.
To prevent collision interferences among neighboring cells, each cell has
its own set of 64 preambles, and the information for deriving the specific set
in each cell is broadcast on BCCH. The 3GPP specifications define 4 different
preamble burst formats for LTE FDD that are labeled from 0 to 3, in which
the preamble sequences used all have a sequence length of 839. The preamble
burst format is decided by the eNodeB. Format 0 is used for normal cells. Format 1,
also known as the extended format, is used for large cells. To allow a lower data
rate at the cell edge and have power balancing, preamble repetition is required.
Therefore, formats 2 and 3 are called repeated formats in which the preamble
sequence is repeated 2 times. Format 2 is used for a maximum cell size of 30
km and format 3 is used for a maximum cell size of 100 km. The preamble for-
mat is therefore a parameter that the operator must decide based on the range
of desired cell coverage. The physical layer random access preamble consists
of a cyclic prefix of length cp and a preamble sequence part of the length se-
quence, which is transmitted with appropriate guard bands through subframes
of lengths of 1 ms. The relative variations in the lengths of the CP, preamble
sequence, and guard time (GT) between the preamble formats are intended
to cater for a wide variety of coverage scenarios. These parameters are given in
Table 5.1 for the FDD preamble burst formats together with the expected cell
coverage range design for each.

5.3.2 Derivation and Assignment of Preamble Sequences


The 64 PRACH preambles in each cell are cyclically shifted versions of Zadoff-
Chu (CS-ZC) root sequences, which have constant amplitude and excellent
auto/cross-correlation properties as described in [1]. These sequences are ob-
tained from:
Table 5.1
FDD Preamble Burst Formats

Format   CP (µs)   Sequence (µs)   Subframes   Guard Time (µs)   Cell Radius (km)
0        103.125       800             1           96.875             ~14
1        684.375       800             2          515.625             ~75
2        206.25      1,600             2          193.75              ~28
3        684.375     1,600             3          715.625            ~103

xu(n) = exp(−jπu · n(n + 1)/NZC),  n = 0, 1, ..., NZC − 1 (5.2)

where NZC denotes the length of the uth root sequence. From the uth root
Zadoff-Chu sequence, random access preambles with the zero correlation zone
are defined by cyclic shifts of multiples of Ncs according to

xu,v(n) = xu((n + v·NCS) mod NZC) (5.3)

where NCS is given in Table 5.2.

Table 5.2
NCS for Preamble Generation (Preamble Formats 0 to 3)

ZeroCorrelationZoneConfig   NCS (Unrestricted Set)   NCS (Restricted Set)
 0                            0                        15
 1                           13                        18
 2                           15                        22
 3                           18                        26
 4                           22                        32
 5                           26                        38
 6                           32                        46
 7                           38                        55
 8                           46                        68
 9                           59                        82
10                           76                       100
11                           93                       128
12                          119                       158
13                          167                       202
14                          279                       237
15                          419                        —
The degree of cyclic shifting of the ZC root sequences, NCS, should guar-
antee distinct sequences for the cell’s radio propagation geometry. Therefore, a
ZC root sequence may be shifted by an integer multiple of the cell’s maximum
round-trip delay plus the delay spread, 64 times to generate the set of 64 dis-
tinct orthogonal sequences for the cell’s radio environment. The relationship
between the cyclic shift and the cell size is given by

(NCS − 1) · (800 µs / 839) ≥ RTD + delay spread (5.4)

in which RTD stands for the round-trip delay given by RTD = 2R/c, where R and
c are the cell radius and the speed of light in free space, respectively. In
the case of remote sites’ deployment, the length of the fiber to the remote cells
must be considered as part of the cell radius based on the speed of light in fibers
(about 2/3 the speed of light in free space). In the case in which microwave links
are used, the speed of light in free space is used for the calculation. The delay
spread is typically different for rural, suburban, urban, and dense urban envi-
ronments and its value should be obtained through drive test measurements in
the cell.
Another factor which determines the degree of cyclic shifts required is
the Doppler shift experienced by the UE’s mobility. In the presence of Doppler
shifts, the CS-ZC sequences lose their zero auto-correlation properties. Indeed,
high Doppler shifts induce offsets in the receiver’s bank of correlators from
the desired peak, and hence can lead to preamble confusions. To avoid this, a
1-bit flag from the eNodeB signals whether the current cell is a high-speed cell
or not so that proper ranges for cyclic shifts can be selected in the generation
of the preamble sequences for the cell. Hence, the cyclic shifts can either be
restricted or unrestricted. The restricted cyclic shifts limit the available cyclic
shifts for preambles that are used in high-mobility scenarios.
As a result, the cyclic shift and corresponding number of root sequences
used in a cell are a function of the cell size and the UE speed. The cyclic shift is
indirectly given to the UE by a parameter called ZeroCorrelationZoneConfig,
as shown in Table 5.2 [1]. A small value for cyclic shift (NCS) allows for more
sequences to be derived from the root ZC sequence and higher values produce
less but are more resistant to the effects of delay spread and Doppler in the
preamble. Based on simulation results discussed in [2], the upper bound for
PRACH performance for the unrestricted cyclic shifts given in Table 5.2 is in
the range of 150 to 200 km/hour for typical AWGN environments.
The root sequences assigned are signaled to the cell by a single base logical
root sequence index (RootSequenceIndex parameter), regardless of the actual
number of root sequences required in a cell to derive the 64 preambles. The
logical root sequence index order is cyclic, that is, the logical index 0 is consecu-
tive to 837. The UE then derives the subsequent root sequence indexes in accor-
dance with a predefined ordering. The parameter RootSequenceIndex informs
the UE via SIB2 which root sequence is to be used. The UE starts with the
broadcast root index and applies cyclic shifts to generate the 64 preambles. The
cyclic shift value (or increment) is taken from among 16 predefined values. The
parameter called ZeroCorrelationZoneConfig points to a table from which the
cyclic shift is obtained. The smaller the cyclic shift, the more preambles can be
generated from a root sequence. Hence, the number of root sequences needed
to generate the 64 preambles in a given cell is:

No. of root sequences = ceiling(64 / integer(sequence length / cyclic shift)) (5.5)

For example, if the RootSequenceIndex is 300 and the cyclic shift is 119,
then the number of root sequences needed to generate the 64 preambles in a
cell is:

No. of root sequences = ceiling(64 / integer(839 / 119)) = ceiling(64 / 7) = 10
This means that if we allocated RootSequenceIndex 300 to sector 1, then
sector 2 must have RootSequenceIndex 310 and sector 3 must have RootSe-
quenceIndex 320.
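Equations (5.4) and (5.5) can be combined into a small planning helper. The unrestricted-set values are taken from Table 5.2; the 5-km cell radius and 5-µs delay spread below are illustrative figures (borderline cases, such as the 7.4-km example later in this chapter, can land on either side of an NCS step depending on the rounding convention used):

```python
import math

NCS_UNRESTRICTED = [0, 13, 15, 18, 22, 26, 32, 38, 46, 59, 76, 93, 119, 167, 279, 419]
SEQ_LEN = 839                      # ZC sequence length for preamble formats 0-3
TS_US = 800.0 / 839.0              # duration of one sequence sample, in microseconds

def min_ncs(cell_radius_km, delay_spread_us):
    """Smallest Table 5.2 NCS satisfying (5.4): (NCS-1)*(800/839) us >= RTD + delay spread."""
    rtd_us = 2 * cell_radius_km / 3e5 * 1e6        # round-trip delay at c = 3e8 m/s
    for ncs in NCS_UNRESTRICTED[1:]:               # NCS = 0 means one full root per preamble
        if (ncs - 1) * TS_US >= rtd_us + delay_spread_us:
            return ncs
    return None                                    # cell too large for cyclic shifting alone

def num_root_sequences(ncs):
    """Root sequences needed to generate 64 preambles, per (5.5)."""
    return math.ceil(64 / (SEQ_LEN // ncs))

# Illustrative 5-km cell with a 5-us delay spread:
ncs = min_ncs(5.0, 5.0)
print(ncs, num_root_sequences(ncs))   # 46 4
```

With the text's example values, `num_root_sequences(119)` gives 10 roots per cell, matching the sector allocation of RootSequenceIndex 300, 310, and 320 above.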

5.3.3 RootSequenceIndex Planning


Since the parameter RootSequenceIndex defines the root sequence within the
839 Zadoff-Chu sequences from which the 64 preambles for a cell are gener-
ated, each cell must be assigned a different RootSequenceIndex to generate a
unique set of preamble sequences (signatures) to avoid collision with UEs in
neighboring cells. If root sequences used in neighboring cells overlap, the trans-
mitted preamble can be detected in multiple cells. The drawback is the ghost
preambles, which consequently result in unnecessary PDCCH and PUSCH
resource reservation in those cells whose random access responses are then ig-
nored by the nonrequesting UEs. Therefore, the preamble sequence planning
for cells through the proper assignment of RootSequenceindex leads to the con-
cept of preamble sequence reuse distance and reuse cluster as used in GSM
frequency planning and in UMTS scrambling code planning. For example,
for a cell coverage range of 7.4 km and a delay spread of 6 µs, the cyclic shift
(Ncs) allowed to obtain 64 preambles from the 839 Zadoff-Chu root sequences
is 59 from (5.4) (which corresponds to a ZeroCorrelationZoneConfig parameter
value of 9 for the unrestricted scenario from Table 5.2). Then using the value
of 59 for Ncs in (5.5) gives 5 as the number of required root sequences for each
cell. This results in a sequence reuse cluster of at least 839/5 ≈ 167 cells, which
allows for an easy planning process. The design criterion to consider here is that
root sequence indices of cells must not overlap within the reuse distance. As
for typical scenarios, there are plenty of ZC sequences available. However, if
the root sequences used in neighboring cells that can hear each other overlap,
the transmitted preamble can be detected in multiple cells and result in ghost
preambles. The preamble planning process therefore consists of determining
the desired cell range, then using (5.4) and (5.5) to determine the proper
root sequence reuse cluster, and then the proper assignment of cells to the
reuse clusters.

5.3.4 PRACH Capacity Planning


There is considerable flexibility in setting the resources to be used for random
access transmission. A random access preamble sequence burst is transmitted
in an allocated PRACH timeslot. This special time slot period corresponds to
one, two, or three subframe periods depending on the preamble format used,
which may be set as types 0, 1, 2, or 3 as discussed in the previous section. The
PRACH timeslots are allocated with a regular repetition period known as the
PRACH burst period. This ranges from fractions of a frame to multiples of
a frame. A 10-ms period is typical with a 5-MHz bandwidth and a preamble
format of 0. More than one PRACH timeslot can be allocated at the same time
but this would increase the processing load for the eNodeB. It is noted that
this allocation of resource does not have to be for exclusive use by PRACH as
it may also be allocated for a UE using the UL-SCH. The key parameters that
define the PRACH resource in the uplink are contained within the informa-
tion element type RadioResourceConfigSIB. The preamble format to be used
on the cell is indicated to UEs with an index that references a table in the
standards documentation. The reference in the table will also indicate which
subframes may be used for the start of a preamble transmission. The mini-
mum configuration for this allows just one starting point that is available only
in even-numbered frames. The maximum configuration provides five starting
positions in every frame. In addition, a parameter is included that indicates the
starting index in terms of frequency allocation for PRACH, and indicates the
lowest RB index that can be used. The allocation will always be for six RBs in
the frequency domain.

5.4 PRACH Procedure


To transmit a preamble sequence, the UE calculates an initial power and uses
this for the first transmission. Then starting from the last subframe in which the
preamble sequence was transmitted, the UE will begin to monitor the PDCCH
for a random access response (RAR). The RAR will be identified with a random
access radio network temporary identifier (RA-RNTI) that is related to the pre-
amble transmission. The UE continues to monitor the PDCCH for a
number of subframes given by the parameter ra-responseWindowSize. If it does
not receive a RAR in this time with the corresponding RA-RNTI, then it will
initiate a retransmission with a power increment through the open loop power
control. This process continues until either a successful preamble transmission
or the number of retransmissions reaches preambleTransMax. If this occurs,
then the procedure will fail and an indication is provided to the higher layer.
Upon receipt of a successful uplink PRACH preamble, the eNodeB calcu-
lates the power adjustment and timing advance parameters for the UE based on
the strength and delay of the received signal and transmits an uplink capacity
grant to the UE to enable it to send further details of its request. This will take
the form of the initial layer 3 message. If necessary, the eNodeB will also assign
a temporary cell radio network temporary identifier (C-RNTI) for the UE to
use for ongoing communication. Once received, the eNodeB reflects the initial
layer 3 message back to the UE in a subsequent uplink resource grant message
to enable unambiguous contention resolution. After this, further resource al-
locations may be required for signaling or traffic exchange, which will be ad-
dressed to the C-RNTI.
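The retry loop described above can be sketched as follows. The success-probability function is a stand-in for real preamble detection, and the parameter names mirror ra-responseWindowSize handling, the power ramping step, and preambleTransMax:

```python
import random

def random_access(initial_power_dbm, ramp_step_db=2.0, preamble_trans_max=6,
                  success_prob=lambda p: min(1.0, max(0.0, (p + 120) / 20)),
                  rng=random.Random(1)):
    """Sketch of the contention-based RA loop: send a preamble, watch the PDCCH
    for a RAR during the response window, and on timeout retry with the power
    raised by the ramping step, up to preambleTransMax attempts."""
    power = initial_power_dbm
    for attempt in range(1, preamble_trans_max + 1):
        # A RAR with the matching RA-RNTI arriving inside the response window
        # is modeled here as a Bernoulli trial that improves with transmit power.
        if rng.random() < success_prob(power):
            return attempt, power          # success: proceed to the layer 3 message
        power += ramp_step_db              # open-loop power ramping before the retry
    return None, power                     # failure is indicated to the higher layer

attempts, final_power = random_access(-110.0)
print(attempts, final_power)
```

In a real eNodeB/UE exchange, success additionally requires winning contention resolution, which this sketch leaves out.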

5.5 PRACH Optimization


The random access performance is evaluated by its delay and success rate. These
depend on the population under the cell coverage, the session arrival rate, the
incoming handovers to the cell, and whether the cell is at the edge of a tracking
area as it affects the traffic pattern and hence the need to use RACH. These fac-
tors, in turn, are affected by network configurations, such as antenna tilt, trans-
mission power, handover threshold, and the network load. If network configu-
rations or load is changed, the random access performance may change greatly,
which influences the performance of other KPIs, such as call setup and hando-
ver. The configurations of RACH include the RACH physical resources, the
RACH preamble allocation, the RACH persistence level and backoff control,
and the RACH transmission power control. The measurements collected by the
eNodeB can include the number of preambles that the mobile sent before re-
ceiving a reply, the random access delay, the random access success rate, and the
random access load. The random access load can be indicated by the number of
received preambles in a cell in a time interval. It is measured per preamble range
[4] and averaged over the PRACHs configured in a cell. Thresholds can be set
separately for the random access delay and success rate. If either of the thresh-
olds is reached, RACH optimization is performed. First, the load is analyzed to
check if the random access is overloaded in any of the preamble ranges. If one
of them is overloaded, RACH preambles are reallocated among the ranges. If all
of them are overloaded, more physical resources need to be reserved for RACH.
If none of them is overloaded, other parameters need to be adjusted, such as the
power ramping step, the preambleTransMax, the ContentionResolutionTimer,
and the PreambleReceivedTargetPower [4].
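The decision logic of this paragraph reduces to a three-way branch; the thresholds, range labels, and action names below are illustrative:

```python
def rach_optimization_action(load_per_range, overload_threshold):
    """Decision sketch for the RACH tuning described above: the measured load
    per preamble range is compared against an overload threshold."""
    overloaded = [r for r, load in load_per_range.items() if load > overload_threshold]
    if not overloaded:
        # No range overloaded: adjust other parameters instead, such as the
        # power ramping step, preambleTransMax, the contention resolution
        # timer, or the preamble received target power.
        return "adjust-parameters"
    if len(overloaded) == len(load_per_range):
        return "add-prach-resources"       # all ranges overloaded: reserve more resources
    return "reallocate-preambles"          # rebalance preambles among the ranges

print(rach_optimization_action({"contention": 0.9, "contention-free": 0.2}, 0.7))
```

In a self-organizing network, this branch would run whenever the random access delay or success rate crosses its configured threshold.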

References
[1] 3GPP TS 36.211 v 1.0.0 (2007-03), “Technical Specification Group Radio Access Net-
work; Physical Channels and Modulation, Release 8.”
[2] Panasonic and NTT DoCoMo, “R1-073624: Limitation of RACH Sequence Allocation
for High Mobility Cell,” 3GPP TSG RAN WG1 Meeting, No. 50, Athens, Greece, August
2007.
[3] Salo, J., M. Nur-Alam, and K. Chang, “Practical Introduction to LTE Radio Planning,”
White Paper, November 2010.
[4] 3GPP TS36.300, “Technical Specification Group Radio Access Network; Evolved Univer-
sal Terrestrial Radio Access Overall description; Stage 2 Release 13,” V13.0.0, 2015.
6
Radio Resource Control and Mobility
Management
The radio resource control (RRC) performs the overall control of radio resourc-
es in each cell and is responsible for collecting and managing all relevant infor-
mation related to the active user equipment (UE) in its area. The RRC protocol
layer [1] works very closely with the layer 2 protocol medium access control
(MAC) inside the UE and the eNodeB. It is part of the LTE air interface control
plane and the brain of the radio access network. The main services and func-
tions of the RRC sublayer include the UE power state management, broadcast
of system information related to the nonaccess and the access stratum, paging,
setup, maintenance, and release of an RRC connection between the UE and
the E-UTRAN, UE measurement reporting and the control of the reporting,
resource scheduling, mobility functions, security, and the QoS management.

6.1 RRC State Model in LTE


The functionality and complexity of RRC has been significantly reduced rela-
tive to that in Universal Mobile Telecommunications Service (UMTS) with
the RRC state model simplified to two states consisting of the idle and the
connected states as defined in [1]. In the RRC idle state, the UE is known in
the evolved packet core (EPC) and has an IP address but is not known in the E-
UTRAN/eNodeB. In the LTE RRC connected state, the UE is known to both
the EPC and the E-UTRAN eNodeB. In this state, the UE has established RRC
connection with the E-UTRAN, its location is known at the cell level, and
mobility is UE assisted and network controlled. The transition time (excluding


downlink paging delay and NAS signaling delay) to the connected state should
be less than 100 ms according to 3GPP TS 36.913. The state transitions be-
tween these two states are shown in Figure 6.1.
These two states define the RRC state machine implemented in the UE
and the eNodeB. Likewise, the EPC [2] maintains two different contexts for the
UE known as the EPC mobility management (EMM) (mobility management
context) and EPC connection management (ECM) (connected management
context), each of which is handled by state machines located in the UE
and the MME. The EMM ensures that the MME maintains the location data
necessary to offer service to the UE when required. The two EMM states main-
tained by the MME are EMM-deregistered and EMM-registered. The ECM
states describe a UE’s current connectivity status with the EPC, for example,
whether an S1 connection exists between the UE and EPC. There are two ECM
states, ECM-idle and ECM-connected. However, these are not the subject of
discussion in this chapter.

6.1.1 RRC Idle State


The idle state refers to the state of terminals that are powered on, have per-
formed public land mobile network (PLMN) selection and network access, but
have not established any RRC context with the eNodeB and therefore have no
C-RNTI assigned. In the idle state, the UE monitors the system information
broadcast on the BCCH, monitors the paging channel to detect incoming calls,
and performs cell selection and reselection and the necessary neighbor cell mea-
surements for that. The MME assigns the UE a tracking area ID (TAID), as
defined in 3GPP TS 36.300 [3], and performs the necessary updates. However,
the precise behavior of the UE when performing these tasks will depend upon
the camped-on cell’s channel configuration and upon the setting of several re-
lated parameters in the system information. In the idle state, the UE is assigned
a TMSI which is used for paging purposes by the mobility management entity
(MME).

Figure 6.1 The LTE RRC state transition model.


6.1.1.1 Cell Selection and Reselection


Prior to cell selection, the UE in the idle state first performs a PLMN selection.
This process is performed by the nonaccess stratum (NAS) and can consider
input from the UE such as priority settings for different PLMNs. The UE
subsequently selects a suitable cell belonging to the selected PLMN and per-
forms registration through the camped-on cell. Once camped on a cell, the UE
continues to reassess the suitability of its serving cell and in some circumstances
its serving network in terms of radio performance and signaling information to
evaluate the need for cell reselection. The radio performance measurements are
done on the basis of radio signal strength and radio signal quality, which is done
for both the serving cell and its neighbors. The objective is to ensure that the
UE is always served by the cell capable to give the most reliable service should
information transfer of any kind be required. The cell reselection measurements
on neighboring cells begin when the serving cell signal strength and signal qual-
ity fall below set thresholds. There are different thresholds defined for intrafre-
quency, interfrequency, and inter-RAT neighbor cell measurements [4].
The UE calculates the cell ranking R based on either the RSRP or the
RSRQ (see Chapter 3) for the serving cell and neighbor cells and selects the cell
with the highest R value. The formulation is defined in [4] as

Rs = Qmeas,s + Qhysts
Rn = Qmeas,n − Qoffsets,n (6.1)

where Qmeas,s and Qmeas,n are either the RSRP or the RSRQ measured by UE for
the serving cell and the neighbor cells, respectively. Qhysts specifies the hysteresis
value for the ranking criteria. Qoffsets,n specifies the offset between the serving cell
and the neighbor cell. If Qhysts is changed, it will impact the reselection relation
between the serving cell and all the neighbor cells. So if only the reselection be-
tween one pair of cells needs to be adjusted, the related offset parameter Qoffsets,n
should be tuned.
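A minimal sketch of the ranking in (6.1), with hypothetical RSRP figures in dBm:

```python
def best_cell(serving, neighbors, q_hyst_db=2.0):
    """Rank cells per (6.1): Rs = Qmeas,s + Qhysts, Rn = Qmeas,n - Qoffsets,n.
    serving: (name, q_meas_dbm); neighbors: list of (name, q_meas_dbm, q_offset_db)."""
    name_s, q_s = serving
    ranked = [(q_s + q_hyst_db, name_s)]                       # serving-cell rank Rs
    ranked += [(q_n - off, name) for name, q_n, off in neighbors]  # neighbor ranks Rn
    return max(ranked)[1]                                      # highest-R cell wins

# The hysteresis keeps the UE on the serving cell until a neighbor is clearly better:
print(best_cell(("A", -95), [("B", -94, 0)]))   # A (-93 vs -94)
print(best_cell(("A", -95), [("B", -90, 2)]))   # B (-93 vs -92)
```

Raising q_hyst_db shifts the reselection point for all neighbors at once, whereas a per-neighbor offset moves only the one pair, as the paragraph above notes.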
The UE also monitors the serving cell system information messages and
paging or notification messages. The system information messages convey all
the cell and system parameters. Any changes in these parameters that may affect
the service level provided by the cell, or access rights to the cell can provoke a
cell reselection or a PLMN reselection. Paging or notification messages result in
connection establishment. The information that the UE uses for cell reselection
includes the frequencies, technologies, and cells that it should consider for rese-
lection. This information is provided in system information blocks (SIBs) 3 to
8 [1]. The specific SIBs used depend on the frequency and technology options
that will be considered. The information regarding which SIBs are used for each
technology and neighbor cell types are provided in [1]. However, the informa-
tion relating to intrafrequency LTE cells is split between SIB Type 3 and SIB
Type 4, and that relating to LTE interfrequency cells are provided in SIB Type
5. Such information may optionally include a neighbor list. For each neighbor
in the neighbor cell list, the physical layer ID and a cell-specific reselection
offset are provided. Note that since the UE is required to scan and detect neigh-
bors on a given frequency, the operator may choose not to include the neighbor
cell list. Additionally, a black cell list may be included as well. Each entry in the
black cell list is either a single physical cell ID or a range of physical cell IDs.
This list can be used by an operator to prevent reselection to cells detected by
the UE on the given interfrequency. SIB Type 7 carries the information for
GSM cells, and it consists of a list of absolute radio frequency channel numbers
(ARFCNs) and an accompanying bit map identifying allowed values for the
NCC element in the BSIC. Thus, unlike a standard GSM neighbor cell list, it
does not list specific BSICs. The SIB Type 8 includes information specific to
frequency layers and reselection parameters for non-3GPP technology.
In the strategy for cell reselection, LTE allows for the use of RAT/fre-
quency prioritization. Each frequency layer belonging to either E-UTRA or any
other radio access technology (RAT) that the UE may be required to measure is
assigned a priority. Priority levels are allocated a value between 0 and 7, where 7
is the highest priority. This priority information is cell-specific and is conveyed
to the UEs via system information messages. The different frequency layers
within LTE may be assigned the same priority, but priorities may not be equal
for different radio access technologies. Additionally, UE-specific values can be
assigned by the user, which then take priority over the values provided in the
system information messages. The UE will then not consider any frequency
layers that do not have a priority in the cell reselection process. These measure-
ment rules are utilized for reducing unnecessary neighbor cell measurements.
The measurements defined depend on the RAT; for example, this would be
RSRP or RSRQ for LTE and RSCP or Ec/No for UMTS. The measurement re-
porting can be set as either periodical or event-based and the setting will include
the details of appropriate events such as thresholds and timers. Each defined
reporting configuration is tagged with a report configuration ID.
The frequency/RAT priority level and system thresholds are used to en-
sure that the UE measures cells in a higher priority layer in the cell reselection
process unless the quality of the currently selected layer becomes unacceptably
poor. The UE will also apply scaling to Treselection, hysteresis, and offset values
dependent on an assessment of its mobility state, which may be high, medium,
or low. The Treselection parameter defines the delay in the reselection to a
better-ranked cell to avoid instability. The mobility state assessment is based
on the recent reselection frequency relative to operator-specified thresholds.
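As an illustration of the priority-based reselection logic described above, the following sketch evaluates candidate frequency layers against the serving layer. The function name, the threshold parameters, and the simple best-candidate rule are assumptions for illustration; they simplify the full TS 36.304 procedure.

```python
# Illustrative sketch of priority-based inter-frequency reselection.
# thresh_high/thresh_low/thresh_serving_low are hypothetical dBm thresholds.

def reselect_layer(serving_prio, serving_rsrp, candidates,
                   thresh_high=-100.0, thresh_low=-110.0,
                   thresh_serving_low=-112.0):
    """Return the (priority, rsrp) layer the UE should reselect to, or None.

    candidates: list of (priority, rsrp) tuples for cells on other layers.
    A higher-priority layer is chosen when its cell exceeds thresh_high.
    A lower-priority layer is considered only when the serving layer has
    dropped below thresh_serving_low and the candidate exceeds thresh_low.
    """
    best = None
    for prio, rsrp in candidates:
        if prio > serving_prio and rsrp > thresh_high:
            if best is None or (prio, rsrp) > best:
                best = (prio, rsrp)
        elif (prio < serving_prio and serving_rsrp < thresh_serving_low
              and rsrp > thresh_low):
            if best is None or (prio, rsrp) > best:
                best = (prio, rsrp)
    return best

# Serving layer at priority 3 and -113 dBm; the higher-priority layer 5
# at -98 dBm exceeds thresh_high and wins:
print(reselect_layer(3, -113.0, [(5, -98.0), (1, -105.0)]))
```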
Radio Resource Control and Mobility Management 131

6.1.1.2 RRC Connected State


In the RRC connected state, the UE maintains an active RRC context with
an eNodeB and is able to transmit and receive data to and from the network.
Its location is known down to the serving-cell level and is assigned a C-RNTI
[2]. The C-RNTI provides a unique UE identification at the cell level with
the RRC connection. In the RRC context establishment process, the eNodeB
contacts the HSS via the MME and receives the security and authentication
vectors for the UE. Therefore in this state, the ciphering and integrity keys will
be in place, but not necessarily any active EPS bearer. In this state, the
UE monitors the control channels associated with the PDSCH to determine
if data are scheduled for it. When the mobile in this state has data waiting for
transmission on the PUSCH, it requests a scheduling grant by composing a
1-bit scheduling request for transmission on the PUCCH. Since this channel is
shared with other mobiles, it cannot always send the request right away. Instead,
it transmits the scheduling request in a subframe that is configured by RRC
signaling, which recurs with a period between 5 and 80 ms.
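The recurring scheduling-request opportunity can be illustrated with a small calculation. The flat period/offset parameterization below is a simplification of the actual sr-ConfigIndex tables and is used here only for illustration.

```python
# Toy calculation of the next scheduling-request (SR) opportunity. An SR
# occasion recurs in subframes satisfying
#   (10*SFN + subframe - offset) mod period == 0.

def next_sr_subframe(sfn, subframe, period_ms, offset_ms):
    """Return (sfn, subframe) of the first SR occasion at or after now."""
    now = 10 * sfn + subframe            # absolute subframe number (1 ms each)
    wait = (offset_ms - now) % period_ms
    target = now + wait
    return target // 10 % 1024, target % 10

# With a 10-ms SR period and offset 3, from SFN 2 / subframe 7 (absolute
# subframe 27) the next occasion is absolute subframe 33 -> SFN 3, subframe 3.
print(next_sr_subframe(2, 7, 10, 3))
```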
In the connected state, the UE performs neighbor cell measurements and
reporting, makes channel measurements, and feeds back the channel quality to
the eNodeB for network controlled handover executions.

6.2 Handovers in LTE


When the UE is in the RRC connected state, intrasystem/intercell and inter-
RAT handovers are used to handle the radio interface mobility. Each eNodeB
is responsible for managing intercell handovers between all the cells it controls
as the access stratum (AS), much of which for UMTS resided in the RNC, is
located in the eNodeB for LTE. The handovers in LTE are of the Break-Before-
Connect (BBC) type, also referred to as hard handover. The BBC handovers
require disconnecting from source eNodeB before establishing connection to
a target cell. When handover to a cell on another eNodeB site is required, the
eNodeB passes the details of the current UE context to its neighbor. This in-
cludes details of identities used, historical measurements taken, and the ac-
tive bearers. LTE supports inter-RAT handovers to and from GSM/GPRS and
UMTS. However, the UMTS RRC connected state has a number of substates
that are not a feature of LTE. Therefore, the state transition between the two
systems in the RRC connected state varies dependent on traffic activity and
direction. Handover is supported both to and from the UMTS CELL_DCH
state from the LTE RRC_Connected state irrespective of packet activity. In the
reverse direction a UE in the UMTS RRC_Connected state but in the substate
CELL_PCH or URA_PCH would return to LTE through cell reselection. Sim-
ilarly, transitions for RRC connected UEs to and from GSM/GPRS are also ef-
132 From LTE to LTE-Advanced Pro and 5G

fected by the traffic or signaling activity. Real-time traffic is likely to be handed


over between LTE and GSM, but for GPRS, operation options for cell change
order (CCO) or CCO with optional network-assisted cell change (NACC) ex-
ist. For all inter-RAT handovers, the handover preparation is done in the target
network side and final handover decision is done in the source network side.

6.2.1 Intrasystem Handover


Intrasystem handover in LTE is used to hand over a UE from a source eNodeB
to a target eNodeB using X2 when the MME element is unchanged. If there
is no X2 interface between the two base stations, the handover is
carried out through the MME, using messages that are exchanged on the S1-C
interface instead of on X2. If the mobile moves into a new MME pool area,
then the S1-based handover procedure is mandatory. Then the current base
station requests a handover by contacting the associated MME, which hands
control of the mobile over to a new MME, and the new MME forwards the
handover request to the new base station. The handover in RRC_Connected
state is UE-assisted but network-controlled. The intrasystem/intercell hando-
ver procedure starts with the measurement reporting of a handover event by
the UE to the serving eNodeB. The UE periodically performs downlink radio
channel measurements on the reference symbols (RS) received power (RSRP)
and the received quality (RSRQ) [5]. If certain network configured conditions
are satisfied, the UE sends the corresponding measurement report indicating
the target cell to which the UE has to be handed over. Based on these measure-
ment reports, the serving eNodeB starts the handover preparation. The hando-
ver preparation involves exchanging of signaling between the serving and target
eNodeBs over the X2 interface and the admission control of the UE in the
target cell [1]. Upon successful handover (HO) preparation, the HO decision is
made, the handover command is sent to the UE, and the connection between
the UE and the serving cell is released. Then the UE attempts to synchronize
and access the target eNodeB, by using the random access channel (RACH).
To speed up the handover procedure, the target cell can allocate a dedicated
RACH preamble through the handover command to the UE [3]. Upon suc-
cessful synchronization at the target eNodeB, an uplink scheduling grant is sent
to the UE, which responds with a handover confirm message and notifies the
completion of the handover procedure in the radio access network.
The handover process is initiated in the UE on the basis of a set of network
configured triggers on measurements reported by the UE itself. The RRC con-
nection reconfiguration message, which replaces the measurement control mes-
sage used in the UMTS system, is used to configure UE measurement report-
ing. The UE measurement reporting can be made periodical or event driven.
The 3GPP standards [1] have defined a set of six trigger events for intrasystem
handover-related measurement reporting, which are based on the RSRP and
the RSRQ defined in Chapter 3. The criteria for triggering and the subsequent
cancelling of each event are evaluated after application of the layer-3 filtering
as configured in the mobility measurement configuration. The criteria for each
event reporting must be satisfied during at least the time to trigger. The time to
trigger can be configured independently for each reporting event and can take
values from the set {0, 40, 64, 80, 100, 128, 160, 256, 320, 480, 512, 640,
1,024, 1,280, 2,560, 5,120} ms. The operator can choose which measurement event to enable
and what decision criteria to use on the reported event to trigger the handover
in the optimization process. By increasing the values for TimeToTrigger and
the Hysteresis values (Hyst), it is possible to delay the handover’s impact for the
handover borders for all neighboring cells. These are part of the optimization
parameters. The layer 3 filtering performed on the measurements of RSRP and
RSRQ is in the form of

Fn = (1 − a ) ∗ Fn −1 + (a ∗ Measn ) (6.2)

where

Fn = updated filtered measurement result.


Fn–1 = previous filtered measurement result.
a = (1/2)^(k/4), where k is the filter coefficient for the corresponding measure-
ment quantity received by the quantityConfig. If k is set to 0, no layer 3
filtering is used.
Measn = latest measurement result received from the physical layer.
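The layer 3 filter in (6.2) can be sketched directly in code; the sample values below are arbitrary and only illustrate the recursion.

```python
# Layer-3 (exponential) filtering of RSRP per F_n = (1-a)*F_{n-1} + a*Meas_n,
# with a = (1/2)**(k/4); k = 4 gives a = 0.5. Sample values are illustrative.

def l3_filter(measurements_dbm, k=4):
    a = 0.5 ** (k / 4.0)                 # forgetting factor from coefficient k
    f = measurements_dbm[0]              # first result taken unfiltered
    out = [f]
    for m in measurements_dbm[1:]:
        f = (1 - a) * f + a * m          # the recursion from (6.2)
        out.append(f)
    return out

print(l3_filter([-100.0, -90.0, -110.0], k=4))
```

With k = 0 the factor a becomes 1 and the filter output simply tracks the latest measurement, matching the "no layer 3 filtering" case.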

The filter coefficient can be configured independently for the RSRP and
the RSRQ. The six events defined as the criteria to choose for intrasystem
handovers are as follows:

• Event A1: Event A1 is triggered when the serving cell becomes better
than a threshold. The event is triggered when the following condition
is true:

Meas_Serv − Hyst > Threshold

Triggering of the event is subsequently cancelled when the following condition is true:

Meas_Serv + Hyst < Threshold



The hysteresis can be configured with a value between 0 and 30 dB.


• Event A2: Event A2 is triggered when the serving cell becomes worse
than a threshold. The event is triggered when the following condition
is true:

Meas_Serv + Hyst < Threshold

Triggering of the event is subsequently cancelled when the following condition is true:

Meas_Serv − Hyst > Threshold

The hysteresis can be configured with a value between 0 and 30 dB.


• Event A3: Event A3 is triggered when a neighboring cell becomes bet-
ter than the serving cell by an offset. The offset can be either positive or
negative. The event is triggered when the following condition is true:

Meas_Neigh + Oneigh,freq + Oneigh,cell − Hyst > Meas_Serv + Oserv,freq + Oserv,cell + Offset

and subsequently cancelled when:

Meas_Neigh + Oneigh,freq + Oneigh,cell + Hyst < Meas_Serv + Oserv,freq + Oserv,cell + Offset

• Event A4: Event A4 is triggered when a neighboring cell becomes better than a threshold.

Meas_Neigh + Oneigh,freq + Oneigh,cell − Hyst > Threshold

Triggering of the event is subsequently cancelled when the following condition is true:

Meas_Neigh + Oneigh,freq + Oneigh,cell + Hyst < Threshold

• Event A5: Event A5 is triggered when the serving cell becomes worse
than threshold 1 while a neighboring cell becomes better than threshold
2. The event is triggered when both of the following conditions are true:

Meas_Serv + Hyst < Threshold 1


Meas_Neigh + Oneigh,freq + Oneigh,cell − Hyst > Threshold 2

Triggering of the event is subsequently cancelled when either of the following conditions is true:

Meas_Serv − Hyst > Threshold 1


Meas_Neigh + Oneigh,freq + Oneigh,cell + Hyst < Threshold 2

• Event A6: Event A6 (introduced in Release 10 for carrier aggregation) is triggered when a neighboring cell becomes better than the secondary serving cell by an offset, where Meas_Serv here refers to the secondary serving cell. The event is triggered when the following condition is true:

Meas_Neigh + Oneigh,cell − Hyst > Meas_Serv + Oserv,cell + Offset

and subsequently cancelled when:

Meas_Neigh + Oneigh,cell + Hyst < Meas_Serv + Oserv,cell + Offset

In the above listed criteria, the following notations have been defined.

• Oneigh,freq is the frequency specific offset of the frequency of the neighbor
cell. For intrafrequency measurements, a measurement object is a single
E-UTRA carrier frequency. Associated with this carrier frequency, the
eNodeB can configure a list of cell specific offsets using this parameter.
• Oneigh,cell is the cell specific offset of the neighbor cell. If not configured,
zero offset is applied (same as cell individual offset).
• Oserv,freq (dB) is the frequency specific offset of the serving frequency.
• Oserv,cell (dB) is the cell specific offset of the serving cell (cell individual offset).

The Meas_Neigh and Meas_Serv are expressed in dBm in case of RSRP or in decibels in case of RSRQ. The other parameters are all expressed
in decibels.
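A minimal sketch of how one of these events (A3) might be evaluated together with hysteresis and the time-to-trigger follows. The offsets are folded into a single value and the fixed sampling cadence is an assumption made for illustration.

```python
# Sketch of event A3 evaluation with hysteresis and time-to-trigger (TTT).
# The entering condition (neigh - hyst > serv + offset) must hold
# continuously for ttt_ms before the event is reported.

def a3_trigger_times(serv, neigh, hyst_db=3.0, offset_db=1.0,
                     ttt_ms=160, sample_ms=40):
    """Return sample times (ms) at which event A3 fires.

    serv, neigh: per-sample filtered measurements (dBm), same length.
    """
    fired, held = [], 0
    for i, (s, n) in enumerate(zip(serv, neigh)):
        if n - hyst_db > s + offset_db:
            held += sample_ms
            if held >= ttt_ms:
                fired.append(i * sample_ms)
                held = 0                  # report once, then restart the timer
        else:
            held = 0                      # condition broken: reset the TTT
    return fired

serv = [-95.0] * 10
neigh = [-96, -92, -90, -90, -90, -90, -97, -90, -90, -90]
print(a3_trigger_times(serv, neigh))
```

Note how the brief dip to −97 dBm resets the time-to-trigger, so the short second excursion never produces a report; this is exactly the mechanism that suppresses reports caused by momentary fading.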

6.2.2 Inter-RAT Handover


LTE supports inter-RAT handovers to and from GSM/GPRS and UMTS. The
mobility procedures for inter-RAT handovers are handled by interactions via
the MME. Once suitable resources are allocated on the target cell, handover
information is forwarded to the source eNodeB, which forwards it to the
UE in a MobilityFromEUTRACommand RRC message. On reception of this
message, the UE changes the RAT mode and implements the new channel as
instructed. Handover acceptance and confirmation after this point are depen-
dent on the RAT concerned. For GSM or UMTS, this will involve the transmission of an RR or RRC handover complete message. For handover from LTE to UMTS, the handover is mostly done through the S3 interface between the
source MME and the target SGSN, and if an SGW relocation is required, the
S5 interface is used as well.
The 3GPP standards [1] have defined a set of two trigger events for inter-
system handover-related measurement reporting, which are events B1 and B2
defined in the following.

• Event B1: This measurement reporting event is triggered when a neighboring intersystem cell becomes better than a threshold. The event is
triggered when the following condition is true:

Meas_Neigh + Oneigh,freq − Hyst > Threshold

and the event is cancelled when the following condition is true:

Meas_Neigh + Oneigh,freq + Hyst < Threshold

• Event B2: This event is triggered when the serving cell becomes worse
than threshold 1 while a neighboring intersystem cell becomes better
than threshold 2. The event is triggered when both of the following
conditions are true:

Meas_Serv + Hyst < Threshold 1


Meas_Neigh + Oneigh,freq − Hyst > Threshold 2

Triggering of the event is subsequently cancelled when either of the following conditions is true:

Meas_Serv − Hyst > Threshold 1


Meas_Neigh + Oneigh,freq + Hyst < Threshold 2

The signal strength or quality measurements, Meas_Serv and Meas_Neigh, are based on the RSRP and RSRQ for LTE, the CPICH RSCP or CPI-
CH Ec/Io for UMTS system, the RSSI for GSM, and the pilot strength for
CDMA2000. The filter coefficient can be configured independently for LTE
RSRP, LTE RSRQ, UMTS CPICH RSCP, UMTS CPICH Ec/Io, and GSM
RSSI.

6.2.3 Handover Performance and Optimization


Since in LTE only hard handover is supported to simplify the network archi-
tecture, both the handover success rate and execution time, which results in
an interruption time in the user plane, are important performance indicators.
The LTE intrasystem handover performance in terms of the handover failure
rate and the overall delay was investigated in [6] based on system simulations
in typical urban propagation environment, with different UE speeds, cell ra-
dii, and traffic loads per cell. The simulations use the 3GPP event A3 and the
RSRP for handover triggering and take into account the entire layer 3 signaling
exchanged via air interface as well as the errors at the Layer 1 control chan-
nels. The results in [6] have shown that for cell radii up to 1 km and for
UE speeds up to 120 km/hr, the HO failure rate lies within the range of 0%
to 2.2% even in highly loaded systems, while for medium and low loads, even at
speeds of 250 km/hr, the handover failure rate is below 1.3%. The results also show
that the handover overall delays fall in the range 84 to 97 ms under 0 to 15% L1
control channel errors, showing robustness against L1 control channel errors.
The simulation results in [6] under various scenarios of load and user
speed as explained show that the 3GPP LTE handover procedure satisfies the
high mobility requirements. However, it is important to understand the criti-
cal parameters and settings that influence the LTE handover performance and
provide a good understanding for optimization in different situations. There
are a number of parameters that influence the handover performance in terms
of handover failures, latency, and reducing the number of unnecessary hando-
ver triggers due, for instance, to the short term and sudden variations in signal
strength caused by shadowing and fast fading.

6.2.4 Impact of Measurement Filtering


The LTE handover measurements are made on the downlink reference symbols
received signal strength RSRP or their quality, the RSRQ, as explained before.
These measurements include lognormal shadowing and fast fading averaged
over all the reference symbols within the measurement bandwidth. This averag-
ing is done at the physical layer and hence we refer to it as the L1 filtering. On
top of this, layer 3 filtering is performed through an infinite impulse response
or equivalently an exponential filtering in the form of

Fn = (1 − a ) ∗ Fn −1 + (a ∗ Measn ) (6.3)

where Measn is the measured quantity (RSRP or RSRQ) after the L1 filtering at
measurement instant n, and Fn is its L3 filtered value at measurement instant
n, as also discussed in Section 6.1.3.1. The parameter a (filter coefficient) de-
termines the relative influence of the recent and the older measurements and
is also called the forgetting factor; it sets the effective filter length. This coefficient can be adaptively chosen depending on the degree of correlation present
in successive measurement samples to average out the fast fading and follow
only the lognormal shadowing. At high speed, for example, the samples are not
highly correlated; therefore, it would be more accurate to have a shorter filtering
length than for slow-speed users in order to follow the lognormal shadowing.
In this way the ping-pong handovers (i.e., the unnecessary handovers) can be
eliminated through sufficient filtering of the handover trigger measurements.
Similar effects are achieved in case of increased downlink bandwidth, which
results in increased layer 1 filtering. In fact, the simulation results presented
in [7] have shown a 30% decrease in the average number of handovers when
the downlink bandwidth is increased from 1.25 MHz to 5 MHz at the mobile
speed of 3 km/hr, with a negligible change in the number of handovers at a
mobile speed of 120 km/hr. The results also show that the gain of using larger
measurement bandwidth at 3 km/hr can also be achieved by using longer L3
filtering period. This improvement in reduced number of handovers has also
resulted in a small reduction, around 0.5 dB in the downlink signal quality (due
to delayed handovers). Nevertheless, the study presented in [7] confirms that
the amount of filtering as set by the filter coefficient a can affect the number of
handovers and the probability of unnecessary as well as delayed handovers and
hence must be tuned up taking into account the mobile speed and the propaga-
tion environment.
The L3 filtering can be performed in either the decibel or the linear do-
main. The filtering is said to be done in decibel or linear domain when the
measurements used are expressed in decibels or linear units, respectively. How-
ever, the results in [7] showed that there is negligible difference observed in the
number of handovers between linear and decibel filtering at slow mobile speeds
of around 3 km/h, but with a small reduction achieved in the number of han-
dovers with linear filtering at the higher speeds of 120 km/hr. The results also
show that better gains are achieved in reducing the number of handovers when
the signal strength RSRP is used instead of the signal quality for the handover
trigger measurements.
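The difference between decibel-domain and linear-domain filtering can be seen in a quick numeric example; the samples are arbitrary, with one deep fade in the middle.

```python
# Numeric illustration of dB-domain vs linear-domain L3 filtering.
# Averaging in dB weights deep fades more heavily, so the dB-filtered value
# comes out lower (more pessimistic) than the linear-domain result.

import math

def filter_db(samples_db, a=0.5):
    f = samples_db[0]
    for m in samples_db[1:]:
        f = (1 - a) * f + a * m
    return f

def filter_linear(samples_db, a=0.5):
    f = 10 ** (samples_db[0] / 10.0)      # convert dBm to mW
    for m in samples_db[1:]:
        f = (1 - a) * f + a * 10 ** (m / 10.0)
    return 10 * math.log10(f)             # back to dBm

samples = [-90.0, -120.0, -90.0]          # one deep fade in the middle
print(round(filter_db(samples), 1), round(filter_linear(samples), 1))
```

The dB-domain result (−97.5 dBm) sits well below the linear-domain result (about −91.2 dBm), illustrating why the choice of filtering domain can shift the handover trigger point.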

6.2.5 Impact of Measurement Hysteresis and Time-to-Trigger Parameters


The success and failure of handovers depend on the decision and the timeliness
to trigger the process and select the target cell. The parameters hysteresis and
time-to-trigger (TTT) heavily influence the triggering instant and the radio
propagation conditions when the handover signaling messages are transmitted.
Hence, they play a major role in the success of the handovers and in reduc-
ing the unnecessary handovers due to short-term and sudden signal variations
caused by fading. The same can be said about the handover threshold parameters as they also determine when the handover is initiated. When the handover
threshold decreases, the probability of a late handover decreases and the ping-
pong effect increases. It can be varied according to different scenarios and prop-
agation conditions to make these trade-offs and obtain a better performance.
The hysteresis margin is the main parameter that governs the HO algo-
rithm between two eNodeBs. The handover is initiated if the link quality of a
neighbor cell is better than that of the serving cell by the hysteresis value. It is
used to avoid ping-pong effects. However, it can increase handover failure since
it can also prevent timely necessary handovers. The TTT acts to reduce the
number of unnecessary handovers and effectively avoid ping-pong effects, but
it can also delay the handover, which then increases the probability of hando-
ver failures. These handover parameters need to be optimized for good perfor-
mance. Too low hysteresis and TTT values in fading conditions result in back
and forth ping-pong handovers between the cells. Too high values can cause call
drops during handovers as the radio conditions get too bad for transmission in
the serving cell. Therefore, the optimal setting depends on UE speed, cell plan,
propagation conditions, and the system load.
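The ping-pong effect of a too-small hysteresis can be illustrated with a toy counter run over a fluctuating serving/neighbor level difference; the trace and thresholds are made up for the example.

```python
# Toy illustration of how hysteresis suppresses ping-pong handovers on a
# fluctuating measurement difference (neighbor minus serving, in dB).

def count_handovers(diff_trace_db, hyst_db):
    """Count handovers when a switch requires |diff| to exceed the hysteresis."""
    on_neigh = False
    handovers = 0
    for d in diff_trace_db:
        if not on_neigh and d > hyst_db:
            on_neigh, handovers = True, handovers + 1
        elif on_neigh and d < -hyst_db:
            on_neigh, handovers = False, handovers + 1
    return handovers

# Difference oscillating around 0 dB due to fading, then a genuine crossing:
trace = [1.0, -1.5, 2.0, -2.5, 1.5, -1.0, 4.0, 5.0, 6.0]
print(count_handovers(trace, 0.0), count_handovers(trace, 3.0))
```

With zero hysteresis every fade-induced crossing triggers a handover (seven in this trace); a 3-dB margin leaves only the single genuine handover at the end.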
The impact of the hysteresis and TTT parameters on the handover
performance have been investigated for the most common 3GPP scenarios
through simulations based on the downlink RSRP measurements in [8, 9]. In
the simulations, the optimal settings for each scenario have been investigated
and performance evaluation carried out using the number of handovers, SINR,
throughput, delay, and packet loss. The results have confirmed that with rela-
tively large cells, smaller hysteresis and a larger TTT trigger more handovers.
This is expected as the larger hysteresis margin would be harder to meet for
bigger cells. For slow-moving UEs, a setting with small hysteresis and long
TTT makes it easier to trigger handovers, since it would take a long time for a slow UE to
meet a large hysteresis condition. The results in [8] have shown that for UE
speeds of 3 km/hr and 30 km/hr, the setting of 3 dB and 960 ms for the hys-
teresis and TTT and a cell radius of 500 m provide good performance, whereas
this changes to 6 dB and 960 ms for the UE speed of 120 km/hr. However, the
study concludes that some medium triggering settings, such as 3 dB and 960
ms, usually result in good performances for all the scenarios and can be widely
used as a simple way to improve the handover performance. Nevertheless, a very
important factor that influences the handover performance is the UE speed.
For high-speed railway networks, trains go through the cells frequently and
will perform handover frequently. That can increase call drops and handover
failure rate. A SON-based algorithm is presented in [10], which picks the best
hysteresis and time-to-trigger combination for the changing radio propagation
condition due to UE speed and the environment. The results show an improve-
ment from the static value settings.

6.2.6 Impact of RLC/MAC Protocol Resets and Packet Forwarding


In a handover, the layer 2 protocol endpoints need to be moved from
the source to the target eNodeB. It is an implementation option whether to reini-
tialize the protocol state and discard the HARQ window state (which is part of
the RLC/MAC) after handover or transfer the full protocol status to the target
node. Since it would be overly complex and not always feasible to transfer the
whole protocol state, it is normally assumed that the RLC/MAC protocols are
reset after a handover. This results in a loss of some MAC PDUs, which means
the corresponding SDU cannot be delivered to the upper layers even though
some of its PDUs may have been received and acknowledged by the UE. In
this case, either the serving eNodeB or the TCP source end needs to retransmit
the SDU to the target eNodeB. The option is left to vendor implementation.
However, the option to have packet forwarding from the serving node to the
target node will make a significant difference in the throughput efficiency de-
pending on the operating radio link bit rate. For lower radio link rates around
2 Mbps, the lack of packet forwarding between eNodeBs will not significantly
degrade the data throughput, as TCP normally requires two times the band-
width-delay product (see Chapter 11) to be able to fully utilize the link rate.
This is different at higher link rates, such as those approaching 20 Mbps, as shown in
[11], with a much higher bandwidth-delay product and the massive packet losses
that can occur in the radio interruption time (L1/L2 access procedures), which
can take around 30 ms, and significant reductions to the TCP window size in
reaction to the losses. In that case, it is recommended to have packet forward-
ing from the source to the target cell and to ensure the correct delivery order of
packets in order to maintain high TCP throughput performance. With packet
forwarding, the throughput can be kept at the maximum available link rate to
achieve high link utilization.
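The bandwidth-delay product argument can be checked with simple arithmetic; the 50-ms round-trip time used below is an assumed, illustrative value.

```python
# Back-of-the-envelope bandwidth-delay product (BDP) check. TCP needs roughly
# two times the BDP in flight to keep a link full, so losing the in-flight
# window during a ~30-ms handover interruption hurts far more at 20 Mbps
# than at 2 Mbps.

def bdp_bytes(link_rate_mbps, rtt_ms):
    # rate [bit/s] * RTT [s] / 8 [bit/byte]
    return link_rate_mbps * 1e6 * rtt_ms / 8000.0

for rate in (2, 20):
    bdp = bdp_bytes(rate, rtt_ms=50)
    print(f"{rate} Mbps: BDP = {bdp:.0f} B, 2 x BDP = {2 * bdp:.0f} B")
```

At 2 Mbps the window to refill after a handover is only about 12.5 KB, while at 20 Mbps it is ten times larger, which is why packet forwarding matters much more at the higher rate.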

6.2.7 E-UTRAN to E-UTRAN Handover Delay (FDD)


The handover delay, Dhandover, for intrafrequency or interfrequency handovers
equals the maximum RRC procedure delay plus the interruption time. The
RRC procedure delay is given in clause 11.2 of 3GPP TS 36.331 and is about
20 ms. Dhandover is defined to be the time from the end of the last TTI contain-
ing the RRC handover command message (to the UE) until when the UE is
ready to start the transmission of the new uplink PRACH.
The interruption time is the time between end of the last TTI containing
the RRC command on the old PDSCH and the time the UE starts transmission
of the new PRACH, excluding the RRC procedure delay. This requirement ap-
plies when UE is not required to perform any synchronization procedure before
transmitting on the new PRACH.
When intrafrequency or interfrequency handover is commanded, the maximum interruption time is given by

Tinterrupt = Tsearch + TIU + 20 ms (6.4)

where Tsearch is the time required to search the target cell when the target cell is
not already known when the handover command is received by the UE. If the
target cell is known, then Tsearch = 0 ms. If the target cell is unknown and signal
quality is sufficient for successful cell detection on the first attempt, then Tsearch
= 80 ms. Regardless of whether DRX is in use by the UE, Tsearch is still based on
non-DRX target cell search times.
TIU is the interruption uncertainty in acquiring the first available PRACH
occasion in the new cell. TIU can be up to 30 ms. The actual value of TIU will
depend upon the PRACH configuration used in the target cell.
In the above, a cell is known if it has been meeting the relevant cell iden-
tification requirement during the last 5 seconds; otherwise, it is unknown [12].
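Equation (6.4) with the bounds stated above gives the following worst-case figures; the 30-ms TIU maximum is assumed here (the actual value depends on the target cell's PRACH configuration).

```python
# Worst-case interruption time from (6.4): Tinterrupt = Tsearch + TIU + 20 ms.
# Tsearch is 0 ms for a known target cell and 80 ms for an unknown one
# (single-attempt detection); TIU is taken at its 30-ms maximum.

def interruption_ms(target_cell_known, t_iu_ms=30):
    t_search = 0 if target_cell_known else 80
    return t_search + t_iu_ms + 20

print(interruption_ms(True), interruption_ms(False))
```

That is, 50 ms for a known target cell versus 130 ms for an unknown one, which shows why keeping candidate cells "known" through recent measurements pays off.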

6.2.8 E-UTRAN to UTRAN Handover Delay


The inter-RAT handover from E-UTRAN (i.e., LTE) to UTRAN (i.e., 3G
UMTS) FDD is to change the radio access mode from E-UTRAN to UTRAN
FDD. The handover procedure is initiated from E-UTRAN with an RRC message that implies a hard handover. When the UE receives an RRC message implying handover to UTRAN, the UE starts the transmission of the new UTRA uplink DPCCH within the handover delay from the end of the last E-UTRAN TTI containing the RRC MOBILITY FROM E-UTRA command.
The handover delay for E-UTRAN to UTRAN FDD handover is defined as the RRC procedure delay (50 ms) plus the interruption time (Tinterrupt). The interruption time is the time between the end of the last TTI
containing the RRC command on the E-UTRAN PDSCH and the time the
UE starts transmission on the uplink DPCCH in UTRAN FDD, excluding the
RRC procedure delay. The interruption time depends on whether the target cell
is known to the UE or not. The target cell is known if it has been measured by
the UE during the last 5 seconds; otherwise, it is unknown. The UE shall always
perform a UTRA synchronization procedure as part of the handover procedure
[12]. The interruption time should be less than 300 ms for the real-time ser-
vices and 500 ms for nonreal-time services according to 3GPP TS 25.913, and
is given by

Tinterrupt = Tiu + Tsynch + 50 + 10 ∗ Fmax ms if target cell is known (6.5)



Tinterrupt = Tiu + Tsynch + 150 + 10 ∗ Fmax ms if target cell is not known (6.6)

Tiu is the interruption uncertainty when changing the timing from E-UTRAN to the new UTRAN and can last up to 10 ms, and Fmax is the maximum number of radio frames within the transmission time that are multiplexed into the same CCTrCH on the UTRA target cell. The length of the Transmission Time Interval (TTI) determines the value of Fmax. As per UMTS Release 99, the TTI can have values of 10 ms, 20 ms, 40 ms, or 80 ms, which are multiples of the standard 10-ms radio frame. As defined, Fmax can then take values of 1, 2, 4, or 8 (i.e., TTI/radio frame length). This concept is used for
radio frame segmentation and multiplexing to form the CCTrCH. The term Tsynch is the time required for measuring the downlink DPCCH channel. It usually has a value of 40 ms, but higher layers can indicate that the postverification period is used, making Tsynch = 0 ms.
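Equations (6.5) and (6.6) can be evaluated for a few cases as a sketch, taking Tiu at its 10-ms maximum.

```python
# Interruption time per (6.5)/(6.6). Fmax = TTI / 10 ms; Tsynch is 40 ms
# unless post-verification is used (then 0 ms).

def utran_interrupt_ms(target_known, tti_ms=10,
                       post_verification=False, t_iu_ms=10):
    f_max = tti_ms // 10                  # TTI of 10/20/40/80 ms -> 1/2/4/8
    t_synch = 0 if post_verification else 40
    base = 50 if target_known else 150
    return t_iu_ms + t_synch + base + 10 * f_max

# Known target with a 10-ms TTI: 10 + 40 + 50 + 10 = 110 ms;
# unknown target with an 80-ms TTI: 10 + 40 + 150 + 80 = 280 ms.
print(utran_interrupt_ms(True), utran_interrupt_ms(False, tti_ms=80))
```

Both cases stay under the 300-ms bound for real-time services, and enabling post-verification shaves a further 40 ms off each figure.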
In older 3GPP releases, L1 DCH synchronization is unnecessarily delayed
by a mandatory 40-ms DL quality check, which results in a value of 40 ms
for Tsynch. However, with the use of the postverification period, the physical
channel is considered established on L1 (but not yet on L3), removing the un-
necessary 40-ms quality delay check [13]. The physical channel establishment
procedure with postverification period is detailed here.
When a physical dedicated channel establishment is initiated by the UE,
the UE starts the timer T312, and if the IE postverification is set to TRUE, the
UE performs the following:

1. Indicate to layer 1 that the physical channel is established.


2. Still consider the physical channel as not being established at Layer 3.
3. If after 40 ms no postverification failure is indicated from lower layers,
the physical channel is considered to be established at layer 3.
4. If after 40 ms a postverification failure is indicated from lower layers:
a. Wait for layer 1 to indicate N312 in sync indications. On receiv-
ing N312 in sync indications, the physical channel is considered
established at layer 3 and timer T312 is stopped and reset.
b. If the timer T312 expires before the physical channel is established
at layer 3, consider the physical channel as not being established at
layer 1 and consider this as a physical channel failure.
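The post-verification outcome described in steps 1 to 4 can be condensed into a small decision function; the inputs are simulated flags rather than real L1 indications, and the N312 counting is reduced to a single flag for brevity.

```python
# Sketch of the post-verification outcome: established at L3 either when no
# failure is indicated within 40 ms, or when N312 in-sync indications arrive
# before timer T312 expires.

def l3_established(failure_at_40ms, n312_in_sync_before_t312):
    """Return True if the physical channel ends up established at layer 3."""
    if not failure_at_40ms:
        return True                       # no failure within 40 ms
    # Failure indicated: success depends on N312 in-sync before T312 expiry.
    return n312_in_sync_before_t312

print(l3_established(False, False),
      l3_established(True, True),
      l3_established(True, False))
```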

The handover delay estimations for handover from E-UTRAN to UTRAN TDD and to GSM are given in 3GPP TS 36.133 [11].

6.3 Paging and Resource Allocation


This function provides the capability to request a UE to contact the E-UTRAN when the UE is in the ECM_IDLE state, or to notify it of an incoming warning message (PWS) when the UE is in the ECM_CONNECTED state [2]. Paging in an LTE network is generally used to inform and notify the UE about various events. The purposes of the paging procedure include notifying the UE of an incoming call in RRC_Idle mode, informing the UE in RRC_Idle and RRC_Connected mode about a system information change, warning the UE through the Earthquake and Tsunami Warning System (ETWS), and informing the UE about the Commercial Mobile Alert Service (CMAS).
The paging cycle defines the period over which paging messages are spread and enables discontinuous reception (DRX) in the idle mode in order to reduce power consumption. A given UE will have one and only one paging oc-
casion (PO) during the paging cycle. If the occasion is missed (the S1 paging
arrives after the occasion or too many paging messages buffered for the given
paging occasion), the eNodeB buffers the paging until the next paging cycle.
One PO is a subframe where a P-RNTI is transmitted on the PDCCH address-
ing the paging message. When DRX is used, the UE needs only to monitor one
PO per DRX cycle. The length of the paging cycle is a trade-off between the
mobile terminating call establishment delay (the shorter the cycle the faster the
mobile is paged) and the paging capacity (the longer the cycle, the more paging
occasions are provided). The DRX paging cycle is denoted by T in the 3GPP
specifications [3] and can be configured to 32, 64, 128, or 256 radio frames
by upper layers and received in the S1 Paging message. Otherwise the default
value (defaultPagingCycle) broadcast in the system information, SIB2, is used.
E-UTRAN initiates the paging procedure by transmitting the paging message
in the UE’s paging occasion. E-UTRAN may address multiple UEs within a
paging message by including one PagingRecord for each. One paging frame
(PF) is one radio frame, which may contain one or multiple paging occasions, and satisfies SFN mod T = (T div N) × (UE_ID mod N).
The ratio of paging occasions to the number of radio frames, also called the paging density, is defined by the parameter nB. This parameter is expressed as a multiple or divisor of the paging cycle T and can take the values 4T, 2T, T, T/2, T/4, T/8, T/16, and T/32. If, for example, nB is set to 2T, there will be two paging occasions in each radio frame; if it is set to T/4, there will be one paging occasion every four radio frames. The trade-off here is between radio resources (the smaller nB is, the fewer radio resources may be consumed for paging) and paging capacity (the bigger nB is, the more paging occasions there are for a given paging cycle). The parameter nB is transmitted in the system information, SIB2.
144 From LTE to LTE-Advanced Pro and 5G

The three parameters T, nB, and UE_ID (equal to the UE IMSI modulo 1,024) are used to create time diversity for the sending of paging messages. In other words, they spread out in time the opportunities for paging and in this way limit scheduling conflicts while allowing the UEs to go into DRX mode and reduce their power consumption. The UE_ID splits the UE population into groups with identical paging occasions. All UEs with the same UE_ID (defined as IMSI modulo 1,024) are paged within the same unique paging occasion. However, a given paging occasion is always shared by at least four UE_ID groups, and generally many more (in the worst case, all UEs share a single paging occasion).
The paging occasion is determined by an index called i_s, which points to the PO in the subframe patterns defined in Tables 6.1 and 6.2 for the LTE FDD and LTE TDD modes, respectively.
In these tables, i_s, N, and Ns are calculated from the following formulas:
N = min(T, nB)
Ns = max(1, nB/T)                                             (6.7)
i_s = floor(UE_ID/N) mod Ns
UE_ID = IMSI mod 1,024

The IMSI is given as a sequence of digits of type Integer (0..9) and in the
formula above is interpreted as a decimal integer number, where the first digit
given in the sequence represents the highest order digit. The System Informa-
tion DRX (discontinuous reception mode) parameters, T and nB, stored in
the UE are updated locally in the UE whenever they are changed in the system
information messages. If the UE has no IMSI, for instance, when making an

Table 6.1
Subframe Pattern to Determine PO in LTE FDD Mode

Ns   PO when i_s = 0   PO when i_s = 1   PO when i_s = 2   PO when i_s = 3
1    9                 N/A               N/A               N/A
2    4                 9                 N/A               N/A
4    0                 4                 5                 9

Table 6.2
Subframe Pattern to Determine PO in the LTE TDD Mode

Ns   PO when i_s = 0   PO when i_s = 1   PO when i_s = 2   PO when i_s = 3
1    0                 N/A               N/A               N/A
2    0                 5                 N/A               N/A
4    0                 1                 5                 6

emergency call without a USIM, the UE uses the default identity UE_ID = 0 in the formulas above.
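As an illustration, the computations in (6.7) together with the PF rule and Table 6.1 can be coded as follows; the function and table names are hypothetical, and only the FDD subframe pattern is transcribed.

```python
# Illustrative computation of the paging frame (PF) offset and paging
# occasion (PO) from T, nB, and the IMSI, following (6.7) and Table 6.1.
from math import floor

# Table 6.1: PO subframe per Ns, indexed by i_s (LTE FDD mode)
PO_TABLE_FDD = {
    1: [9, None, None, None],
    2: [4, 9, None, None],
    4: [0, 4, 5, 9],
}

def paging_frame_and_occasion(imsi, T, nB):
    """Return (PF offset within the cycle, PO subframe) for one UE."""
    ue_id = imsi % 1024                 # UE_ID = IMSI mod 1,024
    N = min(T, nB)
    Ns = max(1, nB // T)                # Ns = max(1, nB/T); 1 when nB < T
    pf = (T // N) * (ue_id % N)         # radio frames with SFN mod T == pf
    i_s = floor(ue_id / N) % Ns
    po = PO_TABLE_FDD[Ns][i_s]
    return pf, po
```

For example, with T = 128, nB = T, and IMSI 1234 (UE_ID = 210), the UE is paged in frames with SFN mod 128 = 82, subframe 9.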

6.4 DRX
In addition to the DRX for UEs in the idle mode, LTE also supports DRX
for UEs in the RRC connected state. The DRX function is characterized by a
DRX cycle, an on-duration period, and an inactivity timer. The UE wakes up
and monitors the PDCCH at the beginning of every DRX cycle for the entire
on-duration period. If no scheduling assignment is received, the UE falls asleep
again. Whenever the UE receives an assignment from the network, it starts (or
restarts) the inactivity timer and continues to monitor the PDCCH until the
timer expires. This process is controlled by the MAC and RRC protocols. The
parameters are set by RRC but it is the MAC layer that operates the process
itself. The onDurationTimer defines the length of time that the UE is active
and monitoring the downlink control channels when DRX is running. This
operates in conjunction with a DRX cycle that defines the amount of time that
the UE can be off. It is to be noted that the HARQ operation overrides the
DRX function. Thus, the UE wakes up for possible HARQ feedback, as well as
for possible retransmissions during a configurable amount of time as soon as a
retransmission can be expected.
There are two DRX cycles defined for a UE known as the long DRX-Cy-
cle and the short DRX-Cycle. The long DRX-Cycle is the default value. When
a period of activity is started through the scheduling of resources for the UE’s
C-RNTI, the UE starts the DRX-InactivityTimer. If the UE remains active
long enough for the DRX-InactivityTimer to expire, or if it receives a MAC CE
on which it may have to act, then when the activity stops, the UE uses the short DRX-Cycle period and also starts the DRXShortCycleTimer. If no further activity takes place before the DRXShortCycleTimer expires, the UE reverts to the long DRX-Cycle period.
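A minimal sketch of these short/long cycle transitions is shown below, with a per-subframe tick and illustrative timer units; this is an abstraction of the behavior described above, not the MAC-spec encoding.

```python
# Toy model of the long/short DRX cycle transitions in connected mode.
# Timer values are in subframes; names mirror the text but are illustrative.

class DrxUe:
    def __init__(self, inactivity_timer, short_cycle_timer):
        self.inactivity_timer = inactivity_timer    # drx-InactivityTimer
        self.short_cycle_timer = short_cycle_timer  # DRXShortCycleTimer
        self.cycle = 'long'        # the long DRX-Cycle is the default
        self._inactivity = 0
        self._short_left = 0

    def on_grant(self):
        # Scheduling for the UE's C-RNTI (re)starts the inactivity timer.
        self._inactivity = self.inactivity_timer

    def tick(self):
        # Called once per subframe.
        if self._inactivity > 0:
            self._inactivity -= 1
            if self._inactivity == 0:
                # Activity stops: switch to the short cycle, start its timer.
                self.cycle = 'short'
                self._short_left = self.short_cycle_timer
        elif self.cycle == 'short':
            self._short_left -= 1
            if self._short_left <= 0:
                # No further activity: revert to the long DRX cycle.
                self.cycle = 'long'
```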

6.5 Uplink Power Control and Optimization Parameters


In LTE, the orthogonality between the subcarriers is expected to basically elim-
inate interuser interference. This requires perfect alignment of transmissions
from the different terminals in the eNodeB receivers. It is therefore necessary
to control the uplink transmission timing of each terminal, which is performed by the transmission-timing control function. This adjusts the transmit timing of each terminal to ensure that uplink transmissions arrive approximately time aligned at the base station. However, imperfect timing adjustments as well as frequency errors and Doppler shifts will cause transmissions from the different terminals to arrive at the base station with some timing misalignment; if the misalignment exceeds the length of the cyclic prefix, the orthogonality between subcarriers is lost, causing some interuser interference. Such interuser interference will worsen when the different users' transmissions are received at the base station with different power levels due to the different path losses from terminals in different locations and with different radio conditions. If two terminals with different radio link conditions transmit with the same power, the received signal strengths may thus differ significantly, causing potentially significant interference from the stronger signal to the weaker signal unless the subcarrier orthogonality is perfectly retained. To avoid this, at least some degree of transmission-power control is needed on the uplink to, for instance, reduce the transmission power of user terminals close to the base station and ensure all user signals are received at the base station with approximately the same level as needed. Hence, LTE provides for uplink power control on both the PUSCH and the PUCCH.

6.5.1 Power Control on PUSCH


The uplink power control on the PUSCH is based on the following and pro-
vides parameters that the operator can set to obtain some degree of control on
the UE transmitted power in different scenarios.

• Open-loop power control with a slow aperiodic closed-loop correction factor;
• Fractional path loss compensation with a path loss compensation factor;
• Power control command embedded in the UL scheduling grant in the PDCCH;
• Accumulated UE-specific closed-loop correction.

The LTE uplink power control equation for the PUSCH, in simplified form, is abstracted from [14] and is expressed as

P = min{Pmax − PPUCCH, P0 + 10 log10(M) + α·PL + ∆MCS + ƒ(∆i)}   (6.8)

where Pmax is the maximum UE power, which is 23 dBm, and P0 is the required received power at the eNodeB in a single resource block for a reference modulation and coding scheme, which is broadcast to the UE and used for the initial power setting; this is cell-defined and will generally depend on the instantaneous uplink noise/interference level and therefore vary with time. M is the number of resource blocks, α is a cell-specific parameter between 0 and 1 that enables the use of fractional power control and is broadcast to the UE, PL is the estimated downlink path loss calculated at the UE, and ∆MCS is an MCS-dependent offset from the P0 value required for the actual MCS being used, which also depends on the transport format selected.
ƒ(∆i) is a function that applies relative, cumulative, or absolute corrections to the UE power. The UE calculates the transmit power to be used in each subframe in which it has a resource allocation according to the above formula.
The use of Pmax − Ppucch reflects the fact that the transmit power available
for PUSCH on a carrier is the maximum allowed per-carrier transmit power
after power has been assigned to any PUCCH transmission on that carrier [14].
This ensures priority of L1/L2 signaling on PUCCH over data transmission on
PUSCH in the power assignment. The term 10·log10(M) is used to scale the controlled power per resource block, P0, to the power required to transmit the total number of resource blocks contained within the PUSCH. The initial open-loop path loss (PL) is estimated at the UE using the transmitted reference symbols. The closed-loop component is contained in ∆i, where ∆i = (SINRTarget − SINREstimated). At the eNodeB, the SINR is measured and the UE transmit power is adjusted to meet the target SINR. Adjustments are sent to the UE via transmit power control (TPC) commands.
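Under the simplified form of (6.8), the per-subframe PUSCH power can be sketched as follows; the function name is hypothetical and all numeric inputs (in dB/dBm) are illustrative.

```python
# Open-loop PUSCH power calculation per the simplified (6.8).
# Inputs in dB/dBm; names mirror the text, values are illustrative.
import math

def pusch_tx_power_dbm(p_max=23.0, p_pucch=0.0, p0=-90.0, n_rb=10,
                       alpha=0.8, path_loss=110.0, delta_mcs=0.0, f_i=0.0):
    # P0 is per resource block; 10*log10(M) scales it to M resource blocks.
    open_loop = p0 + 10 * math.log10(n_rb) + alpha * path_loss
    # PUCCH power is assigned first; PUSCH gets what remains up to Pmax.
    return min(p_max - p_pucch, open_loop + delta_mcs + f_i)
```

With the defaults above (P0 = −90 dBm, 10 RBs, α = 0.8, PL = 110 dB), the open-loop term is −90 + 10 + 88 = 8 dBm, well under the 23-dBm cap.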
The variation of the parameters P0 and α provides a trade-off between
absolute cell performance and overall system performance. The initial power
setting P0 is composed of an 8-bit cell specific nominal component P0Nominal
and a 4-bit UE specific value P0UE. These two parameters and the path loss
compensation parameter α are key RF optimization parameters. Higher settings of the P0 parameters improve PUSCH reception but result in higher UE transmit power, leading to more interference to neighboring cells and vice
versa. The parameter α can take a value of 1 or smaller, in which case the power control operates with partial path loss compensation. With partial path loss compensation, an increased path loss is not fully compensated for by a corresponding increase in the uplink transmit power. That would then result in inadequate SINR at the receiver, which, in turn, causes the receiver to vary the scheduled modulation-coding scheme accordingly. This means that partial PL compensation favors the UEs that are in good radio propagation conditions and allows them to obtain a higher spectral efficiency, whereas less power is spent on UEs in less favorable conditions, thereby mitigating the interference to UEs in neighbor cells [15]. In the case of partial path loss compensation, the ∆MCS term should then be disabled to prevent a further reduction of the terminal power, which would otherwise occur as the base station would try to reduce the offset to a value consistent with the reduced modulation-coding scheme.
The reduced modulation-coding scheme with partial path loss compensation results in reduced data rates in places such as the cell border. However, the benefit is a relatively lower transmit power for terminals close to the cell border and hence less interference to other cells. Nevertheless, a similar effect can be achieved with full path-loss compensation by relying on a power control headroom and reducing the relative transmit power for terminals with higher path loss by lowering the modulation-coding rate (and hence lowering the offset ∆MCS).
If a UE were allocated an uplink bandwidth that resulted in a calculated power higher than the allowed maximum Pmax (that is, higher than 23 dBm) in (6.8), the UE would be unable to use the full resource. To avoid that, the
UE sends power headroom reports to the eNodeB. These represent the UE’s
estimate of its power control requirements in the current subframe, and based
on this, the eNodeB can schedule the subcarrier resources efficiently between
UEs in a cell. The eNodeB uses the power headroom reports to determine how
much uplink bandwidth per subframe a UE is capable of using. This can help
to avoid allocating uplink transmission resources to UEs which are unable to
use them. The power headroom is rounded to the closest value in the range −23
to 40 dB with steps of 1 dB and is delivered by the physical layer to the higher
layers.

6.5.2 Power Control on PUCCH


Power control is also implemented on the PUCCH to help guarantee the required error rates on the control channel. Here, however, a full path loss compensation factor is used (α = 1), and the power control loop tries to adjust the transmit power to achieve the required target SINR on the channel. The parameter that the operator may use for adjustment is basically the maximum power increment on each adjustment [16]. The power control for the PUCCH can be expressed as

Ppucch = min{Pmax, P0,PUCCH + PLDL + ∆format + δ}   (6.9)

in which Ppucch is the transmit power to use in a given subframe, P0,PUCCH is a cell-specific parameter that is broadcast as part of the cell system information, PLDL is the downlink path loss as estimated by the terminal from downlink measurements, Pcmax,c is the per-carrier maximum power limit to use (in a carrier-aggregated LTE scenario as defined in LTE specifications Release 10), and the term δ is an explicit network power control command that can be set by the operator. The power control commands δ are accumulative, as each received power-control command increases or decreases the term δ by a certain amount. The min{Pmax, …} term ensures that the PUCCH transmit power
as determined by the power control will not exceed the per-carrier maximum
power Pmax. The term ∆format is an offset used to account for the different SINR
requirements for different PUCCH formats. The power offsets are defined such
that a baseline PUCCH format, specifically the format corresponding to the transmission of a single hybrid-ARQ acknowledgment (format 1 with BPSK modulation), has an offset equal to 0 dB, while the offsets for the remaining formats are explicitly configured by the network.
The required received power P0,PUCCH at the eNodeB, as discussed earlier,
is broadcast by the network and will generally depend on the instantaneous
uplink noise/interference level and therefore vary with time. However, the mea-
sured P0,PUCCH may in practice reflect the average interference or a relatively
constant noise level. Moreover, the uplink path loss estimated from the downlink path loss, PLDL, will not be accurate, not to mention the measurement inaccuracies. Therefore, for the network to directly adjust the PUCCH transmit power to the correct value, the term δ is provided to close the power control loop and provide explicit power control commands, which are cumulative over time.
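The accumulative closed-loop term δ in (6.9) can be illustrated with a small sketch; the ±1 dB step, the function name, and the input values are assumptions for illustration only.

```python
# Sketch of the accumulative closed-loop term delta in (6.9): each TPC
# command nudges the accumulated correction, and the resulting PUCCH
# power is capped at Pmax. Step size and names are illustrative.

def pucch_power_dbm(p0_pucch, pl_dl, delta_format, tpc_commands,
                    p_max=23.0, step_db=1.0):
    g = 0.0                      # accumulated closed-loop correction (delta)
    for cmd in tpc_commands:     # e.g. +1 (up) or -1 (down) per command
        g += cmd * step_db
    # min{Pmax, ...} caps the power at the per-carrier maximum.
    return min(p_max, p0_pucch + pl_dl + delta_format + g)
```

For example, with P0,PUCCH = −100 dBm, PLDL = 110 dB, a 0-dB format offset, and commands +1, +1, −1, the transmit power is 11 dBm.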

6.6 CQI Measurement and Reporting


Channel quality measurement is used by link adaptation, which is an important part of the LTE air interface and involves varying the modulation and coding schemes to maximize the throughput on the air interface. The CQIs reported by the UE are used by the eNodeB for link adaptation on the downlink. The UE assesses the quality of the downlink signal through measurements of the received signal and consideration of the error correction scheme. It then calculates the maximum modulation and coding scheme that it estimates will maintain an error rate better than 10%. This is indicated to the eNodeB as a CQI value [14]. This index ranges from 0 to 15, and its interpretation as the proper modulation and coding scheme is given in Table 6.3. This table is also useful for estimating the likely physical layer throughput in a given radio configuration, as discussed in detail in Chapter 3.
The CQI reporting for UEs can be configured in several ways [16]. The
reporting can be set as periodic or aperiodic. For periodic reporting, the CQI
is carried in the PUCCH at regular intervals that can be configured between 2
ms and 160 ms. For aperiodic reporting, the CQI is transmitted in the PUSCH
only after a specific request from the eNodeB is received in the PDCCH scheduling information.

Table 6.3
The CQI and Its Mapping to MCS
CQI Index   Modulation   Code Rate × 1,024   Efficiency
0 Out of range
1 QPSK 78 0.1523
2 QPSK 120 0.2344
3 QPSK 193 0.3770
4 QPSK 308 0.6016
5 QPSK 449 0.8770
6 QPSK 602 1.1758
7 16QAM 378 1.4766
8 16QAM 490 1.9141
9 16QAM 616 2.4063
10 64QAM 466 2.7305
11 64QAM 567 3.3223
12 64QAM 666 3.9023
13 64QAM 772 4.5234
14 64QAM 873 5.1152
15 64QAM 948 5.5547
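Table 6.3 can be encoded as a simple lookup, useful for rough link-throughput estimates; the dictionary below transcribes the table, with efficiency in bits per resource element, and the helper name is illustrative.

```python
# Table 6.3 as a lookup: CQI index -> (modulation, code rate x 1024,
# efficiency in bits per resource element). Index 0 means out of range.

CQI_TABLE = {
    1:  ('QPSK',   78, 0.1523),  2:  ('QPSK',  120, 0.2344),
    3:  ('QPSK',  193, 0.3770),  4:  ('QPSK',  308, 0.6016),
    5:  ('QPSK',  449, 0.8770),  6:  ('QPSK',  602, 1.1758),
    7:  ('16QAM', 378, 1.4766),  8:  ('16QAM', 490, 1.9141),
    9:  ('16QAM', 616, 2.4063),  10: ('64QAM', 466, 2.7305),
    11: ('64QAM', 567, 3.3223),  12: ('64QAM', 666, 3.9023),
    13: ('64QAM', 772, 4.5234),  14: ('64QAM', 873, 5.1152),
    15: ('64QAM', 948, 5.5547),
}

def spectral_efficiency(cqi):
    """Bits per resource element for a reported CQI (0 = out of range)."""
    if cqi == 0:
        return 0.0
    return CQI_TABLE[cqi][2]
```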

The CQI feedback may be configured on a wideband, eNB-configured subband, or UE-selected subband basis. For the wideband feedback option, the
reported CQI value is based on an assessment across the whole system band-
width. For both subband feedback modes, subbands are defined across the sys-
tem bandwidth as groups of consecutive RBs. The size and number of subbands
is fixed and is dependent on the total system bandwidth and the feedback mode
in use. For the eNB-configured subband feedback mode the UE reports the
wideband CQI and then each subband CQIs as relative offset values. For the
UE-selected subband feedback mode, the UE selects a set of preferred subbands
from the total available subbands and indicates their positions to the eNodeB.
Then it reports an average CQI value for these preferred subbands along with
a wideband CQI value.
All three options are available for aperiodic reporting, but only the wide-
band feedback and the UE-selected subband feedback can be configured for the
periodic reporting.

References
[1] 3GPP TS 36.331, “Evolved Universal Terrestrial Radio Access (E-UTRA); Radio Resource
Control (RRC); Protocol Specification, Release 8, version 8.4.0,” December 2008.
[2] 3GPP TS 23.401, “Evolved Universal Terrestrial Radio Access Network Access, Architec-
ture Description, Release 12, V11.0.0,” 2012.
[3] 3GPP TS 36.300, “Evolved Universal Terrestrial Radio Access (E-UTRA) and Evolved
Universal Terrestrial Radio Access Network (E-UTRAN); Overall Description; Stage 2,
Version 8.7.0,” December 2008.
[4] 3GPP TS 36.304, “Evolved Universal Terrestrial Radio Access (E-UTRA), UE Procedures
in Idle Mode, Release 12, V12.5.0,” 2015.
[5] 3GPP TS 36.214, “Evolved Universal Terrestrial Radio Access (E-UTRA) Physical Layer
- Measurements, Version 8.5.0,” December 2008.
[6] Dimou, K., et al., “Handover Within 3GPP LTE: Design Principles and Performance,”
Ericsson Research/IEEE, 2009.
[7] Anas, M., et al., “Performance Analysis of Handover Measurements and Layer 3 Filtering
for UTRAN LTE,” Proc. of PIMRC, September 2007, pp. 1–5.
[8] Iñiguez Chavarría, J. B., “LTE Handover Performance Evaluation Based on Power Bud-
get Handover Algorithm,” Master’s thesis, Universitat Politècnica de Catalunya, February
2014.
[9] Luan, L., et al., “Optimization of Handover Algorithms in LTE High-Speed Railway Net-
works,” March, 2012, www.aicit.org/JDCTA/ppl/JDCTAVol6No5_part10.pdf.
[10] Jansen, T., and I. Balan, “Handover Parameter Optimization in LTE Self-Organizing
Networks,” Vehicular Technology Conference Fall, Fall 2010.

[11] Racz, A., A. Temesvary, and N. Reider, “Handover Performance in 3GPP Long Term
Evolution (LTE) Systems,” 16th IST Proc. of Mobile and Wireless Communications Summit
2007, July 2007, pp. 1–5.
[12] 3GPP TS 36.133, “Requirements for Support of Radio Resource Management, Release 8,
V8.9.0,” 2010.
[13] Linked CRs to TS 25.214, TS 25.331, and TS 25.133 for Faster L1 DCH Synchronization, TSG RAN WG1, WG2, and WG4, TSG RAN Meeting #28, Quebec, Canada, Release 6 Category B, June 1–3, 2005.
[14] Dahlman, E., S. Parkvall, and J. Sköld, 4G: LTE/LTE-Advanced for Mobile Broadband, New York: Academic Press, 2011.
[15] Salo, J., M. Nur-Alam, and K. Chang, “Practical Introduction to LTE Radio Planning,”
web listed white paper, November 2010.
[16] 3GPP TS 36.213, “Evolved Universal Terrestrial Radio Access (E-UTRA) Physical Layer
Procedures, Release 8,” 2009.
7
Intercell Interference Management in
LTE
Because LTE is also based on the frequency reuse concept, it is subject to inter-
cell interference. Intercell interference arises when users in neighboring cells are
assigned the same resource blocks in the same time period, that is, the same set of subcarrier frequencies over the same transmission time interval (TTI), and fundamentally can happen for all user locations on both the uplink and the downlink.
However, the interference on the downlink side is not expected to be as critical
as it could be on the uplink side. The intercell interference is most critical on
the uplink and particularly for outer cell (cell edge) located UEs. On the up-
link, an outer UE transmits with a significantly higher power than an inner UE
(closer to the antenna) due to the fact that power control is applied to overcome
the larger path loss. The outer cell UEs also have the shortest distance to the interfered base station, and the situation can be even worse when there is a high degree of asymmetry in the radio frequency (RF) geometry of neighboring cells.
The situation is not as bad on the downlink. On the downlink, the interference
is more evenly spread out over the cell than on the uplink as the base stations
transmit with a rather evenly distributed power to all the UEs. This means any
cell border area UE experiences interference from base stations that are typically
at least a cell radius away. Also, the interference that a UE experiences in any given location is the same regardless of whether the interfering base station is transmitting to an interior or a cell-edge UE in the interfering cell. Nevertheless, a basic
idea that has been considered in dealing with multicell interference at cell edges
on the downlink side is to allow multicell transmissions. In this scheme, cells
interfering with each other at the border areas are all used for transmission of the same information to a UE, as in soft handover in UMTS. This is what is
done in the case of the MBMS and results in a spatial diversity gain through
soft combining, which helps to reduce the necessary transmit power. Multicell
transmission requires tight coordination and intercell synchronization, in the
order of substantially less than the cyclic prefix, and hence is most easily realized
for cells belonging to the same eNodeB. However, the use of beam-forming antenna
solutions at the base station is also a general method that can be seen as a means
for downlink intercell interference mitigation.
The 3GPP standards have specified and investigated three different
mechanisms for dealing with multicell interference on both the uplink and
the downlink in LTE systems. These are based on interference cancellation, interference randomization, and interference avoidance through intercell
coordination measures in the resource assignment processes [1]. Each of these
mechanisms can require a different design and the signaling of different infor-
mation between the nodes and/or between the nodes and the UEs, depending on whether they are to be implemented for the uplink or the downlink side.

7.1 Interference Cancellation


Interference cancellation is based on using signal processing techniques to estimate the interfering signal(s) and subtract them from the desired signal. This
can be done on both the uplink and the downlink through spatial suppression
using multiple antenna techniques or using orthogonal reference signals on the
downlink side for estimating the strong interfering signals and cancelling their
effects from the desired signal.

7.2 Interference Randomization


Interference randomization, such as that achieved through the frequency hopping used in GSM, is based on exploiting the diversity gains that are achieved by randomizing and spreading the interference over multiple users. This can be achieved
in the LTE systems by, for instance, interleaving the subcarriers used in com-
posing the resource blocks for neighboring cells. Thus, the subcarriers used to
construct each resource block (RB) are spread over a wide range of the channel
band and interleaved with those from RBs constructed for neighboring cells
with a predefined pattern rather than localized over a narrow set. This results in
an averaging effect of the interference experienced by users in neighboring cells.
The interleaver pattern is generated with a pseudo-random method using seeds
that can be reused between far-located cells in a manner similar to frequency re-
use in GSM systems. The base station must signal the interleaver identification
used to the UE so that it can identify the interleaved pattern when the scheme
is used on the downlink side.
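The randomization idea can be illustrated with a toy seeded interleaver: neighboring cells use different seeds, so a collision on one logical subcarrier index rarely falls on the same physical subcarrier. All names and sizes below are illustrative, and this is not the standardized interleaver construction.

```python
# Toy seeded subcarrier interleaver: cells with different seeds map
# logical subcarrier indices to different spread-out physical subcarriers,
# averaging the interference between neighboring cells.
import random

def cell_interleaver(seed, n_subcarriers):
    """Reproducible pseudo-random permutation for a given cell seed."""
    perm = list(range(n_subcarriers))
    random.Random(seed).shuffle(perm)   # same seed -> same pattern
    return perm

# Two neighboring cells with different seeds place the "same" logical
# subcarriers on different physical ones, diluting persistent collisions.
cell_a = cell_interleaver(seed=17, n_subcarriers=48)
cell_b = cell_interleaver(seed=42, n_subcarriers=48)
overlap = sum(a == b for a, b in zip(cell_a, cell_b))
```

The seed plays the role of the interleaver identification that the base station signals to the UE, and far-apart cells may reuse seeds much as frequencies are reused in GSM.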

7.3 Intercell Interference Avoidance or Coordination


The intercell interference avoidance or coordination (ICIC) approach focuses
on coordination mechanisms among neighboring nodes in the assignment of
the spectrum resources (resource blocks) to help to prevent or reduce the possi-
bility of the same resource block from being used simultaneously or being used
with the same high power in multiple neighboring cells. Avoiding the use of the
same resources (resource blocks) in neighboring cells, particularly for cell edge
users, will result in a significant increase in SINR and thus the system capacity.
Most such mechanisms normally require tight coordination and signaling exchange between cells and hence are referred to as intercell interference coordination (ICIC) measures; they are the ones that have been most widely considered by 3GPP and the industry.
The interference coordination and avoidance approach to intercell interference handling is the one that has been investigated the most, particularly for the uplink side where intercell interference is most critical, and will be discussed in detail in the remainder of this chapter. However, the benefits of the aforementioned three different schemes for dealing with intercell interference are not mutually exclusive, which means that a combination of these schemes may be implemented in LTE systems by the vendors. The ICIC mechanisms typically require the exchange of signaling information among neighboring nodes
on the load and interference status of the resources. Hence, the 3GPP standards
[2] have specified a proactive and a reactive signaling indicator for each resource
block in a cell referred to as the high interference indicator (HII) and the over-
load indicator (OI) for the uplink. The OI indicates the total sum of interfer-
ence and thermal noise on each RB and can take up three fuzzy levels consisting
of low, medium, and high. The HII is a bit map consisting of 1 bit per RB and
indicates that the node intends to or is using the RB for one of its cells edge
users (requiring high transmit power). Basically, the HII may be used to avoid
scheduling cell-edge UEs in neighboring cells on the same resources, and the
OI may be used as a feedback for fine tuning of scheduling and power-level
allowances on indicated resources. However, it is up to equipment vendors to
develop their own proprietary nonstandardized ICIC algorithms that may use
the aforementioned interference/load information exchanged between nodes.
The 3GPP standards have also specified a similar load indicator for resource blocks on the downlink side, called the relative narrowband Tx power (RNTP), defined as the ratio of the transmit power on the antenna port to the maximum base station transmit power. This information element is received in
the load information message, and indicates, per PRB, whether the downlink
transmission power is lower than the value indicated by the RNTP threshold
setting (i.e., the DL power restriction setting per PRB). The receiving eNodeB
takes the information as valid when setting its scheduling policy until receiving
a new load information message carrying an update. The HII, OI, and the
RNTP indicators would be updated about every 100 ms [1] to prevent an ex-
cess signaling load.
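As a sketch of how a (vendor-specific, nonstandardized) scheduler might use a neighbor's HII bitmap, the following illustrative function prefers RBs that the neighbor has not flagged for its own cell-edge users; the function name and interface are assumptions.

```python
# Sketch of HII-aware resource selection: avoid placing a cell-edge UE on
# RBs a neighbor has flagged (1 bit per RB) for its own cell-edge users.
# The actual scheduling algorithm is left to vendors; this only shows the
# filtering step.

def pick_rbs_for_edge_ue(n_rb_needed, neighbor_hii, n_rb_total):
    """Prefer RBs not flagged in the neighbor's HII bitmap."""
    free = [rb for rb in range(n_rb_total) if not neighbor_hii[rb]]
    flagged = [rb for rb in range(n_rb_total) if neighbor_hii[rb]]
    chosen = free[:n_rb_needed]
    if len(chosen) < n_rb_needed:          # fall back to flagged RBs
        chosen += flagged[:n_rb_needed - len(chosen)]
    return chosen
```

The OI feedback could then be used to back off the transmit power allowance on RBs reported as highly interfered, closing the coordination loop.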

7.4 Intercell Interference Coordination Mechanisms


To formulate the ICIC problem and review the trade-offs and the control
parameters, the problem is modeled as a collision between resource blocks. In the collision
model, the overall system performance is determined by the collision probabili-
ties and the impact of a given collision on the signal-to-interference-plus-noise
(SINR) ratios achievable on the colliding resource blocks [3]. The ICIC
mechanisms then aim to reduce the probability of collisions and to mitigate
the resulting SINR degradation. The collision probabilities can be reduced by,
for instance, giving specific preferences for different subsets of resource blocks
to neighboring cells. A key issue is then to determine what information, and
over what time scale, should be reported by the UEs using the standardized informa-
tion elements that can be exchanged over the X2 interface to prevent collisions
or reduce their impact when they occur. There are trade-offs to be made de-
pending on the level of information, the load imposed on the network links, and
the time scale on which the exchange is to occur.
The ICIC schemes are classified here basically into a static or quasi-static
type, implemented mostly centrally through OSS, for instance, and a dynamic
type implemented mostly through the network nodes themselves as discussed
in the following sections.

7.4.1 Static/Quasi-Static ICIC Schemes


The simplest static type of ICIC scheme is the approach based on the
concept of fractional frequency reuse (FFR), which evolved from the classical reuse
clustering used in GSM, and its variations partial frequency reuse (PFR) and
soft frequency reuse (SFR), adopted in WiMAX and 3GPP LTE [4, 5]. The PFR
divides the band into a portion, say, f1 (f1 fraction of the total system band) that
is allocated to the inner parts of the cells (users calling from close to the cell cen-
ters) with a reuse of 1 and a fraction f2 allocated to cell-edge users with a reuse of
3, that is, the repeat pattern is over each 3-sectored cell. This results in an overall
effective frequency reuse of 1/(f1 + f2/3). The PFR can be implemented by asso-
ciating start-and-stop indexes as defined in 3GPP [6] with the available pool of
the resource blocks in the cell. The scheduler uses the resource blocks between
the start-and-stop indexes for the cell-edge UEs. If this pool of resource blocks
is depleted, some cell-edge UEs will not get scheduled within a specific TTI.
The remaining resource blocks as well as any resources from the subset allocated
to the cell edge UEs that are not used are utilized by the inner cell UEs. By using
Intercell Interference Management in LTE 157

disjoint subsets of resource blocks defined between the start-and-stop indexes in
neighboring cells, collisions between cell-edge users can be completely avoided.
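The effective reuse formula and a disjoint start-and-stop partitioning of the cell-edge pool can be sketched as follows; the partitioning rule is an illustrative assumption, not a standardized algorithm:

```python
def effective_reuse(f1, f2):
    """Overall effective frequency reuse for PFR: fraction f1 of the band
    at reuse 1, fraction f2 at reuse 3, giving 1 / (f1 + f2/3)."""
    return 1.0 / (f1 + f2 / 3.0)

def edge_band(cell_index, num_rbs, f1):
    """Disjoint start/stop RB indexes for the cell-edge pool of each of
    the three cells in a reuse-3 pattern (illustrative partitioning)."""
    inner_rbs = int(num_rbs * f1)            # reuse-1 pool for inner users
    per_cell = (num_rbs - inner_rbs) // 3    # edge pool split three ways
    start = inner_rbs + (cell_index % 3) * per_cell
    stop = start + per_cell - 1
    return start, stop
```

For example, with half the band at reuse 1 (f1 = 0.5), the three cells receive non-overlapping cell-edge pools and the effective reuse is 1/(0.5 + 0.5/3) = 1.5.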
The SFR is a slight variation of PFR in which a smaller portion of the
band allocated to cell-edge users can also be used for cell-center users depending
on the required power in each resource allocation case. This allows the cell-
edge-allocated subcarriers that happen to be in less interfering
conditions to be partially reused over the cell-center areas as well, resulting in
an overall frequency reuse of between 1 and 3 for trisectored sites. The SFR has
been referred to in [7] and further investigated in [7, 8] under the 3GPP LTE
framework to help provide higher data rates to disadvantaged users such as at
cell edges. The SFR can be implemented easily using the start index geometric
weight (SIGW) scheme proposed in 3GPP [6]. In this scheme, rather than distinguish-
ing the UEs as cell-edge or inner-cell users, they are sorted using a continuous
measure such as the required power, which is then referred to as the geometric
index (as it relates loosely to the user's location in the cell). The
scheduling algorithm then assigns resource blocks to UEs at the top of the sorted list
(i.e., those requiring the most power), starting from a preset start index, and proceeds
towards the bottom of the list, which is more representative of
the inner UEs. In fact, the simplest distributed ICIC schemes are those that do
not use internode signaling exchange and can be implemented by using index-
ing into resource blocks as just explained.
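A minimal sketch of the SIGW idea, assuming one RB per scheduled UE and a simple wrap-around from the preset start index (both simplifications for illustration, not part of the 3GPP contribution):

```python
def sigw_schedule(ue_required_power, num_rbs, start_index):
    """SIGW sketch: sort UEs by required power (the 'geometric index')
    and hand out RBs one per UE, starting from the preset start index
    and wrapping around the RB pool."""
    order = sorted(ue_required_power, key=ue_required_power.get, reverse=True)
    allocation = {}
    for i, ue in enumerate(order):
        if i >= num_rbs:
            break                      # RB pool exhausted this TTI
        allocation[ue] = (start_index + i) % num_rbs
    return allocation
```

With disjoint (or staggered) start indexes in neighboring cells, the highest-power (cell-edge-like) users land on different RBs in each cell.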
In both the PFR and SFR, the partitioning of resources between the cell-
edge and the inner users in neighboring cells, as specified, for instance, by the
start-and-stop indexes, must be planned by a central entity such as the OSS
using the traffic trends over the network and time. Hence, the PFR and SFR
are both based on a static and a semistatic resource partitioning strategy, respec-
tively, which may be planned on a time scale of days or so. Therefore, both are
suboptimal from a more dynamic resource allocation and sharing standpoint.

7.4.2 Dynamic ICIC Schemes


The dynamic ICICs, in contrast, are responsive to both the spatial and
real-time traffic load distribution and the mutual interference situation. Hence,
they are expected to yield trunking gains and a more balanced re-
source sharing between the inner and cell-edge users. Dynamic ICIC schemes
may be implemented on a large time scale of minutes to hours on a centralized
self-configured basis, or on a distributed short time scale at the TTI level. In
the centralized implementation, as has been considered for the WINNER project
[9, 10], a central entity collects the aggregated statistics of the traffic load and
usage distributions from sets of neighboring nodes to derive resource allocation
guidelines for each node and account for traffic variations between busy hours
of the day, for instance. The centralized schemes for radio resource manage-
ment are not encouraged in LTE due to the absence of the RNC, although the
mechanisms could be implemented in the OSS/network management center
at the expense of backhaul transmissions and the reduced reliability of a single
centralized architecture.
The distributed dynamic ICIC schemes implement the dynamic resource
scheduling algorithms within the nodes without the use of a central control
entity. This is done either in an autonomous manner by each node or in a co-
ordinated manner using an internode exchange of load and resource usage sta-
tus within neighboring nodes. The distributed dynamic schemes are also more
in line with the 3GPP consensus that dynamic (event-triggered) schemes are
superior to static schemes that limit the applied power level on a subset of the
resource blocks via planning, irrespective of the momentary usage of the same
resource blocks in the neighbor cells.
In the autonomous distributed interference avoidance schemes as inves-
tigated in [11], each node (eNodeB) aims to reduce the interference that it ex-
periences without regard to other nodes. In this scheme, each node periodically
measures the interference across all PRBs on the uplink transmission on a small
time scale of a subframe (1 ms) with no need for signaling with the UE or other
node Bs. The measurements are processed through some exponential averaging
process (i.e., short memory averaging) to derive a measure of the latest interfer-
ence present on each resource block. The results are then used to replace exist-
ing assigned PRBs with cleaner ones, as long as the interference reduction on the
new resource exceeds a set hysteresis threshold. The latter check helps
to bring some stability to the process against small-scale fluctuations. In order
to reduce the possibility that multiple nodes simultaneously decide to replace a
resource with the same, less interfered PRB and hence generate a severe intercell
interference case, each node performs an update of an allocated
resource only with a small probability of, say, 10%. This probability introduces an
implicit measure of collision avoidance into the resource selections of neigh-
boring nodes and, if chosen properly, results, on average, in only one node in
a neighborhood switching to a better resource, thus helping to avoid conflicts
with high interference.
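The update rule just described can be sketched as follows; the hysteresis value, averaging factor, and switch probability are illustrative choices, not values from [11]:

```python
import random

def autonomous_update(assigned_rb, ewma, hysteresis_db=3.0, p_switch=0.1,
                      rng=random.random):
    """One update step of the autonomous scheme: switch the allocated RB
    to the cleanest RB only if the interference drop exceeds the
    hysteresis, and only with a small probability to avoid synchronized
    switches in neighboring nodes. `ewma` maps RB -> averaged
    interference (dB)."""
    if rng() >= p_switch:
        return assigned_rb                  # skip the update this round
    best = min(ewma, key=ewma.get)
    if ewma[assigned_rb] - ewma[best] > hysteresis_db:
        return best
    return assigned_rb

def ewma_step(old, sample, alpha=0.2):
    """Short-memory exponential averaging of per-RB interference samples."""
    return (1 - alpha) * old + alpha * sample
```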
The autonomous interference coordination scheme cannot guarantee
convergence to the system-wide global minimum due to the simple fact that
interference impacts are not symmetric between cells. Specifically, switching
one UE in one cell of a node from an interfered PRB to a better one might lead
to lower interference in that cell, but the total system-wide interference can
increase depending on UE positions and resource usage patterns in the cells of
the neighboring nodes. This will happen if the reduced interference in the cell
under consideration does not outweigh any increase in the sum global interfer-
ence experienced by all users. Nevertheless, the scheme as shown by simulation
results in [11] does help to significantly reduce the intercell interference.
The distributed ICIC schemes can be implemented by a more explicit
coordination among neighboring nodes using internode signaling. Basically,
at each TTI, the node scheduler determines which UEs will get scheduled and
how many PRBs they will get based on the fairness and QoS criteria. Then it
will assign PRBs to the UEs in the frequency domain in accordance with the
vendor implemented resource scheduling algorithm. The scheduling algorithm
uses the latest updates of the information provided by the standardized HII and
OI indicators signaled over the X2 interface from neighboring nodes on the
interference sensitivity and the overload status of the resources within its own
cell (cell center or cell edge).
In the resource assignment and update process on the uplink, each node
then computes an interference utility that captures a measure of the interference
impact to all neighboring cells. The update allocation process is the same as in
the autonomous case except that, as has been shown in [10], no exponential av-
eraging process is required and a lower hysteresis threshold may be used in this
case. To account for the interference impact of neighboring cells on a resource,
the HII bit maps and/or the OIs exchanged among neighboring nodes can be
used to compute a global intercell interference utility function over neighboring
cells. Then each node considers the impact to the global intercell interference in
the decision to allocate a resource or replace an allocated resource with a better
one. The time scale for updated resource configurations based on information
received from all neighboring nodes using internode signaling is on the level
of a second or slightly less. This allows tracking and coordination of UEs with
longer continuous activity periods such as file download, streaming, and con-
versational services.
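A toy version of such an interference utility, assuming the HII bitmaps and numeric OI levels of the neighboring cells are available, might look like this (the weights and the additive form are assumptions for illustration):

```python
def interference_utility(rb, neighbor_hii, neighbor_oi, w_hii=1.0, w_oi=1.0):
    """Toy intercell interference utility for candidate RB `rb`: penalize
    RBs that neighbors flag for their cell-edge users (HII bit set) or
    report as loaded (OI level 0/1/2). Lower is better."""
    cost = 0.0
    for hii in neighbor_hii:           # one bitmap per neighboring cell
        cost += w_hii * hii[rb]
    for oi in neighbor_oi:             # one level list per neighboring cell
        cost += w_oi * oi[rb]
    return cost

def pick_rb(candidate_rbs, neighbor_hii, neighbor_oi):
    """Choose the candidate RB with the lowest estimated impact on the
    neighborhood-wide interference."""
    return min(candidate_rbs,
               key=lambda rb: interference_utility(rb, neighbor_hii,
                                                   neighbor_oi))
```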
The use of internode information exchange on neighboring nodes' re-
source status has been shown to converge toward the global minimization of the
intercell interference [11]. However, it does not contribute a significant addi-
tional improvement over the autonomous scheme as the simplified and practi-
cally estimated global interference utility function does not completely reflect
the actual interference situation. This points to the possibility that the autono-
mous schemes may be used as a more simplified intercell interference coordina-
tion mechanism without the need for the internode signaling standardization
requirements.

7.4.3 Dynamic ICIC Based on the PFR and SFR Concepts


In the implementation of the static PFR and SFR schemes based on the start-
and-stop indexes discussed earlier, the indexes can be configured dynamically
by the nodes themselves using the interference and load status information ex-
changed with their neighbors through the HII and OI signaling. In this way,
the start indexes in the cells can be adjusted according to the load variations in
the nodes rather than being configured in a planned manner via the network O&M,
for instance. The HII bit map can be used to specify the resource portion that
is intended for cell-edge user scheduling in neighboring nodes and hence avoid
similar usage in the receiving node and help to avoid highly interfering colli-
sions. Similarly, the OI can be used to request neighbor nodes to refrain from
scheduling UEs on the highly interfered resources at all, or at least to refrain from
placing cell-edge users on those RBs, depending on the load level indicated on
them by the neighbors.

7.5 Conclusions
It should be evident that intercell interference coordination and avoidance
is a complex issue with a wide range of possible solutions. Finding a single solu-
tion that would be optimum for all possible situations is not within the scope of
3GPP standardization. Nevertheless, 3GPP has extensively investigated a range of
intuitively appealing and feasible interference coordination algorithms using
advanced system simulations [12]. The outcome of these efforts is expected to
provide a deep understanding of the trade-offs involved and a broad consensus
regarding the time scale at which practical ICIC schemes should operate. More-
over, the identification of interfaces and a flexible suite of signaling information
elements should allow the continued evolution of the technology. From a sys-
tem design and commercial feasibility perspective, the ICIC mechanisms that
build on markedly low-complexity heuristics, with or without intercell commu-
nication, should be particularly attractive.

References
[1] 3GPP TR 25.814 V2.0.0, “Physical Layer Aspects for Evolved UTRA (Release 7).”
[2] 3GPP TS 36.423, “Evolved Universal Terrestrial Radio Access Network (EUTRAN); X2
Application Protocol (X2AP),” June 2008.
[3] Koutsimanis, C., G. Fodor, et al., “Intercell Interference Coordination in OFDMA Networks
and in the 3GPP Long Term Evolution System,” Ericsson Research.
[4] 3GPP R1-050738, “Interference Mitigation - Considerations and Results on Frequency
Reuse,” Siemens, September 2005.
[5] 3GPP R1-060291, “OFDMA Downlink Inter-Cell Interference Mitigation,” Nokia, Feb-
ruary 2006.
[6] 3GPP TSG-RAN WG1, “On Inter-Cell Interference Coordination Schemes without/
with Traffic Load Indication,” R1-074444.
[7] Rahman, M., H. Yanikomeroglu, and W. Wong, “Interference Avoidance with Dynamic
Inter-Cell Coordination for Downlink LTE System,” Carleton University and Communica-
tions Research Centre of Canada (CRC), Ottawa, Canada.
[8] 3GPP R1-050507, “Soft Frequency Reuse Scheme for UTRAN LTE,” Huawei, May
2005.
[9] 3GPP R1-050841, “Further Analysis of Soft Frequency Reuse Scheme,” Huawei, Septem-
ber 2005.
[10] Rahman, M., and H. Yanikomeroglu, “Interference Avoidance Through Dynamic
Downlink OFDMA Subchannel Allocation Using Intercell Coordination,” Proc. IEEE
VTC, May 2008, pp. 1630–1635.
[11] Ellenbeck, J., C. Hartmann, and L. Berlemann, “Decentralized Inter-Cell Interference
Coordination by Autonomous Spectral Reuse Decisions,” Proc. European Wireless
Conference, June 2008.
[12] 3GPP R1-074444, “On ICIC Schemes Without/With Traffic Load Indication,” October
2007.
8
SON Technologies in LTE
The self-organizing network (SON) is a feature specification of LTE networks
to provide the model for next-generation operations and business support
systems (OSS/BSS). The SON can help telecom carriers to reduce operating
expenses (OPEX) by automating the manual steps needed to configure and
operate their networks efficiently in an increasingly high-volume, low-cost ser-
vices environment. The SON concept embraces both automated network configuration
in the deployment phases as well as automated tuning and self-healing in the
network operation phase [1]. In the deployment phases, for instance, when new
radio cells are dropped in, they are able to automatically structure and config-
ure themselves relative to their neighboring cells for proper integration into
the network. This is particularly handy in the flattened and distributed radio
access network of E-UTRAN where the manual configuration of many more
capable radio sites (eNodeBs) can be cumbersome and costly. The SON tech-
nology can also improve the user experience by optimizing the network more
rapidly and mitigating outages as they occur. These are very important capabili-
ties because the time to repair is a critical factor for every network operator.
Furthermore, through automated network performance tuning, the technology
can dynamically respond to changing network traffic distribution and inter-
ference geometry by tuning the equipment parameters and configurations in
real time, resulting in improved user experience, service quality, and improved
network capacity and resource utilization. The SON concept thus includes a
set of capabilities grouped into the functional domains of self-configuration,
self-optimization, and self-healing, with the 3GPP-agreed SON use cases and
capabilities for these functional areas defined in [1]. There has
been, and still is, a significant amount of research and development carried out in
defining SON-based solutions for the E-UTRAN. However, as a guiding
reference, any SON solutions and methods for E-UTRAN should adhere to the
3GPP architectural reference model in [2, 3], which lists the requirements and
objectives for management systems.

8.1 SON Standardization History and Status


The field of self-organizing and SON technologies has been very active
in the industry and the standardization groups in recent years, with prod-
ucts currently available. The work definition on the standardization of the SON
use cases and the requirements was started and approved in September 2006,
and resulted in the definition of the first use case, the automatic neighbor
relation function (ANRF), in 2007. In the first
quarter of 2008, a new TR (TR 36.902) was formed in the 3GPP standardiza-
tion group for LTE, in which work on the use cases for coverage and capacity
optimization (CCO), energy savings (ES), interference reduction (IR), auto-
mated configuration of physical cell identity (PCI), mobility robustness op-
timization (MRO), and mobility load balancing optimization (MLB) started.
Later, in the fourth quarter of 2008, the use cases for RACH optimization and
intercell interference coordination were added to the standardization work. The
work on the stated use cases was continued in Releases 9 and 10, with the au-
tomated UE measurements for minimization of drive tests added to the Release
10 work definition as well.
Most of the SON-related use cases have been under study by working
group SA5 (system architecture working group 5) in the 3GPP
organization. The coverage of the SON use cases and the standardized require-
ments are spread over a number of the 3GPP specifications, consisting main-
ly of 3GPP TR 36.902 and the 3GPP TS 32.xxx series, with various aspects
of the progress and completion provided in different releases (from Releases
8 to 10). The use cases and the requirements for automated PCI and ANRF
were completed in Release 8, which was frozen in the first quarter of 2009. The
MLB, radio coverage and capacity optimization, RACH optimization, MRO,
and energy savings were completed in Release 9, which was frozen in the first
quarter of 2010. Enhancements to energy savings, cell outage compensation
and radio optimization, and other SON use cases covered in earlier releases as
well as the automated UE measurements for minimization of drive tests were
placed under study in Release 10.
However, the development of solutions to meet the specified require-
ments is left to the vendors' discretion. Based on the standardized requirements and
use cases, the industry continues to develop new solutions and enhancements to
existing products and implementations.

8.2 Self-Configuration
The self-configuration process is defined as the process in which newly de-
ployed nodes (eNodeBs) are configured by automatic installation procedures to
get the necessary basic configuration for system operation [4, 5]. The SON self-
configuration specifications were completed in Release 8 [4]. The self-configuration
process works in the preoperational state, which starts when the eNodeB is
powered up and has backbone connectivity, and lasts until the RF transmitter is
switched on. The self-configuration capability allows a newly deployed network element
to automatically establish the necessary security control channel with the servers
in the network to download the available version of the software release. Then
the element performs a self-test to determine that all is working as intended.
Finally, the network element is taken into service, but the configuration may
still be improved using self-optimization. The self-configuration also includes
the processes in which radio planning parameters are assigned to a newly de-
ployed network node. The parameters in the scope of self-planning include the track-
ing area code, cell global identity, neighbor cell relations, maximum TX power values
of the UE and the base stations (eNodeB in LTE), antenna tilts and azimuths,
and HO parameters such as the trigger thresholds, hysteresis, and cell offsets. The
parameter setting in the self-configuration phase may require negotiating with
the neighboring cells to adapt the new cell to the environment and possibly
also changing of the settings in the neighbors. As part of the self-configuration
process, the base station also runs a procedure known as S1 setup [6] to estab-
lish communications with each of the MMEs to which it is connected. In this
procedure, the base station informs the MME about the tracking area code and
PLMN identities of each of its cells, as well as any closed subscriber groups to
which it may belong. The MME replies with a message that indicates its glob-
ally unique identity, after which it can communicate with the base station over
the S1 interface.
The network self-configuration features provided by SON dramatically
reduces the manual steps required when adding or expanding new network
elements by enabling plug and play hardware that is self-locating and self-con-
figuring. The self-configuration can be based on similarity detection, parameter
retrieval, and algorithmic postprocessing.

8.3 Self-Optimization
The self-optimization process is defined as the process where the UE and the
eNodeB load and performance measurements are used to auto-tune the net-
work. This process works in the operational state, which starts when the RF
interface is switched on. The tuning actions can include changing parameters,
thresholds, and neighborhood relationships. The self-optimizing mechanisms
include the radio resource management (RRM) processes of traffic and load
balancing, scheduling techniques, and intercell interference coordination. The
intercell interference coordination is a process in LTE where the UE measure-
ment reports can be used to vary power levels between eNodeBs to reduce in-
terference to cell-edge users. The self-optimizing features contribute to further
increasing the spectral efficiency in LTE networks, because they can be used to
allocate capacity where it is needed. The degree of self-optimization that is de-
ployed determines the residual tasks that remain for network operators. In an
ideal case, the operator merely needs to feed the self-optimization methods with
a number of policies, which define its desired balance in the trade-offs that ex-
ist between the conflicting coverage, capacity, quality, and cost targets.

8.4 Self-Healing
Self-healing is automated fault management [7]. It monitors and analyzes
fault management data, alarms, notifications, and self-test results to automati-
cally trigger corrective actions on the affected network node(s) when necessary.
The self-healing processes create models for self-diagnosis to learn and identify
trends from past experience using, for instance, statistical naïve Bayesian classi-
fiers and regression techniques on a database of historically solved problems based
on past optimization experience. Then, when the most likely cause of the given
symptoms is determined in this way, the necessary corrective actions are triggered
automatically to solve the problem in near real time.
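A minimal sketch of such a self-diagnosis step, using a naïve Bayes classifier over a toy database of historic solved problems (the symptom and cause names are hypothetical):

```python
import math
from collections import Counter, defaultdict

def train(history):
    """`history` is a list of (symptoms, cause) pairs from past solved
    problems; count symptom frequencies per cause."""
    causes = Counter()
    symptom_counts = defaultdict(Counter)
    for symptoms, cause in history:
        causes[cause] += 1
        for s in symptoms:
            symptom_counts[cause][s] += 1
    return causes, symptom_counts

def diagnose(symptoms, causes, symptom_counts):
    """Naive Bayes with add-one smoothing: return the most likely cause
    for the observed set of symptoms."""
    total = sum(causes.values())
    best, best_score = None, float("-inf")
    for cause, n in causes.items():
        score = math.log(n / total)                     # prior
        for s in symptoms:                              # likelihoods
            score += math.log((symptom_counts[cause][s] + 1) / (n + 2))
        if score > best_score:
            best, best_score = cause, score
    return best
```

Real deployments would feed this from alarm and KPI databases and trigger the corrective action associated with the diagnosed cause.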

8.5 SON Architectures


The SON architecture can be divided into three classes known as the central-
ized, the distributed, and the hybrid implementation models, depending on
where the functions are located. In the distributed approach, the optimiza-
tion algorithms are executed in the eNodeBs. In this solution, the eNodeBs are
equipped with smart and intelligent SON functionality operating in an autonomous fash-
ion, leveraging local decision-making as well as reducing complexity, load,
and footprint issues with the NMS. The distributed architecture also enables
open interfaces to the network elements by isolating the real-time SON ca-
pabilities within the network element. This simplifies support of multivendor
SON in a single geographic area. In the centralized architecture, the SON al-
gorithm may reside within the element management system/network manage-
ment system or a separate SON server controlling the eNodeBs (base stations).
A centralized architecture approach can be used for deploying real-time SON
functions, such as automatic neighbor relations and automatic physical cell ID.
In this approach, the EMS/NMS is the key decision-maker in the real-time
SON functionality. An advantage of the centralized SON implementation is
that it has a global view over the network and clusters of cells (without the need
for intercell information exchange), and can support multivendor technologies
as the algorithms are implemented in the network central management level.
There are also a number of disadvantages with the centralized approach.
If the NMS is down, no deployments, SON neighbor updates, or SON RF
optimization can take place. As the network expands to thousands of eNo-
deBs, this task becomes more burdensome on the NMS and is also exposed to
the operator. Because decision-making is in the NMS, localized convergence
is not possible; all data must be forwarded to the NMS, creating burst issues
as SON algorithms at the network elements converge to a steady state. NMS-
based SON functions also discourage multivendor networks and open network
element interfaces. Finally, due to the volume of data and computation required
at the NMS, the NMS can become a bottleneck for SON changes, introduc-
ing significant latency in the commitment of automatic SON changes. In the
hybrid SON model, the simpler and quicker optimization functions are
implemented in the eNodeBs, while the more complex optimization functions
are placed within the OAM. Hence, this approach provides more flexibility to
support different kinds of optimization cases.

8.6 SON Use Cases


A number of SON use cases have already been defined
and approved in 3GPP meetings. These are covered in a number of 3GPP speci-
fications as referenced in this chapter, but the detailed solution implementation
may vary from vendor to vendor. We will discuss these cases in the following
sections.

8.6.1 Automatic Neighbor Relation Setting


The automatic neighbor relation (ANR) allows a cell to automatically identify
and list its neighboring cells [8]. This process involves identifying and elimi-
nating nonsuitable neighbors. The ANR function depends on the UE reports
of detected cells that are not in the neighbor list. The automatic neighbor
relations-related specifications were completed in 3GPP Standards Re-
lease 9. The 3GPP standards require the UEs to measure and report the
serving cell, the listed neighbor cells (i.e., as measurement objects), and the de-
tected cells (i.e., cells that are not indicated by the E-UTRAN but detected by
the UE). A detected cell, as indicated by its reported frequency, can be
an LTE cell on the same frequency, an LTE cell on a different frequency,
or even a cell belonging to another RAT. To detect the interfrequency cells or
inter-RAT cells, the eNodeB needs to instruct the UE to do the measurement
on that frequency. The ANR functionality is divided into three areas which
consist of neighbor removal, neighbor detection, and neighbor relation table
management functions. The first two functions decide whether to remove an
existing neighbor relation or to add a new neighbor relation. The cell removals
are based on statistical analysis of handover behavior and the success rate. The
third function is responsible for updating the neighbor relation table according
to the input of the previous two functions and the OAM.
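The three ANR functions can be sketched with a toy neighbor relation table; the pruning thresholds are illustrative assumptions, not standardized values:

```python
class NeighborRelationTable:
    """Minimal ANR sketch: add relations for UE-detected cells and remove
    relations whose handover success rate (from statistics) is poor."""

    def __init__(self):
        self.relations = {}        # pci -> {"attempts": n, "successes": n}

    def on_ue_report(self, detected_pcis):
        """Neighbor detection: register cells reported by UEs."""
        for pci in detected_pcis:
            self.relations.setdefault(pci, {"attempts": 0, "successes": 0})

    def on_handover(self, pci, success):
        """Accumulate per-relation handover statistics."""
        r = self.relations[pci]
        r["attempts"] += 1
        r["successes"] += int(success)

    def prune(self, min_attempts=20, min_success_rate=0.5):
        """Neighbor removal based on statistical HO success analysis."""
        for pci, r in list(self.relations.items()):
            if (r["attempts"] >= min_attempts and
                    r["successes"] / r["attempts"] < min_success_rate):
                del self.relations[pci]
```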

8.6.2 Coverage and Capacity Optimization


This use case aims at discovering the coverage and capacity problems automati-
cally using the measurements collected by the eNodeB and those by the UEs.
Some requirements on coverage and capacity optimization via SON have been
defined in 3GPP TS 32.521 Release 9 [9]. The automated function minimizes
the human intervention and reduces the feedback delay. The function uses the
UE measurements on the signal strength of current cell and its neighbors, the
UE reports on the timing advance, the radio link failure counters, coverage
triggered mobility counters and statistics on things such as the traffic load dis-
tribution measurement to provide new optimized values for radio configura-
tion parameters. The later include things such as the transmit powers, antenna
downtilts, mobility triggered parameters, and reference symbol power offsets.
The measurements and the problem statistics collected are sent to a planning
tool, which then analyzes the data and deploys updated optimal values for the
parameters to operate the network. The centralized model for coverage and
capacity optimization under SON use case is discussed in [10].

8.6.3 PCI Configuration


The goal of this use case is to automatically configure the PCI of a newly in-
troduced cell. The PCI is an essential configuration parameter for a cell. It is
contained in the synchronization channel (SCH) for the UE to synchronize
with the cell on the downlink. There are 504 unique PCIs in E-UTRAN, so
the reuse of PCIs in different cells is unavoidable. However, an algorithm is de-
signed to assign PCIs to all the cells within the new eNodeB in such a way that
the same cell ID is not reused within the neighboring cells, thus avoiding
confusion in the handover and cell selection processes. When a centralized as-
signment is used, the OAM system will have a complete knowledge and control
of the PCIs and sophisticated algorithms can be used to determine optimal PCI
assignments. An approach is presented in [11] that maps the PCI assignment
problem to the graph coloring method. This colors nodes in a way that ensures
that no two nodes connected by the same edge are assigned the same color,
while using a minimum number of colors. The minimum
number of colors is called the chromatic number, and finding it is an NP-
complete problem. When the distributed solution is used, the OAM system as-
signs a list of possible PCIs to the newly deployed eNodeB, but the adoption of
the PCI is in control of the eNodeB. The newly deployed eNodeB will request
a report, sent either by UEs over the air interface or by other eNodeBs over the
X2 interface, including already in-use PCIs. The eNodeB will randomly select
its PCI from the remaining values. The interested reader may refer to [12] for
further discussions on this function.
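A greedy-coloring sketch of the centralized PCI assignment (greedy coloring does not in general achieve the chromatic number, but it respects the no-reuse constraint among neighbors):

```python
def assign_pcis(neighbors, num_pcis=504):
    """Greedy graph coloring for PCI assignment: give each cell the
    lowest PCI not used by any of its neighbors. `neighbors` maps a
    cell to the set of cells it must not share a PCI with."""
    pci = {}
    # Color the most constrained (highest-degree) cells first.
    for cell in sorted(neighbors, key=lambda c: len(neighbors[c]),
                       reverse=True):
        used = {pci[n] for n in neighbors[cell] if n in pci}
        for candidate in range(num_pcis):
            if candidate not in used:
                pci[cell] = candidate
                break
        else:
            raise ValueError("ran out of PCIs for cell %s" % cell)
    return pci
```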

8.6.4 Mobility Robustness Optimization


Mobility robustness optimization is a self-optimization technique that
first appears in Release 9 [13]. Using this technique, a base station can gather
information about any problems that have arisen due to the use of unsuitable
measurement reporting thresholds and correct them. The management
aspects of load balancing and handover parameters have been fully discussed
in Release 9. The other mobility parameters include hysteresis, time to trigger,
relative cell offsets, and cell reselection parameters. The incorrect HO param-
eter settings can negatively affect user experience and wasted network resources
by causing HO ping-pongs, HO failures, and radio link failures (RLF). An
example of such a situation is the incorrect setting of the HO hysteresis, which
can either cause ping-pong effects due to too early handovers or prolonged
connection to a nonoptimal cell and or cause interference to other users. These
parameters are also adjusted to avoid unnecessary or missed handovers, which
result in possible call drops and the inefficient use of the network resources.
This process should also ensure that the cell reselection parameters are
aligned with the HO parameters to avoid unwanted handovers subsequent to
connection setup. Solutions for this use case are discussed in [14]. There
is also a study presented in [15] based on an HO margin optimization
algorithm that observes the type of HO failure and tracks its cause. It
then increases or decreases the HO margin according to the dominant failure
event: too-early HO, too-late HO, HO to the wrong cell, or ping-pong HO.
The proposed HO margin optimization algorithm is robust against changes in
the mobility of the UE, which has been
demonstrated by evaluating the algorithm and performing a parametric study
taking into consideration the changes in the moving direction and velocity of
the UE. The load-balancing and HO parameter optimization-related require-
ments, NRM, and targets have been completed in TS 32.521/522 in Release 9.
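The margin-adjustment logic described above can be sketched as follows. This is a simplified illustration in the spirit of [15], not its actual algorithm; the step size and failure-event names are assumptions made for the example.

```python
# Sketch of HO-margin adjustment: classify handover failures, find the
# dominant event, and step the margin up or down accordingly.
# The 0.5-dB step and the event names are illustrative assumptions.

STEP_DB = 0.5

def adjust_ho_margin(margin_db, failure_counts):
    """Return a new HO margin based on the dominant failure event.

    Too-late HOs suggest the margin is too high (trigger earlier);
    too-early HOs, wrong-cell HOs, and ping-pongs suggest it is too low.
    """
    dominant = max(failure_counts, key=failure_counts.get)
    if failure_counts[dominant] == 0:
        return margin_db  # nothing to correct
    if dominant == "too_late":
        return margin_db - STEP_DB
    # "too_early", "wrong_cell", or "ping_pong"
    return margin_db + STEP_DB
```

In a deployment, the failure counts would come from RLF reports and HO statistics gathered over a measurement window before each adjustment.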
170 From LTE to LTE-Advanced Pro and 5G

8.6.5 Load-Balancing Optimization


The targets of load-balancing and HO parameter optimization, which the
operator can set within a value range, have been defined in TS 32.522. Load
balancing can be realized via auto-tuning of HO parameters of mobiles in con-
nected mode or of cell selection/reselection parameters of mobiles in idle mode.
This is done in LTE by the eNodeB monitoring the load in the controlled and
neighbor cells and exchanging related information over the X2 or S1 interface
with neighboring node(s). Then an algorithm identifies the need to distribute
the load of the cell towards an adjacent or a colocated cell that is not congested,
which can include cells from other radio access technologies (RATs), by com-
paring the load among the cells, the type of services, and the cell configurations.
The HO margins and/or the cell reselection parameters between the cell and
one or more of its neighboring cells are then modified in a coordinated
manner in both cells, while avoiding problems such as ping-pong effects.
This is expected to make some of the mobiles at the congested cell border
reselect or hand over to the less congested cells and thus achieve a degree
of load balancing. An example algorithm on this basis is presented in [12].
The definition of load still needs to be decided: it can be the radio load,
the transport load, or the processing load, and the exact definition will
influence the algorithm that distributes the load.
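A minimal sketch of such a load-balancing step is shown below, assuming load is expressed as PRB utilization (0 to 1) exchanged over X2, and that balancing is realized by raising the cell individual offset (CIO) toward uncongested neighbors so that border UEs hand over earlier. The thresholds and step size are illustrative assumptions, not standardized values.

```python
# Illustrative load-balancing step: when the serving cell is congested
# and a neighbor has sufficient headroom, raise the CIO toward that
# neighbor. Threshold and step values are assumptions.

CONGESTION_THRESHOLD = 0.8
TARGET_HEADROOM = 0.2
CIO_STEP_DB = 1.0

def rebalance(serving_load, neighbor_loads, cio):
    """Return an updated CIO dict, raising CIO toward uncongested neighbors."""
    new_cio = dict(cio)
    if serving_load < CONGESTION_THRESHOLD:
        return new_cio  # not congested; leave offsets alone
    for n, load in neighbor_loads.items():
        if load + TARGET_HEADROOM <= serving_load:
            new_cio[n] = cio.get(n, 0.0) + CIO_STEP_DB
    return new_cio
```

A real implementation would also cap the CIO and coordinate the change with the neighbor, as described above, to avoid ping-pong effects.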

8.6.6 Cell Outage Compensation


A mechanism for cell outage compensation (sleeping-cell compensation) is
another functionality intended to be included under SON self-tuning. The
use case and some requirements of cell outage compensation were completed
in Release 10 [7]. This mechanism assumes that in the event
of a cell outage, cells in proximity may be able to compensate for lost coverage
by adjusting the neighbor relationships and the coverage and HO parameters
for the nearby neighboring cells in an optimization process. This means that
the algorithm should try to achieve a balance between the capacity/coverage
offered to the outage area and the performance degradation experienced in the
neighboring cells to share the load. If the overall business impact of the
compensation is worse than that of the original outage, it would be better
not to take any compensation measures. The cell outage algorithm can detect
sleeping cells by, for instance, observing changes in neighbor reporting
patterns: if a cell is no longer reported for a certain period of time, it
is likely to be in outage.
It is worth mentioning that care should be taken in the implementation of
such a use case to make sure that outage compensation is not triggered for
a cell that has been put on standby by the energy-saving SON algorithms.
This indicates that not all the SON use cases may be able to run in
parallel, and thus there is a need for a SON coordinator/controller [16] to
effectively handle the triggering of the SON use cases.
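The sleeping-cell detection heuristic mentioned above can be sketched as follows; the timeout value is an assumption chosen for illustration.

```python
# Sketch of sleeping-cell detection: a cell that stops appearing in
# neighbor measurement reports for longer than a timeout is flagged as a
# suspected outage. The 15-minute timeout is an illustrative assumption.

OUTAGE_TIMEOUT_S = 900  # seconds without any neighbor report

def suspected_outages(last_reported, now):
    """Return cells whose last neighbor report is older than the timeout.

    last_reported maps cell IDs to the time (in seconds) of the most
    recent neighbor report that mentioned them.
    """
    return sorted(
        cell for cell, t in last_reported.items()
        if now - t > OUTAGE_TIMEOUT_S
    )
```

In practice, a flagged cell would be cross-checked (e.g., against the energy-saving function and OAM alarms) before compensation is triggered.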

8.6.7 Dynamic Multiantenna Configuration


In a multiantenna system, some of the antennas can be switched off to save
power. The transmission scheme can also be adapted among SIMO, MIMO, and
beamforming to achieve the maximum capacity with the minimum transmission
power.

8.6.8 Interference Reduction


Since LTE is also based on the concept of frequency reuse of 1 as in WCDMA,
it is vulnerable to intercell interference. The intercell interference is managed
through intercell interference coordination functions. This involves a
coordinated usage of the available resources [the physical resource blocks
(PRBs)] in neighboring cells, resulting in an improved
signal-to-interference ratio and throughput. The coordination is realized
through restrictions and preferences on which PRBs are used in the
different neighboring cells.
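As a toy illustration of this idea, the sketch below partitions the PRBs among neighboring cells in a reuse-3 pattern, so that cell-edge transmissions in adjacent cells prefer disjoint PRB sets. The reuse factor and PRB count are illustrative assumptions, not a 3GPP-defined scheme.

```python
# Toy ICIC sketch: neighboring cells agree on complementary PRB sets
# preferred for their cell-edge users, so edge transmissions in adjacent
# cells avoid colliding on the same PRBs.

def edge_prbs(cell_index, total_prbs, reuse=3):
    """Return the PRB indices preferred for cell-edge users of a cell."""
    part = cell_index % reuse
    per_part = total_prbs // reuse
    return list(range(part * per_part, (part + 1) * per_part))
```

Cell-center users, being less interference-limited, can still be scheduled on any PRB at reduced power, which is the usual soft-reuse refinement of this pattern.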

8.6.9 Energy Saving


An energy-saving solution was introduced in Release 9 for LTE heterogeneous
networks that deploy LTE capacity-booster cells on top of cells that give wide-
area coverage. The function enables the capacity-booster cells to be switched off
when their capacity is no longer needed, and to be reactivated on a per-need
basis. Before switching off a capacity-booster cell, the eNodeB may initiate han-
dover action in order to offload any users in the cell. In that case, it indicates
the reason for handover to facilitate the target eNodeB in taking appropriate
subsequent action, such as not selecting the switched-off cell as a target for
subsequent HO.
Generally, savings in base station power consumption can be made by
monitoring and matching the capacity offered by the network with the traffic
demand in real time. For example, certain cells may be just deployed for certain
special times of the day and then turned off when there is no expected traffic
to be handled. The exchange and coordination of local traffic load information
can also be used among neighboring base stations to determine redundant cells
that can be switched off. In other scenarios, parts of the base station modules
can be turned off according to hourly traffic conditions. In off-peak hours, less
processing capability is needed, and therefore parts of the unit can be
transferred to sleep mode. By switching off those cells that are not needed
for traffic at some point in time, the interference due to control channels
in the network is also reduced. More generally, energy-saving solutions can
be developed on the basis of many other considerations, such as
transmission power adaptation and multiantenna scheme adaptation, besides
the switching on and off of cells. All these measures can be automated
processes within the SON functionality.
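The booster switch-off decision described at the start of this section can be sketched as a simple admission test: the booster may be switched off only if its own load is low and the overlaying coverage cell can absorb the offloaded traffic. The thresholds are illustrative assumptions.

```python
# Sketch of the capacity-booster switch-off test: switch off only when
# the booster is lightly loaded and the coverage cell retains headroom
# after absorbing the booster's traffic. Thresholds are assumptions.

BOOSTER_LOW_LOAD = 0.2
COVERAGE_HEADROOM = 0.6  # coverage-cell load limit after absorbing traffic

def can_switch_off(booster_load, coverage_load):
    """True if the booster may be switched off after offloading its UEs."""
    return (booster_load <= BOOSTER_LOW_LOAD
            and coverage_load + booster_load <= COVERAGE_HEADROOM)
```

When the test passes, the eNodeB would first hand over the remaining UEs, indicating the energy-saving cause so that the target does not select the switched-off cell for subsequent handovers.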

8.6.10 RACH Parameter Optimization


RACH performance optimization is critical in any access technology, as it
affects the call setup and handover success rates and network access delays.
Some requirements of RACH optimization have been defined in 3GPP TS
32.521 Release 9. The RACH performance is dependent on collision probabil-
ity (which is then impacted by the RACH channel configuration parameters),
the uplink interference, and the channel load. The channel load is determined
by the number of users under the cell coverage, the call arrival rate, the
incoming handovers, and the traffic pattern as it affects the DRX and
uplink synchronization states and hence the need to use the RACH. Because
these factors and the interference are
affected by network configurations, such as antenna tilt, transmission power,
HO threshold, and the network traffic, which can vary widely over the hours of
the day, RACH optimization can be a critical function within the SON suite
of optimization and self-tuning algorithms.
An automatic SON RACH optimization function will have to collect
measurements on the RACH performance and its influencing factors such as
the changing UL interference and the load on the channel, then process these in
real time and update the RACH configuration parameters for best performance
for the prevailing conditions. The measurements are made in the eNodeB, on
the random access delay, random access success rate, and random access load.
The random access load can be indicated by the number of received preambles
in a cell in a time interval. It is measured per preamble range (dedicated, ran-
dom-low, and random-high), and averaged over the PRACHs configured in a
cell. Thresholds are set separately for random access delay and success rate. If
either of the thresholds is reached, RACH optimization is triggered. First, ran-
dom access load is analyzed to check if the random access is overloaded in any
of the three preamble ranges. If one of them is overloaded, RACH preambles
are reallocated among these three preamble ranges. If all of them are overloaded,
more physical resources need to be reserved for RACH. If none of them is over-
loaded, other parameters need to be adjusted, such as increasing the transmis-
sion power ramping step, the preamble transmit PowerMax, and distributing
the backoff time in a wider range. The RACH incoming probes must have suf-
ficient power to be detected by the eNodeB while at the same time minimizing
the contribution to the uplink interference. The neighboring cells must also be
configured with RACH parameters that minimize frequency overlaps to reduce
interference between RACHs. The eNodeB should be able to auto-tune the RACH
parameters, whose value ranges can be configured by the OAM.
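The threshold-triggered decision logic described above can be sketched as follows. The threshold values and action names are illustrative assumptions; the preamble-range names follow the dedicated/random-low/random-high split mentioned in the text.

```python
# Sketch of the RACH optimization decision logic: triggered when the
# delay or success-rate threshold is crossed, it checks per-range
# preamble load and picks one of three corrective actions.
# All numeric thresholds are illustrative assumptions.

DELAY_THRESHOLD_MS = 50.0
SUCCESS_THRESHOLD = 0.9
LOAD_THRESHOLD = 0.7  # per-preamble-range load considered overloaded

def rach_action(delay_ms, success_rate, range_loads):
    """range_loads: load per preamble range (dedicated, random-low, random-high)."""
    if delay_ms <= DELAY_THRESHOLD_MS and success_rate >= SUCCESS_THRESHOLD:
        return "no_action"
    overloaded = [r for r, load in range_loads.items() if load > LOAD_THRESHOLD]
    if len(overloaded) == len(range_loads):
        # every range is overloaded: reserve more PRACH physical resources
        return "reserve_more_prach_resources"
    if overloaded:
        # only some ranges overloaded: rebalance preambles between ranges
        return "reallocate_preambles"
    # no range overloaded: tune power ramping, preamble power, and backoff
    return "adjust_power_and_backoff"
```
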

8.7 Utilization of Automated UE Measurements


The 3GPP study on the minimization of drive testing (MDT) and its
enhancements with additional measurements in Release 11 resulted in the
specification of a set of automatically collected UE measurements. These
measurement collections and their use cases are presented in 3GPP TR 36.805
[17]. The measurement logs usually consist of multiple events and
measurements taken over time. Examples include:

• Radio measurements of RSRP and RSRQ (in the case of LTE) around the
occurrence times of radio link failures;
• Uplink and downlink data throughput, which indicates the user's quality
of service;
• Uplink and downlink data volume, which indicates where the traffic is
concentrated;
• Uplink power headroom, which indicates the uplink coverage;
• The number of radio link and RRC connection establishment failures, which
provides further information about coverage holes, RACH- and paging-related
failures, and failures in decoding information transmitted on common
control channels such as the PCFICH and PDCCH.

The network can also instruct the mobile to measure
and return its location for the purpose of drive test minimization alone. How-
ever, because this can increase the mobile’s power consumption, it is typically
managed using an extra subscription option within the home subscriber server.
The measurements are made with user consent and support two modes, referred
to as immediate measurements, for mobiles in the RRC_CONNECTED state, and
logged measurements, for mobiles in the RRC_IDLE state. For mobiles in the
RRC_CONNECTED state, the
mobile reports the measured quantities to the base station along with any loca-
tion data that it has available, but does not make any location measurements
just for the purpose of MDT. The base station can then return the information
to the management system, using the existing network management procedures
for trace reporting. In the Idle mode, the mobile makes its measurements with a
period that is a multiple of the discontinuous reception cycle. It then stores the
information in a log, along with time stamps and any available location data.
When the mobile then establishes an RRC connection, it can signal the
availability of its measurement log using a field in the RRC Connection
Setup Complete message. The base station can then retrieve the logged
measurements
from the mobile using the RRC UE Information procedure and forward them
to the management system.
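The idle-mode logging behavior described above can be sketched as follows; the field names and data structures are assumptions made for illustration, not 3GPP-defined formats.

```python
# Hypothetical sketch of idle-mode (logged) MDT collection: RSRP/RSRQ
# samples are time-stamped at a multiple of the DRX cycle, with location
# attached when available. Field names are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class MdtLogEntry:
    timestamp_ms: int
    rsrp_dbm: float
    rsrq_db: float
    location: Optional[Tuple[float, float]] = None  # (lat, lon) if known

@dataclass
class MdtLogger:
    drx_cycle_ms: int
    period_multiple: int  # logging period = multiple of the DRX cycle
    log: List[MdtLogEntry] = field(default_factory=list)

    def maybe_log(self, timestamp_ms, rsrp_dbm, rsrq_db, location=None):
        """Record a sample only on the logging-period boundary."""
        period = self.drx_cycle_ms * self.period_multiple
        if timestamp_ms % period == 0:
            self.log.append(MdtLogEntry(timestamp_ms, rsrp_dbm, rsrq_db, location))
```

On the next RRC connection, the accumulated log would be retrieved by the base station via the RRC UE Information procedure, as described above.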
The measurements collected and reported are useful in detecting coverage
holes and problems related to downlink coverage and common channel
parameterization. These measurements can also be
utilized by the vendors in the development and enhancement of solutions to
meet the requirements of some of the SON functions discussed earlier, particu-
larly as related to the optimization of coverage, mobility, and random access
procedures.

References
[1] 3GPP TS 36.902, “Evolved Universal Terrestrial Radio Access Network
(E-UTRAN); Self-Configuring and Self-Optimizing Network (SON) Use Cases and
Solutions,” Release 9, V9.3.1, 2013.
[2] 3GPP TS 32.101, “Telecommunication Management; Principles and High Level Require-
ments,” V8.1.0, 2007.
[3] 3GPP TS 32.500, “Telecommunication Management; Self-Organizing Networks (SON);
Concepts and Requirements.”
[4] 3GPP TS 32.501, “Telecommunication Management; Self‐Configuration of Network El-
ements; Concepts and Integration Reference Point (IRP) Requirements.”
[5] 3GPP TS 32.502, “Telecommunication management; Self‐Configuration of Network El-
ements Integration Reference Point (IRP); Information Service (IS).”
[6] 3GPP TS 36.413, “Evolved Universal Terrestrial Radio Access Network (E-UTRAN); S1
Application Protocol (S1AP), Release 11,” 2013.
[7] 3GPP TS 32.541, “Telecommunication Management; Self-Organizing Networks (SON);
Self-Healing Concepts and Requirements,” V12.0.0, Release 12.
[8] 3GPP TS 32.511, “Telecommunication Management; Automatic Neighbour Relation
(ANR) Management; Concepts and Requirements,” 3GPP TS 36.300.
[9] 3GPP TS 32.521, “Telecommunication Management; Self-Organizing Networks (SON)
Policy Network Resource Model (NRM) Integration Reference Point (IRP),” V11.1.0,
2012.
[10] TR 32.836, “Study on Network Management (NM) Centralized Coverage and Capacity
Optimization (CCO) Self-Organizing Networks (SON) Function,” http://www.3gpp.
org/DynaReport/32836.htm.
[11] Bandh, T., G. Carle, and H. Sanneck, “Graph Coloring Based Physical-Cell-ID Assignment
for LTE Networks,” 2009 International Conference on Wireless Communications and Mobile
Computing: Connecting the World Wirelessly, June 2009.
[12] Feng, S., and E. Seidel, “Self-Organizing Networks (SON) in 3GPP Long
Term Evolution,” Nomor Research GmbH, Munich, Germany, May 20, 2008.
[13] 3GPP TS 36.423, “Evolved Universal Terrestrial Radio Access Network (E-UTRAN); X2
Application Protocol (X2AP), Release 11,” Sections 8.3.9-8.3.10, September 2013.
[14] R3-081165, “Solutions for the Mobility Robustness Use Case,” http://www.3gpp.org/ftp/
tsg_ran/WG3_Iu/TSGR3_60/Docs/R3-081165.
[15] Kitagawa, K., et al., “A Handover Optimization Algorithm with Mobility Robustness for
LTE Systems,” IEEE 22nd International Symposium on Personal Indoor and Mobile Radio
Communications (PIMRC), September 2011.
[16] Socrates Research Project, “FP7 SOCRATES Final Workshop on Self Organization in
Mobile Networks,” February 2011, http://www.fp7-socrates.org/files/
Presentations/SOCRATES_2010_NGMN%20OPE%20workshop%20presentation.pdf.
[17] 3GPP TR 36.805, “Technical Specification Group Radio Access Network Study on
Minimization of Drive-Tests in Next Generation Networks, Release 9,” V2.0.0 2009.
9
EPC Network Architecture, Planning, and Dimensioning Guideline
The evolved packet core (EPC) refers to the long-term evolution (LTE) packet
core network, which is responsible for the overall control of the UE and the
establishment of the bearers. It provides session, mobility, and quality-of-ser-
vice (QoS) management and enables the operators to connect users to applica-
tions in their service delivery environment, on the Internet, and on corporate
networks. The EPC thus provides the interface to packet data networks such
as the Internet and other Internet Protocol (IP)-based packet communication
networks, and was introduced by 3GPP in Release 8 of the standard. The EPC,
together with the LTE radio access network (E-UTRAN), which is made up of
the eNodeBs, forms what is referred to as the system architecture evolution
(SAE) in LTE. The SAE distributes the RRM functions to the eNodeBs and
removes the RNC and SGSN from the equivalent 3G network architecture
to make a simpler mobile network. This allows the network to be built as an
all-IP-based network architecture. The EPC, unlike the core network in GPRS
and UMTS, is a purely packet-switched network and uses the IP for all services
[1]. The protocols running between the UE and the EPC are known as the
Non-Access Stratum (NAS) Protocols. The evolved core network consists of the
mobility management entity (MME), the serving gateway (SGW), the home
subscriber server (HSS), the packet data network gateway (PGW), and the pol-
icy control and charging rules function (PCRF). These functional blocks are
specified independently by 3GPP, but in practice vendors may combine some of
them in a single box, such as a combined MME and HSS or a combined SGW
and PGW as an edge interface. The MME is homed to a number of SGWs
that are the data connection between the towers and the network. SGWs create
tunnels for a call to a packet gateway (PGW) that connects to the local Internet
service providers (ISPs). A call to a fixed network moves from handset to tower
via LTE radio access and from tower to packet gateway via a bearer tunnel that
passes through the serving gateways. The EPC can include two other functional
blocks that are related to the location positioning of the UEs. These are referred
to as the evolved serving mobile location center (E-SMLC) and the gateway
mobile location center (GMLC). Moreover, the EPC may include an IMS (IP
multimedia subsystem), which comprises all the necessary elements for provi-
sion of IP multimedia services comprising audio, video, text, and chat. How-
ever, the IMS is not an element of LTE, but we will briefly discuss it later in this
chapter. The LTE EPC network supports interfaces that allow interoperation
with various access technologies, in particular, the earlier 3GPP technologies
(GSM/EDGE and UMTS) as well as non-3GPP technologies such as WiFi,
CDMA2000, and WIMAX.
An objective of this chapter will be to provide a reasonable understanding
of the functions of each of these network elements and their relationships/inter-
faces and how the end-to-end connections are established and managed in LTE.
This will also provide a good background for help in modeling the traffic for di-
mensioning, which will be covered as well. We will also discuss the QoS classes
in LTE and their characterization in terms of treatment and performance.

9.1 Mobility Management Entity


The mobility management entity (MME) is the main control node that is re-
sponsible for the handing of signaling related to mobility and security between
the E-UTRAN and the core network. It plays a key role in initiating and man-
aging the bearers, distributing the paging messages to the eNodeBs, in the EPS
connection management (ECM) idle mode, which are handled by the session
and connection management layer within the NAS protocol and is also respon-
sible for managing the list of tracking areas where the mobile is paged. The
MME is the termination point of the NAS and is the entity that selects the
most appropriate SGW and PDN gateway for routing the user packet data.
The signaling connection to a UTRAN (3G) SGSN is also performed through
the MME interface with the SGSN, which is the S3 interface using the GTP-C
protocol.

9.2 Serving Gateway


The serving gateway (SGW) is a data plane element within the user plane. Its
main purpose is to serve as the mobility anchor for inter-eNodeB handovers,
as well as acting as a router for routing the IP packet data between the eNodeB
and the Internet (PGW). The SGW also acts as the mobility anchor for inter-
3GPP communication through the S4 interface with SGSN, using the GTP-U
protocol. That is it routes user data between the 2G/3G SGSN and the PGW
of the EPC. Moreover, it maintains information about the bearers when the UE
is idle and acts as a buffer for the downlink data when the MME is initiating
paging of the UE to reestablish the bearer.
In a way, the MME and SGW separate the control plane and the user plane
that were combined in the UMTS RNC (except that the RRM functions have
moved to the eNodeBs). By separating the signaling plane from the user
data, a more scalable and flexible architecture is obtained, which is
easier to expand with the growth of traffic and signaling.

9.3 Packet Data Network Gateway


The packet data network gateway (PGW) is the point of interconnect between
the EPC and the external IP networks and fulfills the function of entry and exit
point for UE data. The UE may have connectivity with more than one PGW
for accessing multiple PDNs, where a packet data network is identified by an
access point name (APN). A network operator typically uses a handful of differ-
ent APNs, for instance, one for the Internet and one for the IP multimedia sub-
system (IMS). Each mobile is assigned to a default PGW when it first switches
on, to give it always-on connectivity to a default packet data network such as
the Internet, through a default bearer. Later on, a mobile may be assigned to
one or more additional PDN gateways, if it wishes to connect to additional
packet data networks such as an intranet (private corporate networks) or the IP
multimedia subsystem.
The PGW includes the following functions [1]:

• Per-user based packet filtering (by, for example, deep packet inspection);
• Lawful interception;
• UE IP address allocation;
• Transport-level packet marking in the uplink and downlink (e.g. setting
the DiffServ code point), based on the QoS class identifier (QCI) of the
associated EPS bearer for QoS control;
• UL and DL service-level charging based on PCRF rules, gating control,
rate enforcement as defined in 3GPP TS 23.203;
• UL and DL rate enforcement based on APN-AMBR;
• DL rate enforcement based on the accumulated MBRs of the aggregate
of SDFs with the same GBR QCI (e.g., by rate policing/shaping);
• Mobility anchor for internetworking with non-3GPP technologies such as
High Rate Packet Data (HRPD) and WiFi.
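The transport-level marking function listed above can be illustrated with a simple QCI-to-DSCP lookup of the kind a PGW applies when setting the DiffServ code point. The DSCP values shown are a common operator convention, not mandated by 3GPP; treat the table as an assumption.

```python
# Illustrative QCI-to-DiffServ mapping for transport-level packet
# marking at the PGW. The DSCP names are an assumed operator policy.

QCI_TO_DSCP = {
    1: "EF",    # conversational voice (GBR)
    2: "AF41",  # conversational video (GBR)
    5: "AF31",  # IMS signaling
    8: "AF11",  # buffered streaming / TCP-based services
    9: "BE",    # default bearer, best effort
}

def mark_packet(qci):
    """Return the DSCP marking for a bearer's QCI, defaulting to best effort."""
    return QCI_TO_DSCP.get(qci, "BE")
```

The mapping lets IP routers between the PGW and the PDN honor the bearer-level QoS without any knowledge of EPS bearers.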


9.4 Home Subscriber Server


The home subscriber server (HSS) is a database server located in the operator’s
premises and stores all the user subscription information such as the QoS pro-
file and any roaming access restrictions. It also plays a role in authentication and
security due to its ability to integrate the authentication center (AuC), which
formulates security keys and authentication vectors. The HSS contains the re-
cords of the user location and the original copy of the user subscription profile
in support of mobility and call and session setup. It interacts with the MME,
and must be connected to all the MMEs in the network that may control the UE.
This functional block is based on the pre-3GPP Release 4, home location regis-
ter (HLR) and authentication center (AuC).

9.5 Policy Control and Charging Rules Function


In addition to the HSS, the EPC has another logical functional node, which
is referred to as the policy control and charging rules function (PCRF). This is
the entity that detects the service flows and enforces charging policy, and QoS
authorization, which consists of the QCIs and bit rates. For applications that
require dynamic policy or charging control, a network element called the ap-
plications function is used. This element is usually collocated with the PGW or
may reside within it.

9.6 E-SMLC and GMLC


The E-SMLC manages the overall coordination and scheduling of resources
required to find the location of a UE that is attached to E-UTRAN. It also cal-
culates the final location based on the estimates that it receives, and it estimates
the UE speed and the achieved accuracy. The GMLC contains functionalities
required to support location services. After performing authorization, it sends
positioning requests to the MME and receives the final location estimates.

9.7 IP Multimedia Subsystem


The IP multimedia subsystem (IMS) is not part of the LTE, but it is a separate
network whose relationship with the LTE is the same as that of the Internet.
Nevertheless, it is valuable to cover the IMS as part of this chapter because as
IMS illustrates several aspects of the LTE’s operation. The IMS was originally
designed for the management and delivery of real-time multimedia services
over the 3G packet switched domain. It was first defined in 3GPP Release 5,
which was frozen in 2002. LTE was designed without a circuit-switched core
network and with the intention that LTE voice calls should be transported us-
ing voice over IP. This is an ideal application for the IP multimedia subsystem
and has led to a resurgence of interest in the technology. The IP multimedia
subsystem is specified by 3GPP in the same way as LTE, UMTS, and GSM.
3GPP TS 23.218 provides a useful introduction, while 3GPP TS 23.228 covers
the main stage 2 specification. The IMS as an external network contains the
signaling functions that manage VoIP calls for an LTE device. When viewed in
this way, the IMS is like the third-party VoIP servers but brings two main ad-
vantages. First, the IMS is owned by the network operator, not by a third-party
service provider. Second, it is more powerful than any third-party system as, for
instance, it guarantees the quality of service of a voice call, supports handovers
to 2G or 3G cells, and includes full support for emergency calls.

9.8 EPC Element Interconnection and Interfaces


The EPC network architecture, showing the interconnection between the
internal elements, is presented in Figure 9.1. The following sections
describe the interfaces shown in Figure 9.1, and a few other important
interfaces not shown in the figure, in terms of the protocols used, the
functions performed, and the 3GPP references for more detailed information.

9.8.1 S1-C and S1-U Interfaces


The S1-C, also referred to as S1-MME, is used in the control plane connecting
the MME with the eNodeBs (the E-UTRAN), while the S1-U operates in the
user plane and is used for communication between the E-UTRAN and SGW.
The protocols used by these interfaces are PPP or Ethernet at the data link layer
and IP at the network layer [2, 3].
The S1-C uses SCTP at the transport layer, with port number 36412. SCTP,
the Stream Control Transmission Protocol, was developed by the Sigtran
working group of the IETF for the purpose
of transporting various signaling protocols over the IP network [4]. A single
UE-associated signaling is assigned one SCTP stream and the stream is not
changed during the communication of the UE-associated signaling. In the
application layer, the S1-C interface is specified in the 3GPP TS 36.41x
series [5] and is responsible for the E-RAB management function of setting
up, modifying, and releasing E-RABs, which are triggered by the MME via a
secure ciphered connection. The release and modification of E-RABs may be
triggered by the eNB as well. The S1 signaling application also handles the
initial context establishment in the eNodeB, the setup of the default IP
connectivity, and the transfer of NAS-related signaling to the eNodeB. It
further handles the mobility functions for UEs in LTE, such as a change of
eNodeB within SAE/LTE with inter-MME/serving SAE-GW handovers and a change
of RAN nodes between different RATs (inter-3GPP RAT handovers) via the S1
interface with EPC involvement, as well as a load-balancing function to
ensure equally loaded MMEs within an MME pool area.
The S1-U, also referred to as S1-SGW, is the interface that connects the
eNodeB and the SGW for user plane traffic (i.e., bearers’ tunneling, inter-eNB
handover) and is used to provide IP-based transport of data streams from the
E-UTRAN towards the EPC (SGW). It uses the GTP, specified in TS 29.281,
at the transport layer over UDP over IP. This is described further in 3GPP TS
36.414 [6].

9.8.2 S5 Interface
This interface is between the SGW and PGW and provides transport of packet
data towards end users in both nonroaming and roaming cases (S8 is the
inter-PLMN variant of S5), using the GPRS tunneling protocol GTP-U over UDP
over IP. This is further described in 3GPP TS 29.274 and 3GPP TS 29.281.

9.8.3 S6a Interface


This interface is used to exchange the data related to the location of the mobile
station and to the management of the subscriber requested service. The main
service provided to the mobile subscriber is the capability to transfer packet data
within the whole service area. The MME informs the HSS of the location of a
mobile station managed by the latter. The HSS sends to the MME all the data
needed to support the service to the mobile subscriber. Exchanges of data may
occur when the mobile subscriber requires a particular service, such as changing
data regarding his subscription or when some parameters of the subscription are
modified by administrative means. The signaling on this interface uses Diam-
eter S6a Application as specified in TS 29.272.

9.8.4 SGi Interface


This is the reference point between the PGW and a packet data network which
may include an operator external public or private packet data network or an
intra-operator packet data network (e.g., for provision of IMS services). For
more details, see TS 29.061. From the external IP network’s point of view, the
PGW is seen as a normal IP router. The L2 and L1 layers are operator specific
and the interworking with user defined ISPs and private/public IP networks is
subject to interconnect agreements between the network operators.
The maximum transmission unit (MTU) of the IP tunnel on the MS/UE side of
the IP link may be different from the MTU of the IP link connecting the PGW
to the PDN. As a result, IP packets crossing the SGi interface may need to be
fragmented. Unnecessary fragmentation should be avoided when possible due
to the following reasons [7]:

• Fragmentation is bandwidth-inefficient, since the complete IP header is
duplicated in each fragment.
• Fragmentation is CPU-intensive since more fragments require more
processing at IP endpoints and IP routers. It also requires additional
memory at the receiver.
• If one fragment is lost, the complete IP packet has to be discarded.

The reason is there is no selective retransmission of IP fragments provided


in IPv4 or IPv6. To avoid unnecessary fragmenting of IP packets the MS/UE,
or a server in an external IP network, may find out the end-to-end MTU by
path MTU discovery and hence fragment correctly at the source. IP Fragmenta-
tion on Gi/Sgi should be handled according to IETF RFC 791 and IETF RFC
2460. The PGW should enforce the MTU of IP packets to/from the MS/UE
based on IETF RFC 791and IETF RFC 2460.

9.8.5 Gx Interface
This interface (not shown in Figure 9.1) provides transfer of QoS policy in-
formation from the PCRF to the PGW. The Gxc variant, used only in the case
of PMIP-based S5/S8, connects the PCRF to the SGW. These interfaces are
specified in TS 29.212. PMIP (proxy mobile IP) refers to PMIPv6 as defined in
IETF RFC 5213.

9.8.6 S13 Interface


This interface (not shown in Figure 9.1) is used between the MME and the EIR
to exchange data, in order that the EIR can verify the status of the IMEI re-
trieved from the mobile station. The signaling on this interface uses the Diam-
eter S13 Application in TS 29.272.

Figure 9.1 EPC network element interconnections and internal interfaces.

9.8.7 S10 Interface


This interface (not shown in Figure 9.1) is used to support user information
transfer and MME relocation support between the MMEs. This interface is
specified in TS 29.274.

9.8.8 S12 Interface


This interface, also not shown in Figure 9.1, provides the reference point be-
tween UTRAN and SGW for user plane tunneling when direct tunnel is estab-
lished. It is based on the Iu-u/Gn-u reference point using the GTP-U protocol
as defined between SGSN and UTRAN or, respectively, between SGSN and
GGSN. The usage of S12 is an operator configuration option. This interface is
specified in TS 29.281.

9.8.9 X2 Interface
This is an interface in the E-UTRAN and not in the EPC, but will be ex-
plained here. The X2 interface provides the capability to support radio interface
mobility between eNodeBs of UEs having a connection with E-UTRAN. The
functions and the protocols supported over this interface are fully covered in a
number of 3GPP specifications (3GPP TS 36.420 through 3GPP TS36.424).
The SCTP is used to support the exchange of X2 Application Protocol (X2AP)
signaling messages between two eNodeBs. The functions performed on the X2
interface include the following:

• Intra-LTE-Access system mobility support for ECM-connected UE;
• Context transfer from source eNodeB to target eNodeB;
• Control of user plane transport bearers between source eNodeB and
target eNodeB;
• Handover cancellation;
• UE context release in source eNodeB;
• Load management;
• Intercell interference coordination;
• Uplink interference load management;
• General X2 management and error handling functions:
• Error indication;
• Reset;
• Application-level data exchange between eNodeBs;
• Trace functions.

9.9 UE Mobility and Connection Management Within EPC


Just as the E-UTRA (the evolved radio access network) maintains a state ma-
chine within the UE and the eNodeB to track the terminal's RRC state, which
is either the idle or the connected state (UE with an active traffic bearer), the
EPC needs to keep track of the availability and reachability of each terminal in
order to offer effective service to UEs. The EPC achieves this by maintaining
two sets of contexts for each UE. These two contexts are referred to as the EPC
mobility management (EMM) context and an EPC connection management
(ECM) context, each of which is handled by state machines located in the UE
and the MME. The EMM is analogous to the MM processes undertaken in
legacy networks and seeks to ensure that the MME maintains enough location
data to be able to offer service to each UE when required. The two EMM states
maintained by the MME are EMM-DEREGISTERED and EMM-REGIS-
TERED. The ECM states describe a UE’s current connectivity status with the
EPC, for example, whether an S1 connection exists between the UE and EPC
or not. There are two ECM states, ECM-IDLE and ECM-CONNECTED.
Although the EMM and ECM states are independent of each other, they are
related and any discussion of a UE’s reachability is best served by viewing these
states in a combined fashion. There are three main phases of UE activity, each
of which can be described by a combination of EMM and ECM states. These
phases consist of the UE powered off, the UE in idle mode, and the UE with
an active traffic connection.
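These three phases can be captured in a small state machine. The sketch below is a simplified illustration (the class and method names are our own, and only a few representative triggers are modeled), assuming the valid EMM/ECM combinations described above:

```python
# Valid EMM/ECM combinations for the three phases of UE activity.
PHASES = {
    ("EMM-DEREGISTERED", "ECM-IDLE"): "powered off / detached",
    ("EMM-REGISTERED", "ECM-IDLE"): "idle mode (reachable by paging)",
    ("EMM-REGISTERED", "ECM-CONNECTED"): "active traffic connection",
}

class UEContext:
    """Toy mirror of the EMM/ECM state machines kept in the UE and MME."""
    def __init__(self):
        self.emm, self.ecm = "EMM-DEREGISTERED", "ECM-IDLE"

    def attach(self):
        # Successful attach: UE registered, with a signaling connection up.
        self.emm, self.ecm = "EMM-REGISTERED", "ECM-CONNECTED"

    def s1_release(self):
        # S1 release procedure: NAS signaling connection torn down.
        self.ecm = "ECM-IDLE"

    def service_request(self):
        # Uplink data or a paging response re-establishes the connection.
        if self.emm == "EMM-REGISTERED":
            self.ecm = "ECM-CONNECTED"

    def detach(self):
        # Explicit detach or implicit-detach timer expiry.
        self.emm, self.ecm = "EMM-DEREGISTERED", "ECM-IDLE"

    def phase(self):
        return PHASES[(self.emm, self.ecm)]
```

Note how the ECM state can toggle between idle and connected while the EMM state stays registered, matching the independence of the two state machines described in the text.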

9.9.1 EPC Mobility Management State


The mobility management functions within EPC are used to keep track of the
current location of a UE. These, as stated earlier, consist of two states and are
explained in more detail in the following two sections.

9.9.1.1 EMM Deregistered State


In the EMM Deregistered state, the EMM context in MME holds no valid
location or routing information for the UE. The UE is not reachable by an
MME, as the UE location is not known and the UE is detached from the net-
work. However, some UE context in this state can still be stored in the UE and
MME, to, for instance, avoid running an authentication and key agreement
(AKA) procedure during every attach procedure.

9.9.1.2 EMM Registered State


The UE enters the EMM Registered state by a successful registration with an
attach procedure to either E-UTRAN or GERAN/UTRAN. The MME enters
the EMM-Registered state by a successful tracking area update procedure for a
UE selecting an E-UTRAN cell from GERAN/UTRAN or by an attach pro-
cedure via E-UTRAN. In the EMM-Registered state, the UE location in the
MME is known to at least an accuracy of the tracking area list allocated to that
UE and can receive services that require registration in the EPS. In the EMM-
Registered state, the UE maintains at least one active PDN connection, and an
EPS security context.
The UE state is changed to EMM-Deregistered if any of the following
events occurs:

• The UE performs a detach procedure.
• All the bearers belonging to a UE are released, for instance, after a
handover from E-UTRAN to non-3GPP access.
• The UE camps on E-UTRAN and detects that all of its bearers are
released.
• The UE switches off its E-UTRAN interface when performing hando-
ver to non-3GPP access.
• An MME-initiated detach after expiration of the implicit detach timer
changes the UE state to EMM-Deregistered in the MME.

The EMM state transition diagrams in the UE and the MME are shown in
Figures 9.2 and 9.3.

Figure 9.2 EMM state model in the UE.

Figure 9.3 EMM state model in the MME.

9.9.2 EPC Connection Management State

The ECM states describe a UE's current connectivity status with the EPC, and
consist of two states, which are explained in detail in the following.

9.9.2.1 ECM-Idle State

A UE is in the ECM-Idle state when no NAS signaling connection exists be-
tween the UE and the network. In the ECM-Idle state, a UE performs PLMN
selection and cell selection/reselection. No UE context exists in E-UTRAN for
the UE in the ECM-Idle state, nor does any S1-MME (i.e., S1-C) or S1-U
connection exist for the UE. It is noted that when the UE is in the ECM-Idle state, the
UE and the network may be unsynchronized, that is the UE and the network
may have different sets of established EPS bearers.
The location of a UE in ECM-IDLE state is known by the network on
a tracking area level. The UE may be registered in multiple tracking areas in
which case all the tracking areas in a tracking area list to which a UE is regis-
tered are served by the same serving MME. In the ECM-Idle state, if the UE
happens to be in the EMM_Registered context, that is if the UE is in the com-
bined EMM-Registered and ECM Idle state, The UE performs tracking area
updates, responds to paging commands from the MME, and establishes radio
bearers when uplink user data is to be sent.

9.9.2.2 ECM-Connected State


In the ECM-connected state, the UE is RRC connected in the E-UTRAN and
its location is known in the MME with an accuracy of a serving eNodeB ID.
The UE mobility is handled by the handover procedure, and the UE performs the
tracking area update procedure when the TAI in the EMM system information is
not in the list of TAs that the UE registered with the network. In this state, the UE
maintains a signaling connection with MME which is made of an RRC connec-
tion and an S1-C (S1-MME) part. It is noted that if after a signaling procedure,
the MME decides to release the signaling connection to the UE, the states at
both the UE and the MME are changed to ECM-Idle.
The state in the UE is changed to ECM-Idle when a signaling connection
to the MME is released or broken. The release or failure is explicitly indicated to
the UE by the eNodeB or detected by the UE. The S1 release procedure changes
the state at both the UE and the MME from ECM-Connected to ECM-Idle. If
the UE does not receive the indication for the S1 release due to radio link error
or out of coverage, a temporal mismatch between the ECM-state in the UE and
the ECM-state in the MME may occur.
When a UE changes to ECM-Connected state, if a radio bearer cannot be
established, or the UE cannot maintain a bearer in the ECM-Connected state
during handovers, the corresponding signaling bearer is deactivated.
The ECM state transition diagrams in the UE and the MME are shown in
Figures 9.4 and 9.5.

Figure 9.4 ECM state model in the UE.

Figure 9.5 ECM state model in the MME.

9.10 QoS Classes, Parameters, and Mapping Strategies

QoS is an important aspect of the planning and design of mobile broadband
networks for handling diverse services with varying requirements. The services
to be supported in an LTE network can range from heavy data transfer applica-
tions, real-time conversational voice, bank transactions, streaming video, and
medical applications to casual Internet browsing services. Each of these can
involve users with different service quality experience requirements. Therefore,
it is important to have a flexible QoS framework that can withstand future chal-
lenges. Advanced LTE QoS allows priorities for certain customers or services
during congestion. In the LTE broadband network, QoS is implemented be-
tween the CPE and the PDN gateway and is applied to a set of bearers. A bearer
is basically a virtual concept: a set of network configurations that provide
special treatment to a set of traffic streams; for example, VoIP packets are
prioritized by the network compared to Web browsing traffic. In LTE, QoS is
applied on the radio
bearer, S1 bearer, and S5/S8 bearer, collectively called an EPS bearer. An EPS
bearer is not restricted to carrying a single data stream. Instead, it can carry
multiple data streams. That is, each EPS bearer comprises one or more bidirec-
tional service data flows (SDFs), each of which carries packets for a particular
service such as a streaming video application.
The EPS bearer is broken down into three lower-level bearers, namely,
the radio bearer, the S1 bearer, and the S5/S8 bearer. Each of these is itself as-
sociated with a QoS profile and receives a share of the EPS bearer’s maximum
error rate and maximum delay. The EPS bearer QoS profile is specified by a set
of four parameters that are referred to as the QCI, ARP, GBR, and MBR. Each
QCI (QoS class indicator) is characterized by the priority, packet delay budget
(allowed packet delay with values ranging from 50 ms to 300 ms), and packet
error loss rate (with allowed values ranging from 10−2 to 10−6). The packet
delay budget is an upper bound, with 98% confidence, for the delay that a
packet receives between the mobile and the PDN gateway. The QCI specifies
the treatment IP packets will receive on a specific bearer. The QCI label for a
bearer determines how it is handled in the eNodeB. Priority level 1 is the high-
est priority level. That is, a congested network meets the packet delay budget
of bearers with priority 1, before moving on to bearers with priority 2, for
instance. The 3GPP has defined a series of standardized QCI types [8], which
are summarized in Table 9.1. The QCI is a single integer that ranges from 1 to
9 and serves as reference in determining QoS level for each EPS bearer. It repre-
sents node-specific parameters that give the details of how an LTE node handles
packet forwarding in the sense of the scheduling weights, admission thresholds,
queue thresholds, and link layer protocol configuration. The network operator
can preconfigure the LTE nodes in how to handle packet forwarding according
to the QCI value. However, the QCI values are expected to be mostly used by
eNodeBs in controlling the priority of packets delivered over radio links. As
for the SGW and PGW nodes within the EPC, the QCIs may be mapped to
QoS mechanisms implemented by the respective vendors such as differentiated
service code points (DSCP) explained in RFC 2475 or MPLS mechanisms to
give appropriate priorities in the processing and routing of the packets within
these nodes and meet certain delay and packet loss requirements [1, 9]. In fact,
3GPP specifications mandate DiffServ on the S1-U and X2 interfaces, and the
protocol is commonly used within the EPC. In the DiffServ protocol, the in-
gress router examines the incoming packets, groups them into classes that are

Table 9.1
3GPP QoS Class Indicators and Characteristics

QCI  Bearer    Packet Delay  Packet Error/
     Resource  Budget (ms)   Loss Rate      Priority  Example Services
     Type
1    GBR       100           10^-2          2         Conversational voice
2    GBR       150           10^-3          4         Conversational video
3    GBR       50            10^-3          3         Real-time gaming
4    GBR       300           10^-6          5         Nonconversational video
                                                      (buffered streaming)
5    Non-GBR   100           10^-6          1         IMS signaling
6    Non-GBR   300           10^-6          6         TCP based (e-mail, www, FTP,
                                                      chat, progressive video)
7    Non-GBR   100           10^-3          7         Voice, video (live streaming),
                                                      interactive gaming
8    Non-GBR   300           10^-6          8         TCP based (e-mail, www, FTP,
                                                      chat, progressive video)
9    Non-GBR   300           10^-6          9         TCP based (e-mail, www, FTP,
                                                      chat, progressive video)
known as per-hop behaviors (PHBs), and labels them using a 6-bit differenti-
ated services code point (DSCP) field in the IP header. Inside the network, in-
ternal routers use the DSCP field to support the algorithms for queuing, packet
dropping, and packet forwarding.
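The standardized QCI characteristics of Table 9.1 lend themselves to a simple lookup structure. The sketch below encodes them and adds one illustrative QCI-to-DSCP mapping for marking the outer IP header of S1-U traffic; note that 3GPP does not standardize such a mapping, so the DSCP values here are purely hypothetical operator choices:

```python
# Standardized QCI characteristics (Table 9.1): resource type, priority,
# packet delay budget in ms, and packet error/loss rate.
QCI_TABLE = {
    1: ("GBR", 2, 100, 1e-2),
    2: ("GBR", 4, 150, 1e-3),
    3: ("GBR", 3, 50, 1e-3),
    4: ("GBR", 5, 300, 1e-6),
    5: ("Non-GBR", 1, 100, 1e-6),
    6: ("Non-GBR", 6, 300, 1e-6),
    7: ("Non-GBR", 7, 100, 1e-3),
    8: ("Non-GBR", 8, 300, 1e-6),
    9: ("Non-GBR", 9, 300, 1e-6),
}

# Hypothetical operator-defined QCI-to-DSCP mapping (not standardized):
# e.g., EF (46) for voice, AF classes for video/TCP traffic, best effort (0).
QCI_TO_DSCP = {1: 46, 2: 34, 3: 46, 4: 26, 5: 40, 6: 18, 7: 26, 8: 10, 9: 0}

def s1u_outer_dscp(qci):
    """DSCP to mark on the outer IP header of the S1-U GTP-U tunnel."""
    return QCI_TO_DSCP[qci]

def is_gbr(qci):
    """True for the GBR resource type (QCIs 1 to 4)."""
    return QCI_TABLE[qci][0] == "GBR"
```

In a transport network, routers along the S1-U path would then queue and drop packets according to the PHB selected by this DSCP, independently of the GTP tunnel contents.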
In Table 9.1, the GBR and MBR are defined only in the case of GBR type
EPS bearers. An EPS bearer is referred to as a guaranteed bit rate (GBR) bearer
where a bit rate is guaranteed, by the parameter GBR. The parameter MBR is
used for a GBR type bearer to indicate the maximum bit rate allowed on the
bearer within the LTE network. Any packets arriving at the bearer after the
specified MBR is exceeded will be discarded. These can be used for applications
such as VoIP and conversational video. For these bearers, bandwidth resources
are allocated permanently to the bearer at bearer establishment/modification.
The non-GBR bearers do not guarantee any particular bit rate. These can be
used for applications such as Web browsing or FTP transfer. For these bearers,
bandwidth resources are not allocated permanently to the bearer. Note that a
UE can be connected to more than one PDN (e.g., PDN 1 for Internet, PDN
2 for VoIP using IMS) through different PGWs, and it has one unique IP ad-
dress for each of its PDN connections. In that case, a parameter referred to
as UE-AMBR (UL/DL) indicates per UE aggregate maximum bit rate allowed
over the aggregated non-GBR EPS bearers associated to the UE no matter how
many PDN connections the UE has. Likewise, each APN access is associated
with an aggregate QOS parameter referred to as the APN AMBR (the aggregate
maximum bit rate) [10]. The APN AMBR is a subscription parameter stored
per APN in the HSS. It limits the aggregate bit rate that can be expected to be
provided across all of the mobile's non-GBR bearers that are using the same APN
(e.g., excess traffic may get discarded by a rate shaping function). Each of those
non-GBR bearers could potentially utilize the entire APN AMBR, for instance
when the other non-GBR bearers do not carry any traffic. The PGW enforces
the APN AMBR in downlink. The enforcement of APN AMBR in uplink is
done in the UE and additionally in the PGW.
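AMBR enforcement is in essence rate shaping over an aggregate. A minimal sketch, assuming a token-bucket policer (the class, parameter names, and rate values are illustrative, not a vendor implementation):

```python
class TokenBucket:
    """Token-bucket policer, sketched here to illustrate APN-AMBR enforcement
    at the PGW: all non-GBR bearers of an APN draw from one shared bucket,
    so a single bearer may consume the whole AMBR when the others are idle."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0       # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = float(burst_bytes)
        self.last = 0.0

    def admit(self, now, packet_bytes):
        """Forward the packet if within the aggregate rate; else discard."""
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False                     # excess traffic over the AMBR is discarded

apn_ambr = TokenBucket(rate_bps=10_000_000, burst_bytes=15_000)  # 10-Mbps AMBR
```

The same mechanism, with a per-bearer bucket, would serve to enforce the MBR of an individual GBR bearer.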
The QCI 5 to 9 (non-GBR) can be assigned to what is referred to as
the default bearers in LTE. The default bearer in LTE depends on the service
subscribed and remains connected until service is changed or terminated. Each
default bearer comes with an IP address. An LTE subscriber may be assigned
more than one default bearer in which case each of the bearers will have a sepa-
rate IP address. There are also the dedicated bearers, which are created on top
of an existing default bearer. The dedicated bearer shares the IP address previ-
ously established by the default bearer and therefore does not require to occupy
additional IP address. The dedicated bearers are mostly used for GBR services
although they can also be used for a non-GBR service. The dedicated bearer
may be used, for instance, for VoIP service to provide high-quality service and
improve the user experience. It is to be noted that whether a service is realized
based on GBR or non-GBR bearers would depend on the policy of the service
provider and the anticipated traffic load versus the dimensioned capacity. For
example, if ample capacity is provided in view of the anticipated traffic,
any service, whether real time or nonreal time, can be realized based on a non-
GBR bearer. However, a service provider may leverage GBR bearers to imple-
ment service blocking rather than service downgrading. For most carriers this
may be a preferred user experience in which network carriers block a service re-
quest rather than enabling all services with degraded quality and performance.
The parameter ARP in Table 9.1 is used to decide whether to refuse a new
bearer request or remove an existing one in favor of admitting a higher priority
bearer when there are insufficient resources in the LTE network such as within
an eNodeB, SGW, or PGW. The allocation and retention priority (ARP) con-
tains three fields. The ARP priority level determines the order in which a con-
gested network should satisfy requests to establish or modify a bearer, with level
1 receiving the highest priority. Note that this parameter is different from the
QCI priority level defined before. The preemption capability field determines
whether a bearer can grab resources from another bearer with a lower priority
and might typically be set for emergency services. Similarly, the preemption
vulnerability field determines whether a bearer can lose resources to a bearer
with a higher priority.
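The interplay of the three ARP fields can be sketched as a toy admission-control routine. All names and the resource-unit model below are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Arp:
    priority: int        # 1 is the highest priority level
    can_preempt: bool    # preemption capability
    preemptable: bool    # preemption vulnerability

def admit(new_arp, demand, free_capacity, active_bearers):
    """Decide whether a new bearer needing `demand` resource units can be
    admitted. `active_bearers` is a list of (bearer_id, Arp, units) tuples.
    Returns (admitted, list of preempted bearer ids)."""
    if demand <= free_capacity:
        return True, []
    if not new_arp.can_preempt:
        return False, []
    victims = []
    # Consider vulnerable, strictly lower-priority bearers, lowest priority first.
    for bearer_id, arp, units in sorted(active_bearers,
                                        key=lambda b: -b[1].priority):
        if arp.preemptable and arp.priority > new_arp.priority:
            victims.append(bearer_id)
            free_capacity += units
            if demand <= free_capacity:
                return True, victims
    return False, []
```

An emergency-service bearer would typically carry a high priority with the preemption-capability bit set and the vulnerability bit cleared, exactly the combination that lets it displace a best-effort bearer in a full cell.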
An EPS bearer can be made of one or more bidirectional service data flows
(SDF), each of which carries packets for a particular service such as a stream-
ing video application. An EPS bearer is identified using a traffic flow template
(TFT), which is the set of SDF templates that make it up. The SDFs in an EPS
bearer should share the same quality of service, specifically the same QCI and
ARP to ensure that they can be transported in the same way. For example, a user
may be downloading two separate video streams, with each one implemented as
a service data flow. The network can transport the streams using one EPS bearer
if they share the same QCI and ARP, but has to use two EPS bearers otherwise.
In turn, each service data flow may comprise one or more unidirectional packet
flows, such as the audio and video streams that make up a multimedia service.
The service data flow is identified using an SDF template, which is the set of
packet filters that make it up. Each packet flow is identified using a packet filter,
which contains information such as the IP addresses of the source and destina-
tion devices, and the UDP or TCP port numbers of the source and destina-
tion applications. Packet flows are known to the application, and the mapping
between packet flows and service data flows is under the control of a network
element such as the PCRF. The packet flows in each service data flow have to
also share the same QCI and ARP. However, in case two packet flows need to
be assigned different priorities, they have to be implemented using two service
data flows on two different EPS bearers with each assigned a different ARP. For
instance, in a video telephony service, the network may assign a lower ARP to
the video stream, so that it can drop the video stream in a congested cell but
retain the audio.
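The classification chain described above (packet filter, then SDF template, then TFT, then bearer) can be sketched as follows; the dictionary-based filter representation and all names are our own simplification:

```python
def make_filter(src_ip=None, dst_ip=None, proto=None, src_port=None, dst_port=None):
    """A packet filter as used in an SDF template; None means wildcard."""
    return {"src_ip": src_ip, "dst_ip": dst_ip, "proto": proto,
            "src_port": src_port, "dst_port": dst_port}

def matches(packet, pkt_filter):
    """A packet matches when every non-wildcard field agrees."""
    return all(v is None or packet.get(k) == v for k, v in pkt_filter.items())

def classify(packet, tfts):
    """Map a packet to the first EPS bearer whose TFT (a list of SDF packet
    filters) matches; unmatched traffic falls to the default bearer."""
    for bearer_id, filters in tfts.items():
        if any(matches(packet, f) for f in filters):
            return bearer_id
    return "default-bearer"

# A hypothetical dedicated bearer for voice signaled over UDP port 5060:
TFTS = {"voip-bearer": [make_filter(proto="UDP", dst_port=5060)]}
```

With two video streams sharing one QCI and ARP, both of their filters would simply be listed under the same bearer entry, mirroring the single-EPS-bearer case in the text.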

9.11 Charging Parameters


The charging parameters are associated with each service data flow and describe
how the user will be charged. The charging methods consist of offline charging,
which is suitable for simple monthly billing, or online charging, which supports
more complex scenarios such as prepaid services or monthly data limits. The
measurement method selected determines whether the network should monitor
the volume or duration of the data flow or both. The tariff that the charging
system will eventually use is determined by the charging key, also known as the
rating group. These parameters are used by the serving and PDN gateways to
monitor the flow of traffic once an SDF is configured, and then send the infor-
mation to an online or offline charging system.
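A minimal sketch of such a usage monitor, with the class name and the record format as illustrative assumptions:

```python
class SdfUsageMonitor:
    """Illustrative per-SDF usage counter: accumulates volume and duration
    and reports them against the flow's charging key (rating group)."""
    def __init__(self, rating_group, method="volume"):
        self.rating_group = rating_group
        self.method = method            # "volume", "duration", or "both"
        self.bytes_up = self.bytes_down = 0
        self.active_seconds = 0.0

    def account(self, up, down, seconds):
        # Called by the gateway as traffic flows on the SDF.
        self.bytes_up += up
        self.bytes_down += down
        self.active_seconds += seconds

    def record(self):
        # The record a gateway would forward to the charging system.
        rec = {"rating_group": self.rating_group}
        if self.method in ("volume", "both"):
            rec["volume_bytes"] = self.bytes_up + self.bytes_down
        if self.method in ("duration", "both"):
            rec["duration_s"] = self.active_seconds
        return rec
```

The rating group in each record is what lets the charging system apply the correct tariff without needing to inspect the traffic itself.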

9.12 EPC Planning and Dimensioning Guidelines


The objective of the EPC planning and dimensioning is to provide an optimal
network design that minimizes the cost and meets the QoS requirements based
on realistic traffic parameters. The planning should be based on a number of
parameters and considerations consisting of the required bit rates and QoS,
number of simultaneous bearers, busy hour session and call attempts, and the
signaling traffic. The outcome of the planning will be a network topology that
defines the number of the core network elements, their locations and the inter-
connect bandwidths. It is important to choose optimal locations for the eNo-
deBs, their homing geometry to the core network elements, the distribution of
the EPC functional elements and their locations, and the sizing of the network
elements and links. Some of these aspects will be discussed in the following two
sections.

9.12.1 Network Topology Considerations


To reduce the intercell interference, it is important to get an assessment of
traffic hot spots and try to place the eNodeBs as close to the traffic centers as
possible. It is also important to consider the pattern of handovers that can be
expected in homing the radio access nodes to the MMEs and SGWs. The han-
dover of the users to another tower location changes the traffic pattern, but the
change will be less dramatic if cells that form a logical community with frequent
local mobility are connected to serving gateways located in the same device or
rack. That will reduce traffic variability arising out of normal user movement
between the cells. The EPC bearer channels are essentially tunnels that create
an independent set of data paths that must be traversed before traffic is subject
to normal IP routing. Thus, the location of the PGW, relative to the SGWs,
is another important consideration in EPC design. Where the two are close in
a network topology sense, traffic jumps off the EPC and onto the IP service
network close to the cell sites. Smart traffic handling at or near the eNodeB can
segregate Internet traffic from wireless service traffic and move it immediately
onto the best-effort infrastructure used for broadband Internet connectivity.
That will reduce traffic in the more expensive EPC components and improve
service performance and operations costs. Offloading is especially critical for
Internet video traffic, which can load the SGW/PGW and the associated tun-
nels significantly. This offloading of internet traffic from the EPC can also be
very efficient if there are many intra-LTE network servers and service elements
that need to be accessed internally. Such a more distributed topology will require
more packet gateway points and less efficient 4G traffic aggregation. However,
centralizing packet gateway locations will mean managing longer bearer chan-
nel paths, more routing on the way, and carrying the traffic over a longer dis-
tance before it can be connected to any server, affecting the user experience.
These considerations require appropriate trade-offs to be made in distributing
and locating the core network elements to achieve a balance between costs and
the gains achieved in reducing congestion and improving user service. Another
aspect impacting the service performance and user experience is the speedy and
reliable transmission of signaling exchanges with the MME, SGW, and HSS, and
hence is an important consideration in the design of the EPC. Signaling
path failures will compromise services, and latency in managing handoffs will
disrupt the bearer channel and interfere with user conversations or experiences.
These signaling channels need not be carried on the same routes as the bearer
channels, but where diverse routing may not be available, it may help to provide
for the priority transmission of signaling messages over the links.

9.12.2 EPC Dimensioning


In the planning and dimensioning of the evolved packet network, first the traf-
fic profile of each eNodeB is estimated. Then based on the considerations ex-
plained in the previous section, the location and distribution of core network
functional elements and the homing of the eNodeBs to the MMEs and SGWs
are decided so that they can support the load generated by all the subscribers. The
traffic profile from each eNodeB along with the capacities of the various ele-
ments is the main factor that controls the number of core network elements
installed. The traffic profile must be expressed in terms of a number of traffic
parameters or metrics. These are the number of busy hour attached subscribers,
ASUB, the number of busy hour data session set up attempts, BHDSA, and if
VoIP is supported, the number of busy hour voice call attempts, BHVCA. The
BHDSA is calculated by the following formula:

BHDSA = (ASUB * FractionofActiveSubscribersBH * 3600) / MeanSessionDuration(sec)  (9.1)

The ASUB parameter represents the number of LTE subscribers that are
able to have a successful connection with the PGW along with a successfully
established default EPS bearer and successfully allocated IP address in busy
hour. The busy hour, BH, is known to be the busiest 60-minute period of the
day, in which the total traffic is the maximum throughout the day. Moreover,
an estimate of the number of simultaneously established bearers, simultaneous
evolved packet system bearers (SEPSB), at busy hours is needed to provide the
bandwidth required for the network element data interconnect links such as the
S1-U, S5 and SGi interfaces. This parameter should be estimated for applica-
tions requiring the Internet and the intranet services and is obtained by

SEPSB = (ASUB * FractionofActiveSubscribersBH * MeanSessionDuration(sec)) / 3600  (9.2)
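Equations (9.1) and (9.2) can be checked with a short calculation; the subscriber figures below are hypothetical:

```python
def bhdsa(asub, fraction_active_bh, mean_session_duration_s):
    """Busy hour data session attempts, per (9.1)."""
    return asub * fraction_active_bh * 3600 / mean_session_duration_s

def sepsb(asub, fraction_active_bh, mean_session_duration_s):
    """Simultaneous EPS bearers at busy hour, per (9.2)."""
    return asub * fraction_active_bh * mean_session_duration_s / 3600

# Hypothetical profile: 100,000 attached subscribers, 50% active in the busy
# hour, 180-second mean session duration.
attempts = bhdsa(100_000, 0.5, 180)   # 1,000,000 session attempts in the busy hour
bearers = sepsb(100_000, 0.5, 180)    # 2,500 simultaneous bearers
```

Note how the two formulas trade the session duration against the hour: long sessions reduce the attempt rate but raise the number of bearers held simultaneously.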

The signaling for attachment and detachment, as well as EPS bearer es-
tablishment and management along with authentication requests and respons-
es, is what will impact the load on the S1-C interfaces.
In the most formal scenario, the resulting traffic profile and the QoS require-
ments such as delay and throughput, along with the core network element
capacity parameters and cost constraints, can be translated into a mathemati-
cal linear programming formulation. This will include an objective function
for optimization and a set of decision variables such as the link capacities and
number of each network element type. This has been carried out with heuristic
solutions developed on the basis of the CPLEX methodology in [11], to which
the reader may refer for more information and solution examples. However, here
we will continue on with providing the general guidelines for the dimensioning
process. With a detailed traffic profile and parameters and the capacity con-
straints of the vendor core network elements, one should be able to identify the
type and the needed number of network nodes and the bandwidth for the inter-
connect links. In general, the capacity constraints are characterized by four
basic types of capacity constraint parameters. These are the throughput,
transactions, subscribers, and bearers. The throughput is the total amount of
data load that a node can handle, and it is considered as a data plane limitation,
whereas the transactions parameter relates to the signaling traffic and signaling
messages processing capacity in the control plane. The subscribers parameter
represents the number of subscribers that can be handled by a node, both
active and idle (i.e., without an ongoing media session). The bearer param-
eters include the default and the dedicated type bearers explained in previous
sections. The default bearer is best effort and is mandatory for any attached
user, whereas the dedicated bearer is established based on a needed bit rate. The
number of transactions and subscribers (attached subscribers) in busy hour play
a major role in the determination of the capacity requirements for the MMEs.
Another factor is the number of established bearers in busy hour which impact
the control and connection related management signaling load placed on the
MMEs. The HSS is a database for subscribers’ data and involved with handling
control-plane signaling. The S6a link which provided the interface between the
MME and the HSS is concerned with the number of transactions the number
of attached subscribers. connection between the MME and the HSS is control
plane only. The throughput in busy hour is the major capacity determinant
for the SGW which works as a mobility anchor for the traffic being carried on
different eNodeBs and forwards packets between the eNodeB and the PGW.
With respect to signaling and transactions, the SGW is affected by the setup
and teardown of bearers, mobility (inter-eNB handover), and the idle to con-
nected transition states of the UE. The throughput is also a major determinant
in the capacity requirement for the PGW. For the PGW, the control-plane load
is affected by the setup and teardown of bearers, the QoS negotiation with the
PCRF, and inter-SGW mobility.
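Given a busy-hour traffic profile and per-node capacity figures, the required node count is driven by whichever capacity dimension is exhausted first. A sketch with hypothetical numbers (the demand and vendor capacity values are assumptions for illustration):

```python
import math

def nodes_required(demand, per_node_capacity):
    """Identical nodes needed so that every capacity dimension
    (throughput, transactions, subscribers, bearers) is satisfied."""
    return max(math.ceil(demand[k] / per_node_capacity[k]) for k in demand)

# Hypothetical busy-hour demand against hypothetical vendor SGW capacity:
demand = {"throughput_gbps": 120, "transactions_per_s": 40_000,
          "subscribers": 1_500_000, "bearers": 2_000_000}
sgw_capacity = {"throughput_gbps": 40, "transactions_per_s": 25_000,
                "subscribers": 1_000_000, "bearers": 1_200_000}
n_sgw = nodes_required(demand, sgw_capacity)  # throughput is the binding limit here
```

In this example the throughput dimension demands three SGWs while the other three dimensions would be satisfied by two, illustrating why the SGW and PGW are typically throughput-limited while the MME is transaction- and subscriber-limited.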
The dimensioning of the EPC network element interconnect and inter-
faces requires a more detailed estimate and classification of the traffic at busy
hours. The traffic may be classified into two major types, called the elastic and
the stream traffic [11, 14, 16]. The elastic traffic includes Web browsing and
FTP, and is generated by nonreal-time applications and carried over the TCP.
The TCP employs feedback control mechanisms to adapt the transfer rate to
the instantaneous network conditions, and allows flows on the link to share the
available capacity. The TCP and its optimal adaptation to wireless networks are
discussed in detail in [13]. Applications that generate elastic traffic require
reliable packet delivery, in which every piece of data must be transferred; lost
packets are retransmitted. In terms of quality of service, the emphasis is placed
on the actual throughput. Stream traffic, in contrast, consists of flows whose
packets need to be delivered in a timely manner, with packet delay and delay
variation being the most important quality measures. Stream traffic is usually
generated by (near) real-time applications such as audio/video communications
or streaming applications, and is carried via RTP/UDP. In [12], three capacity
planning methods were investigated based
on: (1) a dimensioning formula-based approach, (2) an overbooking factor-
based approach, and (3) a delay-based approach. The formula-based approach
calculates the necessary link bandwidth to handle the expected peak aggregated
traffic flow. In the overbooking factor-based approach, the link dimensioning is
based on the expected average aggregated traffic flow boosted by an overbooking
factor. This assumes that not all users/applications sharing the link peak at
the same instant, and hence assigns a lower capacity than the aggregated
peak traffic flow. The delay-based approach uses the delay constraint
to determine the bandwidth needed for the link to handle the expected traffic
flow.
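The difference between the first two approaches can be sketched numerically. The flow rates and the overbooking factor of 2.5 below are invented purely for illustration:

```python
def formula_based_capacity(peak_rates_mbps):
    """Formula-based dimensioning: provision the link for the expected
    peak aggregated flow, i.e., the sum of the per-flow peak rates."""
    return sum(peak_rates_mbps)

def overbooking_based_capacity(mean_rates_mbps, overbooking_factor):
    """Overbooking-based dimensioning: provision the aggregated average
    flow boosted by a factor > 1, betting that not all flows peak at once."""
    return overbooking_factor * sum(mean_rates_mbps)

# Ten flows, each peaking at 10 Mbps but averaging only 1 Mbps.
peaks = [10.0] * 10
means = [1.0] * 10
print(formula_based_capacity(peaks))           # 100.0 Mbps
print(overbooking_based_capacity(means, 2.5))  # 25.0 Mbps
```

Here the overbooking approach assigns a quarter of the formula-based capacity, at the risk of congestion when many flows do peak simultaneously; the delay-based approach replaces the fixed factor with an explicit delay target.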
For the delay-based approach, the M/D/1 and M/G/R-PS queuing models
are used for the stream traffic and the elastic traffic, respectively, to calcu-
late the delays. These models are used iteratively to calculate the link capacities
needed to handle the required bit rates and throughputs while meeting the
delay constraints. The M/G/R-PS model supports end-to-end QoS guarantees by
following the theory of processor sharing, which characterizes traffic at the
flow level; the two main QoS measures to be guaranteed are throughput
and delay. The model is capable of characterizing the TCP traffic assuming each
user has an individual flow over the link, which is viewed as a processor-sharing
system offering its service (capacity) to several customers. In the M/G/R-PS
scheme, multiple files (applications) are transmitted simultaneously over one
link, each on a different TCP connection, by breaking them into small pieces
(i.e., packets) and transferring these pieces sequentially. The control mecha-
nism of TCP regulates the amount of traffic in a way that resources are shared
equally among active flows. Depending on whether one TCP connection is
able to utilize the total link capacity on its own, the system is described either
by an M/G/1 PS or M/G/R PS model. The basic M/G/R-PS model is discussed
further in [14–16]. This model is applied for dimensioning mobile networks as
well as ADSL. The elastic traffic acts like a processor sharing system because all
elastic traffic flows sharing the same link share the same amount of bandwidth
and other resources. The shared link appears like a system with R servers where
R is equal to the ratio between the link rate C and the attainable transfer rate
of one connection, r_peak, that is, R = C/r_peak. Thus, up to R flows can be served
at the same time without rate reduction imposed by the system. If r_peak is larger
than the link capacity, the M/G/R PS model reduces to the simple M/G/1 PS
model. A good property of M/G/R-PS queues is that the average sojourn time
(average time in the system) is independent of the shape of the general service
time or file size distribution; thus, large files do not unduly delay small ones.
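The delay factor of the M/G/R-PS model can be computed from the Erlang C formula. A minimal sketch, assuming an integer R and Poisson flow arrivals; the 20-Mbps link, 2-Mbps peak rate, and 50% utilization are example figures:

```python
def erlang_c(servers, offered_load):
    """Erlang C: probability that an arriving job must queue, computed via
    the numerically stable Erlang B recursion (requires offered_load < servers)."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return servers * b / (servers - offered_load * (1.0 - b))

def mgr_ps_transfer_time(file_mbit, r_peak_mbps, link_mbps, utilization):
    """Expected file transfer time under M/G/R-PS: the ideal time
    (file size / r_peak) stretched by the delay factor f_R."""
    # R = C / r_peak; if r_peak exceeds C the model reduces to M/G/1 PS (R = 1).
    r = max(1, int(link_mbps // r_peak_mbps))
    offered = r * utilization          # offered load in 'server' units
    f_r = 1.0 + erlang_c(r, offered) / (r * (1.0 - utilization))
    return (file_mbit / r_peak_mbps) * f_r

# 10-Mbit file, 2-Mbps per-flow peak rate, 20-Mbps link at 50% utilization.
print(round(mgr_ps_transfer_time(10, 2, 20, 0.5), 3))  # ≈ 5.036
```

With the link half-loaded and R = 10, the delay factor is only about 1.007, so a 10-Mbit file transfers in roughly 5.04s instead of the ideal 5s; as utilization approaches 1, the factor, and hence the link capacity required for a given delay target, grows rapidly.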

References
[1] 3GPP TS 23.002, “Technical Specification 3rd Generation Partnership Project; Techni-
cal Specification Group Services and System Aspects; Network architecture, Release 8.
V8.7.0,” 2010.

[2] 3GPP TS 36.410, “Technical Specification 3rd Generation Partnership Project; Technical
Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access
Network (E-UTRAN); S1 General Aspects and Principles, Release 12, V12.1.0,” 2014.
[3] 3GPP TS 36.411, “Evolved Universal Terrestrial Radio Access Network (E-UTRAN); S1
Layer 1,” 2014.
[4] 3GPP TS 36.412, “Evolved Universal Terrestrial Radio Access Network (E-UTRAN); S1
Signaling Transport,” Release 12, 2011.
[5] 3GPP TS 36.413, “Evolved Universal Terrestrial Access (E-UTRA); S1 Application Proto-
col (S1 AP),” Release 12, 2011.
[6] 3GPP TS 36.414, “Evolved Universal Terrestrial Radio Access Network (E-UTRAN); S1
Data Transport,” 2014.
[7] 3GPP TS 29.061, “Technical Specification Group Core Network and Terminals; Inter-
working Between the Public Land Mobile Network (PLMN) Supporting Packet Based
Services and Packet Data Networks (PDN), V12.10.0, Release 12,” 2011.
[8] 3GPP TS 23.203, “Policy Control and Charging Architecture (Stage 2), Section 6.1.7.2.”
[9] 3GPP TS 29.213, “Technical Specification Group Core Network and Terminals; Policy
and Charging Control Signaling Flows and Quality of Service (QoS) Parameter Mapping,
Release 13, sec. 6.4, 2012.
[10] 3GPP TS 23.401 V13.4.0 (2015-09), “General Packet Radio Service (GPRS) enhance-
ments for Evolved Universal Terrestrial Radio Access Network, Release 13,” 2012.
[11] Dababneh, D., “LTE Traffic Generation and Evolved Packet Core (EPC) Network
Planning,” Thesis submitted to Ottawa-Carleton Institute for Electrical and Computer
Engineering (OCIECE) Department of Systems and Computer Engineering Carleton
University Ottawa, Ontario, Canada, K1S 5B6, March 2013.
[12] Checko, A., L. Ellegaard, and M. Berger, “Capacity Planning for Carrier Ethernet LTE
Backhaul Networks,” IEEE Wireless Communications and Networking Conference (WCNC),
April 2012.
[13] Rahnema, M., UMTS Network Planning and Inter-Operation with GSM, New York: John
Wiley & Sons, 2008.
[14] Riedl, A., et al., “Investigation of the M/G/R Processor Sharing Model for Dimensioning
of IP Access Networks with Elastic Traffic,” Institute of Communication Networks,
Munich University of Technology (TUM), Siemens AG, Munich, 2011.
[15] Lindberger, K., “Balancing Quality of Service, Pricing and Utilisation in Multiservice
Networks with Stream and Elastic Traffic,” Proc. of the International Teletraffic Congress
(ITC 16), Edinburgh, Scotland, 1999.
[16] Núnez Queija, R., J. L. van den Berg, and M. R. H. Mandjes, “Performance Evaluation
of Strategies for Integration of Elastic and Stream Traffic,” Centrum voor Wiskunde en
Informatica (CWI), PNA-R9903 February 28, 1999.
10
LTE-Advanced Main Enhancements
The LTE capabilities have been enhanced in Release 10 to provide increased
peak rates with up to 3 Gbps on downlink and 1.5 Gbps on uplink, higher
channel spectral efficiencies from a maximum of 16 bps/Hz in Release 8 to 30
bps/Hz in Release 10, and improved performance at cell edge. These higher peak
spectral efficiencies can be achieved using up to 8-layer spatial multiplexing in
the downlink and up to 4-layer spatial multiplexing in the uplink, according to
the Release 10 specifications for single user MIMO (SU-MIMO). Release 10
LTE introduces new enhancements to make the technology compliant with the
International Telecommunication Union’s requirements for IMT-Advanced,
where the resulting system is known as LTE-Advanced. A useful summary of
the features of the new system is provided in [1]. The Release 10 enhancements
are designed to be, for the most part, backwards-compatible with Release 8. A
Release 10 base station can control a Release 8 mobile, normally with no loss
of performance, while a Release 8 base station can control a Release 10 mobile.
In the few cases where there is a loss of performance, the degradation has been
kept to a minimum.
The main features of Release 10 include carrier aggregation, the enhance-
ment to multiple antenna transmissions, and the relaying functions with some
of the impacts to other aspects of the system, which will be covered in this chap-
ter. There are further features and enhancements to LTE-Advanced that have
been provided or proposed in Releases 11 and 12, which include coordinated
multipoint transmissions, enhanced PDCCH, proximity services for machine-
to-machine communications, IP flow mobility or seamless WLAN offload, se-
lective IP traffic offload, certain enhancements for machine-type communica-
tions, and improved interoperation with wireless local area networks. In this
chapter, we will also discuss the coordinated multipoint transmissions and the


enhanced PDCCH, as well as provide a fairly good coverage of machine-to-
machine communication and the key enhancements as particularly related to
LTE networks for its realization.

10.1 Carrier Aggregation


LTE-Advanced allows a mobile to transmit and receive on a number of ag-
gregated carriers, where each carrier is named a component carrier. The com-
ponent carrier can have a bandwidth of 1.4, 3, 5, 10, 15, or 20 MHz, and a
maximum of five component carriers can be ultimately aggregated [2]. Thus,
the maximum bandwidth is 100 MHz. This maximum configuration is handled
by a single UE category, category 8, which supports a peak data rate of 3,000
Mbps in the downlink and 1,500 Mbps in the uplink. There were originally
certain restrictions on the number of component carriers in UL and DL and on
the mode of operation in Release 10, which were lifted in the later Releases 11
and 12. For example, in Release 10, aggregation of up to two component carriers
is allowed on DL, and only one carrier (no aggregation) is allowed on UL, but
these restrictions were re-
moved in the later releases. Furthermore, in the TDD mode, each component
carrier had to have the same TDD configuration in Release 10, but that restric-
tion was removed as part of Release 11. Finally, the component carriers had to
have the same mode of operation (FDD or TDD) up to Release 11, but that
restriction was removed in Release 12.
However, in Releases 10 and 11, the number of component carriers is limited
to two when using contiguous intraband aggregation and to one otherwise.
Moreover, to limit the complexity of the specifications for band combinations
and the implementations, the specifications only support carrier aggregation in
a limited number of frequency bands [3]. Carrier aggregation (CA) can be used
for both FDD and TDD. The number of aggregated carriers can be different in
DL and UL, but the number of UL component carriers cannot be larger than
the number of DL component carriers (in the FDD mode). The individual
component carriers can be of different bandwidths. There are three different
scenarios for carrier aggregation: (1) the interband aggregation where the com-
ponent carriers are located in different frequency bands, (2) noncontiguous
intraband where the component carriers are in the same frequency band but
separated by a frequency gap, and (3) the contiguous intraband where the com-
ponent carriers are in the same frequency band and are adjacent to each other.
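The bandwidth arithmetic and the limits described above can be captured in a short validation sketch (the function name is ours, not a standard API):

```python
# Component-carrier bandwidths allowed by the specification (MHz).
VALID_CC_BW_MHZ = {1.4, 3, 5, 10, 15, 20}

def aggregate_bandwidth(dl_ccs_mhz, ul_ccs_mhz):
    """Validate a carrier-aggregation configuration and return the total DL
    bandwidth: at most five component carriers, each of a legal bandwidth,
    and (in FDD) no more UL carriers than DL carriers."""
    if not 1 <= len(dl_ccs_mhz) <= 5:
        raise ValueError("1 to 5 DL component carriers allowed")
    if len(ul_ccs_mhz) > len(dl_ccs_mhz):
        raise ValueError("UL may not use more component carriers than DL")
    for bw in list(dl_ccs_mhz) + list(ul_ccs_mhz):
        if bw not in VALID_CC_BW_MHZ:
            raise ValueError(f"invalid component-carrier bandwidth: {bw} MHz")
    return sum(dl_ccs_mhz)

print(aggregate_bandwidth([20, 20, 10], [20]))  # 50 MHz of downlink spectrum
```

Five 20-MHz carriers give the 100-MHz maximum quoted in the text; the component carriers need not share one bandwidth, as the example shows.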

10.1.1 Connection Signaling


In carrier aggregation, there is normally a serving cell for each component
carrier, where the coverage range of each cell may vary due to the different
frequency bands involved. However, the RRC connection signaling is handled
by one cell, referred to as the primary serving cell (PCell), which uses the
primary component carrier (PCC) on DL and UL. The other component carriers
are referred to as secondary component carriers (SCC), on DL and possibly UL,
and the cells in which they are used are called secondary serving cells (SCell).
The primary cell contains one component carrier in the LTE TDD mode, or one
component carrier on UL and one on DL in the FDD mode, and is used in exactly
the same way as a cell in Release 8. In RRC_IDLE
state, the mobile performs cell selection and reselection using one cell at a time.
The RRC connection setup procedure is unchanged as the mobile only com-
municates with a primary cell. The secondary cells are only used by mobiles in
RRC_CONNECTED and are added or removed by means of mobile-specific
RRC Connection Reconfiguration messages. Each secondary cell contains one
component carrier in the TDD mode, and in the case of the FDD mode, it has
one component carrier on downlink and optionally one on uplink. The carrier
aggregation only affects the physical layer, the MAC protocol on the air inter-
face, and the RRC, the S1-AP, and the X2-AP signaling protocols. There is no
impact on the RLC or PDCP [4]. The mobile can transmit the uplink signaling
control information, PUCCH, at the same time that it transmits data on the
PUSCH. The PUCCH is only transmitted from the primary cell. However,
the mobile can send the data on PUSCH on either the primary cell or on any
of the secondary cells. There are no significant changes in the procedures for
UL transmission and reception. The base station sends the PHICH acknowl-
edgments to the same cell that the mobile used for its uplink transmissions
(primary or secondary).

10.1.2 Resource Scheduling


The base station will use the PDCCH for transmitting the resource informa-
tion to the UE as in Release 8 but with a few modifications as detailed in [5,
6]. The component carriers are independently scheduled and generate inde-
pendent sets of hybrid ARQ feedback bits. However, cross-carrier scheduling
is supported in that the base station can trigger an UL or DL transmission
on one component carrier using a scheduling message on another. Release 10
implements cross-carrier scheduling by adding a carrier indicator field (CIF)
to each DCI format, which indicates the carrier to be used for the subsequent
transmission. In cross-carrier scheduling, the eNodeB may transmit its schedul-
ing messages on the component carrier that has the greatest coverage in order to
maximize the signaling reliability.
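The receiver's handling of the CIF reduces to one rule, sketched here with simplified DCI fields of our own choosing (a real DCI carries many more fields):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Dci:
    """Downlink control information, reduced to the fields relevant here.
    cif is the Release 10 carrier indicator field (None when absent)."""
    received_on_cc: int
    cif: Optional[int] = None

def target_carrier(dci: Dci) -> int:
    """Without a CIF, the grant applies to the component carrier it was
    received on; with cross-carrier scheduling, the CIF overrides that."""
    return dci.received_on_cc if dci.cif is None else dci.cif

print(target_carrier(Dci(received_on_cc=0)))         # 0: same-carrier scheduling
print(target_carrier(Dci(received_on_cc=0, cif=2)))  # 2: cross-carrier scheduling
```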

10.2 Enhanced Multiantenna Transmissions


LTE-Advanced introduces enhancements to both the UL and DL with regard
to multiantenna transmission schemes that can be configured. To adapt the
multiantenna transmission scheme to, for instance, the radio environment, a
number of different transmission modes (TM) have been defined. The base
station uses RRC signaling to inform the UE, based on its capability, of the
transmission mode to use. TM1-7 on the DL and TM1 on the UL were introduced
in Release 8, as discussed in Chapter 7. TM8 was introduced in Release 9, and
TM9 in Release 10, allowing 8 × 8 MIMO operation on the DL. In the UL, TM2
was introduced in Release 10 to allow 4 × 4 MIMO operation. The different
transmission modes differ in the number of antenna ports and the number of
layers (streams, or ranks) used. Three new UE categories were introduced in
Release 10, categories 6, 7, and 8, where category 8 supports the maximum
number of component carriers and 8 × 8 spatial multiplexing.

10.2.1 Spatial Multiplexing on Uplink


The uplink is enhanced to support single-user MIMO, with up to four trans-
mit antennas and four transmission layers [6]. The peak data rate on uplink
in Release 10 is 600 Mbps. This is 8 times greater than in Release 8 and is
achieved with the use of four transmission layers and two component carriers.
Eventually, LTE is expected to support a peak uplink data rate of 1,500 Mbps
with the use of five component carriers. The PUSCH is transmitted on antenna
port 10 for single-antenna transmission, on ports 20 and 21 for dual-antenna
transmission, and on ports 40 to 43 for transmission on four antennas, while the
same antenna ports are also used by the sounding reference signal (SRS). The
PUCCH can be transmitted from a single antenna on port 100 or from two
antennas using open loop diversity on ports 200 and 201 [7].
The need for SRSs from up to four antenna ports increases the demand
on SRS resources if all transmit antennas are to be sounded at a reasonable rate.
This means that the semistatic configuration of the resources via higher-layer
RRC signaling used in Release 8 is not efficient for this purpose. Therefore,
Release 10 introduces the possibility of dynamically triggering aperiodic SRS
transmissions via PDCCH channel. These dynamic aperiodic SRS transmis-
sions are known as type 1 SRSs, while the Release 8 RRC-configured SRSs
described are known as type 0 in Release 10.�

10.2.2 Spatial Multiplexing on Downlink


The downlink multiantenna transmission is extended to eight layer spatial mul-
tiplexing in Release 10 using transmission mode 9. The technique supports
three objectives: (1) single-user MIMO transmissions with a maximum of eight
layers on antenna ports 7 to 6 + n, where n is the number of layers used; (2)
multiple-user MIMO transmissions to a maximum of four mobiles on antenna
ports 7 and 8, providing the accurate feedback required by MU-MIMO; and (3)
allowing the base station to switch a mobile between the two
techniques every subframe without the need for additional RRC signaling.
Release 10 allows for two component carriers, each of which can carry eight
transmission layers rather than four, resulting in a peak downlink data rate of
1,200 Mbps, which is 4 times greater than in Release 8. Eventually, LTE is
expected to support a peak downlink data rate of 3,000 Mbps with the use of
five component carriers.
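The quoted peak rates all follow from linear scaling, assuming roughly 75 Mbps per layer per 20-MHz component carrier (the Release 8 figure of 300 Mbps over four layers):

```python
def peak_rate_mbps(layers, component_carriers, per_layer_mbps=75):
    """Peak rate scales linearly with the number of spatial layers and the
    number of aggregated 20-MHz carriers; 75 Mbps per layer per carrier is
    the approximate Release 8 baseline (300 Mbps over four layers)."""
    return layers * component_carriers * per_layer_mbps

print(peak_rate_mbps(4, 1))  # 300  Mbps: Release 8 downlink
print(peak_rate_mbps(8, 2))  # 1200 Mbps: Release 10 downlink, two carriers
print(peak_rate_mbps(8, 5))  # 3000 Mbps: five-carrier downlink target
print(peak_rate_mbps(4, 5))  # 1500 Mbps: five-carrier uplink target
```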

10.2.3 Downlink Reference Symbols


Reference signals are known symbols (to the receiver), which perform the func-
tions of: (1) providing an amplitude and phase reference for channel estimation
in the demodulation process, and (2) a power reference for channel quality es-
timation and frequency-dependent scheduling. Both of these functions are pro-
vided by the cell specific reference symbols (CRS), which are used in Release 8
in TM1-6. One cell-specific reference symbol sequence is provided per antenna
port from which the UE estimates the radio channels influence on the signal.
Using this together with knowledge about the codebook used for precoding,
which is performed before the addition of the reference symbols in Release 8,
the UE can demodulate the received signal and regenerate the data sent.
To support eight-antenna MIMO, Release 10 does not add four new antenna
ports each carrying cell-specific reference symbols. This would occupy more
resource elements, increasing the overhead for Release 10 mobiles that
recognized them and the interference for Release 8 mobiles that did not, while
adding no benefit to the performance of multiple-user MIMO [8]. Instead,
Release 10 introduces UE-specific demodu-
lation reference symbols as done on the uplink in Release 8, which are added to
the different data streams before precoding. In this scheme, the reference
symbols convey the combined influence of the radio channel and the precoding,
so the receiver needs no prior knowledge of the precoder. The transparency of
the precoding operation to the mobile allows the eNodeB to apply whatever
precoding it sees as most suitable, with the channel influence undone in the
receiver. Moreover, the UE-
specific demodulation reference symbols are transmitted in physical resource
blocks that are assigned to the target mobile. As a result, they do not cause any
overhead or interference for the other mobiles in the cell. The demodulation
reference symbols are transmitted on antenna ports 7 to 14. The signals on
ports 7 and 8 are the same ones used by dual-layer beamforming, while those
on ports 9 to 14 support single-user MIMO with a maximum of eight antenna
ports. With this, each individual reference symbol is actually shared among four
antenna ports by means of orthogonal code division multiplexing.
However, the UE-specific reference symbols are unsuitable for channel
quality measurements, which should be performed across the entire downlink
system bandwidth. This is handled by the transmission of channel state
information (CSI) reference symbols on eight more antenna ports, numbered from
15 to 22 [8]. These symbols are not precoded, so the antenna ports are differ-
ent from the ones used by the UE-specific reference signals stated above. The
functions of the CSI reference symbols are similar to those of the sounding
reference symbols used on uplink in Release 8, in the support of channel qual-
ity measurements and frequency-dependent scheduling. A cell can transmit the
reference symbols using two, four, or eight resource elements per resource block
pair, depending on the number of antenna ports that it has available. The cell
chooses the resource elements from a larger set of 40, with nearby cells choosing
different resource elements so as to minimize the interference between them.
The base station then supplies each mobile with the reference signal configura-
tion. This defines the subframes in which the mobile should measure the signal
and the resource elements that it should inspect, with a measurement interval
of 5 to 80 ms that depends on the mobile’s speed (see Table 6.10.5.3-1 of [7],
for instance). The long transmission intervals help to considerably reduce the
overheads that they incur. To avoid interference caused to Release 8 mobiles
that do not recognize these reference symbols, the base station can schedule the
Release 8 mobiles in different resource blocks.

10.3 Relay Nodes


Relay nodes act similarly to repeaters in that they extend the coverage range of
a cell and are useful in sparsely populated areas in which the performance of a
network is limited by coverage rather than capacity. However, relay nodes decode
the received radio signal before re-encoding and rebroadcasting it. By doing
this, they remove the noise and interference from the retransmitted signal, so
that it can achieve a higher performance than a repeater. Relays facilitate the
deployment of efficient heterogeneous networks made up of a mixture of large
and small cells, a feature introduced in LTE-Advanced. Relay nodes are
low-power base stations that provide coverage and capacity at cell edges and in
hot-spot areas, and can also be used to connect remote areas without a fiber
connection.
The relay node connects to the Donor eNodeB (DeNB) via a radio in-
terface, Un, which is a modification of the E-UTRAN air interface Uu, and
can be implemented via radio or point-to-point microwave link to increase its
range. The Un and Uu interfaces can use either the same or different carrier
frequencies. If the carrier frequencies are different, then the Un interface can
be implemented in exactly the same way as a normal air interface, where, for
instance, the relay node acts like a base station on the Uu interface towards the
mobile and independently acts like a mobile on the Un interface towards the
donor eNodeB. If the carrier frequencies are the same, then the Un interface
requires some extra functions to share the resources of the air interface with
Uu. The extra functions include enhancements to the physical layers and some
extra RRC signaling [9–11]. The resource sharing between the Un and Uu
interfaces is managed by allocating individual subframes to either the
Un or the Uu and is implemented in two stages. The donor eNodeB tells the
relay node about the allocation using an RRC RN Reconfiguration message,
and then the relay node configures the Un subframes as MBSFN subframes on
Uu, but without transmitting any downlink MBSFN data in them and without
scheduling any data transmissions on the uplink. However, since the start of
an MBSFN subframe is used by PDCCH transmissions on the Uu interface,
typically for scheduling grants for uplink transmissions, this prevents the use
of the PDCCH on the Un interface. Instead, the specification introduces the
relay physical downlink control channel (R-PDCCH), as a substitute for the
PDCCH on Un. The R-PDCCH transmissions look much the same as normal
PDCCH transmissions, but occur in reserved resource element groups in the
part of the subframe that is normally used by data. The first slot of a subframe
is used for transmitting downlink scheduling commands, while the second slot
is used for transmitting the uplink scheduling grants. The Un interface does
not use the physical hybrid ARQ indicator channel. Instead, the donor eNodeB
acknowledges the relay node’s uplink transmissions implicitly, using scheduling
grants on the R-PDCCH. This also eliminates the need for the physical control
format indicator channel.
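The subframe partitioning between Un and Uu can be sketched as a validity check. In FDD, subframes 0, 4, 5, and 9 carry synchronization, broadcast, or paging and can never be configured as MBSFN subframes, so they always remain on Uu (the function name is illustrative):

```python
# FDD subframes that may be configured as MBSFN subframes; 0, 4, 5 and 9
# carry synchronization, broadcast or paging and are excluded.
MBSFN_CAPABLE = {1, 2, 3, 6, 7, 8}

def partition_frame(un_subframes):
    """Split the ten subframes of a radio frame between the donor link (Un)
    and the access link (Uu); Un subframes must be MBSFN-configurable."""
    un = set(un_subframes)
    if not un <= MBSFN_CAPABLE:
        raise ValueError("Un subframes must be MBSFN-configurable (1,2,3,6,7,8)")
    uu = set(range(10)) - un
    return sorted(un), sorted(uu)

un, uu = partition_frame({1, 6})
print(un, uu)  # [1, 6] [0, 2, 3, 4, 5, 7, 8, 9]
```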
The relay node’s access stratum is controlled by the donor node in which
the radio resources are shared between UEs served directly by the donor eNo-
deB and the relay nodes. The relay node’s nonaccess stratum is controlled by an
MME selected by the donor node. The donor eNodeB incorporates the func-
tions of a PDN gateway and a serving gateway, which allocate an IP address
for the relay node and handle its traffic. The donor node also incorporates a
relay gateway that shields the core network and the other base stations from the
need to know anything about the relay nodes directly. The relay node acts as an
eNodeB towards the mobile and controls the mobile’s access stratum. For this,
the relay node has one or more physical cell identities of its own, broadcasts its
own synchronization signals and system information, and schedules its trans-
missions on the Uu interface.

10.4 IP Flow Mobility and Seamless WLAN Offload


The trend of increasing mobile data traffic threatens to overwhelm
operators' networks and provides the incentive to offload Internet traffic
from their core networks. Various mechanisms have been proposed in 3GPP
technical specifications Releases 10, 11, and 12 for LTE-Advanced to flexibly
offload IP traffic from the LTE core network as well as switch the IP PDN
connections, and IP flows between 3GPP LTE and non-3GPP radio access
technologies, of which an overview is given in the following sections.

10.4.1 Local and Selective IP Traffic Offload


In the Release 10 LTE-Advanced, the MME uses its knowledge of the mobile
location to select a nearby serving gateway and PDN gateway within the LTE
core network to route the IP traffic via local IP gateways. In this way, the traffic
ends up taking a shorter route, although it still travels through the EPC [12].
In Release 12, the PDN gateway can be replaced by a local gateway that lies in
a wireless local access network and may be colocated with the home eNodeB
(home eNodeB gateway). This allows the operator to offload Internet traffic
from the evolved packet core. Traffic for the operator's internal services, such
as the IP multimedia subsystem, is unaffected. A home eNodeB belongs to a
closed subscriber group (CSG), through which it can provide either exclusive
or preferential access to mobiles that also belong to the closed subscriber group.
A mobile’s list of closed subscriber groups is stored by the USIM and can be
downloaded from a device management server that is controlled by the network
operator. Home eNodeBs have lower power limits than normal base stations,
can control only one cell, and support the X2 interface in Release 10.
The home eNodeB can communicate over the S1 interface with the evolved
packet core through the home eNodeB gateway that shields the EPC from the
potentially huge numbers of home eNodeBs. The S1 data and signaling mes-
sages are transported by the user’s Internet service provider and hence offload
the operator’s network from unnecessary traffic.
Moreover, IP traffic can also be offloaded locally and selectively based on
the access point names. This function is realized by having the home eNodeB
send the local gateway’s IP address to the MME as part of any S1-AP Uplink
NAS Transport or an Initial UE Message. If the user requests connectivity to
an access point name for which local IP access is permitted, then the MME
can select the local gateway in place of the usual PDN gateway and indicate
this selection to the home eNodeB. The local traffic can then travel directly be-
tween the home eNodeB and its incorporated local gateway and avoid passing
through the serving gateway. In case data arrives on the downlink while the user
is in RRC_IDLE, the local gateway sends the first downlink packet over the
S5 interface to the serving gateway, which triggers the usual paging procedure
and moves the mobile into RRC_CONNECTED. The local gateway can then
deliver subsequent downlink packets directly to the home eNodeB.
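The MME's gateway choice described above amounts to a simple per-APN decision; a schematic sketch with invented identifiers and addresses:

```python
def select_pdn_gateway(apn, lipa_permitted_apns, local_gw):
    """MME-side selection, sketched: if the home eNodeB advertised a local
    gateway address (via S1-AP Uplink NAS Transport or Initial UE Message)
    and the requested APN permits local IP access, pick the local gateway;
    otherwise fall back to the usual PDN gateway selection."""
    if local_gw is not None and apn in lipa_permitted_apns:
        return local_gw
    return "core-pgw"  # placeholder for the normal PGW selection logic

print(select_pdn_gateway("internet", {"internet"}, "10.0.0.1"))  # 10.0.0.1
print(select_pdn_gateway("ims", {"internet"}, "10.0.0.1"))       # core-pgw
```

With the first call, Internet traffic breaks out locally at the home eNodeB's colocated gateway; with the second, IMS traffic still traverses the EPC.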

10.4.2 Multiaccess PDN Connectivity and IP Flow Mobility


The multiaccess PDN connectivity [13] allows a network operator to simulta-
neously connect a mobile to different access point names via different radio ac-
cess technologies, specifically one 3GPP network and one non-3GPP network.
As an example, an operator may connect a subscriber to the IP multimedia
subsystem using the LTE network to exploit the LTE network’s quality of ser-
vice guarantees. At the same time, the operator might connect the user to the
Internet using a domestic wireless local area network to offload the Internet
traffic from the LTE access and core network. This function is implemented
through the intersystem routing policies (ISRP) specified in 3GPP TS 23.402 [14].
The policies can be defined per APN, per IP flow class under any APN or per IP
flow class under a specific APN and can be provided to the UE either through
access network discovery and selection function (ANDSF) [14] or by means of
static preconfiguration.
IP flow mobility [15] allows IP flows belonging to the same or different
applications to connect via different radio access technologies, such as the
3GPP E-UTRAN and WLAN, and to move seamlessly between them. In a typical
scenario, a user may connect through a general APN to the network operator’s
servers and to the Internet. The operator can then use a wireless local area net-
work for the Internet and any other best-effort traffic, while reserving
the LTE access network for real-time traffic such as streaming video from its
own servers. The function uses ANDSF to provide the mobile with a priori-
tized list of flow-based intersystem routing policy rules. These are similar to the
service-based ISRP rules discussed earlier, except that each rule uses a routing
filter that contains information such as the source and destination IP addresses
and port numbers to identify an individual traffic flow. For this, the mobile has
to support dual-stack mobile IPv6 (DSMIPv6) [16, 17], with extensions that
let it define individual traffic flows by means of routing filters. IP flow mobility
is implemented by having the mobile to connect to a PDN gateway through
both the LTE and a non-3GPP access network. It can then assign a traffic
flow to a particular radio access technology by composing a DSMIPv6Binding
Update message that contains the routing filters and by sending the message
across the access network to the PDN gateway. The PDN gateway updates its
routing information and directs subsequent downlink packets to the mobile
over the requested network. Because the solution is based on DSMIPv6, IP
address preservation and session continuity are provided when moving IP flows
from one access to the other.
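The flow-based routing rules behave like a prioritized packet-filter list; a simplified sketch matching only on destination address and port (real routing filters carry the full 5-tuple and more, and the rule fields here are our own simplification):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RoutingFilter:
    """One flow-based ISRP rule; None fields act as wildcards."""
    priority: int             # lower value = higher priority
    dst_port: Optional[int]
    dst_addr: Optional[str]
    access: str               # "lte" or "wlan"

def route_flow(dst_addr, dst_port, rules, default="lte"):
    """Pick the access network for a flow: the highest-priority matching
    rule wins, mirroring the prioritized DSMIPv6 binding filters."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if (rule.dst_addr in (None, dst_addr)
                and rule.dst_port in (None, dst_port)):
            return rule.access
    return default

rules = [
    RoutingFilter(1, 554, None, "lte"),    # RTSP streaming stays on LTE
    RoutingFilter(2, None, None, "wlan"),  # everything else offloaded to WLAN
]
print(route_flow("198.51.100.7", 554, rules))  # lte
print(route_flow("198.51.100.7", 80, rules))   # wlan
```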

10.5 Enhanced PDCCH


In Release 8 of LTE, the physical downlink control channel (PDCCH), which
carries the UE-specific scheduling assignments for downlink resource alloca-
tion, uplink grants, physical random access channel (PRACH) responses, UL
power control commands, and common scheduling assignments for signaling
messages (such as system information and paging), is transmitted on the control
region of subframes (up to the first three symbols of each subframe). This
limits the capacity of the control channel, particularly for the increased
control signaling required by features introduced in LTE-Advanced, such as
cross-carrier scheduling in carrier aggregation, enhanced multiantenna
transmissions (base stations with an increasing number of antennas), and cells
that contain a large number of low-data-rate devices as in, for instance,
machine-to-machine communications.
Furthermore, the PDCCH is distributed across the full downlink bandwidth
and hence cannot benefit from frequency-domain intercell interference co-
ordination. These limitations are addressed by the enhanced PDCCH
(EPDCCH), introduced in Release 11 [6, 7], which operates in addition to the
legacy PDCCH. The EPDCCH carries the same information as the PDCCH but
is transmitted in the downlink data region and is UE-specific, in that differ-
ent UEs can have different EPDCCH configurations. The base station uses
RRC signaling to assign individual resource block pairs
within each subframe to either the PDSCH or the EPDCCH on demand, which
increases the efficiency of resource utilization. The PDCCH is still needed,
however: the control region at the start of each subframe still carries the
PCFICH and the PHICH, as well as the control information
that is located in the common search space. In each subframe that contains the
EPDCCH, the mobile searches for scheduling messages that are directed to
one of its radio network temporary identities (RNTI). If a match is found, the
mobile commences data transmission and reception in the usual way. Because
the EPDCCH and PDSCH arrive in parallel within each subframe, the mobile
needs to buffer the entire subframe and process the EPDCCH and then the
PDSCH if it is addressed to the mobile.
Each UE can be configured with two EPDCCH sets, each with an
adjustable capacity of 2, 4, or 8 PRB pairs. Since the EPDCCH is in the data
region and is UE-specific, it can also be used in beamforming to increase the
LTE-Advanced Main Enhancements 209

reliability of signal reception. The EPDCCH is transmitted on the four new


antenna ports 107 to 110, which are associated with reference signals that
occupy the same resource elements as the ones on ports 7 to 10. The EPDCCH
and its reference signals are precoded by means of a mobile-specific precoding
matrix, so that the precoding process is transparent to the mobile. The
EPDCCH supports simultaneous communications with four different mobiles
via MU-MIMO, where each mobile receives a single layer on one antenna port.
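A minimal sketch of the configuration constraints just described (at most two sets per UE, each of 2, 4, or 8 PRB pairs) can make them concrete. This is a didactic check, not code from any 3GPP stack:

```python
# Illustrative check of an EPDCCH configuration against the constraints
# described above: at most two sets per UE, each of 2, 4, or 8 PRB pairs.
ALLOWED_SET_SIZES = {2, 4, 8}
MAX_SETS_PER_UE = 2

def validate_epdcch_config(sets):
    """sets: list of lists, each inner list holding the PRB-pair indices
    of one EPDCCH set configured for the UE."""
    if len(sets) > MAX_SETS_PER_UE:
        raise ValueError("a UE may be configured with at most two EPDCCH sets")
    for prb_pairs in sets:
        if len(prb_pairs) not in ALLOWED_SET_SIZES:
            raise ValueError("each EPDCCH set must contain 2, 4, or 8 PRB pairs")
        if len(set(prb_pairs)) != len(prb_pairs):
            raise ValueError("PRB pairs within a set must be distinct")
    return True

# One set of 2 PRB pairs plus one set of 4 PRB pairs is a valid configuration:
validate_epdcch_config([[0, 5], [10, 11, 20, 21]])   # returns True
```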

10.6 Coordinated Multipoint Transmissions and Reception


Coordinated multipoint transmissions and reception (CoMP) was introduced
in Release 11 to improve network performance at the cell edges. Mobiles com-
municating from around the cell edge receive a weaker signal from the serving
cell and experience more interference from other cells that are nearby. This re-
sults in degraded service quality and reduced data rates. In CoMP, a number of
TX antennas in the DL and a number of RX antennas in UL in nearby sectors
and sites cooperate so as to increase the power received by and from a mobile
at the cell edge, to reduce the interference and increase the data rate achieved
by cell-edge users. The cooperating antennas in CoMP can be in different cells
or in the same cell. 3GPP [18] has considered several scenarios in designing
CoMP. These consist of: scenario 1, where in a homogeneous network the co-
operating base station antennas control different sectors at a single site; scenario
2, similar to scenario 1, where the antennas are at different sites; scenario 3, as
for instance illustrated in Figure 10.1, for heterogeneous network containing
macrocells and picocells; and scenario 4 where the picocells are replaced by
remote radio heads (RRH) with the same physical cell identity as the parent.
It is simpler for the different cells to cooperate if their antennas are at the
same site (scenario 1) than if they are at different sites (scenarios 2 and 3).
However, the operator can configure different eNodeBs to coordinate the
operation either across the X2 interface or by using proprietary techniques,
whereas remote radio heads homed to a centralized eNodeB may be configured
to communicate with it via a high-speed link [8].

Figure 10.1 Coordinated transmission from two cells to a device located near the cell edge (one
scenario of CoMP).
The Release 11 specifications support uplink and downlink CoMP using
coordinated scheduling/beamforming, dynamic point selection, and noncoher-
ent joint transmission and reception [8, 18]. In coordinated scheduling/beam-
forming, nearby points coordinate their uplink scheduling and beamforming
decisions to minimize the interference that they receive from other mobiles. In
dynamic point selection, the network actually transmits and receives from only
one point at a time, with the selection potentially changing from one subframe
to the next. In joint reception (JR), the network receives data at multiple points
and combines them to improve the quality of the received signal. The CoMP
cooperating antenna sets for the uplink and downlink can be different from
each other, and the CoMP reception points can be different from the CoMP
transmission points; a point that transmits in a particular subframe is
known as a CoMP transmission point.
through the CSI reference symbols. The CoMP has more impact on cell-edge
mobiles on the uplink than on the downlink. Simulation results reported
by 3GPP [18] for joint transmission and reception in a heterogeneous network
without enhanced intercell interference coordination show improvements in
cell-edge data rates of 24% in the downlink and 40% in the uplink, with a
resulting cell capacity rise of 3% in the downlink and 14% in the uplink.
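As a conceptual sketch of dynamic point selection, the serving point can be re-evaluated each subframe from per-point channel-quality reports. The selection metric, point names, and CSI values below are illustrative assumptions, not a 3GPP-specified algorithm:

```python
# Didactic sketch of dynamic point selection: each subframe, transmit from
# the single cooperating point with the best reported channel quality.
def select_transmission_point(csi_reports):
    """csi_reports: dict mapping point id -> reported channel quality (dB).
    Returns the point to use for this subframe."""
    return max(csi_reports, key=csi_reports.get)

# One trace across three subframes as a mobile moves between two points:
trace = [
    {"macro": 9.0, "pico": 4.0},   # subframe n: macro is better
    {"macro": 6.5, "pico": 6.0},   # subframe n+1: still macro, by a margin
    {"macro": 3.0, "pico": 7.5},   # subframe n+2: selection switches to pico
]
chosen = [select_transmission_point(r) for r in trace]
# chosen == ["macro", "macro", "pico"]
```

The point of the sketch is that the selection can change on a per-subframe basis, while only one point transmits at a time.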

10.7 Machine-Type Communication


The rapid deployment of LTE networks, and of their much higher-capacity
advanced versions, is expected to play a significant role in providing the means
for machine-type communication (MTC) in the realization of the Internet of
Things (IoT) [19] and machine-to-machine (M2M) communication. M2M is a
subpart of MTC in which the devices communicate with each other and exchange
data, while MTC more broadly covers machines communicating with each other
or with servers (machine to machine) as well as with humans (machine to human).
In machine-to-machine communication, remote devices such as smart meters,
home appliances, vending machines, automotive fleet management, security
and medical devices, security cameras, power utilities, and cell phones can be
interconnected to support a variety of new applications [19]. The MTC ser-
vices will provide the operators with the opportunity to introduce end-to-end
information solutions beyond the currently supported human-based commu-
nication. The MTC traffic is classified based on the type of traffic (e.g., real or
nonreal-time traffic). For instance, emergency alerting (i.e., accidental and/or
critical e-healthcare information) is delay-sensitive, real-time traffic and
therefore demands strict priority. Smart metering and monitoring applications,
by contrast, generate a regular traffic pattern that is considered low-priority
traffic but carries higher aggregated throughput requirements. More specifically,
the QoS requirements of different types of MTC services vary widely and are
reflected in the MTC service features such as group-based communications,
low or no mobility, time-controlled and delay-tolerant, small data transmis-
sion, secure connection, device monitoring, and alarm messages. The 3GPP has
defined the major service requirements for MTC in Release 10 and onwards of
3GPP TS 22.368 [21]. Nevertheless, standardization work related to MTC
is ongoing in several standardization organizations outside 3GPP, such as the
IEEE, TIA, and ETSI, as in [22]. These standardization bodies have also described
the required enhancements in the existing communication standards in their
specifications. The IEEE working group 802.16p, for example, focuses on MTC
usage and describes modifications to various standards and air-interface
optimizations for mass device communication.
One of the considerations in LTE-A has been a focus on features that
facilitate MTC and thus allow it to outcompete the noncellular technologies for
this purpose.
This will also be a major focus of 5G networks. A major challenge in mobile
MTC is support of a large number of devices per cell sending packets (messag-
es) with small payloads. This can result in inefficient utilization of PRBs in the
case of LTE, causing excessive signaling overhead, and the risk of increasing net-
work congestion. Excessive signaling overload can also result when these devices
repeat their attempts to access the network after collisions. Thus, efficient RA
procedures, together with architecture enhancements for efficient resource
allocation and utilization, are needed for MTC to ensure the necessary load
control and congestion avoidance over the control channels. Furthermore,
provisioning low-cost, low-power devices for integration into some of the end
communicating entities is another challenge for manufacturers.

10.7.1 Architectural Enhancements


To facilitate efficient MTC and meet the service requirements, 3GPP has
considered baseline architectural enhancements in Release 10 and onwards for
various technologies including UMTS and LTE [22, 23]. The enhancements
include definition of some new mediating interfaces, as well as provision of
mechanisms for new radio resource allocation schemes to minimize the impact
of MTC traffic on H2H communication. The new interfaces define the
interaction between MTC service logic components and the 3GPP PLMN and
facilitate the deployment of third-party MTC services. The LTE MTC interfaces
provide enhancements over the way MTC devices connect to 2G and 3G
systems in that they also allow the application server to contact a device and
trigger it into action. In the architectural baseline, the MTC device communi-
cates with an MTC server or other MTC devices via the 3GPP bearer services
consisting of the SMS and IMS as provided by the PLMN.
The MTC server, also referred to as the application server (AS), is an entity
that connects to the 3GPP network either via a service capability server (SCS)
or directly. The AS may be owned by a third-party service
provider or the operator and may communicate with the device either directly
over the SGi interface [24] or indirectly through an SCS. The SCS, in turn, can
reach the device either directly or send a device trigger request over the Tsp in-
terface [25] to the MTC-IWF. The MTC-IWF looks up the user’s subscription
details in the home subscriber server, decides the delivery mechanism that it will
use, and triggers the device over the control plane of LTE. Release 11 provides
only the SMS-based mechanism in which the MTC-IWF contacts the SMS
service center over the T4 interface [26] and has it trigger the device using
a mobile-terminated SMS, as illustrated in Figure 10.2. However, 3GPP is
planning a new interface, referred to as T5b, which will allow the MTC-IWF
to communicate directly with the MME.
The MTC device accesses the 3GPP network through the MTCu interface,
which can be based on the LTE Uu interface (in the case of LTE) for the trans-
port of user plane and control plane traffic. In MTC applications, the MTC
devices may initiate communication with the server, or there may also be occa-
sions where there is a need for the MTC server to poll data from MTC devices.
For MTC devices that are not continuously attached to the network
or that have no always-on PDP/PDN connection, the MTC server may send a
trigger indication to the device and thereby cause it to attach and/or establish a
PDP/PDN connection.

Figure 10.2 MTC architecture showing 3GPP-defined interfaces with EPC elements.
The 3GPP study for Release 12 [27, 28] focused on developing solutions
to handle massive numbers of MTC devices expected to communicate either
directly with each other or with the backend server in M2M communication.
The key issues are the network congestion in the random access process and in
the case of LTE also the efficient use of the PRBs and transport block sizing
for M2M communication. The capacity of a PRB under normal channel con-
ditions can be significantly higher than the need for a typical M2M message,
which can range from around 4 to 8 bytes. This means allocating even a single
PRB for an M2M exchange can result in the inefficient utilization of the radio
spectrum. Therefore, alternative resource allocation strategies, such as group
communication using data aggregation from colocated M2M devices,
can be helpful. The grouping of M2M devices helps to facilitate the access
control and decrease the redundant signaling to avoid congestion. The MTC
devices within the same group can be in the same area and/or have the same
MTC features and/or belong to the same MTC user.
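To see why a dedicated PRB is wasteful for such payloads, a back-of-the-envelope calculation helps. The figure of 24 reference-signal resource elements per PRB pair is an illustrative assumption; the exact overhead depends on the antenna configuration and control-region size:

```python
# Rough capacity of one PRB pair (1 ms x 180 kHz) versus a small M2M payload.
# The assumed overhead of 24 REs per PRB pair is illustrative only.
RE_PER_PRB_PAIR = 12 * 14          # 12 subcarriers x 14 OFDM symbols = 168 REs
OVERHEAD_RE = 24                   # assumed reference-signal overhead
BITS_PER_RE = {"QPSK": 2, "16QAM": 4, "64QAM": 6}

def prb_pair_capacity_bits(modulation):
    """Raw bit capacity of one PRB pair before channel coding."""
    return (RE_PER_PRB_PAIR - OVERHEAD_RE) * BITS_PER_RE[modulation]

payload_bits = 8 * 8               # an 8-byte M2M message
for mod in ("QPSK", "16QAM", "64QAM"):
    cap = prb_pair_capacity_bits(mod)
    print(f"{mod}: {cap} bits/PRB pair, utilization {100 * payload_bits / cap:.0f}%")
# -> QPSK: 288 bits/PRB pair, utilization 22%
#    16QAM: 576 bits/PRB pair, utilization 11%
#    64QAM: 864 bits/PRB pair, utilization 7%
```

Even under these generous assumptions, an 8-byte message fills at most about a fifth of the smallest allocable unit, which is the inefficiency that grouping and aggregation aim to recover.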
The grouping and aggregation/multiplexing of small M2M messages can
be achieved through the use of intermediate gateway nodes placed between
the devices and the eNodeB. The intermediate node can also potentially in-
crease the RA success probability, reduce power consumption, and result in
smaller delays. As was discussed earlier in the chapter, relay nodes are one of the
features of LTE-Advanced networks and can be used to aggregate traffic from
multiple M2M devices. In this way, the messages from multiple M2M
applications can be combined into a comparatively larger packet, maximizing
the utilization of radio resources. This will also help to reduce the congestion
in the radio access process due to the fact that it will allow a single relay node
to communicate with the eNodeB on behalf of multiple M2M devices in one
access attempt.
The concept is realized by introducing two new network elements re-
ferred to as the MTCD (MTC device) and MTCG (MTC gateway) as elabo-
rated in [28, 29] for LTE-Advanced networks. The MTCD is the UE designed
for MTC, which communicates through the LTE network with an MTC
server and/or other MTCDs. The MTCG facilitates efficient communication
among a large number of MTCDs and provides the connection to the EPC
through the eNodeBs for communication with the MTC servers. The MTCD
is able to establish a direct link with its donor eNodeB, just as a UE. There-
fore, the link between the MTCD and the base station is similar to the link
between a UE and the eNodeB. The eNodeB-to-MTCG wireless link is based
on LTE specifications, whereas the MTCG-to-MTCD and MTCD-to-MTCD
communications can either be via LTE specifications or other wireless
communications protocols such as the IEEE 802.15.x. However, the radio resources
may be reused between MTCGs in the case of multiple MTCGs per cell to
improve on the spectral efficiency of the network. The allocation of radio re-
source and scheduling between the MTCG and MTCDs is carried out at the
MTCG in coordination with its donor eNodeB. Instead of communicating
with a base station directly, the MTCDs establish a link with the associated
MTCG first and thus help to mitigate intense competition for radio resources
particularly when a large number of the MTCDs request access to network re-
sources simultaneously, as illustrated in Figure 10.3. In cases when one or more
MTC groups send communication requests to an eNodeB simultaneously, or
when multiple MTCDs may directly try to access a base station, network access
congestion can occur which results in performance degradation for both M2M
and H2H (human-to-human) services. Therefore, additional measures have to
be implemented to tackle the problem, some of which will be discussed in the
next section.
A number of strategies for allocation of radio resources for the different
transmission links such as the MTCD to MTCD, MTCG to eNodeB, and
MTCD to eNodeB are proposed in [29] with the aim of minimizing cochannel
interference and maximizing network efficiency. The simulation results presented
show improved network performance in terms of a user utility defined based on
the resulting allocated bandwidth, transmission delay, and loss ratio to reflect
the satisfaction level of the service delivered to the user.

Figure 10.3 High-level illustration of group-based MTC through LTE.



10.7.2 Managing the Network Access


When a massive number of MTC devices try to access the network simultane-
ously, it can result in a low RA success rate and high PRACH congestion. That
will, in turn, lead to unexpected delays, increased packet losses, high-power
consumption, and inefficient utilization of radio resources. Further congestion
can result on the access channels when an increased number of devices attempt
to reaccess the network after collisions. These considerations indicate the
need for improved RA schemes and RA resource allocation with overload
control to ensure that M2M communication performs adequately while
minimizing the impact on human-to-human communication.
The 3GPP has been proposing and studying a number of radio access
mechanisms starting with Release 10 [22] and more extensively with Release
11 [30, 31] to protect the network from RACH overload. The solutions
specified now in [30] apply mostly to both UMTS and LTE. The basic ideas behind
the proposed mechanisms are based on access class barring and prioritization
and separate RACH resource assignment for MTC and H2H communication.
These are discussed next.

10.7.2.1 Access Class Barring


Access class barring basically concerns the introduction of separate
access classes for MTC devices to allow the network to control the access from
these devices differently. Furthermore, depending on the granularity needed
for differentiating among MTC devices, one or several access classes can be
defined. The access class can be used to control the probability of an RA
success by the device, as well as to control whether a cell is barred for those MTC
access classes. The access class parameters can be downloaded from a device
management server and stored within the USIM [32]. The network uses con-
trol signaling to indicate to individual UEs or a group of UEs how to scale the
access control parameters. The MTC device indicates the access class parameter
to the network in its RRC connection requests and its nonaccess stratum signal-
ing. If the network is overloaded, it might then reject the message and provide
the device with a back-off timer to try again after the timer expiration. On
the downlink side, the MME can also signal the serving gateway to reduce the
number of low-priority downlink data notifications by a specified percentage.
The serving gateway responds by randomly discarding incoming packets for
idle low-priority devices to help alleviate the overload condition. Release 11
[31] introduced the Extended Access Barring (EAB), which is a technique for
the network to selectively control the rate of access attempts from UEs con-
figured for EAB, by, for instance, lowering their access priorities dynamically
considering the network congestion status. The UEs selected for EAB are those

that are considered more tolerant to access restrictions than others in order to
prevent overload of the access or the core network without the need to intro-
duce any new access classes. The operator can select UEs for EAB through the
information elements broadcast from a new system information block, SIB 14.
A UE configured for EAB uses its allocated access class, as defined under access
class barring, when evaluating the EAB information broadcast by the network
to determine whether its access to the network is barred. EAB is now part of
the 3GPP specifications and is defined in [31].
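The per-attempt barring check described above can be sketched as follows. The draw-and-compare logic and the randomized back-off formula mirror the access class barring procedure of 3GPP TS 36.331, but the parameter names here are simplified:

```python
import random

def access_barring_check(barring_factor, barring_time_s, rng=random):
    """One access class barring attempt.
    barring_factor: probability threshold broadcast by the cell (0.0-1.0).
    barring_time_s: mean barring duration broadcast by the cell, in seconds.
    Returns (allowed, backoff_s); backoff_s is 0 when access is allowed."""
    if rng.random() < barring_factor:
        return True, 0.0
    # Barred: wait a randomized back-off before the next attempt, drawn as
    # (0.7 + 0.6 * rand) * barring_time_s, as in TS 36.331.
    backoff = (0.7 + 0.6 * rng.random()) * barring_time_s
    return False, backoff
```

Broadcasting a low barring factor to the MTC access classes thus thins out their access attempts and spreads the retries in time, without affecting H2H classes.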
The details of parameter configuration scenarios for different access bar-
ring implementations are discussed in [31]. The 3GPP [30] uses two different
traffic models based on the scenarios in which either MTC devices access the
network uniformly over a period of time as in a nonsynchronized manner, or
where a large number of MTC devices access the network in a highly syn-
chronized manner such as after a power outage to evaluate the performance
of different RACH parameter configuration scenarios. The simulation results
presented conclude in favor of the EAB as performing best in terms of such
performance metrics as the access collision and access success probabilities.

10.7.2.2 RACH Resource Partitioning


A number of schemes may be used in the allocation of RACH
resources to MTC and H2H communication to achieve some balance in the
performance achieved for each. These schemes were also discussed in [27] and
consist of the following:

Separate RACH Resource Assignment


In this scheme, separate RACH resources can also be allocated for M2M and
H2H communication to control the access success probabilities for each. In the
case of LTE, this is realized by either splitting the preambles into H2H group(s)
and MTC group(s) or by allocating PRACH occasions in time or frequency to
either H2H or MTC devices.
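A minimal sketch of the preamble-splitting variant follows. The 64-preamble pool size is standard in LTE; the 48/16 split point chosen here is an arbitrary illustration, not a specified value:

```python
import random

# LTE provides 64 RACH preambles per cell; reserve a slice for MTC devices.
# The 48/16 split below is an arbitrary illustrative choice.
H2H_PREAMBLES = range(0, 48)
MTC_PREAMBLES = range(48, 64)

def pick_preamble(is_mtc, rng=random):
    """A device randomly selects a preamble from its own partition, so a
    burst of MTC attempts cannot collide with H2H access attempts."""
    pool = MTC_PREAMBLES if is_mtc else H2H_PREAMBLES
    return rng.choice(pool)
```

Shrinking or growing the MTC slice is then a direct knob on the collision probability experienced by each device class.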

Dynamic Allocation of RACH Resources


In the dynamic resource allocation, the network predicts when access load will
surge due to MTC devices and dynamically allocates additional RACH resourc-
es for the MTC devices to cope with the situation.

Slotted Access
In this method, dedicated access cycle/slots (similar to paging cycle/slots) are
defined for MTC devices. The access slots are synchronized with the corre-
sponding system frames, and an MTC device is associated with an access slot
through its ID (IMSI). The access slot could be the paging frame for the MTC
device in the simplest scenario.
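The ID-to-slot association can be sketched with the same kind of modulo mapping LTE uses for paging occasions. The formula below follows the paging-frame rule of 3GPP TS 36.304; treating it as the MTC access slot corresponds to the "simplest scenario" mentioned above:

```python
def access_frame(imsi, T=256, nB=None):
    """System frame number (within a cycle of T frames) in which a device
    may attempt access, following the LTE paging-frame rule:
    SFN mod T == (T // N) * (UE_ID mod N), with UE_ID = IMSI mod 1024.
    nB defaults to T, i.e., one occasion group per frame."""
    N = min(T, nB if nB is not None else T)
    ue_id = imsi % 1024
    return (T // N) * (ue_id % N)

# Devices whose IMSIs are congruent modulo 1024 share the same slot:
access_frame(123456789)          # -> 21
access_frame(123456789 + 1024)   # -> 21
```

Because the slot is derived deterministically from the IMSI, the population of devices is spread evenly over the access cycle without any signaling.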

Pull-Based Scheme
In the pull-based schemes, the MTC server is assumed to be aware of when
MTC devices have data to send or the MTC server needs information from the
MTC devices. In these cases, the server can have the network to page the MTC
device, which then will perform an RRC connection establishment for sending
the data to the server. The network paging of the MTC device is performed
taking into account the network load condition.

10.7.3 LTE MTC Devices


M2M devices, particularly the battery-driven ones, have strict battery power
constraints and hence require low complexity as well as low cost. Because the
M2M devices may be used also in locations without a direct energy source such
as water meters, sensors used in intelligent containers, on vehicles for on-board
security, or on the body of animals for wildlife management, it is important to
tailor the design for the lowest complexity and lowest-power consumption. For-
tunately, M2M devices spend most of the time in the idle state due to their low
activity and comparatively longer message interarrival times. The idle state is a
low-power state in the cellular networks in which devices usually sleep to save
battery and wake up periodically to check for any system information updates
or scheduled packet arrival by listening to the broadcast paging messages. In
the current LTE standards, the maximum allowed paging cycle is 2.56 seconds,
which is too short to be optimal for M2M communication for the reasons
just explained. Therefore, longer paging cycles, perhaps of the order of minutes,
are needed but should be carefully selected to achieve a fair balance between
delays and power consumption efficiency. 3GPP has considered these issues in
its Release 12 [34] and has introduced a new power saving state, in which the
mobile transmits periodic tracking area updates as it does in RRC_IDLE but
does not monitor the base station for paging messages and is not reachable by
the network until it reawakens. Another option under consideration is the
introduction of an extended discontinuous reception (DRX) cycle, in either the
RRC_IDLE or the RRC_CONNECTED state or both. A long DRX cycle offers a
compromise between the basic RRC_CONNECTED and RRC_IDLE states.
A 3GPP-specified, currently available, low-complexity device is the Cat-0
device, which does not support Tx diversity or MIMO operation. The average
price of such a device is 40% to 50% of that of traditional LTE devices [33].
The category-0 device works in the half-duplex FDD mode with a receiver
bandwidth of 20 MHz but peak uplink and downlink data rates of 1 Mbps.
Its maximum transmit power
is still 23 dBm. In Release 13, the 3GPP has planned for category-M devices
specifically optimized for MTC. This category includes Cat-1.4 MHz with a
receiver bandwidth of 1.4 MHz, and peak uplink and downlink bit rates of 1
Mbps (transport block size of 1,000 bits), a maximum transmit power of 20
dBm, and the Cat-200 kHz with a receiver bandwidth of 200 kHz (known as
the narrowband LTE) and peak uplink bit rate of 0.144 Mbps, and downlink
bit rate of 0.2 Mbps. Both of these will reduce the modem complexity further
down to about 20% and 15% of the Release 8 categories, respectively. The cat-
egory-M devices have a single receiver chain and work in the half-duplex mode.

References
[1] 3GPP TR 36.912, “Feasibility Study for Further Advancements for E-UTRA (LTE-Ad-
vanced), Release 11,” September 2012.
[2] 3GPP TS 36.300, “Evolved Universal Terrestrial Radio Access (E-UTRA) and Evolved
Universal Terrestrial Radio Access Network (E-UTRAN); Overall Description; Stage 2,
Release 11,” 2010.
[3] 3GPP TS 36.101, “User Equipment (UE) Radio Transmission and Reception, Release
11,” 2010.
[4] 3GPP TS 36.322, “Evolved Universal Terrestrial Radio Access (E-UTRA) and Evolved
Universal Terrestrial Radio Access Network, Radio Link Control (RLC) Protocol Specifi-
cation, Release 12,” 2011.
[5] 3GPP TS 36.212, “Multiplexing and Channel Coding, Release 11.”
[6] 3GPP TS 36.213, “Physical Layer Procedures, Release 11,” 2010.
[7] 3GPP TS 36.211, “Physical Channels and Modulation, Release 10,” 2009.
[8] Cox, C., An Introduction to LTE, 2nd ed., New York: John Wiley & Sons, 2014.
[9] 3GPP TS 36.216, “Evolved Universal Terrestrial Radio Access (E-UTRA); Physical Layer
for Relaying Operation, Release 11,” 2010.
[10] 3GPP TS 36.331, “Radio Resource Control (RRC); Protocol Specification, Release 11,”
2013.
[11] 3GPP TS 36.306, “User Equipment (UE) Radio Access Capabilities, Release 11,” 2010.
[12] 3GPP TS 23.401, “General Packet Radio Service (GPRS) Enhancements for Evolved
Universal Terrestrial Radio Access Network (E-UTRAN) Access, Release 12,” 2011.
[13] 3GPP TR 23.861, “Multi Access PDN Connectivity and IP Flow Mobility, Release 12,
Annex A,” 2011.
[14] 3GPP TS 23.402, “Architecture Enhancements for Non-3GPP Accesses, Release 12,”
2011.
[15] 3GPP TS 23.261, “IP Flow Mobility and Seamless Wireless Local Area Network (WLAN)
Offload, Release 11,” 2010.
[16] 3GPP TS 24.303, “Mobility Management Based on Dual-Stack Mobile IPv6, Release
11,” 2010.
[17] IETF RFC 5555, “Mobile IPv6 Support for Dual Stack Hosts and Routers,” June 2009.

[18] 3GPP TR 36.819, “Coordinated Multi-point Operation for LTE Physical Layer Aspects,
Release 11,” 2010.
[19] Atzori, L., A. Iera, and G. Morabito, “The Internet of Things: A Survey,” International
Journal of Computer and Telecommunications Networking, Vol. 54, No. 15, October 2010,
pp. 2787–2805.
[20] Gonçalves, V., and P. Dobbelaere, “Business Scenarios for Machine-to-Machine Mobile
Applications,” Proc. International Conference on Mobile Business and Global Mobility
Roundtable (ICMB-GMR), 2010, pp. 394–401.
[21] 3GPP TS 22.368, v1.0, “Service Requirements for Machine-Type Communications
(MTC) Stage 1, Releases 10 through 13,” 2009.
[22] ETSI TS 102 689, v1.1.1, “Machine-to-Machine Communications (M2M): M2M
Service Requirements,” August 2010.
[23] 3GPP TR 23.888, “System Improvements for Machine-Type Communications, V10.0.0,
Release 10,” 2009.
[24] 3GPP TS 29.061, “Interworking Between the Public Land Mobile Network (PLMN)
Supporting Packet Based Services and Packet Data Networks (PDN), Release 12,” 2011.
[25] 3GPP TS 29.368, “Tsp Interface Protocol Between the MTC Interworking Function
(MTC-IWF) and Service Capability Server (SCS), Release 12,” 2011.
[26] 3GPP TS 29.337, “Diameter-Based T4 Interface for Communications with Packet Data
Networks and Applications, Release 12,” 2011.
[27] 3GPP TR 22.888, “Study on Enhancements for Machine Type Communication, V12.0.0,
Release 12.”
[28] 3GPP TR 23.887, “Study on Machine-Type and Other Mobile Data Applications
Communications Enhancements, V2.0.0, Release 12,” 2011.
[29] Zheng, K., et al., “Radio Resource Allocation in LTE-Advanced Cellular Networks with
M2M Communications,” IEEE Communications Magazine, July 2012.
[30] 3GPP TR 37.868, “Study on RAN Improvements for Machine-Type Communications,
Release 11, Annex B,” October 2011.
[31] 3GPP TS 22.011, “Technical Specification Group Services and System Aspects; Service
Accessibility, Release 11,” 2010.
[32] 3GPP TS 24.368, “Non-Access Stratum (NAS) Configuration Management Object
(MO), Release 11,” September 2012.
[33] Mehmood, Y., et al., “Mobile M2M Communication Architectures, Upcoming Challenges,
Applications, and Future Directions,” EURASIP Journal on Wireless Communications and
Networking, November 2015.
[34] 3GPP TR 37.869, “Study on Enhancements to Machine-Type Communications (MTC)
and Other Mobile Data Applications; Radio Access Network (RAN) Aspects, Release 12,”
2011.
11
Optimization for TCP Operation in 4G and
Other Networks1
Data services have been overtaking, and continue to overtake, voice
applications on wireless mobile networks. This will be particularly the case with
LTE networks where, at least in the earlier stages of its deployment, data ser-
vices of various kinds such as Web services, video applications, and heavy data
downloads will continue to compose the majority of the traffic. Therefore, the
existence of well-tuned protocols for the efficient reliable end-to-end transfer of
data becomes very critical. The Transmission Control Protocol (TCP) is an end-
to-end transport protocol that is commonly used by network applications that
require reliable guaranteed delivery. The most typical network applications that
use TCP are File Transfer Protocol (FTP), Web browsing, and TELNET. TCP
is a sliding window protocol with timeout and retransmits. Outgoing data must
be acknowledged by the far-end TCP. Acknowledgments can be piggybacked
on data. Both receiving ends can flow control the far end, thus preventing a buf-
fer overrun. As is the case with all sliding window protocols, TCP has a window
size. The window size determines the amount of data that can be transmitted
in one round-trip time (RTT) before an acknowledgment is required. The time
between the submission of a packet and the receipt of its ACK is generally
referred to as the RTT. For TCP, this amount is not a number of TCP segments but a number

1. Most of the material in this chapter on TCP performance in other networks appears in Chap-
ter 16, “The TCP Protocols, Issues, and Performance Tuning over Wireless Links,” in UMTS
Network Planning, Optimization, and Inter-Operation with GSM, by Moe Rahnema (John
Wiley and Sons, November 2007).


of bytes. However, the reliable TCPs such as the TCP-Reno type are tuned to
perform well in traditional fixed networks, where transmission errors are
relatively rare and packet losses are assumed to occur mostly because of
congestion. In such cases, the TCP sender will throttle back its sending rate,
since TCP responds to all packet losses by invoking congestion control and
avoidance. This results in degraded network throughput performance
and delays in networks and on links where packet losses mostly occur due to bit
errors, signal fades, and hard handovers (which occur in LTE networks) rather
than congestion as in the mobile communication environment.
A number of different strategies and protocol variations have been pro-
posed to provide efficient reliable transport protocol functionality and meet the
challenges on wireless networks. Some of these schemes rely on specialized link
layer protocols, specified by the 3GPP standards, which enhance the reliability
of the wireless link. These are called link layer solutions, which when combined
with certain options or versions of available TCP implementations help greatly
to address and resolve the wireless problems, as we will discuss in detail. Other
schemes require special measures to be taken such as splitting the TCP connec-
tion in the base station, providing sniffing proxies, or implementing nonstan-
dard modifications to the TCP stack within the end systems (both the fixed
and the mobile hosts). We will also discuss the latter category of solutions and
their merits to provide the reader with a broad knowledge base and insight for
tackling specific problems and issues in this particular field as they arise. This
chapter is based on an extension of the material presented in a previous book
[1] by the author and will include material specific to the optimization for TCP
in LTE-type networks.

11.1 TCP Fundamentals


TCP was adopted in 1983 as the official Internet transport-level protocol for the reliable end-to-end transfer of nonreal-time information, which includes Web browsing, FTP, and e-mail. The protocol was originally developed to perform well over fast and fairly reliable wired networks. The TCP
provides a byte-streamed transmission. The TCP sender divides the data into
segments, which are called packets at the IP layer. The segments are limited to
a maximum segment size (MSS) (excluding the TCP header and options field),
which can be managed by the TCP sender and receivers, and is communicated
in the SYN messages at connection setup. The segments are sequentially numbered and transmitted. The MSS value must fit into the IP maximum transmission unit (MTU) size in order to avoid IP fragmentation within network routers. Therefore, the choice of an appropriate MSS depends on the MTU used at the IP layer. In practice, to find the optimum MSS, the Path MTU Discovery procedure [RFC 1191] allows the sender to determine the maximum routable packet size in the network; the MSS is then set accordingly.
For example, since the congestion window is counted in units of segments, large
MSS values allow TCP congestion window to increase faster.
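As a rough illustration of the MSS/MTU relationship (a sketch assuming minimal 20-byte IPv4 and TCP headers with no options, not a figure from the text):

```python
# Illustrative sketch: deriving a safe MSS from a discovered path MTU,
# assuming minimal 20-byte IPv4 and 20-byte TCP headers (no options).
IP_HEADER = 20   # bytes, IPv4 without options
TCP_HEADER = 20  # bytes, without TCP options

def mss_for_mtu(path_mtu: int) -> int:
    """Largest TCP payload that fits in one IP packet without fragmentation."""
    return path_mtu - IP_HEADER - TCP_HEADER

# The classic Ethernet MTU of 1,500 bytes yields the familiar 1,460-byte MSS.
print(mss_for_mtu(1500))  # -> 1460
```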
TCP operates in acknowledged mode. That is, the segments that are re-
ceived error-free are acknowledged by the receiver. The TCP receiving end uses
cumulative acknowledgment messages in the sense that one single ACK mes-
sage may acknowledge the receipt of more than one segment, indicated by the
sequence number of the last in-sequence, error-free, received packet. However,
the receiver does not have to wait for its receiver buffer to fill before sending
an ACK. In fact, many TCP implementations send an ACK for every two segments that are received. This is because when a segment is received, the TCP receiver starts a delayed ACK timer, waiting for an outgoing data segment with which the ACK can be piggybacked. If another segment arrives before the timer fires (TCP has a timer that ticks every 200 ms, starting from the time the host TCP came up), the receiver immediately sends an ACK covering both segments. If the tick occurs before another segment is received, or the receiver has no data of its own to send, the ACK is sent on its own. Packets that are not acknowledged in due time are retransmitted until they
are properly received.
The TCP is a connection-oriented protocol meaning that the two applica-
tions using TCP must establish a TCP connection with each other before they
can exchange data. It is a full-duplex protocol in that each TCP connection
supports a pair of byte streams, one flowing in each direction. The acknowledgments of TCP segments for one direction are piggybacked onto transmissions in the reverse direction, or sent alone when there are no reverse data transmissions. The TCP includes a flow-control mechanism for each of these byte streams that allows the receiver to limit how much data the sender can transmit. The TCP
also implements a congestion-control mechanism to avoid congestion within
the network nodes.

11.1.1 TCP Connection Setup and Termination


The TCP sender and receiver must establish a connection before exchanging
data segments as already mentioned. This involves a three-way handshaking process with the following steps:

• Step 1: The client end system sends TCP SYN control segments to the
server and specifies the initial sequence number.
• Step 2: The server end system receives the SYN and replies with a SYN-ACK control segment. In the meantime, it allocates buffers and specifies its own initial sequence number.
• Step 3: The client also allocates buffers and variables to the connection and sends an ACK segment back to the server, which signals that the connection is established.

In the connection termination phase, the steps are:

• Step 1: The client end system sends TCP FIN control segment to server.
• Step 2: The server receives FIN and replies with ACK. If it has no data
to transmit, it sends a FIN segment back with the ACK. Otherwise, it
keeps sending data that is then acknowledged by the receiving end (the
client end). Then, after it finishes, it sends the FIN segment.
• Step 3: The client receives FIN and replies with ACK. It enters a timeout
“wait” before it closes the connection from its end.
• Step 4: The server receives the ACK and closes the connection.
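The handshakes described above are carried out by the operating system's TCP stack; a minimal loopback sketch (the echo behavior and port choice are illustrative) shows where they occur:

```python
# Minimal loopback sketch: the kernel performs the SYN/SYN-ACK/ACK handshake
# inside connect()/accept(), and the FIN/ACK exchange inside close().
import socket
import threading

def server(sock):
    conn, _ = sock.accept()          # handshake completes here (server side)
    data = conn.recv(1024)
    conn.sendall(data.upper())       # echo back, uppercased
    conn.close()                     # sends FIN once the data is delivered

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
listener.listen(1)
threading.Thread(target=server, args=(listener,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())  # SYN -> SYN-ACK -> ACK
client.sendall(b"tcp")
reply = client.recv(1024)
client.close()                       # client-side FIN; socket enters TIME-WAIT
listener.close()
print(reply)                         # -> b'TCP'
```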

11.1.2 Congestion and Flow Control


Congestion and flow control are performed by the use of a variable size conges-
tion window (CWND) and a receiver advertised window, respectively [2]. The
receiver advertised window is dependent on the amount of buffer space that the
receiver can or decides to allocate for the connection. The maximum number
of data segments that the TCP sender can at any time inject into the network
with outstanding acknowledgments is determined by the minimum of the receiver advertised window and the sender's congestion window at the
time. The congestion control window size is dynamically changed by the TCP
sender to limit the transmission rate into the network based on indirect detec-
tion of congestion within the network. The TCP sender uses two mechanisms
for the indirect detection of congestion within the network: (1) the timeouts
for ACKs from the receiving end based on a retransmission timeout (RTO)
timer, and (2) the receipt of duplicate ACKs (DUACKs) from the receiving
end. The receiver transmits duplicate ACKs for every segment that is received
out of order. The DUACKs are not delayed and are transmitted immediately.
The sequence number within a duplicate ACK refers to the sequence number
of the last in-sequence, error-free received packet. The RTO is based on a jitter-
compensated, smoothed version of the RTT measurement samples. An RTT
sample is the time from when the sender transmits a segment to the time when
it receives the ACK for the segment.
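The interplay of the two windows can be sketched as a one-line rule (an illustrative simplification, counting in bytes):

```python
# Sketch: the amount of unacknowledged data a TCP sender may have in flight
# is bounded by min(congestion window, receiver advertised window).
def sendable(cwnd: int, rwnd: int, in_flight: int) -> int:
    """Bytes the sender may still inject given the current window state."""
    return max(0, min(cwnd, rwnd) - in_flight)

print(sendable(cwnd=16_000, rwnd=64_000, in_flight=12_000))  # -> 4000
print(sendable(cwnd=16_000, rwnd=8_000, in_flight=8_000))    # -> 0 (receiver-limited)
```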

The TCP congestion control starts with probing for usable bandwidth
when the data transmission starts following connection setup. This is called the
slow start congestion control phase, in which the congestion window (CWND) starts from a small initial value and grows rapidly at an exponential rate. Then, when congestion is detected, the CWND is restarted from a lower value and is incremented only linearly. This phase is called congestion avoidance. The boundary between the initial slow start and the congestion avoidance phases is controlled by two important variables: the CWND and a slow start threshold, SS-Thresh. The TCP sender thus uses the slow start and congestion avoidance procedures to control the amount of outstanding data that is being injected into the network. The two phases of slow start and congestion avoidance are discussed in the following two sections.

11.1.3 Slow Start Congestion Control Phase


The slow start phase is also known as a probing phase in which the TCP sender
tries to probe for the available network bandwidth at initial start-up or after a
loss recovery phase. The TCP sender starts by transmitting one segment and
waiting for its ACK. When that ACK is received, the congestion window is
incremented from one to two, and two segments can be sent. When each of
those two segments is acknowledged, the congestion window is increased to
four. Then up to the minimum of the updated congestion window and the receiver advertised window can be sent. This provides an exponential growth in
the transmission rate, although it is not exactly exponential because the receiver
may delay its ACKs, typically sending one ACK for every two segments that it
receives.
This continues until either the CWND reaches the initial value set in
the slow start threshold parameter, SS-Thresh, or when congestion is observed.
TCP interprets packet losses as a sign of congestion. The SS-Thresh is initially
set to an arbitrarily high value (equivalent to, say, 65,535 bytes) as it will adjust
itself in response to congestion as will be discussed in the next section. Con-
gestion causes packet losses and occurs when the data packets arrive through a
large pipe and are queued for transmission over a smaller pipe or when multiple
input streams arrive at a router whose output capacity is less than the sum of
the inputs.
Generally, all TCP versions use either retransmission timeouts or the receipt of DUACKs, indicating the receipt of an out-of-sequence packet, as a sign of packet loss or delays due to congestion. The exact way in which timeout events or DUACKs are used for this purpose depends on the particular TCP version implemented, as will be discussed later in the chapter. However,
once the TCP sender determines that packets are getting lost, it starts to per-
form congestion avoidance measures.

11.1.4 Congestion Avoidance Phase


When congestion occurs, TCP must slow down its transmission rate of packets
into the network. The TCP sender slows down its transmission rate by enter-
ing either the congestion avoidance phase or restarting the slow start phase. If
congestion was detected by the timeout event, it restarts the slow start phase to
begin probing the network bandwidth again. The CWND is reset to one segment, but this time the SS-Thresh is set to one-half the minimum of the CWND and the receiver advertised window. This continues until either the CWND
reaches the newly updated SS-Thresh, or another timeout event occurs. The
important variables that define the boundary between the slow start phase and
the congestion avoidance phases are the SS-Thresh and the CWND.
The congestion avoidance phase is entered if either the CWND exceeds
the SS-Thresh or a certain number of duplicate ACKs, depending on the TCP
version implemented, are received. In the congestion avoidance phase, the TCP sender increases the CWND by roughly one segment per round-trip time (typically implemented as an increment of MSS × MSS/CWND per ACK received), resulting in a linear growth of the transmission window. Note that the ACKs in TCP can be cumulative in the sense that one ACK from the receiver may acknowledge the proper receipt of more than one data segment. In the congestion avoidance phase, a cumulative ACK contributes only a single increment step regardless of how many data segments are acknowledged by that ACK. This is in contrast to the slow start phase, in which a cumulative ACK causes the CWND to increment by the number of the data segments acknowledged by the single ACK message.
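The two phases can be illustrated with a toy simulation (counting the CWND in whole segments and updating once per RTT; this is a simplification for illustration, not the exact standardized state machine):

```python
# Toy simulation: CWND in segments, doubling each RTT during slow start and
# growing by one segment per RTT in congestion avoidance. A timeout halves
# SS-Thresh and restarts slow start from a CWND of one segment.
def evolve(cwnd, ssthresh, rtts, loss_at=None):
    """Return the CWND trace over a number of RTTs."""
    trace = []
    for rtt in range(rtts):
        if loss_at is not None and rtt == loss_at:  # timeout detected
            ssthresh = max(2, cwnd // 2)            # halve the threshold
            cwnd = 1                                # restart slow start
        trace.append(cwnd)
        # Exponential growth below SS-Thresh, linear growth above it.
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    return trace

print(evolve(cwnd=1, ssthresh=16, rtts=8))
# exponential until the threshold, then linear: [1, 2, 4, 8, 16, 17, 18, 19]
```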

11.1.4.1 TCP Congestion Algorithm Bottlenecks in Wireless Networks


Since the TCP was originally designed and developed for the fixed networks
with very small error rates (packet losses much below 1%), all packets that are
not acknowledged by the receiving end in due time are interpreted by the send-
ing end to be due to congestion and dropouts within the network as opposed
to packet corruptions due to bit errors. Therefore, the original TCP runs into
a number of performance problems when it is implemented on networks that
involve wireless links. The wireless networks incur considerably higher packet
corruptions (which result in packet losses) due to burst errors caused by channel
fades and packet losses, which occur in the mobility-handling processes such
as cell reselections and hard handovers. For that reason, a number of different
measures and mechanisms have been developed for adopting the use of TCP in
wireless networks, which we will discuss in detail later.

11.1.5 TCP RTO Estimation


The accurate estimation of the RTO parameter is very important as it would
help to avoid spurious retransmissions and the activation of congestion avoidance mechanisms. The RTO estimation is based on processing and filtering the network RTT samples obtained by the TCP sender. The RTT is the time
elapsed from the instant a segment is transmitted to the reception of an ACK
for that segment. The TCP sender measures the RTT samples by the time
stamping technique. The sender places a time stamp within the data segment. The receiver, upon receiving the segment, copies the time stamp into a time stamp echo field within the ACK message. The sender then compares the echoed time stamp against its own clock to calculate the RTT; no clock synchronization between the two ends is required.
The RTT can change from time to time even for a given connection due
to the changing traffic and condition in the network. Thus, the RTO estima-
tion should be adaptive. Various methods have been developed and evolved for
the optimum estimation of the RTO [3, 4]. However, all methods are based on
some filtering and jitter-compensated version of the sample RTTs measured by
the TCP sender. However, the RTO estimation procedures are now standard-
ized [5], with the current procedure in use being due to a method proposed by
Jacobson and Karels in [6]. This procedure uses the sample RTT measurements
obtained by the TCP sender to calculate an exponentially smoothed average of
the sample RTTs (SRTT) and adjust the final estimation by a deviation correc-
tion to account for rapid jitters in the RTT. The algorithm is given here:

SRTT = (1 − g) × SRTT + g × sample RTT (11.1)

The smoothed mean deviation is:

RTTVAR = (1 − h) × RTTVAR + h × |sample RTT − SRTT| (11.2)

RTO = SRTT + 4 × RTTVAR (11.3)

The recommended values for the coefficients g and h are:

g = 1/8 = 0.125, h = 1/4 = 0.25

with the initial RTO value set to 3 seconds.


The timeout values estimated from (11.3) are converted into multiples of
a timer granularity, dependent on the implementation. In typical implementa-
tions, the timer granularity is from 200 to 500 ms.
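A minimal sketch of this estimator (in seconds, with illustrative sample values; the initialization follows the standardized convention of seeding RTTVAR with half the first sample, and the minimum/maximum RTO bounds are omitted):

```python
# Sketch of the smoothed RTO estimator (Jacobson/Karels style, as in
# (11.1)-(11.3)): exponentially smoothed RTT plus four times the smoothed
# mean deviation. Gains g and h are the recommended values from the text.
G, H = 1 / 8, 1 / 4

class RtoEstimator:
    def __init__(self, first_sample):
        self.srtt = first_sample           # seed SRTT with the first sample
        self.rttvar = first_sample / 2     # conventional initial deviation

    def update(self, sample):
        # Update the deviation before SRTT, so it uses the previous SRTT.
        self.rttvar = (1 - H) * self.rttvar + H * abs(sample - self.srtt)
        self.srtt = (1 - G) * self.srtt + G * sample
        return self.rto()

    def rto(self):
        return self.srtt + 4 * self.rttvar

est = RtoEstimator(0.100)            # illustrative 100-ms first sample
for s in (0.110, 0.095, 0.300):      # a delay spike in the last sample
    est.update(s)
print(round(est.rto(), 3))           # -> 0.42 (the spike inflates the RTO)
```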
When a segment is retransmitted upon the timer expiration (upon RTO expiration), the new RTO is initialized to double its value at the time. This
results in an exponential timer back-off and is bounded by a maximum value of
at least 60 seconds (RFC 2988).
The RTT samples are not made on segments that were retransmitted (and
thus for which it is ambiguous whether the reply is for the first instance of the
packet or a later instance). The problem with this solution is that if the RTT
suddenly increases, all packets will be retransmitted before the ACKs arrive, as
a result of using a lower outdated RTO.
The algorithm suggested to solve the retransmission caused ambiguity
problem in the RTT estimation is known as Karn’s algorithm [4]. When an
ACK arrives for a packet that has been retransmitted at least once, any RTT
measurement based on this packet is ignored. Additionally, the backed-off RTO
for this packet is kept for the next packet. Only when it (or some other future
packet) is ACKed without retransmission is the RTO recalculated from the
SRTT. The convergence of the SRTT to new RTT values depends on the expo-
nential back-off mechanism and the SRTT filtering used, but it is shown that it
typically converges quite fast [4].
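The exponential back-off and Karn's rule can be sketched as follows (the 3-second initial RTO and 60-second cap are taken from the text; the sample values are illustrative):

```python
# Sketch of the two rules described above: the RTO doubles on each timeout
# (capped), and RTT samples from retransmitted segments are discarded
# rather than fed to the estimator (Karn's algorithm).
MAX_RTO = 60.0

def on_timeout(rto):
    """Double the RTO on a retransmission timeout (exponential back-off)."""
    return min(rto * 2, MAX_RTO)

def usable_samples(acks):
    """Keep only RTT samples whose segment was never retransmitted (Karn)."""
    return [rtt for rtt, retransmitted in acks if not retransmitted]

rto = 3.0                  # initial RTO from the text
for _ in range(5):         # five successive timeouts
    rto = on_timeout(rto)
print(rto)                 # -> 60.0, capped after 3 -> 6 -> 12 -> 24 -> 48 -> 60

# (sample RTT, was the segment retransmitted?) - illustrative values only
acks = [(0.10, False), (0.90, True), (0.12, False)]
print(usable_samples(acks))  # the ambiguous 0.90 sample is ignored
```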

11.1.6 Bandwidth-Delay Product and Throughput


The bandwidth-delay product (BDP), obtained as the product of the network RTT and the network transmission bit rate, is a measure of the network's capacity to hold data in transit. This means that, without any additional buffering within
the network nodes, the link between the transmitter and receiver will not be
able to hold more than the BDP, regardless of the CWND and the advertised
window. Therefore, the BDP sets a limit to the TCP’s maximum window size
(for both the congestion and the receiver windows). When the window size is
set to the BDP, there will be a constant flow of data from the sender to the receiver, and the link is utilized effectively. The maximum throughput is achieved when the receiver window size is set to the BDP, as the maximum throughput is bounded by:

Throughput ≤ Receiver window/RTT

where RTT is the round-trip-time of the path between the transmitter and
receiver. To optimize the TCP throughput, the receive window should be set to
≥ [path bandwidth * RTT], which is the BDP.
In networks with very high BDP, also known as long fat networks (LFN)
such as the geostationary satellite links, care must be taken in the calculation of
the RTO. This is because in such networks, the window size is very big and the
standard RTT measurements do not measure the RTT of every segment, just a
window at a time [7].
However, in networks with a very small BDP, the effect of the slow start phase is negligible, because the BDP is already covered by the initial value of the congestion window, which is set to one segment.
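A quick sketch of the BDP arithmetic (the link figures are hypothetical examples, not measurements from the text):

```python
# Sketch: bandwidth-delay product and the receive window needed to fill the pipe.
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Data the path itself can hold in flight (the BDP), in bytes."""
    return bandwidth_bps * rtt_s / 8

# Hypothetical figures: a 50-Mbps LTE bearer with a 20-ms RTT, versus a
# 10-Mbps geostationary satellite link with a ~600-ms RTT (an LFN).
print(bdp_bytes(50e6, 0.020))   # -> 125000.0 bytes (~122-KB window needed)
print(bdp_bytes(10e6, 0.600))   # -> 750000.0 bytes; exceeds the classic
                                #    64-KB window unless window scaling is used
```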

11.2 TCP Enhanced Loss Recovery Options


TCP implements a few options that can be used in the speedy recovery from
lost packets. These are the fast retransmit, fast recovery, and selective acknowl-
edgments, all of which were developed for improving performance on fixed net-
works. However, some of these, as we will see later, happen to be useful features
for handling some of the packet loss characteristics of wireless links.

11.2.1 Fast Retransmit


In the fast retransmit mechanism, three duplicate acknowledgments carrying
the same sequence number trigger a retransmission without waiting for the as-
sociated timeout event to occur. The window adjustment strategy for this early
timeout is the same as for the regular timeout and slow start is applied.
When an out-of-sequence data segment is received, the TCP receiver sends
an immediate DUACK acknowledging the last received error-free in-sequence
packet. The purpose of this immediate DUACK is to let the other end know
that a segment was received out of order and to tell it what sequence number
is expected next. If three or more duplicate ACKs are received in a row for the
same segment sequence number, it is taken as a strong indication that a segment
has been lost. Then TCP performs a retransmission of the missing segment,
without waiting for the retransmission timer (RTO) to time out. If only one or
two duplicate ACKs are received in a row, it is interpreted as an indication that segments may simply have been reordered in transit, and the sender waits before retransmitting the expected segment.
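The triple-DUACK trigger can be sketched as a simple counter over a stream of incoming ACK numbers (an illustrative model that ignores the accompanying window adjustments):

```python
# Toy sender-side sketch of the fast retransmit trigger: three duplicate
# ACKs for the same sequence number cause an immediate retransmission.
DUPACK_THRESHOLD = 3

def fast_retransmit(ack_stream):
    """Return sequence numbers retransmitted early, given a stream of ACK numbers."""
    retransmitted, dupes, last_ack = [], 0, None
    for ack in ack_stream:
        if ack == last_ack:
            dupes += 1
            if dupes == DUPACK_THRESHOLD:
                retransmitted.append(ack)   # resend the segment starting at `ack`
        else:
            last_ack, dupes = ack, 0        # new cumulative ACK resets the count
    return retransmitted

# ACK 100 repeated: the first is the original, the next three are duplicates.
print(fast_retransmit([100, 100, 100, 100, 200]))  # -> [100]
print(fast_retransmit([100, 100, 100, 200]))       # -> [] (only two DUPACKs)
```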

11.2.2 Fast Recovery


The fast recovery pertains to starting the congestion avoidance phase as op-
posed to re-starting the slow start phase following a fast retransmit. The reason
for not performing slow start in this case is that the receipt of three duplicate ACKs is taken to mean that although a segment is missing, data is still flowing.
The receiver can only generate the duplicate ACKs when another segment is
received, implying that there is still data flowing between the two ends, and no
need to reduce the flow abruptly by restarting the slow start phase. Then when
a new data segment is ACKed, the congestion window is halved and transmis-
sion continues in the congestion avoidance phase. The fast retransmit and fast
recovery options are usually implemented together [8], in which case it helps to
achieve better utilization of the connection available bandwidth.

11.2.3 Selective Acknowledgment


The TCP selective acknowledgments (SACK) is a TCP option that allows the
receiving end to specify precisely which data segments are received and which
are not in the presence of multiple noncontiguous data segment losses. It uses blocks of sequence number ranges in the TCP options field to indicate the noncontiguous data that has been received. TCP SACK is an Internet Engineering
Task Force (IETF) proposed standard [RFC 2018, October 1996], which is
implemented for most major operating systems. SACK allows the TCP sender
to recover from multiple packet losses faster and more efficiently. This option
enables quicker recovery from multiple packet losses, particularly when the load
is not light, and the end-to-end delays are large such as on satellite links.
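The benefit can be sketched by computing the retransmission gaps from a cumulative ACK plus SACK blocks (the sequence ranges are illustrative; a real SACK option carries at most four such blocks):

```python
# Sketch: with SACK, the sender can pinpoint the gaps rather than resending
# everything past the cumulative ACK. Byte ranges are half-open [start, end).
def missing_ranges(cum_ack, sack_blocks, sent_up_to):
    """Return [start, end) byte ranges not yet acknowledged."""
    gaps, cursor = [], cum_ack
    for start, end in sorted(sack_blocks):
        if start > cursor:
            gaps.append((cursor, start))   # a hole before this SACKed block
        cursor = max(cursor, end)
    if cursor < sent_up_to:
        gaps.append((cursor, sent_up_to))  # tail not yet acknowledged
    return gaps

# Cumulative ACK at 1000; the receiver also holds 2000-3000 and 4000-5000.
print(missing_ranges(1000, [(2000, 3000), (4000, 5000)], 5000))
# -> [(1000, 2000), (3000, 4000)]: only the two holes need retransmission
```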

11.2.4 Timestamp Option


Traditionally, the TCP implementations time only one segment per window re-
sulting in one RTT measurement at a time (per RTT). This yields an adequate
approximation to the RTT for small windows (e.g., a 4- to 8-segment window),
but it results in an unacceptably poor RTT estimate for long delay links, LFNs
(e.g., 100-segment windows or more).
When using the timestamp option, each data segment is time-stamped, and the time stamps are used to estimate the RTO more accurately. RFC 1323 suggests that TCP
connections utilizing large congestion windows should take many RTT samples
per window of data to avoid aliasing effects in the estimated RTT.

11.3 TCP Variations as Used on Fixed Networks


There are a variety of versions of TCP implementation with respect to con-
gestion control mechanisms, their triggering, and recovery from packet losses.
Although these different versions were all developed and evolved for improving
performance on fixed wired networks, some have features that make them more
suitable for implementation on wireless networks. Some can be complemented
with other measures to make them more suitable for wireless networks. These
are discussed in the following sections first before we focus the subject on the
peculiarities and the requirements of the wireless networks, particularly UMTS.
A comparative discussion of the different versions of TCP is provided in [9].

11.3.1 TCP Tahoe


TCP Tahoe is the oldest and most famous version of TCP. In the literature, it is
sometimes called just TCP. The TCP Tahoe congestion algorithm includes slow
start and congestion avoidance. TCP Tahoe relies on retransmission timeout (RTO) events to determine that a packet has been lost, in which case it then performs the congestion avoidance measures. This is its main drawback [10]: when a segment is lost, it takes a long time for the sender side to detect the loss and take the appropriate measures.

11.3.2 TCP Reno


TCP Reno includes the slow start and congestion avoidance algorithms and, in addition, implements fast retransmit. In the fast retransmit mechanism, three duplicate acknowledgments carrying the same sequence number trigger a retransmission without waiting for the associated timeout event to occur. TCP
Reno uses the same window adjustment strategy for this early timeout as was
discussed earlier in Section 16.1.3 in [1] for the regular timeout and slow start
phases.

11.3.3 TCP New Reno


TCP New Reno introduces fast recovery in conjunction with fast retransmit as
discussed in Section 16.2.2 in [1].

11.3.4 TCP SACK


TCP SACK implements the selective acknowledgment option discussed in Section 11.2.3.

11.3.5 TCP Vegas


In the TCP Vegas, the congestion avoidance mechanism is based more on a con-
sideration of packet delay, rather than packet loss. It uses packet delays to help
determine the rate at which to send packets. It was developed at the University
of Arizona by Lawrence Brakmo and Larry L. Peterson. TCP Vegas detects con-
gestion based on increasing RTT values of the packets in the connection unlike
other flavors such as Reno and New Reno, which detect congestion only after
it has actually happened via packet loss. The algorithm depends heavily on ac-
curate calculation of the RTT value and was developed to meet the challenges
for the higher-speed Internet.

11.4 Characteristics of Wireless Networks, Particularly 3G


The inherently different characteristics of wireless networks cause TCP parameters and mechanisms to have different performance impacts on them. TCP retransmissions, for instance, are triggered either by a timeout or by triple-duplicate ACKs, which in a wired environment are caused mainly by congestion-induced packet loss. Adding a wireless, mobile portion to this setting, with its significant error rates, will inevitably trigger the TCP congestion control and retransmission back-off mechanisms in response to error-caused packet losses rather than congestion-caused ones.
Those characteristics of the wireless networks which are considered highly
relevant to impacting the TCP performance are discussed and highlighted in
this section. These are then used in subsequent sections as the framework to
discuss the complementary measures, as well as the modifications to the TCP
that have been proposed for implementing the protocol on wireless networks.

11.4.1 Packet Losses, Delays, and Impacting Parameters


High block error rates compared to what TCP was designed for occur on the
radio interface side of wireless networks due to channel fades and mobility such
as hard handovers and cell reselections. The radio interface uses powerful error
correction codes and interleaving to combat the burst errors and reduce the
error rates, but uncorrectable errors still occur, which can result in significant
frame errors over the link. The uncorrectable frame errors are detected by error detection codes, which results on average in a block error rate (BLER). Then if the frame losses due to BLER are not locally corrected, they are interpreted by the end-to-end TCP as a sign of congestion in the network and trigger congestion control procedures that will reduce the CWND and reset retransmission
timers. This will slow down the packet flow rate into the network and result
in high delay variations and underutilization of the allocated bandwidth. The
reduction in congestion window is a necessity when the network is experiencing
congestion to avoid a collapse from congestion, but it is not required if packet
losses occur due to corruption of data on the wireless link. This unnecessary
reduction in congestion window for corruption losses is the main reason that
can result in poor TCP performance on wireless links.
The link layer retransmission protocols known as HARQ and radio link
control (RLC) are implemented to handle the corrupted frames through lo-
cal retransmissions over the radio channel. The link-level retransmissions hide
some of the data losses from the TCP, but can result in increased transmission delays with significant delay variances. The latter occurs as a result of the varying degree of retransmissions required due to the dynamically changing radio
channel. The retransmissions further add to the delays involved in the complex
error coding and interleaving performed to combat errors. In [11], measurements show latencies of the order of 179 ms to over 1 second for code-division
multiple-access (CDMA)-based 3G1X networks. The RTT in LTE and 3G networks is much shorter than in the older GPRS networks, on the order of 100 ms to 1 second in 3G networks based on UMTS and HSPA [10, 12] and around 20 ms or less in LTE. Nevertheless, a pathological case of a significant load increase in a particular cell, due to a failure, a malfunction, or simply a convergence of mobile users at a cell site, can result in significant round-trip delays as well as packet losses that impact TCP performance, resulting in further throughput degradation. TCP seg-
ments can also experience significant delay increases in the case of LTE lossless
handovers [13]. In the lossless handover, the frames in the PDCP retransmis-
sion queue are forwarded to the target eNodeB, and hence no significant losses
are incurred at the TCP layer. However, this results in increased delays of up
to a few hundred milliseconds. The number of retransmissions depends on the
size of the PDCP retransmission queue and this will impact the TCP perfor-
mance [14]. Lossless handovers function with RLC AM (Radio Link Control
Acknowledged Mode).
Packets can also be lost in the process of hard handovers and cell reselec-
tions such as in LTE and the older GPRS networks. In cell reselections, packets
buffered at the base station for a given cell may be completely flushed out before
they are transmitted over the radio link once a cell reselection or hard handover
to a cell under a different base station takes place as can occur in the LTE seam-
less handover cases. In seamless handover, the source eNodeB can potentially
drop a large number of segments. This can result in data loss at the TCP layer
and cause unnecessary TCP back-offs, especially with a large retransmission
queue. This further adds to packet loss rates and the end-to-end transmission
delays, which are perceived by the TCP as being due to congestion. In the hard
handover or cell reselection processes, the connection is paused for a certain
time, causing several retransmissions of the same segment to be lost during the pause. Due to the exponential back-off mechanism of TCP, the time inter-
val between retransmissions is doubled each time, until it reaches a significant
limit, which can drastically degrade the performance when the mobile resumes
its connection.

11.4.2 Delay Spikes


A delay spike is a sudden increase in the latency of the link. Delay spikes in
both LTE and 3G networks can occur due to a number of mechanisms, which include momentary link outages due to loss of radio coverage such as caused
by blockage or passage through a tunnel, hard intersystem and intercarrier han-
dovers, blockage by higher priority traffic in the packet scheduling processes
such as on the HS-DSCH, and channel switching in the radio resource control
(RRC) state transitions for providing bandwidth on demand to the users in real
time, which may take up to 500 ms. Delay spikes also result from cell reselec-
tions in UMTS and GPRS, and lossless handovers in LTE, which can result in
degraded performance even when no packets are lost (no flush-outs). Moreover,
delay spikes can also be caused by the reordering and the RLC ARQ retransmissions within the LTE link layer protocols. Voice call preemptions in GPRS are another source of delay spikes: voice calls can take time slots away from an existing data call, thus reducing the bandwidth available to the data call. The
sudden sharp increase in RTT delays caused by delay spikes is not captured by
the RTO estimation algorithms. That causes the TCP sender to time out and
perform spurious (unnecessary) retransmissions and multiplicative reductions
of the transmission window, resulting in reduced utilization of the allocated
bandwidth [15, 16].

11.4.3 Dynamic Variable Bit Rate


The data bit rates experienced by users in LTE as well as in 3G networks are
highly dynamic and not steady as they would be more or less in fixed networks.
The user mobility and changing channel conditions in the resource assignment
process can result in a different bit rate experienced by the user and so would
the changing traffic and interference within the cell due to the dynamics of
other users impacting the bit rates experienced by a given user over the data
session. The real-time changing data rates can result in large delay variations
and delay spikes. The delay spikes then cause spurious timeouts in TCP where
the TCP sender incorrectly assumes that a packet is lost while the packet is only
delayed. This can then unnecessarily force the TCP into slow start, adversely
impacting the protocol performance.

11.4.4 Asymmetry
Asymmetry with respect to TCP performance occurs if the achievable through-
put is not solely a function of the link and traffic characteristics of the forward
direction, but also significantly dependent on those of the reverse direction as
well [17]. The asymmetry can affect the performance of TCP because TCP
relies on feedback through ACK messages. Asymmetry in network bandwidth
can delay and add variability to the ACK feedback returning to the sender
on the reverse path. A very simplified example is given to show the effect of
asymmetry on TCP performance. Assume we are downloading a file using a link
with 384 kbps in the DL and 8 kbps in the UL. Further, assume that 1,500-byte
segments are used in the DL while the only traffic in the UL is 40-byte ACKs.
The time required for transmission of one data packet is 1,500*8/384 ms, that
is, 31.25 ms while the time required for one ACK is 40*8/8 = 40 ms. Since TCP
Optimization for TCP Operation in 4G and Other Networks 235

is ACK-clocked, that means at the steady state it cannot send more than it can
receive ACKs. Therefore, the effective data rate of the link is reduced from 384
kbps to 300 kbps (384*31.25/40) [15].
Generally, a normalized bandwidth ratio, defined as the ratio of the trans-
mission time required for ACK packets to that of data packets [17], can be used
to quantify the asymmetry. When this ratio is greater than one, the
asymmetry will reduce the effective bandwidth available to TCP. A compromise
can be made by using delayed ACKs or another cumulative ACK scheme, in order
to stop the reverse link from being the bottleneck, but this will further slow
down TCP's slow-start phase.
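The numeric example above generalizes to a short calculation of the ACK-clocked effective rate; the function and its parameters are illustrative:

```python
# Effective TCP rate on an asymmetric link: the sender is ACK-clocked,
# so the data rate is capped by the rate at which ACKs return.
def effective_rate_kbps(dl_kbps, ul_kbps, data_bytes, ack_bytes, acks_per_segment=1):
    t_data = data_bytes * 8 / dl_kbps                    # ms per data segment
    t_ack = ack_bytes * 8 / ul_kbps / acks_per_segment   # ms of ACK time per segment
    if t_ack <= t_data:
        return dl_kbps                   # reverse link is not the bottleneck
    return dl_kbps * t_data / t_ack      # scaled down by the ACK clock

rate = effective_rate_kbps(384, 8, 1500, 40)
print(rate)  # 300.0 kbps, matching the 384 * 31.25/40 figure in the text

# Delayed ACKs (one ACK per two segments) relieve the reverse bottleneck
# at the cost of a slower slow-start, as noted above:
print(effective_rate_kbps(384, 8, 1500, 40, acks_per_segment=2))  # 384 kbps
```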

11.5 TCP Performance Optimization for Wireless Networks


Several proposals have been made by researchers to improve the performance of
TCP over wireless networks, some of which are already in use on networks [18].
Some of these solutions included features that originated from the attempt to
further improve the TCP performance on wired networks, as was stated in ear-
lier sections. Later, it turned out that some provided useful features for imple-
mentation on the wireless networks. We will discuss here those solutions and
their optimization that are specifically targeted for wireless networks. We will
also discuss some of their reported performances in GPRS, UMTS, and LTE.
These solutions are either based on optimizing parameters and implementing
features in lower-layer protocols that impact TCP performance (link layer so-
lutions), making intelligent use of features provided in various TCP versions
and optimizing the relevant parameters, inclusion of intelligent performance
enhancement elements such as proxies inside the network nodes (proxies), split
connection solutions, and making modifications to TCP end systems (TCP
end-to-end solutions). We will discuss each of these categories, present exam-
ples, and report performance results.

11.5.1 Link Layer Solutions


The link layer solutions help to hide the lossy characteristics of the radio link
between the mobile and the network from the TCP. They achieve this through
basically local retransmission of packets that experience uncorrectable errors
over the radio links. An advantage of optimizing link layer protocols and their
parameters is that they do not require any changes in the end host and hence
are open to operator decisions. There are a variety of link layer solutions that
have been proposed in the literature for use on wireless links based on improv-
ing the radio link control protocol [19]. The solutions implemented by GPRS/
EDGE and UMTS are based on specifications of radio link control (RLC)
protocols that attempt to make the radio link more reliable for TCP data and
signaling transmissions. RLC uses the automatic retransmission request (ARQ)
mechanisms to request the retransmission of corrupted packets due to uncor-
rectable errors. In LTE and HSPA systems, the RLC is further complemented
by another protocol sublayer beneath it, referred to as HARQ, at the MAC
sublayer, as was detailed in Chapter 3. We will focus on discussing the RLC and
the HARQ as used mainly in LTE-type systems and the related parameters and
issues and how their configuration can impact the performance achieved. Then we
will discuss how certain critical TCP parameters and options can be tuned and
selected to provide the best performance in conjunction with the functionality
provided by the link layer solutions.
LTE, UMTS, and GPRS/EDGE all implement local retransmission of
packets that are corrupted over the radio channel. This helps to mitigate or to
reduce the TCP end-to-end retransmissions. Reducing the TCP retransmissions
triggered by packet losses helps to prevent TCP congestion control and retrans-
mission timer back-off mechanisms, which result in degraded throughput and
increased delays. The local retransmissions of corrupted packets are performed
by the RLC protocol in acknowledged mode [20] in UMTS and GPRS net-
works, whereas LTE as well as HSPA (3G+) improves on the link layer by in-
cluding the HARQ protocol sublayer. The HARQ uses either chase combining
or incremental redundancy, depending on which one is selected, to combine
already received erroneous packets with rapid local retransmissions and thereby
reduce the need for RLC-level retransmissions. The HARQ provides the added
benefit of increased throughput when the BLER is around 10^-3 or more, with
the additional throughput increase obtained at the expense of system delay, per
simulation results presented in [21]. However, the packets left with uncorrectable
errors after the maximum number of HARQ retransmissions is reached are
detected through error-detection codes such as cyclic redundancy check (CRC)
bits and are retransmitted over the link by the RLC protocol sublayer between
the mobile and the base station when the acknowledged mode (AM) of the
protocol is used. With RLC retransmissions, it is possible to reduce the effective
residual bit error rates down to around 10^-6, thus minimizing the impact of
losses on TCP.
In fact with the AM bearer, the RLC ARQ reduces the probability of a packet
loss in the wireless last hop between eNB and UE to nearly 0%. Measurements
obtained on a real LTE testbed in [22] show that the AM bearer almost always
achieves better throughput than the UM bearer due to lower packet loss prob-
ability when the RLC PDU loss probability is varied between 0.001% and 1%.
The results obtained show that with the loss probability on the order of 10^-5,
the utilization of the AM bearer is almost 90%, and that of the UM bearer is
nearly 87%. The missing 10% in utilization is due to the time period of the
TCP slow start phase in the TCP server and the time taken to determine the
optimal TCP receive window size by auto-tuning in the TCP client. Moreover,
the results presented in [22] show that compared with that of the AM bearer,
the utilization of the UM bearer is 75% with a loss probability on the order of
10^-4, 25% with a loss probability on the order of 10^-3, and at worst, only 12%
with a loss probability on the order of 10^-2. Since the RLC PDU loss probabil-
ity after the HARQ in LTE is on the order of 10^-3, the UM bearer is thus not
appropriate for TCP applications.
When the packet loss rates over the link stay below 10%, TCP goodput
suffers about 10% (TCP Reno or Vegas). The degradation in the TCP goodput
becomes extremely high when the packet losses exceed 10% [23]. The com-
bined HARQ and RLC AM sublayers in LTE and HSPA networks result also in
overall reduced delays as the erroneous packets are retransmitted locally over the
UE and the base station (ENodeB) radio link, which, in turn, results in further
improvement in the TCP performance.
Since the link layer solutions are normally TCP unaware, it is important
to configure the parameters to provide as much of the reliability required for
TCP operation as possible and to prevent inefficient interactions between the two protocols.
The optimal performance is achieved when the bandwidth provided by the
RLC, or the HARQ-RLC combination in the case of LTE and HSPA networks,
which can vary over time due to the local retransmissions, is fully utilized by
TCP. Nonoptimal utilization results when the TCP sender leaves the link idle
or retransmits the same data that the RLC is already retransmitting. The link layer
tuning for TCP performance optimization requires consideration of a number
of parameters and measures, which are discussed in the following sections.

11.5.1.1 Impact of Link Layer Parameters


The main link layer parameters and features that impact the operation of the
TCP may include the interleaving depth in the case of UMTS, the RLC mode
configuration and retransmit timer reflecting the persistence degree, the buffer
dimensioning, and the PDU reordering parameters.

Interleaving Depth Factor (UMTS)


In the case of UMTS, different configurations can be chosen for a given re-
quired throughput so that an RLC PDU (packet data unit) can fit in 1 up
to 8 frames of 10 ms. Likewise, a given bit rate can be obtained through one or
several channelization codes with multicode transmissions. Depending on what
configuration can be chosen within the vendor’s implementation, various inter-
leaving depths can be achieved. The interleaving depth has a proportional im-
pact on the data transfer delays. Larger interleaving depths result in larger delays,
but in more randomization of burst errors caused by radio channel fades. More
randomization of burst errors helps more of the errors to get corrected by the
error correcting codes. However, for the same average packet error rates (errors
left at the decoder output), TCP favors bursty over uniform errors, since
a smaller interleaving depth in the case of the more bursty residual errors results
in reduced end-to-end data transmission delays, as confirmed by simulations
presented in [10]. The reduced end-to-end delays can help to reduce the
number of TCP packet retransmission timeouts, which would trigger unnecessary
congestion control and retransmission timer back-offs. Therefore, choosing the
optimum interleaving configuration for a given throughput and the achievable
error rates is one area of experimentation within the vendor's implementation
flexibility. The trade-offs are the reduced delays achieved with shorter interleav-
ing depths, and the achievable packet error rates in each scenario.

Timer Parameters
These are parameters within the link layer protocols that basically control the
retransmission timing of protocol data units not yet received or received in er-
ror. In the 3G UMTS networks, where only the RLC is implemented, this is
referred to as the retransmission timer, which controls the ARQ retransmission
timeout for retransmitting of lost RLC blocks in acknowledged mode. This
timer determines the number of radio frames waited before a dropped block is
retransmitted and impacts the retransmission delay. The retransmission delays
at the RLC level, in turn, are reflected in the end-to-end transfer delays seen by
the TCP, which can result in the triggering of congestion avoidance measures
and loss of throughput. Therefore, it is important to pay attention to the re-
transmission timeout values that are configured for the RLC and make sure that
it is configured to properly reflect the expected delay on the radio link under
normal conditions. The RLC ARQ RTT is influenced by many factors such
as t_Reordering, status prohibit timer (t_StatusProhibit), and UL scheduling
(e.g., scheduling request, buffer status reporting). This timer is used by the
receiving side of an AM RLC entity in order to detect the loss of RLC PDUs
at the MAC layer. If the RLC receiver detects a gap in the sequence of received
PDUs based on the RLC sequence number, it starts a reordering timer and
assumes that the missing packet still is being retransmitted by the HARQ pro-
tocol sublayer as is the case with LTE. In the rare case that the reordering timer
expires, usually in a HARQ failure case, an RLC acknowledged-mode (AM)
receiver sends a status message comprising the sequence number of the missing
RLC PDU(s) to its transmitting peer entity and requests a retransmission that
way. The ARQ function of RLC AM sender then performs retransmissions to
fill up the gap and reassemble and deliver the RLC SDUs to the upper layer in
sequence. The value of this reordering timer has a significant impact on the RTT
seen by the end-to-end TCP and can result in spurious timeouts, which trigger un-
necessary throttling of the sender and loss of throughput. Measurements carried
out on LTE testbeds and presented in [22] show that a setting of 0 ms for the
reordering timer almost always achieves better throughput than the larger t_Re-
ordering settings for every PDU loss probability case over the radio link. For
example, the results show that the 0 ms t_Reordering case results in 4%, 9%,
and 32% gains compared with a setting of 30 ms for the PDU loss probability
cases on the order of 10^-4, 10^-3, and 10^-2, respectively. These results basically
point out that decreasing the RLC ARQ RTT helps to increase the throughput
of TCP applications.
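The t_Reordering behavior described above can be sketched roughly as follows; the class and method names are illustrative, not taken from any actual RLC implementation, and the sketch simplifies by tracking one timer per detected gap:

```python
# Sketch of the RLC AM receiver's use of t_Reordering: on a sequence-number
# gap, the receiver waits for HARQ to fill it before requesting an ARQ
# retransmission via a STATUS report.
class RlcAmReceiver:
    def __init__(self, t_reordering_ms):
        self.t_reordering_ms = t_reordering_ms
        self.next_sn = 0
        self.status_reports = []   # (deadline, missing SNs) pending ARQ requests

    def on_pdu(self, sn, now_ms):
        if sn > self.next_sn:
            # Gap detected: assume HARQ is still retransmitting the missing
            # PDUs; send a STATUS report only after t_Reordering expires.
            deadline = now_ms + self.t_reordering_ms
            missing = list(range(self.next_sn, sn))
            self.status_reports.append((deadline, missing))
        self.next_sn = max(self.next_sn, sn + 1)

    def expired_reports(self, now_ms):
        # STATUS reports whose reordering timer has expired (HARQ failure):
        # these SNs are requested from the AM sender for retransmission.
        return [m for d, m in self.status_reports if now_ms >= d]

rx = RlcAmReceiver(t_reordering_ms=30)
rx.on_pdu(0, now_ms=0)
rx.on_pdu(2, now_ms=1)          # SN 1 missing -> reordering timer started
print(rx.expired_reports(10))   # [] : still waiting on HARQ
print(rx.expired_reports(40))   # [[1]] : request ARQ retransmission of SN 1
```

With t_Reordering set to 0 ms, the STATUS report goes out immediately, which shortens the RLC ARQ RTT and corresponds to the best-throughput setting reported in [22].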

Link Layer Retransmissions


In theory with an infinite number of retransmissions configured, the link layer
protocols will provide an error-free packet service to the upper layer. In LTE,
the combination of the MAC HARQ and the RLC ARQ can be seen as one
retransmission mechanism with two feedback mechanisms due to the fact that
both are implemented within the same node (i.e., eNodeB). Normally, the
number of retransmissions for the HARQ may be set to achieve a residual error rate
on the order of 10^-3, accounting for erroneous feedback signaling (for example, a NACK
incorrectly interpreted as an ACK by the transmitter) or persisting decoding
failures, which then can help to reduce the maximum number of RLC ARQ
retransmissions, which involve more overhead and latency.
In practice, the maximum number of link layer retransmissions is con-
figurable up to 40 in the case of the RLC in, for instance, 3G networks,
with a value of 5 usually used. The more transmissions that are permitted for a
single packet, the more reliable the channel will look to the TCP. The retrans-
missions end up consuming part of the link bandwidth. Therefore, TCP will see
the wireless link as a reliable link with a smaller bandwidth and a higher delay
(the delays caused by the RLC retransmissions). The effective bandwidth seen
by the TCP will reduce with increased packet error rates and hence the link level
retransmissions. However, the RLC local retransmission of corrupted packets
results in huge gains in the TCP goodput, defined as the ratio of TCP data seg-
ments (excluding TCP headers, which are not useful data) correctly received
to the bandwidth of the wireless link. This has been verified by the simulation
results shown in Figure 11.1, which are reproduced from [10]. The results are
shown for different values of the retransmission parameter, MAXDAT in the
RLC. The case with MAXDAT = 0 corresponds to the pure TCP end-to-end
retransmission. The goodput excludes packet losses and retransmissions, where-
as the throughput refers to the sender data rate.
The huge gains achieved in throughput with RLC retransmissions
are due to the fact that retransmitting a packet at the TCP level has much
more harmful long-term consequences for the goodput than the retransmission
itself. Missing packets detected at the TCP level trigger the congestion
control mechanisms, which reduce the throughput, whereas a retransmission
at the RLC level results in just an additional delay. The RLC in the AM
performs in-sequence delivery of the packets to the transport (TCP) layer,
hence avoiding spurious duplicate acknowledgments that could trigger the
fast recovery procedures.

Figure 11.1 The positive influence of RLC retransmissions on TCP goodput. (Reproduced
from [10].)

The RLC retransmissions are particularly beneficial for TCP-based applications when the
delays over the wireless link are small compared to the delays incurred within
the fixed core network. Therefore, the number of maximum allowed link level
retransmissions should be set to the highest possible value. Also for effective re-
transmissions at the link layer, it is important to have the granularity of the link
layer timers much finer compared to the TCP timers. Otherwise, contention
between the two timers can reduce the throughput.
However, this can potentially become problematic if the wireless delays
are predominant. If the wireless delays are predominant, a high value for al-
lowed number of retransmissions can result in high RTT and RTT variations
as seen by the TCP. The high RTT variations are caused by the changing radio
channel conditions, which result in a varying number of retransmissions until a
packet is successfully transmitted over the radio link. The high variations in
RTT can result in unstable RTO estimates and trigger TCP retransmission
timeouts, which result in competing TCP recovery. That is, TCP starts the
congestion control mechanisms and retransmits packets that are already
under retransmission by the RLC. However, for low residual error rates on the radio
channel, the RTT jitter due to retransmissions will be small, and hence there
will be less likelihood of high competing interactions between TCP and RLC.
In fact, the simulation results presented in [10] confirm a high degree of ro-
bustness of the TCP timeout mechanisms and the nonexistence of competing
retransmissions even on wireless links with high delays. This is particularly the
case when the RTO estimation algorithm considers several consecutive RTT
samples as well as their variances.
The problem of competing error recovery between TCP and other reliable
link layer protocols resulting from spurious timeouts at the TCP sender is also
investigated in [24]. The same conclusion is reached as in [25], where
measurements performed during bulk data transfers showed very few timeouts due to link
layer delays. The measurements performed in [25] indicate that most TCP timeouts
are due to the radio link protocol resets resulting from unsuccessful transmis-
sion of data packets under weak signal strengths; TCP was seen to provide a
very conservative retransmission timer allowing for extra delays due to link layer
error recovery beyond 1,200 ms. This is the upper range of typical round trip
time measurements observed in some of the existing UMTS networks. Mea-
surements performed on a major provider's UMTS network [26] over different
times of the day have indicated that only 10% of the measured RTTs exceeded
1 second. Therefore, the number of RLC retransmissions can be set to a high
value to make the link more reliable, without causing excessive link delays that
would be seen by TCP. In the case of LTE as was discussed in Chapter 3, the
more effective combined HARQ and RLC within the eNodeB helps to reduce
the retransmission delays and the error rates further and allow proper setting
between the number of retransmissions at the MAC HARQ level and the more
costly RLC ARQ retransmissions to provide a more reliable radio link for im-
proved TCP performance.
Setting the effective number of link layer retransmissions to an adequately
large number will also help to prevent link resets. Link resets can have a major
impact on TCP performance when TCP header compression is implemented to
reduce the per segment overhead. Header compression uses differential encod-
ing schemes which rely on the assumption that the encoded “deltas” are not lost
or reordered on the link between the compressor and the decompressor. Lost
deltas lead to generation of TCP segments with false headers at the decompres-
sor. Thus, once a delta is lost, an entire window worth of data is lost and has to
be retransmitted. This effect is worsened as the TCP receiver will not provide
feedback for erroneous TCP segments and forces the sender into timeout. The
RLC link resets results in a failure of the TCP/IP header decompression, which,
in turn, causes the loss of an entire window of TCP data segments. Therefore,
making the link layer retransmissions persistent has multiple benefits when
transmitting TCP flows. Finding a reasonable value for the link layer retrans-
missions is a challenge of TCP optimization over wireless links.
RLC Buffer Influence


The RLC transmission buffer stores all the packets arriving from the upper
layer that have not yet been sent by the RLC. In theory, the TCP connection
bandwidth is fully utilized when the transmission window is set to the BDP.
The BDP is the minimum amount of buffer that is needed for optimum perfor-
mance when there is a constant flow of data over the link. However, additional
buffer space is required at the link layer, which is the slower link in the network, to
absorb the delays that result from the retransmissions as well as the delay varia-
tions caused by contention and queuing within the core network nodes. The
RLC buffers help to provide enough packets in store for continuous transmis-
sion and utilization of the link in times when packet arrivals are delayed from
the core network. In this way, the link is not left idle and full utilization can
be achieved. It is important to properly size the RLC buffers and manage the
queuing to prevent overflow and packet dropouts. Packet dropouts occur when
packets arrive and the buffer is full. An overflow of the RLC buffer can result
in multiple packet drops in a row with disastrous impact on the TCP goodput
[10].
The RLC transmission buffers can quickly fill up when the receiver adver-
tised window is considerably larger than the connection BDP. The TCP sends
data up to the limit of the minimum between the receiver advertised window
(Rxwnd) and the congestion control window, CWND per RTT, but the con-
gestion window is periodically increased until the TCP receiver’s advertised
window is reached. The latter usually corresponds to the default socket buffer
size (commonly 8 or 16 KB). When the latter overtakes the BDP, TCP injects
more packets into the network per RTT than can be handled by the wireless
link, since TCP is not aware of the BDP limitation of the connection.
The overbuffered RLC links result in TCP sending too many packets be-
yond what can be handled by the effective bandwidth of the wireless link. The
excess queuing results in inflated end-to-end delays, which can cause a number
of problems including unnecessarily long timeouts in cases of unsuccessful link
layer transmission of a data packet. The advance accumulation of large chunks
of data within the RLC buffers also has certain other side effects, for instance,
when a user aborts the download of a Web page in favor of another. The stale
data must then first drain from the queue, which in case of a narrow band-
width link can take a significant time. Solutions around this problem are either
to adjust the allocated RLC buffer consistent with the connection’s bit rate or
implement advanced queue management with explicit congestion notification
(ECN) within the base station [27, 28]. The objective is to adapt the buffer size
allocated to the connection bit rate and the worst-case RTT.
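The stated objective can be turned into a rough sizing rule; the margin parameter and the example figures are assumptions for illustration:

```python
# Rough RLC buffer sizing: enough to cover the bandwidth-delay product at
# the worst-case RTT, so the link never idles while waiting on the core
# network, but not so much more that stale data queues up (overbuffering).
def rlc_buffer_bytes(bit_rate_bps, worst_case_rtt_s, margin=1.0):
    bdp = bit_rate_bps * worst_case_rtt_s / 8   # bytes in flight at full rate
    return int(bdp * margin)

# e.g., a 384-kbps bearer with a 1.2-s worst-case RTT (the upper range of
# measured UMTS RTTs cited earlier in this chapter):
print(rlc_buffer_bytes(384_000, 1.2))  # 57600 bytes
```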
11.5.2 TCP Parameter Tuning


There are certain critical parameters in TCP, which can be experimented with
and tuned to provide the best performance when used on wireless links. These
measures result in more effective utilization of the benefits achievable by the
link layer solution based on RLC and provide complementary benefits. They
are therefore discussed here as a continuation of the link layer solution section.
The TCP parameters that can be tuned are the receiver advertised window, and
the maximum TCP data segment size, MSS.

11.5.2.1 TCP Rxwnd Tuning


The maximum number of data segments that TCP may have outstanding (un-
acknowledged) in the network at any time is ruled by the minimum between
the receiver advertised window, Rxwnd, and the congestion control window at
the time. TCP sends data up to the limit of the minimum between the receiv-
er window (Rxwnd), and the congestion control window, CWND per RTT.
Therefore, it is important to configure a suitable value for the Rxwnd to guar-
antee that the connection bandwidth is fully utilized. The connection band-
width is fully utilized when the transmission window is at least of the size of
the BDP. Therefore, a lower limit for the Rxwnd is set by the following relation

Rxwnd > BDP (11.4)

On the upper side, any packets transmitted over the BDP limit in
one RTT are buffered within the network nodes and particularly at the RLC
buffer. Since the wireless link is the bottleneck link in the network, buffering
within the fixed part of the network is assumed to be negligible. This implies
that all packets transmitted over the connection capacity (BDP) in one RTT are
buffered within the RLC buffer. Therefore, to prevent overflow of the network
buffering capacity, we should ensure that the receiver window is upper limited
by the following relation

Rxwnd < BDP + B (11.5)

where B is the size of the RLC buffer.


Combining (11.4) and (11.5) into one relation, the following guideline
results for sizing the receiver advertised window

BDP + B > Rxwnd > BDP (11.6)

in which the value of BDP can be estimated based on typical measured RTTs
in the network.
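Guideline (11.6) can be expressed as a simple check; the link rate, RTT, and buffer figures below are illustrative:

```python
# Check a configured receiver window against guideline (11.6):
# BDP < Rxwnd < BDP + B, where B is the RLC buffer size.
def rxwnd_ok(rxwnd_bytes, link_rate_bps, rtt_s, rlc_buffer_bytes):
    bdp = link_rate_bps * rtt_s / 8            # bandwidth-delay product, bytes
    return bdp < rxwnd_bytes < bdp + rlc_buffer_bytes

# A 384-kbps link with a 600-ms typical RTT gives a BDP of 28,800 bytes.
bdp_example = 384_000 * 0.6 / 8
print(bdp_example)                              # 28800.0
print(rxwnd_ok(16_384, 384_000, 0.6, 50_000))   # False: a default 16-KB socket
                                                # buffer underutilizes the link
print(rxwnd_ok(32_768, 384_000, 0.6, 50_000))   # True: within the (11.6) range
```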
The above is based on a static setting of the TCP receiver window. How-
ever, in the radio environment the effective BDP and packet queuing incurred
within the network can change over the course of a connection due to changing
radio channel conditions, mobility, and changing delays caused by rate varia-
tions and channel scheduling in 3G. Then for small RLC buffer sizes, setting
the receiver window statically to the buffer size results in significant underutilization
of the link, since the full buffer of packets is not able to absorb the
variations over the wireless link. The Next Generation TCP/IP stack supports
Receive Window Auto-Tuning. Receive Window Auto-Tuning continually de-
termines the optimal receive window size by measuring the BDP and the ap-
plication retrieve rate and adjusts the maximum receive window size based on
changing network conditions.
A dynamic window regulation approach is presented in [29], for improv-
ing the end-to-end TCP performance over CDMA 2000-1X based 3G wireless
links. Since TCP inserts data into the network up to the minimum between the
TCP transmission window (i.e., congestion control window) and the receiver
window, it is possible to regulate the flow to the maximum link utilization while
preventing excess network buffering by dynamically adjusting the TCP window
in real time. However, TCP adjusts its transmission window in response to the
ACK messages received from the receiver, but basically regardless of the BDP;
TCP is not aware of the connection bandwidth-delay product. The dynamic
window regulator algorithm proposed in [29] uses a strategy based on regulat-
ing the release of the ACK messages, in response to packet buffering status for
instance in the base station, to adaptively adjust the TCP transmit window to
the optimum range given by (11.6). The algorithm provides the advantage of
dynamically adjusting the window size to the connection’s varying BDP. The
algorithm is shown to result in highest TCP goodput and performance for even
small amounts of buffer per TCP flow (at the RLC link) and minimize packet
losses from buffer overflows.
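A highly simplified sketch of the ACK-release idea (not the actual algorithm of [29]) might look like this; the threshold logic is an assumption for illustration:

```python
# Simplified sketch of dynamic window regulation by ACK pacing: the base
# station holds back ACKs whenever the per-flow RLC queue exceeds a target,
# so the TCP sender's window tracks the varying BDP instead of overshooting.
def release_ack(queued_bytes, target_queue_bytes):
    # Release the next ACK only while queuing stays within the target;
    # holding ACKs slows the ACK clock and hence throttles the sender.
    return queued_bytes <= target_queue_bytes

print(release_ack(queued_bytes=8_000, target_queue_bytes=20_000))   # True
print(release_ack(queued_bytes=35_000, target_queue_bytes=20_000))  # False
```

Because the decision is driven by the live queue occupancy, the sender's effective window adapts to the connection's varying BDP without any change to the TCP end hosts.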

11.5.2.2 TCP Maximum Segment Sizing


The maximum data segment size (MSS) is also a parameter that impacts the
TCP performance. The optimum data segment size used by a TCP connection
is a trade-off between avoiding segment fragmentation within the IP routers
and rapidly achieving the maximum link throughput as set by the maximum
transmission window size. Fragmentation at the IP level is not recommended
because it increases processing time. Furthermore, the loss of a single fragment
implies the retransmission of the entire original datagram. However, using too
small an MSS can hurt the transmission of small files, which involve short TCP flows.
This is because the full link capacity is not used during an important part of
the transmission time due to the TCP slow-start phase. The effect is more pro-
nounced when a high BDP is involved. For example, an MSS of 1,460 bytes can
increase the application throughput by more than 30% compared to an MSS of
536 bytes in EGPRS [30]. When the maximum transmission unit is not known
within the fixed network routers, the MTU discovery feature provided within
the TCP protocol stack can be used to obtain the optimum value for the MSS.

11.5.2.3 Initial Transmission Window Setting


When a TCP connection is started, there is an initial window that by default
has the size of two segments. It is possible to increase the size to speed up the
slow start phase of a TCP connection, particularly when most connections in-
volve short TCP flows. As a guide, the initial CWND may be set to:

Initial CWND = min(4 × MSS, max(2 × MSS, 4,380 bytes)) (11.7)

as recommended in RFC 2414.
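The (11.7) rule is straightforward to compute for a few common MSS values:

```python
# Initial congestion window per RFC 2414: min(4*MSS, max(2*MSS, 4380 bytes)).
def initial_cwnd_bytes(mss):
    return min(4 * mss, max(2 * mss, 4380))

print(initial_cwnd_bytes(536))    # 2144 -> 4 segments of 536 bytes
print(initial_cwnd_bytes(1460))   # 4380 -> 3 segments of 1460 bytes
print(initial_cwnd_bytes(2190))   # 4380 -> 2 segments of 2190 bytes
```

The effect of the rule is that small-MSS connections start with up to four segments in flight, shortening the slow-start phase for short flows, while large-MSS connections remain capped near 4,380 bytes.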

11.5.3 Proxy Solutions


The Proxy solutions insert a proxy between sender and receiver TCP hosts to
help TCP’s performance, leaving the end-to-end semantics of the protocol un-
changed. In the Proxy solution, the idea is to avoid the transmission of stand-
alone TCP ACK packets over the radio link between eNodeB and UE together
with the associated overhead added at physical and link layers including sched-
uling request/response exchange at the MAC layer [31]. This functionality
requires no changes to the TCP, but will require new software entities called
the ARQ proxy and ARQ client. The ARQ proxy is a software module imple-
mented in the eNodeB protocol stack, which is not an implementation of the
TCP layer in a conventional sense. It sniffs the packets destined to various UEs
and must have access to the network and transport layer headers. This is pos-
sible in LTE networks as, in contrast to previous cellular systems, the network
layer packets become explicitly visible at the eNodeB due to the mapping used to
encapsulate one network layer packet into one variable-sized RLC PDU. Upon
detection of a TCP data packet, the ARQ proxy generates a TCP ACK confirm-
ing the reception of TCP data packet, assuming that it is received in sequence.
For TCP ACK generation, ARQ proxy simply copies the required fields (IP
addresses, port numbers, and flow sequence numbers) into the corresponding
fields of a TCP ACK packet template allocated in the memory. The TCP ACK
will be associated with a hash index into a hash table, and is obtained through
application of a hash function onto raw packet headers’ data. This results in
almost 21 times fewer channel resources than the original TCP scheme and
is a consequence of the size difference between TCP ACK and its hash value.
The TCP ACK comprises TCP (20 bytes), IP (20 bytes), PDCP (1 byte), and
RLC (2 bytes) headers, plus a PHY CRC (2 bytes), bringing it to 45 bytes in
246 From LTE to LTE-Advanced Pro and 5G

total, whereas the hash value can come down to as little as 16 bits long. The
generated TCP ACK in the base station Proxy is not immediately released into
the core network, but stored locally in a hash table at the eNodeB. Meanwhile,
the original TCP data packet continues its path towards the UE where a similar
hash algorithm is executed to associate the TCP packet with an identical hash
index. The client proxy is implemented as a stand-alone module on top of
the MAC sublayer and can be introduced at the driver level or inside network
interface firmware. In this way, the TCP ACK from the UE is replaced with a
simple hash index, which not only saves bandwidth over the radio link, but also
reduces the error probability due to its much shorter length. Using the hash
index as a short MAC layer request, the client proxy within the UE can request
eNodeB to release a given TCP ACK into the network core in a way that is
consistent with the employed acknowledgment strategy (delayed or selective
acknowledgment). The UE transmits the computed hash index to the eNo-
deB using an interface between the ARQ client and the ARQ proxy defined at
the MAC layer. This reduces the RTT for the time associated with TCP ACK
transmission over the radio link, including uplink bandwidth reservation delay,
which depends on the UE’s state (being active or idle), and the framing used.
This reduction is typically in the order of tens of milliseconds [32]. However, in
case the MAC layer does not succeed in passing the hash value and the TCP ACK
arrives at the physical layer, it is transmitted as in the legacy implementation
(i.e., with no ARQ proxy enabled). It is seen that the proxy solution does not
violate the end-to-end TCP semantics, since it is the UE that triggers the TCP
ACK transmissions from the eNodeB.
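The mapping from a full TCP ACK to a short index can be sketched as follows; the specific hash function, field set, and 16-bit width are illustrative assumptions here, not the exact construction used in [31]:

```python
import hashlib
import struct

def ack_hash_index(src_ip: str, dst_ip: str, sport: int, dport: int,
                   ack_no: int) -> int:
    """Derive a 16-bit index from the header fields identifying a TCP ACK
    (IP addresses, port numbers, acknowledgment number). The ARQ proxy and
    ARQ client need only agree on the same function; SHA-1 is an assumption."""
    key = src_ip.encode() + dst_ip.encode() + struct.pack("!HHI", sport,
                                                          dport, ack_no)
    return int.from_bytes(hashlib.sha1(key).digest()[:2], "big")

# 16 bits over the air instead of a 45-byte stand-alone ACK packet.
idx = ack_hash_index("10.0.0.1", "192.168.1.7", 80, 49152, 1_000_001)
```

Since both sides compute the index independently from the same header fields, only the short index ever needs to cross the radio link.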
Each TCP ACK located in the hash table in the eNodeB has a lifetime that
is assigned at the moment of TCP ACK generation. In case the lifetime, which
can be set to the TCP timeout, is exceeded, the packet is silently dropped from
the hash table. This mechanism ensures that the eNodeB memory resources are
freed for TCP ACKs not requested by UE. This happens when the TCP data
packet arrives out-of-order or when the UE implements delayed-ACK scheme.
Other scenarios are handover cases, in which the hash table stored by the ARQ
proxy in the old eNodeB is not transferred to the new eNodeB and is deleted.
In this way, after a location update, the UE will send the hash values for only
those TCP ACKs that correspond to packets received from the new eNodeB.
Performance evaluation results presented in [31] demonstrate that the
ARQ proxy approach is able to provide uplink capacity increase, sustain high
error rate, and bring TCP performance increase due to reduced RTT. Further-
more, they can be incrementally deployed in already operational network where
UEs and eNodeBs implementing ARQ proxy approach coexist with those which
do not. For example, in case a UE does not include the ARQ client module,
none of the TCP ACKs generated at the eNodeB is requested using their hash
values and will be simply deleted after their lifetime expiration. However, if an
eNodeB does not implement the ARQ proxy solution, all bandwidth allocation
requests sent by UEs will be rejected and the original TCP ACK packets will
transit over the radio channel. This follows from the fact that whichever
arrives first at the physical layer (the TCP ACK or its hash value) is
transmitted over the radio channel.

11.5.4 Selecting the Proper TCP Options


There are a few options and features which have been added to various avail-
able TCP implementations for improving the protocol’s performance in various
scenarios. Some of these features and options offer merits that are well suited
to certain applications and cases in wireless networks. The available TCP op-
tions relevant to improved performance in wireless networks include the TCP
selective acknowledgment (SACK), the TCP timestamp option, and the fast
transmit and recovery mechanisms. We will discuss the benefits these options
provide with some of the reported results on their trials or simulation on wire-
less networks. This discussion should provide a good insight to guide the opti-
mization engineers in the proper selection of the option sets for each applica-
tion scenario based on availability.

11.5.4.1 TCP SACK Option for Wireless


TCP selective acknowledgment (SACK) is a TCP enhancement that allows re-
ceivers to specify precisely which segments have been received even in the pres-
ence of packet loss. TCP SACK is an Internet Engineering Task Force (IETF)
proposed standard which is implemented for most major operating systems.
SACK was added as an option to TCP by RFC 1072 and enables the receiver
to give the sender more information about received packets, allowing the sender
to recover from multiple packet losses faster and more efficiently. In contrast,
TCP Reno and New Reno can retransmit only one lost packet per RTT because
they use cumulative acknowledgments. The cumulative ACKs do not provide
the sender with sufficient information to recover quickly from multiple packet
losses within a single transmission window. One study [33] showed that TCP
enhanced with selective acknowledgments performs better than standard TCP
in such situations. The SACK option is implemented by the Santa Cruz TCP
version, and the wireless TCP (WTCP) [34]. The SACK option is particularly
well suited to long fat networks, such as geostationary satellite links with long
round-trip delays. In fact, TCP SACK is highly recommended for GPRS and also for
UMTS and LTE from an energy-saving viewpoint. One of the proposals in the
IETF draft on TCP for 2.5G and 3G systems was [16]: “TCP over 2.5G/3G
SHOULD support SACK. In the absence of SACK feature, the TCP should
use New Reno.” The SACK option is not recommended when the window size
is small (very small BDPs) and the loss rates are also high [25].

The SACK option is expected to significantly reduce the number of timer-driven retransmissions in wireless networks where high end-to-end delay
variations and delay spikes are observed. When the SACK option was tried
on a very busy GPRS network, the percentage of timer-driven retransmissions
dropped from 11.24% to only 1.73% [30]. This translates to a significant im-
provement in throughput. Another empirical study involving large-scale passive
analysis for TCP traces over different GPRS networks showed that the use of
TCP-SACK leads to much higher per-TCP connection utilization for all-size
flow types [35]. The selective acknowledgment option lets the TCP sender
cancel the retransmission timers of the segments that it knows (via the SACK
messages) the receiving end has correctly received, and hence prevents spurious
retransmissions. This permits the sender to retransmit only the missing packets. The
limited transmission achieved by SACK extends fast retransmit and recovery
mechanisms to connections with small congestion windows, and is therefore
best suited to short-lived flows, as well.
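A toy byte-range model illustrates the difference: from one SACK-bearing ACK the sender can identify every hole in the window at once, whereas a cumulative ACK exposes only the first. This is a sketch, not the full scoreboard of RFC 6675:

```python
def holes_from_sack(cum_ack, sack_blocks, window_end):
    """Return the missing byte ranges a sender can retransmit immediately,
    given the cumulative ACK point and the SACKed (left, right) ranges."""
    holes, start = [], cum_ack
    for left, right in sorted(sack_blocks):
        if left > start:                      # gap before this SACKed block
            holes.append((start, left))
        start = max(start, right)
    if start < window_end:                    # tail of the window still missing
        holes.append((start, window_end))
    return holes

# Bytes below 1000 ACKed; SACK reports 2000-3000 and 4000-5000 as received.
print(holes_from_sack(1000, [(2000, 3000), (4000, 5000)], 6000))
# -> [(1000, 2000), (3000, 4000), (5000, 6000)]: three holes in one RTT
```

With only a cumulative ACK of 1000, the sender would learn of the second and third holes one RTT at a time.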

An Extension to SACK
An extension to the SACK option, defined in RFC 2883, allows the receiver
to indicate up to four noncontiguous blocks of received data. RFC 2883
defines an additional use of the fields in the
SACK TCP option to acknowledge duplicate packets. This allows the sender
of the TCP segment containing the SACK option to determine when it has
retransmitted a segment unnecessarily and adjust its behavior to prevent future
retransmissions. The reduced unnecessary retransmissions result in better over-
all throughput.

11.5.4.2 TCP Timestamp Option


Most common TCP implementations time only one segment per window to
minimize the computations. While this is expected to provide a good approxi-
mation to RTT for small windows (e.g., 4 to 8), it may not capture the fast
round-trip delay variations that can happen in wireless networks. Moreover,
it is not possible to accumulate reliable RTT estimates when time-stamped
segments get retransmitted (resulting in ambiguity over whether a receiver ACK
is the response to the original or the retransmitted segment). This creates a re-
transmission ambiguity in the measurement of the RTT for the segment. The
timestamp option helps to resolve this ambiguity problem.
The TCP timestamp option (also referred to as TCP echo option) al-
lows the time-stamping of every data segment, including the retransmitted seg-
ments, to provide more RTT samples for capturing the delay variations and
provide more accurate estimation of the connection’s RTO. A more accurate
RTO should help avoid spurious retransmissions and the activation of
congestion avoidance mechanisms. Nevertheless, the effect of the 12-byte
overhead on the TCP header should be considered in the decision to use the option.
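Timestamp-based sampling simply feeds the standard estimator one unambiguous sample per segment instead of one per window. A sketch of the usual SRTT/RTTVAR smoothing (Jacobson's algorithm, with the constants of RFC 6298):

```python
class RtoEstimator:
    """SRTT/RTTVAR smoothing with RTO = SRTT + 4*RTTVAR (RFC 6298)."""

    def __init__(self):
        self.srtt = None
        self.rttvar = None

    def sample(self, rtt):
        if self.srtt is None:                 # first measurement
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt
        return max(1.0, self.srtt + 4 * self.rttvar)  # RTO with 1-s floor

est = RtoEstimator()
for rtt in (0.20, 0.22, 0.90, 0.25):  # a delay spike in the third sample
    rto = est.sample(rtt)             # RTO inflates after the spike
```

More samples per window mean the variance term tracks wireless delay spikes faster, which is precisely what keeps the RTO above the spiking RTT and suppresses spurious timeouts.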
The timestamp option will also allow the use of the Eifel algorithm [36],
which allows a TCP sender to detect a posteriori whether it has entered loss
recovery unnecessarily. The Eifel algorithm requires that the TCP timestamp
option be enabled for a connection. Eifel makes use of the fact that the TCP
timestamp option eliminates the retransmission ambiguity in TCP. Based on
the timestamp of the first acceptable ACK that arrives during loss recovery, it
decides whether loss recovery was entered unnecessarily. The Eifel detection al-
gorithm provides a basis for future TCP enhancements. This includes response
algorithms to back out of loss recovery by restoring a TCP sender’s congestion
control state.

11.5.4.3 Fast Retransmit and Recovery Option


The fast retransmit and recovery options when used together are designed to
prevent the unnecessary triggering of congestion control via the slow start phase
in the event of transient losses. The two mechanisms, when both are implemented,
as in the TCP Reno version, interpret the receipt of three duplicate ACKs
indicating the same sequence number as a sign of an isolated packet loss
rather than of excessive congestion. The TCP sender
then retransmits the lost packet without waiting for the retransmission time
out (fast retransmit) and then enters the congestion avoidance rather than the
slow-start congestion control phase (fast recovery). This will result in a better
utilization of the available bandwidth of the channel when occasional single
packet losses occur.
The fast retransmit-recovery works well for single lost segments; it does
not perform as well when there are multiple lost segments [9], which can hap-
pen, for instance, in hard handovers, in deep fades or channel switching phases
in 3G. A new enhancement to the fast retransmit-recovery feature is imple-
mented in the TCP NewReno (RFC 2582), which results in faster throughput
recovery by changing the way that the sender increases the packet transmission
rate during fast recovery when multiple segments in a window of data are lost
and the sender receives a partial acknowledgment.
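The trigger logic can be sketched with a toy ACK-processing loop (the state fields and segment units are simplified assumptions):

```python
def on_ack(state, ack_no):
    """Reno-style trigger: three duplicate ACKs => retransmit the missing
    segment (fast retransmit) and halve cwnd, continuing in congestion
    avoidance instead of falling back to slow start (fast recovery)."""
    if ack_no == state["last_ack"]:
        state["dupacks"] += 1
        if state["dupacks"] == 3:
            state["ssthresh"] = max(state["cwnd"] // 2, 2)
            state["cwnd"] = state["ssthresh"]  # no reset to one segment
            return "fast_retransmit"
    else:                                      # new data acknowledged
        state["last_ack"], state["dupacks"] = ack_no, 0
    return None

state = {"last_ack": 1000, "dupacks": 0, "cwnd": 16, "ssthresh": 64}
events = [on_ack(state, a) for a in (1000, 1000, 1000)]
# The third duplicate of ACK 1000 yields "fast_retransmit"; cwnd drops 16 -> 8.
```

Without this trigger, the sender would idle until the retransmission timer fired and then restart from a one-segment window in slow start.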

11.5.5 TCP Implementation Types and Impacts


Among the various TCP types discussed earlier, the TCP Vegas, which uses
basically packet delays as a measure to trigger congestion control and throttle
the sending rate, has been reported in [37] as providing the highest goodput
performance in LTE compared to other variants such as the TCP Tahoe, TCP
Reno, TCP New Reno, and TCP SACK using NS-2 network simulator. The
results presented there also indicate use of less buffer space with TCP Vegas
compared to, for instance, the more commonly used TCP Reno. TCP Vegas has
been implemented in the Linux kernel and FreeBSD. However, the conventional
TCP implementations include TCP Tahoe, TCP SACK, TCP Reno, and New
Reno. The options provided by these implementations are shown in Table 11.1
for reference purposes.

11.5.6 Split TCP Solutions


Split TCP solutions isolate mobility and wireless-related problems from the fixed
network side. They do this by splitting the TCP connection in the base station
(eNodeB in the case of LTE and RNC in the case of UMTS) into two con-
nections: a wired connection between the fixed host and the base station and a
wireless connection between the base station and the mobile terminal. In this
way, the wireless connection uses a TCP modified to better suit the wireless
link. The split connection mechanisms terminate the TCP connection at
the base station, and thereby generally violate the semantics of the TCP in the
strict sense. Examples of split TCP solutions developed for wireless networks are
discussed in the following sections.

11.5.6.1 Indirect TCP (I-TCP)


Indirect TCP (I-TCP), developed in 1994 to 1995, is one of the earliest wireless
TCP proposals [38]. I-TCP utilizes the resources of mobility support routers,
normally located within the access nodes, to provide transport layer communication
between mobile hosts and hosts on the fixed network. Packets from the sender
are buffered at the mobile support router until transmitted across the wireless
connection. A handoff mechanism is proposed to handle the situation when the
wireless host moves across different cells. A consequence of using I-TCP is that
the TCP ACKs are not end-to-end, thereby violating the end-to-end semantics.

Table 11.1
Major TCP Implementations and Features

Feature                            TCP Tahoe   TCP Reno   New Reno   TCP SACK
Slow-start                         Yes         Yes        Yes        Yes
Congestion avoidance               Yes         Yes        Yes        Yes
Fast retransmit                    Yes         Yes        Yes        Yes
Fast recovery                      No          Yes        Yes        Yes
Enhanced fast retransmit-recovery  No          No         Yes        No
SACK                               No          No         No         Yes

An example of this kind of solution for LTE is presented in [39], which inves-
tigates the performance of a split connection TCP proxy deployed in LTE’s
S-GW node. Numerical results show significant performance improvement of
file downloading, Web browsing, and video-streaming applications in the case of
noncongested transport networks.
11.5.6.2 Mobile TCP
Although it uses a split connection, Mobile TCP (M-TCP) preserves the end-
to-end semantics [40] and aims to improve throughput for connections that
exhibit long periods of disconnection. M-TCP adopts a three-level hierarchy.
At the lowest level, mobile hosts communicate with mobile support stations
in each cell, which are, in turn, controlled by a supervisor host. The supervi-
sor host is connected to the wired network and serves as the point where the
connection is split. A TCP client exists at the supervisor host. The TCP client
receives the segment from the TCP sender and passes it to an M-TCP client to
send it to the wireless device. Thus, between the sender and the supervisor host,
standard TCP is used, while M-TCP is used between the supervisor host and
the wireless device. M-TCP is designed to recover quickly from wireless losses
due to disconnections and to eliminate serial timeouts. In the case of disconnec-
tions, the sender is forced into the persist state by receiving persist packets from
M-TCP. While in the persist state, the sender will not suffer from retransmit
timeout, it will not exponentially back off its retransmission timer, and it will
preserve the size of its congestion window. Hence, when the connection re-
sumes following the reception of a notification from M-TCP, the sender is able
to transmit at its previous speed. TCP on the supervisor host does not transmit
ACK packets it receives until the wireless device has acknowledged them. This
preserves the end-to-end semantics and preserves the sender timeout estimate
based on the whole round trip time.

11.5.6.3 Mobile-End Transport Protocol


Mobile-end transport protocol (METP) replaces TCP/IP over the wireless link
with a simpler protocol that uses smaller headers. METP is designed specifically to directly
run over the link layer [41]. This approach shifts the IP datagram reception and
reassembly for the wireless terminal to the base station. The base station also
removes the transport headers from the IP datagram. The base station acts as a
proxy for TCP connections. It also buffers all the datagrams to be sent to the
wireless terminal. These datagrams are sent using METP and a reduced header,
which provides minimal information about source/destination addresses and
ports, and the connection parameters. In addition, it uses retransmissions at
the link layer to provide reliable delivery. When an interbase station handover
occurs, all the state information needs to be transferred to the new base station.

11.5.7 TCP End-to-End Solutions


The end-to-end wireless TCP solutions try to address and resolve the wireless
losses at the transport layers of the sender and receiver. In some cases, these
solutions simply make use of some of the features and options that are already
provided within the available TCP implementations. For example, the TCP se-
lective acknowledgment option, SACK, which is implemented by most operat-
ing systems, allows the sender to recover from multiple-packet losses faster and
more efficiently. SACK and its enhancement in RFC 2883 reduce the number
of retransmission timeouts significantly, which results in improved throughput
and delay. The TCP with the timestamp option is another solution discussed
earlier that helps to provide more accurate estimate of RTT, capture the delay
variations more effectively, and hence prevent spurious timeout-caused retrans-
missions. In addition to SACK and the TCP timestamp option, there are a
few other end-to-end TCP wireless solutions that have been proposed in the
literature but require modifications to the TCP stack in the end systems. For
completeness of the discussion, we will briefly discuss the major ones in the
following sections.

11.5.7.1 Probing TCP


The probing TCP solution [42] is based on exchanging probing messages be-
tween the TCP sender and receiver to monitor and measure the network RTT
when a data segment is delayed or lost. Probing is used as an alternative to
retransmitting and reducing the congestion window size, as is done in
conventional TCPs. The probes are TCP segments with header options and no
payload. This helps to alleviate congestion because the probe segments are small
compared to the data segments. The cycle terminates when the sender can make
two successive RTT measurements with the aid of receiver probes. In cases of
persistent errors, TCP decreases its congestion window and threshold, but for
transient random errors, the sender resumes transmission at the same window
size used before entering the probe cycle.

11.5.7.2 TCP Santa Cruz


TCP Santa Cruz [43] also makes use of the options field in the TCP header.
This variation uses the relative delays of the data segments received in the for-
ward direction to control the congestion window size. Therefore, the loss of
ACKs and congestion in the reverse path do not affect the throughput of the
connection. This is a merit that can work handily on highly asymmetric con-
nections with little bandwidth on the reverse path. The scheme improves the
RTT estimation compared to other algorithms and is able to include the re-
transmissions in the estimations and thus the RTTs in the congestion periods.
TCP Santa Cruz also implements the selective acknowledgment feature.

11.5.7.3 Wireless TCP


Wireless TCP (WTCP99) [24] uses rate-based transmission control as opposed
to the TCP’s self-clocking ACK-based mechanisms. The transmission rate is
determined by the receiver using a ratio of the average interpacket delay at the
receiver to the average interpacket delay at the sender.
The sender transmits its current interpacket delay with each data packet.
The receiver updates the transmission rates at regular intervals based on the
information in the packets, and conveys this information to the sender in the
ACKs. WTCP99 computes an appropriate initial transmission rate for a con-
nection based on a packet-pair approach rather than using slow-start. This is
useful for short lived connections in the wide-area wireless networks where the
round trips are large. WTCP99 achieves reliability by using selective ACKs.
The protocol does not perform retransmissions triggered by timeouts. Instead,
the sender goes into blackout mode when ACKs do not arrive at sender-spec-
ified intervals. In this mode, the sender uses probes to elicit ACKs from the
receiver, similar to the probing TCP, and is designed to cope with long fades. In
WTCP99, the receiver uses the inter-packet delays as the congestion metric to
control the transmission rate. The motivation for this approach is based on the
considerations that the round trip time calculation for a wireless WAN is highly
susceptible to fluctuations due to the rapid delay variations discussed earlier.
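The receiver-side rate decision can be sketched as below; the 1.05 threshold and 5% rate steps are illustrative assumptions, not the actual WTCP99 parameters:

```python
def wtcp_rate_update(current_rate, sender_gaps, receiver_gaps):
    """Rate decision in the spirit of WTCP99: compare the average
    inter-packet spacing observed at the receiver with the spacing used
    by the sender. A ratio near 1 means the path keeps up; a larger
    ratio indicates queue buildup, so the rate is backed off."""
    tx_gap = sum(sender_gaps) / len(sender_gaps)
    rx_gap = sum(receiver_gaps) / len(receiver_gaps)
    if rx_gap / tx_gap > 1.05:      # packets spreading out: congestion
        return current_rate * 0.95
    return current_rate * 1.05      # headroom available: probe upward

# Sender spaced packets 10 ms apart; the receiver saw ~14-ms gaps.
new_rate = wtcp_rate_update(100.0, [10, 10, 10], [14, 13, 15])  # backs off
```

Because the decision uses only one-way spacing between data packets, it is insensitive to the ACK-path delay fluctuations that make RTT-based control unreliable on wireless WANs.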

11.6 Application-Level Optimization


Web applications make use of the Hypertext Transfer Protocol (HTTP),
which uses the services of TCP. Version 1.1 of HTTP, called HTTP 1.1, can be
used to expedite web downloading [44]. A web page usually contains the URL
of a number of objects. Once the main web page is received, the client starts
to request the objects embedded in the web page, and the request for a new
object is not sent until previous object has been received. Normally each object
requires the setup of a different TCP connection, which incurs the overheads
and the delays associated in the TCP connection setups. When the HTTP 1.1 is
used, one persistent TCP connection is kept alive during the whole download.
Moreover, requests for multiple objects are pipelined so that the server will be
able to start transmission of the next object without further delays.
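The effect of pipelining over one persistent connection can be seen by constructing the request stream directly (the host and object names below are illustrative):

```python
def pipelined_requests(host, paths):
    """Byte stream for pipelined HTTP 1.1 GETs: all requests are written
    back-to-back on one persistent connection, so the server can begin
    serving the next object without a new TCP connection setup."""
    return "".join(
        f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: keep-alive\r\n\r\n"
        for path in paths
    ).encode()

stream = pipelined_requests("www.example.com", ["/", "/style.css", "/logo.png"])
# One TCP connection carries three requests, versus three separate
# connection setups (each with its own handshake and slow start) without
# persistent connections.
```

Each avoided connection saves a three-way handshake RTT plus a fresh slow-start phase, which is why the gain is largest on high-latency cellular links.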
However, the use of HTTP Version 1.1 can create certain complexities in
the server load balancer when content is divided among multiple servers and
URL switching is in place. In that case, the load balancer needs to set up
multiple TCP connections between itself and the different servers and map them all to
one TCP connection between itself and the user client. There are several ways
this can be done, but all result in some loss of performance and incurrence of
new delays as discussed in detail in [45]. Therefore, when HTTP 1.1 is used,
it is recommended [45] not to use URL switching unless there is a clear need
for it. Some load balancers do not allow HTTP 1.1 when URL switching is
involved.

References
[1] Rahnema, M., UMTS Network Planning, Optimization and Inter-Operation with GSM,
New York: John Wiley & Sons, 2008.
[2] Allman, M., V. Paxson, and W. Stevens, “TCP Congestion Control,” RFC 2581, April
1999.
[3] Postel, J., “Transmission Control Protocol – DARPA Internet Program Protocol Specifica-
tion,” RFC 793, September 1981.
[4] Karn, P., and C. Partridge, “Improving Round-Trip Time Estimation in Reliable Transport
Protocols,” ACM Transactions on Computer Systems, 1991.
[5] Paxson, V., and M. Allman, “Computing TCP’s Retransmission Timer,” RFC 2988, No-
vember 2000.
[6] Jacobson, V., and M. J. Karels, “Congestion Avoidance and Control,” ACM SIGCOMM,
November 1988.
[7] Stevens, W. R., TCP/IP Illustrated, Volume 1: The Protocols, Reading, MA: Addison-Wesley
Professional Computing Series, 1994.
[8] Stevens, W., “TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery
Algorithms,” RFC 2001, January 1997.
[9] Kumar, A., “Comparative Analysis of Versions of TCP in a Local Network with Lossy
Link,” IEEE/ACM Transactions on Networking, August 1998.
[10] Lefevre, F., and G. Vivier, “Optimizing UMTS Link Layer Parameters for a TCP
Connection,” Proc. IEEE Conf. on Vehicular Technology, 2001.
[11] Chan, M. C., and R. Ramjee, “TCP/IP Performance over 3G Wireless Links with Rate
and Delay Variation,” Proc. ACM MobiCom, 2002, pp. 71–78.
[12] Chakravorty, R., J. Cartwright, and I. Pratt, “Practical Experience with TCP over GPRS,”
Proc. of IEEE Globecom, November 2002.
[13] 3GPP TS 36.133, “Requirements for Support of Radio Resource Management, Release 8,
V8.9.0,” 2010.
[14] Nguyen, B., et al., “Towards Understanding TCP Performance on LTE/EPC Mobile
Networks,” School of Computing, University of Utah, AT&T Labs – Research.
[15] Teyeb, O., and J. Wigard, “Future Adaptive Communications Environment,” Department
of Communication Technology, Aalborg University, June 11, 2003.
[16] Inamura, H., et al., “TCP Over Second (2.5G) and Third (3G) Generation Wireless
Networks,” RFC 3481, February 2003.
[17] Balakrishnan, H., et al., “The Effects of Asymmetry on TCP Performance,” ACM Mobile
Networks and Applications, 1999.
[18] Fahmy, S., et al., TCP over Wireless Links: Mechanisms and Implications, Technical report
CSD-TR-03-004, Purdue University, 2003.
[19] Alcaraz, J. J., F. Cerdan, and J. García-Haro, “Optimizing TCP and RLC Interaction in
the UMTS Radio Access Network,” IEEE Network, March/April 2006.
[20] 3GPP TS 25.322, “RLC Protocol Specifications,” 1999.
[21] Sandrasegaran, K., et al., “Analysis of Hybrid ARQ in 3GPP LTE Systems,” 16th Asia-
Pacific Conference on Communications (APCC), November 2011, pp. 418–423.
[22] Park, H. -S., J. -Y. Lee, and B. -C. Kim, “TCP Performance Issues in LTE Networks,”
International Conference on ICT Convergence (ICTC), September 2011, pp. 493–496.
[23] Pavilanskas, L., “Analysis of TCP Algorithms in the Reliable IEEE 802.11b Link,”
Proceedings 12th International Conference ASMTA, 2005.
[24] Balakrishnan, H., et al., “A Comparison of Mechanisms for Improving TCP Performance
over Wireless Links,” Proc. SIGCOMM’96, August 1996.
[25] Kojo, M., et al., “An Efficient Transport Service for Slow Wireless Telephone Links,” IEEE
JSAC, Vol. 15, No. 7, pp. 1337–1348.
[26] Vacirca, F., F. Ricciato, and R. Pilz, “Large-Scale RTT Measurements from an Operational
UMTS/GPRS Network,” IEEE WICON 2005, Budapest, Hungary, June 2005.
[27] Ludwig, R., and B. Rathonyi, “Multi-Layer Tracing of TCP over a Reliable Wireless Link,”
Proceedings of ACM Sigmetrics, 1999.
[28] Braden, B., et al., “Recommendation on Queue Management and Congestion Avoidance
in the Internet,” RFC 2309, April 1998.
[29] Chan, M. C., and R. Ramjee, “Improving TCP/IP Performance over Third Generation
Wireless Networks,” Proc. IEEE INFOCOM’04, 2004.
[30] Sanchez, R., J. Martinez, and R. Jarvela, “TCP/IP Performance over EGPRS
Networks,” December, 2002, europe.nokia.com/downloads/aboutnokia/.../mobile_
networks/MNW16.pdf.
[31] Kliazovich, D., et al., “Cross Layer Error Control Optimization in 3G LTE,” IEEE Global
Telecommunications Conference (GLOBECOM), December 2007, pp. 2525–2529.
[32] 3GPP, R2-061866, “Non-Synchronized Random Access in E-UTRAN,” Ericsson,
www.3gpp.org.
[33] Fall, K., and S. Floyd, “Simulation Based Comparison of Tahoe, Reno, and Sack TCP,”
Computer Communication Review, Vol. 26, No. 2, July 1996, pp. 5–21.
[34] Sinha, P., et al., “WTCP: A Reliable Transport Protocol for Wireless Wide-Area Networks,”
ACM Mobicom ’99, Seattle, WA, August 1999.
[35] Benko, P., G. Malicsko, and A. Veres, “A Large-Scale, Passive Analysis of End-to-End TCP
Performance over GPRS,” Proc. IEEE INFOCOM, March 2004, pp. 1882–1892.
[36] Ludwig, R., and H. Katz, “The Eifel Algorithm: Making TCP Robust Against Spurious
Retransmissions,” ACM Computer Communications Review, January 2000.
[37] Abed, G., M. Ismail, and K. Jumari, “Appraisal of Long Term Evolution System with
Diversified TCP’s,” 5th Asia Modelling Symposium (AMS), May 2011, pp. 236–239.
[38] Bakre, A., and B. R. Badrinath, “Handoff and System Support for Indirect TCP/IP,”
Second USENIX Symposium on Mobile and Location-Independent Computing Proceedings,
Ann Arbor, MI, April 1995.
[39] Farkas, V., B. Héder, and S. Nováczki, “A Split Connection TCP Proxy in LTE Networks,”
Information and Communication Technologies, Volume 7479 of the series Lecture Notes in
Computer Science, 2012, pp. 263–274.
[40] Brown, K., and S. Singh, “M-TCP: TCP for Mobile Cellular Networks,” ACM Computer
Communication Review, Vol. 27, No. 5, October 1997, pp. 19–43.
[41] Wang, K. Y., and S. K. Tripathi, “Mobile-End Transport Protocol: An Alternative to TCP/
IP over Wireless Links,” INFOCOM, San Francisco, CA, March/April 1998, p. 1046.
[42] Tsaoussidis, V., and H. Badr, “TCP-Probing: Towards an Error Control Schema with
Energy and Throughput Performance Gains,” Proceedings of ICNP, 2000.
[43] Parsa, C., and J. J. Garcia-Luna-Aceves, “Improving TCP Congestion Control over
Internets with Heterogeneous Transmission Media,” Proc. of the 7th Annual International
Conference on Network Protocols, Toronto, Canada, November 1999.
[44] Fielding, R., et al., “Hypertext Transfer Protocol-HTTP 1.1, RFC 2616,” 1999.
[45] Kopparapu, C., Load Balancing Servers, Firewalls, and Caches, New York: John Wiley &
Sons, 2002.
12
Voice over LTE
LTE was originally regarded as a broadband IP-based cellular system for carry-
ing data services. It was expected that the operators would be able to carry voice
either by reverting to circuit switching over 2G/3G systems or by using VoIP
in one form or another. However, this would necessitate the preservation of the
existing circuit-based networks of 2G/3G in the longer term and prevent the
simplicity and cost-effectiveness of having a single network to handle all servic-
es. Therefore, it was thought that LTE should eventually be able to handle voice
calls through Voice over IP given that it is an all-IP network. Voice over IP is an
ideal application for IP multimedia subsystem (IMS) to provide a rich multime-
dia solution. Hence, the Groupe Speciale Mobile Association (GSMA) [1] chose the
IMS as a standardized means for providing the signaling functionality to sup-
port voice as well as SMS over LTE, which is briefly referred to as VOLTE. This
uses a reduced version of IMS, which provides the required functionality and
simplicity acceptable to the operators. The most important IMS service is the
multimedia telephony service (MMTel) and its support is mandated in VOLTE
specifications. An MMTel voice device can support any number of codecs, but
the list must include the adaptive multirate (AMR) codec that is also used in
3G networks, as well as the wideband adaptive multirate codec (AMR-WB)
if the optional wideband voice communication is supported. The AMR-WB
codec is a G.722.2 ITU-T speech codec, which is also specified in 3GPP TS
26.194 and uses a sampling rate of 16,000 samples per second with a resulting
compressed data rate of 23.85 kbps. This chapter will provide an overview of
the IMS architecture and its components and interfaces as used in VOLTE, the
IMS signaling protocols used, and will also show how some typical capacity
estimates are obtained for supporting Voice over IP in LTE. However, we will
begin the discussions first by explaining how efficient radio resource scheduling
can be carried out for packet voice communication in LTE.

12.1 Radio Resource Scheduling


A concern with voice over LTE is the signaling load for resource assignment
with a large number of simultaneous voice calls. This is based on consideration
of the limited capacity on the physical downlink control channel (PDCCH),
which carries all resource allocation information for both the uplink and down-
link shared channels. The resource assignment on the shared channels can be
done dynamically or semipersistently. In dynamic resource assignment, the
UE sends a scheduling request (SR) to the eNodeB when it has data to send,
and the base station then allocates resources. The resource assignment is
indicated through the downlink control information (DCI) over the PDCCH.
The size of the DCI depends upon several factors, including whether it is for
an uplink or a downlink allocation, while the PDCCH region itself is limited to
at most three OFDM symbols in each 1-ms subframe. This limits the number
of UEs that can receive resource allocation signaling in a subframe under
dynamic resource scheduling. The
dynamic scheduling is particularly inefficient in the case of voice calls in which
the rather small voice packets, which may occupy one or at most two PRBs,
come at regular intervals (say, every 20 ms). In this case, sending an SR for every
voice packet causes an overloading of the PDCCH signaling channels.
To be able to support the resource allocation for a large number of voice
calls, without increasing the size of the PDCCH, the semipersistent scheduling
(SPS) is used. With SPS, the eNodeB uses a specific RRC signaling message
and configures the UE with an allocation ID (SPS-RNTI) and a periodicity.
Then when the UE receives an allocation (DL/UL) using the SPS-RNTI, it
repeats this allocation, according to the preconfigured periodicity [2, 3]. For
VOLTE, the eNodeB can assign a predefined chunk of radio resources to VoIP
users at 20-ms intervals. This saves the UE from having to request resources
for every voice packet and hence significantly reduces the signaling load on the
PDCCH. This resource assignment scheme is referred to as semipersistent in
the sense that the base station can change the assigned resource or the resource
location within the grid if required for link adaptation. For instance, if the
radio link conditions change, a new allocation can be sent to convey the new
best resource location, the modulation-coding scheme, and the required PRBs.
Meanwhile, any HARQ subsequent retransmission is separately scheduled via
dynamic scheduling. Finally, when the data transfer is completed (the voice call
is terminated), the SPS is deactivated via several possible mechanisms such as
through explicit signaling or an inactivity timer expiration.
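The signaling saving from SPS relative to dynamic scheduling can be sketched with a short example; the subframe numbers and the 200-ms observation horizon below are illustrative assumptions, not values from the specifications.

```python
# Sketch: with SPS, one PDCCH activation implicitly grants resources every
# 20 ms; dynamic scheduling would need one DCI per voice packet instead.
def sps_grant_subframes(first_grant_sf, periodicity_ms=20, horizon_ms=200):
    """Subframes (1 ms each) in which the UE holds an SPS grant."""
    return list(range(first_grant_sf, horizon_ms, periodicity_ms))

grants = sps_grant_subframes(first_grant_sf=5)
dynamic_dcis = len(grants)   # one DCI per voice packet under dynamic scheduling
sps_dcis = 1                 # a single activation DCI covers the whole pattern
```

Over a 200-ms window this trades ten PDCCH grants for one activation, which is where the PDCCH capacity relief comes from.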

12.2 IMS Architecture in VOLTE


3GPP has chosen the IMS for the implementation of Voice over IP in LTE [4].
The IMS was originally designed for the management and delivery of real-time
multimedia services over the 3G packet switched domain and was first defined
in 3GPP Release 5 and was frozen in 2002. The reader may refer to [4] for the
stage 2 specifications and to [5] for more comprehensive coverage. Reference
[6] provides a good source for the development and deployment of applications
over IMS. The IMS may be viewed as an external signaling network that man-
ages VoIP calls in LTE, and is not a part of the LTE. More specifically, the IMS
is much like third-party VoIP servers but is owned by the LTE network operator
and has the advantages of guaranteeing the quality of service for voice calls, sup-
porting the handovers to 2G and 3G networks as well as emergency calls, and
is able to communicate with IMSs that are owned by other network operators.
The IMS specifications define a complex system with a large number of imple-
mentation options, and its rollout has taken place a few years after the rollouts
of LTE [7]. The IMS uses the IP connectivity access networks (IP-CAN) such
as GPRS/UTRAN/E-UTRAN, or wireless LAN to transport multimedia sig-
naling between the UE and IMS entities, and the bearer traffic between UEs
and application servers. The IP-CANs, which maintain the service while the
terminal moves, hide the moves from the IMS. Most of the IMS specifications
are independent of the access network except in very few cases such as (1) the
method to be used by the UE to acquire the addresses for the mobile’s first point
of contact with the IMS, which is the element referred to as the P-CSCF, and
(2) in the establishment of PDP context and EPS bearers [4, 8]. The latter will
be controlled based on either the APN of the PDN connection or the user’s
subscription, the local operator’s IP bearer resource-based policy, and the lo-
cal operator’s admission control function and GPRS/EPS roaming agreements,
depending on the information contained within the GPRS/EPS with respect to
the policy and charging control.
The key components of IMS include the Proxy Call Session Control
Function (P-CSCF), the Serving Call Session Control Function (S-CSCF), the
Interrogating Call Session Control Function (I-CSCF), the Application Server
(AS), and the application-level software within the user equipment. We will
also include here the Interconnection Border Control Function (IBCF), which
is used for communication between IMS networks. The interconnection and
interfaces of these elements within the operator network are shown in the
basic IMS signaling architecture (a reduced version) given in Figure 12.1. This
is based on the information provided in [4]. Note that the protocols for the in-
terconnection reference points Gm, Mw, Mm, Ma, and Mx are based on SIP as
defined by IETF RFC 3261 and additional enhancements required to support
3GPP needs [4].

Figure 12.1 A reduced version of the IMS main signaling architectural elements. (© 2014.
3GPP™ TSs and TRs are the property of ARIB, ATIS, CCSA, ETSI, TTA, and TTC.)

The reference point Ma conveys charging information according to 3GPP
TS 32.240 and 3GPP TS 32.260. The Cx reference point is used for the trans-
fer of service parameters of the subscriber from HSS to S-CSCF, and the ses-
sion signaling transport parameters from the CSCF to HSS. These parameters
may include the IP address, the port number of the CSCF, and the transport
protocol. The Sh interface is between the HSS and the SIP application server, as well as
between the HSS and the OSA service capability server and is an intraoperator
interface. This interface is used for the transfer of user-related data stored in the
HSS such as service-related MSISDN, visited network capabilities, the UE time
zone, and user location information. The Sh interface also supports retrieval
of the private user identities using the same public user identity [4]. The Ut
interface is used by the user to configure network services that are hosted on
the AS. The ISC is the IP Multimedia Service Control interface and is used by
the S-CSCF for communicating with the application servers and is covered in
details in [9]. The Mb interface is used for bearer control. More details on
the interfaces shown in Figure 12.1 are provided by 3GPP in [3, 7].

12.2.1 P-CSCF
The proxy CSCF (P-CSCF) is the mobile’s first point of contact with the IMS,
and secures the signaling messages for transmitting across the IP connectivity
access network by means of encryption and integrity protection. The P-CSCF
behaves like a proxy as defined in IETF RFC 3261 [10], which relays all SIP signal-
ing to and from the user whether in the home or a visited network. The P-
CSCF determines the serving CSCF in the home or visiting network, which is
then used to process the VOLTE calls.

12.2.2 S-CSCF
The S-CSCF performs a variety of functions as detailed in [4], which includes
user registration for access to services such as the voice calls. This module is
similar to the MME but is always located in the mobile’s home network and en-
sures that the user receives a consistent set of IMS services even while roaming.
It is the home subscriber’s S-CSCF that processes the subscriber’s VOLTE calls
and is reached via the P-CSCF. The S-CSCF routes the SIP request via normal
IMS routing principles towards, for example, a server in the home or visited
access network using the ISC or Mx interfaces. The S-CSCF may exhibit user
agent-like behavior and maintain states for certain applications where the exact
behavior will depend on the SIP messages being handled, on their context, as
well as on the S-CSCF capabilities needed to support the services. The Stage
3 design is expected to determine the more detailed modeling in the S-CSCF.

12.2.3 I-CSCF and IBCF


The Interrogating CSCF (I-CSCF) is the contact point for signaling messages
arriving from other IMS networks for all connections destined to a user of that
network operator. Such messages are routed via the local exit point that is the
Interconnection Border Control Function (IBCF). The I-CSCF then asks the
home subscriber server (HSS) for the serving CSCF that is controlling the target
mobile for forwarding the signaling message. When the local P-CSCF does not
know which Serving CSCF controls the call (for example, when the destination
user is outside the current IMS domain), it forwards it to the IBCF, which then
sends it out to the destination network. The I-CSCF also receives the signaling
messages from a roaming user currently located within that network operator’s
service area through the Mw interface with the P-CSCF for determination of
the serving CSCF. The functions performed by the I-CSCF include also reg-
istration whereby a S-CSCF is assigned to a user performing SIP registration,
and translating the E.164 address contained in all Request Uniform Resource
Identifiers (URIs) having the SIP URI with user phone parameter format into
the Tel: URI format of IETF RFC 3966.

12.2.4 Application Server and the HSS


The IMS service profile is defined and maintained in the HSS and its scope is
limited to the IM CN Subsystem. The IMS specifications allow multiple service
profiles to be defined in the HSS for a subscription. Each user is associated with
an IMS user profile, which is stored in the home subscriber server. The user
profile contains a set of service profiles that define the services such as multi-
media telephony and SMS. The service profile is downloaded from the HSS to
the S-CSCF, where it is associated with at least one public user identity and is
activated upon registration. The service profile may also be associated with a set
of initial filters, which define how the serving CSCF interacts with application
servers to invoke the service. Each public user identity can be associated with
only one service profile. Each service profile is associated with one or more pub-
lic user identities, which are registered at a single S-CSCF at one time. However,
all public user identities of an IMS subscription should be registered at the same
S-CSCF. The IM CN Subsystem is capable of using the public service identity
for routing of IMS messages.
The application server (AS) executes the service specific logic as identified
by the public service identity. The AS can support IP voice services including
multimedia telephony, voicemail, and SMS. The users are able to manipulate
their application data across the signaling interface Ut. These application serv-
ers are stand-alone devices that are outside the LTE network and can also act
as interfaces to other application environments. For instance, the service ca-
pability server (SCS) type provides the open service access (OSA) across the
application programming interface (API) to third-party application developers.
There is also the IP multimedia service switching function (IM-SSF), which
provides the access to a service framework for developing customized applica-
tions for mobile environment based on enhanced logic customized applications
for mobile enhanced logic (CAMEL) as explained in [6]. The AS can generally
provide complex and dynamic service logic that can, for instance, make use of
additional information that is not directly available via SIP messages such as
location, date, and time. The reader can refer to 3GPP TS23.228 [4] for more
details on the functions of the IMS application server.

12.2.5 Naming and Addressing


Two main identity schemes that may be assigned to a user of IP multimedia
services are the private user identities and the public user identities. One or
more private user identities may be assigned to every IM CN subsystem user.
The private identity is assigned by the home network operator, and used, for
example, for registration, authorization, administration, and accounting pur-
poses. This identity takes the form of a Network Access Identifier (NAI), as
defined in IETF RFC 4282. It is possible for a representation of the IMSI to
be contained within the NAI for the private identity. The private user identity
is contained in all registration requests (including reregistration and deregistra-
tion requests) that are passed from the UE to the home network, but is not
used for routing of SIP messages. The IP multimedia services identity module
(ISIM) within the UE universal integrated circuit card (UICC) should securely
store one private user identity where it cannot be modified by the UE. The
private user identity is a unique global identity that is defined and permanently
allocated by the home network operator, for identifying the user’s subscription
(IM service capability) and not the user. The private user identity specifies the
user’s authentication information, which is used, for instance, during registra-
tion and may be present in charging records based on operator policies. The
private user identity must be stored in the HSS from where it is obtained and
stored by the S-CSCF upon registration.
The public user identity is what is used in the request for communications
to other users and may be included, for instance, on a user’s business card. Every
IM CN subsystem should be assigned one or more public user identities, which
should include one taking the form of a SIP URI [10]. The user public identity
is usually a SIP uniform resource identifier (URI), which identifies the user,
the network operator, and application using a format such as sip:username@
domain. The user public identity can also support traditional phone numbers
in two ways. These consist of an SIP URI that includes a phone number using
a format such as sip: +phone number@domain; and a Tel URI that describes a
stand-alone phone number [11]. The ISIM application within the UE should
securely store at least one public user identity, which should not be modifiable
by the UE. A public user identity should be registered either explicitly or im-
plicitly before originating or terminating IMS sessions. The public user identi-
ties may be used to identify the user’s information within the HSS during, for
example, mobile terminated session setup, but are not authenticated by the
network during a registration. It should be possible to register globally
through, for instance, a single UE request from a user who has more than
one public identity via a mechanism within the IP multimedia CN subsystem,
for example, by using an implicit registration set. The public and the private
user identities are used during user registration with the IMS services. In the
registration process, the mobile sends its IP address and private user identity to
the serving CSCF and quotes one of its public identities. The serving CSCF
then contacts the home subscriber server, retrieves the other public identities,
and sets up a mapping between each of these fields. The user can then receive
incoming calls that are directed towards any of those public identities as well as
make outgoing calls.
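The registration-time mapping described above can be sketched as a toy example; the HSS profile contents and all identities below are made-up, and a real S-CSCF of course does far more than this.

```python
# Toy sketch: registering one public identity binds the whole implicit
# registration set retrieved from the HSS to the UE's contact address.
# All identities and addresses here are made-up examples.
def register(hss_profiles, registered_impu, contact_ip):
    implicit_set = hss_profiles[registered_impu]   # retrieved from the HSS
    return {impu: contact_ip for impu in implicit_set}

hss = {"sip:alice@home1.example": ["sip:alice@home1.example",
                                   "tel:+15551234567"]}
bindings = register(hss, "sip:alice@home1.example", "192.0.2.10")
```

After this mapping is set up, calls directed toward either public identity reach the same registered contact address.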
More details on user identity schemes and requirements can be found in
3GPP references [4, 12]. The mechanisms for addressing and routing for access
to IM CN subsystem services and issues of general IP address management are
discussed in 3GPP TS 23.221 [13]. Finally, it is noted that the VoLTE specifica-
tions insist that every network operator use the IMS well-known access point
name, "IMS," to ensure that a mobile can access the IMS while roaming [6].

12.2.6 UE Application Software


The UE should contain the universal integrated circuit card (UICC), which
carries an application known as the ISIM. This is used to communicate with
the IMS across the signaling interfaces noted earlier [12, 14, 15]. The VoLTE
specifications require that the UE support both IP version 4 and IP version 6.
The ISIM interacts with the IMS in the same way that the USIM interacts with
LTE. The ISIM maintains the home network operator’s IMS domain name
and one IP multimedia private user identity (IMPI), which plays a similar role
to the IMSI identity. The network operator uses this private user identity for
registration, authorization, administration, and accounting purposes, as noted
in the previous section.
The ISIM also contains one or more instances of the IP multimedia pub-
lic identity (IMPU), which is similar to an e-mail address or phone number that
identifies the user to the outside world. A user can also reach the IMS using a
USIM alone by using procedures that derive an IP multimedia private identity
from the user’s IMSI.
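The USIM-only fallback can be sketched as follows; the derivation follows the domain format defined in 3GPP TS 23.003, while the IMSI and MCC/MNC values below are made-up examples.

```python
# Sketch of deriving an IMS private user identity (IMPI) from the IMSI
# when no ISIM is present, following the 3GPP TS 23.003 domain format.
# The IMSI and MCC/MNC values are made-up examples.
def derive_impi(imsi: str, mcc: int, mnc: int) -> str:
    return f"{imsi}@ims.mnc{mnc:03d}.mcc{mcc:03d}.3gppnetwork.org"

impi = derive_impi("262011234567890", mcc=262, mnc=1)
```

The MNC is zero-padded to three digits, so MNC 1 appears as "mnc001" in the derived home network domain.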

12.3 IMS Signaling Protocols in VOLTE


The signaling protocols used in VOLTE are mainly the Session Initiation Pro-
tocol (SIP) and the Session Description Protocol (SDP). The SIP is used for
session initiation and setup, whereas the SDP is used for negotiating the media
parameters and characteristics. The SIP was developed by the Internet Engi-
neering Task Force (IETF) for control of real-time packet switched multimedia
communication and is based on the Hypertext Transfer Protocol (HTTP). The
basic SIP is defined in RFC 3261 [10], with extensions provided in several
other IETF specifications, such as the RFC 3455 [16] for use in the IMS. The
3GPP specification that defines the application of SIP within the IP multime-
dia subsystem is given in [17]. The primary objective of SIP is to set up and
manage the media sessions such as the VoIP calls and perform registration as
well. The SIP has a text-based syntax, which makes the signaling messages
long but easy to read. The protocol is client-server based in that
a client sends a request to a server, which then replies with a response. This is
similar to HTTP with the difference that a single device can function both
as a client and as a server. SIP messages are by default transported over UDP,
but the protocol itself includes its own mechanisms for acknowledgments and
retransmissions in order to provide for reliable message delivery. The SIP signal-
ing messages travel transparently between the mobile and the IMS in the LTE
user plane through a default EPS bearer with QoS class identifier (QCI) 5. The bearer
is set up before the mobile registers with the IMS and is torn down after it
deregisters. The EPC transports the mobile’s voice traffic using a dedicated EPS
bearer with QoS class 1, which is set up at call origination and torn down when
the call is terminated. The voice packets are transported using IP, UDP, and the
Real-Time Transport Protocol (RTP). RTP supports the delivery of real-time
media over an IP network, by carrying out tasks such as labeling packets with
sequence number and timestamps. The RTP is described in IETF RFC 3550.
The description of the media content carried within the SIP signaling
messages are handled by the Session Description Protocol (SDP) originally
specified in [18]. The original version of SDP basically defined a media stream,
using session information such as the device’s IP address and media information
such as the media types, data rates, and the codecs used. Later on, the proto-
col was enhanced to an offer-answer model that allowed two or more parties
to negotiate the media and codecs to use [19]. This is the version used in the
IMS where the SIP requests and responses are used to carry the SDP offers and
answers.
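To make the offer-answer model concrete, the sketch below assembles a minimal SIP INVITE carrying an SDP offer for an AMR audio stream; all identities, addresses, ports, and the dynamic payload type are hypothetical examples, not values taken from the VOLTE profile.

```python
CRLF = "\r\n"

# Illustrative only: a bare-bones SIP INVITE with an SDP offer for an AMR
# voice session. Identities, addresses, and ports are hypothetical.
def build_invite(caller: str, callee: str, ip: str, rtp_port: int) -> str:
    sdp = CRLF.join([
        "v=0",
        f"o=- 0 0 IN IP4 {ip}",
        "s=-",
        f"c=IN IP4 {ip}",
        "t=0 0",
        f"m=audio {rtp_port} RTP/AVP 96",  # 96: a dynamic payload type
        "a=rtpmap:96 AMR/8000",            # offer the AMR codec
    ]) + CRLF
    headers = CRLF.join([
        f"INVITE sip:{callee} SIP/2.0",
        f"From: <sip:{caller}>;tag=1",
        f"To: <sip:{callee}>",
        "Call-ID: example-call-id@home1.example",
        "CSeq: 1 INVITE",
        "Content-Type: application/sdp",
        f"Content-Length: {len(sdp.encode())}",
    ])
    return headers + CRLF + CRLF + sdp

msg = build_invite("alice@home1.example", "bob@home2.example",
                   "192.0.2.10", 49170)
```

The answerer would reply with its own SDP in a 200 OK, keeping only the codecs it accepts, which completes the offer-answer negotiation.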
IMS uses the Extensible Markup Language (XML) Configuration Access
Protocol (XCAP) described in IETF RFC 4825 [20] on the Ut interface be-
tween the mobile and the application server. The client uses the XCAP to map
XML formatted data stored on a server to HTTP uniform resource identifier
(URIs) for access using HTTP [21]. This allows devices to use XCAP to read
and modify information such as voicemail settings and supplementary service
configurations that are stored on the application servers. The Diameter applica-
tion protocols [22] are used for providing the communication means among
the HSS, the CSCFs, and the application servers.

12.4 Voice Capacity Analysis


In this section, we will use a simple hand-waving analysis to estimate the voice
capacity of a LTE network under certain assumptions. The estimation will
be provided for the downlink and uplink sides with a 10-MHz transmission
bandwidth and under the channel conditions with CQIs and the 3GPP rec-
ommended modulation coding requirement as was assumed in Table 3.5 of
Chapter 3. The input data are thus provided in columns 1, 2, and 4 of Table
12.1. The speech codec used will be the AMR 12.2 kbps. The reader can easily
extend the analysis to other similar cases with different codecs and transmission
bandwidth. The narrowband codec is used here based on the considerations
that if the other leg of the call happens to be PSTN (IP to PSTN), the user will
not get the quality improvement using a wideband codec, and moreover, the
AMR-wideband support is still an option for LTE handsets. We will assume
semipersistent scheduling is used under which it is further assumed that the
performance is not limited by the control channel resources but by the shared
channel resources carrying the voice payloads. No accounts are made for gains
due to voice silence statistics; the full shared channels are assumed available
for only the voice traffic.

Table 12.1
Estimated Downlink and Uplink Voice Capacities for Various Channel Conditions
with AMR 12.2-kbps Voice Codec and 10-MHz Transmission Bandwidth

Channel     Modulation   Actual Number   Actual        Coding      Voice Calls   Voice Calls
Condition                of Bits Used    Coding Rate   Rate Used   Supported     Supported
(CQI)                                                              on DL         on UL
1           QPSK         2               0.078         0.078       56            52
2           QPSK         2               0.12          0.12        87            80
3           QPSK         2               0.19          0.19        138           127
4           QPSK         2               0.3           0.3         218           201
5           QPSK         2               0.44          0.44        320           295
6           QPSK         2               0.6           0.6         436           403
7           16QAM        4               0.37          0.37        538           497
8           16QAM        4               0.49          0.49        712           658
9           16QAM        4               0.61          0.61        887           819
10          64QAM        4               0.46          0.61        887           819
11          64QAM        4               0.56          0.61        887           819
12          64QAM        4               0.66          0.61        887           819
13          64QAM        4               0.77          0.61        887           819
14          64QAM        4               0.87          0.61        887           819
15          64QAM        4               0.94          0.61        887           819
Average                                                            580           536

Note: For channel conditions with CQIs of 10 to 15, the same modulation-coding
scheme as for CQI 9 was used. Uplink capacities were calculated for the same
channel conditions as given by the CQIs on the DL.
The narrowband AMR 12.2 codec generates a voice packet every 20 ms
at 12.2 kbps when active, which results in 20 × 12.2 = 244 bits. This voice pay-
load is placed in an RTP/UDP/IP packet with 3 bytes of overhead after using
the IP header compression ROHC, which brings the packet size to 244 + 24 =
268 bits. This packet then passes through the three sublayers of the air interface
protocol stack: MAC, RLC, and PDCP. We will assume an RLC header of 1 byte
(for unacknowledged mode operation using a 5-bit sequence number), a MAC
header of 2 bytes, and a PDCP header of 1 byte using short sequence number
resulting in 4 additional bytes of overhead, bringing the IP voice packet to 268 + 32
= 300 bits in total. Now as explained in Section 3.10.2, a downlink PRB con-
tains 120 resource elements that are modulation symbols assuming single-layer
transmission with a Tx diversity antenna. This assumes that all the first three
symbols in each subframe are taken for control channel and is a conservative
estimate, under the assumption of semipersistent scheduling. Next we will cal-
culate the number of PRBs needed to handle a voice call on the downlink side
for each channel condition (CQI) shown in column 1 of Table 12.1, but first
we do it for one channel condition instance to illustrate the procedure. Take,
for instance, the case for CQI of 1, which requires the QPSK modulation with
a coding rate of 0.078. The QPSK results in 2 bits for each resource element
(OFDMA symbol), and with the code rate of 0.078 provides 120 symbols/
PRB × 2 bits/symbol × 0.078 = 18.72 bits payload. This means with the 300
bits per voice packet, we will require 300/18.72 = 16.025 PRBs per 20 ms for
each voice call on the downlink side. This is true if every single packet with
the modulation scheme and coding assumed is received without any errors all
the time. However, in practice, some packets would be received in error and
would have to be retransmitted locally at the link level using the HARQ fast
retransmission to bring down the error rate within the receiving end. Note that
acceptable voice quality needs to have 98% to 99% of packets received with
no error after decoding. Assuming a reasonable 10% retransmissions at HARQ
level, the number of PRBs required per VoIP call per 20 ms would increase to
16.025 × 1.10 = 17.62. Now in the 1-ms subframe, there are 50 PRBs with the
10-MHz transmission bandwidth assumed, allowing 50 PRBs/17.62 PRBs per
user = 2.84 users. Since the AMR speech codec generates a new speech frame
only every 20 ms, we can have as many as 20 times more voice calls resulting
in 20 × 2.84 = 56.73, or 56 users roughly (we kept the fractions to simplify the
repeat of calculations for other cases). This procedure was repeated for other
channel conditions given in Table 12.1, except that for CQIs of 10 to 15
we used the same modulation-coding scheme as for CQI 9, that is, 16QAM
with a coding rate of 0.61, due to the fact that only UE category 5 can support
64QAM, and such UEs may not be widely available. The results are shown in Table
12.1, column 6.
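The per-CQI downlink procedure above can be reproduced with a short script. The CQI-to-modulation/coding mapping below is the one assumed in Table 12.1, and the 10% HARQ factor, 120 symbols per PRB, and 50 PRBs follow the assumptions of this section.

```python
import math

# CQI-to-(bits per symbol, coding rate) mapping assumed in Table 12.1;
# CQIs 10-15 reuse the CQI-9 scheme (16QAM, rate 0.61), since 64QAM
# support requires category 5 UEs.
CQI_MCS = {1: (2, 0.078), 2: (2, 0.12), 3: (2, 0.19), 4: (2, 0.3),
           5: (2, 0.44), 6: (2, 0.6), 7: (4, 0.37), 8: (4, 0.49),
           9: (4, 0.61)}
for cqi in range(10, 16):
    CQI_MCS[cqi] = CQI_MCS[9]

VOICE_PACKET_BITS = 300    # AMR 12.2 frame + ROHC/PDCP/RLC/MAC overhead
DL_SYMBOLS_PER_PRB = 120   # after control and reference-signal overhead
PRBS_10MHZ = 50            # PRBs per 1-ms subframe at 10 MHz
HARQ_FACTOR = 1.10         # 10% HARQ retransmissions
SUBFRAMES_PER_PACKET = 20  # one voice packet every 20 ms

def dl_voice_calls(cqi):
    bits_per_symbol, code_rate = CQI_MCS[cqi]
    payload_per_prb = DL_SYMBOLS_PER_PRB * bits_per_symbol * code_rate
    prbs_per_call = VOICE_PACKET_BITS / payload_per_prb * HARQ_FACTOR
    return math.floor(PRBS_10MHZ / prbs_per_call * SUBFRAMES_PER_PACKET)

capacities = {cqi: dl_voice_calls(cqi) for cqi in range(1, 16)}
average = int(sum(capacities.values()) / len(capacities))   # 580
```

For example, `dl_voice_calls(1)` gives 56 calls and `dl_voice_calls(9)` gives 887 calls, matching column 6 of Table 12.1, and the statistical average over all fifteen CQIs reproduces the 580-call figure on the downlink.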
For the uplink side, the calculation of the signaling channel overhead
within each PRB is not as straightforward. As discussed in Chapter 4, we will
assume for the 10-MHz transmission bandwidth (i.e., 50 PRBs), a total of 6
PRBs are consumed for the uplink control channels. This might be less in the
case of VoIP due to the fact that semipersistent scheduling can be used, but pro-
vides a more conservative estimate. That leaves a total of 50 – 6 = 44
PRBs for carrying the voice payload. Within each of these voice-carrying PRBs,
two symbols per subframe per subcarrier are used for the demodulation reference
symbols and one for the sounding reference symbols, that is, 3 × 12 = 36 resource
elements. That leaves 168 – 36 = 132 symbols per PRB for carrying the voice
payloads. Repeating then
the calculations as we did for the downlink side for the same channel conditions
(CQIs) given in column 1 of Table 12.1, we obtain the voice call capacities sup-
ported on the uplink, which are displayed in column 7 of Table 12.1. The uplink
voice capacity is seen to be somewhat lower than the downlink side, showing
that the voice capacity is limited by the uplink. The results are also plotted in
Figure 12.2.
Assuming users experience all channel conditions given in the table with
equal likelihood, and no category 5 UEs being available to support the 64
QAM, a simple statistical averaging of the results in columns 6 and 7 of Table
12.1 indicates an average voice call capacity of 580 and 536 calls on the down-
link and uplink, respectively, with the 10-MHz transmission bandwidth and
using the AMR 12.2 voice codec. However, higher capacities can be expected
than indicated here if a voice activity factor of about 50% is accounted for and
the category 5 UEs are available to better take advantage of channel conditions
for CQIs of 10 and higher. Simulation results presented in [23] show the LTE

Figure 12.2 Voice call capacities on DL and UL with AMR 12.2 kbps assuming no category 5
UEs available to support 64 QAM (these results assume 100% voice activity) in 10-MHz band-
width (based on results of Table 12.1).
capacity for voice over IP using the AMR codec at a rate of 12.2 kbps and a
cell bandwidth of 5 MHz varies from 289 to 317 calls on DL and from 123 to
241 calls on UL under various voice scheduling, control channel limitations,
and channel conditions. Similar results are obtained via simulations presented
in [24] for 5-MHz transmission bandwidth which indicate averages at around
320 calls on downlink and 240 calls on uplink per sector, which is much higher
compared to UMTS voice call capacity, which is estimated at 50 voice calls
under average channel conditions with AMR 12.2 kbps [25]. The referenced
simulation results from [24] should be basically doubled to provide results for
the 10-MHz bandwidth.

References
[1] Russell, N., “Official Document IR.92 - IMS Profile for Voice and SMS,” GSMA, March
2013.
[2] 3GPP TS 36.321, “Medium Access Control (MAC) Protocol Specification,” V10.10.0,
Release 10, 2009.
[3] 3GPP TS 36.331, “Radio Resource Control (RRC) Protocol Specification,” V10.0.0, Re-
lease 10, 2009.
[4] 3GPP TS 23.228, “IP Multimedia Subsystem (IMS); Stage 2,” Release 11, 2013.
[5] Camarillo, G., and M. -A. Garcia-Martin, The 3G IP Multimedia Subsystem (IMS): Merg-
ing the Internet and the Cellular Worlds, 3rd ed., New York: John Wiley & Sons, 2008.
[6] Noldus, R., et al., IMS Application Developer’s Handbook: Creating and Deploying Innova-
tive IMS Applications, New York: Academic Press, 2011.
[7] Cox, C., An Introduction to LTE, 2nd ed., New York: John Wiley & Sons, 2014.
[8] 3GPP TS 24.229, “IP Multimedia Call Control Protocol Based on Session Initiation Pro-
tocol (SIP) and Session Description Protocol (SDP); Stage 3,” Release 11, 2010.
[9] 3GPP TS 23.218, “IP Multimedia (IM) Session Handling; IM Call Model; Stage 2,”
Release 11, 2010.
[10] IETF RFC 3261, “SIP: Session Initiation Protocol,” June 2002.
[11] IETF RFC 3986, “Uniform Resource Identifier (URI): Generic Syntax,” January 2005.
[12] IETF RFC 3966, “The tel URI for Telephone Numbers,” December 2004.
[13] 3GPP TS 23.003, “Technical Specification Group Core Network; Numbering, Addressing
and Identification,” Release 10, 2009.
[14] 3GPP TS 23.221, “Architectural Requirements.”
[15] 3GPP TS 31.103, “Characteristics of the IP Multimedia Services Identity Module (ISIM)
Application,” Release 11, 2010.
[16] IETF RFC 3455, “Private Header (P-Header) Extensions to the Session Initiation Protocol
(SIP) for the 3rd-Generation Partnership Project,” January 2003.
[17] 3GPP TS 24.229, “IP Multimedia Call Control Protocol Based on Session Initiation
Protocol (SIP) and Session Description Protocol (SDP), Stage 3,” Release 11, 2010.
[18] IETF RFC 4566, “SDP: Session Description Protocol,” July 2006.
[19] IETF RFC 3264, “An Offer/Answer Model with the Session Description Protocol (SDP),”
June 2002.
[20] IETF RFC 4825, “The Extensible Markup Language (XML) Configuration Access
Protocol (XCAP),” May 2007.
[21] 3GPP TS 24.623, “Extensible Markup Language (XML) Configuration Access Protocol
(XCAP) over the Ut Interface for Manipulating Supplementary Services,” Release 11,
December 2012.
[22] 3GPP TS 29.229, “Cx and Dx Interfaces Based on the Diameter Protocol; Protocol
Details,” Release 11, 2013.
[23] 3GPP R1-072570, “Performance Evaluation Checkpoint: VoIP Summary,” 2007.
[24] Holma, H., et al., (eds.), LTE for UMTS, New York: John Wiley & Sons, 2009.
[25] Rahnema, M., UMTS Network Planning, Optimization and Inter-Operation with GSM,
New York: John Wiley & Sons, 2007.
13
LTE-Advanced Pro: Enhanced LTE
Features
The LTE system is continuously being developed and enhanced with more
features that improve system performance and introduce new
services, such as smart metering and vehicular communications, which are pos-
ing significantly different requirements compared to mobile broadband, for
which this system was originally designed. This chapter goes through the lat-
est LTE standard developments and covers aspects, such as link aggregation
between two LTE nodes [i.e., dual connectivity (DC)], between LTE and un-
licensed versions of LTE air interface [i.e., licensed-assisted access (LAA)] and
tight interworking with WiFi [i.e., LTE-WiFi aggregation (LWA1)], along with
massive CA, which are aiming at improvements for system capacity, higher
throughputs, and connectivity robustness. Another set of functionalities cov-
ered in this chapter relates to machine-type communications, including nar-
rowband Internet-of-Things (NB-IoT) and device-to-device (D2D) commu-
nications. This chapter starts with the introduction to LTE-Advanced Pro and
outlines its features, which are discussed in detail later. Finally, a comparison
between the main evolutionary steps of LTE is provided together with through-
put calculations of LTE, LTE-Advanced, and LTE-Advanced Pro systems.
1. Besides LWA, LWIP and RCLWI features will also be covered so that the full scope of Release
13 interworking mechanisms is presented and evaluated.
13.1 LTE-Advanced Pro Introduction and Main Features Overview


LTE-Advanced Pro is the new brand name for evolved LTE, starting with 3GPP Release 13. The new name is intended to mark the point from which the LTE system has been significantly enhanced for new services and improved system efficiency. These enhancements comprise entirely new features as well as further enhancements to already existing functionalities. The first release of LTE-Advanced Pro was frozen in March 2016. In January 2017, the next release of the system was under active development within 3GPP Release 14, which was subsequently frozen in June 2017. A quite extensive set of features was introduced with LTE-Advanced Pro compared to LTE-Advanced. This section discusses two categories of these features, namely, system capacity and performance improvements and support for MTC-type services. The full set of features specified within Release 13 can be found in [1].
The system capacity and performance improvements of LTE-Advanced
Pro include the following set of features:

• DC:2 This feature enables aggregation of two radio links over a nonideal backhaul without a low-latency requirement. To allow this, the links are aggregated at the PDCP level, where PDCP PDUs are combined, rather than at the MAC layer, where transport blocks are aggregated under the CA feature. The links of a macro cell and a small cell are combined, with the macro cell acting as the mobility and signaling anchor.
• LTE-WiFi aggregation (LWA): As in DC, this feature provides link aggregation. In this case, however, the secondary link is provided via Carrier WiFi in the 2.4-GHz or 5-GHz ISM band, thus enabling tight interworking between LTE and WLAN. The point of aggregation is also at the PDCP level of the LTE anchor carrier. In Release 13, this is only possible in the DL direction (i.e., the WiFi link serves as a supplemental DL carrier); Release 14 specifies UL transmission within this framework under the enhanced LWA (eLWA) feature. Additionally, complementary features for interworking with WiFi standardized within Release 13 include RAN Controlled LTE-WLAN Interworking (RCLWI), which switches the UP between LTE and WiFi, and LTE-WLAN Radio Level Integration with IPsec Tunnel (LWIP), which provides link aggregation with legacy WLAN.
• Licensed-Assisted Access (LAA): This is another option for aggregating radio links for capacity and throughput improvements. LAA aggregates the licensed primary LTE carrier with a secondary link using the new LTE radio frame format 3, which is suited for unlicensed operation and fulfills the fair-coexistence requirement with WiFi in the 5-GHz ISM band. As in LWA, the original LAA proposal covers DL aggregation only; however, similar to eLWA, Release 14 proposes enhanced LAA (eLAA), which adds UL support for unlicensed-spectrum link aggregation.

2. This feature has actually been standardized within Release 12, but we put it here as it fits within the overall framework of the main LTE improvements within LTE-Advanced Pro.
• Massive carrier aggregation (massive CA): This extends the regular CA feature towards a larger number of component carriers, including both licensed and unlicensed bands. As per Release 13, this feature enables combining up to 32 CCs, which theoretically provides up to 640 MHz of aggregated bandwidth (BW). Each of the CCs complies with the LTE Release 8 channel BWs (and the LAA frame type) and supports backward compatibility.
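As a quick sanity check on the massive CA numbers above, the aggregated bandwidth can be sketched in a few lines of Python (an illustrative calculation; the constants are the Release 13 limits quoted in the text):

```python
# Massive CA arithmetic (illustrative): Release 13 allows up to 32 component
# carriers (CCs), each limited to the widest Release 8 channel BW of 20 MHz.
MAX_CCS_REL13 = 32
MAX_CC_BW_MHZ = 20.0

def aggregated_bw_mhz(num_ccs, cc_bw_mhz=MAX_CC_BW_MHZ):
    """Total aggregated bandwidth in MHz for num_ccs component carriers."""
    if not 1 <= num_ccs <= MAX_CCS_REL13:
        raise ValueError("Release 13 massive CA supports 1..32 CCs")
    return num_ccs * cc_bw_mhz

print(aggregated_bw_mhz(32))  # -> 640.0, the theoretical maximum in the text
```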

The MTC-type service support of LTE-Advanced Pro includes the following set of features:

• Narrowband Internet-of-Things (NB-IoT): This is a largely novel system design approach, with a new PHY layer and protocol stack simplifications, operating under the legacy LTE framework. The aim is to significantly simplify system operation for low-end devices, with the system BW reduced to 1 PRB (i.e., 180 kHz), coverage enhancements, and reduced feature support for lower power consumption.
• Device-to-device (D2D): This refers to direct communication between devices under the mobile network's supervision for resource control handling. For this feature, new transport and physical channels are specified under the name sidelink (SL). Additionally, vehicular communication further expands this framework within Release 14, using the SL for cars to communicate with other cars, infrastructure, the network, or pedestrians.

13.2 Dual Connectivity


Release 10 LTE-Advanced brought the idea of using small cells (SC) within the LTE system, which is supposed to provide the following benefits: increased system capacity through network densification, improved indoor performance, and offloading of the macro network for home usage. This introduced the world of heterogeneous networks (HetNets), where regular macro sites are accompanied by full-fledged eNodeBs using low transmit power and thus providing much smaller coverage. In the initial design, the UE could be connected to either a macro cell or a small cell; thus, a handover (HO) procedure needs to be involved for a cell change in the RRC_CONNECTED state. In the case of outdoor mobile use, this may result in frequent handovers and requires fine optimization of the handover parameters between those cells; otherwise, many RLFs could be experienced, especially in interfrequency deployments (i.e., macro cells deployed on different frequencies than small cells). If, however, both are deployed on the same frequency, interference is a major issue, as the small cells lie within the macro sites' coverage. This was addressed by the enhanced intercell interference coordination (eICIC) feature provided within Release 10, by decreasing the macro-cell power or even blanking subframes for the cell-edge users served by the small cells. Yet another aspect comes with the requirement of improving user data rates, which was initially covered by CA (Release 10) and CoMP (Release 11). In an intersite deployment, both require very low backhaul latency (the ideal backhaul), as the multiple transmission points must be in sync. This is because resource aggregation is provided at the MAC layer; thus, a single scheduler is responsible for scheduling resources within the same TTI on different component carriers (CA) or on the same carrier (CoMP).
All the above issues were taken into account within Release 12 when dual
connectivity3 was introduced. The goals that DC is addressing include the fol-
lowing [2]:

• Increase user throughput by aggregating resources from different transmission points.
• Provide the possibility of using nonideal backhaul for resource aggregation (compared to CA or CoMP) by relaxing the latency requirement.
• Improve mobility robustness by decreasing the need for HOs, and improve performance for mobile users when connecting with small cells.
• Address the DL/UL imbalance between the macro cell and the SC: when the UE is at the cell edge of a small cell, it has a better DL link to the macro site (due to higher DL transmit power) and a better UL link to the small cell (due to lower path loss to the small cell).
• Offload macro cells by high utilization of the available small cells.
• Decrease the signaling load for small cells by decreasing the need for informing the core network (CN) of frequent path changes.

3. As mentioned, DC is in fact a Release 12 feature; however, it fits into the overall framework of the LTE-A Pro advancements, and many enhancements are delivered within Release 13; therefore, we treat and describe it within the LTE-Advanced Pro context.

• Apply different connections to different bearer/traffic types; for example, VoIP service is not resource-hungry but requires tight latency and can tolerate packet losses, whereas best effort (BE)/video traffic is high-throughput but not as delay-critical.

13.2.1 DC Design, Operation, and Configuration


DC is a concept where a UE in the RRC_CONNECTED state is allocated two radio links for resource aggregation from two different network nodes (eNodeBs) that are connected via a nonideal backhaul, utilizing regular X2 connectivity. The nodes play different roles: the macro cell serves as the mobility and signaling anchor (called the master eNodeB, terminating S1-MME), and the small cell serves as a local capacity booster (called the secondary eNodeB, providing additional radio resources for the UE). DC can be combined with CA, where each node may provide multiple component carriers, forming cell groups (CGs) that are associated with a single eNodeB. Therefore, the UE in this setup is configured with [3]:

• The master cell group (MCG), associated with the MeNodeB, comprising a PCell and zero, one, or more SCells;
• The secondary cell group (SCG), associated with the SeNodeB, comprising a PSCell (primary SCell) and zero, one, or more SCells.
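As a rough illustration of this configuration, the MCG/SCG split can be modeled as a simple data structure (the class and field names below are our own shorthand, not 3GPP ASN.1):

```python
# Sketch of the DC cell-group configuration: an MCG anchored on a PCell and
# an SCG anchored on a PSCell, each with zero or more additional SCells.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CellGroup:
    anchor: str                                      # PCell (MCG) or PSCell (SCG)
    scells: List[str] = field(default_factory=list)  # zero, one, or more SCells

    def serving_cells(self) -> List[str]:
        return [self.anchor] + self.scells

@dataclass
class DualConnectivityConfig:
    mcg: CellGroup   # master cell group, associated with the MeNodeB
    scg: CellGroup   # secondary cell group, associated with the SeNodeB

cfg = DualConnectivityConfig(
    mcg=CellGroup(anchor="PCell-macro", scells=["SCell-macro-1"]),
    scg=CellGroup(anchor="PSCell-small"),
)
print(cfg.mcg.serving_cells())  # -> ['PCell-macro', 'SCell-macro-1']
```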

13.2.1.1 Control Plane and User Plane Design and Operation


In the control plane [see Figure 13.1(a)], there is only one S1-MME connection and only one RRC connection at the radio interface, provided by the MeNodeB, which has the following implications on the system:

• Signaling related to mobility with small cells is decreased, as an SC configured as SeNodeB is not visible to the MME.
• The RRC configuration of the SCG that should be provided to the UE is transferred from the SeNodeB to the MeNodeB via X2-C and encapsulated within the RRCConnectionReconfiguration provided by the MeNodeB.
• Any measurements related to the SCG are sent within RRC signaling towards the MeNodeB and need to be transferred to the SeNodeB.
• Radio link failure at the PSCell does not result in an RRC Connection Reestablishment procedure for the SCG (as happens with the PCell); rather, the UE notifies the MeNodeB about the failure of the secondary link.
• The signaling radio bearers (SRBs) are only provided via the MCG.
Figure 13.1 DC: (a) CP, (b) UP, and (c) protocol architecture. [Figures 13.1(a, b) modified from [3]. Figure 13.1(c) © 2016. 3GPP™ TSs and TRs are the property of ARIB, ATIS, CCSA, ETSI, TTA, and TTC.]

The responsibility for resource allocation is distributed in the following way: the MeNodeB orders the UE to provide measurements on the SCG, which it evaluates, and decides whether to add the SeNodeB resources for DC operation (using the new X2-AP procedure SeNodeB Addition); however, it is the SeNodeB that decides which cell in the SCG will be the PSCell for the UE.
In the user plane [see Figure 13.1(b)], the UE is simultaneously connected to both the MeNodeB and the SeNodeB (hence, dual connectivity). In terms of the RAN-CN UP connection, the MeNodeB obtains the UP data from the SGW using the S1-U interface, whereas the SeNodeB can be connected either to the MeNodeB (where the UP data are delivered via X2-U) or directly to the SGW (where the UP data are delivered via S1-U). Figure 13.1(c), the protocol architecture, shows the corresponding idea of the three types of radio bearers that are possible in DC. Starting from the left side, there is the MCG bearer (whose radio protocols are only located at the MeNodeB), the split bearer (whose radio protocols are in both the MeNodeB and the SeNodeB), and the SCG bearer (whose radio protocols are only located at the SeNodeB). In the case of the MCG and SCG bearers, the configuration is straightforward (i.e., the S1 bearer is mapped directly to the radio bearer of the corresponding node, and the processing is fully performed by that node). In the split bearer option, resource aggregation between the MeNodeB and the SeNodeB is performed at the PDCP level (i.e., PDCP PDUs are combined, and thus an ideal backhaul is not needed as in the case of CA, where transport blocks are aggregated at the MAC layer). In this option, there is a need for a flow control mechanism, that is, an additional high-level scheduler that maps an individual PDCP PDU to either of the available links (i.e., MCG versus SCG). The advantages and disadvantages of both bearer type configurations, to help configure the more suitable one, are provided in Table 13.1.
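To make the flow control idea concrete, the following is a minimal sketch of a PDCP-level "scheduler" that assigns PDUs to the MCG or SCG leg. The least-backlog policy is purely illustrative; the standard leaves the actual algorithm to the implementation, which would typically also weigh X2 latency, per-leg throughput, and buffer status feedback:

```python
# Toy split-bearer flow control: map each PDCP PDU to the leg (MCG or SCG)
# with the smaller current backlog, updating the backlog as PDUs are queued.
def route_pdcp_pdus(pdu_sns, backlog):
    """pdu_sns: iterable of PDCP sequence numbers to transmit.
    backlog: current per-leg queue depth in PDUs, e.g. {"MCG": 2, "SCG": 0};
    mutated as PDUs are assigned. Returns {sn: leg}."""
    routing = {}
    for sn in pdu_sns:
        leg = min(backlog, key=backlog.get)  # pick the least-loaded leg
        routing[sn] = leg
        backlog[leg] += 1                    # account for the newly queued PDU
    return routing

print(route_pdcp_pdus(range(4), {"MCG": 2, "SCG": 0}))
# -> {0: 'SCG', 1: 'SCG', 2: 'MCG', 3: 'SCG'}
```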
According to [3], there can be one MCG bearer and one or more SCG bearers, or one split bearer; thus, an SCG bearer and a split bearer cannot both be configured at the same time. However, bearers can be reconfigured on the fly from one type to the other using the RRC Connection Reconfiguration procedure towards the UE and the SeNodeB Modification procedure. Which bearer configuration option is selected is decided by the RRM at the MeNodeB, for example:

• If the UE is stationary, use the SCG bearer for a bandwidth-hungry BE service, while keeping the VoIP connection at the wide-area coverage cell from the MCG (i.e., utilizing the MCG bearer for VoIP). This fully offloads the MeNodeB resources, avoiding the use of X2 capacity for transferring UP data (as would be needed if a split bearer were used);
• If the UE is mobile, use the split bearer for a temporary capacity boost. This decreases signaling towards the core network and is much faster, as it does not require switching the path from the SGW towards the SeNodeB.
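The RRM heuristic sketched in these bullets could be expressed as a simple decision function (the rule set and inputs are illustrative; the actual policy is vendor-specific):

```python
# Toy MeNodeB RRM rule mirroring the two bullets above; "voip",
# "best_effort", and the mobility flag are illustrative inputs.
def select_bearer_type(service: str, ue_mobile: bool) -> str:
    if service == "voip":
        return "MCG bearer"    # latency-sensitive traffic stays on the macro anchor
    if ue_mobile:
        return "split bearer"  # RAN-level boost, no SGW path switch needed
    return "SCG bearer"        # stationary bulk traffic fully offloaded to the SC

print(select_bearer_type("best_effort", ue_mobile=False))  # -> SCG bearer
```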

Table 13.1
SCG Bearer Versus Split Bearer*

SCG bearer:
Advantages: No need for the MeNodeB to buffer or process packets of the SeNodeB bearer; no need to route traffic to the MeNodeB; no need for flow control functionality.
Disadvantages: SeNodeB mobility visible to the CN; reconfiguration needs to go via the MME, thus cannot be very dynamic; utilization of radio resources between the MeNodeB and the SeNodeB for the same bearer not possible; at SeNodeB change, handover-like interruption time.

Split bearer:
Advantages: SeNodeB mobility hidden from the CN; utilization of radio resources between the MeNodeB and the SeNodeB for the same bearer possible (flexible resource allocation); reconfiguration is performed at the RAN level (dynamic); at SeNodeB change, no interruption time (the PDUs can be steered via the MeNodeB during the change).
Disadvantages: Need to buffer and process the packets of the split bearer at the MeNodeB; need to route traffic via the MeNodeB; need for flow control (i.e., an additional "scheduler" at PDCP).

*Based on input from [2].

13.2.1.2 MAC and PHY Configuration


As there are two different radio links with separate protocol stacks for either bearer configuration option, the MAC and PHY layers operate totally independently for each CG; thus [3, 4]:

• There are independent C-RNTIs allocated to the UE: one for the MCG and one for the SCG (the SeNodeB configures the SCG C-RNTI, but as it does not have an RRC signaling connection towards the UE, the configuration goes via the MCG).
• There are separate DRX configurations that can be applied to the MCG and the SCG (i.e., CG-specific DRX operation applies to all configured and activated serving cells in the same CG).
• Frame timing and the system frame number (SFN) are aligned among the CCs of the same CG and may or may not be aligned among different CGs.
• There is one MAC entity, and thus one MAC scheduler, per CG.
• PUCCH is transmitted only in the PCell (MCG) and the PSCell (SCG).
• The Timing Advance Group (TAG) is configured per CG, and expiry of one TAG in one CG does not imply expiry of a TAG in the other CG.

However, there is also an aspect that binds these two links together, namely the measurement gap, which is configured as common, covering both the MeNodeB and the SeNodeB (i.e., when a measurement gap is configured, the UE can receive from neither the MCG nor the SCG).
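The per-CG independence listed above, together with the common measurement gap, can be summarized in a small configuration sketch (field names and values are illustrative):

```python
# Per-cell-group MAC configuration: each CG has its own C-RNTI and DRX,
# while the measurement gap is a single, UE-wide setting covering both CGs.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CellGroupMacConfig:
    c_rnti: int        # allocated independently per cell group
    drx_cycle_ms: int  # CG-specific DRX configuration

@dataclass
class UeDcConfig:
    mcg: CellGroupMacConfig     # configured by the MeNodeB
    scg: CellGroupMacConfig     # configured by the SeNodeB (signaled via MCG)
    meas_gap_ms: Optional[int]  # common setting covering BOTH cell groups

    def can_receive(self, in_gap: bool) -> bool:
        """During a configured measurement gap the UE receives on neither CG."""
        return not (self.meas_gap_ms is not None and in_gap)

ue = UeDcConfig(mcg=CellGroupMacConfig(c_rnti=0x003D, drx_cycle_ms=40),
                scg=CellGroupMacConfig(c_rnti=0x01A2, drx_cycle_ms=80),
                meas_gap_ms=6)
print(ue.can_receive(in_gap=True))  # -> False
```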

13.2.1.3 Conclusions and Key Practical Issues


DC serves as a first step and can be seen as a baseline towards more flexibility in the LTE system, as compared to LTE-Advanced, in the resource aggregation domain. As can already be seen from the Release 13 developments, DC has its successor features in the LWA and eLAA4 features, as elaborated in the next sections of this chapter. The key practical aspects and implications of DC are:

• In the split bearer case, DC requires flow control (i.e., an additional RRM algorithm scheduling the PDCP PDUs to the different links).
• The scheduling of individual packets on each link is performed separately and independently of the other link and can thus be optimized per link according to the individual channel characteristics.
• The advantage of this design (i.e., a single RRC connection at the macro site and the possibility to flexibly assign UP data to different links) is that the signaling overhead can be decreased through a reduction of the number of handovers, as a CP/UP split concept can be realized by keeping the user context at the MeNodeB while flexibly allocating radio resources among the MeNodeB and different SeNodeBs.
• There is a single link for signaling, and it is always at the MeNodeB, so there is a risk of dropping the connection when the MeNodeB link deteriorates, even if the SeNodeB has good channel quality. Additionally, because of this, there is extra signaling between the SeNodeB and the MeNodeB for encapsulation of the RRC signaling related to the SCG (even the measurements related to the SCG must go through the MCG).
• The mobility framework is expanded from a handover between two cells, incorporating the following procedures: single connectivity-to-DC, DC-to-DC, DC-to-single connectivity, and SeNodeB change.
• The traffic steering/mobility load balancing framework is expanded from the intrafrequency/interfrequency/inter-RAT handover towards a more holistic framework with multiple options (i.e., should the connection be moved from the macro cell to the small cell, should the small cell be added to the DC operation, should the SeNodeB be released, or which type of bearer should be used). A more detailed description is provided in [5].

4. The Release 13 LAA is a CA-type feature; however, the Release 14 WI discusses eLAA, which enables nonideal backhaul between the licensed LTE PCell and the unlicensed LTE-LAA SCell.
• The current multiconnectivity (by means of DC) is limited to only one additional link; this is currently being discussed in 3GPP in terms of multiconnectivity and make-before-break aspects, to further enhance mobility performance.
• The current DC is limited to having either a split bearer or an SCG bearer, so a potential enhancement could be to support both types concurrently and use them, for example, per application requirements.

13.3 LTE-Advanced Pro Interworking with WiFi


The interoperability between LTE and WiFi has evolved significantly, from the initial very loose interworking (with the use of ANDSF, based on policies, Release 8), through RAN-rules-based traffic steering (RALWI, Release 12), towards the very tight interworking mechanisms provided within Release 13. The interworking thus evolved from CN-assisted, through RAN-assisted, to RAN-controlled. The recent enhancements aim at improving WiFi utilization, giving more control to the network operator (including WiFi measurements and incorporating WiFi into the MNO's RRM), and improving the user experience of WiFi connectivity (considering, for example, the WiFi load and link quality) [6]. More specifically, 3GPP has recently standardized the following three features for RAN-level integration of LTE and WiFi5 (see Figure 13.2):

• LTE-WLAN Aggregation (LWA): This is based on the DC architecture and concept, with switched or split bearer operation (i.e., with the capability to receive data transmission of the same DRB on both the LTE and WiFi links simultaneously). Release 13 standardizes operation where the secondary link via WiFi is provided only in the DL, whereas all UL data for the corresponding bearer is carried via LTE [see Figure 13.2(a)]. The data is aggregated at the PDCP level and thus requires the following network upgrades: a new protocol, the LWA Adaptation Protocol (LWAAP); a new interface between the LTE and WiFi nodes (in the case of a non-collocated deployment), Xw; and a new logical entity, the WLAN Termination (WT).
• LTE-WLAN Radio Level Integration with IPsec Tunnel (LWIP): This is a concept similar to LWA; however, it is applicable to legacy WLAN, without any WLAN network upgrade; that is, an IPsec tunnel is established between the UE and the eNodeB [see Figure 13.2(b)]. In this case, the links are aggregated above PDCP, that is, at the IP level (IPsec tunneling); thus, if a bearer were split and provided over two links, the packets could arrive out of order due to the lack of the in-order delivery function provided by PDCP; therefore, the split bearer is not used in LWIP. Contrary to LWA, in LWIP both directions (i.e., DL and UL) are supported via the secondary WiFi link. In both cases, LWA and LWIP, the EPS bearer is mapped to the radio bearer, that is, distributed and controlled by the eNodeB.

5. All the features provide interworking with WiFi at the existing 2.4-GHz and 5-GHz bands; however, enhancements are envisioned for 60-GHz-band WiFi (e.g., 802.11ad and 802.11ay).

Figure 13.2 Release 13 LTE-WiFi interworking schemes. (After: [3].)
• RAN Controlled LTE WLAN Interworking (RCLWI): This is an offloading mechanism, in contrast to LWA and LWIP, which are aggregation schemes. RCLWI is a RAT switch, an evolution of the ANDSF-based and RAN-assisted interworking schemes. In this case, there is no UP connectivity between the LTE eNodeB and the WLAN AP, but a total offload of the data flow from the core network, and both the DL and UL directions are supported by the WiFi link. However, the architecture on the RAN level is the same as with LWA; that is, the Xw interface and the WT node are present to exchange configuration and measurements for the traffic-steering decision [see Figure 13.2(c)].

The following sections provide specifics on each of the above features that
are then summarized in Section 13.3.4.

13.3.1 LTE-WLAN Aggregation


LWA is the most flexible and dynamic scheme for utilizing aggregated radio resources of both LTE and WLAN for a single UE in RRC_CONNECTED mode. The mechanism addresses both collocated (a RAN logical node has the functionality of an eNodeB and a WiFi AP) and non-collocated (with the eNodeB and WiFi AP connected via nonideal backhaul) scenarios. The approach is based on the dual connectivity framework (presented in Section 13.2), with the eNodeB serving as the master node/link (signaling and mobility anchor) and WiFi as the secondary node/link (additional radio resources for data); the aggregation is done at the PDCP level (per PDCP PDU in the case of the split LWA bearer, and per bearer in the case of the switched LWA bearer). As multiple DRBs can be transmitted over the WiFi link, the LWAAP is specified in the eNodeB protocol stack to encapsulate PDCP PDUs by adding a header with the DRB identity. This serves two purposes: (1) to determine at the receiver side that a specific PDU coming from WLAN L2/L1 belongs to an LWA bearer, and (2) to determine to which DRB this specific PDU maps (details on the protocol stacks for LWA can be found in [3]). Additionally, PDCP is equipped with the LWA status report control PDU to provide information on the sequence numbers received over the LWA bearer; this is particularly important as WiFi links are assumed to be less reliable than LTE transmission.
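A minimal sketch of the LWAAP idea, assuming a simplified one-byte header that carries only the DRB identity (the actual header format is specified in 3GPP TS 36.360, so treat this as an illustration, not a faithful encoder):

```python
# Toy LWAAP: prepend a DRB-identity header to each PDCP PDU before it is
# handed to the WLAN stack, so the receiver can map it back to its bearer.
def lwaap_encapsulate(drb_id: int, pdcp_pdu: bytes) -> bytes:
    """Prepend a one-byte header carrying the DRB identity (simplified)."""
    if not 0 <= drb_id <= 255:
        raise ValueError("DRB identity does not fit the toy one-byte header")
    return bytes([drb_id]) + pdcp_pdu

def lwaap_decapsulate(frame: bytes):
    """Recover (drb_id, pdcp_pdu) at the receiver to map the PDU to its DRB."""
    return frame[0], frame[1:]

frame = lwaap_encapsulate(5, b"\xde\xad")  # the PDCP PDU bytes are dummies
print(lwaap_decapsulate(frame))  # -> (5, b'\xde\xad')
```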
Designing LWA based on the DC framework has the following advantages:

• In the split bearer scenario, as in the case of DC, a flow control function is required to decide on the transmission path of an individual PDCP PDU, either through the LTE or the WLAN link. Since, as already mentioned, the RAN can configure a UE with either DC or LWA, a common framework for management of the resources can be designed, that is, a unified RRM decision for selecting either DC or LWA, and then a single algorithm for flow control (i.e., a PDCP PDU scheduler).
• Additionally, this design for resource aggregation does not require WLAN-specific CN nodes, interfaces, or CN signaling, as the secondary WiFi link is transparent to the EPC.

One important difference (also a drawback) is that in Release 13 LWA, aggregation over the secondary WiFi link is supported only in the DL direction.6

13.3.1.1 WLAN Termination Node and Mobility Management


In the case of a non-collocated deployment, a new Xw interface is defined for both UP data and CP signaling exchange. As WLAN does not support 3GPP interfaces and protocols, for this solution to work, the WLAN Termination (WT) logical node is defined. The WT terminates 3GPP signaling for the WLAN APs; however, it is very flexible in terms of implementation; that is, it can be integrated with an access point (AP), integrated with an access controller (AC), or implemented as a stand-alone network node. Additionally, it can be configured per single AP or per multiple APs. If the WT is the termination point for multiple APs, the UE can freely attach to any AP without notifying the eNodeB, depending on the WLAN mobility set. The WLAN mobility set specifies the group of WLAN APs, identified by one or more SSID(s), HESSID(s), or BSSID(s), that are controlled by the same WT and within which the mobility is controlled by the UE (i.e., transparent to the eNodeB). However, mobility between WLAN mobility sets (or outside a single WLAN mobility set) is controlled by the eNodeB. Measurement reporting reflects that approach, where the UE reports when [7]:

6. However, this limitation is addressed by the Release 14 eLWA, which also covers the UL (together with other improvements, for example, support for the 60-GHz WiFi version or SON for LWA).

• WLAN becomes better than threshold (measurement event W1), by which the eNodeB may decide to add these WLAN nodes to the WLAN mobility set and establish a WLAN link with the associated WT (LWA activation).
• All WLANs inside the WLAN mobility set become worse than threshold1 and a WLAN outside the WLAN mobility set becomes better than threshold2 (measurement event W2), by which the eNodeB may decide to change the WLAN mobility set (inter-WLAN-mobility-set mobility, by means of an LWA handover, as explained below).
• All WLANs inside the WLAN mobility set become worse than a threshold (measurement event W3), by which the eNodeB may decide to release the LWA link with the current WT (LWA deactivation).
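The W1/W2/W3 conditions above can be sketched as a simple evaluation function; the threshold values and the per-AP level metric are placeholders, and a real UE would additionally apply hysteresis and time-to-trigger before reporting:

```python
# Toy evaluation of the LWA WLAN measurement events W1/W2/W3.
def evaluate_wlan_events(inside, outside, th, th1, th2):
    """inside/outside: measured WLAN levels (dBm) for APs inside and outside
    the configured WLAN mobility set; returns the list of triggered events."""
    events = []
    if any(m > th for m in outside):
        events.append("W1")  # a candidate WLAN became better than threshold
    if inside and all(m < th1 for m in inside) and any(m > th2 for m in outside):
        events.append("W2")  # mobility set degraded while an outsider is good
    if inside and all(m < th for m in inside):
        events.append("W3")  # whole mobility set degraded -> LWA deactivation
    return events

print(evaluate_wlan_events([-85, -88], [-72], th=-70, th1=-80, th2=-75))
# -> ['W2', 'W3']
```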

To perform the above actions related to LWA measurement reporting on the WLAN link, the following procedures are defined [3]: WT Addition (to establish the UE context in the WT), WT Modification (to establish, modify, or release the bearer context, or to modify the UE context), and WT Release (to release the UE context in the WT). The (re)configuration of the radio interface is provided to the UE using the RRC Connection Reconfiguration procedure with the lwa-Configuration IE (within or outside the mobilityControlInfo IE). As can be seen from this procedure set, there is no explicit handover procedure for event W2 (i.e., to move the UE context from the source to the target WT). Therefore, the WT Change procedure consists of a WT Release at the currently serving WT (i.e., UE context release from the old WT) and a WT Addition at the target WT (i.e., establishment of the UE context in the new WT). Note that, as the WLAN mobility set may be a subset of all the WLAN APs or SSIDs, BSSIDs, and HESSIDs covered by a single WT, for moving between WLAN mobility sets within that WT, the WT Modification procedure is used to simplify the process.

13.3.2 LTE-WLAN Radio Level Integration with IPsec Tunnel


LWIP is a feature similar to LWA in terms of the high-level concept; that is, it aggregates resources from LTE and WLAN for a UE in RRC_CONNECTED mode and hides the WLAN from the CN for decreased signaling. The difference between them is that in the case of LWA, the WLAN network needs an upgrade, with the WT and Xw interface, whereas LWIP can be applied to legacy WLAN deployments without the need to upgrade the WLAN, as it uses IPsec tunneling between the eNodeB and the UE for the bearer that is provided through WiFi. Other differences (compared to LWA) are: LWIP supports both DL and UL on the secondary link, and in LWIP the aggregation takes place at the IP level; thus, only a switched bearer can be supported due to the lack of the PDU reordering function of PDCP (i.e., the eNodeB in the DL and the UE in the UL do not transmit packets of the same DRB simultaneously through the LTE and WLAN links). In terms of mobility, the same measurement types, measurement reporting, and mobility framework are used as in LWA (with the difference that the eNodeB does not have CP connectivity to the WLAN, as the WT does not exist in LWIP).
The initial phase of the signaling procedure for LWA and LWIP is the same and includes: UE Capability Information (where the UE indicates which features it supports), RRC Connection Reconfiguration (setting up the measurements for WLAN), Measurement Report (where the UE provides measurements on WLANs that meet the thresholds), and RRC Connection Reconfiguration (where the eNodeB provides the UE with the WLAN mobility set and configures either LWA or LWIP). At establishment, the LWIP-SeGW IP address is sent to the UE together with the WLAN mobility set and bearer configuration (using the RRCConnectionReconfiguration message with lwip-Configuration). After WLAN association (where the UE selects a specific WLAN AP and authenticates using EAP/AKA), the UE establishes an IPsec tunnel, where the IPsec keys are derived from the LTE KeNodeB, and there is one IPsec tunnel for all the data bearers that are configured for LWIP. In terms of UP upgrades at the eNodeB and UE side for the UL,7 LWIP uses the LWIP Encapsulation Protocol (LWIPEP) to encapsulate the IP packets with a GRE header and transfer them through the LWIP tunnel8 [3]. The LWIP tunnel management (i.e., establishment and release) is independent of the data bearer management (i.e., configuration and resource release) through the LWIP procedures. Through this, the WLAN mobility set updates can be decoupled from the actual data transmission, and the UE can have tunnels ready even if there is no data at the moment.
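The LWIPEP encapsulation step can be illustrated with a simplified GRE-style header whose key field carries a bearer identifier; the layout below follows the generic GRE format with the key extension, while the exact LWIPEP usage is defined in 3GPP TS 36.361, so treat this as a sketch:

```python
# Toy LWIPEP: wrap an IP packet in a GRE-style header (flags, protocol, key)
# before sending it through the LWIP/IPsec tunnel over WLAN.
import struct

GRE_FLAG_KEY = 0x2000   # K bit: Key field present
PROTO_IPV4 = 0x0800     # EtherType of the encapsulated payload

def lwipep_encapsulate(ip_packet: bytes, key: int) -> bytes:
    """The GRE Key field is used here as a bearer identifier (simplified)."""
    return struct.pack("!HHI", GRE_FLAG_KEY, PROTO_IPV4, key) + ip_packet

def lwipep_decapsulate(frame: bytes):
    flags, proto, key = struct.unpack("!HHI", frame[:8])
    if not (flags & GRE_FLAG_KEY) or proto != PROTO_IPV4:
        raise ValueError("not a frame produced by this sketch")
    return key, frame[8:]

frame = lwipep_encapsulate(b"IP-PAYLOAD", key=3)
print(lwipep_decapsulate(frame))  # -> (3, b'IP-PAYLOAD')
```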

13.3.3 RAN Controlled LTE-WLAN Interworking (RCLWI)


Compared to LWA or LWIP, RCLWI is a slightly different concept of LTE-
WiFi interworking where instead of aggregating resources on the radio level,
RCLWI is a harmonized traffic steering decision added to the 3GPP hando-
ver concept (also for UEs in the RRC_CONNECTED state). The Release 12
RAN-Assisted LTE-WLAN Interworking (RALWI) served as a starting point
for this feature, where the RAN provided rules (using system information or
dedicated signaling) to switch from LTE to WLAN or vice versa, based on UE
radio measurements and load conditions between the two systems, with RAT
selection decision made by the UE (see details in [8]). Release 13 RCLWI goes
one step further, where the traffic steering decision is taken by the RAN, based
on UE measurement report, so the roles are switched: the UE provides mea-

7. For the DL, the packets received from the IPsec tunnel are directly forwarded to the upper
layers [3].
8. More specifically, LWIP tunnel is a tunnel between eNodeB LWIPEP entity and UE LWIPEP
entity, whereas IPsec tunnel is established between the LWIP-SeGW and UE.
286 From LTE to LTE-Advanced Pro and 5G

surements, and RAN decides on offloading, giving more control to the MNO
(however, the RALWI mechanism with RAN-rules can be used in parallel as it
is applied for RRC_IDLE). What RCLWI has in common with LWA is the
architecture (Xw interface and WT entity) and the deployment scenarios, that
is, co-located (WLAN AP and LTE small cell node) and non-co-located (with
nonideal backhaul).
The mobility measurements are the same as for LWA, and WLAN mobility set
management is common for both. However, the actual mobility decision differs
in the following manner: in LWA, the decision is to add a secondary link for
throughput improvement, and the radio bearer is forwarded or split from the
eNodeB (i.e., the EPS bearer is mapped to the radio bearer), whereas in RCLWI,
the decision is to switch the UP link from LTE to WLAN or vice versa, making
it a handover-like mechanism (thus, there is no UP interface defined between
eNodeB and WT). In order to be able to switch back from WLAN to LTE, the
signaling radio bearer (and thus RRC signaling) is kept at the LTE side, and
the WLAN measurements and intra-WLAN mobility across WLAN mobility
sets are handled by the LTE anchor cell. Both steering decisions are
provided to the UE using the RRC Connection Reconfiguration procedure
with mobilityControlInfo providing the RCLWI-specific IEs. Similar to LWA,
the UE is able to move freely among the WLAN APs that are under the WT of
the currently configured WLAN mobility set (i.e., the AP switch/association is
based on the UE decision, without informing the serving eNodeB). However,
UE is required to provide the feedback to the eNodeB about the WLAN con-
nection status, so that upon failure, the eNodeB can quickly act and switch the
UP connection back to LTE.

13.3.4 Summary of the LTE-WLAN Interworking Schemes


All the above schemes can be configured within the same eNodeB. However,
the RAN cannot configure more than one out of LWA, LWIP, and RCLWI (and
DC) simultaneously for a single UE.9 Thus, a single UE can use only two radio
links and a single feature at a time (regardless of whether this is LTE-only
DC or LTE with WiFi resource aggregation). All the above schemes use common
WLAN measurements, including WLAN ID, WLAN band/frequency/
channel, RSSI, backhaul rate, admission capacity, and channel utilization. The
selection of the feature for a particular UE depends on the UE capabilities (i.e.,
UE needs to indicate LWA, LWIP and/or RCLWI support in the UECapa-
bilityInformation message) and the network configuration and capabilities (e.g.

9. However, as mentioned in Section 13.3.3, the UE can be simultaneously configured with


RALWI (i.e., the passive interworking scheme from Release 12) and any of the Release 13
active interworking features. In this case, the active ones take precedence over the passive one
(i.e., the RAN rules apply for RRC_IDLE mode only, whereas the active TS actions and ag-
gregation apply to the RRC_CONNECTED).
LTE-Advanced Pro: Enhanced LTE Features 287

LWIP does not require new entities and interfaces, whereas LWA is more flex-
ible, but requires upgrading the network with Xw interface, LWAAP, and WT
logical node). Table 13.2 compares these three features.

Table 13.2
Release 13 LTE-WiFi Interworking Features Comparison

Feature: LWA | LWIP | RCLWI
Type/purpose: Aggregation/improved throughput | Aggregation/improved throughput | Offload/load balancing
Deployment: Co-located (ideal/internal backhaul), non-co-located (nonideal backhaul) | Non-co-located (using legacy WLAN) | Co-located or non-co-located
Aggregation level: PDCP (tight interworking with flow control/PDCP scheduler) | IP | Core network
Bearer type: Switched bearer, split bearer (with in-order delivery support) | Switched bearer | CN offload
WLAN type: Upgraded WLAN | Legacy WLAN | Upgraded WLAN
Network upgrade: Xw-C, Xw-U, WT (for the non-co-located scenario), LWAAP | LWIPEP, LWIP-SeGW | Xw-C, WT
WiFi link: DL (Release 13), DL and UL (Release 14) | DL and UL | DL and UL
Flexibility/performance: Highest (dynamic bearer aggregation and fast bearer switching) | Medium (fast bearer switching) | Lowest (only one link and handover-like switch)

13.4 LTE Operation in Unlicensed Spectrum

Tight LTE interworking with WiFi is just one way for MNOs to offload traffic
to unlicensed spectrum bands. The other one, discussed within this section, is
based on a specialized design of the LTE system able to cope with the special
requirements of the ISM/license-exempt bands, for example, fair coexistence
with other users of this spectrum (e.g., legacy WiFi or other LTE operators
using unlicensed bands). This was defined under the 3GPP Release 13
Licensed Assisted Access (LAA) feature and is aimed at leveraging investments
in the existing or planned LTE infrastructure (i.e., reusing small cells). By
doing so, there is no need to integrate two independent systems (i.e., LTE and
WiFi, as in LWA) to use unlicensed spectrum; instead, the operation stays
under a common LTE framework, which is more flexible in terms of resource
allocation. However, there are two other systems/features present in the
industry, namely, LTE-Unlicensed (LTE-U) and MulteFire. Neither of them is a
3GPP-standardized feature; however, it is worth mentioning both to illustrate
the differences, as there was a huge debate on allowing the LTE system (i.e.,
a system operated by big MNOs) to use the free-for-all spectrum, where IEEE
802.11 had a significant voice in evaluating the LTE-in-unlicensed scheme (to
make sure that fairness is achieved).

13.4.1 Licensed-Assisted Access


LAA is a concept of carrier aggregation between the licensed LTE carrier [pri-
mary component carrier (PCC)] and one or more (up to 4) unlicensed LTE
carriers [secondary component carrier(s) (SCC)] at 5-GHz band (more spe-
cifically, within frequency range of 5,150–5,950 MHz, band 46). This means
that up to 80-MHz BW can be aggregated within unlicensed spectrum. This
provides significant additional capacity for the system for boosting performance
(primarily for BE data), managed by licensed carrier, acting as reliable signaling
and mobility anchor (as well as a resource for non-BE QoS services). As
Release 13 specifies LAA for DL only, it is an SDL (supplemental downlink)
type of CA, where the feedback (PUCCH) for the SCCs is provided over the
licensed UL counterpart (UL PCC).

13.4.1.1 LAA Deployment


LAA design provides a different way of aggregation of resources from unlicensed
spectrum compared to, for example, LWA (i.e., the resources are managed by
MAC layer scheduler instead of flow control at PDCP level). This also implies
that LAA requires ideal backhaul between the licensed and unlicensed spectrum
transmission points when aggregated. One of the limitations for deployment is
that due to the use of unlicensed spectrum the transmit power has to be low
(according to ISM/license exempt band regulations). This excludes macro-only
deployment scenario. Therefore, the aggregation can take place either within a
stand-alone small cell or in a non-co-sited case with ideal backhaul (i.e., with
the use of RRH-based small cells). Alternatively, for nonideal backhaul, there
is a possibility to combine LAA with DC, where the macro cell providing the
MCG serves as the mobility anchor, and the small cell's SCG, with the PSCell
on a licensed carrier, supports SCell(s) with unlicensed LTE carriers. Figure 13.3
shows the different deployment options as per the above description [9].

13.4.1.2 Unlicensed Channel Access in LAA


One of the design principles for LAA was that it should not impact other users of
unlicensed band (e.g., WiFi or other LAA systems) more than other/additional
WiFi systems operating on the same carrier. The main changes to the PHY
layer design for LAA come from the fact that because the system is operating in
unlicensed spectrum, the PHY layer needs to support clear channel assessment
(CCA) mechanism, where the transmission is allowed, only if the transmission
channel is unoccupied by others. This is achieved by sensing, which allows one
to avoid collisions between the two systems operating in the same spectrum.

Figure 13.3 LAA deployment scenarios. (© 2015. 3GPP™ TSs and TRs are the property of ARIB, ATIS, CCSA, ETSI, TTA, and TTC.)

Thus, for LAA, the Listen Before Talk (LBT) with exponential back-off mecha-
nism is used. Specifically, in LBT, the transmitter senses the channel to verify if
it is occupied or free for transmission. If it is free, the transmission is possible;
if it is not free, transmitter performs a back-off procedure for waiting and tries
again [i.e., a random number N is selected within contention window and the
channel is sensed to be idle during the predefined time multiplied by N before
transmitting (the detailed procedure is described in [10])].
The timing rules and energy-threshold-related parameters important for the
LBT configuration are defined as follows:

• Contention window (CW) size: For the exponential back-off mechanism,


the CW defines the range to randomly select the number of waiting
periods after contention is detected. The larger the CW size, the longer
the waiting time (i.e., the longer the channel needs to be sensed), but
lower probability of contention; the lower the number, the shorter the
waiting time, but larger probability of contention. It can be adaptively
changed within the defined ranges of CWmin and CWmax (see [10]) based
on HARQ feedback (i.e., the more NACKs received the longer the CW
should be to allow for more reliable transmission). The procedure of up-
dating the CW is standardized together with the allowed values for CW.
• Energy detection (ED) threshold: Before entering the channel (i.e., before
transmitting), the eNodeB checks if the channel is available. This check
is carried out via the signal energy level: if the sensed energy is above this
threshold, the channel is assumed to be occupied. The higher the thresh-
old, the easier to get into the channel, but the fairness is lower. The lower
the threshold, the harder it is to get into the channel (i.e., the channel
appears busy more often, because even weaker interfering signals are detected).
• Maximum channel occupancy time (MCOT): Once the channel is avail-
able for transmission, this parameter defines how long the channel can
be occupied. As the basic unit in LTE is TTI (1 ms), the value is in milli-
seconds and depends on the Channel Access Priority Class (1...4), which
reflects different traffic models (QCIs mapped to Channel Access Priori-
ty Classes). When multiplexing different traffic types, the maximum and
minimum MCOTs should reflect the requirements of the multiplexed
data, that is, should support minimum and maximum values of the cor-
responding QCIs (see [10] for detailed values).

The major changes in the PHY layer come directly from the need for LBT;
that is, the PHY layer needs to support the sensing procedure (as defined
above by the operational parameters), discontinuous transmission (DTX), and a
limited maximum transmission duration. In LAA, the licensed signaling anchor (PCell)
uses the standard frame type, whereas the unlicensed secondary carriers (SCells)
use the new LTE frame format. The format 3 radio frame type (applicable to
LAA SCell operation only), is a modified version of the standard LTE radio
frame. It is also (like regular LTE frame) 10 ms long with 10 subframes, while
the DL transmission can occupy one or more consecutive subframes (specifi-
cally, ranges from 2 to 10 consecutive subframes depending on the Channel
Access Priority Class). The DL burst can start in the first or second slot of
a subframe (according to the subframeStartPosition IE signaled via RRC within
DedicatedPhysicalConfiguration), and the last subframe of the specific
transmission can be either a full subframe or a partial subframe (i.e., any of
the TDD DwPTS configurations: 3 to 12 OFDM symbols, signaled via DCI Format
1C) [11].
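The burst-length arithmetic implied by the above can be illustrated with a small helper (normal cyclic prefix assumed, 14 OFDM symbols per subframe):

```python
def laa_burst_symbols(n_subframes, start_second_slot=False, end_symbols=14):
    """OFDM symbols in a frame type 3 DL burst (normal CP).

    The burst spans n_subframes consecutive subframes; it may start in the
    second slot of the first one (7 symbols lost) and may end with a partial
    subframe of 3..12 symbols (any TDD DwPTS configuration) instead of a
    full 14-symbol subframe.
    """
    assert end_symbols == 14 or 3 <= end_symbols <= 12
    symbols = (n_subframes - 1) * 14 + end_symbols
    if start_second_slot:
        symbols -= 7
    return symbols

# a 4-subframe burst starting at the second slot and ending with a
# 12-symbol DwPTS-style partial subframe:
assert laa_burst_symbols(4, start_second_slot=True, end_symbols=12) == 47
```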

13.4.1.3 Multicarrier and Multiantenna Operation


As already mentioned, in LAA, there is a possibility to simultaneously support
up to 4 SCells (within Release 13) operating in the unlicensed spectrum. De-
pending on the configuration, there are different possibilities for channel access
when having more than one active LAA SCell [10]:

• The channel access procedure is performed separately on each SCell


with the random back-off number N determined individually for each
SCell.
• The value for N is determined for all active SCells based on the carrier
that has the largest CW.

Additionally, the SCells can be used simultaneously or can be randomly


switched in a one-by-one fashion with minimum 1-second periods.
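The two options for determining the back-off counter can be captured compactly; the SCell identifiers and CW values below are chosen arbitrarily:

```python
import random

def backoff_counters(scell_cws, common=False):
    """Back-off counter N per active LAA SCell, per the two Release 13 options.

    scell_cws maps SCell id -> current contention-window size.
    common=False: independent draw per carrier (first option).
    common=True: one draw based on the carrier with the largest CW (second
    option), applied to all active SCells.
    """
    if common:
        n = random.randint(0, max(scell_cws.values()))
        return {cell: n for cell in scell_cws}
    return {cell: random.randint(0, cw) for cell, cw in scell_cws.items()}

counters = backoff_counters({"scell1": 15, "scell2": 63}, common=True)
assert counters["scell1"] == counters["scell2"] <= 63
```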
For the measurement of the individual carrier to be selected for LAA
SCell operation, the eNodeB may configure UE for detection/measurements
of LAA carriers. The measurement is taken on the Discovery Reference Signals
during a specified DMTC (Discovery Signal Measurement Timing Configura-
tion) window, where the UE measures average RSSI and channel occupancy
(i.e., percentage of samples where RSSI is above threshold).
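The occupancy metric can be computed directly from the RSSI samples collected in the DMTC window; the threshold value below is an arbitrary example, as the actual threshold is configured by the network:

```python
def dmtc_report(rssi_samples_dbm, threshold_dbm=-72.0):
    """Average RSSI and channel occupancy over a DMTC measurement window.

    Channel occupancy is the fraction of samples whose RSSI exceeds the
    configured threshold.
    """
    n = len(rssi_samples_dbm)
    avg = sum(rssi_samples_dbm) / n
    occupancy = sum(s > threshold_dbm for s in rssi_samples_dbm) / n
    return avg, occupancy

avg, occ = dmtc_report([-60.0, -80.0, -70.0, -90.0])
assert occ == 0.5   # two of four samples above -72 dBm
```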
In addition to the above configuration aspects, LAA (depending on UE
capabilities) may support very high throughput transmission modes, that is:

• Transmission mode 9 (TM9): with up to 8 layers’ transmission and pre-


coded reference signals, using DCI format 2C;
• Transmission mode 10 (TM10): with up to 8 layers’ transmission, where
the transmit antennas may be distributed among different physical loca-
tions (CoMP-like), using DCI format 2D.

The UE capabilities for this purpose are signaled differently than the reg-
ular LTE operation, through tm9-LAA-r13 and tm10-LAA-r13 IEs within the
UE Capability procedure [7] together with downlinkLAA-r13 support indicat-
ing, if the UE supports LAA in general.

13.4.1.4 Practical Issues


Some practical aspects of LAA management and performance, as a feature
enabling the LTE system to enter unlicensed spectrum, are:

• As the LAA SCell is operating on the unlicensed channel (i.e., on less


reliable resources, with unpredictable interference), not all traffic can or
should be transmitted through it (e.g., the services with tight latency
requirements should not be mapped to LAA resources). Thus, it is up to
the eNodeB RRM decision to decide which logical channel should be
mapped to LAA SCell(s).
• In the multioperator case using LAA in the same region, but using dif-
ferent nodes, the PCI configuration aspect is important due to potential
confusion. In such a case, a UE may be directed to use specific SCell
which is being used by two different operators (unlicensed spectrum is
shared) with two different PCIs. Thus, the conflict resolution and recon-
figuration of the PCI should be subject to SON function.
• In terms of comparison with LWA, LAA should have better performance,
due to tighter control of the unlicensed spectrum resources (LWA aggre-
gates resources on PDCP, whereas LAA aggregates resources on MAC),
possibility to capture more dynamic changes on the unlicensed spectrum
and centralized load control. However, LWA has less impact on existing
infrastructure and devices (only SW upgrade is needed), whereas LAA
requires the HW upgrade.

13.4.2 Nonstandardized Unlicensed LTE Access Schemes


LAA is a standardized 3GPP LTE operation in unlicensed spectrum that can
be applied globally due to fulfilling the regulatory requirements on the LBT
mechanism required by most countries. However, before this was standardized
within Release 13, an alternative was proposed as a proprietary solution based
on LTE Release 12, namely, LTE-U. Another concept, also a proprietary,
non-3GPP-standardized technology, is currently being developed for
stand-alone unlicensed operation and is called MulteFire. Table 13.3
summarizes the three technologies for unlicensed LTE operation.

Table 13.3
Comparison of the LTE Operation in Unlicensed Spectrum

System: LAA | LTE-U | MulteFire
Specification body: 3GPP | LTE-U Forum | MulteFire Alliance
Operating spectrum: 5 GHz | 5 GHz | 3.5 GHz (GAA for the United States), 5 GHz
Operation: CA SCell aggregation (SDL) (Rel-13), DC SCG (Rel-14 eLAA) | CA SCell aggregation (SDL) | Stand-alone (based on LAA and eLAA)
Coexistence mechanism: LBT | CSAT | LBT
Deployment possibility: Worldwide (compliant with regulations of most countries) | China, Korea, India, United States | Worldwide (supports LBT)
Transmission direction: DL only (Release 13), DL and UL (Release 14 eLAA) | DL only | DL (LAA-based) and UL (eLAA-based)
Licensed anchor cells: Yes, FDD or TDD | Yes, FDD | No (stand-alone)
Unlicensed frame structure: Frame type 3 (LAA) | Frame type 1 (FDD) with enhancements | Frame type 3 (LAA)
Changes to licensed LTE: High (LAA PHY) | Low (fast to deploy, regular LTE PHY) | High (stand-alone RAN, LAA PHY)
Support for neutral host: No (bound to specific MNO due to licensed anchor) | No (bound to specific MNO due to licensed anchor) | Yes (MNO-agnostic, connected to EPC)

13.4.2.1 LTE-U
LTE-Unlicensed is specified by the LTE-U Forum [12] as a proprietary tech-
nology. The high-level concept of accessing the unlicensed resources is similar
to LAA (i.e., it is based on CA with SCell anchored at LTE-licensed PCell).
However, the SCell design and unlicensed channel access is different to LAA.
First, it is not using any special radio frame structure, but is based on regu-
lar LTE FDD DL radio frame (i.e., frame type 1) for LTE-U SCell, with the
small cell ON/OFF scheme and Discovery Reference Signals to support DTX.
Second, instead of LBT, it is using the Channel Selection and Carrier Sensing
Adaptive Transmission (CSAT) scheme. The mechanism works as follows: the
eNodeB senses the available unlicensed channels to choose the empty channel
to avoid interference to/from WiFi and does that on an ongoing basis, that is,
constantly measures channels and switches, when the currently used one is oc-
cupied. Only if there are no empty/clean channels, it enters into the coexistence
CSAT scheme. This is based on adaptive duty cycle, where the LTE is on (i.e.,
transmitting regular DL frames) during a specified percentage of time, and then
it is off for the rest of the period where the WiFi is allowed to use this time.
The eNodeB senses the chosen channel for up to 200 ms and based on channel
occupancy, the percentage of LTE ON time is adjusted (i.e., the duty cycle is
subject to WiFi activity). As the channel gets more or less congested, the duty
cycle is adaptively changed to support fair sharing. The major problem with
this approach (that some stakeholders raised) is that it is the LTE system that
decides on how much time it occupies, and WiFi needs to follow. Thus, this is
not considered to be fair. Also, this solution then can only be applied in several
countries, like the United States, Korea, China, and India where there are no
regulatory restrictions on such an approach. The regulatory bodies in the other
countries require the LBT mechanism for assuring fair sharing, thus LTE-U
cannot be used there.
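A rough sketch of the duty-cycle adaptation described above; the linear mapping and the bounds are illustrative placeholders, since the actual CSAT algorithm is vendor-specific:

```python
def csat_duty_cycle(wifi_occupancy, d_min=0.2, d_max=0.8):
    """Adapt the LTE-U ON fraction of the CSAT cycle to measured WiFi activity.

    wifi_occupancy is the fraction of the sensing window (up to ~200 ms) in
    which the channel was busy; the LTE ON share roughly yields the busy
    share back to WiFi, clamped to illustrative bounds.
    """
    duty = 1.0 - wifi_occupancy
    return min(d_max, max(d_min, duty))

assert csat_duty_cycle(0.0) == 0.8   # empty channel: maximum LTE ON time
assert csat_duty_cycle(0.5) == 0.5
assert csat_duty_cycle(0.9) == 0.2   # congested: back off to the floor
```

The fairness criticism mentioned above follows directly from this structure: the LTE side alone sets the duty cycle, while WiFi has no say in the split.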

13.4.2.2 MulteFire
MulteFire system is being specified by MulteFire Alliance [13], where the basic
approach is to support LTE in unlicensed spectrum as a stand-alone opera-
tion (i.e., without being anchored to licensed PCell counterpart). Thus, this
technology aims at enabling small cell operation solely in unlicensed spectrum.
Similar to LTE-U, it is also a non-3GPP-standardized system and therefore may
be considered as a stand-alone RAT (connected to EPC); however, it is solely
based on Release 13 LAA for DL and Release 14 eLAA for UL operation, to
enable global reach (i.e., MulteFire fulfills regulatory requirements on using
LBT). Due to lack of the licensed anchor, MulteFire allows neutral host con-
cept, where multiple operators share the MulteFire resources. However, due
to the stand-alone operation, certain enhancements for the signaling support
are added to the regular LAA operation including mobility, signaling, paging,
and system information support. An additional difference to other unlicensed
spectrum access schemes is that MulteFire also aims to support the 3.5-GHz
band in the United States under the Citizens Broadband Radio Service (CBRS)
framework as General Authorized Access (GAA), allowing the use of up to
80-MHz BW if it is not occupied by incumbents. The first release of the
MulteFire specification was published in April 2017 [13].

13.5 Carrier Aggregation Enhancements


LTE-Advanced Release 10 initially specified the carrier aggregation feature with
the limitation of utilizing a maximum of 5 component carriers (CCs), as indicated in
Chapter 10. This was propagated throughout Release 11 and Release 12 with
the adjustments that relaxed other features of CA, for example, by enabling si-
multaneous aggregation of FDD and TDD CCs or by enabling support for dif-
ferent TDD configurations in the aggregated CCs. Release 11 also introduced
the possibility to use different timing advances for different CCs. This enabled
CA utilization in the non-co-located scenario where different CCs are provided
by different TRPs (Transmission/Reception Points). Release 13 brought into


play the unlicensed spectrum with the introduction of LAA (see Section 13.4),
which also utilizes the concept of CA. Even after such changes, the maximum
configuration of 5 CCs has become a limitation to fully utilizing the spectrum
aggregation possibilities. For example, it is not possible to combine several
licensed SCells with several unlicensed SCells, and it is not possible to
compete with WiFi, where the maximum aggregated BW can be up to 160 MHz.

13.5.1 Massive CA
Within LTE-Advanced Pro, a further extension for CA has been provided,
namely, massive CA with up to 32 aggregated CCs. In this, different frame
structures can be combined, that is, Frame Structure Type 1 (FDD), Type 2
(TDD), and Type 3 (LAA), so that licensed CCs can be combined with unli-
censed CCs. A total BW of 640 MHz can be utilized in a combined fashion,
while still assuring backwards compatibility with LTE Release 8 channel BWs
and frame structures (as well as LAA). Massive CA still requires a single PCell
that can be accompanied with up to 31 SCells. This provides a higher degree of
flexibility for resource handling with fast adaptation via MAC scheduling and
helps to achieve higher throughputs. Additionally, it reduces the need for costly
HO (in terms of signaling load and service interruption) for the purpose of
load balancing between carriers. This reduction is possible due to the scheduler
that takes over the responsibility for distributing traffic among carriers, for the
maximum efficiency and congestion avoidance. On the downside, the scheduling
complexity, PHY feedback signaling, and RF complexity increase. To support
non-co-located CA, that is, to distribute the provisioning of the different
serving cells through different TRPs, Release 11 specified the Timing Advance
Group (TAG), which defines a group of serving cells that use the same timing
reference, use the same TA value, and use a single RA procedure to establish
timing alignment. Massive CA inherited from Release 11 the maximum number of
TAGs, that is, one pTAG (primary TAG, used for the serving cell group
associated with the PCell) and up to 3 sTAGs (secondary TAGs, used for serving
cell groups associated with SCells), thus enabling the 32 CCs to be
distributed across 4 TRPs [3].
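The TAG limits described above can be captured in a small validation helper; the TAG and cell names below are illustrative:

```python
def validate_tag_config(tags):
    """Check a massive-CA timing-advance-group layout (Release 13 limits).

    tags maps TAG name -> list of component carriers; exactly one pTAG
    (containing the PCell) and at most 3 sTAGs, 32 CCs in total.
    """
    assert "pTAG" in tags and "PCell" in tags["pTAG"], "pTAG must hold the PCell"
    stags = [t for t in tags if t != "pTAG"]
    assert len(stags) <= 3, "at most 3 sTAGs"
    n_ccs = sum(len(ccs) for ccs in tags.values())
    assert n_ccs <= 32, "at most 32 aggregated CCs"
    return n_ccs

# 1 PCell + 31 SCells spread over 4 TRPs (one TAG per TRP):
tags = {"pTAG": ["PCell"] + [f"SCell{i}" for i in range(1, 8)],
        "sTAG1": [f"SCell{i}" for i in range(8, 16)],
        "sTAG2": [f"SCell{i}" for i in range(16, 24)],
        "sTAG3": [f"SCell{i}" for i in range(24, 32)]}
assert validate_tag_config(tags) == 32
```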

13.5.2 Uplink Enhancements


Up to Release 12, all the feedback from all SCells is required to be provided
via the PCell PUCCH, and the maximum number of bits that is provided by
PUCCH Format 3 is 22. This was a significant limitation for the introduction
of massive CA, in which, if there are 32 FDD DL CCs activated (theoretical
maximum value), and each of the CCs is configured with SU-MIMO with two
transport blocks, the number of HARQ feedback bits is 32 * 2 = 64 (not to
mention DL-heavy TDD configurations, which can end up requiring a maximum of
576 bits). Therefore, Release 13 introduces two main improvements to cope with
the UL feedback:

• Possibility to configure one of the SCells with PUCCH; in this case,


there can be simultaneously one PUCCH at PCell and one PUCCH
at SCell (PUCCH SCell). In this configuration, some of the serving
cells’ UL feedback is associated (by means of RRC configuration) with
PUCCH on PCell (primary PUCCH group), while other serving cells’
UL feedback is associated with PUCCH on SCell (secondary PUCCH
group) [3].
• New PUCCH formats to transfer more feedback bits in a single shot,
including: PUCCH format 4, utilizing 144 QPSK symbols (to support the
largest payload of all PUCCH formats), without support for code
division multiplexing (CDM), using single- or multi-PRB transmission;
and PUCCH format 5, with CDM (thus enabling two simultaneous PUCCH
format 5 transmissions in the same resources), utilizing 72 QPSK
symbols and single-PRB transmission. Additionally, dynamic switching
between these two formats (and also PUCCH format 3) enables adapting
the PUCCH format to the actual uplink control information (UCI) load,
which results from the number of active CCs and their configuration
[11, 14].
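The feedback arithmetic motivating these changes can be sketched as follows. The format capacities assume QPSK (2 bits per symbol), and the selection function is a simplification of the standardized dynamic switching:

```python
def harq_feedback_bits(n_ccs, tb_per_cc=2, dl_per_ul=1):
    """HARQ-ACK payload for one feedback occasion under massive CA.

    dl_per_ul is the number of DL subframes acknowledged per UL opportunity
    (1 for FDD; up to 9 for DL-heavy TDD configurations).
    """
    return n_ccs * tb_per_cc * dl_per_ul

# the extreme cases quoted above:
assert harq_feedback_bits(32) == 64                # 32 FDD CCs, 2 TBs each
assert harq_feedback_bits(32, dl_per_ul=9) == 576  # DL-heavy TDD
# PUCCH format 3 carries only 22 bits, hence the new formats 4 and 5

def pick_pucch_format(uci_bits):
    """Choose the smallest PUCCH format that fits the UCI payload."""
    if uci_bits <= 22:
        return "format3"
    if uci_bits <= 144:       # format 5: 72 QPSK symbols -> 144 bits
        return "format5"
    return "format4"          # format 4 scales with multiple PRBs
                              # (288 bits per PRB with 144 QPSK symbols)

assert pick_pucch_format(64) == "format5"
assert pick_pucch_format(576) == "format4"
```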

13.5.3 UE Support
In order to utilize the full potential of massive CA, the UE should be capable
of supporting the associated features [15]. The mandatory parameters are the
support of massive CA itself and, accordingly, the number of supported CCs
and their configuration. The UE may optionally
support: PUCCH transmission on SCell (pucch-SCell-r13), cross-carrier
scheduling (crossCarrierScheduling-B5C-r13) for beyond 5 DL CCs (as the CIF
specified before Release 13 provided only 3 bits to address the CCs), and
multiple TAGs (multipleTimingAdvance) for each band combination.
However, from a practical point of view, within Release 13, still a maximum of
5 DL CCs can be aggregated, as defined by RAN WG 4 (being the 3GPP group
specifying the allowed band combinations for CA). With 32 component carriers,
there is a huge impact on RF front-end complexity and power consumption, which
specifically affects UE devices. The details are out of the scope of this
chapter.

13.6 Device-to-Device Communications


3GPP device-to-device communications (D2D)10 enables the direct over-the-
air communication between devices within or outside mobile network cover-
age.11 For this, LTE network is still in control of communication as the devices
use the licensed spectrum. D2D was standardized to enable mainly public safety
services (e.g., to deal with disaster or emergency situations, where the network
is not available or congested), but also commercial uses (e.g., wearable devices
communicating with smartphone or content sharing to offload the network
resources). Additional advantages of using this direct communication include:

• Better resource utilization, where the users between themselves have bet-
ter direct channel conditions, compared to the individual links towards
eNodeB;
• Extending the coverage of the cell; decreasing power consumption of
the low-end devices that can communicate with a gateway close to them;
• Gains achieved due to trunking where a single UE is responsible for
communicating with the eNodeB on behalf of multiple devices (e.g.,
RACH occupancy decrease due to single RRC connection to the net-
work).

The D2D is an overall concept of the communication between devices


(which are close to each other). On top of that, 3GPP has standardized the
ProSe (Proximity-based Services) feature (all the details can be found in [15])
that spans across all network domains, including services, core network
functionality, and the RAN concept. On the radio side, a concept of sidelink (SL) has
been introduced to encompass the actual communications between UEs, to
complement DL and UL related to the UE-to-network interface. The overall
design and individual protocol aspects are provided in [3] and individual RAN
protocol specifications.

13.6.1 D2D Scenarios and Architecture


Figure 13.4 shows the scenarios for D2D operation, in which the UEs take the
transmission and reception roles interchangeably; the scenarios are divided into [3]:

• In-coverage: Where both UEs are located within network coverage and
have communication link between themselves and another communica-

10. D2D has been originally specified within Release 12, with major updates within Release 13.
11. Direct D2D communications must be operational in the absence of network infrastructure.

Figure 13.4 D2D deployment and communication scenarios.

tion link to the RAN, thus the RAN controls the resource usage between
them for ProSe communications. In this case, the synchronization is
provided to both devices from the eNodeB. The nonpublic safety (i.e.,
commercial) ProSe applications can operate only in this mode.
• Out-of-coverage:12 Where all UEs having active D2D communication
are located outside network coverage (i.e., none of them has active RRC
connection to an eNodeB). In this case, one of the UEs provides a signal
reference to the others for synchronization purposes. As the RAN is not
able to communicate the resource to be used by the UEs, the UEs are
preconfigured with the available resources, to be used for transmission.
This operation is available only for the public-safety use cases.
• Partial coverage: Where one of the devices is located within network
coverage (and has the active connection to the network) and the other
is located outside the cell’s coverage. The synchronization to the out-
of-coverage UE is relayed by the in-coverage UE. The resources of the
in-coverage UE are controlled by the RAN, whereas the out-of-coverage
UE uses the preconfigured resources. This operation is available only for
the public-safety use cases.

The communication within D2D framework encompasses: ProSe Direct


Discovery (where the two UEs are identified to be in proximity) and ProSe Di-
rect Communications (where the UEs communicate with each other using the
EUTRA technology and licensed resources, which are reserved by the network
for that purpose). Additionally, the devices may act as relays between the NW

12. Note that in-coverage or out-of-coverage relates only to the lack of coverage of the carriers/
frequencies that are available for ProSe/D2D communications (so the UE may still be in
coverage of the general cellular communications cells).

and remote UE (ProSe UE-to-Network Relay, where the UE relays DL and UL


between remote UE and the NW, for coverage enhancement purposes) or to
each other, or the transmission from a single UE may be received by multiple
UEs (broadcast/groupcast-like services). The UE optionally may support dif-
ferent sets of features as specified by the UE capabilities, namely, ProSe Direct
Discovery, ProSe Direct Communication, and ProSe UE-to-Network Relaying
[7]. There are two models for ProSe Direct Discovery, namely, model A, where
the UE announces (i.e., broadcasts discovery messages at predefined intervals)
its presence to the potential UEs in proximity (by means of, for example, ProSe
Application ID); and model B, where the UE transmits a request to obtain
certain information (e.g., ProSe Application ID) and the receiving UEs may
respond with some information to the requesting UE.
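The two discovery models can be sketched as a toy message exchange; the message fields and the matching rule are simplifications, not the actual PC5-D message format:

```python
def model_a_announce(app_id):
    """Model A ('I am here'): periodically broadcast the ProSe App ID."""
    return {"type": "announcement", "prose_app_id": app_id}

def model_b_exchange(request_app_id, nearby_ues):
    """Model B ('who is there?'): solicit and collect responses.

    nearby_ues maps UE id -> its ProSe Application ID; only UEs matching
    the requested ID respond (a simplified matching rule).
    """
    request = {"type": "solicitation", "target": request_app_id}
    responses = [{"type": "response", "ue": ue, "prose_app_id": app}
                 for ue, app in nearby_ues.items() if app == request["target"]]
    return request, responses

req, resp = model_b_exchange("taxi-app", {"ue1": "taxi-app", "ue2": "game-app"})
assert len(resp) == 1 and resp[0]["ue"] == "ue1"
```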

13.6.2 Architecture and Protocols


The architecture of ProSe is provided in Figure 13.5(a). The add-ons to the
legacy EPS network architecture related to the D2D include: ProSe function,
ProSe application at the device side, the application server and new interfaces.
The subscription information for the ProSe is provided through PC4a to the
ProSe function, while PC3 is used to authorize discovery request and configure
communication between UE and ProSe function, including security param-
eters, IDs and resources configuration for out-of-coverage situation.13 PC5, in
turn, is the interface between the ProSe-enabled devices for CP and UP (includ-
ing discovery, communications and UE-to-NW relay). The details of all the ref-
erence points and entities can be found in [15]. Focusing on the PC5 interface,
the right side of Figure 13.5 shows the relevant protocol stacks that are required
to provide specific functions for ProSe operation [15], namely:

• PC5-D, using the L3 ProSe Protocol, is used solely for sending and receiving
discovery messages; it operates above the MAC layer and is an
application-level protocol (NAS-like).
• PC5-C, using the L3 PC5 Signaling Protocol (on top of PDCP), is used
for CP signaling between devices, including establishment, modification,
and release of the logical side link connection. It is exchanged
between devices using a unicast L2 ID (i.e., in MAC PDU IEs).
• PC5-C is used for broadcasting system information and synchroniza-
tion signals for the out-of-coverage and partial coverage scenarios, using
RRC.

13. This can be interpreted as a “NAS” for D2D.


300
From LTE to LTE-Advanced Pro and 5G

Figure 13.5 ProSe architecture and PC5 Protocol stacks. (© 2015. 3GPP™ TSs and TRs are the property of ARIB, ATIS, CCSA, ETSI, TTA, and TTC.)

• PC5-U is used for UP data exchange between devices, utilizing the IP
protocol stack with the ProSe Application running on top.

The regular L3/L2/L1 protocols of the corresponding PC5 protocol stacks
are modified with respect to the legacy LTE operation, defining the side link.

13.6.3 D2D Radio Interface: Side Link (PC5)


Side link (SL) defines logical CP and UP connections for D2D communica-
tion, where the UEs are talking to each other over the PC5 interface using
E-UTRA (in coverage, partial coverage) or without E-UTRA support (out of
coverage). The most important aspects of the side link communications are
mutual device discovery, providing synchronization to the out-of-coverage us-
ers, and performing transmission/reception of the data. The D2D communica-
tion takes place in the UL resources (i.e., on the UL carrier and using basic UL
frame structure). The modified (compared to DL and UL) channel architecture
is defined for SL and encompasses:

• Physical SL Shared Channel (PSSCH) for the D2D data, where the data
is mapped via the SL Shared (transport) Channel;
• Physical SL Control Channel (PSCCH) for scheduling assignments, us-
ing SL Control Information (SCI);
• Physical SL Broadcast Channel (PSBCH) for broadcasting system in-
formation related to D2D communication, with the system information
provided through SL-BCCH over SL-BCH;
• Physical SL Discovery Channel (PSDCH) for providing discovery
signals.

13.6.4 D2D Direct Communication


For transmission, the resources can be scheduled by the eNodeB (for an in-
coverage scenario) or selected by the UEs by themselves from the resource pool
that is provided by the RAN (for either in-coverage or out-of-coverage sce-
nario). For this purpose, two SL transmission modes are provided: Mode
1 is an eNodeB-controlled resource allocation, and Mode 2 is where the UE performs
autonomous resource selection from resource pools reserved by the eNodeB. In Mode
1, the eNodeB uses DCI format 5 (carried on PDCCH) for scheduling of the SL resources (us-
ing the SL-RNTI). Resource configuration in terms of the resource pool for Mode 2
can be obtained from SIB18 or via dedicated RRC signaling. This is possible
when the UE is in-coverage, and in this case the eNodeB defines which mode
the UE shall use. For out-of-coverage scenarios, the UEs are preconfigured with
specific resource pools, from which they can select resources autonomously.
The important point in this discussion is that there is no new RRC state for
D2D communication; thus, as can be expected, Mode 1 is only possible
when the UE is in RRC_CONNECTED state, while Mode 2 can be used in both
RRC_CONNECTED and RRC_IDLE states, meaning that the operation can
be either bound to the regular cellular connection or independent (thus, both
SL communication and SL discovery procedures are independent of the RRC
state). This also means that no RRC connection needs to be established between
the UEs; rather, an L2 association (where the UEs are identified using
MAC PDUs with source and destination IDs) and direct data transmission are used.
In case of joint operation (dual connectivity with Uu and PC5), the UL and SL
transmissions using the same UL carrier have to use different subframes (i.e., it
is not possible for the UE to transmit UL and SL at the same time). As already
mentioned, at a particular time, there is always one UE in transmit
mode and the other in receive mode (and these roles may be exchanged).
For the transmitting UE to define the transmission of the data transport block,
the Sidelink Control Information 0 (SCI0) is standardized (specifying, for ex-
ample, MCS, resource block assignment, RV order, and timing advance). Note
that, in the case of Mode 1, the eNodeB schedules the resources using DCI
format 5, but this scheduling grant is for the PSCCH (whereas the SCI0 config-
ures the resources of PSSCH). Therefore, the PSCCH always precedes PSSCH
transmission in the allocated resources. The PSSCH uses HARQ, but without
feedback (i.e., no ACK/NACK is provided), making it a blind transmission. Thus,
for robustness reasons, the initial transmission is always followed by 3 consecutive
retransmissions using the HARQ redundancy versions. Therefore, a
single transport block transmission occupies 4 consecutive subframes.
This is important with respect to the dimensioning of the resource pool allocation.
The resources of the UL carrier are always configured as resource pools,
where a resource pool defines two sets of consecutive PRBs (accompanied by a
selected set of subframes), from which the users select resources for a particular
transmission (in Mode 2), or from which the eNodeB selects the resources for a
particular D2D transmission (in Mode 1) [3, 10, 11].
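The fixed 4-subframe footprint of each transport block (one transmission plus three blind retransmissions) translates into a simple rule of thumb for resource pool dimensioning, sketched below (the function names are our own, not specified procedures):

```python
# Each SL transport block occupies 4 consecutive subframes: the initial
# transmission plus 3 blind HARQ retransmissions (no ACK/NACK feedback).
BLIND_HARQ_TRANSMISSIONS = 4

def sl_subframes_needed(num_transport_blocks):
    """Subframes (1 ms each) consumed in the SL data pool by the given
    number of transport blocks."""
    return num_transport_blocks * BLIND_HARQ_TRANSMISSIONS

def max_tbs_per_pool(pool_subframes):
    """Maximum number of transport blocks fitting into a pool spanning the
    given number of subframes."""
    return pool_subframes // BLIND_HARQ_TRANSMISSIONS
```

For example, a pool of 40 subframes can carry at most 10 transport blocks, before accounting for the PSCCH resources that precede each PSSCH transmission.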

13.6.5 Synchronization and Broadcast


Besides the actual transmission of the data, an important aspect of side link
procedures is the synchronization between UEs. In the in-coverage scenario,
the case is simple: both UEs synchronize to the DL reference signals (including
PSS, SSS, and PBCH), whereas the two other scenarios, namely, partial coverage
and out-of-coverage, are more interesting. In the former case, the in-coverage
UE provides a synchronization signal to the out-of-coverage UE using the timing
reference received from the serving cell. In the latter case, either of the UEs
creates its own timing reference and provides it to the associated second UE.
For this purpose, SL synchronization channels are specified [including the primary
and secondary SL synchronization signals (PSSS and SSSS, respectively) and the SL
Broadcast Channel (SL-BCH)]. Their configuration may be provided by SIB18
from the eNodeB or may be preconfigured in the UE UICC for the out-of-coverage
scenario. Compared to the DL synchronization signals, the SL SS use two different
sets of cell identifiers: the first one is called id_net and ranges
between 0 and 167, and the second one is called id_oon and ranges between 168
and 335. id_net is a set dedicated for in-coverage UEs to provide the reference
to the out-of-coverage users, whereas id_oon is for the out-of-coverage UEs
sourcing the timing reference for other out-of-coverage users. Due to this design,
the receiving UEs know the status of the UE providing the synchronization.
The SLSS-transmitting UE also sends the MIB-SL (within SL-BCH), including
the SL system BW, direct subframe, and direct frame number, as well as the inCoverage
flag, stating whether the UE is in coverage or not. The subframe/frame numbers
are present if the UE relays the synchronization from the eNodeB, to tell the
recipients how the MIB-SL transmission is located with respect to the original
EUTRAN timing [3, 7, 11].
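The identifier split described above can be expressed as a simple lookup (the function name is ours; the ID ranges are those defined for the SLSS):

```python
# SLSS identifiers: 0-167 (id_net) are used by UEs relaying network timing,
# 168-335 (id_oon) by out-of-coverage UEs generating their own timing.
ID_NET = range(0, 168)
ID_OON = range(168, 336)

def sync_source_status(slss_id):
    """Infer the coverage status of a synchronization source from its SLSS ID."""
    if slss_id in ID_NET:
        return "in-coverage"
    if slss_id in ID_OON:
        return "out-of-coverage"
    raise ValueError("SLSS ID must be in the range 0..335")
```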

13.7 Evolution of Machine-Type Communications


The initial scope for the support of MTC in the EPS was that the system should
be able to handle this special type of communication for certain services and
special features (as described in Chapter 10) using standard LTE design. From
that point, the major focus in the standardization was to simplify the devices
and system design. 3GPP Release 13 specified three solutions, namely, NB-IoT,
LTE-M (also called eMTC, enhanced MTC), and Extended Coverage GSM
(EC-GSM), to address the following requirements/challenges for the cellular
IoT [16, 17]:

• Support for low throughput, sporadic transmission, and limited mobility,
while at the same time enabling the system to serve a massive number of devices
(e.g., up to 50,000 devices per cell);
• Achieve very low device cost (by limiting its functionality) and improve
energy efficiency (thus enabling battery life of up to 10 years);
• Provide enhanced coverage (up to 20 dB better compared to the regular
LTE system to reach problematic places, like basements, achieving 164
dB of MCL);
• Reuse existing infrastructure (thus enabling low-cost deployment).

To fulfill these requirements, the key solutions for massive MTC include
reduced BW; extended coverage by, for example, TTI bundling; reduced maximum
transmit power; reduced support for DL transmission modes and number
of antenna ports (to SISO and TxDiversity in the DL) together with the use of
a single receive RF chain; reduced TB size; and half-duplex FDD operation. The
remainder of this section discusses NB-IoT, the new radio added to the LTE
framework (eMTC and EC-GSM are enhancements to the existing LTE and
GSM systems and are not covered by this chapter).

13.7.1 Narrowband IoT


NB-IoT has been standardized within Release 13 to support network services
via E-UTRAN with limited channel bandwidth of 180 kHz targeting low-end
IoT devices and applications. It is designed under the LTE system framework
and thus can reuse the existing infrastructure with modified operation of the
different protocols and a self-contained radio frame. It can be deployed using three
modes of operation:

• Stand-alone, as an independent 180-kHz carrier outside or inside the spectrum
of a currently existing GSM system, as a replacement of GSM carriers;
• In LTE guard-band just next to the LTE carrier (e.g., for deployments
where part of the spectrum is unused due to mismatch between the al-
located spectrum chunk and the deployed LTE BW);
• In-band within the LTE carrier, using its resource blocks, allowing a kind of
dynamic spectrum allocation between LTE and NB-IoT.

The operating bands include 700 MHz, 800 MHz, 900 MHz, 1,800
MHz, and 2,000 MHz, thus mostly low frequencies, to provide the largest coverage.
The main simplifications of NB-IoT with respect to LTE are the lack of
connected-mobility support (i.e., handover and the accompanying measurements),
reduced system BW, and system optimizations for efficient data transmission
(with signaling reduction being the key requirement). The full set of simplifications
and system features is described in [3].

13.7.1.1 Air Interface Aspects


NB-IoT operates within a single carrier of 180 kHz for both DL and UL, fit-
ting into one LTE PRB. In the DL, OFDMA multitone is used with subcarrier
separation of 15 kHz, 12 subcarriers in PRB and subframe duration of 1 ms,
thus fitting into the LTE frame structure. In the UL, however, two modes are
available: single-tone with 3.75-kHz and 15-kHz BW, and multitone with 15-
kHz subcarrier separation (thus providing either 12 or 48 subcarriers within the
NB-IoT carrier).
In the UL, the transmission is provided with the narrowband PUSCH
(NPUSCH) and organized in resource units (RUs). For the UL, the multi-
plexing of users can be done within a single TTI in a single NB-IoT carrier.
NPUSCH format 1 is used to carry the user data with a single tone of 3.75 kHz
spanning across 16 slots14 or multitone with 15-kHz subcarrier separation of 1,
3, 6, or 12 subcarriers with 16, 8, 4, and 2 slots, respectively.15 NPUSCH
format 2 is reserved for UCI feedback (ACK/NACK), using a single subcarrier of
3.75 kHz or 15 kHz and spanning across 4 slots [3, 11].
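The resource-unit durations quoted in the footnotes follow directly from the slot lengths; a quick sanity check (the helper below is our own, not a specified formula):

```python
# NPUSCH format 1 resource-unit duration. At 15-kHz subcarrier spacing a
# slot lasts 0.5 ms; at 3.75 kHz the symbols (and slots) are 4 times longer,
# i.e., a 2-ms slot. The function name and interface are our own.

def npusch_ru_duration_ms(subcarrier_spacing_khz, num_subcarriers):
    if subcarrier_spacing_khz == 3.75:
        slots, slot_ms = 16, 2.0                      # single-tone only
    elif subcarrier_spacing_khz == 15:
        slots = {1: 16, 3: 8, 6: 4, 12: 2}[num_subcarriers]
        slot_ms = 0.5
    else:
        raise ValueError("unsupported subcarrier spacing")
    return slots * slot_ms

print(npusch_ru_duration_ms(3.75, 1))   # 32.0 ms
print(npusch_ru_duration_ms(15, 12))    # 1.0 ms
```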
In the DL, as the resource unit always occupies the full NB-IoT carrier (i.e.,
the whole PRB of 12 subcarriers), the different transmissions (UE data and control
channels) need to occupy different time slots (the channels are multiplexed in the
time domain; there is a single resource allocation per TTI). The actual transmission
is preceded by the scheduling grants within NPDCCH: by at least
8 subframes for NPUSCH (scheduled with DCI N0), at least 5 subframes for
NPDSCH data (DCI N1), and with a fixed period of 10 subframes for paging
(DCI N2) [3, 10].
The NB-IoT design targets very low data rates with sporadic traffic
in the UL. One of the system optimizations targeting this behavior is extended
DRX, which enables the device to stay longer in the sleep mode, thus saving
the battery. In RRC_CONNECTED, DRX cycles of 5.12 and 10.24 seconds
have been added, compared to the regular LTE design. In RRC_IDLE,
DRX has been extended to cover up to a 3-hour duty cycle for
paging. For that purpose, the concept of hyperframes was introduced, counting
the number of system frame periods,16 where the extended paging DRX
can be up to 1,025 hyperframes (signaled via MIB as HFN). Furthermore, to
reduce the device complexity, one of the solutions was to decrease the transport
block size. Therefore, for NB-IoT, 3GPP defines UE category NB1, which has a
maximum number of transport block bits for DL and UL of 680 [18]. Another
simplification is the use of a single HARQ process in the DL and a single HARQ
process in the UL. In order to cope with the targeted improved coverage, coverage
enhancement modes were introduced. The coverage enhancement (CE) levels
are used to, for example, carry out improved PRACH operation with the help of a large
number of maximum preamble transmission attempts (preambleTransMax-CE)
for the preamble repetition (up to 128 times). Additionally, special RA preamble
sets are used for the different CE levels. For the data transmission, coverage can
also be improved by accumulating energy in the UL for the NPUSCH,
using UL_REPETITION_NUMBER, which specifies the number of transmission
repetitions in the bundle [4].

14. Lasting in this case 32 ms, as the symbol duration is 4 times longer, because the subcarrier size
is 4 times smaller than the regular 15 kHz.
15. Lasting 8, 4, 2, and 1 ms, respectively.
16. Where the system frame number spans from 0 to 1,023, using a 10-bit SFN in the BCH.
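Two of the numbers in this section can be cross-checked with short computations (the helper functions below are our own): repeating a transmission N times ideally accumulates about 10·log10(N) dB of energy, and a hyperframe spans 1,024 system frames of 10 ms each.

```python
import math

def repetition_gain_db(repetitions):
    # Ideal energy-accumulation gain from blind repetitions.
    return 10 * math.log10(repetitions)

def edrx_duration_s(hyperframes):
    # One hyperframe = 1,024 system frames of 10 ms = 10.24 s.
    return hyperframes * 1024 * 10 / 1000

print(round(repetition_gain_db(128), 1))       # 21.1 dB, in line with the ~20-dB target
print(round(edrx_duration_s(1025) / 3600, 2))  # 2.92 hours, the "up to 3-hour" paging cycle
```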

13.7.1.2 System Aspects: Data Transfer Modes


NB-IoT is targeting a very lean design optimized for small data applications.
From the system perspective, the main issue is the CP signaling accompanying
data transfers (because a single transmitted data packet requires a set of con-
nections/bearers to be established). For this, 3GPP standardized two modes of
communication to optimize EPS signaling, namely, Control Plane CIoT EPS
Optimization (CP Solution) with data transfer over NAS, and User Plane CIoT
EPS Optimization (UP Solution) with RRC suspend and resume procedures
[3, 19] (also known as Solution 2 and Solution 18, respectively, in the study
phase [20]).

CP CIoT EPS Optimization (CP Solution)


In this mode, the UP data is transmitted over NAS signaling, carried
transparently over the RAN between the UE and the MME. The UP protocol stack is
shown in Figure 13.6, where the IP (or non-IP) data is encapsulated in the NAS
messages, transferred over an SRB on the radio interface, and through the S1-MME
interface to the CN. For this purpose, SRB1bis is specified, with PDCP in
transparent mode without RoHC and AS security (thus different from SRB1 in
LTE), moving those UP functions to the MME. The transmission of the UP data is
greatly simplified, as it does not require a dedicated radio resource configuration in
terms of a DRB. Upon connection establishment, the RRCConnectionSetupComplete
message includes the UL NAS message that in turn includes the UP-data
packet [3]. Therefore, the CP solution is particularly suited for very infrequent
transmission of small data packets, and it is the mandatory solution for the NB-IoT
UE. As Figure 13.6 suggests, besides the NAS carrying IP data and new
functions in the MME, an additional upgrade is needed at the network side; namely,
the MME needs to support GTP-U towards the SGW to transport IP data, as in
legacy LTE the S11 interface only supports GTP-C for signaling messages between
MME and SGW.

UP CIoT EPS Optimization (UP Solution)


The second possibility for small data transmission is with the use of the UP solution,
where the data is delivered within the DRB on the radio interface and
through the legacy CN UP interfaces (S1-U and S5). The optimization in this
case consists of storing the UE context at the eNodeB even when the UE is in
RRC_IDLE.17 This enables a fast connection setup to only deliver the data and

17. Whereas in legacy LTE, the context in the RAN is deleted in RRC_IDLE.
Figure 13.6 NB-IoT UP Protocol Stack for Control Plane CIoT EPS Optimization. (© 2016. 3GPP™ TSs and TRs are the property of ARIB, ATIS, CCSA, ETSI, TTA, and TTC.)



get back to idle state, without the cumbersome process of establishing the AS
security, DRBs, and connections in the core network. It is achieved through the
new RRC procedures: RRC Connection Suspend/Resume, as well as S1 pro-
cedures: UE Context Suspend/Resume. As the CP solution (presented above)
is the default one, during the initial attach SRB1bis is used; it can be
switched to SRB1 later on (with AS security activated and PDCP in regular
mode), upon the MME decision to use the UP solution. After that, the DRB and
the UE context in the serving eNodeB are established. Once the UE is done with the
data transmission, the eNodeB may decide to suspend the RRC connection,
whereupon the UE keeps the AS context and suspends the SRBs and DRB(s), moving
to RRC_IDLE, and the eNodeB stores the AS context of the idle UE under a
ResumeID. The next time there is UL data to be transmitted (or when the UE
is paged), the UE uses the RRCConnectionResumeRequest message (instead of
RRCConnectionRequest as in the regular LTE case), providing the ResumeID, so that
the eNodeB restores the context and the bearers’ configuration. This significantly
decreases the connection setup time. As already mentioned in this
section, in NB-IoT there is no handover procedure (i.e., no connected-mode
mobility). This is due to the fact that the majority of NB-IoT applications
assume stationary UE behavior, or that the data in those applications is transmitted
very infrequently. Thus, it is better for the UE to be kept in idle mode
and only wake up for a short while, send a packet, and go back to sleep (to save
battery power). Therefore, even if the UE moves in between sending packets,
this can be done in idle mode (as the time of resuming the connection
and sending the packet is very short). In that case, when the UE moves from
one cell to another and tries to resume the RRC connection there, the context can
be provided to the target eNodeB via the new X2-AP procedure, Retrieve UE
Context Request.
Which solution is used for CIoT data transfer for a particular UE is decided
by the MME. The comparison of the two solutions is summarized
in Table 13.4.

13.8 From LTE over LTE-Advanced to LTE-Advanced Pro: A Summary
As we can see from the previous sections, significant improvements have been
made to the initial LTE system design, released in 2009, through the evolutionary
steps within 3GPP until today. LTE Release 8 was a pre-4G system,
later accompanied by a set of features to fulfill the 4G requirements of IMT-Advanced;
it is currently still being developed to go beyond 4G and is also assumed
to be part of the 5G framework. Table 13.5 provides a high-level comparison
of those key evolution steps.

Table 13.4
Comparison of the Data Transfer Modes for NB-IoT

Data transfer mode:
  CP Solution: data over NAS
  UP Solution: data over DRB
Standard reference name:
  CP Solution: CP CIoT EPS Optimization
  UP Solution: UP CIoT EPS Optimization
Key signaling optimization:
  CP Solution: no need to set up a DRB (UP piggybacking within signaling)
  UP Solution: no need to establish the RRC connection and set up the UE context every time (RRC context stored in RRC_IDLE)
Support by UE:
  CP Solution: mandatory
  UP Solution: optional
Data transfer:
  CP Solution: over NAS using NAS PDU
  UP Solution: over the standard UP path with the use of a DRB
Radio bearers:
  CP Solution: SRB0 and SRB1bis
  UP Solution: SRB0, SRB1, and DRB
Security:
  CP Solution: only NAS security
  UP Solution: NAS and AS security (legacy PDCP used)
Header compression:
  CP Solution: RoHC in CN (MME provides RoHC)
  UP Solution: RoHC at PDCP
UE context in RRC_IDLE:
  CP Solution: not available (regular RRC_IDLE)
  UP Solution: available (suspend/resume operation)
Removed RRC messages:
  CP Solution: totally removed (no need, as the DRB is not established): RRC Security Mode Command, RRC Security Mode Complete, RRC Connection Reconfiguration, RRC Connection Reconfiguration Complete
  UP Solution: removed during the suspend/resume operation (but present for the initial establishment of the DRB): RRC Connection Setup Complete, RRC Security Mode Command, RRC Security Mode Complete, RRC Connection Reconfiguration, RRC Connection Reconfiguration Complete

Table 13.5
Comparison of LTE System Evolution Steps

System branding: LTE / LTE-Advanced / LTE-Advanced Pro
3GPP release:
  LTE: Release 8
  LTE-Advanced: Release 10*
  LTE-Advanced Pro: Release 13 and beyond
Freezing date:
  LTE: March 2009
  LTE-Advanced: June 2011
  LTE-Advanced Pro: March 2016
Main purpose:
  LTE: provide high throughput for MBB, prepare the mobile system for evolution towards 4G
  LTE-Advanced: fulfill IMT-Advanced requirements for a 4G system
  LTE-Advanced Pro: mark an evolution point with significant improvements to LTE-Advanced
Key features:
  LTE: OFDMA, DL MIMO (4 × 4), modulation with up to 64QAM, flat architecture (eNodeB), flexible system BW (1.4–20 MHz)
  LTE-Advanced: CA (extending system BW up to 100 MHz), enhanced DL MIMO (8 × 8), UL MIMO (4 × 4), small cells, HetNet, eICIC, SON, CoMP, ePDCCH
  LTE-Advanced Pro: DC, LAA, LWA, modulation 256QAM, EB/FD-MIMO, D2D, V2X, NB-IoT, eMTC

*LTE-Advanced is defined in 3GPP as Release 10, but in fact it also covers the enhancements from Release 11 and Release 12.

As one of the key system’s performance indicators is its maximum throughput,
below we provide simplified calculations to show how those numbers are
achieved. This also serves to explain how the system evolves and
which parameters and new features influence those numbers. Note
that the DL throughputs are calculated below, but a similar logic can be applied
for the UL direction.18

13.8.1 LTE Release 8


Assuming the following configuration:

• Max. number of spatial layers: NoLayers = 4;


• Max. modulation: Mod = 64QAM;
• Max. coding rate: CR = 0.9258 (LTE MCS Index 13);
• Max. BW size: BW = 20 MHz (channel BW);
• Max. occupied bandwidth size: BWoccupied = 18 MHz (occupied by
OFDMA subcarriers).

and taking the following PHY overhead:

• 6% of the PBCH and PSS/SSS (in 20-MHz system BW);


• 7% of the PDCCH (assuming minimum PDCCH overhead of 1 sym-
bol per TTI);
• 12% of RS (for 4 antenna transmission);
• Thus, the total overhead is: Overhead = 25%.

The maximum LTE Release 8 throughput can be calculated in the following way:

• Bandwidth efficiency: BWeff = BWoccupied/BW = 18 MHz/20 MHz = 0.9;
• Maximum DL spectral efficiency for a single stream: SE_Layer =
Log2(Mod) * CR * BWeff = 6 * 0.9258 * 0.9 = ~5 bps/Hz;
• Total maximum DL spectral efficiency: Total_SE = NoLayers * SE_Layer
= 4 * 5 = 20 bps/Hz;

18. Note that these calculations are simplified to show the principles of features’ impacts on the
system performance. The more accurate calculations are provided in Chapter 3.

• Spectral efficiency considering the PHY-layer signaling overhead:
Effective_SE = Total_SE * (1 – Overhead) = 20 * (1 – 0.25) = 15 bps/Hz;
• Maximum DL throughput: DLthrpt = Effective_SE * BW = 15 bps/Hz *
20 MHz = 300 Mbps.
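The calculation above can be reproduced with a short script (the function is our own; the inputs are exactly the assumptions listed in this section):

```python
from math import log2

def lte_dl_throughput_mbps(layers, mod_order, coding_rate,
                           bw_mhz, occupied_bw_mhz, overhead):
    bw_eff = occupied_bw_mhz / bw_mhz
    se_layer = log2(mod_order) * coding_rate * bw_eff  # bps/Hz per layer
    effective_se = layers * se_layer * (1 - overhead)  # bps/Hz
    return effective_se * bw_mhz                       # bps/Hz * MHz = Mbps

# LTE Release 8: 4 layers, 64QAM, CR = 0.9258, 20-MHz channel, 18 MHz occupied
print(round(lte_dl_throughput_mbps(4, 64, 0.9258, 20, 18, 0.25)))  # 300
```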

13.8.2 LTE-Advanced Release 10


Considering the above detailed calculation as a baseline, the improvements that
have been done in LTE-Advanced are added on top in the following ways:

• The number of antennas and thus spatial layers was increased to 8
(compared to 4 in LTE Release 8). Therefore, the maximum spectral
efficiency has increased roughly by a factor of 2 (of course the overhead
has been slightly increased due to additional reference signals): NoLay-
ersImprovement = 2.
• The maximum BW size has been increased by a factor of 5, due to car-
rier aggregation with 5 component carriers: BW_LTEA = 5 * 20 MHz
= 100 MHz.

Thus, the maximum LTE-Advanced throughput can be calculated in the following way:

• Effective spectral efficiency, considering 2 times the improvement with
respect to the baseline LTE Release 8: Effective_SE_LTEA = Effective_SE *
NoLayersImprovement = 15 * 2 = 30 bps/Hz;
• Maximum DL throughput: DLthrpt = Effective_SE_LTEA * BW_LTEA
= 30 bps/Hz * 100 MHz = 3 Gbps.

13.8.3 LTE-Advanced Pro Release 13


Finally, taking into account the LTE-Advanced Pro improvements, the maxi-
mum throughput could be improved by the following ways:

• The maximum modulation scheme has been improved to 256QAM,
which improves maximum spectral efficiency by a factor of 8/6 (i.e.,
maximum of 8 bits/symbol can be transmitted instead of 6 bits/symbol
as in case of 64 QAM): Mod_Improvement = 8/6.
• The maximum spectrum BW size can be increased to 640 MHz due
to massive CA (32 CCs × 20 MHz). The possibility of using the full
number of CCs in a single location is not likely to happen in the nearest
future; thus, in our calculations: BW_LTEAPro = 100 MHz.

Table 13.6
Comparison of the Systems’ Key Parameters and Throughputs

Max. system BW:
  LTE: 20 MHz
  LTE-Advanced: 100 MHz
  LTE-Advanced Pro: 100 MHz (640 MHz*)
Max. DL modulation:
  LTE: 64QAM
  LTE-Advanced: 64QAM
  LTE-Advanced Pro: 256QAM
Max. DL number of spatial layers:
  LTE: 4
  LTE-Advanced: 8
  LTE-Advanced Pro: 8
Max. DL spectral efficiency:
  LTE: 15 bps/Hz
  LTE-Advanced: 30 bps/Hz
  LTE-Advanced Pro: 40 bps/Hz
Max. DL throughput:
  LTE: 300 Mbps
  LTE-Advanced: 3,000 Mbps (3 Gbps)
  LTE-Advanced Pro: 4,000 Mbps (4 Gbps); 25,600 Mbps (25.6 Gbps)**

*In case of 32 CCs. **In case of 32 CCs; as discussed above, this is not likely to happen.

Therefore, we take only the first aspect for the calculation of the maxi-
mum reasonable throughput for LTE-Advanced Pro:

• Effective spectral efficiency: Effective_SE_LTEAPro = Effective_SE_LTEA
* Mod_Improvement = 30 bps/Hz * 8/6 = 40 bps/Hz;
• Maximum DL throughput: DLthrpt = Effective_SE_LTEAPro * BW_
LTEAPro = 40 bps/Hz * 100 MHz = 4 Gbps.

However, if we really wanted to calculate the theoretical maximum, we
would end up with:

• Max. theoretical throughput: DLthrpt_theoretical = 40 bps/Hz * 640 MHz = 25.6 Gbps.

The summary of the above considerations is gathered in Table 13.6.
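The three calculations of this section can also be collected into one script (the helper is our own; it keeps the 25% overhead and 0.9 bandwidth efficiency fixed across generations, which is a simplification):

```python
from math import log2

def dl_throughput_gbps(layers, mod_order, bw_mhz,
                       coding_rate=0.9258, bw_eff=0.9, overhead=0.25):
    se = layers * log2(mod_order) * coding_rate * bw_eff * (1 - overhead)
    return se * bw_mhz / 1000  # bps/Hz * MHz = Mbps; /1000 gives Gbps

print(round(dl_throughput_gbps(4, 64, 20), 1))     # LTE:                0.3 Gbps
print(round(dl_throughput_gbps(8, 64, 100), 1))    # LTE-Advanced:       3.0 Gbps
print(round(dl_throughput_gbps(8, 256, 100), 1))   # LTE-Advanced Pro:   4.0 Gbps
print(round(dl_throughput_gbps(8, 256, 640), 1))   # theoretical 32 CCs: 25.6 Gbps
```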

References
[1] 3GPP RP-151569, “Release 13 Analytical View Version,” September 2015.
[2] 3GPP TR 36.842, v12.0.0, “Study on Small Cell Enhancements for E-UTRA and E-
UTRAN; Higher Layer Aspects,” December 2013.
[3] 3GPP TS 36.300, v14.0.0, “EUTRA and EUTRAN: Overall Description,” September
2016.

[4] 3GPP TS 36.321, v14.0.0, “EUTRA: Medium Access Control Protocol Specification,”
September 2016.
[5] Dryjanski, M., and M. Szydelko, “A Unified Traffic Steering Framework for LTE Radio
Access Network Coordination,” IEEE Communications Magazine, July 2016, pp. 84–92.
[6] 3GPP TR 37.843, v12.0.0, “Study on WLAN-3GPP Radio Interworking,” December
2013.
[7] 3GPP TS 36.331, v14.0.0, “EUTRA: Radio Resource Control Protocol Specification,”
September 2016.
[8] 3GPP TS 36.304, v14.0.0, “EUTRA: UE Procedures in Idle Mode,” September 2016.
[9] 3GPP TR 36.889, v13.0.0, “Study on Licensed-Assisted Access to Unlicensed Spectrum,”
June 2015.
[10] 3GPP TS 36.213, v14.0.0, “EUTRA: Physical Layer Procedures,” September 2016.
[11] 3GPP TS 36.211, v14.0.0, “EUTRA: Physical Channels and Modulation,” September
2016.
[12] www.lteuforum.org.
[13] www.multefire.org.
[14] Bhamri, A., K. Hooli, and T. Lunttila, “Massive Carrier Aggregation in LTE-Advanced
Pro: Impact on Uplink Control Information and Corresponding Enhancements,” IEEE
Communications Magazine, May 2016, pp. 92–97.
[15] 3GPP TS 23.303, v14.1.0, “Proximity-Based Services (ProSe),” December 2016.
[16] 3GPP TR 36.888, v12.0.0, “Study on Provision of Low-Cost Machine-Type
Communications (MTC) User Equipments (UEs) Based on LTE,” June 2013.
[17] 3GPP TR 45.820, v13.1.0, “Cellular System Support for Ultra-Low Complexity and Low
Throughput Internet of Things (CIoT),” November 2015.
[18] 3GPP TS 36.306, v14.0.0, “UE Radio Access Capabilities,” September 2016.
[19] 3GPP TS 23.401, v14.2.0, “General Packet Radio Service (GPRS) Enhancements for
Evolved Universal Terrestrial Radio Access Network (E-UTRAN) Access,” December
2016.
[20] 3GPP TR 23.720, v13.0.0, “Study on Architecture Enhancements for Cellular Internet of
Things,” March 2016.
14
Toward 5G
As the standardization of the enhancements for LTE progresses with LTE-Ad-
vanced Pro within Releases 13 and 14, the 3GPP has recently kicked off the
work on the next-generation system, namely, 5G, in parallel. The seeds of 5G
were sown within Release 13 by working on defining potential use cases for fu-
ture networks in the “Feasibility Study on New Services and Markets Technol-
ogy Enablers (SMARTER)” [1]. This is now being followed-up by the Release
14 architecture and RAN solution studies and is moving to work items within
the Release 15 and 16 timeframe. The next-generation system being specified
within 3GPP should fulfill the ITU-R requirements for IMT-2020, a successor
of IMT-Advanced. The key requirement for 5G (beyond the regular expected
improvements for higher throughputs and capacity and lower latencies) is to
specify a flexible system to natively support a large variety of use cases that may
have very different requirements (e.g., from low-end MTC to the high-end vir-
tual reality MBB). This chapter outlines the main use cases, requirements, and
spectrum aspects related to 5G followed by the potential solutions for the air in-
terface and system architecture. Tight interworking with evolved LTE (eLTE1)
and migration path toward full 5G system is also discussed later on. Compared
to previous chapters, this chapter is slightly different, as the 5G standardiza-
tion has just started and is in its study phase (and early normative work) at the
time of writing. Therefore, instead of discussing well-defined functionalities,
planning, or optimization guidelines, this chapter serves as an overview of the
potential solutions and scope of 5G. The references for this chapter are mostly
3GPP technical reports (TRs), presenting the requirements and proposed solu-
tions to capture the recent status of the 5G standardization.

1. eLTE has been used during the study item phase.


14.1 Standardization Timeline and 5G Phases


The International Mobile Telecommunications for 2020 (IMT-2020) framework
provided by ITU-R specifies the recommendations for the 5G mobile commu-
nication system. According to the timeline for IMT-2020 [2], the require-
ments and evaluation criteria specification phase was scheduled to be finalized
in Q2 20172. Following this, the proposals phase targets the timespan from Q4
2017 to Q2 2019, during which the evaluation of the 5G proposed solutions
will be held, and finally the IMT-2020 specifications phase was scheduled to
start in Q4 2019. To fit into this time plan, the 3GPP proposal for 5G standard
is going to be specified within the Release 15 and 16 timeframes. The standard-
ization work will be broken into two phases [3, 4]:

• 5G Phase 1: This covers work items to be developed within Release 15,
targeting finalization in Q2 2018 (stage 3 freeze) and ASN.1 completion
in September 2018. This phase will specify the system for immediate
commercial needs, mainly for enhanced mobile broadband (eMBB), to
be fed into the IMT-2020 proposals phase. In this phase, the 5G RAT
should cover both the non-stand-alone (NSA) deployment, with evolved
LTE serving as the CP anchor (with stage 3 completion in December
2017), and the stand-alone deployment without LTE support (i.e., 5G
RAT with full CP). The requirement for this phase of standardization is
that the new 5G radio should be forward compatible to be able to incor-
porate the second phase’s developments in a smooth way. Additionally,
the target is to cover both sub-6-GHz bands as well as frequencies above
6 GHz.
• 5G Phase 2: This covers work items to be developed within Release 16,
targeting finalization in Q4 2019 (stage 3 freeze). This phase will specify
the system to fulfill all the IMT-2020 requirements to cater to long-term
commercial needs (all the identified use cases) to be fed into the IMT-2020
specifications phase. This should cope with the more futuristic applica-
tions of the mobile system.

The above phasing relates to the actual normative work on the 5G system
specification, which has just started in Q2 2017. However, the use cases for 5G have al-
ready been developed within Release 13 under the SMARTER feasibility study,
followed by the system requirements’ normative work (stage 1) within Release
14 [5]. Also, Release 14 covers the next-generation radio and system architec-
ture study items, which were developed and finalized in March 2017, to set the

2. At the time of writing, a draft document on requirements for 5G was released.



foundation for the work items in this area. The 3GPP’s 5G proposal that will
be submitted to ITU-R will include both the new, nonbackward-compatible
radio technology [called New Radio (NR)] and the evolution of LTE,
called eLTE. eLTE is going to be developed to support the next-generation
core network [called Next Generation Core (NGC)] in parallel with the development
of NR. The set of recent 3GPP documentation related to the
above discussion is presented in Table 14.1 and is used throughout the rest
of this chapter.

Table 14.1
5G-Related 3GPP Standardization Documents

Document Number | Document Title | Responsible TSG | Reference
TR 22.891 | “Feasibility Study on New Services and Markets Technology Enablers (SMARTER)” | SA | [1]
TR 22.861 | “Feasibility Study on New Services and Markets Technology Enablers for Massive Internet of Things” | SA | [6]
TR 22.862 | “Feasibility Study on New Services and Markets Technology Enablers for Critical Communications” | SA | [7]
TR 22.863 | “Feasibility Study on New Services and Markets Technology Enablers for Enhanced Mobile Broadband” | SA | [8]
TR 22.864 | “Feasibility Study on New Services and Markets Technology Enablers - Network Operation” | SA | [9]
TR 23.799 | “Study on Architecture for Next Generation System” | SA | [10]
TR 33.899 | “Study on the Security Aspects of the Next Generation System” | SA | [11]
TS 22.261 | “Service Requirements for Next Generation New Services and Markets” | SA | [5]
TR 38.900 | “Study on Channel Model for Frequency Spectrum Above 6 GHz” | RAN | [12]
TR 38.801 | “Study on New Radio Access Technology; Radio Access Architecture and Interfaces” | RAN | [13]
TR 38.802 | “Study on New Radio (NR) Access Technology; Physical Layer Aspects” | RAN | [14]
TR 38.803 | “Study on New Radio (NR) Access Technology; RF and Coexistence Aspects” | RAN | [15]
TR 38.804 | “Study on New Radio (NR) Access Technology; Radio Interface Protocol Aspects” | RAN | [16]
TR 38.805 | “Study on New Radio (NR) Access Technology: 60GHz Unlicensed Spectrum” | RAN | [17]
TR 38.912 | “Study on New Radio Access Technology” | RAN | [18]
TR 38.913 | “Study on Scenarios and Requirements for Next Generation Access Technologies” | RAN | [19]

14.2 5G Use Cases and System Performance Requirements


The previous generations of the mobile systems were designed to focus on a
single type of service (e.g., voice or Internet access). With the emerging Internet
of Things (IoT) and new vertical markets, as well as with the ever-increasing
need for capacity and throughput for broadband access, this legacy approach is
no longer going to succeed in 5G. The main reason for this is that a wide
variety of services brings diverging requirements, ranging from high-end
services (enhanced mobile broadband), through extremely low-latency communications,
down to low-end MTC applications with very sporadic transmissions
of only several bytes. To support the different connectivity requirements (coming
from the wide range of services) within a single block of spectrum, the 5G sys-
tem should be very flexible [19].
The 3GPP SMARTER study identified 74 use cases that could not be met
using the EPS system [1]. These use cases were then grouped into the following
categories: eMBB, focusing on high data rates, high user density, indoor coverage,
and high user mobility; critical communications (CriC)3, focusing on high
reliability and availability, low latency, mission criticality, and high positioning
accuracy; massive Internet of Things (mIoT), with a focus on low data throughput,
high battery efficiency, and a large number of connections; network operation
(NEO), covering system flexibility, scalability, efficient content delivery, self-
backhauling, interworking, and mobility-on-demand support; and enhanced
vehicular communication (eV2X) for connected vehicles over wide-area cover-
age.4 Compared to the above categories, ITU-R has defined 3 groups of vertical
services [20], namely, enhanced mobile broadband (eMBB), massive machine-
type communications (mMTC), and ultra-reliable and low-latency communi-
cations (URLLC). In the RAN area, 5G targets a design where a single techni-
cal framework should address all the use cases, requirements, and deployment
scenarios, including the 3 mentioned service groups [19, 21]:

• eMBB: Requiring support for high network capacity, high user density,
and uniform user experience;
• mMTC: Calling for massive connectivity and highly efficient small-packet
transmission;
• URLLC: Requiring ultralow latency and/or ultrahigh reliability trans-
mission.

3. Critical communications and ultrareliable and low-latency communications have been
grouped together. However, within this group the two aspects may be separated, with URC
focusing on reliability and low latency (e.g., industrial control, gaming, remote control of UAVs)
and ULLC focusing on ultralow latency (e.g., the tactile Internet).
4. The initial document [1] with high-level use case definitions evolved into 4 feasibility studies
on the specific system requirements per group, namely, [6–9].

If the requirements from the different services were taken collectively,
then the IMT-2020 system would need to fulfill the following key capabilities
(with indicative target numbers) [20]:

• Peak data rate: 20 Gbps;
• User experienced data rate: 100 Mbps;
• Spectrum efficiency: 3 times larger than IMT-Advanced;
• Mobility: up to 500 km/hr;
• User plane latency (one-way): less than 1 ms;
• Connection density: 1,000,000 devices/km2;
• Network energy efficiency: 100 times higher than IMT-Advanced;
• Area traffic capacity: 10 Mbps/m2.

However, the different vertical services have set different requirements;
thus, the system should not fulfill all of them at the same time but rather fulfill
the specific requirements of a specific service as and when required. Based on
[1, 19, 20, 22], Table 14.2 gathers the key requirements that are imposed on
the system coming from these different vertical services (the requirements are
mapped to the service types). It can be seen from this table that the different
services require different types of optimizations. For instance, eMBB requires
very high throughput and support for mobility that varies from stationary
applications to highly mobile applications. mMTC, in turn, requires very
low throughput; thus, connection efficiency is very important to reduce the
signaling-to-data ratio. Also, for mMTC it is important to assure high coverage
to reach deep into basements (as discussed already in Chapter 13), reduced
battery consumption, and minimal mobility support, as mMTC is typically
used for services in which the UEs are stationary (e.g., sensors). The other
extreme comes with the URLLC requirements, where very low latency plays
a crucial role (or, in some cases, predictable latency and low jitter are more
important). Reliability is another aspect of these types of services, requiring
a very high probability of small-packet delivery within a short time. Along with
reliability, in mobility scenarios, the interruption time should be very low,
potentially coming down to 0 ms. This requires a make-before-break type of
cell change, where the source connection is released only after the new one is
established so that the packets can be delivered to the network without any
time loss. Another interesting aspect is the traffic density (or area capacity),
which can be high in both eMBB and mMTC scenarios. However, it is due
to different reasons: in eMBB, each user generates a significant amount of
traffic, whereas in mMTC, there is a significant number of devices each
transmitting a very low amount of traffic. Thus, the aggregated throughputs
can be significant, while the individual

Table 14.2
Key Performance Requirements from Different Services*

KPI | eMBB | mMTC | URLLC
Data rate | Very high, ~10 Gbps | Low, ~kbps | Low
Spectral efficiency | High, ~30 bps/Hz | |
Latency | Low latency | | Very low latency/real time (1 ms end-to-end; 0.5 ms for DL, 0.5 ms for UL)
Reliability | | | High (10^-5 in 1 ms)
Mobility | From 0 up to 500 km/hr | Low/none | From low to high
Mobility interruption time | Low | | Very low, 0 ms (make-before-break)
Coverage | | High, 164 dB (maximum coupling loss) |
UE battery life | | Long, 10 years |
Traffic density | High, resulting from high traffic per connection | Medium/high, resulting from a large number of devices |
Communication efficiency | | High (efficient signaling and resource utilization) |
Connection density | | Very high (1,000,000/km2) |

*The blank fields mean that the particular KPI is not crucial for the particular service vertical.

link characteristics are very different, requiring different optimizations at the
network side.5
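As a back-of-the-envelope illustration of this difference, the aggregate area traffic can be sketched in a few lines of Python; the device densities and per-link rates below are illustrative values loosely based on the targets quoted in this section, not normative figures.

```python
# Illustrative (not normative) comparison of how eMBB and mMTC can both
# produce high area traffic for very different reasons. Device counts and
# per-link rates are example values only.

def area_traffic_mbps_per_km2(devices_per_km2, rate_per_device_mbps):
    """Aggregate offered traffic per km2 in Mbps."""
    return devices_per_km2 * rate_per_device_mbps

# eMBB: comparatively few users, each with a high-rate link (100 Mbps).
embb = area_traffic_mbps_per_km2(devices_per_km2=1_000, rate_per_device_mbps=100.0)

# mMTC: massive number of devices, each with a ~kbps link.
mmtc = area_traffic_mbps_per_km2(devices_per_km2=1_000_000, rate_per_device_mbps=0.001)

print(f"eMBB aggregate: {embb:.0f} Mbps/km2")   # 100000 Mbps/km2
print(f"mMTC aggregate: {mmtc:.0f} Mbps/km2")   # 1000 Mbps/km2
```

Both aggregates are large, but the first comes from the per-connection rate and the second from the sheer number of connections, which is exactly why the two call for different network optimizations.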
In short summary, the resulting network should be flexible enough to
simultaneously support optimized configurations for different services posing
diverse requirements that may contradict each other. This tailoring
of the network operation to a specific use case has been addressed by NGMN and
3GPP under the term “network slicing,” which is discussed in Section 14.4.
As already stated in the previous section, the focus of the 5G Phase 1 stan-
dardization is on the most immediate needs. Thus, the detailed requirements
that are analyzed in [5] include eMBB service type, requiring very high data
rates, and some of the URLLC services requiring low latency and high reliabil-
ity. Within each of those service families, the requirements can also diverge de-
pending on the scenario [e.g., in the eMBB type, the area traffic capacity can span
from 1 Gbps (in rural macro) to several terabytes per second (in the “broadband
access in the crowd” scenario) due to the different characteristics of the traffic

5. This discussion focuses mostly on performance and service characteristics’ type of require-
ments, whereas the requirements for system capabilities are described in [5].

and user density], or on the actual service [e.g., in the URLLC type, the latency
requirement can span from 0.5 ms (in tactile interaction applications) to 10 ms
(in industrial automation applications), where it is more about stable latency
than extremely low latency values].6 Those internal differences within
the group call for even more flexibility of the system, also in the deployment
and connectivity aspects, that is, to vary the deployment with respect to use
cases (e.g., macro sites, small cells, on-demand relay nodes, mobile relay nodes,
direct D2D communication) and optimize the mobility to the use case (e.g., for
factory robots, no mobility; for vehicular communication, very high mobility).

14.3 5G Air Interface: New Radio


The 3GPP started the study item on the 5G air interface under the name
New Radio (NR) in March 2016. Its basic requirement is to provide a unified
framework able to cope with the diverse traffic types from the 3 key use cases
described in Section 14.2, namely, eMBB, mMTC, and URLLC [23]. The re-
quirements posed by these use cases result in a set of solutions and optimizations
tailored for each: for example, the use of high-frequency bands (i.e., frequencies
above 6 GHz) to achieve the throughputs and capacity required by eMBB;
the introduction of a new RRC state to reduce signaling overhead; or the use of a
short TTI to achieve low latency for URLLC. To be able to cope with the
different latency, spectral efficiency, and bit-rate requirements within a single
air interface, the concept of a unified radio frame is proposed, with the use of
multiple numerologies (i.e., different TTI durations and subcarrier spacings)
within a single carrier bandwidth. Additionally, massive MIMO solutions with
narrow beamforming are considered, specifically enabled by the use of
millimeter-wave frequencies7 that allow large antenna arrays (e.g., 64 or
128 antenna elements) to be fitted in reasonable-size antenna panels.

14.3.1 Spectrum Considerations


According to [19], the NR should be able to support frequencies spanning from
a few hundred megahertz up to 100 GHz [24].8 This huge spectrum range
has been split into two groups, namely, frequencies below 6 GHz (known as
sub-6 GHz) and above 6 GHz (known as super-6 GHz, or millimeter-wave9).
This is mostly due to the very different propagation behavior between those

6. See [5] for detailed performance requirements for those two groups.
7. That is, frequencies above 30 GHz, where the wavelength is smaller than 1 cm.
8. NR in Rel-15 should support up to 52.6 GHz according to the NR WID [24].
9. The super-6-GHz band is sometimes referred to as millimeter wave. However, the millimeter-
wave spectrum starts actually at 30 GHz, being the boundary, where the wavelength is 1 cm
and goes to the millimeter region above 30 GHz.

groups, as discussed in Section 14.3.1.1. Because of these differences, there can
potentially be two 5G RATs defined, with PHY layers designed to cope with the
specific spectrum ranges, while L2 and L3 should use a common design as
much as possible [25]. The sub-6-GHz bands cover the spectrum range cur-
rently being used by the cellular systems (and where the spectrum availability
is very limited), while the super-6-GHz bands should provide the capacity ex-
tension for the growing traffic demand (as there is a lot of unused spectrum
available). In terms of the actual frequency ranges, the potential allocations for
early 5G can differ from region to region, but most likely will cover the follow-
ing major blocks [26]: 700 MHz (enabling widespread/nationwide and indoor
5G coverage), 3.3–4.99 GHz, 5.15–5.85 GHz (unlicensed), and super-6 GHz/
millimeter-wave. Speaking of the super-6-GHz/millimeter-wave spectrum, the
considered frequency blocks for NR include [15]: 24.25–27.5 GHz, 31.8–33.4
GHz, 37–43.5 GHz, 45.5–50.2 GHz, 50.4–52.6 GHz, 66–76 GHz, and 81–
86 GHz.10 Another important aspect regarding spectrum support for NR is that
it should be built upon a framework covering natively all three possible licens-
ing schemes, including licensed, licensed shared [e.g., License-Shared Access
(LSA)], and unlicensed spectrum (with both non-stand-alone, LAA-like,
operation and stand-alone, MulteFire-like, operation) [19, 26]. Additionally, the 5G RAT
(NR) should be able to coexist together with LTE in the same or overlapping
spectrum, with dynamic coexistence (i.e., it should be possible to dynamically
reconfigure the NR resources to take over the LTE spectrum, if the LTE carrier
is switched off due to low traffic) [19, 26].

14.3.1.1 Millimeter-Wave Characteristics


The frequencies around and beyond 30 GHz exhibit significantly different
propagation behavior from the sub-6-GHz frequencies.
The considered spectrum in that range for 5G includes frequencies spanning
from 24 GHz up to 86 GHz, where the wavelength is in the range of 1 cm
down to 2.5 to 3 mm. In this range, the path loss is much larger than in the
spectrum currently used for mobile networks. For example, the path loss difference
between 90 GHz and 3 GHz is about 30 dB = 20log10(90 GHz) – 20log10
(3 GHz). Additionally, as the wavelength is very small, there is very little diffraction/scattering
(almost negligible); thus, nonline-of-sight (NLOS) transmission
hardly exists in the way NLOS works in sub-6 GHz
(this is also referred to as the blockage effect). Instead, the reflection effect is much
stronger. At certain frequencies, there is an atmospheric absorption effect that increases
the attenuation locally in the frequency range: around 23 GHz there is
a rain absorption effect, while around 60 GHz it is oxygen absorption. Also, as

10. These are the spectrum bands to be considered to be allocated for mobile usage during World
Radio Conference 2019 (WRC 2019).

the system BW in the millimeter wave can be much larger compared to sub-6
GHz (due to spectrum availability), the noise power is much larger. Finally, as
the diffraction is not dominant in this range, the multipath profile of the channel
in the millimeter-wave region is composed of a small number of multipath
components (i.e., sparse channels) [27].
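The frequency-dependent effects quoted above are easy to reproduce numerically. The short Python sketch below computes the wavelength, the 20log10 path loss gap, and the growth of the thermal-noise floor with bandwidth; the 28-GHz carrier, the 100-m distance, and the bandwidth values are illustrative choices, not taken from the text.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def wavelength_mm(freq_hz):
    """Free-space wavelength in millimeters."""
    return C / freq_hz * 1000.0

def fspl_db(freq_hz, dist_m):
    """Free-space path loss (Friis): 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / C)

def noise_floor_dbm(bw_hz):
    """Thermal noise floor: -174 dBm/Hz + 10*log10(BW)."""
    return -174 + 10 * math.log10(bw_hz)

# The frequency-dependent part of the path loss gap quoted in the text:
delta_db = 20 * math.log10(90e9 / 3e9)   # ~29.5 dB

print(f"wavelength at 28 GHz : {wavelength_mm(28e9):.1f} mm")
print(f"90 GHz vs 3 GHz gap  : {delta_db:.1f} dB")
print(f"FSPL, 28 GHz @ 100 m : {fspl_db(28e9, 100):.1f} dB")
print(f"noise floor, 20 MHz  : {noise_floor_dbm(20e6):.1f} dBm")
print(f"noise floor, 2 GHz   : {noise_floor_dbm(2e9):.1f} dBm")
```

The last two lines quantify the remark about noise power: widening the system bandwidth from 20 MHz to 2 GHz raises the noise floor by 20 dB.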
The advantages of millimeter-wave frequencies are the following [27, 28]:

• Due to large attenuation in free space and blockage through walls, the
spectrum is highly reusable.
• Large spectrum chunks are available in the millimeter-wave range, en-
abling high transmission rates and capacity.
• They are very well suited for massive MIMO applications, as the antenna
size (and separation between antennas) can be small; thus, a large number
of antennas can be packed into a single array of a reasonable size.
• They allow for narrow beamwidth and precise beamforming.
• Due to the pencil beams and high directivity needs for their operation,
they improve security, as it is harder to eavesdrop when the direct vis-
ibility is needed between transmitter and receiver.

On the limitations side of millimeter-wave frequencies, the following can
be mentioned:

• They require greater precision in element manufacturing; thus,
they come at a higher cost.
• The applicability of millimeter-wave communication is limited, as sig-
nificant attenuation is experienced; thus, long-distance applications are
not possible.
• They are less reliable due to blockage effects; thus, for mobility, they
require multiconnectivity for fallback reasons (i.e., to make sure that if
one link is blocked, there is another one to take the transmission over).
• Not all frequencies in the millimeter-wave range are usable for cellular
communications, as certain frequencies are strongly absorbed by oxygen and rain;
however, those could be applicable for very short-range communications
(e.g., for wearables to communicate with a smartphone).

Millimeter-wave communication can potentially be used for the following:
indoor access; very dense deployments of ultradense networks (UDN),
where multiconnectivity can be used to cope with the blockage effect;
D2D, providing very short-range, private communication with high throughput;

and wireless backhaul/fronthaul using LOS communication. The last-mentioned
application is referred to as self-backhauling, with dynamically reconfigurable
links that enable easily deployable new sites.

14.3.1.2 Spectrum Applicability to the Key Use Cases


Table 14.3 presents the mapping of spectrum types to the key use cases with respect
to their requirements. The sub-6-GHz spectrum is suitable for all use
cases, as it mostly provides good coverage (especially the frequencies in the
sub-1-GHz bands). The main drawback of this spectrum range is the low availability
of spectrum chunks, as it is highly occupied. Therefore, for the high-capacity
demands of eMBB, this range requires some support from other bands
to be able to fulfill the requirements of the future. Thus, the super-6-GHz
spectrum (i.e., above 6 GHz) is mainly considered as an extension for eMBB for
extra capacity provisioning. However, the drawback of super-6 GHz is the short
range, due to the effects already mentioned in the previous section (e.g., blockage);
thus, it is not considered for URLLC, which requires high robustness. The other
aspect presented in this table is licensing. The licensed mode is applicable for all
the services, while the unlicensed spectrum is not recommended for URLLC
due to its unpredictable interference and availability. Licensed shared
access may serve as an additional resource for both mMTC and URLLC, but is
not assumed to be a primary choice for those.

14.3.2 NR PHY Layer


To cope with the different services, requirements, scenarios, and frequency
bands discussed in the previous sections, the key aspect of the NR PHY layer is its
flexible design, which includes the support for [14]:

• The transmission directions of DL, UL, SL, and the backhaul link;

Table 14.3
Spectrum Suitability [29]

Spectrum/Licensing | Characteristic | eMBB | mMTC | URLLC
Sub-6 GHz | High coverage, low spectrum availability | Yes | Yes | Yes
Above 6 GHz | Low coverage, high spectrum availability | Yes (as extra capacity) | Possible | N/A (not reliable)
Licensed spectrum | Exclusive use | Yes | Yes | Yes
Licensed shared spectrum | Shared exclusive use | Yes | Possible | Possible
Unlicensed spectrum | Shared use | Yes | Possible | N/A (no QoS)

• Multiple duplexing schemes: FDD paired, FDD unpaired for either di-
rection (similar to the supplemental DL concept in LTE CA), TDD
with semistatic DL/UL direction configuration, and TDD with dynam-
ic DL/UL resources configuration change;
• Different PHY layer numerologies: slot size, subcarrier separation;
• Component carrier bandwidths spanning from narrowband up to the very
wide bandwidths that are going to be available in the millimeter-wave region;
• Forward compatibility to ensure smooth introduction of new features
via reserved blank resources.

In terms of the transmission scheme, the DL is assumed to be based on CP-OFDM,
while for the UL the support of both CP-OFDM and DFT-S-OFDM
is envisioned (and the UE should support both).11 In the UL, dynamic
switching between the two schemes should be possible (signaled from
the 5G RAN to the UE), allowing the benefits of both to be utilized, namely, SU/
MU-MIMO schemes with the use of CP-OFDM, and high-coverage scenarios
covered by single-stream DFT-S-OFDM. The modulation schemes are
to be similar to LTE, and include QPSK, 16 QAM, 64 QAM, and 256 QAM
for both DL and UL. As the 5G Phase 1 standardization focuses on eMBB
and URLLC scenarios, dynamic resource sharing between those should
be supported. While for eMBB and URLLC a regular, scheduled type of transmission
is expected, for mMTC services nonorthogonal
multiple access is being considered for the UL. In this, grant-free schemes allow
multiple UEs to transmit within the same resources without explicit scheduling
grants. The detection of an individual user’s data is
possible due to the use of pseudorandomly selected signatures (with which the
data is scrambled) at the transmitter and a multiuser detector at the receiver [14].
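As a purely conceptual toy (not the actual NR grant-free scheme, and with an arbitrary signature length), the idea of separating overlapping uplink transmissions by per-UE signatures can be illustrated with a CDMA-like correlation sketch:

```python
# Toy illustration of signature-based grant-free multiplexing: each UE
# spreads its symbol with its own pseudorandom +/-1 signature, the
# transmissions overlap on the same resources, and the receiver separates
# the users by correlating with each known signature.
import random

random.seed(7)
SF = 64  # signature length (spreading factor); illustrative value

def signature():
    """One pseudorandom +/-1 signature sequence."""
    return [random.choice((-1, 1)) for _ in range(SF)]

sig_a, sig_b = signature(), signature()

# UE A sends symbol +1, UE B sends symbol -1; the channel superimposes them.
received = [1 * a + (-1) * b for a, b in zip(sig_a, sig_b)]

def despread(rx, sig):
    """Correlate with one user's signature; the sign recovers that symbol."""
    return sum(r * s for r, s in zip(rx, sig)) / SF

print(despread(received, sig_a))  # close to +1 (UE A's symbol)
print(despread(received, sig_b))  # close to -1 (UE B's symbol)
```

The residual error of each estimate is the cross-correlation between the two random signatures, which shrinks as the signature length grows; multiuser detectors in the actual proposals are considerably more sophisticated than this single correlation.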

14.3.2.1 PHY Layer NR Numerology


To capture the different requirements for latency along with the different frequency
bands, a concept of a frame structure with multiple numerologies has been proposed
[14]. The numerologies are defined by scaling the baseline subcarrier
spacing (15 kHz) by an integer factor, up to a maximum of 480 kHz.
The larger the subcarrier spacing, the shorter the symbol duration. At 15 kHz,
the symbol duration (without CP) is equal to 66.67 µs, whereas for the maximum
of 480 kHz, the symbol duration shortens down to 2.08 µs. The CP overhead
is also scaled, so that the ratio between the useful symbol and overhead

11. This configuration is currently considered for at least up to 40 GHz for eMBB and URLLC.
The discussion for the waveform for mMTC and frequencies beyond 40 GHz was still ongo-
ing at the time of writing.

is kept. The assumption is that the numerology is not tied to
the frequency band. However, it is reasonable to assume that the low-frequency
bands (sub-6 GHz) will use the smaller subcarrier separations (as they typically
operate in NLOS scenarios) to overcome the multipath causing high frequency
selectivity. High frequencies (millimeter wave) operate in LOS and do
not experience strong multipath (as elaborated in Section 14.3.1.1); therefore,
they can cope with larger subcarrier separations. This is also important at millimeter
wave for mobility scenarios, for which a larger subcarrier separation is
recommended, as the higher the frequency, the higher the Doppler shift at the
same speed (detailed values can be found in [30]).
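The scaling described above can be sketched numerically; the code below simply inverts the subcarrier spacing to get the symbol duration and applies the standard Doppler formula f_d = v·f/c (the 3.5-GHz and 28-GHz carriers are illustrative choices, not values from the text):

```python
# Symbol duration vs. subcarrier spacing, and Doppler shift vs. carrier
# frequency, for the candidate NR numerologies discussed above.
C = 299_792_458.0  # speed of light (m/s)

def symbol_duration_us(scs_khz):
    """OFDM symbol duration without CP = 1 / subcarrier spacing, in us."""
    return 1e6 / (scs_khz * 1e3)

def doppler_hz(freq_hz, speed_kmh):
    """Maximum Doppler shift f_d = v * f / c."""
    return speed_kmh / 3.6 * freq_hz / C

for scs in (15, 30, 60, 120, 240, 480):  # candidate spacings (kHz)
    print(f"{scs:4d} kHz -> symbol {symbol_duration_us(scs):6.2f} us")

# Doppler at 500 km/h grows linearly with carrier frequency, motivating
# wider subcarrier spacing at millimeter wave:
print(f"3.5 GHz @ 500 km/h: {doppler_hz(3.5e9, 500):.0f} Hz")
print(f" 28 GHz @ 500 km/h: {doppler_hz(28e9, 500):.0f} Hz")
```

The two endpoints match the figures in the text: 66.67 µs at 15 kHz and 2.08 µs at 480 kHz.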
Speaking of the time domain, there is currently an assumption of using a
single fixed subframe duration of 1 ms. However, as the slot can be either 7 or
14 OFDM symbols for the subcarrier spacing of up to 60 kHz and 14 OFDM
symbols for higher subcarrier spacing, the duration of the slots (that define the
schedulable units) will be different. The range of a slot can be from 1 ms (for
14 OFDM symbols and subcarrier spacing of 15 kHz) down to 31.25 µs (for
14 OFDM symbols and subcarrier spacing of 480 kHz). A single slot can be
fully used for DL or UL, or can cover a DL transmission part, a gap period, and a UL
transmission part. This can be dynamically changed from slot to slot to
capture the immediate traffic changes. The data can be allocated for a
single slot or multiple slots. Independent of the subcarrier separation, the physical
resource block (PRB) is composed of 12 consecutive subcarriers and 1 time slot.
The different numerologies can be multiplexed in a single NR carrier bandwidth,
both in the frequency domain (FDM) and in the time domain (TDM);
for example, one subframe can use 15 kHz, while the next one can use 60 kHz.
Multiplexing can be done, for example, for the reliability/latency requirements
for different services, where, for instance, in eMBB the UP latency can be set
to be 4 ms, while for URLLC it can be configured to be 0.5 ms. Shortening
the slot by increasing the subcarrier spacing is one way in which the short
URLLC latency can be achieved. The other is the use of mini-slots, the
lowest schedulable resource. The mini-slots are located within the slot
and are much shorter than 0.5 ms, to meet the 1-ms E2E latency requirement
of some URLLC applications [14, 30]. Additionally, a flexible channel
bandwidth should be supported for NR (as for LTE), with the widest single
component carrier’s channel bandwidth being not smaller than 80 MHz (compared
to the LTE maximum CC BW of 20 MHz) [14].
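A small sketch of the slot-duration arithmetic above; the helper assumes only that a 14-symbol slot at 15 kHz lasts 1 ms and that the slot scales inversely with the subcarrier spacing (an illustration, not a 3GPP formula):

```python
# Slot duration per numerology, CP overhead included: a 14-symbol slot at
# the 15-kHz baseline lasts 1 ms, shrinking in proportion as the subcarrier
# spacing is scaled up.

def slot_duration_us(scs_khz, symbols=14):
    """Slot duration in microseconds (14 symbols at 15 kHz = 1 ms)."""
    return 1000.0 * (15.0 / scs_khz) * (symbols / 14.0)

for scs in (15, 30, 60, 120, 240, 480):
    d = slot_duration_us(scs)
    note = "  <= 0.5-ms UP latency budget" if d <= 500 else ""
    print(f"{scs:3d} kHz -> {d:7.2f} us{note}")

# 7-symbol slots are allowed for spacings up to 60 kHz, halving the
# schedulable unit without changing the numerology:
print(f"7 symbols at 60 kHz -> {slot_duration_us(60, symbols=7):.1f} us")
```

The endpoints reproduce the range quoted in the text: 1 ms at 15 kHz down to 31.25 µs at 480 kHz; any spacing of 30 kHz or above yields slots inside a 0.5-ms UP latency budget.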

14.3.2.2 Multiantenna Schemes and Beam Management


The legacy-type MIMO schemes considered for NR include [14]: DMRS-
based SU/MU-MIMO with at least 8 layers (relating to 8 spatial streams) for
DL; DMRS-based SU/MU-MIMO with at least 4 layers for UL, with dynamic
switching between modes using both codebook and non-codebook-based transmission;
and spatial diversity for control channels and CoMP-type schemes.
Additionally, an important aspect in multiantenna operation in NR is beamforming,
which is a specifically good match for millimeter wave (however, it can
also be used in the sub-6-GHz bands, as standardized in LTE). Beamforming
plays an important role because the higher the frequency, the shorter the wavelength;
thus, antenna sizes and the separation between individual antennas can be very
small, enabling a large number of antenna elements to be put in a panel, achieving
very precise and narrow beams. Also, the larger the number of antennas
(massive MIMO), the more stable the channel and the less fading is experienced,
resulting in the channel hardening effect [31]. Beam management is a set
of functionalities incorporated in the NR design to enable establishing and maintaining
the transmission/reception point’s (TRP12) and the UE’s beams for DL and
UL transmission. In NR, the cell is defined by a single set of synchronization
signals and can be composed of multiple beams; thus, switching between the
beams in this scenario is not considered a handover, but is managed at the
L1/L2 level. With respect to this, NR supports both cell-level mobility with
RRC involvement and beam-level mobility with minimal or no RRC
involvement; see Figure 14.1 [32]. Therefore, both DL and UL designs incor-
porate the beam management functionality, by introducing a set of procedures,
like beam determination (operation of TRP or UE to select transmit/receive

Figure 14.1 Beam-level and cell-level mobility. (After: [32].)

12. A TRP is defined as an antenna array located at specific location.



beam), measurement, reporting, and beam sweeping [14]. The TRP sweeps through
the cell coverage in the time domain by activating certain beams. In the DL, the
synchronization signals (NR-SS) and the NR-PBCH can be transmitted in different
time intervals for the UE to determine which beam is the best one (for
initial access). In the UL, the TRP can obtain knowledge of the best UL beam
via the RACH procedure, where the RAP is repeated for each beam [14].
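The geometric argument for large arrays at millimeter wave can be made concrete; the sketch below assumes the usual half-wavelength element spacing and a textbook uniform-linear-array beamwidth approximation (HPBW ≈ 0.886·λ/(N·d) radians), with an 8 × 8 panel at 28 GHz as an illustrative example:

```python
# Array geometry at millimeter wave: why 64+ elements fit in a small panel
# and produce narrow beams. Values below are illustrative, not from a spec.
import math

C = 299_792_458.0  # speed of light (m/s)

def element_spacing_mm(freq_hz):
    """Half-wavelength element spacing, the usual array design point."""
    return C / freq_hz / 2 * 1000.0

def panel_side_cm(freq_hz, n_elements_per_side):
    """Approximate side length of a square panel of half-wavelength elements."""
    return element_spacing_mm(freq_hz) * n_elements_per_side / 10.0

def hpbw_deg(n_elements, spacing_wl=0.5):
    """Rough broadside half-power beamwidth of a uniform linear array:
    HPBW ~ 0.886 * lambda / (N * d) radians."""
    return math.degrees(0.886 / (n_elements * spacing_wl))

# An 8x8 (64-element) panel at 28 GHz:
print(f"element spacing : {element_spacing_mm(28e9):.1f} mm")
print(f"panel side      : {panel_side_cm(28e9, 8):.1f} cm")
print(f"HPBW            : {hpbw_deg(8):.1f} deg per 8-element dimension")
```

A 64-element panel at 28 GHz thus measures only a few centimeters per side while steering beams of roughly a dozen degrees, which is what makes the narrow-beam operation and beam sweeping described above practical.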

14.3.3 Protocol Stack Consideration


The baseline for Layer 2 protocols and RRC functions for NR is the LTE proto-
col stack, while the optimizations of individual protocol instances for different
services are being studied.

14.3.3.1 Layer 2
Some modifications include a study regarding whether the segmentation function
should be configurable per service or whether the concatenation function should be
moved to a lower L2 sublayer. On the MAC level, there are considerations to use
only asynchronous HARQ in both UL and DL, with multiple HARQ processes.
Regarding the different numerologies on the PHY layer, discussed in Section
14.3.2.1, NR should support mapping of the logical channels to the new
numerologies and TTI durations. In terms of flow aggregation, both CA and DC
are being considered in NR in a similar fashion to LTE [16].

14.3.3.2 Layer 3: RRC


For the system information (SI), an ongoing study considers moving from the
periodic transmission of SI used in LTE toward more on-demand SI
provisioning (broadcast or provided via dedicated signaling), where only the
minimum access information is provided periodically [16]. Another RRC
aspect under study is the introduction of a new RRC state,
RRC_CONNECTED_INACTIVE (also referred to as the “new state”), which
overcomes the high signaling overhead resulting from frequent state changes
between RRC_IDLE and RRC_CONNECTED in certain uses. In this new design,
the RAN-CN connection is maintained and the UE context is stored while the
UE is in a power-saving mode without an active RRC connection (i.e., without
dedicated resources). The aim is to enable quick connection setup for data
transmission once the UE is attached to the network, without falling back to
RRC_IDLE mode. This is similar to the NB-IoT resume/request procedure
discussed in Chapter 13; however, in NB-IoT there is no new state defined, and
limited UE mobility is assumed. To reach a UE in RRC_CONNECTED_INACTIVE,
RAN-based paging could be used. Thus, instead of the tracking area-based paging
known from LTE, a RAN-based notification area can be defined, within
which the UE can move freely without notifying the network [16, 33].
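The signaling saving that motivates RRC_CONNECTED_INACTIVE can be illustrated with a toy state machine. This is a minimal sketch, not the 3GPP procedure set: the event names and the per-transition signaling costs are invented for illustration, and the actual NR state handling was still under study at the time of writing.

```python
from enum import Enum, auto

class RrcState(Enum):
    IDLE = auto()                  # no UE context in the RAN, CN-based paging
    CONNECTED = auto()             # dedicated resources, full context
    CONNECTED_INACTIVE = auto()    # context kept, RAN-CN connection kept,
                                   # no dedicated radio resources

# Illustrative transition table: (state, event) -> (next state, signaling cost).
# Costs are arbitrary units, chosen only to show why resuming from the
# inactive state is cheap: context and RAN-CN connection are resumed,
# not re-established.
TRANSITIONS = {
    (RrcState.IDLE, "setup"): (RrcState.CONNECTED, 10),   # full setup incl. CN signaling
    (RrcState.CONNECTED, "inactivity"): (RrcState.CONNECTED_INACTIVE, 1),
    (RrcState.CONNECTED_INACTIVE, "resume"): (RrcState.CONNECTED, 2),
    (RrcState.CONNECTED, "release"): (RrcState.IDLE, 1),
}

def drive(state, events):
    """Apply a sequence of events; return the final state and total cost."""
    total = 0
    for ev in events:
        state, cost = TRANSITIONS[(state, ev)]
        total += cost
    return state, total

# Two data bursts: via RRC_IDLE each burst pays the full setup, whereas via
# RRC_CONNECTED_INACTIVE only the first one does.
_, cost_idle = drive(RrcState.IDLE, ["setup", "release", "setup", "release"])
_, cost_inactive = drive(RrcState.IDLE, ["setup", "inactivity", "resume", "inactivity"])
print(cost_idle, cost_inactive)  # 22 14
```

With frequent small transmissions, the gap between the two totals grows with every burst, which is exactly the overhead the new state is meant to remove.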
Toward 5G 329

14.4 5G Architecture Concepts


Similar to the air interface considerations from the previous section, the key
requirement for the overall 5G architecture is to support the variety of use cases,
services, and needs. Additionally, different business models, such as network-
as-a-service (NaaS), need to be supported. The capabilities that are needed to
achieve this include the possibility to tailor and optimize the network opera-
tion for each usage together with high programmability and scalability. The
introduction of software-defined networking (SDN) [34] and network func-
tions virtualization (NFV) [35] enables the shift from specialized hardware-
based networks to programmable software-based architectures. The key in this
approach is to define a set of network functions that should be interconnected
and mapped to different locations [points of presence (PoP)] per need using
commercial/commodity hardware, instead of defining the specialized network
elements. The other aspect for 5G is the possibility to reuse the LTE network
(with the required improvements) as per the “support for multiple access
technologies” requirement [5]. Taking these considerations into account, the
following studies are being done within 3GPP: network slicing, RAN flexibility,
and tight interworking with LTE. The 5G system will encapsulate both the NR
and the evolved LTE (eLTE), connected to the new core network (NGC). A
smooth migration from the current LTE toward fully fledged 5G (including
eLTE and NR) is also required and is part of the study. From the outset, 5G
also assumes tight interworking with WiFi alongside LTE, where the LWA, LWIP,
and RCLWI features (see Chapter 13) are supposed to be the baseline for NR-
WLAN interworking [16]. In terms of legacy systems, the 5G system is
assumed not to consider 2G and 3G in the following aspects: no support for CS
voice call continuity or fallback, no support for 2G/3G access to the NGC, and no
support for seamless handover between those [5].

14.4.1 5G Network Flexibility


The concept of network slicing was originally introduced by NGMN in
its 5G white paper [36] and then detailed in [37]. This concept enables the use
of a single network infrastructure to deploy different (or similar) logical networks
in the form of network slices: sets of network functions and resources (including
computation, storage, and transport resources) that provide the required
services and capabilities per use case. A single tenant (e.g., a company requiring
resources in the network) may require one or multiple slices (i.e., a complete
and fully operational network customized to the business case) that can be
dynamically created, modified, and deleted [5]. The architectural proposal from
3GPP is shown in Figure 14.2 [10]. Each slice can have dedicated (also called
slice-specific) functions, for example, different session management

Figure 14.2 Network slicing concept. (After: [10].)



for each service. Some slices can have common functions (e.g., mobility
management). The slice selection function handles the initial UE attach or a new
session establishment request to select the proper slice for a UE depending on UE
capabilities (e.g., slice support), subscription, and service type. It acts as a
routing function that links the UE with the proper CN part of the NW slice based
on the slice ID [10]. By customizing the NFs and their location, the operation of
the network can be optimized for each service type with respect to its requirements.
For example, services requiring low latency can have the critical NFs placed
close to the network edge, while stationary mMTC services that may not require
mobility support (or only limited capabilities) could do away with mobility-
related NFs. In MOCN scenarios, where multiple operators use the same
network for the same service type, the slicing should assure a level of independence
(also known as isolation) in terms of, for example, congestion: congestion in one
slice should not have an impact on another slice. Another domain requiring
isolation is security, where one tenant (e.g., an MNO) uses security mechanisms
independent of the others. The UE can be preconfigured (or not) with the
supported slice IDs, but it is always up to the network to decide to which slices
the UE can be connected. The slice ID should reflect the tenant ID (i.e., operator)
and the service type (e.g., eMBB, URLLC, mMTC). The UE may or may not
support connectivity to multiple network slices at the same time (in which case
the slices should belong to the same operator).
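The slice selection function described above can be sketched as a simple routing decision. The slice IDs, attributes, and selection policy below are hypothetical, purely to illustrate how UE capability, subscription, and service type feed the decision; the real function is defined by the network, not the UE.

```python
# Illustrative sketch of a slice selection function: route an attaching UE to
# a CN slice based on its supported slice IDs, subscription, and service type.
# All identifiers and the selection policy are hypothetical, not 3GPP-defined.

SLICES = {
    "op1-embb":  {"tenant": "op1", "service": "eMBB"},
    "op1-urllc": {"tenant": "op1", "service": "URLLC"},
    "op1-mmtc":  {"tenant": "op1", "service": "mMTC"},
}

def select_slice(ue_supported_ids, subscribed_ids, service_type):
    """Pick a slice the UE supports, the subscription allows, and that
    matches the requested service type; the network has the final say."""
    for slice_id, attrs in SLICES.items():
        if slice_id not in subscribed_ids:
            continue                      # subscription check
        if ue_supported_ids and slice_id not in ue_supported_ids:
            continue                      # optional UE preconfiguration check
        if attrs["service"] == service_type:
            return slice_id               # link the UE to this CN slice
    return None                           # reject, or fall back to a default slice

print(select_slice(["op1-embb", "op1-urllc"], ["op1-embb"], "eMBB"))  # op1-embb
```

Note that the UE's preconfigured list only narrows the choice; the final say stays with the network, matching the text above.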
The RAN should support network slicing in terms of [13]:

• Being able to handle traffic from different slices differently;


• Being able to select the RAN part for a particular NW slice;
• Being able to support multiple slices in a single node and to apply
different RRM policies, as per the SLA, for each of the supported slices;
• Being able to support QoS differentiation within a slice, where the
proposal is that the RAN itself maps the flows coming from the CN to
data radio bearers (DRBs), without the one-to-one binding to a PDU
session required in the legacy EPS (this is reflected in the proposal of a
new L2 protocol for mapping QoS flows to radio bearers). This enables
differentiated traffic handling even within a flow; for example, the
slow-start phase of a TCP session can be handled differently from its
later stages (e.g., by using different latency settings);
• Being able to isolate resources between slices by means of RRM
policies and protection mechanisms, to avoid situations where
congestion in one slice breaks the SLA of any of the other slices;

• Being aware of the support of the slices in the neighboring cells (not all
slices may be available in all cells in the NW);
• Being able to support the UE association with multiple slices simultane-
ously with one signaling (RRC) connection.
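The proposed RAN-side mapping of QoS flows to DRBs (without the EPS one-to-one bearer binding) can be sketched as follows. The grouping policy, bucketing flows by a latency class, is invented for illustration; the actual mapping rules of the new L2 protocol were still under study.

```python
# Sketch of the proposed RAN-side mapping of QoS flows to data radio bearers
# (DRBs): unlike the EPS one-to-one bearer binding, the RAN may group several
# flows onto one DRB, or later remap a flow as its handling changes. The
# grouping policy below (bucket flows by latency class) is purely illustrative.

def map_flows_to_drbs(flows):
    """flows: list of (flow_id, latency_class) tuples coming from the CN.
    Returns {drb_id: [flow_ids]}, one DRB per latency class."""
    drbs = {}
    for flow_id, latency_class in flows:
        drb_id = f"drb-{latency_class}"
        drbs.setdefault(drb_id, []).append(flow_id)
    return drbs

flows = [("video", "50ms"), ("web", "300ms"), ("voice", "50ms")]
print(map_flows_to_drbs(flows))
# {'drb-50ms': ['video', 'voice'], 'drb-300ms': ['web']}
```

Because the mapping lives in the RAN, the same flow could even be remapped mid-session, which is what enables the per-phase TCP handling mentioned in the list above.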

14.4.2 System Architecture


The RAN for 5G (New RAN) will support both the NR and the evolved LTE
(eLTE) connected to the NGC network [13]. The eLTE is the evolution of the
legacy LTE system that supports both the NGC and EPC and uses the eLTE
eNodeB13. The NR base station is called gNB. The NGC is composed of the
UP and CP functions. The NG-UP is composed of user plane functions (UPF).
From the RAN perspective, the termination point of NG-U is also known as
UP gateway (UPGW). The NG-CP is composed of a set of NFs including, for
example, core access and mobility management function (AMF), session man-
agement function (SMF), policy control function (PCF), authentication server
function (AUSF), and unified data management (UDM) [10]. The NAS signaling
is terminated at the UE and at the AMF, which is responsible for access
authentication, authorization, and mobility management. Compared to the EPC,
the NG-CP is designed with higher granularity; for example, the MME is split
into the AMF and the SMF. This enables applying the slicing concept described in
the previous section. For instance, the AMF can be common to multiple slices
and act as a single contact point for the UE, while the SMF can provide
slice-specific session handling, combined with an individual UPF, to realize the
isolation and optimization for a service type.
The new RAN architecture is shown in Figure 14.3, as seen from the
RAN3 perspective [13]. In this architecture, both the gNB and the eLTE eNodeB
are connected to the NGC via the NG interface, composed of a CP part (NG-C)
and a UP part (NG-U14) (similar to S1-C and S1-U of the EPC architecture). The
NG interface supports a many-to-many relation. Xn is an interface similar to
X2 (known from legacy LTE), but it supports both intra-RAT and inter-RAT
connectivity (i.e., Xn can be configured between eLTE eNodeBs, between NR
gNBs, and between an NR gNB and an eLTE eNodeB). The new
RAN-specific functions should include: network slicing support (as elaborated
in Section 14.4.1), tight interworking with LTE (data flow aggregation, as
elaborated in Section 14.4.3), multiconnectivity (one step beyond DC, i.e., support
of link aggregation with more than two links), handover between eLTE and NR
using the direct Xn interface (similar to the regular intra-RAT handover), and

13. eLTE is the name used in 3GPP documents during the study item phase and is thus used herein.
14. The SA2 [10] uses different naming for these interfaces; for example, NG-C is referred to as
NG2 and NG-U as NG3.

Figure 14.3 New RAN architecture. (© 2016. 3GPP™ TSs and TRs are the property of ARIB, ATIS, CCSA, ETSI, TTA, and TTC.)

contacting UEs in RRC_CONNECTED_INACTIVE (if the new state is introduced
in 5G) [13].

14.4.2.1 Internal RAN Split


In the 5G RAN, four deployment options are considered [13]: noncentralized
deployment, where the full protocol stack is supported at the gNB, which can
connect to other gNBs via the Xn interface; cosited deployment, where the NR
functionality is cosited with eLTE; shared deployment, where multiple operators
share the RAN; and centralized deployment, where the NR supports
centralization of the upper layers of the NR protocol stack, while the lower layers
are distributed.
In the centralized deployment, the gNB is split into the central unit (CU)
and the distributed unit (DU), connected via a fronthaul interface.
The functional split between them may depend on the transport layer and
scenario requirements, while different options for the RAN architecture split
should be possible. Eight potential split options are considered, as shown in Table
14.4.15 The higher in the protocol stack the split is placed, the lower the required
performance of the transport network (e.g., lower transmission bandwidth). In this
configuration, DC types of features are feasible. At the other end, the lower
the split in the protocol stack, the more advanced and higher-performance
intersite resource management is achievable, while high-performance
transport with low latency and high bandwidth is required. If there is flexibility to
freely configure any of the options, it will allow adapting the configuration
to the use-case requirements (e.g., depending on the required latency
or throughput) [13].
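The trade-off above can be sketched as a selection rule that picks a functional split from the fronthaul transport capability. The numeric latency and bandwidth thresholds below are invented for illustration only; Table 14.4 gives only the qualitative trend (the lower the split point, the tighter the transport requirements).

```python
# Sketch of choosing a CU/DU functional split from the fronthaul capability.
# Requirements loosely follow the qualitative trend of Table 14.4 (lower split
# point -> tighter latency and higher bandwidth demands); the numeric
# thresholds here are invented for illustration, not standardized values.

SPLIT_REQUIREMENTS = [
    # (option, max one-way latency in ms, min bandwidth in Gb/s)
    ("Option 8 (PHY-RF)",     0.25, 100.0),
    ("Option 7 (intra-PHY)",  0.25,  50.0),
    ("Option 6 (MAC-PHY)",    1.0,   10.0),
    ("Option 5 (intra-MAC)",  1.0,   10.0),
    ("Option 2 (below PDCP)", 10.0,   5.0),
]

def choose_split(latency_ms, bandwidth_gbps):
    """Return the lowest (most centralized) split the transport can carry;
    fall back to the PDCP split, which tolerates high-latency transport."""
    for option, max_latency, min_bw in SPLIT_REQUIREMENTS:
        if latency_ms <= max_latency and bandwidth_gbps >= min_bw:
            return option
    return "Option 2 (below PDCP)"

print(choose_split(0.1, 200.0))   # ideal fiber fronthaul -> Option 8 (PHY-RF)
print(choose_split(5.0, 1.0))     # relaxed transport -> Option 2 (below PDCP)
```

A flexible deployment could run such a rule per site, matching the remark above that freely configurable splits allow adapting to the use-case requirements.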

14.4.3 Tight Interworking with eLTE


3GPP RAN specifies multiple options for the new RAN deployment
configurations: stand-alone eLTE, stand-alone NR, non-stand-alone eLTE, and
non-stand-alone NR. The stand-alone approach uses a configuration where either
the eLTE eNodeB or the NR gNB is directly connected to the NGC using NG-C
and NG-U. The non-stand-alone approach is a configuration where either eLTE
or NR requires the other RAT to serve as a signaling anchor (i.e., an NR gNB,
LTE eNodeB, or eLTE eNodeB serving as the master node). The currently
considered options are provided in Table 14.5 [13, 38]. Note that these are not
separate logical architecture options, but rather scenarios showing how the UE
can connect to the system; thus, a combination is possible.
In non-stand-alone operation, the new RAN should be able to provide
transmission using the NR and eLTE simultaneously to a single UE (if

15. The LTE protocol stack is taken as a basis for the table, as the NR protocol stack was still
under development at the time of this writing.

Table 14.4
Functional Split Options for NR [13]

Option 1: above PDCP split. CU: RRC; DU: PDCP, RLC, MAC, PHY, RF. Transport: high latency, small bandwidth.
Option 2: below PDCP split (DC-like architecture; allows for traffic aggregation of NR and E-UTRA). CU: RRC, PDCP; DU: RLC, MAC, PHY, RF. Transport: high latency, small bandwidth.
Option 3: intra-RLC split. CU: RRC, PDCP, asynchronous part of RLC (e.g., segmentation); DU: synchronous part of RLC (e.g., ARQ), MAC, PHY, RF. Transport: high latency, small bandwidth.
Option 4: RLC-MAC split. CU: RRC, PDCP, RLC; DU: MAC, PHY. Transport: medium latency, medium bandwidth (increased by RLC headers).
Option 5: intra-MAC split (allows, e.g., centralized scheduling schemes such as CoMP). CU: RRC, PDCP, RLC, upper MAC (e.g., priority handling); DU: lower MAC (e.g., HARQ), PHY, RF. Transport: low latency, medium bandwidth (increased by MAC headers and padding).
Option 6: MAC-PHY split. CU: RRC, PDCP, RLC, MAC; DU: PHY, RF. Transport: low latency, medium bandwidth (increased by MAC headers and padding).
Option 7: intra-PHY split. CU: RRC, PDCP, RLC, MAC, upper PHY (e.g., channel coding); DU: lower PHY (e.g., FFT, CP), RF. Transport: very low latency, high bandwidth (increased by PHY layer overhead).
Option 8: PHY-RF split (CPRI-like). CU: RRC, PDCP, RLC, MAC, PHY; DU: RF. Transport: very low latency, high bandwidth.

the UE is dual-radio capable) through tight interworking by means of dual
connectivity. Here, one RAT is configured as providing the master CG and the other
as the secondary CG. Once the UE is connected, the RAN should
be able to select the radio access for each data flow depending on, for example,
the service, traffic, radio characteristics, UE mobility, and device type [9, 13]. For
the tight interworking dual connectivity, the following bearer configurations
are considered: the MCG bearer, the split bearer via MCG, and the SCG bearer
(these are similar to legacy DC). The split bearer via SCG is also being evaluated
for Option 3; however, it requires further justification. Figure 14.4 presents
the protocol architectures with regard to the specific options from Table
14.5 for the tight interworking between LTE-NR and eLTE-NR [13] (during

Table 14.5
Architecture Options (Selected Subset Considered Currently in 3GPP)

Option 1: stand-alone LTE, EPC connected. LTE eNodeB connected to the EPC (baseline, legacy). LTE role: single connectivity; NR role: N/A.
Option 2: stand-alone NR, NGC connected. NR gNB connected to the NGC. LTE role: N/A; NR role: single connectivity.
Option 3: non-stand-alone/LTE-assisted, EPC connected. Data-flow aggregation across the LTE eNodeB and NR gNB via the EPC; S1-U and S1-C connection to LTE, Xx interface for DC. LTE role: legacy LTE as master node in DC, signaling anchor; NR role: NR for UP only, as secondary node in DC, capacity booster.
Option 3a: non-stand-alone/LTE-assisted, EPC connected. Data-flow aggregation across the LTE eNodeB and NR gNB via the EPC; S1-U and S1-C connection to LTE, S1-U connection to NR. LTE role: legacy LTE as master node in DC, signaling anchor; NR role: NR for UP only, as secondary node in DC, capacity booster.
Option 4: non-stand-alone/NR-assisted, NGC connected. Data-flow aggregation across the NR gNB and eLTE eNodeB via the NGC; NG-C and NG-U connection to NR, Xn interface for eLTE UP. LTE role: eLTE for UP only, as secondary node in DC, capacity booster; NR role: NR as master node in DC, signaling anchor.
Option 4a: non-stand-alone/NR-assisted, NGC connected. Data-flow aggregation across the NR gNB and eLTE eNodeB via the NGC; NG-C and NG-U connection to NR, NG-U connection to eLTE. LTE role: eLTE for UP only, as secondary node in DC, capacity booster; NR role: NR as master node in DC, signaling anchor.
Option 5: stand-alone eLTE, NGC connected. eLTE eNodeB connected to the NGC. LTE role: single connectivity; NR role: N/A.
Option 7: non-stand-alone/LTE-assisted, NGC connected. Data-flow aggregation across the eLTE eNodeB and NR gNB via the NGC; NG-C and NG-U connection to eLTE, Xn interface for NR UP. LTE role: eLTE as master node in DC, signaling anchor; NR role: NR for UP only, as secondary node in DC, capacity booster.
Option 7a: non-stand-alone/LTE-assisted, NGC connected. Data-flow aggregation across the eLTE eNodeB and NR gNB via the NGC; NG-C and NG-U connection to eLTE, NG-U connection to NR. LTE role: eLTE as master node in DC, signaling anchor; NR role: NR for UP only, as secondary node in DC, capacity booster.

the production of this book, standardization progressed, and a new sublayer
above PDCP was proposed for NR). Option 3 enables introducing NR while
still using the EPC and legacy LTE, by reusing the DC baseline from legacy LTE
(the EPC is not impacted, as NR is hidden from the CN). The
Xx interface in this option allows providing the split bearer and CP functions (it
will be very similar to X2), while Option 3a requires the NR SgNB to be connected

Figure 14.4 Protocol architecture options for tight interworking. (After: [13].)

directly to the EPC via S1-U for the SCG bearer. Options 7 and 7a are based on a
similar configuration (i.e., the eNodeB serves as the master); the difference
between the two is that here the eNodeB is connected to the NGC, and thus
enhancements are required to LTE to support the NG interface (hence, eLTE).
Options 4 and 4a are based on the opposite configuration (i.e., the NR gNB serves
as the master, while the eLTE eNodeB is connected either through Xn or NG-U,
for the split bearer or the SCG bearer, respectively). eLTE requires an upgraded
PDCP (on the UP stack), while the lower layers are the same as for legacy LTE. In
terms of RRC signaling, the UE is assumed to have one RRC state machine
related to the MCG, while the RAN can have two RRC entities (one for LTE and
one for NR). Both can generate ASN.1, but the ASN.1 provided by the
secondary node/RAT is transported via the master [16].
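The deployment options of Table 14.5 lend themselves to a small lookup structure; the sketch below encodes a subset of them so that the (core network, master RAT) combination can be queried. The encoding is illustrative only and simplifies the table (e.g., the 3a/4a/7a variants differ only in the UP connection and are omitted here).

```python
# Sketch encoding a subset of the Table 14.5 deployment options as data, so
# that the (core network, master RAT) combination can be looked up. The keys
# mirror the table; the helper and the simplified fields are illustrative only.

OPTIONS = {
    "Option 2": {"core": "NGC", "master": "NR",   "secondary": None},
    "Option 3": {"core": "EPC", "master": "LTE",  "secondary": "NR"},
    "Option 4": {"core": "NGC", "master": "NR",   "secondary": "eLTE"},
    "Option 5": {"core": "NGC", "master": "eLTE", "secondary": None},
    "Option 7": {"core": "NGC", "master": "eLTE", "secondary": "NR"},
}

def find_options(core=None, master=None):
    """Return the option names matching the given core and/or master RAT."""
    return [name for name, o in OPTIONS.items()
            if (core is None or o["core"] == core)
            and (master is None or o["master"] == master)]

print(find_options(core="NGC", master="eLTE"))  # ['Option 5', 'Option 7']
```

Viewing the options as data like this also makes the migration discussion in the next section easy to follow: each migration step is simply a move between rows of this table.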

14.4.4 Migration Toward the 5G System


The evolution of mobile networks toward the full 5G system should be
smooth, utilizing the existing infrastructure as much as possible while still
enabling early investments and the introduction of the NR. Therefore, a phased
migration path is envisioned.
the LTE baseline to a fully capable 5G system, taking into account the possible
connectivity options provided in Table 14.5. One of the approaches for migra-
tion is shown in Figure 14.5 and is based on [13, 39]:

• The starting point is the legacy LTE connected to the EPC. This
can serve as a coverage layer, while introducing NR in the higher fre-
quencies for the 5G Phase 1 (with NR to provide higher capacity in the
hotspots). For this scenario, the aim is to utilize the existing configura-
tion as much as possible; thus, Option 3/3a is a reasonable approach,
where the NR gNBs are added using the legacy DC configurations. This
option minimizes the impact on the EPC, while the NR is hidden from
the CN.
• In the second step, Options 7/7a enable the introduction of the NGC,
while still keeping LTE as the anchor and NR as the secondary node. This
requires an upgrade on the LTE side to support the NG interface toward the NGC.
One potential solution for a smooth transition from EPC to NGC is to
encapsulate the EPC as a slice within the NGC.
• In the final step, the NR can be connected to the NGC in stand-alone
mode (Option 2). Other configurations, with eLTE on equal terms with NR,
supporting both stand-alone mode (Option 5) and dual connectivity in both
directions (Options 4/4a and 7/7a), are also possible.

Figure 14.5 Potential migration path. (After: [13, 38].)



This approach helps keep the impact on S1 and the EPC minimal, to enable
a smooth introduction of NR from Release 15 and to achieve certain economies
of scale before introducing the full-blown NGC and the other use cases that can
be enabled by Release 16. For those operators interested in being
the first to upgrade the CN to the NGC, another migration path is possible, where
they go from Option 1 to Option 7/7a (phase 1) and finally to Option 2,
Option 4, and Option 5 (phase 2). Another, more aggressive deployment approach
is where the network is evolved from Option 1 directly to Option 2 and Option
4; this requires an upgrade of LTE to eLTE and EPC to NGC in a single step
[13].

References
[1] 3GPP TR 22.891, v14.2.0, “Feasibility Study on New Services and Markets Technology
Enablers (SMARTER),” September 2016.
[2] ITU-R WP 5D, “Workplan, Timeline, Process and Deliverables for the Future Develop-
ment of IMT.”
[3] 3GPP RWS-150073 “RAN Workshop on 5G: Chairman Summary,” September 2015.
[4] www.3gpp.org/specifications/work-plan.
[5] 3GPP TS 22.261, v1.0.0, “Service Requirements for Next Generation New Services and
Markets,” December 2016.
[6] 3GPP TR 22.861, v14.1.0, “Feasibility Study on New Services and Markets Technology
Enablers for Massive Internet of Things,” September 2016.
[7] 3GPP TR 22.862, v14.1.0, “Feasibility Study on New Services and Markets Technology
Enablers for Critical Communications,” September 2016.
[8] 3GPP TR 22.863, v14.1.0, “Feasibility Study on New Services and Markets Technology
Enablers for Enhanced Mobile Broadband,” September 2016.
[9] 3GPP TR 22.864, v15.0.0, “Feasibility Study on New Services and Markets Technology
Enablers - Network Operation,” September 2016.
[10] 3GPP TR 23.799, v14.0.0, “Study on Architecture for Next Generation System,”
December 2016.
[11] 3GPP TR 33.899, v0.6.0, “Study on the Security Aspects of the Next Generation System,”
November 2016.
[12] 3GPP TR 38.900, v14.2.0, “Study on Channel Model for Frequency Spectrum Above 6
GHz,” December 2016.
[13] 3GPP TR 38.801, v1.0.0, “Study on New Radio Access Technology; Radio Access
Architecture and Interfaces,” December 2016.
[14] 3GPP TR 38.802, v1.0.0, “Study on New Radio (NR) Access Technology; Physical Layer
Aspects,” November 2016.

[15] 3GPP TR 38.803, v1.0.0, “Study on New Radio Access Technology; RF and Coexistence
Aspects,” December 2016.
[16] 3GPP TR 38.804, v0.4.0, “Study on New Radio Access Technology; Radio Interface
Protocol Aspects,” November 2016.
[17] 3GPP TR 38.805, v0.0.2, “Study on New Radio Access Technology; 60 GHz Unlicensed
Spectrum,” December 2016.
[18] 3GPP TR 38.912, v0.0.2, “Study on New Radio Access Technology,” September 2016.
[19] 3GPP TR 38.913, v14.1.0, “Study on Scenarios and Requirements for Next Generation
Access Technologies,” December 2016.
[20] Recommendation ITU-R M.2083, “IMT Vision – Framework and Overall Objectives of
the Future Development of IMT for 2020 and Beyond,” September 2015.
[21] 3GPP R1-162153, “Overview of Non-Orthogonal Multiple Access for 5G,” April 2016.
[22] 3GPP RP-160324, “Text Proposal to TR 38.913 for Mapping of KPIs to Scenarios,”
March 2016.
[23] 3GPP RP-160671, “New SID Proposal: Study on New Radio Access Technology,” March
2016.
[24] 3GPP RP-170855, “Work Item on New Radio (NR) Access Technology.”
[25] 3GPP RPa-160064, “Technical Requirements for Next Generation Radio Access
Technologies,” January 2016.
[26] 3GPP RP-162213, “Considerations on New Band Structure for 5G,” December 2016.
[27] U.S. FCC, “Millimeter Wave Propagation: Spectrum Management Implications,” July
1997.
[28] Yu, Y., et al., “Integrated 60GHz RF Beamforming in CMOS,” Analog Circuits and Signal
Processing, 2011.
[29] METIS D5.4, “Future Spectrum System Concept,” April 2015, https://www.metis2020.
com.
[30] 3GPP R1-1611366, “NR Time Domain Structure: Slot and Mini-Slot and Time Interval,”
November 2016.
[31] Hochwald, B., T. Marzetta, and V. Tarokh, “Multi-Antenna Channel-Hardening and
Its Implications for Rate Feedback and Scheduling,” IEEE Transactions on Information
Theory, September 2004.
[32] 3GPP R2-163437, “Beam Terminology,” May 2016.
[33] 3GPP R2-162520, “General Principles and Paging Optimization in Light Connection,”
April 2016.
[34] Open Networking Foundation, “Software-Defined Networking (SDN) Definition,”
https://www.opennetworking.org/software-defined-standards/.
[35] ETSI, “Network Functions Virtualization – Introductory White-Paper,” October 2012.
[36] NGMN Alliance, “5G White-Paper,” February 2015.

[37] NGMN Alliance, “Description of Network Slicing Concept,” September 2016.


[38] 3GPP RP-161266, “5G Architecture Options – Full Set,” June 2016.
[39] 3GPP R2-165031, “AT&T Views on 5G Architecture Evolution,” September 2016.
About the Authors
Moe Rahnema received an M.Sc. and a degree in engineering sciences from
the Aeronautics and Astronautics Department of the Massachusetts Institute of
Technology (MIT) in 1980 and 1981, and the M.Sc./engineer degree from the
Department of Electrical and Computer Engineering at Northeastern University
in 1987, all with Ph.D.-equivalent coursework.
Mr. Rahnema has served as a lead R&D engineer, design and development
engineer, technical trainer, and consultant in GTE Labs, Motorola, Hughes
Network Systems, and LCC International in the United States and as an inde-
pendent senior-level industry consultant in the planning and optimization of
cellular networks, particularly from the RF/radio access side, in the past 15 years
for various operators and vendors such as Ericsson, Nokia, Telcel, Telkomsel,
Excel, Axis, and Huawei in Asia, Latin America, and Russia.
Mr. Rahnema was closely involved in the planning and analysis of the first
LTE network in Mexico, where he received the Huawei certificate of recogni-
tion for the most outstanding individual and team leadership performance in
July 2011. He was also granted membership in the Motorola Scientific and
Technical Society for significant technical contributions and received a certificate
of teamwork recognition. He has been granted nine patents (some joint), all
in telecommunications, and has published numerous technical papers in IEEE
magazines and journals, as well as a design and optimization-oriented technical
book on UMTS, published in 2008 by Wiley. He has also developed and pre-
sented expert level technical training workshops on GSM, GPRS, 3G and LTE
in Telcel/Mexico, HNS/USA, Maxis/Malaysia, and Huawei.
Marcin Dryjanski received an M.Sc. degree in telecommunications from
the Poznan University of Technology in Poland in June 2008. During the
past 10 years, Marcin has served as an R&D engineer, lead researcher, R&D


consultant, technical trainer, and technical leader. He has extensive experience
in the PHY/MAC/RRM/RAN design for systems like LTE/LTE-Advanced/
LTE-Advanced Pro and 5G.
From 2008 to 2014, Mr. Dryjanski worked at IS-Wireless where he was
responsible for being the architect for software solutions targeting LTE/LTE-
Advanced PHY layer and L2/L3 protocol stack design and providing expert-
level training courses on LTE/LTE-Advanced, SON, radio interface design,
MIMO, OFDMA, and Radio and Core Network Protocols. The courses have
been provided to the leading mobile operators, vendors, and research institu-
tions all around the world including Saudi Arabia, Chile, the Netherlands, Swe-
den, Spain, and Poland.
Mr. Dryjanski has been involved in 5G design since 2012, when he served
as the work-package leader for EU-funded research projects aiming at radio
interface design for 5G, including 5GNOW and SOLDER. He coauthored a
number of research articles, including in IEEE magazines, targeting 5G radio
interface design and LTE-Advanced Pro features.
Since October 2014, he has been serving as an external contractor at Hua-
wei Technologies Sweden AB providing both an algorithm and architectural
RAN design for LTE-Advanced Pro and 5G systems working on features like
NB-IoT, traffic steering, 5G scheduler design, RAN functional framework, and
radio environment maps.
Mr. Dryjanski is a co-founder of Grandmetric, a competence delivery
company, heading the field of mobile wireless systems. In this role, he provides
consulting services and training courses on 5G-related topics and provides
technical insights on the company's blog.
Index
Access and mobility management function (AMF), 332
Access class barring, 215–16
Advanced Antenna Systems (AAS), 6
Air interface
	channels in LTE, 43–50
	data rates and spectral efficiencies, 63–69
	DL OFDMA and implementation, 34–40
	framing and physical synchronization signals, 26–32
	layer 2 protocol sublayers, 50–56
	LTE physical layer measurements, 75–79
	MBMS transmission, 33
	modulation and coding schemes and mapping tables, 56–63
	OFDM multi-user access mechanism, 23–25
	spectrum allocation, 22
	timing advance function, 32–33
	transmission modes and MIMO operation, 69–75
	UE categories, 79
	uplink SC-FDMA, 40–43
Application Server (AS), 212, 259, 261–62
Asymmetrical digital subscriber line (ADSL) modems, 9
Authentication server function (AUSF), 332
Automatic neighbor relation (ANR), 167–68
Automatic retransmission request (ARQ)
	client, 245, 246
	defined, 236
	proxy, 245, 246–47
Bandwidth-delay product (BDP), 228–29
Beam management, 326–28
Block error rate (BLER), 232
Break-Before-Connect (BBC) handovers, 131
Broadcast channel (BCH), 48
Broadcast control channel (BCCH), 49
Busy hour data session set up attempts (BHDSA), 194–95
Capacity
	estimation, 108–10
	optimization, 168
	quality analysis and verification, 107–11
	traffic demand estimation and site calculation and, 110–11
	voice, analysis, 265–69
Capacity planning, 124
Carrier aggregation (CA)
	connection signaling, 200–201
	enhancements, 294–96
	massive, 295
	resource scheduling, 201
	UE support, 296
	uplink enhancements, 295–96
346 From LTE to LTE-Advanced Pro and 5G

Cell-edge SINR, 114 overview, 81


Cell outage optimization, 170–71 radio link budgeting (RLB), 82–107
Cell selection and reselection, 129–30 CQI (channel quality indicator)
Cell specific reference symbols (CRS), 203 channel condition of, 68
Channel coding estimation, 61 feedback, 151
Channels index, 59, 64
logical, 49–50 mapping to MCS, 61–63, 150
overheads from downlink side, 89–90 measurements, 57, 150–51
overheads from uplink side, 90–91 reporting, 57–58, 150–51
physical, 45–47 to SINR mappings, 64–65
transport, 47–48 Critical communications (CriC), 318
Channel sounding reference symbols, 31 Cyclic redundancy check (CRC), 52, 58–59
Channel state information (CSI), 204 Cyclic shifts of ZC sequences, 118–19
Commercial Mobile Alert Service (CMAS),
143 Data rates, 63–69
Common control channel (CCCH), 49 Dedicated control channel (DCCH), 49
Component carriers (CCs), 294–95 Dedicated traffic channel (DTCH), 49
Congestion Delays, handover, 140–42
algorithm bottlenecks, 226 Demodulation, within UE, 36–38
avoidance phase, 226 Demodulation reference symbols (DRS),
flow control and, 224–25 30–31
slow start control phase, 225 Device-to-device (D2D) communication
Congestion window (CWND), 224–25 advantages, 297
Connection signaling, 200–201 architecture and protocols, 299–301
Constant Amplitude Zero Auto-Correlation in-coverage, 297–98
(CAZAC), 31 defined, 297
Contention-free random access (CFRA), 120 direct communication, 301–2
Contention window (CW) size, 290 out-of-coverage, 298
Coordinated multipoint transmissions and partial coverage, 298
reception (CoMP), 209–10 PC5 protocol stacks, 300
Cost 231 Hata model, 103–4 ProSe architecture, 300
Coverage radio interface: side link, 301
dimensioning, 107–8 scenarios, 297–99
LTE, 6–7 synchronization and broadcast, 302–3
optimization, 168 Differentiated service code points (DSCP),
optimum system bandwidth for, 190
111–12 Digital audio broadcasting (DAB), 9
performance, network load trade-offs, Digital-to-analog converters (DAC), 36
112–15 Dirac Comb function, 10
Coverage based ratio access network Discontinuous reception (DRX)
dimensioning, 98 cycles, 145
Coverage-capacity dimensioning flowchart, paging cycle, 143
109 for UEs, 145
Coverage capacity planning Discrete Fourier transform (DFT)
capacity quality analysis and coefficients, 14
verification, 107–11 frequency bins within, 17
network load and coverage performance frequency resolution of, 16
trade-offs, 112–15 for OFDM modulation
optimum system bandwidth for implementation, 9
coverage, 111–12
Index 347
view of, 13 Energy saving, 171–72
Discrete-time Fourier transform (DTFT) Enhanced ICIC (eICIC), 274
defined, 9–10 Enhanced mobile broadband (eMBB), 7,
form, 11 318
practical computation, 11 Enhanced PDCCH (EPDCCH), 208–9
sampling, 11–12 Enhanced UTRA (E-UTRA), 21, 29
spectrum, 10 eNodeB
view of, 13 buffers, 143
Distributed dynamic ICIC schemes, 158–59 CQI mapping, 61
DL OFDMA modulation within, 34–36
demodulation within UE, 36–38 physical layer measurements in, 77–78
eNodeB, 34–36 VOLTE, 258
receiver block diagram, 39 EPC connection management (ECM)
susceptibility to frequency offsets, connected state, 188
38–40 defined, 128, 185
transmitter block diagram, 37 idle state, 187–88
Downlink (DL) state, 187
cell range, 84 state model in MME, 189
channel overheads, 89–90 state model in UE, 189
control channel coverage verification, EPC mobility management (EMM)
97–100 defined, 128, 185
link budgeting, 95–97 deregistered state, 186
multiantenna improvements, xviii registered state, 186–87
noise rise, 84 state, 186–87
noise rise analysis, 88–89 state model in MME, 187
reference symbols (RS), 203–4 state model in UE, 187
spatial multiplexing on, 202–3 EPS bearers, 189
Downlink reference symbol transmit power E-UTRAN to E-UTRAN handover delay,
(DL RS TX power), 77 140–41
Downlink shared channel (DL-SCH), 48 E-UTRAN to UTRAN handover delay,
Dual connectivity (DC) 141–42
architecture, 276 Evolution of machine-type communications
conclusions and practical issues, narrowband IoT (NB-IoT), 304–8
279–80 overview, 303–4
control plane/user plane design and Evolved LTE (eLTE), 315, 329, 332, 334–38
operation, 275–78 Evolved packet core (EPC)
defined, 272 defined, 177
design, 275–78 dimensioning, 194–97
goals addressed by, 274–75 element interconnection and interfaces,
MAC and PHY configuration, 278–79 181–85
operation, 275–78 elements of, 177
overview, 273–74 E-SMLC, 178, 180
Duplicate ACKs (DUACKs), 224 GMLC, 178, 180
Dynamic ICIC schemes, 157–59 Gx interface, 183
Dynamic multiantenna configuration, 171 home subscriber server (HSS), 177, 180
IP multimedia subsystem (IMS), 181
Earthquake and Tsunami Warning System mobility management entity (MME),
(ETWS), 143 177, 178
End-to-end TCP solutions, 252–53 network topology considerations,
Energy detection (ED) threshold, 290 193–94
Evolved packet core (continued) RAN architecture illustration, 333
packet data network gateway (PGW), system architecture, 332–34
177, 179–80 system performance requirements,
planning guidelines, 193–97 318–21
policy control and charging rules 3GPP standardization documents, 317
function (PCRF), 177, 180 tight interworking with eLTE, 334–38
protocols, 177 toward, 315–40
S1-C interface, 181–82 use cases, 318–21
S1-U interface, 181–82 4G mobile networking technology, 1
S5 interface, 182 Fractional frequency reuse (FFR), 156–57,
S6a interface, 182 159–60
S10 interface, 184 Frequency division duplexing (FDD), 5–6,
S12 interface, 184 21
S13 interface, 183–84 frame structure, 26
serving gateway (SGW), 177, 178–79 preamble burst formats, 120, 121
SGi interface, 183 subframe, frequency-time grid, 27
UE mobility and connection Frequency-domain equalization (FDE), 36
management, 185–88 Frequency offsets, 38–40
X2 interface, 184–85 Frequency resolution, 16–18
Evolved serving mobile location center Full-dimension MIMO (FD-MIMO), 6
(E-SMLC), 178, 180
Evolved UMTS Terrestrial Radio Access Gateway mobile location center (GMLC),
Network (E-UTRAN), 3, 140–42 178, 180
GBR bearers, 191
Fade margin, 115 Generalized path loss models, 104–5
Fast fading, 138 Gx interface, 183
Fast Fourier Transform (FFT)
inverse, 40 Handovers
output array, 18 Break-Before-Connect (BBC), 131
realization of DFT, 9 delays, 140–42
reduced binwidth, 18 E-UTRAN to E-UTRAN delay,
resolution, 16, 17 140–41
size and sample frequency, 35 E-UTRAN to UTRAN delay, 141–42
zero padding and, 15 inter-RAT, 135–36
5G, 317 intrasystem, 132–35
air interface, 321–28 in LTE, 131–42
architecture concepts, 329–40 measurement filtering impact, 137–38
architecture options, 336 measurement hysteresis impact, 138–39
internal RAN split, 334 optimization, 137
key performance requirements, 320 overview, 131–32
migration towards, 338–40 packet forwarding and, 140
network flexibility, 329–32 performance, 137
network slicing, 329–32 RLC/MAC protocol resets and, 140
New Radio (NR), 317, 321–28 time-to-trigger parameters, 138–39
Phase 1, 316 High speed packet access (HSPA), 1, 2
Phase 2, 316 Home subscriber server (HSS), 177, 180,
phases, standardization timeline and, 261–62
316–17 Hybrid automatic repeat request (HARQ)
protocol architecture options for tight failures, 52
interworking, 337
implementation, 232 home subscriber server (HSS), 261–62
MAC sublayer, 51–53 Interconnection Border Control
retransmissions, 52 Function (IBCF), 259, 261
status reporting, 46 Interrogating Call Session Control
stop-and-wait, 52 Function (I-CSCF), 259, 261
use advantage, 53 naming and addressing, 262–63
Hypertext Transfer Protocol (HTTP), Proxy Call Session Control Function
264 (P-CSCF), 259, 260–61
Serving Call Session Control Function
Indirect TCP (I-TCP), 250–51 (S-CSCF), 259, 261
Intercell interference signaling protocols in VOLTE, 264–65
avoidance or coordination, 155–60 UE application software, 264
interference cancellation, 154
interference randomization, 154 Kronecker delta function, 13
management, 153–60
overview, 153–54 Layer 2 protocol sublayers
Intercell interference coordination (ICIC) MAC, 50–53
defined, 154 optimization, 50
distributed dynamic schemes, 158–59 PDCP, 55–56
dynamic, based on PFR/SFR concepts, RLC, 53–55
Licensed-assisted access (LAA) dynamic, based on PFR/SFR concepts,
dynamic schemes, 157–59 defined, 272–73
enhanced (eICIC), 274 deployment, 288
mechanisms, 156–60 deployment scenarios, 289
static/quasi-static schemes, 156–57 Listen Before Talk (LBT), 290
Interconnection Border Control Function multicarrier and multiantenna
(IBCF), 259, 261 operation, 291–92
Interference multi-operator case, 292
cancellation, 154 overview, 288
intercell, 153–60 practical issues, 292
randomization, 154 SCell, 292
reduction, 171 unlicensed channel access in, 288–91
Interference bursts, 113 See also LTE-Advanced Pro
Interleaving depth factor, 237–38 Link layer protocol stack, 51
Inter-RAT handovers, 135–36 Listen Before Talk (LBT), 290
Interrogating Call Session Control Function Load-balancing optimization, 171
(I-CSCF), 259, 261 Logical channels, 49–50
Intersample spacing, 17 Lognormal fade margin and coverage
Intrasystem handovers, 132–35 probability, 105–7
Inverse DFT (IDFT) Lognormal shadowing, 138
algorithms, 34 Low-noise amplifier (LNA), 86
defined, 16 LTE
process, 35 air interface, 21
IP flow mobility, 207–8 basic unit of time in, 32
IP multimedia subsystem (IMS) channels, 43–50
Application Server (AS), 259, 261–62 coverage, 6–7
architecture for VOLTE, 259–64 defined, 1
components of, 259 early rollouts, 4
defined, 181 global momentum, 2
as external signaling network, 259
LTE (continued) evolution of machine-type
handovers in, 131–42 communications, 303–8
intercell interference management in, improvements, 7
153 interworking summary, 286–87
MTC devices, 217–18 interworking with WiFi, 280–87
networks, 1–2 introduction and main features
operations in unlicensed spectrum, overview, 272–73
287–94 licensed-assist access (LAA), 272–73,
physical layer measurements, 75–78 288–92
reference symbols, 28–29 LTE-U, 293–94
Release 8, 310–11 LTE-WiFi aggregation (LWA), 272
RRC state model in, 127–31 LTE-WLAN Aggregation (LWA), 280,
security, 5 282–84
SON technologies in, 163–74 LTE-WLAN Radio Level Integration
specifications, 5–6 with IPsec Tunnel (LWIP),
spectrum allocation, 22–23 280–82, 284–85
system evolution comparison, 309 massive carrier aggregation (massive
3GPP standards, 5, 7, 70 CA), 273
transmission modes, 69–75 MulteFire, 294
LTE-Advanced nonstandardized unlicensed LTE access
carrier aggregation, 200–201 schemes, 292–94
connections, 2 operation in unlicensed spectrum,
connection signaling, 200–201 287–94
coordinated multipoint transmissions overview, 271
and reception (CoMP), 209–10 RAN-Controlled LTE-WLAN
defined, 1 Interworking (RCLWI), 282,
downlink reference symbols, 203–4 285–86
enhanced multiantenna transmissions, Release 13, 281, 287, 311–12
202–4 UE support, 296
enhanced PDCCH, 208–9 licensed-assisted access (LAA), 272–73,
IP flow mobility, 206–8 LTE RAN, 3
local and selective IP traffic offload, LTE-U, 293–94
206–7 LTE-WLAN Aggregation (LWA)
machine-type communication, 210–18 activation, 284
main enhancements, 199–218 advantages, 283
relay nodes, 204–6 bearer, 282–83
Release 10, 311 defined, 272
resource scheduling, 201 initial phase of signaling, 285
spatial multiplexing on downlink, overview, 280
202–3 WiFi exploitation, 6
spatial multiplexing on uplink, 202 WLAN termination node and mobility
WLAN offload, 206–8 management, 283–84
LTE-Advanced Pro LTE-WLAN Radio Level Integration with
carrier aggregation enhancements, IPsec Tunnel (LWIP)
294–96 initial phase of signaling, 285
device-to-device (D2D) LWIPEP, 285
communication, 297–302 overview, 280–82
dual connectivity (DC), 272, 273–80 support, 284
enhanced LTE features, 271–312 WiFi exploitation, 6
LWA Adaptation Protocol (LWAAP), 280
LWIP Encapsulation Protocol (LWIPEP), transmit antennas, 3–4
285 Minimization of drive testing (MDT),
173–74
Machine-to-machine (M2M) devices, 217 Mobile-end transport protocol (METP), 251
Machine-type communication (MTC) Mobile TCP (M-TCP), 251
access class barring, 215–16 Mobility management entity (MME)
architectural enhancements, 211–14 defined, 178
architecture illustration, 212 ECM state model in, 189
defined, 210 EMM state model in, 187
devices, 213–14 in evolved core network, 177
group-based, 214 Modulation and coding schemes (MCS)
LTE devices, 217–18 CQI mapping to, 61–63
network access management, 215–17 defined, 56
RACH resource partitioning, 216 effective channel coding estimation, 61
resource allocation and, 211 index, 96
server, 212 index mapping, 59–61
services, 210–11 index specification, 82
See also LTE-Advanced SINR values and, 82
MAC sublayer MulteFire, 294
defined, 50 Multicast broadcast single frequency network
functions, 51 (MBSFN), 69, 70
HARQ and, 51–53 Multicast channel (MCH), 48
Mapping tables, 59–61 Multicast control channel (MCCH), 49
Massive carrier aggregation (massive CA), Multimedia Broadcast Multicast Service
273, 295 (MBMS), 27
Massive Machine-Type Communications
(mMTC), 7, 318 Narrowband IoT (NB-IoT)
Masthead amplifier (MHA), 86 air interface aspects, 304–6
Maximum channel occupancy time CP CIoT EPS optimization, 306
(MCOT), 290 data transfer modes comparison, 309
Maximum segment sizing (MSS), 243–44 defined, 304
MBMS transmission, 33 modes of operation, 304
Measurement filtering, handovers, 137–38 NB1, 305
Measurement hysteresis, handovers, 138–39 protocol stack, 307
Medium access control (MAC) header, 27 system aspects, 306–8
Migration toward 5G system UP CIoT EPS optimization, 306–8
approach, 338–40 very low data rate target, 305
path, 339 Network access management, 215–17
steps, 338 Network functions virtualization (NFV),
See also 5G 329
Millimeter-wave frequencies, 322–24 Network load
MIMO (multiple input and multiple cell range versus, 115
output) coverage performance trade-offs,
antenna configuration, 73, 74 112–15
capacity, 73 Network slicing
multiple-user, 203 concept illustration, 330
operation in LTE, 69–76 defined, 329
single user (SU-MIMO), 199 RAN support, 331–32
spatial multiplexing, 3, 69, 71–72, 74 New Radio (NR)
system design, 75 beam-level/cell-level mobility, 327
New Radio (continued) scheme use, 3–4
beam management, 326–28 signals, 32
defined, 317 single-channel transmission, 33
functional split options for, 335 symbols, 27, 29, 36
Layer 2, 328 symbol time, 24
Layer 3: RRC, 328 waveforms, 24
millimeter-wave characteristics, 322–24
multiantenna schemes, 326–28 Packet data network gateway (PGW), 177,
overview, 321 178, 179–80
PHY layer, 324–28 Packet forwarding, 140
PHY layer NR numerology, 325–26 Paging, 143–45
protocol stack consideration, 328 Paging channel (PCH), 48
spectrum applicability to use cases, 324 Paging control channel (PCCH), 49
spectrum considerations, 321–24 PDCP sublayer, 55–56
spectrum stability, 324 Performance
See also 5G 5G requirements, 320
Next Generation Core (NGC), 317 handover, 137
Non-Access Stratum (NAS) Protocols, 177 TCP, 241
Nyquist rate, 17 Physical broadcast channel (PBCH), 45–46
Physical cell identities (PCIs)
OFDM multi-user access mechanism, 23–25 allocation of, 117–18
Okumura-Hata path loss model, 102–3 configuration, 168–69
Optimization coordination among sites, 118
cell outage, 170–71 defining u independently from, 119
coverage and capacity, 168 Physical channels
handovers, 137 downlink, 45–46
layer 2 protocol sublayers, 50 uplink, 46–47
mobility robust, 169 Physical control format indicator channel
narrowband IoT (NB-IoT), 306–8 (PCFICH), 30, 46
PRACH, 125–26 Physical downlink control channel
RACH, 125–26, 164 (PDCCH)
RACH parameter, 172–73 coverage verification, 97–100
SON, self-optimization, 165–66 defined, 46
TCP application-level, 253–54 distributed, 208
TCP operation, 221 enhanced, 199, 208–9
uplink parameters, 145–49 LTE-Advanced, 208–9
Optimum system bandwidth, 111–12 Physical downlink shared channel (PDSCH),
Organization, this book, xv–xx 45, 60, 64, 91–92
Orthogonal frequency division multiple Physical hybrid ARQ indicator channel
access (OFDMA) (PHICH), 46
channels, 23 Physical layer measurements
cyclic prefix, 41 in eNodeB, 77–78
defined, 2 in UE, 75–77
DL, 34–40 Physical multicast channel (PMCH), 46
implementation, 25 Physical random access channel (PRACH)
intracell interference and, 2 capacity planning, 124
Orthogonal frequency division multiplexing configurations, 119–20
(OFDM) defined, 47
DFT for, 9 optimization, 125–26
performance, 122 Proxy solutions, TCP, 245–47
preamble format selection, 120 Pseudo-random u-hopping, 119
procedure, 124–25 Public land mobile network (PLMN), 128,
Physical reference signals, 28–31 129
Physical resource blocks (PRBs), 24, 27 Pull-based scheme, 217
Physical uplink control channel (PUCCH)
defined, 47 QAM
formats, 148–49 16-QAM, 24
power control on, 148–49 64-QAM, 23, 24
transmit power, 148, 149 format, 3
uplink coverage verification, 100–101 real data sequences, 14
Physical uplink shared channel (PUSCH) type modulations, 57
defined, 47 QoS class indicators (QCIs), 189–90
power control on, 146–48 QPSK/QAM symbols, 41
PMI (precoding matrix indicator), 47, 57 Quadratic permutation polynomial (QPP),
Poisson summation formula, 10 58
Policy and Charging Control (PCC), 5–6 Quality of service (QoS), 188–89
Policy control and charging rules function
(PCRF), 177, 180 Radio access technology (RAT), 130
Policy control function (PCF), 332 Radio link budgeting (RLB)
Power control channel and protocol overheads, 89
on PUCCH, 148–49 defined, 82
on PUSCH, 146–48 DL link budgeting, 95–97
transmit (TPC) commands, 147 DL noise rise analysis, 88–89
uplink, 145–49 formulations, 83–107
Preamble sequences, 120–23 number of sites estimation, 101
Precoding matrix indicators (PMIs), 72 propagation path loss models, 101–7
Prelaunch parameter planning template, 99
PCI allocation, 117–18 UL link budgeting, 93–95
PRACH optimization, 125–26 UL noise rise analysis, 87–88
PRACH procedure, 124–25 uplink control channel coverage
random access planning, 119–25 verification, 100–101
uplink reference signal allocation, Radio link control (RLC) header, 27
118–19 Radio link control (RLC) protocol
Primary synchronization signal (PSS), 29, 30 ARQ, 236
Probing TCP, 252 implementation, 232
Propagation path loss models retransmissions, 240, 242
categories of, 102 transmission buffers, 242
Cost 231 Hata model, 103–4 Radio link failures (RLF), 169
generalized path loss models, 104–5 Radio resource control (RRC)
lognormal fade margin and coverage cell selection and reselection, 129–30
probability, 105–7 connected state, 131
Okumura-Hata path loss model, 102–3 defined, 127
overview, 101–2 idle state, 128
role of, 101 protocol layer, 127
WINNER path loss models, 105 state model, 127–31
Protocol overheads, 91–93 Radio resource management (RRM), 166
Proxy Call Session Control Function Radio resource scheduling, 258
(P-CSCF), 259, 260–61
RAN-Controlled LTE-WLAN Interworking S5 interface, 182
(RCLWI) S6a interface, 182
as harmonized traffic steering decision, S10 interface, 184
285 S12 interface, 184
overview, 282 S13 interface, 183–84
Release 13, 285–86 Secondary synchronization signal (SSS), 29,
WiFi exploitation, 6 30
Random access channel (RACH) Security, LTE, 5
configurations, 125 Selective acknowledgement (SACK)
defined, 48 defined, 230
dynamic allocation of resources, 216 extension to, 248
optimization, 125–26, 164 option for wireless, 247–48
parameter optimization, 172–73 Self-configuration, 165
resource assignment, 216 Self-healing, 166
resource partitioning, 216–17 Self-optimization, 165–66
Random access planning Self-organizing networks (SONs)
derivation and assignment of preamble algorithm, 166
sequences, 120–23 architectures, 166–67
overview, 119–20 automatic neighbor relation setting,
PRACH capacity planning, 124 167–68
PRACH preamble format selection, cell outage optimization, 170–71
120 centralized implementation, 167
RootSequenceIndex planning, 123–24 coverage and capacity optimization,
Random access response (RAR), 125 168
Real-Time Transport Protocol (RTP), 265 defined, 163
Received power (RSRP), 132, 133, 135 dynamic multiantenna configuration,
Received quality (RSRQ), 132, 133, 135 171
Reference symbols (RS), 28–29, 132, 203–4 energy saving, 171–72
Relay nodes interference reduction, 171
access stratum, 205–6 load-balancing optimization, 170
connection, 205 mobility robust optimization, 169
LTE-Advanced, 204–6 operating costs and, 5
Resource allocation, 143–45, 211 overview, 163–64
Resource scheduling, 201 PCI configuration, 168–69
Retransmission timeout (RTO), 224, RACH parameter optimization,
226–28 172–73
RI (rank indicator), 47, 57 self-configuration, 165
RLC/MAC protocol resets, 140 self-healing, 166
RLC sublayer self-optimization, 165–66
defined, 53 smart and intelligent functionality, 166
functions, 54–55 standardization history and status, 164
operation modes, 53–54 technologies, 163–74
PDU size, 53 use cases, 167
RootSequenceIndex planning, 123–24 utilization of automated UE
Round-trip time (RTT), 221 measurements, 173–74
RSRQ, 76–77 Service access points (SAPs), 43
Rxwnd tuning, TCP, 243–44 Service capability server (SCS), 212
Service data flows (SDFs), 192, 193
S1-C interface, 181–82 Service data units (SDUs), 92
S1-U interface, 181–82
Serving Call Session Control Function TCP Vegas, 231
(S-CSCF), 259, 261 The UL Relative Time of Arrival (TUL-
Serving gateway (SGW), 177, 178–79 RTOA), 78
Session Description Protocol (SDP), 264, 3GPP standards, 5, 7, 70, 136
265 Time division duplexing (TDD), 3, 21
Session Initiation Protocol (SIP), 264 frame structure, 27–28
Session management function (SMF), 332 uplink-downlink frame timing, 28
SGi interface, 183 Time-to-trigger (TTT) parameters, 138–39
Signal to noise interference ratio (SINR) Timing advance function, 32–33
cell-edge, 114 Tower-mounted amplifier (TMA), 86
control channel, 100 Traffic demand estimation, 110–11
defined, 112–13 Traffic offload, 206–7
degradation, 156 Transmission Control Protocol (TCP)
MCS determination, 82 in acknowledged mode, 223
probability, 108 adoption, 222
Simultaneous evolved packet system bearers application-level optimization, 253–54
(SEPSB), 195 congestion and flow control, 224
Single-carrier frequency division multiple congestion avoidance phase, 226
access (SC-FDMA) as connection-oriented protocol, 223
complex quadrature symbols, 40 connection setup, 223–24
contiguous subcarrier tones, 43 connection termination, 224
defined, 21, 40 defined, 221
implementation, 25, 40–43 end-to-end solutions, 252–53
receiver block diagram, 44 enhanced lost recovery options, 229–30
signal generation differences, 41 fast recovery, 229–30
transmitter block diagram, 42 fast retransmit, 229
within UE and eNodeB, 41–43 fast retransmit and recovery option, 249
uplink, 40–43 fundamentals, 222
Site area calculation formulas, 101 impact of link layer parameters, 237–42
Slotted access, 216 implementation types and impacts,
SMARTER, 315 249–50
Soft frequency reuse (SFR), 156–57, 159–60 Indirect (I-TCP), 250–51
Software-defined networking (SDN), 329 initial transmission window setting, 245
Space frequency block coding (SFBC), 72 interleaving depth factor, 237–38
Space time block codes (STBCs), 72 link layer retransmissions, 239–41
Spatial multiplexing link layer solutions, 235–42
LTE-Advanced, 202–3 major implementations and features,
MIMO, 3, 69, 71–72, 74 250–51
Spectral efficiencies, 63–69 maximum segment sizing (MSS),
Spectrum allocation, 22–23 243–44
Split TCP solutions, 250–51 Mobile (M-TCP), 251
Sum probability formula, 107 mobile-end transport protocol
Symbol timing, 23 (METP), 251
options, selecting, 247–49
TCP/IP header, 241 overview, 221–22
TCP New Reno, 231 parameter tuning, 243–45
TCP Reno, 231 performance, 241
TCP SACK, 231 Probing, 252
TCP Santa Cruz, 252 proxy solutions, 245–47
TCP Tahoe, 231 RLC buffer influence, 242
Transmission Control Protocol (continued) DRX for, 145
RTO estimation, 226–28 ECM state model in, 189
Rxwnd tuning, 243–44 EMM state model in, 187
SACK option for wireless, 247–48 mobility and connection management
selective acknowledgement (SACK), within EPC, 185–88
230 physical layer measurements in, 75–77
slow start congestion control phase, 225
split solutions, 250–51 Voice capacity analysis, 265–69
throughput, 228 Voice over LTE. See VOLTE
timer parameters, 238–39 VOLTE
timestamp option, 230, 248–49 defined, 257
variable size congestion window eNodeB, 258
(CWND), 224–25 IMS architecture in, 259–64
variations as used on fixed networks, IMS signaling protocols in, 264–65
230–31 overview, 257–58
Wireless (WTCP99), 253 radio resource scheduling, 258
wireless network characteristics and, voice capacity analysis, 265–69
232–35
Transmission modes (TM), 69–75 Wideband code division multiple access
Transmission time intervals (TTIs), 3, 44 (WCDMA), 1, 117
Transport block size (TBS), 59–61 WINNER path loss models, 105
Transport channels, 47–48 Wireless networks
Type 1 FDD frame structure, 26–27 asymmetry, 234–35
Type 2 TDD frame structure, 27–28 characteristics, 232–35
delay spikes, 233–34
Ultra-reliable and low-latency communica- dynamic variable bit rate, 234
tions (URLLC), 7, 318 packet losses, delays, and impacting
Unified data management (UDM), 332 parameters, 232–33
Universal Mobile Telecommunications TCP performance optimization for,
Service (UMTS) 235–53
interleaving depth factor, 237–38 Wireless TCP (WTCP99), 253
RRC and, 127 Wireless World Initiative New Radio
Uplink (UL) (WINNER) Consortium, 105
channel overheads, 90–91
control channel coverage verification, X2 interface, 184–85
100–101 XML Configuration Access Protocol
link budgeting, 93–95 (XCAP), 265
multiantenna improvements, xviii
noise rise analysis, 87–88
Zadoff-Chu sequences, 31
optimization parameters, 145–49
ZC sequences, 118–19, 122, 124
power control, 145–49
Zero padding
reference signal allocation, 118–19
in FFT implementation, 15
spatial multiplexing on, 202
impact of, 16–18
Uplink-downlink frame timing, 28
intersample spacing and, 17
Uplink shared channel (UL-SCH), 48
User equipment (UE)
categories, 79
demodulation within, 36–38