Telefonica Views On Open RAN
January 2021
Open RAN White Paper
Abstract
Table of Contents
List of Tables
Table 1: Memory Channel Feature List
Table 2: External Port List
Table 3: Radio units for 4G/5G low bands (below 3 GHz)
Table 4: Radio units for 5G mid band (3.5 GHz)
The present paper describes the main design considerations of the key Open RAN hardware elements that Telefónica, in collaboration with key technology partners, has developed to be prepared for deployments of full-fledged 4G/5G networks based on Open RAN.
3. DU design details
3GPP defined a new architectural model in Release 15, where the gNB is logically split into three entities denoted CU, DU and RRU. The RAN functions that correspond to each of the three entities are determined by the so-called split points. After a thorough analysis of the potential split options, 3GPP decided to focus on just two split points, so-called split 2 and split 7, although only the former was finally standardized. The resulting partitioning of network functions is shown in Figure 2.
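As a rough summary of the partitioning shown in Figure 2, the protocol layers map to the three entities as sketched below. This is an indicative mapping only, based on the split points named above; the exact placement of PHY sub-blocks depends on the split 7 variant chosen.

```python
# Indicative mapping of RAN protocol layers to CU/DU/RRU under
# split 2 (PDCP/RLC boundary) and split 7 (intra-PHY boundary).
SPLIT_MAP = {
    "CU":  ["RRC", "PDCP"],             # functions above split 2
    "DU":  ["RLC", "MAC", "High PHY"],  # below split 2, above split 7
    "RRU": ["Low PHY", "RF"],           # below split 7, plus RF processing
}

def locate(function: str) -> str:
    """Return the network entity hosting a given RAN function."""
    for entity, functions in SPLIT_MAP.items():
        if function in functions:
            return entity
    raise KeyError(function)
```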
The CU (Centralized Unit) hosts the RAN functions above split 2; the DU (Distributed Unit) runs those below split 2 and above split 7; and the RRU hosts the functions below split 7 as well as all the RF processing.
The O-RAN Alliance further specified a multi-vendor fronthaul interface between the RRU and DU by introducing a specific category of split 7 called split 7-2x, whose control, data, management, and synchronization planes are fully defined. The midhaul interface between CU and DU is also specified by 3GPP and further upgraded by the O-RAN Alliance to work in multi-vendor scenarios.
The CU and DU can be co-located with the RRU (Remote Radio Unit) in purely distributed scenarios. However, the real benefit of the split architecture comes from the possibility to centralize the CU, and sometimes also the DU, in suitable data centers where all RAN functions can be fully virtualized and therefore run on suitable servers.
The infrastructure needed to build a DU is essentially a server based on Intel Architecture, optimized to run the real-time RAN functions located below split 2 and to connect with the RRUs through a fronthaul interface based on O-RAN split 7-2x. It is the real-time nature of the DU that motivates the need to optimize the servers required to run DU workloads.
The DU hardware includes the chassis platform, motherboard, peripheral devices, power supply and cooling devices.
When the DU must be physically located inside a cabinet, the chassis platform must meet significant mechanical restrictions such as a given DU depth, maximum operating temperature, or full front access, among others. The motherboard contains the processing unit, memory, internal I/O interfaces, and external connection ports. The DU design must also contain suitable expansion ports for hardware acceleration. Other hardware functional components include the hardware and system debugging interfaces and the board management controller, to name a few. Figure 3 shows a functional diagram of the DU as designed by Supermicro.
Figure 3: Example of main components that can be identified in an Open RAN DU server.
In the example shown above, the Central Processing Unit (CPU) is an Intel Xeon SP system that performs the main baseband processing tasks. To make the processing more efficient, an ASIC-based acceleration card, like Intel's ACC100, can be used to assist with the baseband workload processing. Intel-based network cards (NICs) with Time Sync capabilities can be used for both fronthaul and midhaul interfaces, with suitable clock circuits that provide the unit with the clock signals required by digital processing tasks. PCIe slots are standard expansion slots for additional peripheral and auxiliary cards. Other essential components not shown in the figure are random-access memory (RAM) for temporary storage of data, flash memory for code and logs, and hard disk devices for persistent storage of data even when the unit is powered off.
In what follows we describe in more detail the main characteristics of the key elements that comprise the DU.

3.1. Central Processing Unit (CPU)
To provide the DU with the best possible capacity and processing power, the 3rd Generation Intel Xeon Scalable Processor (Ice Lake) is employed to benefit from the latest improvements in I/O, memory, storage, and network technologies.
Intel Xeon Scalable processors comprise a range of CPU variants (called SKUs) with different core counts and clock frequencies that can support anything from low-capacity deployments (rural scenarios) to higher-capacity deployments (dense urban) such as massive MIMO.
The Intel Ice Lake CPU features, among other improvements, Intel Advanced Vector Extensions 512 (Intel AVX-512) and up to two Fused Multiply Add (FMA) units, which boost performance for the most demanding computational tasks1.
1 https://www.intel.com/content/www/us/en/products/docs/processors/xeon/3rd-gen-xeon-scalable-processors-brief.html
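The demanding computational tasks that AVX-512 and FMA units target are, at their core, long complex multiply-accumulate loops (channel filtering, equalization, beamforming weights). The NumPy sketch below shows the shape of this operation class; it is illustrative only — FlexRAN implements it with hand-tuned AVX-512 intrinsics, not Python.

```python
import numpy as np

def complex_mac_loop(x, w):
    """Scalar multiply-accumulate, one element at a time."""
    acc = 0 + 0j
    for xi, wi in zip(x, w):
        acc += xi * wi  # one complex multiply plus one add per step
    return acc

def complex_mac_vectorized(x, w):
    """The same reduction expressed as a single fused dot product;
    this is the kind of work that wide SIMD/FMA units accelerate."""
    return np.dot(x, w)

rng = np.random.default_rng(0)
x = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)
w = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)
assert np.isclose(complex_mac_loop(x, w), complex_mac_vectorized(x, w))
```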
3.2. Hardware Acceleration Card
One of the most compute-intensive 4G and 5G workloads is RAN layer 1 (L1) forward error correction (FEC), which resolves data transmission errors over unreliable or noisy communication channels. FEC technology detects and corrects a limited number of errors in 4G or 5G data without the need for retransmission. FEC is a standard processing function that is not differentiated across vendor implementations.
FEC has typically been implemented on Field Programmable Gate Array (FPGA) accelerator cards, like the Intel PAC N3000. However, recent accelerator cards feature a low-cost, power-efficient acceleration solution for vRAN deployments based on Intel eASIC technology, called the Intel vRAN Dedicated Accelerator ACC100.
Intel eASIC™ devices are structured ASICs, an intermediate technology between FPGAs and standard Application-Specific Integrated Circuits (ASICs). These devices provide lower unit cost and lower power compared to FPGAs, and faster time to market and lower non-recurring engineering cost compared to standard ASICs. Both accelerator options connect to the server processor via a standard PCIe Gen 3 x16 interface.
Silicom's FEC Accelerator Card "Pomona Lake" utilizes the Intel ACC100 dedicated accelerator to perform forward error correction (FEC) processing in real time, thus offloading this intensive task from the CPU in the host server. The ACC100 implements the Low-Density Parity Check (LDPC) FEC for 5G and Turbo FEC for 4G, and supports both concurrently. The ACC100 supports the O-RAN-adopted DPDK BBDev API, an API which Intel contributed to the open-source community for FEC acceleration solutions.
Intel has invested heavily in a reference software architecture called FlexRAN to accelerate RAN transformation. FlexRAN contains optimized libraries for LTE and 5G NR Layer 1 workload acceleration, including optimizations for massive MIMO. The FlexRAN reference software enables customers to take a cloud-native approach in implementing their software utilizing containers. As illustrated in Figure 4, the FlexRAN software reference architecture supports the ACC100, which enables users to quickly evaluate and build platforms for the wide range of vRAN networks.

3.3. Time, Phase and Frequency Synchronization
Open RAN solutions rely on stringent time synchronization requirements for end-to-end latency and jitter. Timing synchronization has become a critical capability and is now fully available on COTS hardware using specific NICs with time synchronization support.
5G requires support of time synchronization accuracy below 3 microseconds across the whole network for Time-Division Duplex (TDD) carriers, and even more stringent accuracy when using MIMO or Carrier Aggregation. Contrary to non-Open RAN technologies, Frequency Division Duplex (FDD) carriers also require stringent synchronization to sustain the eCPRI-based fronthaul interface.
To ensure this level of precision on COTS hardware, network-based synchronization protocols like Synchronous Ethernet (SyncE) and IEEE 1588v2 Precision Time Protocol (PTP) are key to ensuring synchronization at the physical layer. This will become even more essential when moving to higher-frequency radio spectrum like millimeter wave (mmWave) with large MIMO antenna configurations.
In addition, synchronization based on Global Navigation Satellite Systems (GNSS) like GPS, Galileo, GLONASS or BeiDou can provide essential time and phase references in those cases where network-based synchronization is not available, or as a backup in case of network timing failure.
Open RAN DUs should be prepared for both GNSS and network synchronization when integrating them into a 4G/5G site.

3.4. Time Sync NIC Card
The O-RAN fronthaul interface between the DU and the Radio Unit relies on the standard Ethernet protocol to enable multi-vendor interoperability between DUs and radio units.
Network Interface Cards (NICs) are standard elements in COTS hardware. As an example, the Intel Ethernet 800 Series card (also known as the Columbiaville NIC) supports multiple port speeds (from 100 Mb/s to 100 Gb/s in Intel's E810) with a single architecture to meet a range of fronthaul/midhaul/backhaul requirements for the transport network, which makes it suitable for Enterprise, Cloud, and Telco applications.
Enhancing NICs with support for time synchronization is, however, essential to make NICs usable in Open RAN DUs. These cards are usually called Time Sync NICs (Figure 5).
Silicom designs a family of NICs (called STS) for Time Synchronization services in 4-port, 8-port and 12-port configurations, including support for PTP and SyncE, suitable for the site types shown in Figure 1.
Figure 5: Internal architecture of a Time Sync NIC card based on Intel Ethernet 800 series.
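To illustrate what the PTP support on a Time Sync NIC computes, the basic IEEE 1588 exchange produces four hardware timestamps: t1 (Sync sent by the master), t2 (Sync received by the slave), t3 (Delay_Req sent by the slave) and t4 (Delay_Req received by the master). A minimal sketch of the standard offset/delay arithmetic follows; protocol framing and the clock servo loop are omitted.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """IEEE 1588 two-way time transfer.

    Assumes a symmetric path: the one-way delay is half the
    round trip, and the slave clock offset is what remains of
    the master-to-slave difference after removing that delay.
    Timestamps are in nanoseconds.
    """
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    offset_from_master = ((t2 - t1) - (t4 - t3)) / 2
    return offset_from_master, mean_path_delay

# Slave clock 500 ns ahead of master, 1000 ns one-way delay:
# t1 = 0 (master) -> t2 = 0 + 1000 + 500 = 1500 (slave)
# t3 = 2000 (slave) -> t4 = 2000 - 500 + 1000 = 2500 (master)
offset, delay = ptp_offset_and_delay(0, 1500, 2000, 2500)
assert offset == 500 and delay == 1000
```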
As shown in Figure 6, Time Sync NICs allow removal of the fronthaul switch between the RU and DU, thus saving costs and reducing network complexity. Time Sync NICs can provide an accurate clock to multiple Radio Units and at the same time recover the clock from the midhaul.
Figure 6: While former Open RAN implementations rely on fronthaul switches for connection of RUs to the DU, Time Sync NIC cards allow direct connection for improved reliability.
Table 1: Memory Channel Feature List

3.6. External Interface Ports
The DU must be equipped with enough external ports to enable proper interfacing with hard drives, PCIe cards, Ethernet ports, and other peripherals. Below is a table with some of the main external interface ports that the DUs should have for an Open RAN application, according to Supermicro.
Table 2: External Port List
In what follows we describe the above main components of an Open RAN RRU. All the components in Figure 7 can be implemented in one or several FPGAs, with the exception of the PA, LNA and Filter elements, as explained below.

4.1.1. Synchronization and Fronthaul Transport Function Block
The PTP synchronization module extracts the main timing signal from the eCPRI fronthaul interface. PTP provides accurate time and phase references to the RRUs so that the transmissions of all sectors are synchronized with each other. SyncE also provides additional frequency stability and acts as a backup source of synchronization in case PTP fails. Both PTP and SyncE are generally required and must be provided by the DU through the fronthaul link.
The fronthaul connectivity between RRU and DU is usually realized by means of an optical Ethernet interface with the aid of suitable Small Form-factor Pluggable (SFP) modules. RRUs are usually equipped with two fronthaul ports to support daisy-chain configurations where several radio units can be cascaded to minimize the number of fronthaul links towards the DU. The presence of two fronthaul ports also enables network sharing scenarios, where the same RRU is shared by two different DUs and each DU performs the baseband functions corresponding to a different operator.
The Fronthaul Transport Function block involves specific processing of data packets to ensure interoperability in a multi-vendor environment. The use of an FPGA-based solution allows the addition of features as O-RAN specifications evolve over time.

4.1.2. Lower PHY Baseband Processing
The lower PHY layer processing includes blocks for performing the Fast Fourier Transform (FFT)/Inverse Fast Fourier Transform (iFFT), Cyclic Prefix addition and removal, Physical Random Access Channel (PRACH) filtering, and digital beamforming.
Beamforming is only required in Active Antenna Units (AAUs), where antennas are integrated as part of the RRU (as in massive MIMO).

4.1.3. Digital Front End (DFE)
The digital front end comprises specialized blocks for the transmit (TX) and receive (RX) paths.
The TX path contains a spectrum-shaping filter and a Digital Upconverter (DUC) towards the desired carrier frequency. In addition, it contains two fundamental blocks: Digital Pre-Distortion (DPD) and Crest Factor Reduction (CFR), which are provided by Xilinx and integrated by Gigatera. CFR reduces the Peak-to-Average Ratio (PAR) of the 4G/5G signals by clipping those peaks that create the highest distortion. DPD compensates for the Power Amplifier (PA) distortion in the RFFE to improve RF linearity. Both CFR and DPD improve the energy efficiency of the RRU. Minimization of PA power consumption is a source of continuous improvement and innovation because PAs represent a large fraction of the overall power consumption in the RAN. An adaptable FPGA-based solution enables customization for a range of PA output power requirements and technologies.
When digital beamforming is implemented, a beamforming calibration function in either the time domain or the frequency domain is also implemented.
The RX path contains a Digital Downconverter (DDC), a Low-Noise Amplifier (LNA) and an optional PIMC (Passive Inter-Modulation Canceller). The PIMC aims to compensate the interference appearing on the RX path that is generated by passive intermodulation distortion caused by high-power signals in FDD.

4.1.4. RF Front End (RFFE)
The RF Front End comprises Power Amplifiers (PA), Low Noise Amplifiers (LNA), Digital-to-Analog Converters (DAC), and Analog-to-Digital Converters (ADC). Some of the latest RFSoC (RF System on Chip) devices, like the Zynq RFSoC, integrate direct RF sampling data converters based on CMOS technology with improved power consumption [5].
The integrated RF DACs and RF ADCs perform direct RF sampling of the carrier signal instead of Intermediate Frequency (IF) sampling, thus avoiding analog up/down converters. As a result, RRUs can have reduced sizes, enabling dual/triple-band radios in a single mechanical enclosure. Active Antenna Units (AAU) also integrate suitable antenna arrays and bandpass filters in the same enclosure.

4.2. FPGA selection criteria for the RRU
Field Programmable Gate Arrays (FPGAs) from Xilinx in the RRU not only perform digital processing tasks but can also integrate some of the analog subsystems. Xilinx has integrated mixed analog-digital subsystems (including DACs and ADCs) into its RFSoC device family. This is the case of the Zynq® UltraScale+ RFSoC™ family from Xilinx used in the RRUs.
The need for wider bandwidths in the radio unit (RU) is not just about increasing data rates and performance, but also about supporting more complex and diverse radio configurations as needed for existing and new bands. The sheer number of global bands would be unmanageable if each required a unique radio. Radios are designed to support the widest possible bandwidth and seemingly random carrier configurations to meet these requirements. Early 5G radios supported bandwidths up to 200 MHz, but future bandwidths up to 400 MHz are being requested. These radios support multiple bands and hence are called multi-band. In some cases, vendors use multiple Power Amplifiers (PAs) to cover multiple bands; in other cases, advanced wideband Gallium Nitride (GaN) PAs are used, requiring state-of-the-art wide-band DPD. The Zynq UltraScale+ RFSoC family is designed for this purpose.
RFSoC devices integrate, in addition to an FPGA for digital processing, a fully hardened digital front-end (DFE) subsystem with all required processing blocks, and direct RF sampling DAC and ADC converters, thus eliminating the power-hungry JESD204 interface. A hardened DFE is equivalent to having an ASIC-based DFE embedded in the RFSoC with an optimized mix of programmability and ASIC functions. The benefit is a significant reduction in total power, board area, and complexity of the radio solution. This is most apparent in 64T64R massive MIMO AAUs.
State-of-the-art Xilinx Zynq RFSoC DFE devices will support up to 7.125 GHz of analog RF bandwidth in 2021.

4.3. O-RAN Fronthaul interface
The O-RAN Alliance has defined a multi-vendor fronthaul interface between DU and RRU based on Split 7-2x. In O-RAN terminology, the RRU is denoted O-RU and the DU is denoted O-DU.
The fronthaul specifications include Control, User and Synchronization (CUS) and Management (M) plane protocols, as indicated in Figure 8, whose elements can be summarized as follows:
· Control Plane (C-Plane) data between the O-DU and O-RU, such as data section information, scheduling information, beamforming information, etc.
· User Plane (U-Plane) data based on frequency-domain IQ samples.
· Synchronization Plane (S-Plane) including timing and synchronization information.
· Management Plane (M-Plane) enabling initialization, configuration, and management of the O-RU through suitable management and control commands between O-DU and O-RU.
Based on O-RAN split 7-2x, an RRU can be configured to operate in two different modes, denoted "Category A" and "Category B", depending on the functionality of both the RRU and DU. Depending on these modes and on the configuration parameters allowed by O-RAN, a split 7-2x can actually correspond to three different split points (namely split 7-1, 7-2 and 7-3).
Figure 9: Functional basic block diagram of fronthaul interface in O-RU.
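Returning to the Crest Factor Reduction block of Section 4.1.3, the NumPy sketch below shows, in minimal form, how limiting signal peaks lowers the peak-to-average power ratio. This is a bare hard-clipping illustration only; production CFR uses peak cancellation with spectrum-shaping pulses to control out-of-band emissions.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

def hard_clip(x, threshold):
    """Clip the magnitude of x to `threshold`, preserving phase."""
    mag = np.abs(x)
    scale = np.minimum(1.0, threshold / mag)
    return x * scale

rng = np.random.default_rng(1)
# An OFDM-like signal: the sum of many subcarriers is near-Gaussian,
# hence the high PAPR that CFR must tame before the PA.
ofdm = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)
clipped = hard_clip(ofdm, threshold=2.0)
assert papr_db(clipped) < papr_db(ofdm)  # peaks reduced, average barely changed
```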
4.3.2.2. O-RU "Category B"
In this case the precoding function is performed at the O-RU, making the radio more complex but also reducing the fronthaul bitrate in massive MIMO implementations.
This category allows so-called "Modulation Compression", which can be performed in the downlink to send only the bits equivalent to the constellation points of the complex IQ signals, hence effectively reducing the downlink fronthaul throughput. This is equivalent to a split 7-3 in O-RAN terminology [6].
Figure 11 shows the downlink signal processing for O-RUs Category B.
Figure 11: O-RU "Category B". Blocks above the O-RAN FH line are executed at the O-DU, whereas those below it are executed at the O-RU.

4.4. Radio units for 4G/5G low and mid bands (up to 3.5 GHz)
Table 3 and Table 4 contain, as a reference, the radio configurations required for proper 4G/5G Open RAN deployments in the most typical scenarios of the Telefónica footprint.
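The modulation-compression idea described above can be illustrated with a toy QPSK example: when the O-DU knows each frequency-domain sample is one of four constellation points, it can send a 2-bit index instead of a full fixed-point IQ word. This is a hypothetical sketch only; the real O-RAN scheme covers more constellations and carries per-section shift/scale parameters.

```python
# Toy QPSK alphabet: four constellation points, 2 bits each.
QPSK = {0b00: 1 + 1j, 0b01: 1 - 1j, 0b10: -1 + 1j, 0b11: -1 - 1j}

def compress(samples):
    """Map exact QPSK points to 2-bit indices (downlink direction)."""
    inverse = {v: k for k, v in QPSK.items()}
    return [inverse[s] for s in samples]

def decompress(bits):
    """Rebuild the constellation points at the O-RU."""
    return [QPSK[b] for b in bits]

tx = [1 + 1j, -1 - 1j, 1 - 1j, -1 + 1j]
bits = compress(tx)
assert decompress(bits) == tx
# 4 samples x 32 bits (2 x 16-bit IQ) shrink to 4 x 2 bits in this toy case.
```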
5. Cloud Architecture
5.1. Open RAN virtualized solution
The Open RAN solution follows a fully cloudified network design. The CU and DU, as well as
the Element Management System (EMS) managing the RAN network elements, benefit from
a software-defined architecture. Suitable virtual instances of the vCU, vDU and vEMS can
be deployed over a scalable cloud-based platform managed by a Service Management and
Orchestration Framework. This is graphically shown in Figure 12 as implemented by Altiostar.
Figure 12: Illustration of a Telco Cloud Open RAN architecture.
The Telco Cloud Open RAN concept can be taken one step further by implementing a container-based cloud-native micro-service architecture. This new architecture enables advanced cloud-based networks supporting new applications and services with advanced automation, newer algorithms and improved Quality of Experience (QoE), while ensuring network slicing and full support of Control/User-Plane Separation (CUPS). This is the means to achieve the true promise of a service-based architecture for 5G.
Altiostar's container-based 5G solution further disaggregates CUs and DUs into micro-services comprising transport, management, monitoring, control-plane and user-plane functions. Depending on the type of network slice and application being deployed, containerized network functions can be rapidly deployed at various locations in a very light footprint and then scaled up based on traffic.
Figure 13: Schematic illustration of the main interfaces and network elements in a 5G Open vRAN architecture.
scripting support. The Altiostar vEMS can also be flexibly deployed as a set of containerized network functions to meet evolving 5G deployment scenarios.
Integration of the vEMS into the Service Management and Orchestration Framework paves the way to the extensive use of Artificial Intelligence (AI) and Machine Learning (ML) techniques in multiple domains, such as RAN performance enhancement, radio resource management, or advanced traffic/service optimization, to name a few. AI/ML can benefit from the use of automation in cloud-based software architectures, thus reducing operational complexity in multi-vendor Open RAN scenarios.
Figure 15 illustrates another example where time and phase references are provided by the network in a 2G/3G/4G/5G site. In this case there is no GNSS receiver and the CSR must propagate PTP and SyncE towards the DUs. A legacy 2G/3G BBU is also shown that must be interconnected with the other DUs through the CSR.
Network-based synchronization is deemed the best and most future-proof option because it avoids potential points of failure, like the GNSS receiver or the daisy-chain connection of DUs to propagate synchronization. However, it requires full network support of PTP and can therefore be applicable only in a limited set of scenarios.
AAUs can be directly connected to the DUs thanks to the use of Time Sync NICs.
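The GNSS-versus-network trade-off described above amounts to a priority-based timing-source selection at the DU. The sketch below is a simplified illustration under assumed priorities; the real behaviour is governed by ITU-T telecom timing profiles and holdover specifications, and the names used here are illustrative.

```python
def select_sync_source(gnss_locked, ptp_locked, synce_valid):
    """Pick the best available timing reference for a DU.

    Simplified priority: GNSS when available, then network PTP
    (with SyncE assisting frequency stability), then SyncE-only
    holdover, which keeps frequency but lets phase drift.
    """
    if gnss_locked:
        return "GNSS"
    if ptp_locked:
        return "PTP+SyncE" if synce_valid else "PTP"
    if synce_valid:
        return "SyncE holdover"  # frequency only; phase accuracy degrades
    return "free-run"            # out of spec: an alarm should be raised

assert select_sync_source(True, True, True) == "GNSS"
assert select_sync_source(False, True, True) == "PTP+SyncE"
```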
8. References
[1] ORAN-WG4.CUS.0-v02 "Control, User and Synchronization Plane Specification", O-RAN Alliance, Working Group 4.
[2] ORAN-WG4.MP.0-v02 "Management Plane Specification", O-RAN Alliance, Working Group 4.

9. Glossary
AAU: Active Antenna Units
ADC: Analog to Digital Converters
AI: Artificial Intelligence
APIs: Application Programming Interfaces
KPIs: Key Performance Indicators
KVM: Kernel-based Virtual Machine
LNA: Low-Noise Amplifier
Massive MIMO: Massive Multiple-Input