D4.2 Roll-Out
Work package: WP4
Task: Task 4.2
Due date: 30 June 2022
Submission date: 19 July 2022
Deliverable lead: CELLNEX
Version: 1.1
D4.2 5G roll-out and system testing report
Abstract
The goal of this deliverable is to define the principles and procedures followed to integrate the building blocks reported in previous deliverables (mainly D1.2, D1.3, D2.2 and D3.2) into the two 5G-NPN-based platforms used for the final demonstrations and trials of the project. The integration plan defines how the consortium mapped the Affordable5G architecture components and equipment to the Castellolí and Malaga 5G-NPN platforms. The focus is also on defining the test cases that ensure the building blocks work properly in the target platforms; test results as well as integration and unit tests are described too. This document provides the second, and final, version of the test cases, which have been expanded and implemented during the second year of the Affordable5G project lifetime.
List of Contributors
Partner | Short name | Contributor(s)
ATOS SPAIN SA | ATOS | Sergio González, Borja Otura and Josep Martrat
ADVA Optical Networking Israel Ltd | ADVA | Andrew Sergeev
RETEVISION I SA | CEL | Judit Bastida
ACCELLERAN | ACC | Simon Pryor
ATHONET SRL | ATH | Nicola di Pietro, Daniele Munaretto and Daniele Ronzani
THINK SILICON EREYNA KAI TECHNOLOGIA ANONYMI ETAIRIA | THI | Georgios Keramidas
RUNEL NGMT LTD | REL | Israel Koffman and Baruch Globen
NEMERGENT SOLUTIONS S.L. | NEM | Marta Amor, Eneko Atxutegi and Aarón Rodríguez
UBIWHERE LDA | UBI | Rita Santiago, Roni Fernades Sabença and Diogo Guedes
MARTEL GMBH | MAR | Gabriele Cerfoglio, Andrea Falconi and Giacomo Inches
EIGHT BELLS LTD | 8BELLS | George Kontopoulos
NEARBY COMPUTING SL | NBC | Oscar Trullols and Angelos Antonopoulos
UNIVERSIDAD DE MALAGA | UMA | Pablo Herrera Díaz, Javier Andrés Jiménez Jiménez, Francisco Luque Schempp and Pedro Merino Gómez
ETHNIKO KAI KAPODISTRIAKO PANEPISTIMIO ATHINON | NKUA | Panagiotis Trakadas, Lambros Sarakis, Panagiotis Gkonis, Sotirios Spantideas and Anastasios Giannopoulos
FUNDACIO PRIVADA I2CAT, INTERNET I INNOVACIO DIGITAL A CATALUNYA | I2CAT | Juan Camargo, Wilson Ramirez
UNIVERSITAT POLITECNICA DE CATALUNYA | UPC | Jordi Pérez-Romero, Oriol Sallent
EURECOM | EUR | Navid Nikaein, Sofia Pison
List of reviewers
Partner | Name
ATOS | Sergio Gonzalez, Rosana Valle (QA)
NKUA | Panos Gkonis
UMA | Pablo Herrera
Disclaimer
The information, documentation and figures available in this deliverable are written by the Affordable5G (High-tech and affordable 5G network roll-out to every corner) project consortium under EC grant agreement 957317 and do not necessarily reflect the views of the European Commission. The European Commission is not liable for any use that may be made of the information contained herein.
EXECUTIVE SUMMARY
The first part of this deliverable provides an update of the architecture instantiation in each testbed, describing the updated building blocks and the integrations that have been performed.
In the Malaga Campus platform, the main integrations concern the OSM orchestration system, the 5G Core updates, the O-RAN extensions and the integrations between the partners' components. The 5G Core integration with dRAX, the integrations performed for TSN over 5G, the OSM extension and integration, and the status of the integration between the RU and the DU are described as well.
In the Castellolí platform, the described integrations involve the NearbyOne orchestrator, the Slice Manager, the 5G Core, the NWDAF component and the O-RAN components (together with the developed xApps) that have been integrated in the platform. The new version of dRAX has also been installed in Castellolí and has been integrated with the orchestrator, the Slice Manager and the latest version of the 5G Core. Moreover, the Nemergent MCS applications have been deployed as well, but their integration is ongoing work for the last part of the project.
Additionally, a revised timeline for each of the testbeds is presented, showing the updated strategy that the consortium has followed to achieve all the activities expected for the second year of the project and, specifically, redefined to meet the deadlines for the end of the project: performing end-to-end system test cases and pilot validation.
This deliverable also contains a detailed definition of the test cases. First, the individual test cases, used to verify the behavior of each component operating in a standalone fashion, are described. After the component-individual test case definitions, the integration test cases are defined, describing all the test cases that took place to verify the correct interaction between Affordable5G components in each of the two platforms. For the most advanced integrations, the results of their test cases are presented, together with summary tables describing the relevant KPIs.
Finally, the Affordable5G pilots are described in specific sections for the TSN over 5G Proof of Concept, the Smartcity pilot with a beyond-state-of-the-art video streaming application, and Mission Critical Services, covering several scenarios and requirements from the 5G network perspective. For each pilot, the building blocks, the integration progress and the current deployment status are reported.
LIST OF FIGURES
Figure 30 Latency and throughput for NEOX accelerator when the number of cores is modified
Figure 31 Power and energy of NEOX accelerator when the number of cores is varied
Figure 32 E2E latency results
Figure 33 E2E jitter results
Figure 34 UP shortcut testbed - baseline
Figure 35 Concourse-ci screenshot showing the provisioning device tests
Figure 36 Cypress video screenshot showing the e2e-provision-vm2 in "provisioning" status at time 24.20 s of the test
Figure 37 Cypress video screenshot showing the e2e-provision-vm2 in "Ready" status at time 895.30 s of the test
Figure 38 Concourse-ci screenshot showing the provisioning device tests
Figure 39 Cypress video screenshot showing the sample-deployment "ambient-al block" in "provisioning" status at time 9.90 s of the test
Figure 40 Cypress video screenshot showing the sample-deployment "ambient-al block" in "Ready" status at time 33.12 s of the test
Figure 41 Cypress video screenshot showing the sample-deployment "ambient-al block" in "Undeploying" status at time 34.90 s of the test
Figure 42 PCAP capture of PTP, C-Plane and U-Plane traffic
Figure 43 NearbyOne orchestrator logs showing a GET request to the Slice Manager
Figure 44 NearbyOne orchestrator logs showing a POST request creating a RAN slice in the Slice Manager
Figure 45 Metric "container_cpu_usage_seconds_total" obtained from the Prometheus instance by the Message Broker
Figure 46 Metric "container_network_transmit_bytes_total" obtained from the Prometheus instance by the Message Broker
Figure 47 Message Broker logs showing the publication of the prediction in the queue
Figure 48 Test script showing that the prediction is properly published in the queue
Figure 49 Prometheus dashboard showing the prediction data that was published in the Message Broker
Figure 50 Prometheus dashboard showing the rules defined to monitor the predictions
Figure 51 Prometheus dashboard showing the triggered alarms
Figure 52 Screenshot with the execution of test "Registration of Mocked Non-RT dRAX 5G"
Figure 53 Screenshot with the execution of test "Registration of dRAX 5G"
Figure 54 Screenshot with the execution of test "Deployment of the Telemetry xApp"
Figure 55 Screenshot with the execution of test "Creation of dRAX's Policy Type"
Figure 56 Screenshot with the execution of test "Creation of dRAX's Policy Type"
Figure 57 Screenshot with the execution of test "List all the information stored about dRAX"
Figure 58 Screenshot with the execution of the "Delete Policy test"
Figure 59 Screenshot with the execution of test "Undeployment of the Telemetry xApp"
Figure 60 Output of the ptp4l command
Figure 61 Output of the phc2sys command
Figure 62 TSN over 5G PoC final architecture
Figure 63 TSN over 5G PoC current architecture
Figure 64 Smartcity Pilot Architecture
Figure 65 Smartcity steps for re-identification
Figure 66 Services workflow
Figure 67 Smartcity association between different components
Figure 68 ML Algorithm workflow
Figure 69 Smartcity components involving service work
Figure 70 Simplified vision of Pilot 1
Figure 71 Pilot 1 in different PoPs
Figure 72 MCS service in Affordable5G network
Figure 73 Simplified interaction between modules in Pilot 1
Figure 74 Scalability use case workflow
Figure 75 Interactions between MCS service, monitoring and orchestrator
Figure 76 Dockerized MCS service
Figure 77 Nemergent MCS service deployed in Castellolí
Figure 78 Nemergent MCS service Pods deployed in Castellolí
Figure 79 Nemergent MCS service deployed in Castellolí with NodePort
Figure 80 Nemergent MCS service deployed in Castellolí with storage classes
LIST OF TABLES
Table 1 Individual Test Cases KPIs Summary Table
Table 2 Integration Test Cases Summary Table
Table 3 Emergency Communications port exposure in NodePort mode
Table 4 Status update of end-to-end testing of deployed Pilot 1 in Castellolí
ABBREVIATIONS
3GPP Third Generation Partnership Project
4G Fourth Generation
5G Fifth Generation
5GC 5G Core
AI Artificial Intelligence
AF Application Function
AGV Automated Guided Vehicle
AMF Access and Mobility Management Function
AMR Autonomous Mobile Robot
API Application Programming Interface
AUSF Authentication Server Function
BC Boundary Clock
BSS Business Support System
CAPIF Common API Framework
C-MDAF Centralized Management Data Analytics Function
CN Core Network
CNN Convolutional Neural Networks
CP Control Plane
CU Central Unit
CPRI Common Public Radio Interface
DA Data Analyzer
DC Data Collector
DL Deep Learning
DN Data Network
DNN Data Network Name
DRL Deep Reinforcement Learning
DSCP Differentiated Services Code Point
DS Data Source
DS-TT Device-Side TSN Translator
DSF Data Semantic Fabric
DP Data Plane
DU Distributed Unit
E2E End to End
eMBB enhanced Mobile Broadband
EMS Element Management System
EMS-CM EMS - Configuration Management
ENI Experiential Network Intelligence
FEC Forward Error Correction
FoF Factory of the Future
gNB Next Generation NodeB
gNMI gRPC Network Management Interface
IM Infrastructure Monitoring
IMF Infrastructure Management Framework
IoT Internet of Things
IP Internet Protocol
ITU International Telecommunication Union
KDU Kubernetes Deployment Unit
KPI Key Performance Indicator
LiFi Light Fidelity
LLS Lower Layer Split
LSTM Long Short-Term Memory
SA Standalone
SBA Service Based Architecture
SDN Software Defined Networking
SEAL Service Enabler Architecture Layer
SIB System Information Block
SIM Subscriber Identity Module
SLA Service Level Agreement
SME Small or Medium Enterprise
SMF Session Management Function
SMO Service Management and Orchestration
SON Self Organizing Network
TCP Transmission Control Protocol
TNE Transport Network Equipment
TSC Technical Steering Committee
TSN Time Sensitive Networking
UAV Unmanned Aerial Vehicle
UDM Unified Data Management
UDR Unified Data Repository
UE User Equipment
UP User Plane
UPF User Plane Function
UML Unified Modeling Language
URLLC Ultra Reliable Low Latency Communication
V2X Vehicle to Everything
VDU Virtual Deployment Unit
vEPC Virtualized Evolved Packet Core
VM Virtual Machine
VIM Virtual Infrastructure Manager
VNF Virtual Network Function
VoMS Vertical-oriented Monitoring System
VNFD Virtual Network Function Descriptor
VPN Virtual Private Network
VSNF Vertical Service Management Function
VSF Virtual Security Function
WDM Wavelength Division Multiplexing
1 INTRODUCTION
The development of an affordable 5G system, which is the main objective of this project, is based on the combination of many building blocks that, together, form a fully functional system. To achieve this, one of the most critical steps is the integration and testing of the components, which is the main goal of this deliverable.
To this end, the two sites are described separately: Section 2 covers the integration and testing performed in the Malaga platform, while Section 3 follows the same structure for the Castellolí platform. Each section presents the architecture of the platform, an updated version of the one included in D4.1 [2], and describes in detail the integrations that have taken place for that specific platform. Finally, each section describes the time plan readjustment and the milestones to be achieved before the end of the project.
Section 4 is divided into individual test cases and integration test cases. The first part is a detailed section containing the test cases performed to validate each of the building blocks before being integrated in the platforms. These individual test cases also report their test results in the same section.
The second part of Section 4 explains the test cases that took place to validate the integrations between components, once deployed in the platform or, in some cases, in a preliminary stage. The integration test cases also gather the obtained results at the end of the section.
The following sections describe the specific evolution of each pilot, explaining the pilot building blocks, the integrations that were found necessary and their deployment status.
Section 6 explains the SmartCity pilot, which intends to demonstrate the usage of a 5G private network, and the advances provided by Affordable5G, in an emergency scenario occurring in an indoor environment.
In addition, an alternative O-RAN solution will be deployed in Málaga. This solution will allow us to test multi-vendor compatibility with the 5G core and the rest of the components of the 5G network, including the pilots deployed on top. Thus, one of the goals of the project can be tested, namely the interoperability of equipment coming from different vendors, leading to an affordable 5G network.
5G Core Integrations
This section provides an overview of the Athonet 5G core installation, configuration, and test
plan in Malaga (UMA) testbed, to provide the 5GC integration in the end-to-end (E2E) 5G
infrastructure.
Two Virtual Machines (VMs) containing pre-configured 5GC functionalities were provided by
Athonet and installed on the Malaga Network Functions Virtualization Infrastructure (NFVI).
One VM contains a full 5GC Stand Alone (SA) instance (release 3.1 of Athonet's software), which acts as the main central core with complete control and user planes, whereas the second VM only contains a User Plane Function (UPF) and plays the role of a 5G edge node. This second UPF is installed on the same hardware as the first instance, but it is logically separated from the rest of the functionalities. This setting allows the creation of a second network slice with a separate dedicated user plane.
The provisioning and installation were performed remotely via Virtual Private Network (VPN)
by dedicated Athonet support service.
The network plan was configured according to the testbed owner's requirements, in order to expose the 5GC interfaces as follows:
• N1, N2 between the Access and Mobility Management Function (AMF) and the gNB
• N3 between the UPF and the gNB
• N6 from the UPF to the Data Network (Internet)
At the time of writing this document, a Nokia gNB is attached to the 5GC AMF; the attachment was verified with a set of validation tests, by checking the HTTP message traces and the logs of the network elements, especially those of the AMF.
Figure 2 Some HEARTBEAT messages captured by Tcpdump during gNB and 5GC communication.
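As a hedged illustration of this kind of validation, the sketch below counts NGAP messages in a capture such as the one in Figure 2. It is not part of the testbed tooling: it assumes the scapy library and an illustrative capture file name, and relies only on 38412 being the standard NGAP SCTP port.

```python
# Hedged sketch: confirm N2 (NGAP over SCTP) activity in a tcpdump capture.
# "ngap_trace.pcap" is an illustrative file name; 38412 is the standard NGAP port.
from scapy.all import rdpcap, SCTP

packets = rdpcap("ngap_trace.pcap")
ngap = [p for p in packets
        if SCTP in p and 38412 in (p[SCTP].sport, p[SCTP].dport)]
print(f"{len(ngap)} SCTP packets on the NGAP port out of {len(packets)} total")
```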
At the end of the project, the Nokia Radio Access Network (RAN) will be replaced with two O-
RAN solutions, one of them directly coming from the Affordable5G project.
The central 5GC instance was provisioned with 10 Subscriber Identity Module (SIM) cards
provided by Athonet; the associated User Equipments (UEs) were then successfully
provisioned and attached to the core.
The integration of the Athonet 5GC with the O-CU (ACC dRAX) was performed in multiple
stages, described below.
2.2.1.1.1 First stage with Athonet ITP-EU & ACC Labs
A first preparatory stage of the 5GC's integration with the O-RAN solution developed by the project was performed by connecting and testing the N1/N2 and N3 interfaces between the 5GC and Accelleran's gNB, obtaining E2E IP connectivity. For this purpose, a cloud instance of the Athonet 5GC was utilized. The following section provides some details about this platform and its specific integration.
In order to perform these tests, Athonet provided an AWS-hosted instance of the 5GC called Innovation Test Platform for European projects (ITP-EU). This cloud platform has been used for advanced tests in funded projects like, in this case, Affordable5G.
The first integration test introduced above was performed by Accelleran to verify the connectivity of its gNB with the 5GC, testing UPF sessions and Internet traffic exchange. For that, a dedicated 5GC cloud instance was set up on the ITP-EU. The Athonet support team provided all the information regarding the PPTP VPN access to allow Accelleran to connect to the platform. Then, a specific subnet (e.g., /29) was assigned to give a proper IP address to Accelleran's gNB.
Accelleran provided its SIMs' information (i.e., IMSI, K and OPc) for the provisioning of the users into the 5GC Unified Data Management (UDM).
2.2.1.1.2 Second stage for EuCNC Demo
A second stage of integration was performed in ACC labs in preparation for a demo video at EuCNC 2022 in Grenoble. The video was positioned as a joint Affordable5G and FUDGE-5G demo (with Athonet participating in both projects), and its preparation served as a second stage of integration.
This demo was a full E2E integration of a 3GPP Release-15 (with Release-16 additions)
Standalone 5G SNPN with an O-RAN aligned RAN. The UE was a commercial 5G
smartphone. The DU and RU were external 3rd party network functions.
As part of the preparations for this video, the 5GC and disaggregated CU (CU-CP and CU-UP) N2/N3 interfaces were re-tested, with a stronger focus on consistent RAN/5GC network slicing. Several slice configurations were tested; for the demo, an eMBB slice was mapped to the 'default' UPF in the Athonet cloud, with the default DN routed to the Internet.
The first screenshot (Figure 5) shows the internal configuration of the gNB with CU-CP creating
the CU-UP instance and a DU attaching to CU, monitored by the RIC/SMO.
Figure 5: CU-CP creating CU-UP and DU attaching to demonstrate an active radiating gNB
When the UE (smartphone) is taken out of flight mode, it attaches to the gNB and then to the 5GC, authenticating and establishing an active Internet connection. Some of the resulting F1, N2 and N3 traffic is shown in Figure 6:
Figure 6: UE attaching to gNB and authenticating with 5GC to establish active connection
Finally, in Figure 7, the smartphone displays the Athonet tagged 5GS that it has connected to.
An example IP service (in this case a speedtest) is then demonstrated. Note that delays and
throughput reflect setups in the lab (with the VPN connection to the Athonet 5GC in the cloud
and a public Internet speedtest server) and should not be used to infer achievable performance
in Castellolí or Malaga (which depends on other factors like allocated 5G-NR spectrum
bandwidth, UPF location, etc.).
Figure 7: 5G smartphone connected to and running over the Affordable5G Open SNPN
The final integration will then be consolidated with the 5GC instances physically installed at the Castellolí and UMA sites, focusing on further specific test cases.
OSM integrations
Martel has developed several software components that extend the current OSM functionality
in two directions:
• KNF Placement. The ability to provide placement details for Kubernetes Network Functions (KNFs), in order to target specific nodes of a registered Kubernetes cluster. This is currently not possible in the official OSM releases, whose placement capabilities are limited to targeting entire clusters, with no way to target individual nodes.
• Infrastructure as Code. Presently OSM lacks built-in support for GitOps. Martel has
developed OSM Ops, a distributed system to complement OSM’s own deployment and
operation tools with GitOps pipelines. The basic idea is to describe the state of an OSM
deployment through version-controlled text files hosted in an online Git repository. Each
file declares a desired instantiation and runtime configuration for some of the services in a
specified OSM cluster. Collectively, the files at a given Git revision describe the deployment
state of these services at a certain point in time. OSM Ops monitors the Git repository in order to automatically reconcile the desired deployment state with the actual live state of the OSM cluster (a conceptual sketch of this loop follows).
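The sketch below illustrates the reconciliation idea in Python. It is not the actual OSM Ops code; the osm_client wrapper and its methods are hypothetical stand-ins for calls to the OSM north-bound interface.

```python
# Conceptual GitOps reconciliation: act only on the difference between the
# desired state (from the Git repository) and the live state (from OSM).
def reconcile(desired: dict, live: dict, osm_client) -> None:
    """desired/live map NS instance names to their target configuration."""
    for name, config in desired.items():
        if name not in live:
            osm_client.instantiate(name, config)   # missing: create it
        elif live[name] != config:
            osm_client.update(name, config)        # drifted: re-apply config
    for name in live.keys() - desired.keys():
        osm_client.terminate(name)                 # removed from Git: delete
```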
KNF Placement
For the KNF placement functionality, the goal was to contribute this feature to OSM’s main
codebase, discussing the design and implementation details with the OSM Technical Steering
Committee (TSC) for approval. The reasoning for this was not only to ensure that the
functionality becomes part of an official OSM release, thus allowing for long term support for
it, but also to provide a good contribution to OSM thanks to Affordable5G.
At this time, the feature’s design was mostly finalized with help from the OSM’s developers,
however, implementation details were not fully discussed. This unfortunately due to slow
communication with the OSM developers, and several back-and-forth discussions dealing with
the issue of providing placement functionality for KNF descriptors without violating the vendor
contract (e.g. not altering the descriptors directly, providing ways for vendors to specify whether
or not to allow placement, etc.). As it stands, an implementation of the placement functionality
is now possible, but only in the form of a custom lifecycle management module (LCM) that
needs to be deployed in place of OSM’s base LCM module (Release Ten).
Due to the current status of the KNF placement implementation, we opted against replacing the module in any OSM installation that is being actively used for other tests. What we have are 3 separate VMs running in Malaga:
• A VM running OSM V.10 with the modified LCM module deployed (8 cores, 8 GB memory, 100 GB disk space).
The OSM instance has the Kubernetes cluster registered through a dummy Virtual
Infrastructure Manager (VIM), and a package containing a mission critical service for Pilot 2
ready to be deployed on it, with the ability to provide (at onboarding time) a list of labels that
the target Kubernetes nodes must have in order for the KNF to be deployed on them.
Due to the nature of this placement feature, defining relevant Key Performance Indicators (KPIs) isn't an easy task. Metrics such as the resource usage of deployed services or their response latency don't measure performance in relation to the placement functionality introduced, but rather the performance of OSM and Kubernetes themselves, which doesn't relate to our work. This is a new functionality being introduced, and its advantages are mainly the flexibility gained in placing services on the exact nodes where they are needed, and the ability to maintain consistency within OSM, which would be difficult to achieve if one were to start moving Kubernetes pods around outside of OSM.
A KPI we could use in this situation is the average E2E deployment time for a given KNF on our setup. This compares the time needed to deploy a KNF on a given node without the placement functionality (which requires manually moving the deployments/statefulsets on the Kubernetes cluster from one node to another) and with the placement functionality (providing the configuration directly at onboarding time from OSM). However, even in this case, several factors at play could make the time vary by a large margin, as the complexity of the services (how many KDUs and corresponding Deployments/Statefulsets and pods are created) and how they are meant to be distributed across the nodes can impact the deployment times.
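For completeness, a hedged sketch of how this deployment-time KPI could be measured is given below; the OsmClient-style wrapper, its methods and the "READY" status value are hypothetical stand-ins for OSM north-bound interface calls.

```python
# Hypothetical measurement of the proposed KPI: end-to-end KNF deployment time.
import time

def timed_knf_deployment(osm_client, nsd_name: str, node_labels: dict) -> float:
    """Instantiate a KNF with placement labels and time it until it is ready."""
    start = time.monotonic()
    ns_id = osm_client.instantiate_ns(nsd_name, placement=node_labels)
    while osm_client.ns_status(ns_id) != "READY":   # assumed status value
        time.sleep(5)
    return time.monotonic() - start
```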
Infrastructure as Code
The Infrastructure as Code solution was validated through a fully-fledged integration into the
Malaga cluster. The Malaga rollout entailed deploying every OSM Ops component and then
using the OSM Ops facilities to set up a GitOps pipeline to automatically deploy the Nemergent
services. We outline deployment and test cases below, then conclude this section with a brief
discussion of KPIs.
Deployment
The OSM Ops deployment targets the virtual servers already set up for KNF Placement.
OSM runs in its own virtual server (named “osm”) and is configured with a VIM pointing to a
Kubernetes cluster (MicroK8s distribution) made up of two nodes, named “node1” and “node2”.
The Kubernetes cluster hosts FluxCD's own Source Controller and the OSM Ops main service,
both in the “flux-system” namespace. Source Controller monitors an OSM test repository on
GitHub. The OSM Ops service connects both to Source Controller, within the same cluster,
and to the OSM north-bound interface (NBI) running outside the Kubernetes cluster. The
diagram below (Figure 8) illustrates the deployment.
The OSM Ops deployment entailed several tasks. We first installed and configured the FluxCD
CLI on node2, then deployed FluxCD and OSM Ops services to the Kubernetes cluster.
Following that, we configured OSM Ops to be able to interact both with the Git demo repository
through Source Controller and the OSM Northbound Interface (NBI). Finally, the OSM cluster
had to be configured with a new repository containing the Nemergent Helm charts for the
services to be deployed through the GitOps pipeline as well as suitable NSD and VNFD
descriptors.
Note that, with the above setup, OSM creates Network Service (NS) instances using a Network Service Descriptor (NSD) named "affordable_nsd" pointing to a Virtual Network Function Descriptor (VNFD) called "affordable_vnfd". The VNFD declares a Kubernetes Deployment Unit (KDU) named "nemergent" which references the Helm repository mentioned earlier.
O-RAN Integrations
The Eurecom Open5GLab network setup for the O-RAN 7.2 development is depicted in Figure 9. It is composed of a FibroLAN switch that interconnects the O-DU server with different O-RAN-compliant RUs, including Mavenir, Foxconn and VVDN RUs. The network is synchronized using a Qulsar PTP Grand Master that distributes PTP to the DU and RUs either directly or via the FibroLAN switch.
The OAI O-DU integrates the O-RAN Front Haul Interface (FHI) software libraries; many steps towards the full integration with a commercial O-RU have been validated:
● S-Plane validation both with:
○ Local master clock: OAI O-DU assuming grand master role and O-RAN sample
app O-RU assuming the slave role.
○ Grand master in the network: PTP synchronization packets coming from the
Qulsar Grand Master and OAI O-DU assuming the PTP slave role.
● CP and UP validation using both:
○ Testing OAI O-DU with respect to the O-RAN O-RU sample app.
○ Successful connection of OAI-DU to the Foxconn RU and proper exchange of
O-RAN packets.
The connection to the commercial off-the-shelf (COTS) Foxconn RU required, first, the validation of the S-plane for both the O-DU and the O-RU, followed by the configuration of the Foxconn RU M-plane. A specific configuration of the network switch was also set up to comply with the VLAN-tagged packets of the CP and UP.
The Wireshark capture in Figure 10 shows the successful connection of the OAI O-DU to the Foxconn RU using the O-RAN 7.2 interface. The UP packets flow in both directions, O-DU <-> Foxconn RU. The CP packets inform the RU about the transmission of the UP packets.
Example: a CP message from the OAI-DU will set up one section ID that spans from Physical Resource Block (PRB) 0 to 105, which indicates that all the subsequent UP packets will be transmitted over the selected PRBs. Such CP messages may be sent at any time (e.g., when beamforming) to inform the RU of the PRBs associated with the subsequent UP packets.
Some additional steps are still required to achieve a full E2E solution able to allow COTS UE
to connect to the 5G SA network using O-RAN FHI and they are mainly related to:
● Timing tuning of the OAI O-DU threads to comply with the timing of the O-RAN FHI library. This will allow the precise filling of the O-RAN FHI buffers.
● Observation of the O-RU emitted spectrum and decoding of the principal cell signaling
radio channels (e.g., Master Information Block (MIB) and System Information Block
(SIBs))
● Integration of the PRACH procedures complying with the O-RAN FHI to enable the UE
random access procedure for the connection.
● Connection with other commercial RUs like RunEL, Mavenir and VVDN.
To help RunEL test the interoperability of their RU with a test DU, Eurecom delivered to RunEL, in March 2022, dedicated software that simulates the O-RAN interface protocol. With Eurecom's remote support, RunEL successfully installed the Eurecom O-RAN 7.2 simulator software and all the other needed supporting software tools on one of RunEL's servers, and the simulator is ready to start the initial integration between the DU and the RU. Eurecom will first finalize the integration of the OAI-DU with the Foxconn RU, and then provide the software platforms to RunEL.
Following several virtual meetings between Eurecom and RunEL, the integration plan between the Eurecom protocol stack (Core, CU and DU) and the RunEL RU over the O-RAN PHY split interface Option 7.2 was created.
Following the completion of the initial O-RAN integration, Eurecom will provide to RunEL the full 5G SA Core + CU + DU software stack, which will also be installed on the RunEL server, and the E2E 5G standalone link, including a commercial 5G UE, will be integrated and tested.
2.2.3.3 DU - CU Integration
As one of the goals of O-RAN is to support multi-vendor RAN solutions, interoperability of the F1 interface between CU and DU is key for Accelleran, to allow the ACC dRAX (Near-RT RIC and disaggregated CU with CU-CP and CU-UP) to support multiple DU vendors and implementations.
As ACC has integrated with multiple O-DUs (OAI being the second such integration, with others ongoing), ACC has created an integration specification and plan (primarily for the F1 interface but also for E2).
• Phase 4: Performance limit testing with flow control, increasing system capacity:
o Incrementally increasing the number of Cells per DU, RUs per DU, DUs per CU-CP, CU-UPs per CU-CP, and UEs (so that the total throughput exceeds the available bandwidth); retest at performance limits.
o Test of PDU sessions of type IPv4, IPv6, IPv4v6.
o Test of network slicing eMBB, URLLC, mMTC types.
o Test of multiple PDU sessions per UE.
o Test of multiple data radio bearers per UE.
These F1 (but not E2) tests have been performed between the OAI DU and the ACC CU by installing an ACC CU in the Eurecom labs. Additionally, all these tests have also been performed with the alternative O-DU installed in Malaga.
The time plan keeps track of all tasks, starting dates, durations, dependencies between tasks
and involved partners. The network integration activity is tracked separately for the two project
test sites (Castellolí and Málaga). Figure 11 shows the updated time plan for Málaga:
Taking as a reference the diagram already presented in D4.1 [2], the time plan has suffered significant deviations, and the final deadlines in the diagram have been shifted to reflect the real status. The main issue, and the blocking point that has delayed the full deployment of the 5G network at the Malaga platform, is the integration between the RU and the DU, which is a critical interface in O-RAN and whose fulfilment is now expected by early September.
UMA, as owner of the Malaga platform, has acquired an alternative full O-RAN solution to be used in the meantime, which will be ready by early July and will allow us to continue with the rest of the tasks. In any case, once the issue is solved, the alternative solution will take a secondary role, mainly for testing multi-vendor interoperability, and the main O-RAN solution will be the one coming from the Affordable5G project.
The main building blocks that have been deployed in the testbed are NearbyOne, in charge of the orchestration of the 5G network; the O-RAN, which presents important integration points between components developed by different partners; and, finally, the 5G core, which has been updated to include new functionalities to support pilot developments. Note that pilot developments are explained in other sections.
5G Core Integrations
This section provides an overview of the Athonet 5G core installation, configuration and test plan in the Castellolí testbed, to provide the 5GC integration in the end-to-end 5G infrastructure.
A full 5GC instance (upgraded to release 3.1) has been provided as a VM containing a pre-configured core, installed on the Castellolí NFVI.
The provisioning and installation have been performed remotely via VPN by dedicated Athonet
support service. As in Malaga, the 5GC networking and licensing were configured and
checked, in order to fully provide the required functionalities.
The Athonet 5GC exposes its Northbound APIs in order to be managed and monitored by an orchestrator. In this context, the NearbyOne orchestrator (NBC) will leverage this set of APIs to retrieve 5G system information. In particular, the focus is on the 5GC KPIs, which can be retrieved by the NearbyOne orchestrator to provide a single point of monitoring. More information about this integration is provided in Section 3.2.2.4.
A telemetry component is expected to be integrated with the 5GC. In this respect, a NWDAF instance is currently deployed in the Castellolí testbed, with the purpose of connecting to the 5GC and retrieving relevant KPIs of its functional elements (e.g., number of sessions, number of UEs, traffic throughput, etc.).
The 5GC is equipped with a Prometheus component, which allows KPI shipping towards another Prometheus instance or another compatible monitoring tool, through the Remote Write functionality. Remote Write was enabled on the 5GC Prometheus instance by setting the following HTTP URI:
http://<PROM_IP>:<PORT>/api/v1/write
where PROM_IP and PORT identify the receiving Prometheus instance, i.e., the one from which the NWDAF retrieves the KPIs.
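As a minimal sketch of what the NWDAF-side retrieval could look like, the snippet below queries the receiving Prometheus instance through its standard HTTP API; the host and the metric name are illustrative, not the actual 5GC metric names.

```python
# Query a KPI from the receiving Prometheus instance (standard /api/v1/query API).
import requests

PROM_URL = "http://prometheus.example:9090"   # placeholder for PROM_IP:PORT
resp = requests.get(f"{PROM_URL}/api/v1/query",
                    params={"query": "amf_registered_subscribers"})  # assumed name
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    print(series["metric"], series["value"])
```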
Orchestrator Integrations
NearbyOne, as the E2E orchestrator in Castellolí, has been integrated with different layers of components involved in the Affordable5G pilot in Castellolí, ranging from baremetal/network provisioning to the integration with VNFs/Apps.
The NearbyOne orchestrator is containerized, can be deployed via a Helm chart and supports installation on different managed and unmanaged k8s flavours, e.g., vanilla k8s, Red Hat OpenShift, OKD, Rancher RKE, AWS EKS, Azure AKS, ARO and VMware Tanzu.
In Castellolí, we have deployed NearbyOne on an RKE k8s cluster, running in 3 VMs deployed
on one of the Castellolí paddock’s servers.
3.2.2.1 Provisioner
For the integrations between NearbyOne’s baremetal provisioner and the specific Lenovo
servers Cellnex has chosen for Castellolí, we have developed our interfaces supporting the
Redfish standard (RDFSH) [23] (“DMTF’s Redfish® is a suite of specifications that deliver an
industry standard protocol providing a RESTful interface for the management of servers,
storage, networking, and converged infrastructure.”)
The standard is supported by most hardware vendors (WKRD) [24] (Advantech, Dell, Fujitsu, HPE, IBM, Lenovo, Supermicro and Cisco), but specific features required for the dynamic provisioning of the nodes are left out of the standard and required some extensions; e.g., mechanisms for redirecting operator interfaces such as the serial console or virtual media are not included in the standard, as these can't be reasonably implemented as RESTful interfaces.
The NearbyOne orchestrator has integrated with the RESTful interfaces supported by the i2CAT Slice Manager to handle the RAN chunks of the slices. The following lines describe the current slice provisioning workflow among the Slice Manager, the Non-RT RIC and the NearbyOne orchestrator (a request sketch follows the list):
1. The Slice Manager creates a user and registers the RAN infrastructure in the slice manager.
2. The Slice Manager returns all available RAN infrastructure for the user.
3. The Slice Manager returns the configured RAN infrastructure.
4. The Slice Manager receives the request for deploying a slice and its 5G Core
configuration.
5. The Slice Manager returns information regarding the activation of the RAN slice and of the
slice deployed.
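As an illustration of step 4, a slice deployment request could look like the hedged sketch below; the endpoint and payload fields are illustrative only, not the actual Slice Manager API.

```python
# Hypothetical slice deployment request towards the Slice Manager REST API.
import requests

slice_request = {"slice_id": "ran-slice-1", "sst": 1, "cells": ["cell-1"]}
resp = requests.post("http://slice-manager.example/api/ran_slices",
                     json=slice_request)
resp.raise_for_status()   # a 2xx code acknowledges the RAN slice creation
print(resp.json())        # e.g. activation details of the deployed slice
```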
In Castellolí’s pilot, the integration between NearbyOne orchestrator and Accelleran dRax
includes its lifecycle management. Accelleran provides the helm chart for deploying dRax as
a containerized application. NBC has exported Accelleran’s helm chart as a block in
NearbyOne. This block can be easily deployed from our dashboard to our provisioned k8s
edge clusters, where also its placement/migration and lifecycle management policies can be
defined.
Notice that all the interactions between the orchestrator and dRax go through i2CAT Slice
Manager and its NonRT-RIC. NearbyOne orchestrator delegates the RAN chunks of the slices
to the SliceManager, and these are the components interfacing with dRAX.
In Castellolí’s pilot, Athonet provided its 5G Core already pre-installed in a server and managed
by themselves. In this scenario, the integration between NearbyOne orchestrator and
Athonet’s 5G Core skips the lifecycle management of the Core and focuses on the integration
with its monitoring interfaces.
The purpose is to expose Athonet’s 5G Core KPIs to the NearbyOne orchestrator, to let the
orchestrator act as a single point of monitoring, in order to demonstrate the “monitoring
management” capabilities provided by our solution.
This integration was built using the Prometheus server that the Athonet 5G Core exposes and configuring Prometheus federation in NearbyOne, to give the orchestrator and external consumers (e.g., the ML/AI engines from ATOS/i2cat/NKUA) a single point of monitoring.
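For reference, federated metrics can be pulled through Prometheus's standard /federate endpoint, as in the hedged sketch below; the host and the job label are illustrative.

```python
# Pull a subset of 5GC metrics through Prometheus federation (/federate).
import requests

resp = requests.get("http://athonet-prometheus.example:9090/federate",
                    params={"match[]": '{job="5gc"}'})  # assumed job label
resp.raise_for_status()
print(resp.text[:400])   # exposition-format samples, ready to be re-scraped
```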
Similarly to the above Accelleran dRAX integration, the integration between the NearbyOne orchestrator and the Nemergent MCS App includes its lifecycle management. Nemergent provides the Helm chart deploying the MCS as a containerized application. NBC has exported the Nemergent Helm chart as a Block in NearbyOne. This block can be easily deployed from our dashboard to our provisioned k8s edge clusters, where its placement/migration and lifecycle management policies can also be defined.
Unlike the previous dRAX CNF, in this case the block will make use of the different KPIs provided by the orchestrated components and the NWDAF to migrate/scale/heal the MCS application in different scenarios.
This section focuses on the integration between the RIC manager, which corresponds to the
implementation of the non-RT RIC in Affordable 5G and is part of the SMO layer, and the dRAX
at the RAN, which includes the near-RT RIC. The description of this integration is presented
in the first sub-section. Then, as a specific capability of this integration, the second sub-section
describes an xApp that can be deployed in the dRAX via the RIC manager. The test cases for
this integration will be presented in section 4.
As discussed in Deliverable D3.2 [3], the implementation of the non-RT RIC in Affordable5G is realized through a software module, referred to as RIC manager, which offers an O-RAN-compliant A1 interface and can control different types of near-RT RIC implementations regardless of whether or not they support the A1 interface. Moreover, it expands the definition of the O-RAN non-RT RIC to include additional functions such as xApp discovery or onboarding.
Then, following the architecture of the RIC manager presented in deliverable D3.2 (Figure 14),
this deliverable covers the integration of this module with the new version 4.0 of Accelleran’s
dRAX, also known as dRAX 5G. The features of dRAX 4.0 are very similar to the ones present
in dRAX 2.1 that were considered in D3.2, but there are some additions that allow us to extend
the integration with RIC Manager.
RIC Manager’s API (Figure 15) still offers the same endpoints, which follow the O-RAN
standard, allowing to seamlessly work with both dRAX’s 4.0 and 2.1 versions as well as with
O-RAN standard implementations of Near-RT and Non-RT RICs.
As described in Deliverable D3.2 [3], the RIC Manager allows deploying xApps in different Near-RT RICs, providing the specific implementation for each vendor while offering the same functionalities.
One of the xApps that we have developed extracts data and telemetry from the Near-RT RIC
and exports it to a selected data lake. For dRAX 2.1 the only supported data lake is AWS S3,
but for version 4.0 we are able to expose the xApp’s ports outside of the K8s cluster, allowing
us to integrate the Telemetry xApp with Prometheus. We have developed a Prometheus
exporter that runs alongside the xApp, which is constantly exposing 4G and 5G telemetry to a
Prometheus server.
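A minimal sketch of such an exporter, using the Python prometheus_client library, is shown below; the metric name and the dRAX polling hook are assumptions, not the actual xApp code.

```python
# Illustrative telemetry exporter: expose UE measurements to Prometheus.
import time
from prometheus_client import Gauge, start_http_server

ue_rsrp = Gauge("drax_ue_rsrp_dbm", "UE RSRP reported via the Near-RT RIC",
                ["ue_id"])

def fetch_ue_measurements() -> dict:
    # placeholder for the xApp's subscription to dRAX telemetry
    return {"ue-1": -95.0}

if __name__ == "__main__":
    start_http_server(9100)             # Prometheus scrapes this port
    while True:
        for ue, rsrp in fetch_ue_measurements().items():
            ue_rsrp.labels(ue_id=ue).set(rsrp)
        time.sleep(5)
```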
The exporter can be configured through a new A1 policy (Figure 16), exclusive to dRAX 5G, which allows toggling the exporter as well as selecting which specific metrics are to be exposed. It also supports all the configuration options present in the dRAX 2.1 Telemetry xApp.
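An exporter-configuration policy could then be pushed roughly as below; the policy body and the type/instance identifiers are hypothetical, and only the /a1-p URI structure follows the O-RAN A1-P convention.

```python
# Hypothetical A1 policy toggling the exporter and selecting metrics.
import requests

policy = {"exporter_enabled": True,
          "metrics": ["ue_rsrp", "ue_rsrq", "cu_up_dl_throughput"]}  # assumed
resp = requests.put(
    "http://ric-manager.example/a1-p/policytypes/100/policies/1",
    json=policy)
resp.raise_for_status()
```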
dRAX 4.0 also supports new 5G telemetry that can be exported via the xApp:
• UE Measurements
o RSRP
o RSRQ
o SINR
• RRC Stats
o RRC Attempted Connections
o RRC Successful Connections
• CU-UP Throughput
o Downlink Throughput [bps]
o Uplink Throughput [bps]
A set of tests to validate the use of dRAX5G and the deployment of the telemetry xApp using
RIC manager have been defined. These tests will be presented in section 4.2.7.
For the O-RAN fronthaul infrastructure at the Castellolí site, ADVA provides Transport Network Equipment (TNE) components. The TNE ensures fibre connectivity and synchronization between the O-RU and the O-DU. The following dedicated hardware components have been shipped to the site: OSA-5401 and FSP 150 XG-118. The proposed fibre connectivity and timing distribution path is depicted below in Figure 17. In O-RAN fronthaul terminology, this configuration is known as LLS-C3 (Lower Layer Split, Configuration 3).
The solid blue line represents single-mode fibre, suitable for distances of up to 5-30 km, and the orange line multi-mode fibre (up to 300 m).
Prior to the shipment, we assessed the quality of the synchronization at the COTS Edge Server and reported it in Deliverable D2.2 [4], Chapter 5.2. To make the on-site integration easier, the equipment had been pre-configured, and the configuration was reviewed at a face-to-face meeting.
The time plan keeps track of all tasks, starting dates, durations, dependencies between tasks
and involved partners. The network integration activity is tracked separately for the two project
test sites (Castellolí and Málaga).
Figure 18 shows the updated time plan for Castellolí. Taking as a reference the diagram already presented in D4.1 [2], the time plan changed significantly, presenting several deviations.
One of the issues that had to be addressed was that the RU-DU integration presented significant delays, which made it impossible to assume that the equipment would be available for the Affordable5G pilot demonstration on the Castellolí platform. Cellnex, the partner owning the platform, decided to purchase O-RAN equipment that can fulfil the needs of the project. This purchase suffered a long supply-chain delay because of Covid-19, and the equipment is expected to arrive in the second week of August. The integrations related to the RU and DU in Castellolí will start once the equipment arrives.
With that exception, all partners have been working hard to fulfil the initial time plan. There were some short delays in the 5G Core installation due to access problems in the platform and limited hardware resources, which were solved once the requirements were updated.
The testing of individual components and the pre-integration testing phases are an important preparation for the upcoming tasks. The purpose of the individual test phase is precisely to minimize the margin of failure of each component; the test phase between different components then not only builds on the successful individual tests, but also demonstrates that the interaction between modules results in a successful outcome.
The nomenclature used to define each Test Case identifier is composed of a first set of letters that differentiates between Individual Test Cases (Ind) and Integration Test Cases (Int). These letters are followed by "-test-", an enumeration of the component (or of the set of components whose integration is tested), and the number of the test case defined for that component or integration, all separated by "-". An example would be Ind-test-01-01 (the first individual test case of the first component).
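The convention can be captured with a small parser, shown below purely as a hedged illustration of the scheme (not project tooling):

```python
# Parse test case identifiers such as "Ind-test-01-01" (kind, component, case).
import re

TEST_ID = re.compile(r"^(Ind|Int)-test-(\d{2})-(\d{2})$")

def parse_test_id(test_id: str) -> tuple:
    match = TEST_ID.match(test_id)
    if not match:
        raise ValueError(f"malformed test case id: {test_id}")
    kind, component, case = match.groups()
    return kind, int(component), int(case)

print(parse_test_id("Ind-test-01-01"))   # -> ('Ind', 1, 1)
```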
The numerical KPIs are detailed in Section 4.1.2. That table presents, in a consolidated way, the different values that determine whether the Test Cases have been fulfilled. Afterwards, results are provided for the cases in which such an evaluation was possible.
The following test case aims to test the KNF placement functionality enhancement for OSM, by fully deploying a service in the form of an NS/KNF both with and without using the new placement options at onboarding time. This compares how quickly one can place the services on the nodes they need to be on, versus the case where this functionality is not directly provided by OSM.
Slice Manager
The Slice Manager oversees creating RAN slice subnets and sending an acknowledgement to the Orchestrator informing it that the radio chunks have been created successfully. Additionally, the Slice Manager communicates to the Orchestrator the topology and characteristics of the current infrastructure. With that information, the Orchestrator is able to choose which parameters it needs to deploy and then send the slice creation requests so that the Slice Manager can create such RAN slice subnets. To confirm the creation of the slice subnet, an acknowledgement code is sent to the Orchestrator using HTTP methods. Finally, the time needed for the creation of the slices was measured under laboratory conditions.
KPI: This time is for reference only, as it was measured under laboratory conditions, and it must not be taken as final.
As an example of the timed test, the following is one of the logs used:
AI-ML
The ever-increasing demand for control loops in mobile network architectures and the current trends in MLOps clearly require dedicated architectural blocks to facilitate and orchestrate all the different Machine Learning (ML) operations, such as training, evaluation and execution of the algorithms at every network level.
have developed an AI/ML Framework based on open libraries and standards, including a set
of interfaces for its integration with different network components of the 5G architecture. The
AI/ML Framework developed in the context of the project is based on TensorFlow, a widely
adopted open source set of libraries for numerical computation and machine learning. The
architecture of the AI/ML framework is depicted in Figure 19.
This subsection aims to assess the functional verification of the individual components that make up the Affordable5G AI/ML Framework. This functional verification has been assessed in 4 test cases (Ind-test-03-01, Ind-test-03-02, Ind-test-03-03, Ind-test-03-04) covering the AI/ML Pipeline Orchestration Platform (POP), the AI/ML Automated Model Deployer (AMD) and the AI/ML Model Serving Platform (MSP), the essential building blocks of the architecture. In addition, this subsection also includes a performance evaluation of the MSP in the last test case (Ind-test-03-05), which will help verify whether the prediction latency of the MSP fulfils the requirements of the different O-RAN control loops (Real-Time [<10 ms], Near-Real-Time [>10 ms and <1 s], Non-Real-Time [>1 s]).
The purpose of this individual test case is to demonstrate the proper deployment of the POP component of the AI/ML Framework, including the onboarding of a toy AI/ML pipeline (the chicago_taxi pipeline provided in the official TensorFlow documentation [TFX]) into the system.
Test sequence: Step 1: Onboard the toy AI/ML pipeline into the POP without raising exceptions. Step 2: List the models loaded in the POP (Airflow) CLI or check the GUI.
Test Verdict: The test must show that the AI/ML pipeline is onboarded successfully to the POP, either through the GUI or the CLI.
Additional Resources: CLI command: airflow dags list
The results are demonstrated in the following figures. First, Figure 20 shows the Airflow GUI with the toy model loaded, and then Figure 21 shows the Airflow CLI output, listing all the AI/ML pipelines (DAGs) onboarded into the system.
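For readers unfamiliar with Airflow, a pipeline onboarded into the POP is simply a Python DAG file; the toy sketch below illustrates the shape of such a file (it stands in for the real chicago_taxi TFX pipeline and is not part of the framework itself).

```python
# Minimal Airflow DAG: what "onboarding a pipeline" means in practice.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

with DAG(dag_id="toy_pipeline",
         start_date=datetime(2022, 1, 1),
         schedule_interval=None,      # run on demand only
         catchup=False) as dag:
    train = PythonOperator(task_id="train",
                           python_callable=lambda: print("training step"))
```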
The purpose of this individual test case is to demonstrate the proper operation of the POP
showing an example of a toy pipeline (TFX) [21] properly executed by the Airflow instance of
the AI/ML Framework.
The results of this individual test case are presented in the following lines. Firstly, Figure 22 shows the Airflow GUI with the model enabled and with different runs, demonstrating the proper operation of the POP. Then, Figure 23 shows all the components that make up the model pipeline (DAG) properly executed and highlighted in green (success tag).
Figure 23 GUI showing the AI/ML pipeline graph view with all the pipeline components successfully
executed
The purpose of this individual test case is to demonstrate the proper operation of the AMD
component of the AI/ML Framework, showing how the exported models are successfully
detected by the component and the MSP configuration file is updated.
The result of the aforementioned test case is presented in the following figures. Figure 24
shows the logs of the AMD instance demonstrating a successful automatic deployment of a
toy model in the TFS instance of the AI/ML Model Serving Platform. In addition, Figure 25
shows the TFS configuration file that has been automatically updated by the AMD.
Figure 24 AMD logs showing the successful deployment of the model in the MSP
The purpose of this individual test case is to demonstrate the proper operation of the TFS
instance of the AI/ML Framework MSP component. This test aims to show that the model is
correctly served through the REST interface of the TFS instance, which is the interface used
in the context of the project. For this test we have used a toy model (half_plus_two) obtained
from the official TensorFlow documentation TFSD [22].
The result of this test case is presented in the following lines. Firstly, Figure 26 shows that the model is available through the REST interface and, secondly, Figure 27 shows a response to an exemplary prediction using the curl command.
Figure 26 MSP REST response showing the availability of a test model that is being served correctly
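The same check performed with curl in Figure 27 can be reproduced in Python. The snippet below uses the documented TensorFlow Serving REST API with a placeholder host; the expected output follows from half_plus_two computing x/2 + 2.

```python
# Prediction request against the TFS REST API (default port 8501).
import requests

url = "http://tfs.example:8501/v1/models/half_plus_two:predict"
resp = requests.post(url, json={"instances": [1.0, 2.0, 5.0]})
print(resp.json())   # expected: {"predictions": [2.5, 3.0, 4.5]}
```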
The purpose of this individual test case is to validate that the MSP is able to meet the requirements of the different O-RAN control loops (Real-Time [<10 ms], Near-Real-Time [>10 ms and <1 s], Non-Real-Time [>1 s]). Hence, this test case aims to evaluate the inference latency of two ML models of different complexity: first, a baseline toy model (half_plus_two) provided by TensorFlow in the official documentation TFSD [22] and, second, the CPU Prediction model developed by I2CAT in the context of Affordable5G Demo 3.
Test case name: Model inference latency
Test Case id: Ind-test-03-05
Test purpose: Measure the inference latency of the ML model serving platform
Configuration: POP installed and running; AMD installed and running; ML model uploaded to the POP; ML model exported; ML model deployed in the MSP
Test tool: Custom script
KPI: Inference latency of the ML model serving platform (seconds)
Components involvement: AI/ML Framework
Pre-test conditions: The AI/ML Framework has to be deployed; Ind-test-03-01, Ind-test-03-02, Ind-test-03-03 and Ind-test-03-04 successfully completed
Test sequence: Step 1: Send a POST with the model input to the MSP REST API. Step 2: Check the response.
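A hedged sketch of such a custom script is shown below: it times repeated predictions against the MSP REST endpoint and reports latency percentiles (host and model name are placeholders).

```python
# Measure MSP inference latency over repeated REST predictions.
import statistics
import time
import requests

URL = "http://tfs.example:8501/v1/models/half_plus_two:predict"
latencies = []
for _ in range(1000):
    start = time.perf_counter()
    requests.post(URL, json={"instances": [1.0]}).raise_for_status()
    latencies.append(time.perf_counter() - start)

print(f"median: {statistics.median(latencies) * 1e3:.2f} ms, "
      f"p99: {statistics.quantiles(latencies, n=100)[98] * 1e3:.2f} ms")
```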
The result of this test case is presented in Figure 28. The figure shows that the MSP of the AI/ML Framework is suitable for Real-Time control loops, obtaining an inference latency lower than 10 ms.
Figure 28 CDF of the prediction latency of a toy model (half_plus_two) and the CPU prediction model
provided by I2CAT
5G Core
This individual test case is needed to check the license validation of all the 5GC NFs, in order to make all the 5GC services running and available. Without a valid license, the NFs are not operative and configurable by the user.
The result of this test shows that all the licenses have been successfully registered, making the NFs fully operational.
The purpose of this test case is to check the network configuration of the 5GC in terms of available interfaces, IP pool settings, and module reachability.
interface externally must be reachable from the other network elements. This test is extremely
important as the connectivity with other network elements (gNB, dRAX, Orchestrators, etc.)
depends on these configurations.
The result of this test case is the confirmation that the interfaces are up, that no errors are obtained after the network configuration, and that the core is reachable.
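In the same spirit, a basic reachability probe for an externally exposed NF interface could look like the sketch below; the address and port are placeholders, and a TCP probe like this only applies to TCP-based interfaces (e.g., the HTTP-based SBA ones).

```python
# Basic TCP reachability probe for an exposed 5GC NF interface.
import socket

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(reachable("10.0.0.10", 443))   # placeholder NF SBI endpoint
```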
The purpose of this test is to evaluate the average packet latency through the user plane part
(i.e., UPF). In this case, the session traffic packets are analyzed via some software tools (open-
source and commercial ones) to estimate the average values on a total of processed traffic
packets. The measured time is the time that elapses from the instant in which a packet enters
the UPF to the instant it leaves it.
This test is relevant when it is required to analyze the total delay in an end-to-end 5G system
and to confirm negligible delay introduced by the 5G core.
The obtained result reflects what was expected from the various estimates, with an average
delay of tens of µs, a value that is negligible compared to the total delay of the whole
end-to-end chain.
This test aims to evaluate the number of UPFs that the 5GC is able to support at the same
time given the HW in use. Due to the importance of slicing in the context of the Affordable5G
project, analyzing the ability of a 5GC to support concurrent UPFs in different edge nodes
acquires some relevance.
The test case takes into consideration the hardware configuration installed at Malaga testbed
(Dell R640). From the information extracted from specific metrics available from the 5GC user
interface (see table below), it is possible to only estimate the number of UPFs concurrently
connected.
With a typical network configuration and considering the performance of the hosting machine
(HW + hypervisor), experiments indicate that more than five UPFs can be installed
simultaneously, where the exact number depends on the required throughput and the computing
and networking resources available per UPF instance (constrained by the total HW and NIC card
resources).
This test evaluates the maximum throughput reachable by the user traffic through a single
UPF. This specific metric is highly dependent on the adopted hardware and, in particular, on
the network card; hence, it is not a metric directly associated with core performance.
The test case takes into consideration the hardware configuration installed at Malaga testbed
(Dell R640). The test has been conducted applying a mix of UL and DL traffic.
NEOX [11] is a parallel multicore and multithreaded GPU-like Deep Neural Network (DNN)
architecture based on the RISC-V RV64C ISA instruction set with an adaptive Network-on-Chip
(NoC), offered by THI. NEOX is fully customizable in terms of number of threads per core,
width of the vector processing lanes and memory resources (private and shared caches as
well as the cache prefetching options). The NEOX multithreading capabilities hide long-latency
delays from the external memory controller, maintaining high computation throughput for
the entire array.
The target of this testbed is to calculate the power consumption of NEOX accelerator at the
ASIC level.
Test case name: Power consumption estimation of NEOX accelerator
Test Case id: Ind-test-05-03
Test purpose: Highly accurate estimation of the power consumption of NEOX accelerators
Configuration: The estimation of the power consumption will be performed at the netlist level (the lowest level of the design, thus the one with the highest accuracy) for 4 different configurations of NEOX accelerators (varying the number of cores and blender configurations).
Test tool: Design Compiler and Power Compiler of Synopsys. A single convolution layer of size equal to 24 Kbytes.
KPI: Average power consumption, peak power consumption, and maximum frequency
Components Involvement: NEOX accelerator with four different configurations
Pre-test conditions:
• HDL (Verilog) code of NEOX can be synthesized into ASIC.
• Design Compiler and Power Compiler of Synopsys are running.
• Two different process libraries are available.
Test sequence:
• Step 1: Synthesized code is executed at gate level to measure execution time
• Step 2: Synthesized code is executed at gate level to extract (bit) switching activity (SAIF) files
• Step 3: Power Compiler is configured to take as input the SAIF files
• Step 4: Power Compiler is configured to take as input two process libraries for two different process technologies
Test Verdict: For the single convolution layer, the measured power consumption ranged from 2.58 to 4.11 mW (depending on the number of cores) for performing 30 inferences per second.
The figure below shows the testbed for measuring the ASIC-level power consumption of the
NEOX accelerator. It is important to mention that the power consumption is measured at the
netlist level, thus highly accurate measurements are performed. As can be seen in the figure
below, the power estimation framework consists of three distinct levels: synthesis level,
simulation at gate level, and netlist level.
The following picture shows the run-times in milliseconds as well as in frames or inferences
per second when we vary the number of cores between 1 and 4.
Configuration | Runtime (ms) | FPS
XS | 4.52 | 221.01
XL1000 | 2.46 | 407.03
XL2000 | 1.27 | 787.08
XL4000 | 0.67 | 1489.13
Figure 30 Latency and throughput for NEOX accelerator when the number of cores is modified
The next figure depicts the power (in mW) and energy consumption (in mJ). As
expected, the energy and power figures remain almost intact. Among others, this is evidence
of the efficiency of the power and clock gating techniques that have been employed in the
NEOX hardware design.
Configuration | Power (mW) | Energy (mJ)
XS | 2.58 | 0.09
XL1000 | 4.06 | 0.14
XL2000 | 3.97 | 0.13
XL4000 | 4.11 | 0.14
Figure 31 Power and energy of NEOX accelerator when the number of cores is varied
TSN over 5G
This section includes individual test cases related to the TSN over 5G Proof of Concept
developed by UMA, which is explained in detail in section 5. The test cases exposed below
are related to e2e latency and jitter, which are the most representative KPIs in a time-sensitive
networking solution. As this solution is at an intermediate stage, the results obtained are
expected to improve with the final upgrades at the end of the project.
This first test case aims to validate one of the most important KPIs of a TSN over 5G solution:
the e2e latency. Latency is calculated, using a network performance analyzer tool, as the
difference between the packet timestamps at sending and at receiving, considering that both
clocks are synchronized. The test verdict has been defined according to the service
requirements and TSN features discussed in 5GACIA [5], using Cyclic-Asynchronous as traffic type.
The results obtained after executing this test case can be observed in Figure 32. The figure
shows that latency is below 20 ms during the whole test, so test verdict is PASS.
Figure 32 E2E latency (ms) over time (s) measured during the test
This test case is focused on the e2e jitter measurement. As the jitter measurement can be
obtained while the latency test case is performed, Ind-test-06-01 will be used as a pre-test
condition. Again, the test verdict is based on 5GACIA [5].
After processing the .pcap file, the jitter is calculated. In Figure 33, we can see that the result
is below the traffic cycle period.
In the first stage of the project, REL and ADVA investigated a stripped-down 5G system at the
REL lab. A TSN-optimized UPF-U prototype software (TSN-UPF) was developed for this
purpose. The goal was to assess the absolute minimal one-way latency and jitter values
(Affordable5G, 2022).
The testbed for the stripped-down system with UP shortcut is illustrated in Figure 34.
The latency and jitter results for the stripped-down system are the baseline for testing the whole
Affordable5G system. To assess one-way latency and jitter for the fully integrated 5G system at
the Malaga site, we are developing UPF-U and UPF-C prototype software, which is now in the
final integration phase. As in the previous test setup, a synthetic UDP stream with the same
parameters will be used.
Orchestrator
We define a series of tests to validate the use of the NearbyOne Orchestrator as discussed in
section 3.2.2. These tests include:
These orchestrator individual tests verify that the orchestrator is able to provision the
infrastructure edge nodes that will later be used to deploy all the applications, VNFs and slice
configurations. It does so by running self-contained simple tests that do not require integrations
with external components from other partners.
This test proves that the orchestrator can be used to provision COTS servers in the Castellolí
environment. For this test we use a server with no prior configuration at all because the server
is IPMI/Redfish enabled; in other scenarios, where this interface is not available, the only thing
required is to configure the server to boot from a virtual ISO or USB device that triggers the
iPXE provisioning of the node:
Test case name: Nztp-provision Node
Test Case id: Ind-test-07-01
Test purpose: Nztp-provision a COTS server.
Configuration:
• NearbyOne e2e-orchestrator is up and running.
• NearbyOne e2e-orchestrator is L3-reachable from the servers to be provisioned.
• The COTS server is configured to boot from the provided ipxe.iso.
Test tool: Cypress (it automatically follows the test sequence steps and checks the expected responses)
KPI: Functional test. Concurrent provisions (>=3 nodes; no more COTS nodes were available for higher numbers of concurrent provision tests)
Components Involvement: Orchestrator
Pre-test conditions: The orchestrator is up and running
Test sequence:
• Step 1: Open the NearbyOne GUI and visit /app/infrastructure/add-device to register the device, providing: location, HW_identifier/TPM, and workflow (defines the OS, drivers and HW/SW). Expected result: the response code is 200, a deviceID is generated for the device, and the device is registered in NearbyOne. If the HW_identifier already exists or some data is missing in the request, an error is returned.
Figure 37 Cypress video screenshot showing the e2e-provision-vm2 in "Ready" status at time
895.30 s of the test
This test shows how to deploy a sample application using the same procedure that would be
run in the case of more complex slices involving other kinds of resources:
The screenshots in Figure 38, Figure 39 and Figure 40 show the results obtained from
running this test in Concourse CI using Cypress. The same test includes, in one go, the
deployment, the update and the deletion, the latter documented as part of the next test in
4.1.7.3. It also includes other tests not documented here (e.g., updating the deployed version
of an application/VNF).
This test shows how the resources previously allocated by deployments like the ones shown
in the previous test can be removed from the orchestrator dashboard, triggering all the required
actions to remove from the different platforms all the resources that were deployed in that
slice/App/xNF.
The previous screenshot (Figure 38) shows this test passing as part of the same run that
created and updated the deployment. The following screenshot (Figure 41) shows its
undeployment.
Open 5G-RAN
The components of the Open 5G NG-RAN, consisting of the O-CU, O-DU, O-RU and Near-
RT RIC, fully interconnect and interact to create a running 5G gNB, so most of the tests are
documented in the integration test cases section 4.2. The individual test cases are related to
the initial installation and configuration of the network functions, bringing them into a running
state before attachment.
This test installs a clean version of the cloud-native dRAX Near-RT RIC and the associated
dRAX Dashboard.
This test installs the dRAX O-CU gNB network function. The CU-CP is installed.
For Malaga (where the O-RAN is configured manually), all the CU-UP instances will also be
installed.
For Castellolí, only the CU-CP will be manually configured as the CU-UP instances will be
created by the slice manager during addition of PLMNs & network slices.
Configuration: The installation follows https://accelleran.github.io/drax-docs/drax-install/#install-drax-5g-components. The configuration parameters entered depend on the network (both internal 5GC/RAN IP addressing and 5GS NG-RAN PLMN) configurations, so they will differ for Malaga and Castellolí. For Malaga (where the O-RAN is configured manually), all the steps will be followed. For Castellolí, only the CU-CP will be manually configured, as the CU-UP instances will be created by the slice manager during the addition of PLMNs and network slices.
Test tool: Web browser to dRAX dashboard: http://$NODE_IP:31315
KPI: Functional test
Components Involvement: O-RAN O-CU
Pre-test conditions:
• dRAX RIC and Dashboard installed (previous test case executed).
• CU-CP configuration parameters defined and available: https://accelleran.github.io/drax-docs/drax-install/#required-parameters
• If required: CU-UP configuration parameters defined and available: https://accelleran.github.io/drax-docs/drax-install/#required-parameters_1
Test sequence:
• Step 1: From the dRAX Dashboard, select New 5G CU deployment: https://accelleran.github.io/drax-docs/drax-install/#install-drax-5g-components
• Step 2: Select CU-CP and configure the parameters. Submit.
• Step 4: For each CU-UP instance to be manually configured, configure and submit as per https://accelleran.github.io/drax-docs/drax-install/#5g-cu-up-installation
• Step 5: Connect a web browser to the dRAX Dashboard: http://$NODE_IP:31315. Verify the installation as per https://accelleran.github.io/drax-docs/drax-install/#verifying-the-drax-installation. Also connect to the 5G system health dashboard on http://$NODE_IP:30300 and pick the Accelleran dRAX 5G System Dashboard from the list of pre-built Grafana dashboards. (Note: not all services and interfaces may be running at this point.)
Test Verdict: Via the dashboards, check that the dRAX CU has been installed and is operational; at this point, there will be no connected DU.
Additional Resources: https://accelleran.github.io/drax-docs/drax-install/
This test case aims to validate the Open Fronthaul S-plane, implemented as PTP distribution
between TNE (XG118) and O-DU (SE350). The O-RU receives the synchronization from
GNSS independently, so it is not part of this test case. The configuration of the test is shown
in Figure 17.
Test case name: Open Fronthaul S-plane
Test Case id: Ind-test-08-03
Test purpose: Validate the correct installation, configuration and operation of the Open Fronthaul S-plane between TNE and O-DU
Configuration: The configuration parameters of the PTP domain shall match the configuration of the BC at the TNE
Test tool: Web browser to SoftSync dashboard: http://$NODE_IP:8080; web browser to TNE (XG118) dashboard: https://$TNE_IP
KPI: Functional test
Components Involvement: O-RAN TNE, O-RAN O-DU
Pre-test conditions: TNE is installed with the GPS antenna connected. Fiber is connected between TNE Port #2 and O-DU port #N
Test sequence:
• Step 1: Connect to the TNE web GUI and validate that the PTP Boundary Clock operational status is Normal
• Step 2: Validate that the PTP stream arrives at the COTS (tcpdump -nne -i $IF_NAME)
• Step 3: Install the SoftSync software on the O-DU COTS
• Step 4: Configure the IP address on the COTS interface $IF_NAME as 50.0.0.2/24
• Step 5: Configure the PTP clock and PTP port via the web GUI
• Step 6: Via the web GUI, check that there are no alarms and that the PTP Master IP is the correct one (50.0.0.1)
• Step 7: Via the web GUI, monitor that the state of the PTP clock is Normal and that there are no alarms during 15 min
Test Verdict: Via the SoftSync dashboards, check that SoftSync has been installed and is operational. The PTP clock should report proper values for {Clock Recovery State=Locked, Phase Recovery State=Locked, Time Traceability Status=True, Current Time Of Day=$date}
Additional Resources:
• https://www.oscilloquartz.com/en/products-and-services/embedded-timing-solutions/osa-softsync
• https://www.adva.com/en/products/packet-edge-and-aggregation/edge-computing/fsp-150-xg-118pro
The test case below validates the Open Fronthaul S-plane client telemetry.
Test case name: Open Fronthaul S-plane telemetry
Test Case id: Ind-test-08-04
Test purpose: Validate the S-plane telemetry streaming towards the gNMI collector platform
Configuration: The configuration of SoftSync as in the previous test
Test tool: Web browser to SoftSync dashboard: http://$NODE_IP:8080; gNMIc client or Python 3.7
KPI: Functional test
Components Involvement: O-RAN O-DU with SoftSync, gNMI collector application
Pre-test conditions: SoftSync is up and running on the O-DU COTS. gNMI collector software is installed (either gNMIc or Python 3.7 pyGNMI)
Test sequence:
• Step 1: Using the gNMIc application, connect to SoftSync running on the O-DU ($NODE_IP:20830) and request capabilities. Expected result: response with ModelData {name: ietf-ptp, organization: IETF TICTOC Working Group}
• Step 2: Subscribe to the telemetry stream: gnmic -a $NODE_IP:20830 --insecure -u admin -p admin sub --path "/ptp/instance-list/1/time-properties-ds/time-traceable" --sample-interval 2s. Expected result: successful completion
• Step 3: Validate that the telemetry stream is updated every 2 seconds
Test Verdict: Via the gNMIc application, receive the S-plane telemetry stream with the following paths:
• /ptp/instance-list/1/current-ds/offset-from-master
• /ptp/instance-list/1/current-ds/mean-path-delay
• /ptp/instance-list/1/time-properties-ds/time-traceable
• /ptp/instance-list/1/default-ds/clock-quality/clock-class
Additional Resources:
• https://www.oscilloquartz.com/en/products-and-services/embedded-timing-solutions/osa-softsync
• https://netdevops.me/2020/gnmic-gnmi-cli-client-and-collector/
• https://gnmic.kmrd.dev/
• https://datatracker.ietf.org/doc/html/rfc8575
• https://github.com/Affordable5G/tsn-latency/tree/main/telemetry
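As an alternative to the gnmic commands above, the same checks can be scripted with the pyGNMI
Python library mentioned in the test tools. The following is a minimal sketch under the table's
assumptions ($NODE_IP placeholder, port 20830, admin/admin credentials); the exact response
layout may differ between pygnmi versions:

    from pygnmi.client import gNMIclient

    TARGET = ("node-ip", 20830)  # $NODE_IP placeholder

    with gNMIclient(target=TARGET, username="admin", password="admin",
                    insecure=True) as gc:
        # Step 1: request capabilities; the answer should list the ietf-ptp model
        caps = gc.capabilities()
        print([m for m in caps["supported_models"] if "ptp" in m["name"]])

        # Read the time-traceable leaf subscribed to in Step 2
        result = gc.get(path=["/ptp/instance-list/1/time-properties-ds/time-traceable"])
        print(result)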
The O-RAN compliant OAI O-DU will support the connection to one commercial O-RU at a time;
a simultaneous multi-RU connection is forecasted after the validation of the single connection.
The OAI O-DU will support 2x2 MIMO; higher MIMO capabilities are currently under
development. The OAI O-DU supports PTP synchronization to the network grand master
using linuxptp. The management plane can be accessed directly on the O-DU machine, and a
Netconf client-server configuration will be developed as a further step. The OAI O-DU supports
both the U-Plane and the C-Plane but, for now, will not support beamforming.
Test case name: O-DU network and O-RAN packets validation
Test Case id: Int-test-02-03
Test purpose: Network setup and O-DU O-RAN packet transmission
Configuration: OAI O-DU connected to an Ethernet port not bound to DPDK
Test tool: Wireshark
KPI: Filtered packets PTPv2 and O-RAN
Components Involvement: Standalone OAI O-DU
Pre-test conditions:
• O-RAN FHI library compiled
• OAI DU project built
• Network setup connected to the switch and Grand Master clock
• OAI-DU machine optimized for the O-RAN fronthaul
Test sequence:
• Step 1: Run linuxptp in hardware mode and subsequently phc2sys
The following table summarizes the different KPIs defined for the relevant individual test cases.
Functional test cases are not detailed in this table.
Table 1 Individual Test Cases KPIs Summary Table
• Max delay: 2.695 ms; jitter: 1.503 ms
• NBYONE Orchestrator, Ind-test-07-02: time to trigger the deployment on the edge nodes: <5 s
RU-DU
The RU and DU integration has encountered different problems, in terms of interfaces and
interaction between components, but the following test cases show the evolution of the
integration status. The results of these integration test cases will be specified in the following
deliverable, D4.3.
Test case name: UE connection
Test Case id: Int-test-01-02
Test purpose: COTS UE properly connected
Configuration: Indoor E2E testing O-DU - O-RU - UE
Test tool: OAI UE or COTS UE + OAI logs + Wireshark + UE logs
KPI: 3GPP 5G SA message exchange for the UE connection
Components Involvement: Integrated O-DU and O-RU + UE provisioned in the CN database
Pre-test conditions:
• O-RAN FHI protocol between O-DU and O-RU is validated
• O-DU and O-RU synchronized
• O-DU, O-RU 5G SA spectrum validated
• UE provisioned with a test SIM card
Test sequence:
• Step 1: Run the CN
• Step 2: Run the gNB (O-DU - O-RU). Expected result: use Wireshark to verify the successful gNB-CN connection
• Step 3: Turn on the UE. Expected result: verify the successful UE connection using the UE logs and the OAI O-DU logs
Test Verdict: The UE connection ended properly, an IP address was assigned, and the connection is stable (no connectivity loss with no data)
Additional Resources: None
Test case name: ORAN Interface validation between O-RU and O-DU
Test Case id: Int-test-01-04
Test purpose: Validation of the ORAN interface according to ORAN Option 7.2
Configuration: Indoor E2E testing using the Eurecom O-DU ORAN Emulator and the O-RU
Test tool: Eurecom O-DU ORAN Emulator SW
KPI: Two-way communication (UL and DL) between the RunEL O-RU and the Eurecom O-DU ORAN Emulator
Components Involvement: RunEL O-RU; Eurecom O-DU ORAN Emulator installed on the RunEL server
Pre-test conditions: Successful installation of the ORAN Emulator SW on the RunEL server
Test sequence:
• Step 1: Connect the O-DU ORAN Emulator to the RunEL O-RU via an Ethernet port with 10 Gbps capacity
• Step 2: Send a DL stream from the emulator to the O-RU. Expected result: check reception of the stream at the O-RU
• Step 3: Send a UL stream from the O-RU to the O-DU ORAN emulator. Expected result: check reception of the stream at the O-DU emulator
Test Verdict: The test is successful when both UL and DL streams are received without error at both sides of the ORAN interface. Following the successful test, the O-DU ORAN emulator will be replaced with a full O-DU protocol stack from Eurecom
Additional Resources: None
The final results of the integration test between the Eurecom O-DU and the RunEL O-RU over
the ORAN interface (Option 7.2) had not been completed at the publication date of this
document, due to a plan change at Eurecom, which started the ORAN integration of the
Eurecom O-DU with other third-party O-RUs (Foxconn and Mavenir) and delayed the
integration with the RunEL O-RU to a later date (planned for August 2022).
DU-CU
These tests focus on the F1 mid-haul interface and interaction between O-CU and O-DU of
the O-RAN gNB network functions.
Test case name: gNB DU to CU attachment
Test Case id: Int-test-02-01
Test purpose: Set up and validate the attachment between O-RAN DU and CU
Configuration:
• Network configured according to the network plan.
• CU-CP started, creating CU-UP.
• CU discovered and monitored by the dRAX SMO dashboard
Test tool: dRAX SMO dashboard and Wireshark
KPI: -
Components Involvement: ACC CU (disaggregated into CU-CP and CU-UP) and EUR OAI DU
Pre-test conditions:
• dRAX RIC + CU installed.
• RU configured and powered on
Test sequence:
• Step 1: Start dRAX via Helm chart (RIC, CU and SMO dashboard). Ensure the CU is discovered and displayed on the dRAX dashboard as unconnected
• Step 2: Start the DU. Verify the F1 attachment signalling. Verify the gNB is now active
Test Verdict: Wireshark messages show the F1 exchange. The gNB is now in active transmission mode. This is demonstrated in 2.2.1.1 for the EuCNC demonstration video, more specifically Figure 5
Additional Resources: None
Test case name: gNB DU to CU performance
Test Case id: Int-test-02-02
Test purpose: Validate O-RAN DU and CU at peak performance, ensuring no packet loss and acceptable CPU load
Configuration:
• Network configured according to the network plan.
• O-RAN gNBs started and attached to the 5GC.
• UEs attached to the 5GS and generating maximum traffic
Test tool: dRAX SMO dashboard and Wireshark; E2E iperf3 ensuring throughput without packet loss
KPI: Max throughput without packet loss for the configured 5G-NR spectrum
Components Involvement: ACC CU, EUR OAI DU, RunEL RU, ATH 5GC, commercial UEs. Performance application running in the UEs and in the DN behind the UPF.
Pre-test conditions:
• O-RAN CU/DU/RU running, attached to a running 5GC
• Performance test server (iperf3 or librespeed) set up in the DN behind the UPF.
• Performance test clients installed in the UE (e.g., smartphone) or behind a 5G modem in a CPE
Test sequence:
• Step 1: Attach UEs to the 5GS. Run ping to ensure IP connectivity
• Step 2: Start performance tests.
CU – 5G Core
These tests focus on the N2/N3 backhaul interface and interaction between the 5GC and the
O-RAN gNB, specifically the O-CU network function.
Test sequence:
• Step 2: Set the IP address of the gNB in the whitelist (by using the web GUI)
The following test cases verify the integration between the orchestrator and the slice
manager. They prove that the orchestrator is using the REST API interfaces provided by the
Slice Manager to allow the user of the orchestrator to create and delete slices; in particular,
the first test shows how the orchestrator creates the RAN chunks of the slice, and the second
test how that RAN configuration is deleted.
Test case name: Slice Manager – Orchestrator Connectivity
Test Case id: Int-test-04-01
Test purpose: Validate that the Orchestrator connects successfully with the Slice Manager.
Configuration: Slice Manager and Orchestrator up and running.
Test tool: Orchestrator – Slice Manager (API).
KPI: Functional test
Components Involvement: Orchestrator – Slice Manager.
Pre-test conditions: A pre-created user on the Slice Manager.
Test sequence:
• Step 1: Orchestrator: access the Slice Manager through a GET request from the Orchestrator.
• Step 2: Slice Manager: respond with the pre-created user information through a JSON body.
• Step 3: Orchestrator: receive the request and check the response received.
Test Verdict: Response status code is 200.
Additional Resources:
Request example:
Response example:
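A minimal sketch of Steps 1 to 3 from the orchestrator side is shown below (our illustration; the
Slice Manager address and the user resource path are hypothetical, as the real endpoint names are
defined by the Slice Manager REST API):

    import requests

    SLICE_MANAGER = "http://slice-manager:8000"  # placeholder address
    USER = "pre-created-user"                    # pre-test condition

    resp = requests.get(f"{SLICE_MANAGER}/users/{USER}")  # hypothetical path
    assert resp.status_code == 200                        # test verdict
    print(resp.json())  # pre-created user information (JSON body)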
For the second test, the initialization between the Orchestrator and the Slice Manager is tested
using an API request. The Orchestrator requests the information from the Slice Manager with
a GET request, and the Slice Manager answers by sending such information to the
Orchestrator. With that information, the Orchestrator selects the proper parameters and then
issues a POST request for the initialization of specific RAN parameters. With this request, the
Slice Manager creates the RAN slice subnets and sends an acknowledgement to the Orchestrator.
curl -X POST
Response example:
Additional
Resources
This integration test case shows how the orchestrator connects with the Slice Manager API.
Figure 43 shows the logs of the orchestrator, these lines show how the Slice Manager replies
to a GET request from the orchestrator.
Figure 43 NearbyOne orchestrator logs showing a GET request to the Slice Manager
This integration test case shows how the orchestrator uses the Slice Manager API to run a
POST request creating a RAN slice.
Figure 44 shows the final POST of the interaction between these 2 components, in earlier
steps the components resolve all the required configuration and ids needed to define the slice
resources.
The first line of the logs shows the POST request, the second one shows the body being sent,
and the next two lines show the response sent by the Slice Manager and the
radio_chunk and radio_service_ids that need to be stored in order to be able to undeploy that
RAN chunk of the slice.
Figure 44 NearbyOne Orchestrator logs showing POST creating a ran slice in Slice Manager
With the increasing heterogeneity of mobile network resources, orchestration tasks are
becoming more challenging. In such a scenario, Machine Learning (ML)-based network
reconfiguration techniques play a key role in obtaining an optimal trade-off between network
performance and the utilization of computing resources.
Therefore, this subsection aims to showcase the integration tests performed in order to verify
the proper integration between the Telemetry component (Prometheus), the AI/ML Framework
and the NearbyOne Orchestrator that enables the execution of a CPU prediction algorithm to
optimize the CPU resource allocation of network slices in the Affordable5G ecosystem.
This integration test case demonstrates the proper integration between the AI/ML Framework
Message Broker and the Telemetry component (Prometheus). The CPU prediction AI/ML
model requires two metrics for the optimization of the CPU utilization of a given container: i)
the CPU utilization of the container (container_cpu_usage_seconds_total) and ii) the
transmitted bytes by the container (container_network_transmit_bytes_total).
Test case name: Prometheus integration
Test Case id: Int-test-05-01
The result of this integration test case is presented in Figure 45 and Figure 46, showing how
these two metrics are successfully gathered by the Message Broker from the Telemetry
component using the REST API of Prometheus.
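For illustration, the sketch below shows how these two metrics can be gathered through the
standard Prometheus HTTP API, as the Message Broker does (the Prometheus address is a
placeholder; only the metric names come from the test description):

    import requests

    PROMETHEUS = "http://prometheus:9090"  # placeholder address
    METRICS = [
        "container_cpu_usage_seconds_total",
        "container_network_transmit_bytes_total",
    ]

    for metric in METRICS:
        resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": metric})
        body = resp.json()
        print(metric, body["status"], len(body["data"]["result"]), "series")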
This integration test case shows how the output of the CPU prediction model is published in
the Message Broker data bus and available to the rest of components (including the
Orchestrator) through a RabbitMQ queue.
Test case name: AI/ML Framework interface to the Orchestrator
Test Case id: Int-test-05-02
Test purpose: Test the correct integration of the AI/ML Framework and the NearbyOne Orchestrator through the Message Broker
Configuration:
• Prometheus deployed
• AI/ML Framework deployed
• Message Broker deployed
• CPU prediction model deployed in the AI/ML Framework
Test tool: Message Broker logs, testing script
KPI: -
Components Involvement: Prometheus, AI/ML Framework, Message Broker
Pre-test conditions:
• Prometheus instance running
• Prometheus collecting the required metrics
• AI/ML Framework installed and running
• Message Broker installed and running
• Message Broker configured with configuration file
• CPU prediction model deployed in the AI/ML Framework
Test sequence:
• Step 1: Check the logs of the Message Broker
• Step 2: Check if the prediction is published in the queue
Test Verdict: The AI/ML model prediction is successfully published by the Message Broker in RabbitMQ and the prediction can be accessed from the Orchestrator
Additional Resources: Command: docker logs message_broker
The results are presented in the following figures. Firstly, Figure 47 shows the logs of the
Message Broker container, demonstrating that the predictions are properly published in the
queue; then, Figure 48 demonstrates that the predictions can be accessed from a RabbitMQ
client using a testing Python script (receiver_test.py).
Figure 47 Message Broker logs showing the publication of the prediction in the queue
Figure 48 Test script showing that the prediction is properly published in the queue
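A minimal sketch along the lines of receiver_test.py is shown below (the actual script is not
reproduced here; the broker address and queue name are placeholders). It consumes the predictions
from the RabbitMQ queue using the pika client:

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("message-broker"))
    channel = connection.channel()
    channel.queue_declare(queue="cpu_prediction")  # hypothetical queue name

    def on_prediction(ch, method, properties, body):
        # Each message carries one CPU prediction published by the Message Broker
        print("received prediction:", body.decode())

    channel.basic_consume(queue="cpu_prediction",
                          on_message_callback=on_prediction,
                          auto_ack=True)
    channel.start_consuming()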
This integration test case shows how the predictions published in the Message Broker data
bus are collected by the orchestrator and re-introduced into Prometheus, so that they can be
used by the alarms/rules that will trigger slice resizing, app/CNF migrations, etc. It reuses the
data that was already published to the Message Broker in Int-test-05-02.
Test case name: Orchestrator reacting to KPI
Test Case id: Int-test-05-03
Test purpose: Verify that the predictions introduced in the Message Broker are pushed to Prometheus and are available to define orchestration rules/alarms
Configuration:
• Message Broker deployed
• Orchestrator deployed
• Prometheus deployed
Test tool: Testing script, Prometheus dashboard
KPI: -
Components Involvement: Message Broker, Prometheus, NearbyOne Orchestrator
Pre-test conditions:
• Message Broker installed and running
• Message Broker configured with configuration file
• Prometheus instance running
• Prometheus collecting the required metrics
• Orchestrator instance running
Test sequence:
• Step 1: Check the data collected in Prometheus
• Step 2: Check the alarms status in Prometheus
The results are presented in the following lines. First, Figure 49 shows, in the
NearbyOne Prometheus dashboard, how the data has been exported from the Message Broker
to the existing Prometheus database where all KPIs are aggregated and where the rules/alarms
used to trigger slice migration/resizing are defined.
Secondly, Figure 50 shows two rules that are used to monitor the predictions. For example,
when a slice has been defined with a Service Level Agreement (SLA) associated with the
Affordable5GForecastHigh rule, the orchestrator will increase the resources assigned to that
slice as soon as that alarm triggers.
Figure 49 Prometheus dashboard showing the prediction data that was published in the Message
Broker
Figure 50 Prometheus dashboard showing the rules defined monitoring the predictions
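The two steps of the test sequence can also be checked programmatically through the Prometheus
HTTP API, as in the sketch below (our illustration; the Prometheus address and the prediction
metric name are placeholders, while the rule-name prefix follows the Affordable5GForecastHigh
example above):

    import requests

    PROMETHEUS = "http://nearbyone-prometheus:9090"  # placeholder address

    # Step 1: the prediction exported from the Message Broker is queryable
    data = requests.get(f"{PROMETHEUS}/api/v1/query",
                        params={"query": "cpu_prediction"}).json()  # hypothetical name
    print("prediction series:", len(data["data"]["result"]))

    # Step 2: check the status of the alarms/rules defined on the predictions
    rules = requests.get(f"{PROMETHEUS}/api/v1/rules").json()
    for group in rules["data"]["groups"]:
        for rule in group["rules"]:
            if rule["name"].startswith("Affordable5GForecast"):
                print(rule["name"], rule.get("state"))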
The ML reconfiguration techniques mentioned in the previous section will also be deployed on
the THI platform. THI has developed an FPGA prototype equipped with the NEOX accelerator [11].
Apart from the hardware IP, THI will also utilize the NEOX SDK for optimizing the Convolutional
Neural Network (CNN) models in terms of memory footprint and execution time. Prime targets
for customization are the number of cores, the number of threads per core, the width of the
vector processing lanes and the memory resources (private and shared caches as well as the
cache prefetching options). NEOX will be provided in a fully functional FPGA prototype based
on ZYNQ platforms (NEMA) [12]. ZYNQ FPGAs contain (apart from the FPGA programmable
logic) a dual-core ARM Cortex-A9 processor to which a regular Linux operating system has
been ported. In this way, the communication with the remaining computational and network
components can be performed with standard Linux processes.
The goal is to explore how far at the edge such ML-based telemetry functionality can be
deployed. The far edge is characterized by devices with scarce resources (both in terms of
memory and computational capabilities) and devices operating under tight power
constraints.
Therefore, the target is to showcase that the required performance can be achieved (in terms
of ms) under very tight memory and power constraints. To achieve these goals, the NEOX
accelerator will be configured to execute the ML-based telemetry modules, and the NEOX
AI-SDK will be used to compress the ML models using two different compression techniques
(quantization to int8 arithmetic and low-rank factorization), as sketched below.
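For illustration, the following numpy sketch shows the essence of the two techniques (a generic
example, not THI's NEOX AI-SDK implementation): symmetric int8 quantization of a weight tensor,
and low-rank factorization of a weight matrix through a truncated SVD:

    import numpy as np

    def quantize_int8(w):
        """Map float weights to int8 with a single symmetric scale factor."""
        scale = np.abs(w).max() / 127.0
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale  # dequantize with q.astype(np.float32) * scale

    def low_rank_factorize(w, rank):
        """Replace an (m x n) matrix by two factors of total size (m + n) * rank."""
        u, s, vt = np.linalg.svd(w, full_matrices=False)
        a = u[:, :rank] * s[:rank]  # (m x rank)
        b = vt[:rank, :]            # (rank x n)
        return a, b                 # approximate w with a @ b

    w = np.random.randn(256, 512).astype(np.float32)
    q, scale = quantize_int8(w)
    a, b = low_rank_factorize(w, rank=32)
    print("quantization error:", np.abs(w - q.astype(np.float32) * scale).max())
    print("LRF error:         ", np.abs(w - a @ b).max())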
Test case name: Telemetry integration in THI platform
Test Case id: Int-test-06-01
Test purpose: Verify the operation of the ML-based telemetry module in a far edge platform
Configuration:
• The ML model of the telemetry module has been verified to be compatible with the NEOX AI-SDK deployment framework.
• The ML model of the telemetry module has been verified to be compatible with the NEOX AI-SDK compression framework.
• The NEOX|Bits FPGA platform with the NEOX accelerator is released and its correct operation has been verified.
Test tool: NEOX|Bits FPGA platform
KPI: -
Components Involvement:
• Telemetry module
• NEOX AI-SDK deployment framework
• NEOX AI-SDK compression framework
• NEOX|Bits FPGA platform
Pre-test conditions:
• AI/ML Framework installed and running on x86 platform.
• AI/ML Framework installed and running on ARM platform.
• NEOX AI-SDK deployment framework is installed and running.
• NEOX AI-SDK compression framework is installed and running.
• NEOX|Bits FPGA platform is installed and running.
Test sequence:
• Step 1: ML model of the telemetry module is executed on x86 machines
• Step 2: ML model of the telemetry module is executed on ARM machines
• Step 3: ML model of the telemetry module is compressed using quantization to int8 numbers
• Step 4: ML model of the telemetry module is further compressed using the Low-Rank Factorization (LRF) technique
• Step 5: ML model of the telemetry module is analyzed by the NEOX AI-SDK deployment framework
• Step 6: ML model of the telemetry module is deployed to the NEOX accelerator
• Step 7: Correct operation is validated by comparing the gathered logs to the ones generated on an x86 machine
Test Verdict: The telemetry module is properly integrated in the THI FPGA platform and effectively parallelized in at least 32 threads, while the ML
We define a series of tests to validate the use of dRAX 5G and the deployment of the Telemetry
xApp using RIC Manager, as discussed in section 3.2.3. These tests include:
For easy reproducibility of these tests, we provide a Swagger GUI, which can be found at:
https://<ric_manager_ip>:8080/docs
To follow the O-RAN standard, we have developed a Non-RT dRAX (as described in
Deliverable 3.2) that implements all the logic that would be part of an O-RAN Non-RT RIC. To
register it, it is only required to provide a friendly name and the same IP address and dRAX
API port as the regular dRAX. RIC-Manager's API also provides a "ric_type" field to indicate
which kind of implementation is going to be registered. The only two currently supported
options are ORAN and dRAX. The following table and Figure 52 present the steps of the test
and a screenshot with the execution.
Request example:
Additional
Resources
Figure 52 Screenshot with the execution of test "Registration of Mocked Non-RT dRAX 5G"
Once the Non-RT dRAX is registered, we can register the dRAX. To do so, we specify the
Non-RT friendly name provided in the previous step, to which the dRAX will be associated, as
well as a friendly name for the dRAX and the IP and port. The following table and Figure 53
present the steps of the test and a screenshot with the execution.
To deploy the Telemetry xApp, we need to specify the dRAX in which the xApp is going to be
deployed. The request's body expects two parameters: "xapp_type" and "xapp_name". To
deploy the Telemetry xApp in a 5G dRAX, we set the value of "xapp_type" to
"5G-TelemetryxApp". The "xapp_name" is an identifier to distinguish the exact instance of the
xApp that is going to be deployed. The following table and figure present the steps of the test
and a screenshot with the execution.
Test case name: xApp Deployment Test
Test Case id: Int-test-07-03
Test purpose: Deploy an xApp on dRAX using RIC-Manager.
Configuration: RIC-Manager and dRAX up and running.
Test tool: Swagger
KPI: -
Components Involvement: RIC-Manager and dRAX
Pre-test conditions: RIC-Manager has connectivity to the dRAX.
Test sequence:
• Step 1: Select an xApp type from the list of available ones
• Step 2: Perform a POST request to the RIC-Manager endpoint "/registration/nonrt".
• Step 3: Within the request's body, include the xApp type and a name to identify this particular instance.
Additional Resources:
Figure 54 Screenshot with the execution of test "Deployment of the Telemetry xApp"
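For illustration, the deployment request can also be issued outside Swagger with a short script
like the sketch below (the body fields come from the description above; the RIC-Manager address
and the exact endpoint path are placeholders and should be taken from the Swagger GUI at
https://<ric_manager_ip>:8080/docs):

    import requests

    RIC_MANAGER = "https://ric-manager:8080"  # placeholder address

    body = {
        "xapp_type": "5G-TelemetryxApp",   # Telemetry xApp for a 5G dRAX
        "xapp_name": "telemetry-xapp-01",  # example instance identifier
    }
    resp = requests.post(f"{RIC_MANAGER}/xapp/deploy",  # hypothetical path
                         json=body, verify=False)
    print(resp.status_code, resp.text)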
To create the xApp’s Policy Type, we need to specify the destination Near-RT RIC and pass
as the request’s body the policy type of the xApp (see Figure 55).
Figure 55 Screenshot with the execution of test "Creation of dRAX’s Policy Type"
Once the Policy Type has been created, we can proceed to create the Policy Instance to set
the values of the desired fields. Note that this request does not have a RIC as destination since
all the required information has been defined in the policy. The following table and figure
present the steps of the test and a screenshot with the execution.
Additional
Resources
Figure 56 Screenshot with the execution of test "Creation of dRAX’s Policy Type"
We can recover all the information stored about all the registered Near-RT RICs (or specify a
name to only recover one). That information includes the registration parameters (IP and port)
as well as all the policy instances and xApps associated with the devices.
Figure 57 Screenshot with the execution of test "List all the information stored about dRAX"
4.2.7.7 Deletion of dRAX’s Policy Type and all associated Policy Instances
To delete a Policy Type, and subsequently all the policy instances associated with it, we only
need to specify the dRAX’s friendly name and the Policy Type ID present in said dRAX.
Finally, we can undeploy an instance of the Telemetry xApp by passing to RIC-Manager the
dRAX’s friendly name and the name of the exact instance, which is defined in the “xapp_name”
field of the deployment request.
Figure 59 Screenshot with the execution of test "Undeployment of the Telemetry xApp"
Orchestrator– 5G Core
As discussed earlier, the integration between the orchestrator and the 5G Core does not
include its lifecycle management or configuration, but the project has several requirements to
use the data exposed by the 5G Core. In this scenario, the integration of the orchestrator
focused on the federation techniques used to expose the Prometheus server included in the
5G Core to the other components of the architecture.
Therefore, this subsection aims to showcase the integration tests performed in order to verify
the proper integration between the Orchestrator (Prometheus aggregator) and the federated
Prometheus instance of the 5G Core in the Affordable5G ecosystem.
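For illustration, the sketch below shows the mechanism behind this federation: the aggregator
scrapes selected series from the 5G Core's Prometheus through the standard /federate endpoint
(addresses and the match[] selector are placeholders):

    import requests

    CORE_PROMETHEUS = "http://5gc-prometheus:9090"  # placeholder address

    resp = requests.get(
        f"{CORE_PROMETHEUS}/federate",
        params={"match[]": '{job=~".+"}'},  # selects every job; narrow in practice
    )
    # The response is in the Prometheus exposition format, ready to be
    # re-scraped by the aggregator instance.
    print(resp.text[:500])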
The test case below shows the process to synchronize a TSN endpoint with a master clock.
This master clock is provided, in our case, by ADVA FSP 150 equipment. Synchronization is
critical in TSN, as time awareness in the full network is needed. Thus, this is the first step to
achieve that final goal.
Test case name: TSN synchronization
Test Case id: Int-test-09-01
Test purpose: Synchronize the TSN endpoint using the ADVA FSP 150.
Configuration:
• Relyum NIC integrated with the TSN endpoint.
• TSN endpoint connected to the ADVA FSP 150.
Test tool: ptp4l, phc2sys, Relyum web manager
KPI: < 100 ns clock offset
Results are shown in Figure 60 and Figure 61. In Figure 60, we can see the output of
executing the ptp4l command, which synchronizes the PHC with the ADVA master clock. As
can be observed, the offset is consistently below 100 ns, which indicates that the
synchronization has been achieved.
In addition, in Figure 61 we can see the output of executing the phc2sys command, which
synchronizes the TSN endpoint's system clock with the PHC clock of the Relyum card. Again,
the offset is always below 100 ns, so we can confirm that the TSN endpoint is fully
synchronized with the ADVA FSP 150.
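For illustration, the < 100 ns KPI can be checked automatically by parsing the ptp4l output, as
in the sketch below (our illustration, assuming the typical "master offset" lines printed by
ptp4l with the -m flag; the interface name is a placeholder):

    import re
    import subprocess

    proc = subprocess.Popen(["ptp4l", "-i", "eth0", "-m"],  # placeholder interface
                            stdout=subprocess.PIPE, text=True)

    for line in proc.stdout:
        m = re.search(r"master offset\s+(-?\d+)", line)
        if m:
            offset_ns = int(m.group(1))
            verdict = "PASS" if abs(offset_ns) < 100 else "FAIL"
            print(f"offset {offset_ns} ns -> {verdict}")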
The following table summarizes the different KPIs defined per integration test case.
Functional test cases are not detailed in this table.
5.1 Introduction
The TSN over 5G Proof of Concept, which is being developed to be integrated into the Malaga
platform, was already explained in D3.2 [3]. However, after several discussions and
developments, the architecture has been improved with new functionalities and components.
The final architecture can be found in Figure 62. To ease its comprehension, we will divide the
explanation into the three main challenges arising from the development of a TSN over 5G
solution and how to solve them, namely: translation from the TSN domain to 5G, prioritization
over 5G, and time synchronization. These parts will be explained in detail in the next
subsection.
One of the most critical challenges in TSN over 5G is how to adapt the traffic from the TSN
domain to the 5G domain. To achieve that, the following components are involved: the wired
TSN network, composed of TSN endpoints; the TSN translators, i.e., the Device Side
Translator (DS-TT) on the device side and the Network Side Translator (NW-TT) on the
network side; and the 5G UEs and UPF in the 5G network.
On the wired TSN network, critical traffic is generated and received together with regular
traffic. This critical traffic will be simulated in the final demo with a balancing table, which is on
the wired device side, controlled by another endpoint on the wired network side. In order to
send the traffic from one side of the network to the other through the 5G network, the first step
is to translate this critical and best-effort traffic from the TSN domain to the 5G domain. For
this purpose, UMA has developed two translators.
These translators are software-based switches using Stratum CLORAN [7] as OS, which is an
open-source silicon-independent switch operating system for software defined networks.
Prioritization over 5G
In wired TSN networks, scheduling and traffic shaping allow for the coexistence of different
traffic classes with different priorities on the same network. These traffic classes are, in
practice, identified by the PCP field in the 802.1Q header. Thus, one of the challenges in TSN
over 5G is how to guarantee these priorities in the 5G network in terms of assured bandwidth
and end-to-end latency.
To achieve that in 5G, a mapping between TSN traffic classes and 5QIs is required. This
mapping involves a 5G PDU session establishment with the demanded 5QI, which can be
carried out either statically or dynamically. The dynamic request of the bearers in 5G can be
managed by the AF. However, the development of a full TSN AF is out of the scope of this
project.
In the Affordable5G project, the bearers are established in a static way. Two bearers are
configured: one for critical traffic, associated with a specific 5QI with higher priority, and the
default bearer for best-effort traffic. Unfortunately, there are no UEs on the market supporting
simultaneous DNNs for data traffic. For this reason, in the final architecture two different UEs
will be used: one for critical traffic and another for the regular one. The TSN translator entities
(NW-TT and DS-TT) are in charge of mapping the TSN class into a 5QI, and vice versa, by
modifying the PCP field, as illustrated below.
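The mapping logic itself is simple, as the illustrative sketch below shows (not the UMA
translator code; the concrete PCP/5QI pairs are example values, not the project's
configuration):

    # PCP value from the 802.1Q header -> 5QI of the corresponding bearer
    PCP_TO_5QI = {
        7: 82,  # example: highest-priority TSN class -> delay-critical GBR 5QI
        0: 9,   # example: best-effort class -> default 5QI
    }
    FIVEQI_TO_PCP = {v: k for k, v in PCP_TO_5QI.items()}

    def tsn_to_5g(pcp):
        """TSN -> 5G direction: pick the 5QI for a TSN traffic class."""
        return PCP_TO_5QI.get(pcp, 9)  # fall back to the default bearer

    def fiveg_to_tsn(five_qi):
        """5G -> TSN direction: rewrite the PCP field according to the 5QI."""
        return FIVEQI_TO_PCP.get(five_qi, 0)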
Synchronization
Synchronization is another key feature of the TSN over 5G network. The optimal approach
would be to use a unique master clock for the full network, transmitting the synchronization
packets (PTP) through the 5G network. However, this is not available in 3GPP Release 15,
which is the version supported in the Affordable5G network. For this reason, in this PoC,
synchronization is carried out with the help of two FSP 150-GE100Pro units, provided by
ADVA, that act as grand master clocks. It is important to note that we are using two different
master clocks, one for the device side and another for the network side. However, these
master clocks use the same GPS signal as a reference. Thus, an important assumption here
is that, as they are both using the same reference signal, both sides are essentially
synchronized.
The clock signal is propagated through both sides of the wired network using PTP flows, which
are generated by the masters and distributed to the clients' PHC clocks. Moreover, at the end
stations, not only the TSN cards are synchronized, but also the OS system clocks, to be able
to generate and receive synchronized traffic at the application level. Finally, for this proof of
concept, this synchronization signal will not be distributed through the 5G network for the
aforementioned reasons. Instead, the 5G network's own mechanisms and configurations will
be tuned and optimized for TSN over 5G, both in the RAN and the 5G core.
Focusing on the architecture level, at the moment only the Nokia RAN solution is deployed,
which is part of the Malaga platform in which this proof of concept is carried out. It is expected
that, at the end of the project, two different O-RAN architectures will be available together with
the Nokia RAN: one coming directly from the Affordable5G project and an additional one,
acquired by UMA, in order to show multivendor interoperability. The second difference in the
architecture is that only one 5G UE (Telit FN980m) is used in the current status. Thus, an
additional UE needs to be added in order to allow the management of QoS. Basically, each
UE will be associated with a different bearer, which has been configured for specific 5QIs.
These are the only differences in the architecture. However, there will also be modifications in
the functionality of the components. Specifically, the inclusion of an additional UE will add a
new port to the DS-TT translator that needs to be managed; the features of the translator
therefore need to be improved. The updated entity, depending on the access port, will modify
the PCP field in order to assign the corresponding priority. That is, if data is coming from the
5G UE for critical traffic, the PCP field will be set to a higher value than if it is coming from the
5G UE for regular traffic. In addition, another important task is the research into the optimal
configuration of the 5G network for TSN traffic. Several configurations will be tested, and the
optimal parameters will be set up in the final solution.
Finally, it is also important to mention that the synchronization part will not be updated, as it is
assumed that the current solution is good enough for this proof of concept. Final results will
show the suitability of this approach.
6 SMARTCITY
6.1 Introduction
The SmartCity pilot intends to demonstrate the usage of a 5G private network, and the
advances provided by Affordable5G, in an emergency scenario occurring at a shopping mall.
The demo scenario will be hosted indoors, assuming the loss of a child in a shopping mall.
The pilot will provide two types of services: a dynamic and a static service. The dynamic
service will be hosted indoors, and its main goal is to find a missing person given an image
captured by a mobile device. The static service consists of a security CCTV system for person
detection. The following figure illustrates the links between the person detection system and
the Affordable5G components and provides the essential information and data flows required
for addressing the pilot's requirements and specifications.
Besides the ML algorithm, the equipment and their core functionalities, the distinction between
these two services is that the 5G Core network is expected to change the bandwidth priority
for the dynamic service when needed.
This architecture (Figure 64) assumes the existence of a telecommunication network. Such a
network should be formed by a 5G-NPN deployment, consisting of the Core and the Radio
Access parts, and moreover extended by a wired connection. It should be noted that such a
network must provide the necessary integration mechanisms for the use case to succeed; in
other words, a wired but, most importantly, a 5G-NPN enabled communication channel is of
the utmost importance. According to discussions with the different partners involved in the
infrastructure (for this specific use case), a router is used to enable point-to-point
communication between the network actors. With this in mind, the proposed use case should
be connected to this router and thereby have access to the 5G capabilities of the infrastructure
(5G Core, UPF, OSM, etc.).
Dynamic service
The demo scenario starts with a parent reporting his missing child in a shopping mall. The
Affordable5G’s OSM (orchestrator) module will then deploy the detection system, located at
the edge layer of the 5G private network, to analyze the video streams from all the cameras in
the mall. This analysis will require the usage of a UPF, which may be privately owned or belong
to a national operator. It is responsible for the routing of the actual data coming from the RAN
to the Internet/DN. The UPF quickly and accurately routes the packets to the correct
destination over the Internet/DN, addressing the need for a security guard to perform video
streaming using a mobile phone.
The 5G Core and the O-RAN modules will be used to supply the highest possible bandwidth,
through their prioritization feature, to the video equipment in order to allow video streaming
over the 5G network.
Having received the information, consisting of at least one image of the missing person, a
Machine Learning (ML) algorithm focused on person re-identification is used to search for the
target person in the mall camera’s video streams. In re-identification approaches (REID) [17],
a system tries to identify a target person in a gallery/query set. The following picture illustrates
the flow of designing a practical person Re-ID system, which consists of the following five
steps:
From these five steps, the re-identification methods are generally divided into two classes:
Open-World Person Re-Identification and Closed-World Person Re-Identification. The
difference between both classes is summarized in the following table (REID) [17]:
The chosen algorithm for Person Re-Identification is the Self-paced Contrastive Learning with
Hybrid Memory for Domain Adaptive Object Re-ID (SpCL) (MEBN) [13] which belongs to the
Open-World Person Re-Identification class. The SpCL algorithm was chosen because it is an
unsupervised learning method, it is a domain adaptive Person Re-identification method that
improves upon Re-ID accuracy when applied to a different scene compared to the training
scene, and due to its performance in both Cumulative Match Characteristics (CMC) [14] and
Mean Average Precision (mAP) [15] metrics.
This solution is deployed on a Jetson Nano (Jetson Nano-A) [16], which is an advanced
embedded system for ML applications. In addition to its small form factor, high performance
and power efficiency, this system includes an ecosystem that enables a fast development
process for custom ML projects. The Jetson Nano hosts a web server that functions as the
platform where the security guard will be able to get the alert of the missing child.
Once a detection is made by the ML algorithm, the security guard is notified through an Android
application that the missing child was matched in a video stream. The security guard then
proceeds with the evaluation of the detection by confirming or denying the match. This in turn
will alert the Jetson Nano about the status of the search: if the guard confirms the match, the
search will stop and the response will be propagated to the OSM; if the guard denies the match,
the search continues. This feature is implemented using WebSockets (a sketch of this
exchange follows the component list below). The necessary services to notify the parents will
start as depicted in the following image:
The next image shows an alternative diagram, illustrating how the association between the
different components of the dynamic service is envisioned at a protocol level.
• Missing child signal receiver: receives the missing child alarm and makes the obtained
information available to the server, in order for it to be used by the API to inform the guard and
by the re-ID algorithm. The information received from the external services contains the
information of the lost child.
• API: processes the communication/data between the Application/Web browser and the
server instantiated in the Jetson Nano. It also transmits/receives data to/from the Person
Re-identification module when an alarm of a lost child is enabled.
• Person Re-identification: tries to find a match between the provided missing child’s
picture/picture set and the gallery set. This task starts once a missing child alarm is
received and will continuously make a positive result available to the server, in order to
be sent to the security guard for confirmation, until the security guard confirms the
result.
• Camera: The smartphone’s camera captures the scene pointed by the security guard.
• Android Application/Web Browser: platform used by the security guard to receive the
notification that an alarm of a lost child is ongoing and that he must start a new search.
Additionally, this platform allows the user to confirm/dismiss a detection made by the re-ID
algorithm. Furthermore, depending on how the security guard's smartphone camera is
accessed, it may transmit the frames to the Jetson Nano.
• OSM: triggers the lost children alert across the network until it reaches the Missing child
signal receiver.
• 5G Network services: When necessary, provides higher prioritization,
transmits/receives the status of the missing child alert between the OSM and the
server.
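As a sketch of the WebSocket exchange mentioned above (our illustration, not the project's
code; the endpoint URI and message format are assumptions), the guard's client could look as
follows:

    import asyncio
    import json
    import websockets

    JETSON_WS = "ws://jetson-nano:8765/guard"  # placeholder endpoint

    async def guard_client():
        async with websockets.connect(JETSON_WS) as ws:
            async for message in ws:
                match = json.loads(message)  # e.g. {"match_id": 3, ...}
                print("possible match:", match["match_id"])
                # The guard evaluates the detection; "deny" keeps the search running
                await ws.send(json.dumps({"match_id": match["match_id"],
                                          "verdict": "confirm"}))

    asyncio.run(guard_client())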
Considering the scenario, a successful and rapid resolution of the emergency depends on how
the 5G network supports the intervention of the security team member in the search for the
missing kid. This is only possible thanks to two specific network features: the capability of
locally maintaining the user data traffic, in order to minimize latencies and allow smooth data
exchange between the mobile phone and the Jetson Nano; and the capability of prioritizing the
traffic of the security guard's smartphone compared to other User Equipment (UEs), so as to
avoid service quality degradation in case of network congestion due to a crowded area.
Additional traffic prioritization and QoS control are performed via specific configurations of the
UPF at the PDU session management level. These configurations are implemented within the
UPF as packet filters and are identified by a series of specific parameters that differentiate the
QoS flows within PDU sessions. For each device, these parameters are provided to the UPF
by the Session Management Function (SMF), which, in turn, receives updates on traffic policies
from the Policy Control Function (PCF).
Static service
The static service is available through two types of computer systems: a server capable of
performing person detection in multiple video streams provided by connected edge nodes, and
a Jetson Nano (Jetson Nano-B), which performs the role of an edge node and also performs
person detection on the video stream obtained from a connected camera. It is referred to as a
static service since it does not take advantage of the 5G capabilities, due to some equipment
not being capable of connecting to the 5G network.
The Jetson Nano performs person detection using a lightweight person detection algorithm
based on You Only Look Once (YOLO), which makes predictions with a single network
evaluation, along the lines of the sketch below.
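An illustrative sketch of such a lightweight detector is shown below (not UBI's implementation;
the model files and RTSP URL are placeholders), using OpenCV's DNN module with a Darknet YOLO
model, where class 0 in COCO-trained models is "person":

    import cv2

    net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")
    cap = cv2.VideoCapture("rtsp://camera/stream")  # RTSP source, cf. text

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                     swapRB=True, crop=False)
        net.setInput(blob)
        outputs = net.forward(net.getUnconnectedOutLayersNames())
        persons = [det for out in outputs for det in out
                   if det[5:].argmax() == 0 and det[5:].max() > 0.5]
        if persons:
            print(f"{len(persons)} person(s) detected -> start streaming")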
When a person is detected by the Jetson Nano, the video stream is transmitted to the BullSequana Server (BLLSQ) [18], a computationally more powerful device than the Jetson Nano, where a more capable and robust machine learning algorithm processes the stream. This server also hosts a web server that displays the video stream with the ongoing detections to a user of the Urban Platform (UP). These detections are expected to be stored in a Wasabi [19] bucket, a cloud storage service that provides a high-performance, reliable and secure data storage infrastructure. The stored files will also be accessible from the UP.
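Since Wasabi is S3-compatible, storing a detection capture can be sketched with boto3; the bucket name, object key and credentials below are hypothetical.

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.wasabisys.com",  # Wasabi S3-compatible endpoint
        aws_access_key_id="WASABI_KEY",           # hypothetical credentials
        aws_secret_access_key="WASABI_SECRET",
    )
    # Upload a saved detection clip to a hypothetical bucket.
    s3.upload_file("detection.mp4", "cvae-detections", "captures/detection.mp4")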
Additionally, the connection between the Jetson Nano and the network provider will be supported by an Ethernet connection, since this equipment does not natively support 5G connectivity.
The ML algorithm running on the server is based on DeepSORT and YOLOv4 [20], which enables the detection and tracking of individuals within the camera's field of view. The overall workflow of this system is illustrated in the following image:
To better depict how the different components involved in this service work together, the following organizational diagram was created. Note that the numbering of the components suggests an order that is not necessarily followed strictly.
• Edge server: transmits the live stream and receives the responses indicating whether it should transmit the information to the BullSequana Server. This latter capability was omitted from the illustration for simplicity.
• RabbitMQ: receives and transmits the reply to the Edge Server indicating whether it has permission to perform the transmission.
• Main Process: performs the detection and saves the captures to Wasabi.
• SubprocessFastAPI: provides an API for requesting camera streams and the saved videos (see the sketch after this list).
• Webserver: provides a web interface for viewing the streams made available by the FastAPI.
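A minimal sketch of the kind of API the SubprocessFastAPI component could expose; the endpoint paths, the stream registry and the returned payloads are assumptions, not the component's actual interface.

    from fastapi import FastAPI, HTTPException

    app = FastAPI()
    streams = {"cam-01": "rtsp://198.51.100.7:554/cam-01"}  # hypothetical registry

    @app.get("/streams/{camera_id}")
    def get_stream(camera_id: str):
        # Return the stream location for the requested camera, if known.
        if camera_id not in streams:
            raise HTTPException(status_code=404, detail="unknown camera")
        return {"camera_id": camera_id, "url": streams[camera_id]}

    @app.get("/videos")
    def list_videos():
        # In the real service this would list the clips saved to storage.
        return {"videos": ["captures/detection.mp4"]}

Such an application would typically be served with uvicorn and consumed by the web server component described above.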
Dynamic service
The current iteration of the Dynamic Service is still under analysis, both to ensure optimal communication between the Jetson Nano and the security guard's smartphone and to define the minimal requirements for the missing child dataset needed for a fully functional ML algorithm; these requirements should also address the mitigation of false positives and false negatives.
As previously mentioned, whether the best approach for the security guard to receive the notification of a lost child is a native application or a web browser is still being investigated.
Static service
In relation to the progress of the Static Service implementation, the software project for the Jetson Nano is dockerized, has access to the GPU for faster processing, and takes advantage of a Machine Learning algorithm to detect people. It can also be connected to a camera either via the Real-Time Streaming Protocol (RTSP) or through a physical connection, as sketched below.
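For illustration, opening either camera type with OpenCV could look as follows; the RTSP URL is hypothetical.

    import cv2

    RTSP_URL = "rtsp://192.0.2.20:554/stream1"  # hypothetical network camera

    # Try the RTSP source first; fall back to a physically attached camera.
    cap = cv2.VideoCapture(RTSP_URL)
    if not cap.isOpened():
        cap = cv2.VideoCapture(0)  # default local camera device

    ok, frame = cap.read()
    if ok:
        print("Received a frame of size", frame.shape)
    cap.release()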
As for the integration with the different partners, a task force was created to facilitate
communication and promote synergy to have a faster development cycle and deployment. The
significant milestones accomplished for this task are summarized in the following table.
Functionality | Status
Live feed transmission to the Cloud | Done
Request and receive a port to stream to the Cloud | Done
Stop transmission to the Cloud and resume its detection | Done
Integration of the software solution with Docker | Done
Verification of the type of camera it is connected to | Done
The following table summarizes the implemented features and their current development
status.
Functionality | Status
Receive a request from the Edge and assign a new port if any are available | Done
Storing the frames in a media file and into a Wasabi bucket | Partially Done
An essential integration with the Urban Platform [27] was completed, consisting of the use of authorization tokens to verify that the user has the required permissions to access the live stream. Further development is planned once all current issues are addressed.
Since the transmitted information is sensitive, VidGear [28] was configured to use its secure mode, so that servers and clients communicate only if they present a valid certificate, thus safeguarding the stream from nefarious agents. A RabbitMQ broker was deployed with user authentication and Transport Layer Security (TLS) so that unauthorized instances cannot communicate or spoof the data; a connection sketch is given below.
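A minimal sketch of such an authenticated, TLS-protected RabbitMQ connection using the pika client; the host name, credentials and certificate paths are hypothetical.

    import ssl
    import pika

    # TLS context with a private CA and a client certificate (mutual TLS).
    context = ssl.create_default_context(cafile="ca.pem")
    context.load_cert_chain("client.pem", "client.key")

    params = pika.ConnectionParameters(
        host="broker.example.org",   # hypothetical broker
        port=5671,                   # AMQP over TLS
        credentials=pika.PlainCredentials("user", "secret"),
        ssl_options=pika.SSLOptions(context, server_hostname="broker.example.org"),
    )
    connection = pika.BlockingConnection(params)
    channel = connection.channel()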
A part of the CVAE services will also be demonstrated in the FPGA platform provided by THI. In particular, the people detection algorithm will be deployed on the Zynq platform, on which a multithreaded instance of the NEOX accelerator will be realized. Apart from the hardware IP, THI will also utilize the NEOX SDK for optimizing the CNN models in terms of memory footprint and execution time. The goal is to explore and showcase that the person detection algorithm can be executed with the required performance on a far edge platform (e.g., a low-cost camera) characterized by scarce resources (both in terms of memory and computational capabilities) while operating under tight power constraints. The significant milestones for this part are presented in the following table.
Functionality | Status
Development of a lightweight DNN-based person detection application | Done
Integration of the person detection application in the AI-SDK compression framework | Partially Done
Integration of the person detection application in the AI-SDK deployment framework | Partially Done
Deployment of person detection in the NEOX accelerator | Done
The person detection CNN model is deployed on the THI FPGA platform and effectively parallelized across at least 32 threads, while the CNN is compressed by a factor of 5x compared to the initial size of the model. Each inference step is executed in less than 1 s at a power consumption of less than 3.5 mW.
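For readers unfamiliar with model compression of this kind, the generic post-training quantization sketch below uses TensorFlow Lite; it is not the proprietary NEOX SDK flow, and plain 8-bit quantization of float32 weights yields roughly 4x compression, so the 5x factor reported above reflects additional, THI-specific optimizations.

    import tensorflow as tf

    # Convert a trained detector (hypothetical SavedModel path) to a compressed
    # TFLite model using post-training dynamic-range quantization.
    converter = tf.lite.TFLiteConverter.from_saved_model("person_detector/")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()

    with open("person_detector_quant.tflite", "wb") as f:
        f.write(tflite_model)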
7.1 Introduction
The objective of the “Emergency Communications” pilot is to integrate, deploy and demonstrate 3GPP-compliant MCS services on top of the Affordable5G private network, relying on cloud-native functions for monitoring, flexible deployment, scaling and resource allocation.
As part of the Affordable5G network, the service can be deployed over different points of presence (PoPs), either in the generic main infrastructure or in closer, more optimized edge locations. This underlying infrastructure offers not only the possibility of applying modifications to the service once deployed in a single PoP, but also the chance of moving the service by re-instantiating it in another PoP if the monitored metrics meet the re-instantiation threshold (seen as the need for a better place to host the service).
The two use cases that are going to be demonstrated in the validation phase are the ones
corresponding to service scalability and service re-instantiation.
Whenever the system detects a service KPI degradation due to an increasing number of connections or a load increase, an MCS scaling mechanism is applied to deploy a new MCS CNF/KNF. In the case of high latency, if an edge service deployment and an MC service instantiation are possible, another MC instance is implemented at the edge.
Depending on the events happening in a certain area, both system and MCS service KPIs are collected by the System Monitoring module. Alarm and detection mechanisms are required to notice any deviation of the system and MCS KPIs from their nominal values. Communication between the orchestrator, the MCS service and the infrastructure is considered. The orchestrator shall trigger the actions (in our case, service scalability and service re-instantiation) depending on the alarms received from the monitoring module; an illustrative trigger sketch is given after this paragraph. The orchestration layer shall provide management and operation of network slice creation across the whole infrastructure, as well as of the service instances.
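As an illustrative sketch of such an alarm-driven trigger, the snippet below polls a Prometheus endpoint and flags a re-instantiation decision; the endpoint, metric name and threshold are assumptions, not the actual NearbyOne logic.

    import requests

    PROM = "http://prometheus.example:9090"  # hypothetical monitoring endpoint
    LATENCY_THRESHOLD_MS = 150.0             # hypothetical KPI threshold

    def mcs_latency_ms() -> float:
        # Query the Prometheus HTTP API for a hypothetical MCS latency metric.
        r = requests.get(f"{PROM}/api/v1/query",
                         params={"query": "mcs_request_latency_ms"})
        return float(r.json()["data"]["result"][0]["value"][1])

    if mcs_latency_ms() > LATENCY_THRESHOLD_MS:
        # In the real system the orchestrator would trigger scaling or
        # re-instantiation at an edge PoP; here we only report the decision.
        print("KPI degradation detected: trigger MCS re-instantiation at the edge")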
Figure 74 below shows a UML diagram, taken from “D1.2: Affordable5G building blocks fitting in 5G system architecture” [1], depicting the interactions that were initially considered and that have been simplified during the project lifetime. The analytics subscription routine, the monitoring and detection routine and the scale-up routine are shown between the main components described in the previous paragraph.
As the integrations have materialized and the validations have started, it has been decided to integrate the MCS service, the monitoring module and the orchestrator as follows. The Nemergent services will expose some internal metrics, through a Prometheus exporter, to the NWDAF (Network Analytics) module developed by NKUA. The NWDAF will process and aggregate these data and expose them through Prometheus. These data will be consumed by the orchestrator developed by Nearby Computing, which will accordingly trigger the required action to fulfil each use case's requirements in terms of service re-instantiation; a minimal exporter sketch follows.
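A minimal sketch of such a Prometheus exporter using the prometheus_client library; the metric name and the stand-in value are assumptions, not the actual Nemergent metrics.

    import time
    from prometheus_client import Gauge, start_http_server

    # Hypothetical MCS service metric exposed for Prometheus scraping.
    active_calls = Gauge("mcs_active_calls", "Number of ongoing MC calls")

    def read_call_counter() -> int:
        # Stand-in for reading the real service counter.
        return 0

    start_http_server(9100)  # serves the /metrics endpoint on port 9100
    while True:
        active_calls.set(read_call_counter())
        time.sleep(15)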
Figure 76 illustrates in a simplified way the different dockerized components that are deployed with the Nemergent HelmChart and the connection points or exposed ports towards the client side of the service. In this regard, the Castellolí platform has offered from the very beginning the NodePort networking exposure function, which the Nemergent service uses in the HelmChart and considers for evolutions of the service such as scaling-up methods (with the technical consideration of having the same IP and different ports for all “external” components in NodePort). Currently, the platform also offers the possibility of deploying with LoadBalancer capabilities, and Nemergent is adjusting the deployment and the service's intrinsic relationships to fully support it (with the technical consideration of having the same port and different IPs for all “external” components in LoadBalancer).
Taking into account the abovementioned figure and the NodePort method used to externalize certain components so that they are reachable from the client side, the next table sums up the ports used when deploying in Castellolí. This table is highlighted because the service required hardcoded port adjustments for the Configuration Management Server (CMS) and the Identity Management Server (IdMS), which usually make use of ports 80 and 8080. In this case, the ports were modified to 20001/20003 and 20002/20004 (in bold) to avoid ports blocked in the Castellolí infrastructure and its complex networking environment.
P-CSCF (IMS) | TCP/UDP 5060
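A minimal NodePort Service manifest sketch illustrating how such a remapped port can be exposed; the component name and port values are illustrative, not taken from the Nemergent HelmChart, and a nodePort of 20001 assumes the cluster's service-node-port-range has been extended below the default 30000-32767 range.

    apiVersion: v1
    kind: Service
    metadata:
      name: mcs-cms               # hypothetical component name
      namespace: slice-nemergent
    spec:
      type: NodePort
      selector:
        app: mcs-cms
      ports:
        - name: http
          port: 80                # port inside the cluster
          targetPort: 8080        # container port
          nodePort: 20001         # externally reachable node port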
All constraints and adjustments considered, the Emergency pilot can be easily deployed using the HelmChart provided by Nemergent and modified to point to the NearbyOne Docker registry. Figure 77 shows the “nemergentmcs” service deployed in the “slice-nemergent” Namespace in Castellolí.
The service brings up all the components linked in the HelmChart, and they are easily accessible as isolated Pods. Tools like the one used in the screenshots (Lens) help in the troublesome process of troubleshooting specific Pods, in terms of checking the available logs or inspecting their internals.
As explained before, this entire MCS/MCX service provided by Nemergent is currently deployed using NodePort in Castellolí, and this has a direct impact on the “external” IP and port assigned to those components that require external access from the client side. Figure 79 shows the Services part of the deployment, where the cluster IPs are shown alongside the mapped “external” port in the Ports field for all services of type NodePort.
Additionally, the Emergency Communication pilot also takes advantage of the Longhorn storage class for those Pods that need stateful information preserved across Pod restarts for stability. To this end, Nemergent uses six different persistent volumes that are directly requested or claimed at deployment time, as sketched below.
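A minimal PersistentVolumeClaim sketch of the kind such a stateful Pod would request against the Longhorn storage class; the claim name and size are hypothetical.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mcs-data              # hypothetical claim name
      namespace: slice-nemergent
    spec:
      storageClassName: longhorn
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi            # hypothetical size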
Once the service is completely deployed, the end-to-end testing can be carried out. In this regard, Nemergent has conducted a thorough analysis of the service and has encountered several limitations in the infrastructure networking that directly impact the proper reception of packets by the service. The next table summarizes the steps taken and the blocking points encountered.
Until the networking burden is solved, Nemergent has proceeded with a plan B to factually assess the end-to-end service in a separate cluster controlled by NearbyOne. Once double-checked, this setup should be equally transposed to the actual Castellolí platform.
8 CONCLUSIONS
This deliverable is an update of the previous deliverable in this work package, D4.1 Integration and Affordable5G roll-out plans [2]. It covers the upgrades of both the Malaga and Castellolí platforms achieved during the second year of the project lifetime, including the installation and deployment of enhanced products, developed prototypes and open platforms to carry out complete system tests. The installation and deployment are based on the methodology described in detail in the previous deliverable, which is crucial for the execution of these tasks.
The focus of this document has been on the integrations performed for the Affordable5G components, the different Test Cases defined individually, and the integrations that already took place in the different platforms, as well as the initial services running on top of the 5G NPN testbeds in Malaga and Castellolí. As described throughout the document, numerous successful integrations have been performed, and the different partners demonstrated the results of the hard work done along the project. Even in the cases that presented slight deviations, alternative Test Cases were demonstrated.
Additionally, several task forces (consisting of groups of partners) have started working on the different pilots, as explained in sections 5, 6 and 7 of this document, where the interplay and the requirements for the integration of components and services have been described in detail.
Each pilot has the purpose of challenging the platform, testing not only the connectivity but also each component and the overall performance of the systems. For example, Time-Sensitive Networking over 5G offers a relevant innovation in terms of network evolution, being able to solve its main challenges: translation from the TSN domain to 5G, prioritization over 5G, and time synchronization. The Smart City pilot uses the same indoor network to validate a real environment scenario with strict latency and bandwidth requirements, and the Mission Critical Services pilot evaluates the outdoor private network in an emergency situation, using edge computing services.
Finally, this deliverable will be updated by D4.3, the final deliverable of this work package, which will include the details of the end-to-end integrations and the validation results of the two pilot demonstrations.
9 REFERENCES
[1] Affordable5G, Deliverable D1.2: Affordable5G building blocks fitting in 5G system
architecture [Online].
[2] Affordable5G, Deliverable D4.1: Integration and Affordable5G roll-out plans [Online] -
https://www.affordable5g.eu/download/d4-1-integration-and-affordable5g-roll-out-
plans/?wpdmdl=1230&masterkey=623307620ba59, August 2021.
[3] Affordable5G, Deliverable D3.2: Software developments release [Online] -
https://www.affordable5g.eu/deliverables, March 2022.
[4] Affordable5G, Deliverable D2.2: Hardware solutions release
[5] 5G-ACIA White Paper, Integration of 5G with Time-sensitive Networking for Industrial
Communications [Online] - https://5g-acia.org/wp-content/uploads/2021/04/5G-
ACIA_IntegrationOf5GWithTime-SensitiveNetworkingForIndustrialCommunications.pdf
[6] SLI: O-RAN.WG1.Slicing: O-RAN Working Group 1 Slicing Architecture, v02.00, July
2020.
[7] CLORAN: O-RAN.WG6.CAD: Cloud Architecture and Deployment Scenarios for O-RAN
Virtualized RAN, v02.01, July 2020.
[8] OPNET: Stratum [Online] - https://opennetworking.org/stratum/, Accessed: June 2022.
[9] OSPROG: P4 Open Source Programming Language [Online] - https://p4.org/, Accessed:
June 2022.
[10] Ublox modem specification - https://www.u-blox.com/sites/default/files/SARA-R5-R4-
GNSS-Implementation_AppNote_%28UBX-
20012413%29.pdf#page=67&zoom=100,68,572
[11] NEOX: https://www.think-silicon.com/neox-graphics.
[12] NEMA: https://www.think-silicon.com/nema-bits
[13] MEBN: 821fa74b50ba3f7cba1e6c53e8fa6845-Paper.pdf (neurips.cc).
[14] CMC: Evaluation Metrics — Open-ReID documentation (cysu.github.io).
[15] MAP: Mean Average Precision (mAP) Explained: Everything You Need to Know
(v7labs.com).
[16] JTSN: NVIDIA Jetson Nano Developer Kit | NVIDIA Developer.
[17] REID: 2001.04193.pdf (arxiv.org).
[18] BLLSQ: BullSequana S (atos.net).
[19] WASABI: Creating a Bucket (wasabi.com).
[20] YOLO: GitHub - theAIGuysCode/yolov4-deepsort: Object tracking implemented with
YOLOv4, DeepSort, and TensorFlow.
[21] TFX Airflow Tutorial, https://www.tensorflow.org/tfx/tutorials/tfx/airflow_workshop.
[22] TFSD: TensorFlow Serving with Docker, https://www.tensorflow.org/tfx/serving/docker.
[23] RDFSH: https://www.dmtf.org/standards/redfish.
[24] WKRD: https://en.wikipedia.org/wiki/Redfish_(specification).