Deploying, Commissioning, and Integrating Cloud RAN BTS
DN276085968
Issue 01 DRAFT
Revised on 2024-02-14
© 2024 Nokia. Nokia Confidential Information. Use subject to agreed restrictions on disclosure and use.
Deploying, Commissioning, and Integrating Cloud RAN BTS
This document includes Nokia proprietary and confidential information, which may not be
distributed or disclosed to any third parties without the prior written consent of Nokia. This
document is intended for use by Nokia’s customers (“You”/”Your”) in connection with a
product purchased or licensed from any company within Nokia Group of Companies. Use this
document as agreed. You agree to notify Nokia of any errors you may find in this document;
however, should you elect to use this document for any purpose(s) for which it is not
intended, You understand and warrant that any determinations You may make or actions
You may take will be based upon Your independent judgment and analysis of the content of
this document.
Nokia reserves the right to make changes to this document without notice. At all times, the
controlling version is the one available on Nokia’s site.
NO WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
ANY WARRANTY OF AVAILABILITY, ACCURACY, RELIABILITY, TITLE, NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, IS MADE IN RELATION TO THE
CONTENT OF THIS DOCUMENT. IN NO EVENT WILL NOKIA BE LIABLE FOR ANY DAMAGES,
INCLUDING BUT NOT LIMITED TO SPECIAL, DIRECT, INDIRECT, INCIDENTAL OR
CONSEQUENTIAL OR ANY LOSSES, SUCH AS BUT NOT LIMITED TO LOSS OF PROFIT,
REVENUE, BUSINESS INTERRUPTION, BUSINESS OPPORTUNITY OR DATA THAT MAY ARISE
FROM THE USE OF THIS DOCUMENT OR THE INFORMATION IN IT, EVEN IN THE CASE OF
ERRORS IN OR OMISSIONS FROM THIS DOCUMENT OR ITS CONTENT.
© 2024 Nokia.
A list of changes between document issues. You can navigate through the respective changed
topics.
AirScale Cloud RAN BTS (Cloud RAN BTS) deployment and commissioning processes involve
installation and configuration of hardware and software. The cloud-native network functions
(CNFs) of the gNB are deployed on the Red Hat OpenShift Container Platform (OCP), which must
be installed and configured on host hardware servers.
Virtualized gNB distributed unit (vDU): A distributed data center unit that hosts virtualized, real-time (RT) functions of the radio access cloud (RAC).
Virtualized gNB central unit (vCU): A central data center unit that hosts virtualized, non-real-time (NRT) functions of the gNB and controls the operation of one or more vDUs.
The vDU and vCU are installed on the cloud infrastructure, deployed on hardware servers. The
servers connect to the management network through a cell site router (CSR).
In addition to the hardware servers, the physical entities of the Cloud RAN BTS deployment
include:
Radio units (RUs): Entities that host the RF functionality of the gNB. The RU is a physical radio supporting either the Common Public Radio Interface (CPRI) or the Enhanced Common Public Radio Interface (eCPRI) fronthaul connectivity. The RU may also host layer 1 (L1) processing in the solution based on an L1 functional split.
Nokia AirFrame Fronthaul Gateway (FHGW): A device that can work either as a conversion device or an Ethernet switch. It is responsible for converting the CPRI traffic in the time domain to the eCPRI traffic in the frequency domain. It provides switching of the eCPRI-native RU to the vDU.
RAN NICs are installed and controlled using the following Nokia proprietary Kubernetes operators
for Cloud RAN BTS, deployed on cloud infrastructure together with the vDU CNF:
RAN NIC Software Controller: Nokia proprietary Kubernetes operator for Cloud RAN BTS, which controls the life cycle management (LCM) operations of RAN NICs.
Nokia Advanced Synchronization Kubernetes Operator (Nokia Synchronization Operator): Nokia proprietary Kubernetes operator for Cloud RAN BTS, which provides the Precision Time Protocol (PTP) time reference for the vDU.
Notice:
This section presents the target Cloud RAN BTS deployment models. C-RAN configurations
will be supported in future releases.
D-RAN and C-RAN deployment models provide several options for the vDU and vCU locations.
D-RAN (small deployment): the vDU is deployed at the cell site (co-located with the vCU), and the vCU is deployed at the cell site (co-located with the vDU).
C-RAN (small deployment): the vDU is deployed at the far-edge data center (co-located with the vCU), and the vCU is deployed at the far-edge data center (co-located with the vDU).
In both deployment models, RUs connect with vDUs through the fronthaul interface using CPRI or
eCPRI protocols. The eCPRI is a low-latency Ethernet-based fronthaul interface towards legacy and
new RUs in the Cloud RAN BTS network. Legacy RUs use the CPRI interface, which requires an FHGW
between an RU and a vDU. The FHGW converts CPRI to eCPRI.
The vCU and the vDU are deployed on the cloud infrastructure using Kubernetes containers,
organized into pods. Containers are a type of software that can virtually package and isolate
applications. This way, applications can share access to an operating system without the need for
a virtual machine (VM). Pod deployment is automated using Helm, which is a system for simplifying
container management. The Helm charts are provided as part of the deployment package and
contain the details of resource creation and pod deployment with default values of the CNF
deployment parameters. You can override the default configuration values with parameters
specific to your infrastructure and the chosen deployment type by modifying the
values.override YAML files included in the deployment package.
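As an illustration, a values override file only needs to list the parameters that differ from the chart defaults. The following sketch is illustrative, not an excerpt from the deployment package; timezone and nodeSelector are parameters described later in this document, but the exact nesting can differ per CNF:

# Illustrative values override sketch; any key omitted here keeps
# the default value defined in the Helm chart.
global:
  timezone: "Europe/Helsinki"    # overrides the default of UTC
  nodeSelector:                  # assumed label; define your own node labels
    node-role.kubernetes.io/worker: ""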
The hardware servers which host the CNFs use the Container-as-a-Service (CaaS) framework
provided by OCP. You need to install and configure OCP on the hardware servers before the CNF
deployment. For detailed information on the supported hardware configurations, see Reference
Documentation/AirScale Cloud RAN BTS Supported Configurations.
Automation of the CNF deployment is possible thanks to the use of Nokia MantaRay NM and
other management tools, such as Nokia AirFrame Data Center Manager (NADCM) for hardware
management and Nokia Edge Automation Tool (NEAT) for OCP deployment and cloud
infrastructure management. Additional tools, such as Data Collection and Analytics Platform
(DCAP) and Unified Troubleshooting Framework (UTF), enable monitoring and troubleshooting
Cloud RAN BTS.
RAN NICs software doesn't belong to the vDU CNF. To deploy it with the vDU CNF, first install the
RAN NIC Software Controller on the target Kubernetes cluster. The RAN NIC drivers are installed
on the target cluster in the post-configuration phase of the OCP deployment. For more
information, see Deploying cloud infrastructure.
Installation of the management applications: The management software necessary for the deployment of Cloud RAN BTS includes:
• NADCM for hardware management
• NEAT for installation and management of the cloud infrastructure and pre-configuration of the FHGW and RAN NIC software
• MantaRay NM as a network management system (NMS)
Installation of hardware components at their locations: The selected deployment model defines the hardware configurations.
Deployment of the cloud infrastructure: Deployment of the cloud infrastructure is automated with NEAT.
Deployment of the FHGW and RAN NIC software: Deployment of the FHGW and RAN NIC software is automated with NEAT and MantaRay NM.
Deployment of the vCU, vDU, and Nokia proprietary Kubernetes operators for Cloud RAN BTS: Deployment of the CNFs and the Kubernetes operators is automated with MantaRay NM.
For the detailed end-to-end Cloud RAN BTS deployment procedure, see Deploying Cloud RAN BTS.
For an overview of the tools used in the Cloud RAN BTS deployment procedure, see Cloud RAN
BTS LCM overview.
Systems and tools involved in the AirScale Cloud RAN BTS (Cloud RAN BTS) life cycle management
(LCM)
For detailed information on the Cloud RAN BTS LCM, see the Operating Documentation/AirScale
Cloud RAN BTS Features/Cloud RAN BTS Site Solution Features/Cloud RAN BTS Life Cycle
Management.
The following figure presents the Cloud RAN BTS LCM workflow together with the tools and systems
responsible for each part of the process:
Figure 3: Overview of the systems, tools, and use cases involved in the Cloud RAN BTS LCM
OCP overview
Nokia Red Hat OCP is the Container-as-a-Service (CaaS) infrastructure that hosts the Cloud RAN
BTS CNFs. It's based on Kubernetes. Deployment of the Cloud RAN BTS CNFs is orchestrated
using Helm, with instructions for the deployment specified in Helm charts. In the Cloud RAN BTS
deployment process, OCP is deployed using NEAT.
MantaRay NM overview
MantaRay NM serves as the LCM orchestrator of the CNFs. It executes the Helm charts to deploy
the CNFs and coordinates the involved resources. MantaRay NM is also the configuration manager,
applying the configuration plans to the CNFs. The configuration plans are specified in site
configuration files (SCFs).
Central container registry (CCR) runs in the management cluster alongside MantaRay NM and
stores the container images of the CNFs, cloud infrastructure, and Nokia proprietary Kubernetes
operators for Cloud RAN BTS.
For more information, see MantaRay NM Operating Documentation.
NEAT overview
NEAT is the infrastructure manager, providing edge cloud automation of:
• edge site hardware, cloud infrastructure, and cloud infrastructure manager deployment
• configuration modification
• cloud infrastructure upgrade
• Kubernetes cluster creation and configuration
NADCM overview
NADCM is a management system for optimizing and automating data center operations and
resource usage. It provides the following functionalities:
• A single view over distributed data centers
• Operations across distributed data centers
• Hardware and embedded software inventory
• Alarm monitoring, performance management, and event management for the distributed data centers
• Device configuration and firmware management
NADCM and NEAT provide a single real-time view of cloud infrastructure and data center
resources. They allow you to manage faults, performance, and configuration.
In the Cloud RAN BTS deployment process, NADCM and NEAT are used to plan and commission
hardware and networking resources.
For more information, see Operating Documentation/Nokia AirFrame Data Center Manager.
DCAP overview
DCAP Suite is a Nokia solution for data collection, analysis, and troubleshooting. It includes several
products and serves all radio technologies. DCAP Basic is the part of the solution that is integrated
with MantaRay NM. In Cloud RAN BTS, DCAP can optionally be used for troubleshooting data at the
call level.
Central container registry (CCR) is an Open Container Initiative (OCI) compliant registry that
provides a storage and content delivery system for OCI artifacts. In AirScale Cloud RAN BTS
(Cloud RAN BTS) solution, the CCR is integrated with MantaRay NM.
During the installation of MantaRay NM, CCR is installed as a container service on Compute1
virtual machine (VM), which is an optional VM for the MantaRay NM LCM. After the CCR is installed,
configured, and integrated with Nokia Edge Automation Tool (NEAT) and MantaRay NM, you can
use the respective tools to onboard the software images to the CCR:
NEAT for OCP images
MantaRay NM for CNF images
For more information on the installation and configuration of the CCR, see Administering CCR in
MantaRay NM Operating Documentation.
For more information on the CCR integration to MantaRay NM, see Integrating Container
The MantaRay NM Workflow Engine provides LCM operations, which fetch the artifacts from
the SWM NFS and push them to the CCR.
The AirScale Cloud RAN BTS (Cloud RAN BTS) life cycle management (LCM) operations involve
communication between several cooperating tools and systems. To perform Cloud RAN BTS LCM
operations, you need to enable communication between the relevant tools and systems.
Cloud RAN BTS deployment is based on the cooperation between the following tools and systems:
Nokia AirFrame Data Center Manager (NADCM) for hardware management
Nokia Edge Automation Tool (NEAT) for installation and management of the cloud
infrastructure and preparation of the Nokia AirFrame Fronthaul Gateway (FHGW) deployment
MantaRay NM as network management system (NMS)
Central container registry (CCR) as an Open Container Initiative (OCI) artifacts repository and
content delivery system
CaaS Container-as-a-Service
CM configuration management
FM fault management
PM performance management
RU radio unit
To perform the Cloud RAN BTS LCM operations, you need to enable the following connections
between relevant tools and systems:
• NEAT application programming interface (API): port 443
• Hardware server - FHGW deployment with the Preboot Execution Environment (PXE) method: ports 69 (TFTP) and 6996 (NADCM HTTP)
An overview of the full end-to-end AirScale Cloud RAN BTS (Cloud RAN BTS) deployment process,
including references to detailed instructions for the individual steps
Purpose
To deploy Cloud RAN BTS, you need the following supporting tools and systems:
MantaRay NM, which is a network management system (NMS).
Nokia AirFrame Data Center Manager (NADCM), which is a hardware infrastructure
management system.
Nokia Edge Automation Tool (NEAT), which is a cloud infrastructure management system.
If there are no existing instances of the supporting tools in your network, you need to deploy and
commission them first. After that you can start the deployment of the actual Cloud RAN BTS
components:
Virtualized gNB central unit (vCU)
For detailed information on the role of the supporting tools and systems in the overall Cloud RAN
BTS deployment process, see Cloud RAN BTS LCM overview.
The order of the steps presented in this procedure is a guideline. Some steps can be performed
simultaneously.
Integrating the vCU, vDU, FHGW, and RUs with each other and with the core network is not
treated as a separate step, but is done as part of the configuration provisioning.
The steps that describe operations on the vDUs, RUs, CSR, and FHGW refer to a particular site. You
need to repeat these steps for every site.
Note:
CCR runs in the management cluster alongside the NMS and stores the cloud-native
network function (CNF) and cloud infrastructure container images. You can access the CCR
using MantaRay NM.
Procedure
1 Deploy and integrate the supporting tools and systems.
You need to perform this step only if there are no existing instances of the supporting tools
in your network.
1.1 Deploy MantaRay NM.
For more information, see Configuring and integrating CCR with MantaRay NM.
1.2 Deploy NADCM and NEAT.
NADCM and NEAT need to be installed on the same Kubernetes cluster. Installation of
NADCM is a step in the NEAT installation procedure. The NADCM software is included
in the NEAT installation package. For instructions, see Operating
Documentation/Nokia Edge Automation Tool/Installation/Installing NEAT.
Note:
The NEAT installation package also includes NEAT Planner Application, which is
used to create edge data center deployment plans, needed as input for the
NEAT workflows. For more information, see Operating Documentation/Nokia
Edge Automation Tool/Operations and Maintenance/Operating NEAT
Planner Application.
4 Onboard the Nokia Red Hat OpenShift Container Platform (OCP) images to the CCR.
5.1 Prepare the site configuration files (SCFs) and the value override files for your
deployment.
You can configure and integrate the CSR using NADCM. For instructions, see Operating
Documentation/Nokia AirFrame Data Center Manager/Operations and
Maintenance/Operating and Maintaining NADCM.
Note:
In some deployment models, the vCU can also be co-located with the vDU at
the far edge or cell site. In such a case, you install the hardware simultaneously
for both the vDU and the vCU. For more information on the Cloud RAN BTS
deployment models, see Overview and requirements for Cloud RAN BTS
startup.
You need to select a target deployment profile, specific to your configuration. For more information on the Cloud RAN BTS deployment models, see Overview and requirements for Cloud RAN BTS startup.
Tip:
If the vCU and vDU are co-located, you only need to deploy one OCP cluster for
both CNFs.
8.6 Prepare a CNF deployment plan and Kubernetes secrets for a CNF object in MantaRay
NM.
Note:
You need to set a target deployment profile, specific to your configuration,
with the correct parameter value in the values override files. For more
information, see Filling in the values override files for the vCU.
Note:
You need to commission the vCU in CU WebEM only if you didn't use
autoconfiguration during the deployment.
10.1 Install the hardware servers and the IXR switches (if used) at the far edge or cell site.
Note:
In some deployment models, the vCU can be co-located with the vDU at the far
edge or cell site. In such a case, you install the hardware simultaneously for
both the vDU and the vCU. For more information on the Cloud RAN BTS
deployment models, see Overview and requirements for Cloud RAN BTS
startup.
10.3 Commission the hardware servers and the IXR switches (if used).
You need to select a target deployment profile, specific to your configuration. For
more information on the Cloud RAN BTS deployment models, see Overview and
requirements for Cloud RAN BTS startup.
Tip:
If the vCU and vDU are co-located, you only need to deploy one OCP cluster for
both CNFs.
10.7 Prepare a CNF deployment plan and Kubernetes secrets for a CNF object in MantaRay
NM.
10.8 Deploy Nokia proprietary Kubernetes operators for Cloud RAN BTS on the vDU
Kubernetes cluster.
Nokia proprietary Kubernetes operators for Cloud RAN BTS include the RAN NIC
Software Controller and Nokia Advanced Synchronization Kubernetes Operator (Nokia
Synchronization Operator). You need these operators to install and use the RAN NICs.
For instructions, see Deploying a CNF in MantaRay NM without autoconfiguration.
Note:
You need to set a target deployment profile, specific to your configuration,
with the correct parameter value in the values override files. For more
information, see Filling in the values override files for the vDU.
Note:
You need to commission the vDU in vDU WebEM only if you didn't use
autoconfiguration during the deployment.
Nokia provides software packages for the cloud-native network functions (CNFs), physical
network functions (PNFs), cloud infrastructure, Nokia proprietary Kubernetes operators for
AirScale Cloud RAN BTS (Cloud RAN BTS), and hardware. To execute the deployment, the software
packages need to be delivered to the following tools: MantaRay NM, Nokia Edge Automation Tool
(NEAT), or Nokia AirFrame Data Center Manager (NADCM).
Cloud infrastructure: delivered to NEAT, which holds the images for deploying Red Hat OpenShift Container Platform (OCP).
Note:
The following software packages are collected and distributed as a part of the Cloud RAN
BTS software releases:
CNF
Nokia proprietary Kubernetes operators for Cloud RAN BTS
Cloud infrastructure
PNF
The embedded software packages are distributed as a part of the AirFrame product
releases.
You need to download the AirScale Cloud RAN BTS (Cloud RAN BTS) software packages from
Nokia Software Supply Tool (SWSt) before you can manually import and onboard them to the tool
executing the deployment.
Purpose
To deploy Cloud RAN BTS, you need to first download the following software packages from SWSt:
Procedure
1 Go to SWSt on Nokia Support Portal.
Step example
24R1-CR SW ID: 24R1-CR AirScale Cloud RAN BTS 0.0TD
3 Check the box next to the selected file and click DOWNLOAD.
Note:
You can select and download multiple files at the same time.
You need to prepare the configuration for both the virtualized gNB central unit (vCU) and the
virtualized gNB distributed units (vDUs) before you deploy AirScale Cloud RAN BTS. The
configuration consists of two sets of parameters: BTS parameters in a site configuration file (SCF)
and deployment parameters collected in Helm values override files.
The cloud gNB is configured using managed object (MO) parameters. The gNB split introduces
common and separate parameters for the virtualized gNB central unit (vCU) and virtualized gNB
distributed units (vDUs). Matching values of the common parameters must be set in the vCU and
the vDUs for correct operation.
You can correct the inconsistencies between vCU and vDU parameters and MO definitions in the
MantaRay NM CM Analyzer view. For more information, see the CM Analyzer Help chapter in
MantaRay NM Operating Documentation.
In AirScale Cloud RAN BTS (Cloud RAN BTS), the following parameters require identical settings in
interconnected vCUs and vDUs:
NRBTS New Radio Basestation instance identifier (nrBtsId): The value must be the same for the vCU and all interconnected vDUs.
NRDU New Radio Distributed Unit instance identifier (nrDuId): The value must be the same on both the vCU and vDU side for a pair of NRDU objects having the same value of the gNbDuId parameter.
NRCELL Local cell resource ID (lcrId): The lcrId parameter together with the nrBtsId parameter identifies a cell within a public land mobile network (PLMN).¹
NRCELLGRP New Radio Cell Group instance identifier (nrCellGrpId): Two NRCELLGRP objects, one in the vCU and one in the vDU, which contain the same list of NRCELL objects must have the same value of the nrCellGrpId parameter.²
¹ After any cell addition or reconfiguration, you need to check the cell parameters using the MS Excel-based Adaptive PDCCH Configuration Tool. Only then is the cell configured correctly. You can find the tool in Discovery Center under Reference Documentation/Product Configurations/Adaptive PDCCH Configuration Tool.
² Keep the value of the NRCELL New Radio Cell instance identifier (nrCellId) parameter equal in the vCU and vDUs. Managed object IDs in counters and alarms are created using the nrCellId parameter.
Note:
The above parameter list is partial. For the full parameter list, see the Reference
Documentation/Reference document/AirScale Cloud RAN BTS Parameters document.
Some parameters are common to the vCU and vDUs, although there are also parameters that
exist only in the vCU and others that exist only in vDUs. Common parameters must have the same
values for interconnected object instances, meaning corresponding objects, in both the vCU and
the vDUs.
New Radio Network Signaling Pmax Profile user label (userLabel): MRBTS/NRBTS/NRNSPMAX_PROFILE
Note:
The above parameter list is partial. For the full parameter list, see the Reference
Documentation/Reference document/AirScale Cloud RAN BTS Parameters document.
For each managed object class (MOC), there is a parameter which defines that an instance on the
vCU side is interconnected with an instance on the vDU side. The pairing object attribute is not
always the MOC ID.
Note:
The above parameter list is partial. For the full parameter list, see the Reference
Documentation/Reference document/AirScale Cloud RAN BTS Parameters document.
Note:
The above parameter list is partial. For the full parameter list, see the Reference
Documentation/Reference document/AirScale Cloud RAN BTS Parameters document.
You also need to configure the vDU parameters listed in Table: vDU parameters required for a
working connection to vCU. These parameters can also be configured using vDU WebEM or the
SCF. For more information, see Creating a configuration plan in vDU WebEM and Configuring vDU
parameters in SCF.
The site configuration file (SCF) contains all cloud gNB parameters. Modify the SCF to customize
the requirements for the cloud gNB.
The SCF contains all necessary configuration details for management, hardware, and transmission
for the network elements. This configuration is required for the gNB to work properly, and to
commission the virtualized gNB central unit (vCU) and the virtualized gNB distributed units (vDUs).
You can configure and modify the SCF in two ways:
Online:
• Using vDU WebEM to create an SCF. For more information, see Creating a configuration plan in vDU WebEM.
• Using CU WebEM to create an SCF. For more information, see Creating a configuration plan in CU WebEM.
You need to configure a site configuration file (SCF) before uploading it to the virtualized gNB
central unit (vCU).
Purpose
The SCF is a configuration file that can be applied to the gNB directly. The SCF template is part of
the software package that can be downloaded from Support Portal. The template contains
parameters with recommended values, but it requires modification to match your specific
deployment, feature activation, and feature configuration.
Note:
Edit the file locally before uploading it from your local device.
Procedure
1 Open the SCF in a text editor.
Note:
To edit the SCF, you can also use the Parameter Editor in CU WebEM. For more
information, see Creating a configuration plan in CU WebEM.
Go to f1Cplane and configure the IP data for the cpif pod fronthaul interface.
MRBTS/TNLSVC/TNL/ETHSVC/ETHIF/VLANIF, vlanIfName: Identifier used for discovering a VLAN interface. Its value needs to be the same as the value of the vlName parameter of the network in the values override file. For more information, see Filling in the values override files for the vCU. Example: f1cplane
MRBTS/TNLSVC/TNL/IPNO/IPIF/IPADDRESSV4, f1Cplane/ipV4AddressDN1: The primary IPv4 address discovered on the vCU F1 control plane (C-plane). Example: MRBTS-2205/TNLSVC-1/TNL-2/IPNO-1/IPIF-2/IPADDRESSV4-1
MRBTS/TNLSVC/TNL/IPNO/IPIF/IPADDRESSV4, ipAddressAllocationMethod: The IP address allocation method for a fronthaul IP in the cpif pod. The value DISCOVERED indicates an IP address allocated during the vCU deployment. Value: DISCOVERED
Go to f1Uplane and configure the IP data for the upue pod fronthaul interface.
MRBTS/TNLSVC/TNL/ETHSVC/ETHIF/VLANIF, vlanIfName: Identifier used for discovering a VLAN interface. Its value needs to be the same as the value of the vlName parameter of the network in the values override file. For more information, see Filling in the values override files for the vCU. Example: f1uplane
MRBTS/TNLSVC/TNL/IPNO/IPIF/IPADDRESSV4, f1Uplane/ipV4AddressDN1: The primary IPv4 address discovered on the vCU F1 user plane (U-plane). Example: MRBTS-2205/TNLSVC-1/TNL-3/IPNO-1/IPIF-2/IPADDRESSV4-1
MRBTS/TNLSVC/TNL/IPNO/IPIF/IPADDRESSV4, ipAddressAllocationMethod: The IP address allocation method for a fronthaul IP in the upue pod. The value DISCOVERED indicates an IP address allocated during the vCU deployment. Value: DISCOVERED
Postrequisites
Once your SCF file is configured and all the parameters are set, you can upload the file to CU
WebEM. For more information, see Loading an SCF to CU WebEM.
You need to configure a site configuration file (SCF) before uploading it to the virtualized gNB
distributed unit (vDU).
Purpose
The SCF is a configuration file that can be applied to the gNB directly. The SCF template is part of
the software package that can be downloaded from Support Portal. The template contains
parameters with recommended values, but it requires modification to match your specific
deployment, feature activation, and feature configuration.
Note:
Edit the file locally before uploading it from your local device.
Procedure
1 Open the SCF in a text editor.
Note:
To edit the SCF, you can also use the Parameter Editor in vDU WebEM. For more
information, see Creating a configuration plan in vDU WebEM.
Go to f1Cplane and configure the Virtual Local Area Network (VLAN) IP address used for
connection between the virtualized gNB distributed unit (vDU) and the virtualized gNB central
unit (vCU).
MRBTS/TNLSVC/TNL/ETHSVC/ETHIF/VLANIF, vlanIfName: Identifier used for discovering a VLAN interface. Its value needs to be the same as the value of the vlName parameter of the network in the values override file. For more information, see Filling in the values override files for the vDU. Example: f1cplane
MRBTS/TNLSVC/TNL/IPNO/IPIF/IPADDRESSV4, f1Cplane/ipV4AddressDN1: The primary IPv4 address discovered on the vCU F1 control plane (C-plane). Example: MRBTS-2205/TNLSVC-1/TNL-1/IPNO-1/IPIF-2/IPADDRESSV4-1
MRBTS/TNLSVC/TNL/ETHSVC/ETHIF/VLANIF, vlanIfName: Identifier used for discovering a VLAN interface. Its value needs to be the same as the value of the vlName parameter of the network in the values override file. For more information, see Filling in the values override files for the vDU. Example: f1uplane
MRBTS/TNLSVC/TNL/IPNO/IPIF/IPADDRESSV4, f1Uplane/ipV4AddressDN1: The primary IPv4 address discovered on the vCU F1 user plane (U-plane). Example: MRBTS-2205/TNLSVC-1/TNL-1/IPNO-1/IPIF-2/IPADDRESSV4-1
Postrequisites
Once your SCF file is configured and all the parameters are set, you can upload the file to vDU
WebEM. For more information, see Loading an SCF to vDU WebEM.
To validate the SCF in vDU WebEM, follow Validating an SCF in vDU WebEM.
The virtualized gNB central unit (vCU) is deployed using Helm charts. Before the deployment, you
need to provide the Helm charts with configuration parameters and information about your
environment. You do this by editing YAML files included in the vCU deployment package.
Figure 7: Updating Helm chart configuration values from values override files
In the CNF deployment package, there are three values override files:
values-override.aic-vcu-cluster-preparation.yaml
values-override.aic-vcu-prerequisite.yaml
values-override.aic-vcu.yaml
Before the deployment, you need to fill in the mandatory parameters in the
values-override.aic-vcu.yaml file and all parameters in the remaining files. After the deployment,
you can provide additional parameter values in CU WebEM.
It's possible to deploy the CNF using autoconfiguration. In such a case, the CNF automatically
downloads the planned configuration from the network management system (NMS) and you don't
need to provide additional information after the deployment. Using autoconfiguration requires
you to provide additional parameter values for autoconnection and certificate management in the
values-override.aic-vcu.yaml, as well as the initial certificate enrolment secrets in the
cmp_secret.yaml and taTrustChain_secret.yaml files. You can find the files inside the
Secret folder of the CNF deployment package.
Extract the following files from the HelmChart folder inside the vCUCNF<version>.zip
package:
values-override.aic-vcu-cluster-preparation.yaml
values-override.aic-vcu-prerequisite.yaml
values-override.aic-vcu.yaml
sccName: This parameter defines the security context constraints (SCC) name. It allows administrators to control permissions for pods.
If the sccName parameter value is set to privileged, a new SCC isn't created and a privileged SCC is bound to the serviceaccounts parameter.
If the sccName parameter value isn't set to privileged or is set to null, a new SCC is created and bound to the serviceaccounts parameter.
The default value is cnf5g.
userName: This parameter defines a user name of an object class (oc) user. When the userName parameter value is different from null, roles are added for an oc user so that the vCU can be deployed.
fastPathDevicePoolBackup: This parameter defines the SR-IOV device pool used by the external-u-s network. Its value should be different from the fastPathDevicePool parameter value.
slowPathDevicePool: This parameter defines the SR-IOV device pool used by the external-c and internale1 networks.
slowPathDevicePoolBackup: This parameter defines the SR-IOV device pool used by the external-c-s and internal-s networks. Its value should be different from the slowPathDevicePool parameter value.
accessModes: This parameter defines the access modes of the persistent volume. The supported values are:
• For the Red Hat OpenShift Container Platform (OCP) single-node (SNO): ReadWriteOnce
• For OCP multi-node (MNO): ReadWriteMany
storageClassName: This parameter defines the storage class name of the persistent volume. If its value isn't specified, the default storage class is used. The supported values are:
• For OCP SNO: localfs-lvm-sc
• For OCP MNO: ocs-storagecluster-cephfs
restoreData: This parameter defines whether to restore data which was backed up when the persistent volume claim (PVC) was created. For more information, see Operating Documentation/AirScale Cloud RAN BTS System/Upgrading Cloud RAN BTS System. The default value of this parameter is true.
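Taken together, these parameters land in the prerequisite values override file. The sketch below follows the aic-vcu-network.devicePoolList paths referenced later in this section; the aic-pvc section name is inferred from the vDU counterpart, and the pool names and values are illustrative assumptions:

# Sketch of values-override.aic-vcu-prerequisite.yaml (illustrative values)
aic-vcu-network:
  devicePoolList:
    fastPathDevicePool: "sriov_fast"          # pool for the external-u network
    fastPathDevicePoolBackup: "sriov_fast_b"  # must differ from fastPathDevicePool
    slowPathDevicePool: "sriov_slow"          # external-c and internale1 networks
    slowPathDevicePoolBackup: "sriov_slow_b"  # must differ from slowPathDevicePool
aic-pvc:                                      # section name is an assumption
  accessModes: ReadWriteOnce                  # OCP SNO; use ReadWriteMany for MNO
  storageClassName: localfs-lvm-sc            # OCP SNO; ocs-storagecluster-cephfs for MNO
  restoreData: true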
4 Edit the values-override.aic-vcu.yaml file in a text editor and save the changes.
To deploy the vCU, you need to fill in the parameters in the following sections:
global (general parameters, network parameters, autoconnection parameters)
certman-auto-cmp (certificate management configuration parameters)
aic-vcu-upue, aic-vcu-cpif, aic-vcu-cpe2, and aic-oamext (subchart LRIP
round-check configuration parameters)
The values depend on your environment. For guidance, use the following tables and the comments inside the file.
cpifInstanceCount: This parameter defines the number of the 2N cpif redundancy instances. You need to fill in the value of this parameter only if the value of the cnfLayout parameter is set to redundant. The supported values range from 1 to 4.
cpclInstanceCount: This parameter defines the number of the 2N cpcl redundancy instances. You need to fill in the value of this parameter only if the value of the cnfLayout parameter is set to redundant. The supported values range from 1 to 4.
upueFlavor: This parameter defines the deployment flavor of the upue pod and indicates the number of l2hicu containers and the resources requested for creation of new trsfp containers. Allowed values:
• small, if the value of the cnfLayout parameter is set to mini, basic, or redundant. In such a case, the upueCount parameter values range from 2 to 44.
• medium, if the value of the cnfLayout parameter is set to redundant. In such a case, the upueCount parameter values range from 2 to 22.
• large, if the value of the cnfLayout parameter is set to redundant. In such a case, the upueCount parameter values range from 2 to 11.
cpe2Count: This parameter defines the number of cpe2 pods. Redundancy of the cpe2 pod is not supported. Allowed values:
• 0
• 1
minCapacityCPUE: This parameter defines the minimum number of cpue pods to fulfil the capacity requirements. If the requirement specified by the value of this parameter is not met, the gNB raises an alarm. The maximum value of this parameter is 8.
If the cnfLayout parameter value is basic, the minCapacityCPUE parameter values range from 1 to 8.
If the cnfLayout parameter value is redundant, the minCapacityCPUE parameter values range from 2 to 8, and a value of 1 is automatically changed to 2.
minCapacityUPUE: This parameter defines the minimum number of upue pods to fulfil the capacity requirements. If the requirement specified by the value of this parameter is not met, the gNB raises an alarm. The maximum value of this parameter is 44.
If the cnfLayout parameter value is basic, the minCapacityUPUE parameter values range from 1 to 44.
If the cnfLayout parameter value is redundant, the minCapacityUPUE parameter values range from 2 to 44, and a value of 1 is automatically changed to 2.
coreDumpPath: This parameter defines the directory for the host path volume to store the coredump information.
storageClassName: This parameter defines the storage class name of the persistent volume. If its value isn't specified, the default storage class is used. The supported values are:
• For OCP version 4.12 or higher SNO and SNO+1: lvms-vg1
• For OCP version lower than 4.12 SNO: localfs-lvm-sc
• For OCP MNO: ocs-storagecluster-cephfs
accessModes: This parameter defines the access mode of the persistent volume. The supported values are:
• For OCP SNO: ReadWriteOnce
• For OCP MNO: ReadWriteMany
timezone: This parameter defines the time zone as a string in a Region/City format. The default value of this parameter is UTC. Examples:
• Europe/Helsinki
• UTC
externalNetworkInterfaceEnabled: This parameter defines whether the external network interface is enabled in the context of the system upgrade. For more information, see Operating Documentation/AirScale Cloud RAN BTS System/Upgrading Cloud RAN BTS System. The default value of this parameter is true.
swReplacementType: This parameter defines the type of software replacement operation for software upgrade. For more information, see Operating Documentation/AirScale Cloud RAN BTS System/Upgrading Cloud RAN BTS System. The default value of this parameter is Rip & Replace.
upgradeUserAccountSecretName: This parameter defines the name of the secret which is used in the blue-green upgrade post-installation hook. You need to create the secret before upgrading the vCU. For more information, see Operating Documentation/AirScale Cloud RAN BTS System/Upgrading Cloud RAN BTS System.
nodeSelector: This parameter assigns the pods to specific nodes. It's mainly used in the OCP SNO+1 deployment, where the vCU needs to be deployed on the Master+Worker+Storage node. You need to define the node labels in advance.
seLinux: This parameter defines whether the Security-Enhanced Linux (SELinux) policies are enforced.
If the value of the seLinux parameter is set to true, the SELinux mode is set to enforcing.
If the value of the seLinux parameter is set to false, the SELinux mode is set to permissive. Note that when the SELinux mode is set to enforcing, audit logs are not available.
The allowed value of the seLinux parameter is true.
highPerformanceRunTimeClass: This parameter enables the Kubernetes RuntimeClass used for high performance. The default nokia-performance class is installed with OCP provided by Nokia. Change the value if the deployment is performed on non-Nokia infrastructure.
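For orientation, the sketch below shows a handful of these global parameters filled in for a hypothetical basic deployment. The nesting under global follows the section list in step 4; all values are illustrative, not recommendations:

global:
  cnfLayout: "basic"           # deployment layout; see the cnfLayout description
  minCapacityCPUE: 1           # 1 to 8 when cnfLayout is basic
  minCapacityUPUE: 2           # 1 to 44 when cnfLayout is basic
  timezone: "Europe/Helsinki"  # Region/City format; the default is UTC
  seLinux: true                # the only allowed value
  swReplacementType: "Rip & Replace"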
4.2 Fill in the subchart LRIP round-check configuration parameters under <subchart>.lripRoundcheckInfo.
Subchart names:
aic-vcu-upue
aic-vcu-cpif
aic-vcu-cpe2
aic-oamext
fastPathDevicePool: This parameter defines the SR-IOV device pool used by the external-u network. Its value must be the same as the value of the aic-vcu-network.devicePoolList.fastPathDevicePool parameter in the values-override.aic-vcu-prerequisite.yaml file.
fastPathDevicePoolBackup: This parameter defines the SR-IOV device pool used by the external-u-s network. Its value must be the same as the value of the aic-vcu-network.devicePoolList.fastPathDevicePoolBackup parameter in the values-override.aic-vcu-prerequisite.yaml file.
slowPathDevicePool: This parameter defines the SR-IOV device pool used by the external-c network. Its value must be the same as the value of the aic-vcu-network.devicePoolList.slowPathDevicePool parameter in the values-override.aic-vcu-prerequisite.yaml file.
slowPathDevicePoolBackup: This parameter defines the SR-IOV device pool used by the external-c-s network. Its value must be the same as the value of the aic-vcu-network.devicePoolList.slowPathDevicePoolBackup parameter in the values-override.aic-vcu-prerequisite.yaml file.
vlanInfo.vid: This parameter defines the guest VLAN for the corresponding interface. Make sure that the vlanInfo.vid parameter values are different for the VLAN interfaces which have the same values of the defaultUnderlyingIface parameter.
vlanInfo.vlName: This parameter defines the name for further identification of the VLAN, for example, in the case of configuring transport separation in the NMS. The vlanInfo.vlName parameter needs to be the same as the VLAN interface name in the site configuration file (SCF).
vlanInfo.defaultUnderlyingIface: This parameter defines the default underlying Ethernet interface for the corresponding VLAN interface.
vlanInfo.secUnderlyingIface: This parameter defines the secondary underlying Ethernet interface for the corresponding VLAN interface. It is not applicable to the transport network.
ipInfo.role: This parameter defines the role of the interface when the transport separation is configured in the NMS. Its value needs to be the same as the value of the IPADDRESSV4 Unique IP address identifier (uniqueIpAddressIdentifier) parameter or the IPADDRESSV6 Unique IP address identifier (uniqueIpAddressIdentifier) parameter for the network in the SCF. If there are multiple roles, separate them by space in the SCF. Example: F1-U_customname1
physicalIpGroup.usePhysicalIpGroup: This parameter defines the group which will be using the set up OAM physical IP. The supported values are:
• groupA
• groupB
• empty (default)
physicalIpGroup.subnet: This parameter defines the subnet including a mask. It is applicable only to the OAM network. Examples:
• 192.168.254.0/24
• 2001:db8::0/128
physicalIpGroup.role: This parameter defines the role of the interface when the transport separation is configured in the NMS. It is applicable only to the OAM network. The value of the physicalIpGroup.role parameter needs to be the same as the value of the IPADDRESSV4 Unique IP address identifier (uniqueIpAddressIdentifier) parameter or the IPADDRESSV6 Unique IP address identifier (uniqueIpAddressIdentifier) parameter for the network in the SCF. If there are multiple roles, separate them by space in the SCF.
vlanInfo.vid: This parameter defines the ID for the internale1 VLAN interface.
vlanInfo.vlName: This parameter defines the name for further identification of the VLAN, for example, in the case of configuring transport separation in the NMS. The vlanInfo.vlName parameter needs to be the same as the VLAN interface name in the site configuration file (SCF).
vlanInfo.defaultUnderlyingIface: This parameter defines the default underlying Ethernet interface for the corresponding VLAN interface.
vlanInfo.secUnderlyingIface: This parameter defines the secondary underlying Ethernet interface for the corresponding VLAN interface.
useGroup: This parameter defines the group from which the IP addresses will be used.
ipInfo.subnet: This parameter defines the subnet for the internale1 network.
ipInfo.role: This parameter defines the role of the internale1 interface when the transport separation is configured in the NMS.
ipInfo.ipAddress.groupA.cpif.rangeStart, ipInfo.ipAddress.groupA.cpif.rangeEnd, ipInfo.ipAddress.groupA.cpnrt.rangeStart, ipInfo.ipAddress.groupA.cpnrt.rangeEnd, ipInfo.ipAddress.groupB.cpif.rangeStart, ipInfo.ipAddress.groupB.cpif.rangeEnd, ipInfo.ipAddress.groupB.cpnrt.rangeStart, ipInfo.ipAddress.groupB.cpnrt.rangeEnd: The ipInfo parameters define the IP pool information. You need to fill in both the rangeStart and rangeEnd parameters for groupA and groupB. In case of the blue-green upgrade of the vCU, you need to configure the IP addresses for groupA and groupB as different IP addresses from the same subnet.
lripInternalE1Info.gatewayList: This parameter defines the gateway list of the IP address.
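A sketch of the groupA and groupB IP pools, assuming the internale1 section nests according to the parameter paths above; the top-level key and all addresses are illustrative assumptions:

internalE1:                        # top-level key name is an assumption
  ipInfo:
    ipAddress:
      groupA:
        cpif:
          rangeStart: "10.10.1.10" # illustrative addresses
          rangeEnd: "10.10.1.19"
        cpnrt:
          rangeStart: "10.10.1.20"
          rangeEnd: "10.10.1.29"
      groupB:                      # blue-green upgrade: different addresses, same subnet
        cpif:
          rangeStart: "10.10.1.30"
          rangeEnd: "10.10.1.39"
        cpnrt:
          rangeStart: "10.10.1.40"
          rangeEnd: "10.10.1.49"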
For each slice, copy the corresponding example in the file, including all the spaces. For F1-U, copy the f1u_slice1 example:
f1uSlice1:
  vlanInfo:
    vlName: "f1u_slice1"
    defaultUnderlyingIface: "extu"
    secUnderlyingIface: "extus"
    vid: ""
  ipInfo:
    - rangeStart: ""
      rangeEnd: ""
      subnet: ""
      role:
        - "F1-U_slice1"
s1uSlice1:
  vlanInfo:
    vlName: "s1u_slice1"
    defaultUnderlyingIface: "extu"
    secUnderlyingIface: "extus"
    vid: ""
  ipInfo:
    - rangeStart: ""
      rangeEnd: ""
      subnet: ""
      role:
        - "S1-U_slice1"
nguSlice1:
  vlanInfo:
    vlName: "ngu_slice1"
    defaultUnderlyingIface: "extu"
    secUnderlyingIface: "extus"
    vid: ""
  ipInfo:
    - rangeStart: ""
      rangeEnd: ""
      subnet: ""
      role:
        - "NG-U_slice1"
    - rangeStart: ""
      rangeEnd: ""
      subnet: ""
      role:
        - "NG-U_slice1"
vlanInfo.vid: This parameter defines the guest VLAN for the slice. If its value isn't set, a host VLAN is used.
vlanInfo.vlName: This parameter defines the VLAN name used for identifying the VLAN in the context of configuring transport separation in the NMS. Examples:
• f1u_slice1
• s1u_slice1
• ngu_slice1
vlanInfo.defaultUnderlyingIface: This parameter defines the default underlying Ethernet interface for the corresponding VLAN interface.
ipInfo.role: This parameter specifies the role of the interface in the context of configuring the transport separation in the NMS. Its value needs to be the same as the value of the IPADDRESSV4 Unique IP address identifier (uniqueIpAddressIdentifier) parameter or the IPADDRESSV6 Unique IP address identifier (uniqueIpAddressIdentifier) parameter for the network in the SCF. If there are multiple roles, separate them by space in the SCF.
A role needs to be unique and have the following format: NG-U_X. X is the dynamic part which can be customized by the operator. The maximum length for X is 24 and the allowed characters include [A-Za-z0-9]. Examples:
• F1-U_slice1
• S1-U_slice1
• NG-U_slice1
4.7 [Optional] If you use the Two-Way Active Measurement Protocol (TWAMP), fill in the additional network parameters.
Note:
This is relevant only if the CB008468: IP Transport Network Measurements
in Cloud RAN feature was activated after the deployment. This feature makes
TWAMP available for all mid- and backhaul interfaces of the vDU and the vCU.
For more information, see the Operating Documentation/AirScale Cloud RAN
BTS Features/CB008468: IP Transport Network Measurements in Cloud
RAN document.
For each interface with the configured TWAMP, copy the example in the file under
global.externalNetwork.twamp_f1c, including all the spaces:
twampF1c:
  vlanInfo:
    vlName: "twamp_f1c"
    defaultUnderlyingIface: "twamp"
    secUnderlyingIface: "twamps"
    vid: ""
  ipInfo:
    - rangeStart: ""
      rangeEnd: ""
      subnet: ""
      role:
        - "twamp_f1c"
vlanInfo.vid: This parameter defines the guest VLAN for TWAMP. If its value is not set, a host VLAN is used.
vlanInfo.vlName: This parameter defines the VLAN name used for identifying the VLAN in the context of configuring the transport separation in the NMS. The allowed value of this parameter is a string, for example: twamp_f1c
vlanInfo.defaultUnderlyingIface: This parameter defines the default underlying Ethernet interface for the corresponding VLAN interface.
ipInfo.role: This parameter specifies the role of the interface in the context of configuring the transport separation in the NMS. Its value needs to be the same as the value of the IPADDRESSV4 Unique IP address identifier (uniqueIpAddressIdentifier) parameter or the IPADDRESSV6 Unique IP address identifier (uniqueIpAddressIdentifier) parameter for the network in the SCF. The allowed value of this parameter is a string, for example: twamp_ngc
If you use vCU autoconnection, fill in the following parameters. Otherwise leave them
unfilled.
siteDescription: This parameter defines the unique ID of the vCU. Its value needs to be the same as the value of the MRBTS AutoConnSiteID parameter specified to identify the vCU. Example: MRBTS-1234
urlNMS: This parameter defines the URL of the NE3S registration endpoint of the NMS. Example: http://<NMS_SBI_VIP or hostname>/NE3S/1.0/NE3SRegistrationNotificationsService
agentIP: This parameter defines the external IP address that the NMS uses to connect back to the vCU. Its value needs to be the same as the vCU OAM IP address, because the NE3S agent runs in the OAM container.
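As a sketch, the filled-in autoconnection values could look as follows; the enclosing section name is an assumption and the host name and addresses are illustrative:

autoConnection:                  # section name is an assumption
  siteDescription: "MRBTS-1234"  # must match the MRBTS AutoConnSiteID value
  urlNMS: "http://nms.example.net/NE3S/1.0/NE3SRegistrationNotificationsService"
  agentIP: "192.0.2.10"          # the vCU OAM IP address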
When you use vCU autoconfiguration, also follow 4.9. [Optional] Configure certificate
management options.
Note:
If you didn't use autoconfiguration, you need to configure these parameters in
CU WebEM after the deployment. For more information on certificate
management, see the Configuring Security in Cloud RAN BTS document.
additionalTACertSecretName: This parameter defines the name of the additional trust anchor (TA) certificate secret. Its value needs to be the same as the value of the name parameter in the taTrustChain_secret.yaml file.
ee_subject_name: This parameter defines the subject name of the End Entity (EE) certificate (the certificate of the BTS). The allowed value of this parameter is a string, for example: CN=MRBTS-1234.
ee_subject_fqdn: This parameter defines the fully qualified domain name (FQDN) to be set as a subject alternative name for the EE certificate. It is optional. The allowed value of this parameter is a list of strings, for example: mrbts1234.nokia.example.
name: This parameter defines the name of the additional TA certificate secret.
additional_tacert: This parameter defines the base64-encoded PEM format TA trust chain.
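Assuming taTrustChain_secret.yaml is a standard Kubernetes Secret manifest, it could look like the following sketch; the structure is an assumption and the certificate content is a placeholder:

apiVersion: v1
kind: Secret
metadata:
  name: additional-ta-cert    # must match additionalTACertSecretName
type: Opaque
stringData:
  additional_tacert: |        # base64-encoded PEM TA trust chain (placeholder)
    LS0tLS1CRUdJTi4uLg==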
Result
You have defined the CNF instance-specific parameters in the values override files. For
instructions on replacing the default parameters with the CNF instance-specific parameters, see
Updating Helm chart configuration values from values override files.
The virtualized gNB distributed unit (vDU) is deployed using Helm charts. Before the deployment,
you need to provide the Helm charts with configuration parameters and information about your
environment. You do this by editing YAML files included in the vDU deployment package.
Figure 10: Updating Helm chart configuration values from values override files
In the CNF deployment package, there are three values override files:
values-override.aic-vdu-cluster-preparation.yaml
values-override.aic-vdu-prerequisite.yaml
values-override.aic-vdu.yaml
Before the deployment, you need to fill in the mandatory parameters in the
values-override.aic-vdu.yaml file and all parameters in the remaining files. After the deployment,
you can provide additional parameter values in vDU WebEM.
It's possible to deploy the CNF using autoconfiguration. In such a case, the CNF automatically
downloads the planned configuration from the network management system (NMS) and you don't
need to provide additional information after the deployment. Using autoconfiguration requires
you to provide additional parameter values for autoconnection and certificate management in the
values-override.aic-vdu.yaml, as well as the initial certificate enrolment secrets in the
cmp_secret.yaml and taTrustChain_secret.yaml files. You can find the files inside the
Secret folder of the CNF deployment package.
Extract the following files from the HelmChart folder inside the vDUCNF<version>.zip
package:
values-override.aic-vdu-cluster-preparation.yaml
values-override.aic-vdu-prerequisite.yaml
values-override.aic-vdu.yaml
nodeSelector: This parameter assigns the pods to specific nodes. It's mainly used in the OCP SNO+1 deployment, where the vDU needs to be deployed on the worker node. You need to define the node labels in advance.
<network name>VLanMode: This parameter defines the Host VLAN network or the Guest VLAN network. Allowed values:
• HOST
• GUEST (default)
VLanID.<network name>: This parameter defines the VLAN ID for the <network name> network for the Host VLAN mode. Leave it empty in case of the Guest VLAN mode.
devicePoolList.slowPathCommonPool: This parameter defines the SR-IOV device pool used by the operations, administration, and maintenance (OAM), tracing, f1c, fhm, and twamp networks.
devicePoolList.fastPathRanNicPool: This parameter defines the SR-IOV device pool used by l1up networks (fast path).
devicePoolList.slowPathRanNicPool: This parameter defines the SR-IOV device pool used by l1cp networks (slow path).
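The sketch below shows how these pools could look in values-override.aic-vdu-prerequisite.yaml, using the aic-intnet.devicePoolList paths referenced later in this section; the pool names are illustrative assumptions:

aic-intnet:
  devicePoolList:
    slowPathCommonPool: "sriov_slow_common"   # OAM, tracing, f1c, fhm, twamp networks
    fastPathCommonPool: "sriov_fast_common"   # f1u and bip networks
    fastPathRanNicPool: "sriov_fast_rannic"   # l1up networks (fast path)
    slowPathRanNicPool: "sriov_slow_rannic"   # l1cp networks (slow path)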
4 Edit the values-override.aic-vdu.yaml file in a text editor and save the changes.
To deploy the vDU, you need to fill in the parameters in the following sections:
global (general parameters, network parameters, autoconnection parameters)
certman-auto-cmp (certificate management configuration parameters)
healing-controller (healing controller parameters)
aic-oamfh (internal fronthaul network DHCP server configuration parameters)
aic-vdu-l2nrt (l2nrt pod configuration parameters)
aic-vdu-l2rt (l2rt pod configuration parameters)
Additionally, you need to fill in the value of the nodeSelector parameter under the
global section and in all subcharts except for certman-auto-cmp.
The values depend on your environment. For guidance, use the following tables and the
comments inside the file.
cnfLayout: This parameter defines the deployment type: distributed RAN (D-RAN) or centralized RAN (C-RAN). Allowed values:
• DRAN (default). The DRAN value means that the vDU will be deployed in SNO with vDU only or SNO+1.
• CRAN. The CRAN value means that the vDU will be deployed in MNO.
• mini. The mini value means that the vDU will be deployed in SNO together with the mini virtualized gNB central unit (vCU) with multi-tenancy.
coreDumpPath: This parameter defines the directory for the host path volume to store the coredump information.
l2nrtCount: This parameter defines the number of l2nrt pods to be deployed. The l2nrtCount parameter value depends on the F1-U throughput. Supported values range from 0 to 3. The default value is 1.
upInstanceCount: This parameter defines the number of ahm or l2rt pods and Nokia Cloud RAN SmartNICs (RAN NICs) to be deployed. Supported values range from 1 to 12. The default value is 1.
pvcName: This parameter defines the PVC name. Its value needs to be unique for each vDU, and remain unchanged between the vDU upgrades. It also needs to be the same as the value of the aic-pvc.pvcName parameter in the values-override.aic-vdu-prerequisite.yaml file. Allowed values:
• Lowercase alphanumeric characters
• -
• .
The PVC name must start and end with an alphanumeric character. Example: vdupvc
storageClassName: This parameter defines the storage class name of the persistent volume. Its value must be the same as the value of the aic-pvc.storageClassName parameter in the values-override.aic-vdu-prerequisite.yaml file. If the value isn't specified, the default storage class is used. The supported values are:
• For OCP SNO: localfs-lvm-sc
• For OCP MNO: ocs-storagecluster-cephfs
image.imagePullSecret: This parameter defines the name of the Kubernetes secret which stores the credentials for authentication during the connection to the image registry. Allowed values are supported secret types. The supported secret types are:
• kubernetes.io/dockercfg
• kubernetes.io/dockerconfigjson
timezone: This parameter defines the time zone as a string in a Region/City format. The default value of this parameter is UTC. Examples:
• Europe/Helsinki
• UTC
custom_extensions.labels: This parameter defines the custom extensions for labels to be injected into ConfigMap, Service, and Pod resources. Example: labels: {nf-type: 5g-nf-producer}
aic-vdu-l2nrt.activeSiblingCore: This parameter defines whether the hyperthreading is enabled in the Kubernetes cluster. Allowed values:
• true (default)
• false (only for Nokia internal purposes)
aic-vdu-l2nrt.l2hi.securityContext.privilegedMode: This parameter defines whether the L2HI container runs with the privileged permission. The default value of this parameter is false.
aic-vdu-l2nrt.l2hi.dptrace.enable: This parameter enables or disables the dptrace function of the L2HI container. Allowed values:
• 0 (function disabled - default)
• 1 (function enabled)
aic-vdu-l2nrt.l2hi.dptrace.dump_payload: This parameter enables or disables the L2HI container payload dump. Allowed values:
• 0 (function disabled - default)
• 1 (function enabled)
aic-vdu-l2nrt.l2hi.dptrace_filter: This parameter defines the configuration of the L2HI dptrace filter. The parameter value format is <messageID>:<queueID>. The default value of this parameter is none:none.
global.l2rtL2PSHyperThreading: This parameter enables a sibling core for the L2PS application. Allowed values:
• true
• false (default)
When the value of the l2rtL2PSHyperThreading parameter is set to true, the value of the aic-vdu-l2rt.deploymentType parameter needs to be set to NR_L2RT_1CL2PS_HT_2CL2LO.
aic-vdu-l2rt.l2rt.dptrace.enable: This parameter enables or disables the dptrace function of the L2RT container. Allowed values:
• 0 (function disabled - default)
• 1 (function enabled)
aic-vdu-l2rt.l2rt.dptrace.dump_payload: This parameter enables or disables the L2RT container payload dump. Allowed values:
• 0 (function disabled - default)
• 1 (function enabled)
aic-vdu-l2rt.l2rt.dptrace.filter: This parameter defines the configuration of the L2RT dptrace filter. The parameter value format is <messageID>:<queueID>. The default value of this parameter is none:none.
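As an example, the l2rt dptrace settings above map to the following nesting in values-override.aic-vdu.yaml, inferred from the dotted parameter paths; the values shown are illustrative:

aic-vdu-l2rt:
  l2rt:
    dptrace:
      enable: 1            # 1 enables the dptrace function of the L2RT container
      dump_payload: 0      # payload dump disabled
      filter: "none:none"  # format <messageID>:<queueID>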
<subchart>.nodeSelector: This parameter assigns the subchart's pods to specific nodes. You need to fill it in for all subcharts except certman-auto-cmp.
fastPathCommonPool: This parameter defines the SR-IOV device pool used by the f1u and bip networks. Its value must be the same as the value of the aic-intnet.devicePoolList.fastPathCommonPool parameter in the values-override.aic-vdu-prerequisite.yaml file.
slowPathFhPool: This parameter defines the SR-IOV device pool used by the fhm, f1c, oam, tracing, and twamp networks. Its value must be the same as the value of the aic-intnet.devicePoolList.slowPathFhPool parameter in the values-override.aic-vdu-prerequisite.yaml file.
fastPathRanNicPool: This parameter defines the SR-IOV device pool used by l1up networks (fast path). Its value must be the same as the value of the aic-intnet.devicePoolList.fastPathRanNicPool parameter in the values-override.aic-vdu-prerequisite.yaml file.
slowPathRanNicPool: This parameter defines the SR-IOV device pool used by l1cp networks (slow path). Its value must be the same as the value of the aic-intnet.devicePoolList.slowPathRanNicPool parameter in the values-override.aic-vdu-prerequisite.yaml file.
fhmInternalNetworkIpPool This parameter defines the fhm plane internal network IP pool.
You need to set this parameter only if the value of the fhmSiso
parameter is direct-connect.
The number of IP addresses must be equal to or higher than the
number of fhm RAN NICs + 3.
The IP addresses can't conflict with the subnets of eth0, OCP,
aic-oamfh.dhcp.service.dhcpv4.subnetsForDC,
and global.fhmIpForDC.ipAddress.
fhmIpForDC.ipAddress This parameter defines the fhm master agent IP. You need to set
this parameter only if the value of the fhmSiso parameter is
direct-connect.
The IP address can't conflict with the subnets of eth0, OCP,
aic-oamfh.dhcp.service.dhcpv4.subnetsForDC,
and global.fhmIpForDC.ipAddress.
fhmIpForDC.role This parameter defines the role of the interface when the
transport separation is configured in the NMS.
vlanInfo.vid This parameter defines the guest VLAN for the interface. If
its value is not specified, a host VLAN is used.
For the l2biprt and l2biphi networks, the host VLAN is not
supported.
vlanInfo.pcp This parameter defines the priority code point (PCP) for the
VLAN.
fhuc.vlName
fhuc.defaultUnderlyingIface
fhuc.vid
These parameters define the interface list for the fhuc under the
RAN NIC on the node with the running ahm-<x> pod. The x
value can range from 0 to 11.
Example:
fhuc:
  - vlName: "fhucplane01"
    defaultUnderlyingIface: "ecpri0"
    vid: "301"
    pcp: "7"
fhm.vlName
fhm.defaultUnderlyingIface
fhm.vid
These parameters define the interface list for the fhm under the
RAN NIC on the node with the running ahm-<x> pod, when the
value of the fhmSiso parameter is set to direct-connect.
The x value can range from 0 to 11.
Example:
fhm:
  - vlName: "fhmplane01"
    defaultUnderlyingIface: "ecpri0"
    vid: ""
The value of the fhm.vlName parameter must be unique and
needs to include the fhm prefix.
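As a hedged illustration of the direct-connect case, the following fragment combines the
fhm-related parameters described in this table. The flat key layout and all address values
are assumptions; align them with the layout of your values override file and your own
network plan.
fhmSiso: direct-connect
fhmInternalNetworkIpPool: "192.168.50.0/28"   # at least (number of fhm RAN NICs + 3) addresses
fhmIpForDC:
  ipAddress: "192.168.60.5"                   # fhm master agent IP; placeholder value
  role: "FH"                                  # hypothetical role for NMS transport separation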
"
role:
- "F1-U_slice1"
Modify the f1Slice1 section key name in each copy, for example, f1uSlice2 and
so on, and fill in the fields as follows:
vlanInfo.vid This parameter defines the guest VLAN for the F1-U
slice. If its value isn't set, a host VLAN is used.
4.11 [Optional] If you use the Two-Way Active Measurement Protocol (TWAMP), fill in the
additional network parameters.
For each interface with the configured TWAMP, copy the example in the file under
global.externalNetwork.twamp_f1c, including all the spaces:
twampF1:
vlanInfo:
# vlName should be set the same as the VLAN interface name in SCF
vlName: "twamp_f1"
defaultUnderlyingIface: "twamp"
vid: ""
ipInfo:
- rangeStart: ""
rangeEnd: ""
subnet: ""
type: "physical"
role:
- "twamp_f1"
Modify the twampF1 section key name in each copy, for example, twamp_ng or
twamp_f1c, and fill in the fields as follows:
vlanInfo.vlName VLAN name used for identifying the VLAN in the SCF.
Allowed value is a string, for example: twamp_f1
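For example, a copy renamed for the NG interface could look as follows. The VLAN ID and
IP values are placeholders only; the vlName value must match the VLAN interface name in
the SCF.
twamp_ng:
  vlanInfo:
    vlName: "twamp_ng"
    defaultUnderlyingIface: "twamp"
    vid: "305"
  ipInfo:
    - rangeStart: "192.0.2.10"
      rangeEnd: "192.0.2.20"
      subnet: "192.0.2.0/24"
  type: "physical"
  role:
    - "twamp_ng"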
If you use the vDU autoconnection, also fill in the following parameters. Otherwise,
leave them unfilled.
siteDescription This parameter defines the unique ID of the vDU. Its value needs to be the same
as the value of the MRBTS AutoConnSiteID parameter specified to identify
the vDU.
Example: MRBTS-1234
urlNMS This parameter defines the URL of the NE3S registration endpoint of the NMS.
Example: http://<NMS_SBI_VIP or
hostname>/NE3S/1.0/NE3SRegistrationNotificationsService
agentIP This parameter defines the external IP address that the NMS uses to connect
back to the vDU. Its value needs to be the same as the vDU OAM IP address,
because the NE3S agent runs in the OAM container.
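Taken together, a filled-in autoconnection fragment could look like the sketch below. The
flat layout is an assumption, and the agentIP value is a placeholder that must equal the
vDU OAM IP address.
siteDescription: "MRBTS-1234"
urlNMS: "http://<NMS_SBI_VIP or hostname>/NE3S/1.0/NE3SRegistrationNotificationsService"
agentIP: "198.51.100.10"   # placeholder; must be the vDU OAM IP address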
When you use the vDU autoconfiguration, also follow 4.13. [Optional] Configure
certificate management options.
Note:
If you didn't use autoconfiguration, you need to configure these parameters in
vDU WebEM after the deployment. For more information on certificate
management, see the Configuring Security in Cloud RAN BTS document.
additionalTACertSecretName This parameter defines the name of the additional trust anchor
(TA) certificate secret. Its value needs to be the same as
the value of the name parameter in the
taTrustChain_secret.yaml file.
ee_subject_name This parameter defines the subject name of the End Entity (EE)
certificate (the certificate of the BTS).
The allowed value of this parameter is a string, for example:
CN=MRBTS-1234.
ee_subject_fqdn This parameter defines the fully qualified domain name (FQDN)
to be set as a subject alternative name for the EE certificate. It
is optional.
The allowed value of this parameter is a list of strings, for
example: mrbts1234.nokia.example.
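A minimal sketch of the certificate management parameters, assuming a flat key layout and
reusing the example values from the descriptions above:
additionalTACertSecretName: "ta-trust-chain-secret"   # must match the name in taTrustChain_secret.yaml
ee_subject_name: "CN=MRBTS-1234"
ee_subject_fqdn:
  - "mrbts1234.nokia.example"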
4.14 Configure the Dynamic Host Configuration Protocol (DHCP) server options for the
fronthaul network.
The radio units (RUs) use the DHCP server to obtain their IP addresses and identify the
vDU. You can use either an internal DHCP server located in the vDU, or an external
DHCP server located in the operator environment. For more information, see DHCP
process in the Configuring Cloud RAN BTS Transport document.
When the internal vDU DHCP server is enabled, you need to fill in the following
parameters:
dhcpv4.subnets.data.clientIdentifier This parameter defines a list of client identifiers for the RUs
in the <product code>/<serial number> format. If left empty, the
DHCP server doesn't check the client identifier part of the DHCP packets
(DHCP option 61).
The allowed value is a list of strings, for example:
clientIdentifier:
- "753/001"
- "753/002"
- "753/003"
- "755/001"
- "755/002"
This parameter is optional.
service.dhcpv4.subnetsForDC.subnet This parameter defines the subnets used when the
value of the fhmSiso parameter is set to direct-connect.
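As an illustration, the internal DHCP server parameters above could be combined as in the
following sketch. The key nesting and the subnet value are assumptions for illustration only.
dhcpv4:
  subnets:
    data:
      clientIdentifier:
        - "753/001"
        - "755/001"
service:
  dhcpv4:
    subnetsForDC:
      subnet: "10.40.0.0/24"   # hypothetical subnet for the direct-connect case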
Result
You have defined the CNF instance-specific parameters in the values override files. For
instructions on replacing the default parameters with the CNF instance-specific parameters, see
Updating Helm chart configuration values from values override files.
Postrequisites
Before deploying the vDU, you need to deploy the Nokia proprietary Kubernetes operators for
Cloud RAN BTS:
RAN NIC Software Controller, which controls and handles the life cycle management (LCM)
operations of the RAN NICs.
Nokia Advanced Synchronization Kubernetes Operator (Nokia Synchronization Operator), which
provides the Precision Time Protocol (PTP) time reference for the vDU.
The Nokia proprietary Kubernetes operators for AirScale Cloud RAN BTS (Cloud RAN BTS) are
deployed using Helm charts. Before the deployment, you need to provide the Helm charts with
configuration parameters and information about your environment. You do this by editing YAML
files included in the operator deployment package.
Purpose
The Nokia proprietary Kubernetes operators for Cloud RAN BTS hold specific functions related to
the virtualized gNB distributed unit (vDU) operations:
Nokia Cloud RAN SmartNIC (RAN NIC) Software Controller oversees and handles the life cycle
management (LCM) operations of RAN NICs.
Nokia Advanced Synchronization Kubernetes Operator (Nokia Synchronization Operator)
provides the Precision Time Protocol (PTP) time reference for the vDU.
The operators have a structure similar to the cloud-native network functions (CNFs), and their
deployment is automated using Helm. Resource creation and pod deployment details are in the
Helm charts included in the operator deployment package. The Helm charts contain default
configuration values of the mandatory parameters. You need to override the default
configuration values with CNF instance-specific values by editing the values override
YAML files provided with the Helm charts. After the deployment, you can provide additional
parameter values in vDU WebEM.
Note:
The virtualized gNB central unit (vCU) and the vDU deployment packages contain three
value override files. The operator deployment packages contain a single value override file.
Note:
You need to deploy the operators in their own namespaces on the same Red Hat
OpenShift Container Platform (OCP) cluster as the vDU. You need to deploy the operators
before deploying the vDU.
Download the operator software packages from Support Portal. For instructions, see
Downloading software packages from SWSt.
Note:
You need to perform the procedure separately for each operator.
Procedure
1 Extract the values override file for the selected operator.
Use:
ran-nic-sw-controller<version>.zip for RAN NIC Software Controller.
nokia-sync-operator<version>.zip for Nokia Synchronization Operator.
Step example
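For instance, assuming the downloaded package is in the current directory, the RAN NIC
Software Controller archive can be extracted with a standard unzip command:
unzip ran-nic-sw-controller<version>.zip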
sccName This parameter defines the security context constraints (SCC) name. It
allows administrators to control permissions for pods.
If the sccName parameter value is set to privileged, a new SCC isn't
created and a privileged SCC is bound to the serviceaccounts
parameter.
If the sccName parameter value isn't set to privileged or is set to
null, a new SCC is created and bound to the serviceaccounts
parameter.
Step example
Example of the RAN NIC Software Controller values override file:
image:
registry:
pullPolicy: IfNotPresent
controller:
repository: "controller"
tag: "2.2.1"
labeler:
repository: "labeler"
tag: "2.2.1"
agent:
repository: "agent"
tag: "2.2.1"
tftp-init:
repository: "tftp-init-container"
tag: "2.2.1"
tftp:
repository: "tftp"
tag: "2.2.1"
dhcp:
repository: "tftp-dhcp-container"
tag: "2.2.1"
# If sccName is "privileged", it won't create a new SCC, only bind the privileged
# SCC to serviceaccounts.
# If sccName isn't "privileged", it will create a new SCC and bind the new SCC to
# serviceaccounts.
sccName: "privileged"
global.timezone This parameter defines the time zone as a string in a Region/City format.
The default value of this parameter is UTC.
Example:
• Europe/Helsinki
• UTC
Step example
Example of the Nokia Synchronization Operator values override file:
global:
provisioner: Prometheus
image:
registry: image-registry.openshift-image-registry.svc:5000/{{ .Release.Namespace }}
timezone: UTC
provisionerSpecific:
Prometheus:
authorization:
kind: ClusterRole
name: cluster-monitoring-view
dataSource:
endpoint: https://prometheus-k8s.openshift-monitoring.svc.cluster.local:9091
Result
You have defined the CNF instance-specific parameters in the values override files. For
instructions on replacing the default parameters with the CNF instance-specific parameters, see
Updating Helm chart configuration values from values override files.
You need to install Red Hat OpenShift Container Platform (OCP) on hardware servers to provide a
Container-as-a-Service (CaaS) framework for the deployment of the cloud-native network
functions (CNFs) of the AirScale Cloud RAN BTS (Cloud RAN BTS).
In Cloud RAN BTS, the CNFs are deployed on the cloud infrastructure using Kubernetes containers,
organized into pods. Containers are a type of software that can virtually package and isolate
applications. This way, the applications can share access to an operating system without the need
for a virtual machine (VM).
To prepare the cloud infrastructure for the CNF deployment, you need to install and configure the
CaaS framework on the hardware servers. In Cloud RAN BTS the CaaS framework is provided by
OCP.
Note:
The OCP host configuration is defined in the edge data center plan.
Notice:
Cloud RAN BTS supports CNF scale-out. The possibility of CNF scale-out when the CNF is
operational is subject to available cloud infrastructure resources. During initial cloud
infrastructure deployment and its initial capacity planning, Nokia recommends considering
the following:
• Failover capacity according to the CNF requirements
• Additional capacity required for cloud infrastructure and CNF maintenance
• Additional capacity to address expected network and traffic-related growth
You need to deploy and commission Nokia AirFrame Fronthaul Gateway (FHGW) at the virtualized
distributed unit (vDU) site to ensure connectivity between the vDU and the radio units (RUs),
Ethernet switching, radio connection aggregation, and synchronization of the connected RUs.
Purpose
Note:
The following procedure presents a high-level overview of the FHGW deployment,
integration, and commissioning process. For complete instructions, refer to the documents
listed in the procedure.
Figure 15: Overview of the FHGW deployment with the MantaRay NM Plug and Play (PnP) method
(CM: Configuration Management)
In AirScale Cloud RAN BTS (Cloud RAN BTS), FHGW provides connectivity from the vDU to the
Common Public Radio Interface (CPRI) RUs, because direct CPRI connectivity from the
commercially available servers isn't possible in the cloud environment. The vDU uses Enhanced
CPRI (eCPRI), and FHGW uses CPRI towards the RUs.
For more information on the FHGW in Cloud RAN BTS, see Operating Documentation/Cloud RAN
BTS System/Integrate and configure/Cloud RAN BTS Site Solution and Operating
Documentation/Nokia AirFrame Fronthaul Gateway/Integrate and configure/FHGW
characteristics for Cloud RAN BTS site solution.
PnP installation and configuration with MantaRay NM is the default FHGW installation method for
Cloud RAN BTS. The procedure consists of the following main parts:
Manually preparing the FHGW PnP configuration with NADCM and NEAT.
Integrating the FHGW with a network management system (NMS) using MantaRay NM.
Configuring the FHGW functionality with the configuration plan using MantaRay NM.
Procedure
1 Install FHGW together with the vDU hardware servers at the far edge or cell site.
2 In NEAT Planner Application, prepare an edge data center plan, including FHGWs.
For instructions, see the Planning workflow chapter in the Operating Documentation/Nokia
Edge Automation Tool/Operations and Maintenance/Operating NEAT Planner Application
document.
For instructions, see the Adding a new edge data center chapter in the Operating
Documentation/Nokia Edge Automation Tool/Operations and Maintenance/Operating and
Maintaining NEAT document.
Tip:
You need to import the edge data center plan to NEAT before deploying OCP on the
servers target for installation of the CNFs. Skip this step if you have already imported
the edge data center plan to NEAT.
4 In NEAT, run the discover-edgedc-hw workflow to scan the edge data center hardware,
including FHGW, to the NADCM hardware inventory.
Note:
Instead of using the discover-edgedc-hw workflow, you can manually add FHGW
hardware to the NADCM hardware inventory using the NADCM graphical user interface
(GUI). For instructions, see Operating Documentation/Nokia AirFrame Data Center
Manager/Operations and Maintenance/Operating and Maintaining
NADCM/Managing data center layout/Adding new hardware.
Note:
Before running the fhgw-pnp-ucf-deployment workflow, make sure that the
FHGW device is present in the NADCM hardware inventory.
Step result
The fhgw-pnp-ucf-deployment workflow produces three artifacts:
fhgw-user-config-pnp.yaml, which is not used in the later steps of the procedure
FHGW-124220-TL2_raml_autoconn.xml
FHGW-124220-TL2_raml.xml
You can access the generated files in the NEAT GUI under Edge Data Center
Inventory›ARTIFACTS›INITIAL SWITCH CONFIG.
7 Add information about the Maintenance Region (MR) for the FHGW Managed Object (MO) to
the FHGW-124220-TL2_raml_autoconn.xml file.
For instructions, see Configuring Radio Network Elements with Plug and Play in MantaRay
NM Operating Documentation.
the target-embedded software for FHGW is pulled from MantaRay NM and updated.
the FHGW receives the configuration data from MantaRay NM.
For more information, see Configuring Radio Network Elements with Plug and Play in
MantaRay NM Operating Documentation.
For instructions, see Configuring Radio Network Elements with Plug and Play in MantaRay
NM Operating Documentation.
Notice:
To successfully complete the FHGW configuration, you need to import the
FHGW-124220-TL2_raml.xml file after completing the PnP procedure and
uploading the FHGW object topology to MantaRay NM.
Before you can deploy the cloud-native network functions (CNFs) with MantaRay NM, you need to
set the correct permissions, integrate MantaRay NM with central container registry (CCR), install a
fast pass package for the current AirScale Cloud RAN BTS (Cloud RAN BTS) software version,
integrate the cloud instance with MantaRay NM, and prepare a CNF deployment plan.
You need to configure Domain Name Server (DNS), firewall settings, and central container registry
(CCR), and set the correct permissions before you can perform any AirScale Cloud RAN BTS (Cloud
RAN BTS) life cycle management (LCM) operations in MantaRay NM.
For instructions, see the Configuring forwarder in MantaRay NM DNS chapter under the
Administering DNS category in MantaRay NM Operating Documentation.
For more information, see the Configuring Firewall for MantaRay NM chapter under the
Administering MantaRay NM System Security category in MantaRay NM Operating
Documentation.
CCR configuration
CCR is an Open Container Initiative (OCI) compliant registry which provides a storage and
content delivery system for OCI artifacts. The content stored in the CCR is available:
for direct use by Kubernetes, which controls the applications running in Container as a Service
(CaaS) clusters.
for indirect use by distributed container registries closer to the Kubernetes cluster, which
controls the applications running in CaaS clusters.
Before you can push software images to the CCR, you need to configure it and integrate it with
MantaRay NM. For more information, see Configuring and integrating CCR with MantaRay NM.
Table 52: MantaRay NM permissions specific for the Cloud RAN BTS LCM operations
For general information on roles and permissions in MantaRay NM, see the Default roles and
permissions chapter under the Permission Management Help category in MantaRay NM
Operating Documentation.
For instructions on creating users, see the Creating users chapter under the User Management
Help category in MantaRay NM Operating Documentation.
You need to configure and integrate the central container registry (CCR) with MantaRay NM to
enable the possibility of the Open Container Initiative (OCI) software image distribution.
Purpose
Note:
The following procedure presents a high-level overview of the CCR configuration and
integration process. For complete instructions, see the following documents:
Administering Central Container Registry in MantaRay NM Operating Documentation
for CCR configuration
Integrating Container Registry to MantaRay NM in MantaRay NM Operating
Documentation for CCR integration with MantaRay NM
During the installation of MantaRay NM, CCR is installed as a container service on the Compute1
virtual machine (VM), which is an optional VM for MantaRay NM. After the CCR is installed,
configured, and integrated with MantaRay NM, you can use MantaRay NM to automatically
onboard the CNF software images to the CCR.
Note:
Instead of the CCR in MantaRay NM, you can use an external container registry and
integrate it to MantaRay NM. For more information, see Integrating Container Registry to
MantaRay NM in MantaRay NM Operating Documentation.
Make sure that you have installed MantaRay NM and that the CCR is successfully installed on the
Compute1 VM. For instructions, see Checking Central Container Registry service status under
Administering Central Container Registry in MantaRay NM Operating Documentation.
Procedure
1 Open the CCR user interface (UI) in a web browser.
Step example
You can obtain the fully qualified domain name (FQDN) of the MantaRay NM load balancer or
MantaRay NM IP addresses for a VM, where the CCR is running, by following Locating the
right virtual machine for a service under Integration/Integrating Juniper Elements to
MantaRay NM/Preparation before integration/Prerequisites for MantaRay NM in
MantaRay NM Operating Documentation.
2 Log in to the CCR UI using the default user name ccradmin and the CCR admin password.
3 Configure and prepare the CCR for integration with MantaRay NM.
3.1 Create separate projects in the CCR for the OCP and CNF software images.
Note:
By default, all Cloud RAN BTS software images automatically onboarded into
the CCR are stored under the same project. However, Nokia recommends
configuring separate projects for OCP and CNF software artifacts, by modifying
the software onboarding operation plan in MantaRay NM. For more information
on the software onboarding operations in MantaRay NM, see Onboarding
software images to the container registry.
3.2 Create a robot account in the CCR.
The robot account is used by the OCP cluster's Kubernetes operations for fetching
software images from the CCR.
For instructions, see Creating robot account under Administering Central Container
Registry in MantaRay NM Operating Documentation.
Step result
After creating a project and a robot account, you can manually push and pull objects to the
container registry using curl commands. For instructions, see Pulling/pushing of artifacts
under Administering Central Container Registry in MantaRay NM Operating
Documentation.
For instructions, see Preparing MantaRay NM for Container Registry integration under
Integrating Container Registry to MantaRay NM in MantaRay NM Operating
Documentation.
For instructions, see Setting up firewall rules under Integrating Container Registry to
MantaRay NM/Preparing the intermediate system in MantaRay NM Operating
Documentation.
6.3 [Optional] If you are using an external container registry, create an OAuth MO.
Result
You have configured and integrated the CCR with MantaRay NM.
To check the CCR service status, see Checking Central Container Registry service status under
Administering Central Container Registry in MantaRay NM Operating Documentation.
To check the CCR integration status, see Verifying Container Registry integration under
Integrating Container Registry to MantaRay NM in MantaRay NM Operating Documentation.
To remove the CCR integration to MantaRay NM, see Removing Container Registry integration
under Integrating Container Registry to MantaRay NM in MantaRay NM Operating
Documentation.
You need to integrate Nokia Edge Automation Tool (NEAT) with MantaRay NM to enable the cloud
instance integration and deintegration operations in MantaRay NM.
Purpose
Note:
The following procedure presents a high-level overview of the NEAT integration process.
For complete instructions, see Integrating Nokia Edge Automation Tool to MantaRay NM
in MantaRay NM Operating Documentation.
In AirScale Cloud RAN BTS (Cloud RAN BTS), the cloud-native network functions (CNFs) are
deployed on Red Hat OpenShift Container Platform (OCP) clusters. The OCP clusters are deployed
and managed by NEAT. You need to integrate NEAT with MantaRay NM to integrate or deintegrate
the target OCP cluster using MantaRay NM.
After the integration, NEAT is connected to MantaRay NM through an IP network, using HTTPS and
the Domain Name Server (DNS) backbone.
Figure 19: Connection between the integrated network elements and MantaRay NM
Procedure
1 Prepare NEAT for integration with MantaRay NM.
For instructions, see Preparing NEAT for integration under Integrating Nokia Edge
Automation Tool to MantaRay NM in MantaRay NM Operating Documentation.
For instructions, see Preparing MantaRay NM for NEAT integration under Integrating Nokia
Edge Automation Tool to MantaRay NM in MantaRay NM Operating Documentation.
Result
You have configured and integrated NEAT with MantaRay NM.
To check the NEAT integration status, see Verifying NEAT integration under Integrating Nokia
Edge Automation Tool to MantaRay NM in MantaRay NM Operating Documentation.
To remove the NEAT integration to MantaRay NM, see Removing NEAT integration under
Integrating Nokia Edge Automation Tool to MantaRay NM in MantaRay NM Operating
Documentation.
You need to integrate the Red Hat OpenShift Container Platform (OCP) cluster with MantaRay NM
before you can deploy the cloud-native network functions (CNFs) on this OCP cluster.
In AirScale Cloud RAN BTS (Cloud RAN BTS), the CNFs are deployed on OCP clusters. You need to
integrate the target Kubernetes cluster with MantaRay NM before deploying the CNF using the
Cloud Integrate operation, performed by the MantaRay NM Workflow Engine.
You can start the Cloud Integrate operation in MantaRay NM with the following methods:
You can integrate the Red Hat OpenShift Container Platform (OCP) instance with MantaRay NM by
using MantaRay NM graphical user interface (GUI).
Purpose
In AirScale Cloud RAN BTS (Cloud RAN BTS), the CNFs are deployed on OCP clusters. You need to
integrate the target OCP cluster with MantaRay NM before deploying the CNF using the Cloud
Integrate operation, performed by the MantaRay NM Workflow Engine.
As a result of the Cloud Integrate operation, the CLOUD and K8SAPI objects with the
Kubernetes configuration are created in MantaRay NM System Data Access (NASDA) database
under the PLMN-PLMN root object. Integrated objects are visible in MantaRay NM Monitor.
Note:
<CLOUD DN> is the distinguished name (DN) of the CLOUD object. You need to define it
during the OCP deployment. For more information, see Operating
Documentation/AirScale Cloud RAN BTS System/Operation/Deploying and
Operating Nokia Red Hat OCP.
Procedure
1 Log in to the MantaRay NM start page using a web browser.
2.1 In the Opening racpmc.jnlp dialog window, click use Java Web Start
Launcher.
2.2 Confirm all security questions about the digital signature and software provider.
Step result
The CM Operations Manager Java application opens.
Step result
Figure 22: Workflow Engine›Operation list›Cloud Operations view
5 Click the green arrow icon next to the Cloud Integration›Cloud Integrate
operation.
Step result
The Cloud Integrate dialog pops up.
6 In the Cloud Integrate dialog, provide information about the CLOUD object.
6.2 Select and upload the kubeconfig file with the Kubernetes configuration from your
local disc.
6.3 In the Type field, select the type of the CLOUD object.
Step example
Example of the cloud software version: ocp-4.12.25-nokia.23.8.2
Step result
The cloud instance has been integrated into MantaRay NM.
You can check the integrated CLOUD objects in MantaRay NM Monitor application. For
instructions, see Checking integrated CLOUD objects in Monitor under Managing cloud
services/Cloud operations/Cloud Integrate in MantaRay NM Operating Documentation.
You can integrate the Red Hat OpenShift Container Platform (OCP) instance with MantaRay NM by
using MantaRay NM CLI.
Purpose
In AirScale Cloud RAN BTS (Cloud RAN BTS), the CNFs are deployed on OCP clusters. You need to
integrate the target OCP cluster with MantaRay NM before deploying the CNF using the Cloud
Integrate operation, performed by the MantaRay NM Workflow Engine.
As a result of the Cloud Integrate operation, the CLOUD and K8SAPI objects with the
Kubernetes configuration are created in MantaRay NM System Data Access (NASDA) database
under the PLMN-PLMN root object. Integrated objects are visible in MantaRay NM Monitor.
Note:
<CLOUD DN> is the distinguished name (DN) of the CLOUD object. You need to define it
during the OCP deployment. For more information, see Operating
Documentation/AirScale Cloud RAN BTS System/Operation/Deploying and Operating
Nokia Red Hat OCP.
Procedure
1 Log in as the omc user to the MantaRay NM virtual machine (VM) node.
Note:
To locate the VM, see the Locating the right virtual machine for a service chapter in
MantaRay NM Operating Documentation.
print(json.dumps(yaml.load(sys.stdin)))' <
Tip:
The kubeconfig file is created during the OCP deployment. It needs to contain the
embedded certificates, which allow setting up secure Transport Layer Security (TLS)
connectivity to the OCP Kubernetes application programming interface (API).
Step result
You have created the kubeconfig_parameters.json file with the converted content
of the kubeconfig.yaml file.
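A complete conversion command, assuming Python 3 with the PyYAML module is available on the
VM, could look like this:
python3 -c 'import sys, yaml, json; print(json.dumps(yaml.safe_load(sys.stdin)))' < kubeconfig.yaml > kubeconfig_parameters.json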
Step example
"managedObjects": [
"moId": "PLMN-PLMN/CLOUD-1/K8SAPI-1/KUBECONFIG-1",
"moClass": {
"id": "com.nokia.mantaraynm.lcm.ssh:KUBECONFIG",
"version": "1.0"
},
"planOperation": "CREATE",
5 Import the kubeconfig.json file content to the MantaRay NM Security Sensitive Data
(SSD) by entering:
operations/api/sm/operations/v1/execute/ssd-import?overwrite=true" -v
6 Verify if the KUBECONFIG object was created in the MantaRay NM SSD by entering:
PLMN/CLOUD-1/K8SAPI-1/KUBECONFIG-1"]}}' -H "Content-Type:application/vnd.nokia-
fqdn>/sm-operations/api/sm/operations/v1/execute/ssd-export?fileType=JSON
Step result
PLMN-PLMN/CLOUD-1/K8SAPI-1/KUBECONFIG-1 is visible in the output.
body={"managedObjects":[{"moId":"PLMN-
PLMN/CLOUD-1/K8SAPI-1/KUBECONFIG-1","moClass":{"id":"com.nokia.netact.lcm.ssd:KUB
ECONFIG","version":"1.0"},"planOperation":"CREATE","parameters":{"apiVersion":"",
"kind":"","clusters":"","contexts":"","current-
context":"","preferences":{},"users":""}}]}
You can integrate the Red Hat OpenShift Container Platform (OCP) instance with MantaRay NM by
using MantaRay NM REST application programming interface (API).
Purpose
In AirScale Cloud RAN BTS (Cloud RAN BTS), the CNFs are deployed on OCP clusters. You need to
integrate the target OCP cluster with MantaRay NM before deploying the CNF using the Cloud
Integrate operation, performed by the MantaRay NM Workflow Engine.
As a result of the Cloud Integrate operation, the CLOUD and K8SAPI objects with the
Kubernetes configuration are created in MantaRay NM System Data Access (NASDA) database
under the PLMN-PLMN root object. Integrated objects are visible in MantaRay NM Monitor.
Note:
<CLOUD DN> is the distinguished name (DN) of the CLOUD object. You need to define it
during the OCP deployment. For more information, see Operating
Documentation/AirScale Cloud RAN BTS System/Operation/Deploying and Operating
Nokia Red Hat OCP.
Procedure
1 Log in as the omc user to the WAS MantaRay NM virtual machine (VM) node.
2 Access the startOperation endpoint of the CM Operations REST API using the HTTP
POST method.
The request should contain the Content-Type header with the multipart/form-data
value.
Step example
https://<cluster hostname>/mantaraynm/cm/open-api/operations/v1/start
Step example
"operationAttributes": {
"cloudDn": "PLMN-PLMN/CLOUD-1",
"version": "ocp-4.12.25-nokia.23.8.2"
Result
The cloud instance has been integrated into MantaRay NM.
You can check the integrated CLOUD objects in MantaRay NM Monitor application. For
You need a deployment plan to trigger the cloud-native network function (CNF) deployment
operation in MantaRay NM. You can prepare a deployment plan file in a text editor and import it to
MantaRay NM, or create it in MantaRay NM Configuration Management (CM) Editor.
In AirScale Cloud RAN BTS (Cloud RAN BTS), CNFs are deployed on the cloud infrastructure using
Kubernetes containers organized into pods. Pod deployment is automated using Helm, which is a
system for simplifying container management.
The CNF package contains all elements needed to instantiate the CNF, organized in several Helm
chart folders:
CNF manifest file JSON file that describes the complete package contents and serves
as a simple CNF workload descriptor.
Values override files YAML files that collect the CNF parameters which can be modified by
the operator.
ne_compatibility file YAML file that contains information about the compatible software
versions.
JSON schema JSON file that imposes a structure on the YAML value override files.
Container images CNF images that are onboarded to the container registry before the
CNF deployment or upgrade operation.
The CNF deployment packages for different CNFs may contain a different number of Helm charts.
You can check the number of Helm charts in the manifest.json file in the deployment
package. MantaRay NM deploys the Helm charts contained in the CNF deployment package in an
order defined by the deployment plan.
After creating the plan, you need to override the default configuration values in Helm charts with
parameters that you have defined in the values override files. For more information, see Updating
Helm chart configuration values from values override files.
Cloud-native network function (CNF) parameters used in AirScale Cloud RAN BTS (Cloud RAN BTS)
life cycle management (LCM) operation plans
cloudDN Distinguished name (DN) of the cloud instance integrated to MantaRay NM. Use the
DN of the cloud instance intended for the vDU or the vCU. The value of the parameter
needs to be unique in the MantaRay NM instance. Mandatory.
deploymentProfile List of the Helm charts to be executed. The number of items should be the
same as the number of Helm charts containing the deploymentSequence parameter
inside the manifest file. For chart, use the names of the tgz packages containing the
Helm charts in the HelmChart folder inside the CNF distribution package. Mandatory.
For the virtualized gNB distributed unit (vDU), these are:
• values-override.aic-vdu-cluster-preparation.yaml
• values-override.aic-vdu-prerequisite.yaml
• values-override.aic-vdu.yaml
For the virtualized gNB central unit (vCU), these are:
• values-override.aic-vcu-cluster-preparation.yaml
• values-override.aic-vcu-prerequisite.yaml
• values-override.aic-vcu.yaml
chart Name of the Helm chart in the Helm chart folder inside the CNF distribution package.
For example: aic-vcu-cluster-preparation-17.250.398.tgz. Mandatory.
values The Base64-encoded values override file. If you used the CNF Plan Prepare
operation, leave this parameter empty. For more information, see Updating Helm chart
configuration values from values override files. Mandatory.
releaseName Name of a Helm chart instance running in the Kubernetes cluster, in the
<CNF id>-<chart name and release> format. Mandatory.
secrets Name of Kubernetes secrets. User-defined name of the Kubernetes secret. Optional.
containerRegistries Name of container registries used to install the CNF. User-defined name
of the container registry. Mandatory.
name Name of the managed object. User-defined name of the CNF object. Optional.
neRelId Name of the network element this CNF instance represents. For example:
PLMN-PLMN/MRBTS-15. Optional.
namespace Target Kubernetes cluster namespace, where the CNF instance resources are
deployed. You can use either an already existing namespace or create a new one. If you
provide an already existing namespace, it will be used. If the namespace doesn't exist
yet, it will be created. Use a unique namespace for each CNF. Optional.
type Type of the deployed CNF or Nokia Kubernetes Operator for Cloud RAN BTS. For
example: vDUCNF, vCUCNF, NOP_ran-nic-sw-controller, NOP_nokia-sync-operator. Optional.
release Main release of the CNF software package. For example: 24R2. Optional.
You can create a cloud-native network function (CNF) deployment plan using a text editor and
manually import it to MantaRay NM Configuration Management (CM) Editor.
Purpose
If you use a text editor, prepare a plan in one of the following formats:
RAML2.0
CSV
Simple CSV
Note:
XLSX format doesn't support long string values in the deploymentProfile/values
field of the CNF object. XLSX format supports up to 32,767 characters.
Procedure
1 Base64-encode the filled-in value override files using the following Linux command:
Run the command for each file in the CNF deployment package and copy the output.
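A typical encoding command, assuming GNU coreutils base64 (the -w 0 option disables line
wrapping, so the output is a single string):
base64 -w 0 values-override.aic-vdu.yaml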
2 Create a deployment plan file using a text editor and manually fill the contents of the
Plan mandatory and optional parameters according to CNF object parameters in LCM
operation plans.
Step example
Example of a virtualized gNB central unit (vCU) deployment plan in a RAML format:
id="PlanConfiguration( 34 )">
<header>
<log dateTime="2023-06-28T01:13:24.000+02:00"
</header>
id="102397" operation="create">
<defaults name="System"/>
<list name="deploymentProfile">
<item>
<p name="chart">aic-vcu-cluster-preparation-17.402.0.tgz</p>
<p name="order">0</p>
<p name="releaseName">aic-vcu-cluster-preparation</p>
</item>
<item>
<p name="chart">aic-vcu-prerequisite-18.554.6.tgz</p>
<p name="order">1</p>
<p name="releaseName">aic-vcu-prerequisite</p>
</item>
<item>
<p name="chart">aic-vcu-19.250.59.tgz</p>
<p name="order">2</p>
<p name="releaseName">aic-vcu-main-chart</p>
</item>
</list>
<p name="cloudId">PLMN-PLMN/CLOUD-2127</p>
<p name="cnfSwId">vCUCNF23R3_19.250.59</p>
<list name="containerRegistries">
<p>ImageRegistry-1</p>
</list>
<p name="namespace">cran1</p>
<list name="secrets">
<p>CNF-cmp-secret</p>
<p>Mantaray-MN-taTrustChain-secret</p>
<p>pull-secret</p>
</list>
</managedObject>
</cmData>
</raml>
Step result
The Import window pops up.
3.4 Select the plan file to be imported and edit the plan name and import options if
necessary.
You can create a cloud-native network function (CNF) deployment plan with MantaRay NM
Configuration Management (CM) Editor.
Procedure
1 Log in to the MantaRay NM start page using a web browser.
Step result
Figure 24: MantaRay NM start page
2.1 In the Opening cmedit.jnlp dialog window, click use Java Web Start
Launcher.
2.2 Confirm all security questions about the digital signature and software provider.
Step result
The CM Editor Java application opens.
Step example
Step result
The new plan is added to the navigation tree. The Plan Header view is shown in the main
view. You can modify the plan name that has been automatically assigned by the application.
Step example
Note:
Maximum length of a plan name is 200 characters.
4.2 [Optional] Check the Enable policy plans box to define whether policy plans
(exception or reservation plans) are taken into account.
When you select Enable policy plans, the content of the plan is validated when
browsing or editing parameter values or managed objects (MOs) in the MO views.
When there are deviating parameter values or MOs, tooltips show the deviating
values, and the parameter field background is yellow.
4.3 [Optional] From the Group drop-down list, choose a group for the plan.
4.4 [Optional] From the Expiration drop-down list, select the plan expiration date.
The Expiration option defines for how long object or parameter modifications are
prevented for the policy plan content.
4.5 From the Target configuration drop-down list, select the target configuration
for the plan.
Step result
The new plan name and the plan parameters you have configured are saved.
6.1 In the navigation tree on the left-hand side, expand the plan.
6.2 From the Managed Objects, right-click on the PLMN-xxx root object and select
New Managed Object.
Step example
Figure 27: Creating a new MO for a plan
Step result
The New Managed Object dialog opens. In this dialog, you can create a new MO
for the MO you have selected in the navigation tree.
Step example
Locally unique means that the ID is unique under its parent object. If you leave this
field blank, an ID is automatically reserved when the instance ID data type is a number
without a validation pattern. For the rest of the instance IDs, including the MRBTS MO
under the PLMN-PLMN MO, you need to define a locally unique instance ID.
6.6 From the Template list, select a template you want to assign.
The template name is automatically selected from the parent object. If a template
with the same name cannot be found or the parent is not assigned, CM Editor
automatically selects a system template. There are different system templates for
different network element versions. If you want to change the template name for the
new MO, select another template name from the Template list. The Template list
presents the available templates for this MO class.
Step result
The CNF MO is added to the navigation tree under the created plan.
Result
You have created the CNF deployment plan. The DeploymentProfile structure parameter
under the CNF MO in the plan needs to be filled in with CNF configuration values from the
Base64-encoded CNF values override files according to CNF object parameters in LCM operation
plans. You can do this manually in CM Editor or automatically using the CNF Plan Prepare operation.
For more information on the CNF Plan Prepare operation, see Updating Helm chart
configuration values from values override files.
You need to update the cloud-native network function (CNF) deployment plan with the
configuration values defined in the values override files. You can either do this manually by
editing the CNF deployment plan in a text editor or automatically using the CNF Plan
Prepare operation in MantaRay NM.
Figure 29: Updating Helm chart configuration values from values override files
In AirScale Cloud RAN BTS (Cloud RAN BTS), CNFs are deployed on the cloud infrastructure using
Helm, which is a system for simplifying container management. The Helm charts contain the
details of resource creation and pod deployment with default values of the CNF deployment
parameters. You can override the default configuration values with parameters specific to your
infrastructure and the chosen deployment type by modifying the values override YAML files
included in the deployment package. Before the deployment, you need to provide the parameter
values from the values override files to the CNF deployment plan. You can either do this manually
by editing the CNF deployment plan in a text editor or automatically using the CNF Plan
Prepare operation in MantaRay NM.
You can use the CNF Plan Prepare operation in MantaRay NM to update the cloud-native
network function (CNF) deployment plan with the configuration values from the values override
files. You can start the CNF Plan Prepare operation using MantaRay NM graphical user
interface (GUI).
Purpose
You can override the default CNF configuration values with parameters specific to your
infrastructure and the chosen deployment type by modifying the values override YAML files
included in the CNF deployment package. The CNF Plan Prepare operation in MantaRay NM
allows you to automatically import the content of the values override files to the deployment
plan.
Note:
If you add the content of the values override files to the CNF deployment plan manually,
you need to Base64-encode the values override files first. If you use the CNF Plan
Prepare operation, the Base64-encoding is automatic.
Procedure
1 Log in to the MantaRay NM start page using a web browser.
2.1 In the Opening racpmc.jnlp dialog window, click use Java Web Start
Launcher.
2.2 Confirm all security questions about the digital signature and software provider.
Step result
The CM Operations Manager Java application opens.
3 In the Plans view, right-click on the prepared CNF deployment plan and select Workflow
Engine for Plan.
Step example
Step result
The Workflow Engine window pops up.
5 Click the green arrow icon next to the CNF Plan Prepare operation.
Step result
The CNF Plan Prepare dialog pops up.
6 In the CNF Plan Prepare dialog, configure the settings for the operation.
6.3 In the Replace existing values field, select whether the values of the parameters in
the plan should be replaced with the content of the values override files.
To replace the existing parameter values with the content from the values override
files, choose Yes.
6.4 Select and upload the prepared values override files from your local disc.
7 Click Start.
8 Repeat the procedure for all CNF objects in the Scope field.
You can use the CNF Plan Prepare operation in MantaRay NM to update the cloud-native
network function (CNF) deployment plan with the configuration values from the values override
files. You can start the CNF Plan Prepare operation using MantaRay NM CLI.
Purpose
You can override the default CNF configuration values with parameters specific to your
infrastructure and the chosen deployment type by modifying the values override YAML files
included in the CNF deployment package. The CNF Plan Prepare operation in MantaRay NM
allows you to automatically import the content of the values override files to the deployment
plan.
Note:
If you add the content of the values override files to the CNF deployment plan manually,
you need to Base64-encode the values override files first. If you use the CNF Plan
Prepare operation, the Base64-encoding is automatic.
Procedure
1 Log in as the omc user to the WAS MantaRay NM virtual machine (VM) node.
Note:
To locate the VM, see the Locating the right virtual machine for a service chapter in
MantaRay NM Operating Documentation.
2 Update the CNF deployment plan with the content of the main chart values override file by
entering:
values-override.aic-<CNF_name>.yaml -v
3 Update the CNF deployment plan with the content of the cluster preparation chart,
prerequisites chart, and main chart values override files by entering:
clusterPreparationValues values-override.aic-<CNF_name>-clusterpreparation.
Postrequisites
You can check if the content of the values override files was added to the deployment plan by
opening the plan in MantaRay NM CM Editor and navigating to the selected CNF object. In CM
Editor, you can also plan and modify all other mandatory and optional parameter values for a
selected CNF object.
You need to configure a secrets list for the cloud-native network function (CNF) to perform the
CNF deployment in MantaRay NM with autoconfiguration.
You can deploy the AirScale Cloud RAN BTS (Cloud RAN BTS) CNFs with autoconfiguration using a
Plug and Play (PnP) procedure in MantaRay NM. Using autoconfiguration requires you to provide
additional parameter values for autoconnection and certificate management in the CNF values
override files and in the additional secret files:
cmp_secret.yaml
taTrustChain_secret.yaml (optional)
pull_secret.yaml
Note:
You can find the cmp_secret.yaml and taTrustChain_secret.yaml files inside
the Secret folder of the CNF deployment package. The pull_secret.yaml file needs
to be prepared manually. For instructions, see Creating an image pull secret for the vDU
deployment.
For instructions on filling in the values overrides and secret files, see:
Filling in the values override files for the vCU
Filling in the values override files for the vDU
Note:
Cloud RAN BTS doesn't support autoconfiguration for the deployment of the Nokia
proprietary Kubernetes operators for Cloud RAN BTS. Therefore, you can provide additional
parameters only for the virtualized gNB central unit (vCU) and the virtualized gNB
distributed unit (vDU).
During the CNF deployment, before the installation of Helm charts, MantaRay NM creates secrets
in the appropriate namespace of the target Kubernetes cluster, based on the information from
the secrets list in a CNF object. To define secret items in the CNF object list, you need to add
appropriate secrets to MantaRay NM Security Sensitive Data (SSD). You can add secrets to
MantaRay NM SSD with the following methods:
Note:
If during the deployment the vCU or the vDU gets a certificate from a different root CA
than MantaRay NM, you need to import the vCU or vDU root certificate to MantaRay NM.
For more information about certificate import, see the Adding additional trust anchors
chapter under Administering MantaRay NM System Security category in MantaRay NM
Operating Documentation.
Notice:
The name of the Certificate Management Protocol (CMP) secret needs to be unique for
each CNF and include a CNF instance-specific identifier, for example:
CNF2127-cmp-secret.
You need to manually create an image pull secret and save the contents in the
pull-secret.yaml file before you can deploy the virtualized gNB distributed unit (vDU).
Purpose
Before the deployment of the vDU, you need to create an image pull secret, which stores the
credentials for authentication during the connection with a container registry. You need the
credentials to enable the deployment of Nokia Cloud RAN SmartNICs (RAN NICs) together with the
vDU cloud-native network function (CNF).
Make sure you have defined the value of the imagePullSecret parameter in values-
override.aic-vdu.yaml. For instructions, see Filling in the values override files for the vDU.
--docker-username=neat --docker-password=neat
Note:
In this example, the secret is created in Nokia Edge Automation Tool (NEAT) Docker
registry. The same steps apply if you use another registry.
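A sketch of the full command sequence, assuming the oc CLI is available; the secret name,
namespace, and registry address are placeholders:
oc create secret docker-registry pull-secret -n cran1 \
  --docker-server=<registry address> \
  --docker-username=neat --docker-password=neat
oc get secret pull-secret -n cran1 -o yaml > pull-secret.yaml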
Step example
apiVersion: v1
data:
.dockerconfigjson:
eyJhdXRocyI6eyJjbGFiMTA2OG5vZGUxNy5uZXRhY3QubnNuLXJkbmV0Lm5ldDo4NDQzIjp7InVzZXJuY
W1lIjoiYWRtaW4iLCJwYXNzd29yZCI6IkFYZjZ2TVQwZnpJTGN2NlVxTTFna2V2RFYrT0VhcXlRdjZraU
NlV2tDTjA9IiwiZW1haWwiOiJuZXRhY3RAbm9raWEuY29tIiwiYXV0aCI6IllXUnRhVzQ2UVZobU5uWk5
WREJtZWtsTVkzWTJWWEZOTVdkclpYWkVWaXRQUldGeGVWRjJObXRwUTJWWGEwTk9NRDA9In19fQ==
kind: Secret
metadata:
creationTimestamp: "2023-07-25T13:19:41Z"
name: pull-secret
namespace: cran1
resourceVersion: "8065193"
uid: 3cae2177-3c32-4e24-ac25-700761b2cc7b
type: kubernetes.io/dockerconfigjson
Result
You have created a new Kubernetes secret and saved the contents locally as a
pull-secret.yaml file.
Postrequisites
You need to add the contents of the pull-secret.yaml file to MantaRay NM Security
Sensitive Data (SSD) for the target CNF object with one of the following methods:
You can use MantaRay NM graphical user interface (GUI) to add appropriate secrets to MantaRay
NM Security Sensitive Data (SSD).
Purpose
During the cloud-native network function (CNF) deployment, before the installation of Helm
charts, MantaRay NM creates secrets in the appropriate namespace of the target Kubernetes
cluster based on the information from the secrets list in a CNF object. To define secret items in
the CNF object list, add appropriate secrets to MantaRay NM SSD.
Note:
You need to perform the procedure separately for each secret file.
Procedure
1 Log in to the MantaRay NM start page using a web browser.
Step result
Figure 33: MantaRay NM start page view
2.1 In the Opening racpmc.jnlp dialog window, click use Java Web Start
Launcher.
Step result
The CM Operations Manager Java application opens.
3 In the Plans view, right-click on the prepared CNF deployment plan and select Workflow
Engine for Plan.
Step example
Figure 34: Opening the Workflow Engine for a selected plan
Step result
The Workflow Engine window pops up.
5 Click the green arrow icon next to the CNF Secret Import operation.
Step result
The CNF Secret Import dialog pops up.
6 In the CNF Secret Import dialog, select and upload the secret file with the Kubernetes
configuration from your local disc.
7 Click Start.
You can use MantaRay NM REST application programming interface (API) to add appropriate
secrets to MantaRay NM Security Sensitive Data (SSD).
Purpose
During the cloud-native network function (CNF) deployment, before the installation of Helm
charts, MantaRay NM creates secrets in the appropriate namespace of the target Kubernetes
cluster based on the information from the secrets list in a CNF object.
Note:
You need to perform the procedure separately for each secret file.
Procedure
1 Log in as the omc user to the WAS MantaRay NM virtual machine (VM) node.
Note:
To locate the VM, see the Locating the right virtual machine for a service chapter in
MantaRay NM Operating Documentation.
2 Access the secrets storing endpoint using the HTTP POST method.
Step example
https://<cluster hostname>/mantaraynm/cm/internal-api/cnflcm/v1/secrets
--data-raw 'apiVersion: v1
kind: Secret
metadata:
namespace: default
type: Opaque
stringData:
psk:
OGg3N1BMQng3SUVjY1Vra0YtWkRzelZlUEdzeDBBbGZQbi1nc3BuMG5saVFkYnla
refNum: VUR4cTMxUElRNWl5YUR6eVd0cTJpQg'
Note:
You need to include an authorization token in the HTTP authorization header of the
request. For instructions on issuing the token, see the Authentication and
authorization chapter under RESTful Web Service Data Access API in MantaRay
NM Operating Documentation.
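Putting the fragments above together, the full request could look like the sketch below. The
Authorization header carries the token mentioned in the note; the Content-Type value and the
secret name are assumptions.
curl -X POST "https://<cluster hostname>/mantaraynm/cm/internal-api/cnflcm/v1/secrets" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/yaml" \
  --data-raw 'apiVersion: v1
kind: Secret
metadata:
  name: cnf-psk-secret
  namespace: default
type: Opaque
stringData:
  psk: <value>
  refNum: <value>'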
Before deploying AirScale Cloud RAN BTS (Cloud RAN BTS), you need to upload the required
software packages to MantaRay NM from your local computer.
The software packages used in the Cloud RAN BTS life cycle management (LCM) operations need
to be delivered and onboarded to the tool, which executes the deployment. The CNF packages
need to also be onboarded to the container registry, integrated with MantaRay NM.
Note:
You can onboard CNF packages to the central container registry (CCR) during their
onboarding to MantaRay NM or during the CNF deployment operation. If necessary, you
can also onboard the CNF packages to CCR independently. For more information, see
Onboarding software images to the container registry.
Before onboarding the components of a new AirScale Cloud RAN BTS (Cloud RAN BTS) software
release to MantaRay NM, you need to install a new fast pass service package.
The MantaRay NM fast pass service package is not a part of the Cloud RAN BTS software release.
However, it contains the new vCU, vDU, and FHGW adaptation model, compatible with the new
Cloud RAN BTS software release. Multiple adaptation packages can exist in MantaRay NM
simultaneously without disturbing the MantaRay NM or the integrated network elements (NEs)
operations. You can download the MantaRay NM fast pass service package for the selected Cloud
RAN BTS release from Nokia Software Supply Tool (SWSt).
For instructions on installation of the MantaRay NM fast pass service package, see the Installing
MantaRay NM fast pass Service Packages section in MantaRay NM Operating Documentation.
You need to upload the Nokia Root Certificate Authority (CA) certificates to the Software Integrity
Protection Trust Store before you can upload the cloud-native network function (CNF)
deployment packages to MantaRay NM.
Purpose
The complete CNF deployment package includes two files:
CNF package in ZIP format
Signature file in PKCS#7 format
When you import the complete CNF package to MantaRay NM, MantaRay NM verifies the signature
file using Nokia Root CA certificates stored in Software Integrity Protection Trust Store. If the
operation fails, you can't import the CNF package to MantaRay NM.
Procedure
1 Log in as the root user to the dmgr virtual machine (VM) of MantaRay NM using Secure Shell
(SSH).
Note:
Note the output for use in further steps.
3 Add the certificate PEM file to Software Integrity Protection Trust Store by entering:
/var/opt/oss/global/swm/swm_keystore/swm_SWIP_truststore.jks
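A sketch of such an import, assuming the Java keytool utility; the alias and the certificate
file name are placeholders, and the truststore password is environment-specific:
keytool -importcert -alias nokia-root-ca -file <certificate>.pem \
  -keystore /var/opt/oss/global/swm/swm_keystore/swm_SWIP_truststore.jks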
Result
You have added the Nokia Root CA certificate from SWSt to Software Integrity Protection Trust
Store. To list or remove the certificates available in Software Integrity Protection Trust Store, see
the Certificates management for Software Integrity Protection Trust Store chapter in MantaRay
NM Operating Documentation.
To deploy AirScale Cloud RAN BTS using MantaRay NM, you need to first import and onboard the
required software packages to Storage Archive in MantaRay NM Software Manager.
Perform Downloading software packages from SWSt for the following network elements:
Nokia AirFrame Fronthaul Gateway (FHGW)
Virtualized gNB central unit (vCU)
Virtualized gNB distributed unit (vDU)
Nokia proprietary Kubernetes operators for Cloud RAN BTS
Tip:
You need to have the id="swmPackageImportPermission" permission in MantaRay
NM to add software packages to the Software Archive. Otherwise, the Add
software package function is disabled.
Procedure
1 Log in to the MantaRay NM start page using a web browser.
Step result
Figure 36: MantaRay NM start page view
Step example
Figure 38: Selecting the Application menu option
Step example
Step result
You are in the classic Software Manager.
5 Click the Add software package link above the navigation tree.
Step example
Figure 40: Software Manager›Add software package view
5.1 In the Opening webstart.jnlp dialog window, click use Java Web Start
Launcher.
5.2 Confirm all security questions about the digital signature and software provider.
6 From the MO type drop-down list on the left-hand side, select the managed object (MO)
type.
For the Cloud RAN BTS, you can choose the following MO types:
CNF for the vCU, vDU, and Nokia proprietary Kubernetes operators for Cloud RAN BTS
FHGW
Note:
Import of Signed Software Package is supported only for the CNF MO type.
Step result
The Open dialog window opens.
8 From the selected local or MantaRay NM directory, choose one or several files containing
software packages and click Open.
There can be an additional XML description file added to the software package. This file
contains all necessary information about the given software package.
Note:
The supported package extensions for all types of software packages are: ZIP, TAR,
TAR.GZ, and TGZ. Additionally, for unsigned software packages, extensions ISO, CPIO,
and RPM are supported.
MantaRay NM automatically prevents the use of an incompatible package. The descriptor XML file
defines the compatible and incompatible software releases and versions using regular expressions.
Step result
The selected file is placed in the list for import. The Status parameter informs you about
the software package validation result. If you want to delete a software package from the list,
click the Remove icon.
9 To open the SW Package Details dialog window, click the Details icon.
The SW Package Details dialog includes all the necessary information about the
imported software package. If an XML description file is added to the software package, the
SW Package Details is automatically filled in with the information taken from this file.
Otherwise, the SW Package Details is filled in by default only with the mandatory
parameters.
The Version ID and MO type parameters are mandatory for this form.
Step result
The SW Package Details dialog opens. Change or complement the software package details if necessary.
10 [Optional] To automatically onboard the imported software images to the central container
registry (CCR), check the Distribute to Central Container Registry box.
11 To import one file or all files at a time, click the Import icon in the selected software
package row or the Import all icon at the bottom of the SW Import Manager
window.
Importing the selected software packages begins. The import progress for the selected file is
presented in the Status field. If you want to stop the import of the selected software
package, click the Stop icon. The status changes to Import stopped. While the import
process is stopped, you can edit the software details in the SW Import Form. To proceed
with the file import, click the Import icon located in the selected software package row once
again.
Note:
If you stop the file import or it is stopped because of a failure, SW Import
Manager resumes the process from the breakpoint. The actual transfer status is
additionally visualized in the progress bar.
When you stop the transfer or it is interrupted by a failure, the already transferred file parts
are temporarily stored in the MantaRay NM software archive temporary directory. They are
checked against the storage time by Software Manager and automatically cleaned
according to the guidelines, which are defined in the Software Manager configuration
files. If you want to delete these files manually, use the Clean icon. From the Clean
MantaRay NM software temporary directory dialog window, select files for
deletion. Confirm the choice by clicking Delete.
Result
A new software package is imported to the Software Archive in MantaRay NM SM. The
package is listed in the navigation tree of available software under the suitable MO type and
release in the Software Archive tab. You can import the same software package multiple
times but with different software properties.
If you have checked the Distribute to Central Container Registry box during the
import operation, the software package has also been onboarded to the CCR.
If you want to close the SW Import Manager, click Close. Confirm the operation by clicking
OK.
You need to onboard the Parameter Description Language (PDL) validation plugin to MantaRay
NM before you can deploy the cloud-native network functions (CNFs).
Purpose
Note:
You need to perform this action only once for a MantaRay NM instance in a given AirScale
Cloud RAN BTS software release.
Procedure
1 Import the PDL package to MantaRay NM.
Follow the instructions in Onboarding software packages to MantaRay NM manually, but use
the vDUCNF<version>_pdl_validation_service_plugin.zip package.
<targetBD swReleaseVersionName="vDUCNF24R1_0.300.4384"/>
Keep the value of the swReleaseVersionName attribute, as you will need it for the
site configuration file (SCF). In the example, it's vDUCNF24R1_0.300.4384.
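If you later build the SCF by hand, the kept release version is the value you reference there. As a hedged illustration only (the exact attribute placement depends on the SCF schema of your release), an MRBTS fragment could carry it in the version attribute; the class name, distName, and btsName value are illustrative:
<managedObject class="com.nokia.srbts:MRBTS" version="vDUCNF24R1_0.300.4384" distName="MRBTS-1" operation="create">
  <p name="btsName">Example-Site-1</p>
</managedObject>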
You can automatically onboard the cloud-native network function (CNF) software images to an
external container registry or the central container registry (CCR) when onboarding the CNF
software packages to the MantaRay NM Software Archive, during CNF deployment, and during
CNF upgrade.
There are two operations in MantaRay NM, which allow onboarding the software packages for the
virtualized gNB central unit (vCU), virtualized gNB distributed unit (vDU), and Nokia proprietary
Kubernetes operators for AirScale Cloud RAN BTS (Cloud RAN BTS) to the container registry:
the CNF Distribute Software operation onboards the CNF packages to a container
registry which is integrated with MantaRay NM and defined by the containerRegistries
parameter of the deployment plan.
the CNF Distribute Software to CCR operation onboards the CNF software
packages to the CCR.
You can trigger the CNF software onboarding operations in MantaRay NM in the following ways:
By checking the Distribute to Central Container Registry box during the CNF package import to
MantaRay NM Software Archive. For instructions, see Onboarding software packages to
MantaRay NM manually.
By checking the Distribute CNF Software option in MantaRay NM Workflow Engine
during the CNF deployment or upgrade. For instructions, see Deploying a CNF without
autoconfiguration using MantaRay NM GUI.
Independently in the MantaRay NM Workflow Engine, using MantaRay NM graphical user
interface (GUI) (only the CNF Distribute Software operation) or MantaRay NM CLI (both
the CNF Distribute Software and CNF Distribute Software to CCR operations).
The CNF Distribute Software operation in MantaRay NM allows you to onboard the cloud-
native network function (CNF) software images to the target container registry, integrated with
MantaRay NM. You can start the CNF Distribute Software operation in MantaRay NM
using MantaRay NM graphical user interface (GUI).
Purpose
With the CNF Distribute Software operation, you can onboard the CNF software images
to the container registry independently from the CNF software upload, CNF deployment, and CNF
upgrade operations.
Procedure
1 Log in to the MantaRay NM start page using a web browser.
Step result
Figure 43: MantaRay NM start page view
2.1 In the Opening racpmc.jnlp dialog window, click use Java Web Start
Launcher.
2.2 Confirm all security questions about the digital signature and software provider.
Step result
The CM Operations Manager Java application opens.
3 In the Plans view, right-click on the prepared CNF deployment plan and select Workflow
Engine for Plan.
Step example
Step result
The Workflow Engine window pops up.
5 Click the green arrow icon next to the CNF Distribute Software operation.
Step result
The CNF Distribute Software dialog pops up.
6 [Optional] In the CNF Distribute Software dialog, configure the settings for the
operation.
6.2 In the containerRegistries field, fill in one or more distinguished names (DNs) of
container registries integrated with MantaRay NM to which the CNF software images are
to be distributed.
6.3 In the cnfSwId field, fill in the CNF software version imported to the MantaRay NM
Software Archive, for which CNF images are to be distributed.
To replace the existing parameter values with the content from the values override files,
choose Yes.
Step result
The Feedback dialog opens and the status of the CNF Distribute Software
operation changes to Started. When the operation is performed successfully, the status
changes to Finished.
Postrequisites
You can check the status of the operation in the following places:
Feedback dialog
Workflow Engine›Operation field
CM Operation Manager›Operation history tab
The CNF Distribute Software operation in MantaRay NM allows you to onboard the cloud-
native network function (CNF) software images to the target container registry, integrated with
MantaRay NM. You can start the CNF Distribute Software operation in MantaRay NM
using MantaRay NM CLI.
Procedure
1 Log in as the omc user to the WAS MantaRay NM virtual machine (VM) node.
Note:
To locate the VM, see the Locating the right virtual machine for a service chapter in
MantaRay NM Operating Documentation.
Table 54: Starting parameters for the CNF Distribute Software operation in
MantaRay NM CLI
For more information on the CLI operations in MantaRay NM, see the Executing command
line operations chapter under Command Line Operations in MantaRay NM Operating
Documentation.
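For illustration only, a CLI invocation could look like the following sketch. The script path (/opt/oss/bin/racclimx.sh), the operation name, and the flag spellings are assumptions, not confirmed by this document; take the authoritative names and parameters from Table 54 and the Executing command line operations chapter.
# Hypothetical sketch; verify the script, operation, and parameter names against Table 54.
/opt/oss/bin/racclimx.sh -op CNFDistributeSW \
  -planDN <DN of the CNF deployment plan> \
  -containerRegistries <DN of the target container registry> \
  -cnfSwId <CNF software version in the Software Archive>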
Postrequisites
You can check the status of the operation in the following places:
Command line
CM Operation Manager›Operation history tab
The CNF Distribute Software to CCR operation in MantaRay NM allows you to onboard
the cloud-native network function (CNF) software images to the central container registry (CCR).
You can start the CNF Distribute Software to CCR operation in MantaRay NM using
MantaRay NM CLI.
Purpose
Central container registry (CCR) is an Open Container Initiative (OCI) compliant registry that
provides a storage and content delivery system for OCI artifacts. The centrally stored content is
available:
for direct use by Kubernetes, which controls the applications running in Container as a Service
(CaaS) clusters.
for indirect use by distributed container registries closer to Kubernetes, which controls the
applications running in CaaS clusters.
With the CNF Distribute Software to CCR operation, you can onboard the CNF
software images to the CCR independently from the CNF software upload, CNF deployment, and
CNF upgrade operations.
Procedure
1 Log in as the omc user to the WAS MantaRay NM virtual machine (VM) node.
Note:
To locate the VM, see the Locating the right virtual machine for a service chapter in
MantaRay NM Operating Documentation.
Table 55: Starting parameters for the CNF Distribute Software to CCR operation in
MantaRay NM CLI
For more information on the CLI operations in MantaRay NM, see the Executing command
line operations chapter under Command Line Operations in MantaRay NM Operating
Documentation.
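As with the previous operation, the following is only a hedged sketch; the script path and operation name are assumptions, and the authoritative parameters are listed in Table 55.
# Hypothetical sketch; the CCR is the fixed target, so no registry DN is passed here.
/opt/oss/bin/racclimx.sh -op CNFDistributeSWToCCR \
  -cnfSwId <CNF software version in the Software Archive>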
Postrequisites
You can check the status of the operation in the following places:
Command line
CM Operation Manager›Operation history tab
In AirScale Cloud RAN BTS (Cloud RAN BTS), the cloud-native network function (CNF) deployment
is automated with MantaRay NM.
CaaS Container-as-a-Service
CM Configuration Manager
RU radio unit
To deploy a CNF without autoconfiguration, use the CNF Deployment operation. You can start
the CNF Deployment operation using MantaRay NM graphical user interface (GUI), MantaRay
NM CLI, or MantaRay NM REST application programming interface (API).
Note:
If you deploy a CNF without autoconfiguration, you need to commission it with CU WebEM
or vDU WebEM. For more information, see Commissioning Cloud RAN BTS.
The CNF Deployment operation also allows you to onboard the CNF software package to the CCR. For
more information about the CCR, see Central container registry in Cloud RAN BTS solution.
The CNF Deployment operation in MantaRay NM allows you to deploy a cloud-native network
function (CNF) without autoconfiguration. You can start the CNF Deployment operation in
MantaRay NM using MantaRay NM graphical user interface (GUI).
Purpose
You can use the CNF Deployment operation in MantaRay NM to deploy the following CNF
objects:
Procedure
1 Log in to the MantaRay NM start page using a web browser.
Step result
Figure 47: MantaRay NM start page view
2.1 In the Opening racpmc.jnlp dialog window, click use Java Web Start
Launcher.
2.2 Confirm all security questions about the digital signature and software provider.
Step result
The CM Operations Manager Java application opens.
3 In the Plans view, right-click on the prepared CNF deployment plan and select Workflow
Engine for Plan.
Step result
The Workflow Engine window pops up.
5 Click the green arrow icon next to the CNF Deployment operation.
Step result
The CNF Deployment dialog pops up.
6 In the CNF Deployment dialog, configure the settings for the operation.
6.2 [Optional] In the Distribute CNF Software field, select whether the CNF software
image should be distributed to the target container registry integrated with
MantaRay NM.
Step result
The Feedback dialog opens and the status of the CNF Deployment operation changes
to Started. When the operation is performed successfully, the status changes to
Finished.
Postrequisites
You can check the status of the operation in the following places:
Feedback dialog
Workflow Engine›Operation field
CM Operation Manager›Operation history tab
The CNF Deployment operation in MantaRay NM allows you to deploy a cloud-native network
function (CNF) without autoconfiguration. You can start the CNF Deployment operation in
MantaRay NM using MantaRay NM CLI.
Purpose
You can use the CNF Deployment operation in MantaRay NM to deploy the following CNF
objects:
Procedure
1 Log in as the omc user to the WAS MantaRay NM virtual machine (VM) node.
Note:
To locate the VM, see the Locating the right virtual machine for a service chapter in
MantaRay NM Operating Documentation.
For more information on the CLI operations in MantaRay NM, see the Executing command
line operations chapter under Configuration Management Operating
Procedures/Command Line Operations in MantaRay NM Operating Documentation.
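As a hedged illustration (the script path, operation name, and parameter names are assumptions; use the Executing command line operations chapter as the authoritative source), starting the deployment from the CLI could look like this:
# Hypothetical sketch of starting the CNF Deployment operation from the CM CLI.
/opt/oss/bin/racclimx.sh -op CNFDeployment \
  -planDN <DN of the prepared CNF deployment plan>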
Result
When the CNF Deployment operation is finished, the following happens:
All Helm charts are validated.
The namespace is created.
All secrets defined in the plan are created.
All Helm releases defined in the deployment plan are created and the CNF is activated.
The CNF test is executed for the main release.
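If you have kubectl and helm access to the target CaaS cluster, you can cross-check this result directly; the namespace name below is a placeholder for the namespace created by the plan.
# List the Helm releases created from the deployment plan (namespace is illustrative).
helm list -n <CNF namespace>
# Check that the CNF pods are running.
kubectl get pods -n <CNF namespace>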
Postrequisites
You can check the status of the operation in the following places:
Command line
CM Operation Manager›Operation history tab
The CNF Deployment operation in MantaRay NM allows you to deploy a cloud-native network
function (CNF) without autoconfiguration. You can start the CNF Deployment operation in
MantaRay NM using MantaRay NM REST application programming interface (API).
Purpose
You can use the CNF Deployment operation in MantaRay NM to deploy the following CNF
objects:
Procedure
1 Log in as the omc user to the WAS MantaRay NM virtual machine (VM) node.
Note:
To locate the VM, see the Locating the right virtual machine for a service chapter in
MantaRay NM Operating Documentation.
2 Access the startOperation endpoint of the CM Operations REST API using the HTTP
POST method.
The request should contain the Content-Type header with the multipart/form-data
value.
Step example
https://<cluster hostname>/mantaraynm/cm/open-api/operations/v1/start
3 Start the CNF Deployment operation by placing the following JSON request:
"operationAttributes": {
Result
When the CNF Deployment operation is finished, the following happens:
All Helm charts are validated.
The namespace is created.
All secrets defined in the plan are created.
All Helm releases defined in the deployment plan are created and the CNF is activated.
The CNF test is executed for the main release.
Postrequisites
You can check the status of the operation in the following places:
REST API using the saved operationId. For more information, see the Starting an
operation chapter under CM Web Service RESTful API in MantaRay NM Operating
Documentation.
CM Operation Manager›Operation history tab
In AirScale Cloud RAN BTS (Cloud RAN BTS), the deployment of the virtualized gNB central unit
(vCU) and the virtualized gNB distributed unit (vDU) is automated with MantaRay NM and supports
autoconfiguration with Plug and Play (PnP).
CaaS Container-as-a-Service
CM Configuration Manager
RU radio unit
Cloud RAN BTS supports autoconfiguration of vCU and vDU with the use of PnP. PnP is a
functionality that helps to efficiently deploy new sites by simplifying the planning process and
providing automated site commissioning and configuration once the new site is connected and
identified by MantaRay NM. This means that you can define the cloud-native network function
(CNF) parameters in SCF and import it to MantaRay NM before deploying the vDU or the vCU. For
more information on PnP, see Configuring Radio Network Elements with Plug and Play in
MantaRay NM Operating Documentation.
You can deploy the vCU or the vDU with autoconfiguration using the Unified PnP - Direct
integration planning workflow. The Unified PnP - Direct integration
planning workflow includes the following operations:
Unified PnP - Create BTS configuration plan
Unified PnP - Search and apply Site Template (optional)
Unified PnP - Validate configuration plan
Unified PnP - Activate autoidentification
Unified PnP - Deploying planned CNFs (optional)
In MantaRay NM GUI, you can either start the operations one by one or run the Unified PnP -
One-button planning operation, which automatically starts the next Unified PnP -
Direct integration planning operation when the previous one is successfully completed.
You can also start the Unified PnP - Direct integration planning workflow in
MantaRay NM CLI. This method is called zero-touch planning, as it allows you to run all Unified
PnP - Direct integration planning operations one by one with a single
Unified_PnP_COAM_Site_Preparation command.
Purpose
You can deploy the vCU or the vDU with autoconfiguration using the Unified PnP - Direct
integration planning workflow. The Unified PnP - Direct integration
planning workflow includes the following operations:
Unified PnP - Create BTS configuration plan
Unified PnP - Search and apply Site Template (optional)
Unified PnP - Validate configuration plan
Unified PnP - Activate autoidentification
Unified PnP - Deploying planned CNFs (optional)
In MantaRay NM GUI, you can either start the operations one by one or run the Unified PnP -
One-button planning operation, which automatically starts the next Unified PnP -
Direct integration planning operation when the previous one is successfully
completed.
Procedure
1 Log in to the MantaRay NM start page using a web browser.
Step result
Figure 51: MantaRay NM start page view
2.1 In the Opening racpmc.jnlp dialog window, click use Java Web Start
Launcher.
2.2 Confirm all security questions about the digital signature and software provider.
Step result
The CM Operations Manager Java application opens.
3 In the Plans view, right-click on the prepared CNF deployment plan and select Workflow
Engine for Plan.
Step result
The Workflow Engine window pops up.
5 Click the green arrow icon next to the One button planning›Unified PnP - One-
button planning operation.
Step result
The Unified PnP - One-button planning dialog pops up. By running the
Unified PnP - One-button planning operation, you launch the complete planning
process with one action, which automatically triggers all Unified PnP - Direct
integration planning operations in a sequence.
6 In the Unified PnP - One-button planning dialog, configure the settings for the
operation.
Unified PnP - Validate configuration plan: With this operation, you perform validation of a complete plan. If the validation fails, you need to correct the plan in MantaRay NM CM Editor and repeat the validation operation.
Unified PnP - Deploy planned CNFs: With this operation, you can deploy CNFs from the plan. The operation uses Helm installation to create Helm charts on the vCU or vDU side.
You can also provide this information in the dialogs of all Unified PnP - Direct
integration planning operations.
6.2 In the Input file field, select the prepared SCF file.
You can also provide this information in the dialog of the Unified PnP - Create
BTS configuration plan operation.
6.3 In the File format field, select the RAML2 or CSV file format.
You can also provide this information in the dialog of the Unified PnP - Create
BTS configuration plan operation.
6.4 In the UI field, if you used internal parameter values during the SCF preparation, set
the value to No.
You can also provide this information in the dialog of the Unified PnP - Create BTS configuration plan operation.
6.5 In the Activate Site Templates field, select if you want to use a site template
during the autoconfiguration planning.
6.6 In the GPS identification used field, set the value to No.
You can also provide this information in the dialog of the Unified PnP -
Activate autoidentification operation.
6.7 In the Maintenance Region DN field, set the maintenance region for the CNF if
you haven't defined it in the SCF.
You can also provide this information in the dialog of the Unified PnP - Create
BTS configuration plan operation.
Note:
The MRC-1/MR-PNP value is used by default in the pnp_autoconnection
service. You can set any other value, but the same one must be configured in
the pnp_autoconnection properties.
Step result
The Feedback dialog opens and the status of the Unified PnP - One-button
planning operation changes to Started. When the operation is performed successfully,
the status changes to Finished.
Postrequisites
You can check the status of the operation in the following places:
Feedback dialog
Workflow Engine›Operation field
CM Operation Manager›Operation history tab
Purpose
You can deploy the vCU or the vDU with autoconfiguration using the Unified PnP - Direct
integration planning workflow. The Unified PnP - Direct integration
planning workflow includes the following operations:
Unified PnP - Create BTS configuration plan
Unified PnP - Search and apply Site Template (optional)
Unified PnP - Validate configuration plan
Unified PnP - Activate autoidentification
Unified PnP - Deploying planned CNFs (optional)
You can start the Unified PnP - Direct integration planning workflow in
MantaRay NM CLI. This method is called zero-touch planning, as it allows you to run all Unified
PnP - Direct integration planning operations one by one with a single
Unified_PnP_COAM_Site_Preparation command.
Procedure
1 Log in as the omc user to the WAS MantaRay NM virtual machine (VM) node.
Note:
To locate the VM, see the Locating the right virtual machine for a service chapter in
MantaRay NM Operating Documentation.
2 Start the zero-touch planning operation:
Unified_PnP_COAM_Site_Preparation -fileFormat <RAML2 or CSV> -inputFile <full name of the input file> -profileFile <name of the profile file, used when fileFormat is CSV> -UIValues <true or false> -maintenanceRegionDn <maintenance region DN>
-profileFile This parameter defines the name of the profile file. It is used when the value of the fileFormat parameter is CSV.
-maintenanceRegionDn This parameter defines the maintenance region for the CNF if you haven't defined it in the SCF.
For more information on the CLI operations in MantaRay NM, see the Executing command
line operations chapter under Command Line Operations in MantaRay NM Operating
Documentation.
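For example, a zero-touch planning run with a RAML2 SCF and the default maintenance region could look like this (the input file path is illustrative):
Unified_PnP_COAM_Site_Preparation -fileFormat RAML2 \
  -inputFile /tmp/SCF_MRBTS-1.xml -UIValues false \
  -maintenanceRegionDn MRC-1/MR-PNP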
Postrequisites
You can check the status of the operation in the following places:
Command line
CM Operation Manager›Operation history tab
Commission network elements by configuring the software and the parameters for AirScale
Cloud RAN BTS (Cloud RAN BTS).
CaaS Container-as-a-Service
CM Configuration Manager
RU radio unit
The commissioning process involves software and parameter configuration. For more
information, see:
Site configuration file (SCF)
Parameters and hardware database obtained from a gNB
The Cloud RAN BTS vDU and vCU cloud-native network functions (CNFs) support
autoconfiguration. This means that you can define the parameters in an SCF and import it to
MantaRay NM before deploying the vDU or the vCU. Deploying the vDU or the vCU then involves
creating a deployment plan, importing it to MantaRay NM, and executing the deployment and
autoconfiguration procedure with the network management system (NMS). For instructions, see
Deploying a CNF with autoconfiguration using MantaRay NM GUI (one-button planning).
If you didn't use autoconfiguration when deploying the vCU or the vDU, you can commission the
CNFs after deployment. The commissioning process after deployment covers the following:
1. Software upgrade if necessary. Ensure that you use the correct and compatible AirScale Cloud
RAN BTS software version. For more information, see Software version verification.
2. Initial configuration using an SCF. Once the configuration plan is loaded, you need to validate and activate it.
You can use CU WebEM and vDU WebEM to make sure that your version of the AirScale Cloud
RAN BTS (Cloud RAN BTS) software is correct and compatible.
All commissioning tasks ensure that the installation is correct, there are no faulty modules, and
the whole system is ready for the final integration. Once the virtualized gNB distributed unit (vDU)
connects to the virtualized gNB central unit (vCU) for the first time, the vCU starts checking if the
vDU is running a correct software version. To establish communication channels with cloud-native
network functions (CNFs) in the Cloud RAN BTS environment, you need to use the software that is
compatible with the gNB. You need to prepare the software on the hard disk of the system before
you start the commissioning process.
Note:
Always ensure you are using a proper software image that is included in the Cloud RAN BTS
software package. Also check if the CNF and radio units (RUs) software versions are an
exact match for the complete service.
You can use CU WebEM to check the software version of the virtualized gNB central unit (vCU)
and of the hardware configured on the site.
Procedure
1 Log in to CU WebEM.
Tip:
You can also check the software release version in the Main Panel bar.
Figure 55: Software information in CU WebEM
You can use vDU WebEM to check the software version of the virtualized gNB distributed unit
(vDU) and of the hardware configured on the site.
Procedure
1 Log in to vDU WebEM.
Tip:
You can also check the software release version in the Main Panel bar.
Figure 57: Software information in vDU WebEM
You can use CU WebEM to create a site configuration file (SCF) and commission the virtualized
gNB central unit (vCU) that was deployed without autoconfiguration.
If you used autoconfiguration when deploying the vCU by following the Deploying a CNF with
autoconfiguration using MantaRay NM GUI (one-button planning) procedure, you do not need to
commission it with CU WebEM.
Tip:
Before using CU WebEM for the first time, make sure that you use:
a recommended browser.
the correct software version of the vCU and of the hardware configured on the site.
For more information, see Verifying a software version with CU WebEM.
You can upload a prepared site configuration file (SCF) with a configuration plan using CU WebEM.
Procedure
1 Log in to CU WebEM.
Step result
A new window pops up.
4 Click Browse and navigate to the SCF file on your local device.
Before activating a plan, you need to validate it. To validate the SCF file while loading it, check
Yes. If you want to validate the SCF later, check No.
Step example
Figure 60: Creating an example plan by uploading a test SCF file
Wait until the SCF file is uploaded. The green check mark icons in the Load SCF Status
indicate the steps of the process that have been completed.
Step example
Figure 61: Loading an SCF progress bar
9 Click Close.
Result
The new configuration plan is uploaded and visible in the Planned Configurations view.
New objects are added and parameter values are defined.
Postrequisites
You need to validate the plan before activating it in CU WebEM. For instructions, see Validating an
SCF in CU WebEM.
Once you commission and configure the connection between the vCU and the virtualized gNB
distributed unit (vDU), you can verify the connection status in CU WebEM. For more information,
see Verifying vCU-vDU connection.
You can create a configuration plan from scratch using the Parameter Editor in CU WebEM.
For parameters and their values used to configure the gNB, see the Reference
Documentation/Reference document/AirScale Cloud RAN BTS Parameters document, and the
vCU and vDU parameters chapter.
Note:
Most parameters are not included in this procedure. For more information, see vCU and
vDU parameters.
Procedure
1 Log in to CU WebEM.
Step result
The new window pops up.
This is applicable if the plan is already uploaded, for example, with a site configuration file
(SCF) template. Download the SCF file from Support Portal and save it on your workstation
or local drive. For more information, see Loading an SCF to CU WebEM.
Check the Duplicate plan from box and, from the drop-down list, choose the plan
that you want to duplicate.
Step example
Figure 63: Duplicating a configuration plan
7 Click OK.
Step result
The plan creation operation starts. A notification appears in the bottom right-hand corner.
The new configuration plan is successfully created and visible under the Planned
Configurations view.
Step result
A new window pops up.
Step example
Step result
The successful operation notification appears in the bottom right-hand corner, and the new
object is visible in the Objects panel.
Insert values for the MRBTS BTS name (btsName) and Multi Radio BTS Instance
Id (mrbtsId) parameters.
Figure 67: Adding MRBTS parameters in CU WebEM
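In SCF terms, these two parameters map to p elements of the MRBTS managed object. A minimal RAML2-style fragment, with an illustrative class name, distName, and values, could look like this:
<managedObject class="com.nokia.srbts:MRBTS" distName="MRBTS-1" operation="create">
  <p name="btsName">Example-Site-1</p>
  <p name="mrbtsId">1</p>
</managedObject>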
Step result
The new window pops up.
11.3 [Optional] Load the default value for a chosen object class.
You can automatically set the default values for objects with defined default values by
checking the Load default values box.
Step result
The successful operation notification appears in the bottom right-hand corner and the new
object is added under the object tree view in the Objects panel.
12 [Optional] Repeat step 11 (Add a new object in the Objects panel) to add all required new
objects.
13 Fill in all mandatory parameters from the MRBTS tree with the correct values.
Mandatory parameters are required for the gNB to work properly. Expand the MRBTS list and
fill in the parameters according to the displayed messages regarding missing or invalid
parameters for your new plan. Dedicated icons mark mandatory parameters and invalid
parameter values.
To properly connect the vCU to the vDU, you need to set up:
the parameters listed in Table: vCU parameters required for a working connection to vDU
using CU WebEM.
the parameters listed in Table: vDU parameters required for a working connection to vCU
using vDU WebEM.
You can also configure these parameters using an SCF. For more information, see
Configuring vCU parameters in SCF and Configuring vDU parameters in SCF.
Result
The new configuration plan is uploaded and visible in the Planned Configurations view.
New objects are added and the parameter values are defined.
Postrequisites
Validate the plan before activating it in CU WebEM. For instructions, see Validating an SCF in CU
WebEM.
You can save your configuration plan for further use. For more information, see Saving an SCF in
CU WebEM.
Once you commission and configure the connection between the vCU and the virtualized gNB
distributed unit (vDU), you can verify the connection status in CU WebEM. For more information,
see Verifying vCU-vDU connection.
You can validate a configuration plan in CU WebEM after uploading a site configuration file (SCF).
Make sure that CU WebEM is opened and a connection to the virtualized gNB central unit
(vCU) is established.
Load the SCF file to CU WebEM. For instructions, see Loading an SCF to CU WebEM.
Notice:
For any cell addition or reconfiguration, be sure to check the cell parameters by using the
Adaptive PDCCH Configuration Tool. This is mandatory to make sure that the cell
configuration is correct and potential configuration errors are eliminated.
Procedure
1 Log in to CU WebEM.
3 From the drop-down list, choose the configuration plan you need to validate.
Note:
The Validate Plan button is active in the Delta configurations and
Planned configurations views.
Step result
The validation is finished, and a notification appears in the bottom right-hand corner. If any
error occurs during validation, a red dot appears on the Errors tab.
5 [Optional] Correct any errors that have appeared during the validation.
5.1 Go to the Errors tab, and verify a particular error type in a chosen plan.
For more details on error troubleshooting, see the CU WebEM User Guide
document.
5.2 [Option 1] To manually fix errors, insert the correct value or select it from the drop-
down list in the Value field.
5.3 [Option 2] To automatically create all missing mandatory objects, click Fix
Errors, and select the missing instances you want to create for a configuration plan
you want to edit.
Step result
All errors are fixed and validation is successful.
Postrequisites
Once the SCF is uploaded and validated, you can activate the configuration plan in CU WebEM. For
instructions, see Activating an SCF in CU WebEM.
You can activate the successfully uploaded and validated plan from a site configuration file (SCF)
using CU WebEM.
Make sure that CU WebEM is opened and a connection to the virtualized gNB central unit
(vCU) is established.
Load the SCF file to CU WebEM. For instructions, see Loading an SCF to CU WebEM.
Validate the configuration plan in CU WebEM. For instructions, see Validating an SCF in CU
WebEM.
Procedure
1 Log in to CU WebEM.
3 From the drop-down list, choose the planned configuration you want to activate.
Step result
If you check the Download plan without activation box and click Execute, the
downloading progress window appears. Once the download is complete, you can close the
window by clicking the Close button.
If you click the Execute button without checking the Download plan without
activation box, the activation process starts and the progress window appears. The
activation process consists of four phases:
1. Plan activation started
2. Plan download
3. Plan validation
4. Plan activation finished
After the plan validation phase, the caution window pops up and you need to confirm the operation to continue.
Once the activation process is completed, a notification pops up in the bottom right-hand
corner.
Result
The configuration plan is successfully activated and visible under the Active configuration
view. vCU reset is triggered. Cloud infrastructure and cells are available in the Status view.
Postrequisites
Once you commission and configure the connection between the vCU and the virtualized gNB
distributed unit (vDU), you can verify the connection status in CU WebEM. For more information,
see Verifying vCU-vDU connection.
You can save a site configuration file (SCF) using CU WebEM to back up your site configuration.
Procedure
1 Log in to CU WebEM.
Tip:
You can also save an SCF from Dashboard›Operation widget›CU
Operations by clicking the Save configuration button.
Figure 79: Saving the SCF from the Dashboard widget
5 Click OK.
Step result
The notification appears in the bottom right-hand corner. The SCF is saved in the default
download location on your local device.
You can use vDU WebEM to create a site configuration file (SCF) and commission the virtualized
gNB distributed unit (vDU) that was deployed without autoconfiguration.
If you used autoconfiguration when deploying the vDU by following the Deploying a CNF with
autoconfiguration using MantaRay NM GUI (one-button planning) procedure, you do not need to
commission it with vDU WebEM.
vDU WebEM is a web-based application for vDU maintenance, configuration management, and
commissioning. You can use it for various commissioning-related tasks, including:
Loading SCF
Creating SCF manually
Validating SCF
Activating SCF
Saving SCF for further use
For more information, see the vDU WebEM User Guide document.
You can upload a prepared site configuration file (SCF) with a configuration plan using vDU
WebEM.
Procedure
1 Log in to vDU WebEM.
4 Click Browse and navigate to the SCF file on your local device.
Before activating a plan, you need to validate it. To validate the SCF file while loading it, check
Yes. If you want to validate the SCF later, check No.
Step example
Step result
Wait until the SCF file is uploaded. The green check mark icons in the Load SCF Status
indicate the steps of the process that have been completed.
Figure 83: Loading an SCF progress bar
9 Click Close.
Postrequisites
You need to validate the plan before activating it in vDU WebEM. For instructions, see Validating
an SCF in vDU WebEM.
Once you commission and configure the connection between the vDU and the virtualized gNB
central unit (vCU), you can verify the connection status in CU WebEM. For more information, see
Verifying vCU-vDU connection.
You can create a configuration plan from scratch using the Parameter Editor in vDU WebEM.
For parameters and their values used to configure the gNB, see the Reference
Documentation/Reference document/AirScale Cloud RAN BTS Parameters document, and the
vCU and vDU parameters chapter.
Note:
Most parameters are not included in this procedure. For more information, see vCU and
vDU parameters.
Procedure
1 Log in to vDU WebEM.
This is applicable if the plan is already uploaded, for example, with a site configuration file
(SCF) template. Download the SCF file from Support Portal and save it on your workstation
or local drive. For more information, see Loading an SCF to vDU WebEM.
Check the Duplicate plan from box and choose the plan that you want to duplicate
from the drop-down list.
Step example
Step result
Every object and parameter from the chosen plan is duplicated to the new plan.
7 Click OK.
Step result
The plan creation operation starts. A notification appears in the bottom right-hand corner.
The new configuration plan is successfully created and visible under the Planned
Configurations view.
Select your newly created configuration plan from the drop-down list.
Figure 86: Selecting planned configurations
Step result
A successful operation notification appears in the bottom right-hand corner.
Insert values for BTS name and Multi Radio BTS Instance Id.
Step result
A new window pops up.
Figure 87: Adding a new object
11.3 [Optional] Load the default value for a chosen object class.
You can automatically set the default values for objects with defined default values.
Check the Load default values box.
Step result
The successful operation notification appears in the bottom right-hand corner and the new
object is added under the object tree view in the Objects panel.
12 [Optional] Repeat step 11 (Add a new object in the Objects panel) to add all required new
objects.
13 Fill in all mandatory parameters from the MRBTS tree with the correct values.
Mandatory parameters are required for the gNB to work properly. Expand the MRBTS list and
fill in the parameters according to the displayed messages regarding missing or invalid
parameters for your new plan. Dedicated icons mark mandatory parameters and invalid
parameter values.
To connect the vCU to the vDU, you need to set up:
the parameters listed in Table: vCU parameters required for a working connection to vDU
using CU WebEM.
the parameters listed in Table: vDU parameters required for a working connection to vCU
using vDU WebEM.
You can also configure these parameters using an SCF. For more information, see
Configuring vCU parameters in SCF and Configuring vDU parameters in SCF.
Note:
Additionally, parameters must be set in the CU WebEM. For more information, see
Creating a configuration plan in CU WebEM.
For more information on parameter validation, see Validating an SCF in vDU WebEM.
Postrequisites
Validate the plan before activating it in vDU WebEM. For instructions, see Validating an SCF in vDU
WebEM.
You can save your configuration plan for further use. For more information, see Saving an SCF in
vDU WebEM.
Once you commission and configure the connection between the vDU and the virtualized gNB
central unit (vCU), you can verify the connection status in CU WebEM. For more information, see
Verifying vCU-vDU connection.
You can validate a configuration plan in vDU WebEM after uploading a site configuration file (SCF).
Make sure that vDU WebEM is opened and a connection to the virtualized gNB distributed unit
(vDU) is established.
Load the SCF file to vDU WebEM. For instructions, see Loading an SCF to vDU WebEM.
Notice:
For any cell addition or reconfiguration, be sure to check the cell parameters by using the
Adaptive PDCCH Configuration Tool. This is mandatory to make sure that the cell
configuration is correct and potential configuration errors are eliminated.
Procedure
1 Log in to vDU WebEM.
Note:
The Validate Plan button is active in the Delta configurations and
Planned configurations views.
Step result
The validation is finished, and a notification appears in the bottom right-hand corner. If any
error occurs during validation, a red dot appears on the Errors tab.
5 [Optional] Correct any errors that have appeared during the validation.
5.1 Go to the Errors tab, and verify a particular error type in a chosen plan.
For more details on error troubleshooting, see the CU WebEM User Guide
document.
5.2 [Option 1] To manually fix errors, insert the correct value or select it from the drop-
down list in the Value field.
5.3 [Option 2] To automatically create all missing mandatory objects, click Fix
Errors, and select the missing instances you want to create for a configuration plan
you want to edit.
Step result
All errors are fixed and validation is successful.
Postrequisites
Once the SCF is uploaded and validated, you can activate the configuration plan in vDU WebEM.
For instructions, see Activating an SCF in vDU WebEM.
You can activate the successfully uploaded and validated plan from a site configuration file (SCF)
using vDU WebEM.
Make sure that vDU WebEM is opened and a connection to the virtualized gNB distributed unit
(vDU) is established.
Load the SCF file to vDU WebEM. For instructions, see Loading an SCF to vDU WebEM.
Validate the configuration plan in vDU WebEM. For instructions, see Validating an SCF in vDU
WebEM.
Procedure
1 Log in to vDU WebEM.
3 From the drop-down list, choose the planned configuration you want to activate.
Step result
The caution window pops up.
Figure 90: Activating configuration plan
Step result
If you check the Download plan without activation box and click Execute, the
downloading progress window appears. Once the download is complete, you can close the
window by clicking the Close button.
If you click the Execute button without checking the Download plan without
activation box, the activation process starts and the progress window appears. The
activation process consists of four phases:
1. Plan activation started
2. Plan download
3. Plan validation
4. Plan activation finished
After the plan validation phase, the caution window pops up and you need to confirm the operation to continue.
Once the activation process is completed, a notification pops up in the bottom right-hand
corner.
Result
The configuration plan is successfully activated and visible under the Active configuration
view. vDU reset is triggered. Radio modules, antenna line devices (ALDs), cloud infrastructure, and cells are available in the Status view.
Postrequisites
Once you commission and configure the connection between the vDU and the virtualized gNB
central unit (vCU), you can verify the connection status in CU WebEM. For more information, see
Verifying vCU-vDU connection.
You can save a site configuration file (SCF) using vDU WebEM to back up your site configuration.
Procedure
1 Log in to vDU WebEM.
Tip:
You can also save an SCF from Dashboard›Operation widget›vDU
Operations by clicking the Save configuration button.
Figure 95: Saving the SCF from the Dashboard widget
5 Click OK.
Step result
The notification appears in the bottom right-hand corner. The SCF is saved in the default
download location on your local device.
You can use CU WebEM to verify that the virtualized gNB central unit (vCU) is properly connected
to the virtualized gNB distributed unit (vDU).
Procedure
1 Log in to CU WebEM.
3 Verify that the vDU is displayed and has F1 link status Available.
Step example
In AirScale Cloud RAN BTS (Cloud RAN BTS), the cloud-native network function (CNF) termination
is automated with MantaRay NM. To terminate a CNF, use the CNF Termination operation.
You can start it using MantaRay NM graphical user interface (GUI), MantaRay NM CLI, or MantaRay
NM REST application programming interface (API).
CaaS Container-as-a-Service
CM Configuration Manager
RU radio unit
You need a termination plan to trigger the cloud-native network function (CNF) termination
operation in MantaRay NM. You can prepare a termination plan file in a text editor and import it to
MantaRay NM, or create it in MantaRay NM Configuration Management (CM) Editor.
In AirScale Cloud RAN BTS (Cloud RAN BTS), CNF termination is automated with MantaRay NM. To
perform any operation on the MantaRay NM CNF object, you first need to create a termination
plan.
You can create a cloud-native network function (CNF) termination plan using a text editor and
manually import it to MantaRay NM Configuration Management (CM) Operations Manager.
Purpose
If you use a text editor, prepare a plan in one of the following formats:
RAML2.0
CSV
Simple CSV
Procedure
1 Create a termination plan file using a text editor.
Step example
<raml version="2.0" xmlns="raml20.xsd">
  <cmData type="plan">
    <header>
    </header>
    <!-- Illustrative object; use the distName of the CNF object you want to delete -->
    <managedObject class="CNF" distName="PLMN-PLMN/CNF-1" operation="delete"/>
  </cmData>
</raml>
Step result
The Import window pops up.
2.4 Select the plan file to be imported, and edit the plan name and import options if
necessary.
You can create a cloud-native network function (CNF) termination plan with MantaRay NM
Configuration Management (CM) Editor.
Procedure
1 Log in to the MantaRay NM start page using a web browser.
Step result
Figure 99: MantaRay NM start page
2.1 In the Opening cmedit.jnlp dialog window, click use Java Web Start
Launcher.
2.2 Confirm all security questions about the digital signature and software provider.
Step result
The CM Editor Java application opens.
Step example
Step result
The new plan is added to the navigation tree. The Plan Header view is shown in the main
view. You can modify the plan name that has been automatically assigned by the application.
Step example
Note:
The maximum length of a plan name is 200 characters.
5 Click Update.
Step result
The new plan name and the plan parameters you have configured have been saved.
6 In the navigation tree on the left-hand side, expand the plan, and from the Managed
Objects select the PLMN-xxx root object.
7 Expand the PLMN-xxx root object and select the CNF object you want to delete.
The CNF Terminate operation in MantaRay NM allows you to terminate a cloud-native network
function (CNF). You can start the CNF Terminate operation in MantaRay NM using MantaRay
NM graphical user interface (GUI).
Purpose
You can use the CNF Terminate operation in MantaRay NM to terminate the following CNF
objects:
Procedure
1 Log in to the MantaRay NM start page using a web browser.
Step result
2.1 In the Opening racpmc.jnlp dialog window, click use Java Web Start
Launcher.
2.2 Confirm all security questions about the digital signature and software provider.
Step result
The CM Operations Manager Java application opens.
3 In the Plans view, right-click on the prepared CNF termination plan and select Workflow
Engine for Plan.
Step example
Step result
The Workflow Engine window pops up.
5 Click the green arrow icon next to the CNF Terminate operation.
Step result
The CNF Terminate dialog pops up.
6 In the CNF Terminate dialog, configure the settings for the operation.
6.2 [Optional] In the Deintegrate MRBTS field, select Yes to deintegrate the
associated MRBTS objects (neRelId) from the actual configuration.
6.3 [Optional] In the Delete k8s objects field, select the secret and namespace
objects to delete them from the Kubernetes cluster together with the CNF object.
Step result
The Feedback dialog opens and the status of the CNF Terminate operation changes to
Started. When the operation is performed successfully, the status changes to Finished.
Postrequisites
You can check the status of the operation in the following places:
Feedback dialog
Workflow Engine›Operation field
CM Operation Manager›Operation history tab
The CNF Terminate operation in MantaRay NM allows you to terminate a cloud-native network
function (CNF). You can start the CNF Terminate operation in MantaRay NM using MantaRay
NM CLI.
Purpose
You can use the CNF Terminate operation in MantaRay NM to terminate the following CNF
objects:
Procedure
1 Log in as the omc user to the WAS MantaRay NM virtual machine (VM) node.
Note:
To locate the VM, see the Locating the right virtual machine for a service chapter in
MantaRay NM Operating Documentation.
Table 58: Starting parameters for the CNF Terminate operation in MantaRay NM CLI
For more information on the CLI operations in MantaRay NM, see the Executing command
line operations chapter under Command Line Operations in MantaRay NM Operating
Documentation.
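For illustration only (the script path, operation name, and parameter names are assumptions; Table 58 is the authoritative source), the termination could be started like this:
# Hypothetical sketch of starting the CNF Terminate operation from the CM CLI.
/opt/oss/bin/racclimx.sh -op CNFTerminate \
  -planDN <DN of the prepared CNF termination plan>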
Postrequisites
You can check the status of the operation in the following places:
Command line
CM Operation Manager›Operation history tab
The CNF Terminate operation in MantaRay NM allows you to terminate a cloud-native network
function (CNF). You can start the CNF Terminate operation in MantaRay NM using MantaRay
NM REST application programming interface (API).
Purpose
You can use the CNF Terminate operation in MantaRay NM to terminate the following CNF
objects:
Procedure
1 Log in as the omc user to the WAS MantaRay NM virtual machine (VM) node.
Note:
To locate the VM, see the Locating the right virtual machine for a service chapter in
MantaRay NM Operating Documentation.
2 Access the startOperation endpoint of the CM Operations REST API using the HTTP
POST method.
The request should contain the Content-Type header with the multipart/form-data
value.
Step example
https://<cluster hostname>/mantaraynm/cm/open-api/operations/v1/start
3 Start the CNF Terminate operation by placing the following JSON request:
"operationAttributes": {
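A complete request, sketched here with assumed form-field and attribute names (operation, operationName, planDn, and deintegrateMrbts are illustrative; see the Starting an operation chapter for the authoritative schema), could look like this:
# Hypothetical sketch of starting the CNF Terminate operation over REST.
curl -k -X POST \
  -H "Content-Type: multipart/form-data" \
  -F 'operation={
        "operationName": "CNF Terminate",
        "planDn": "<DN of the prepared CNF termination plan>",
        "operationAttributes": {
          "deintegrateMrbts": "true"
        }
      }' \
  https://<cluster hostname>/mantaraynm/cm/open-api/operations/v1/start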
For more information on the REST API operations in MantaRay NM, see the Starting an
operation chapter under CM Web Service RESTful API in MantaRay NM Operating
Documentation.
Postrequisites
You can check the status of the operation in the following places:
REST API using the saved operationId. For more information, see the Starting an
operation chapter under CM Web Service RESTful API in MantaRay NM Operating
Documentation.
CM Operation Manager›Operation history tab