FlexRAN Reference Solution NB-IoT
User Guide
April 2018
Intel Confidential
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software, or service activation. Learn
more at intel.com or from the OEM or retailer.
No computer system can be absolutely secure. Intel does not assume any liability for lost or stolen data or systems or any damages resulting from
such losses.
You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products
described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter
disclosed herein.
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
The products described may contain design defects or errors, known as errata, which may cause the product to deviate from published
specifications. Current characterized errata are available on request.
This document contains information on products, services, and/or processes in development. All information provided here is subject to change
without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.
Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular
purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.
Copies of documents that have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting
www.intel.com/design/literature.htm.
Intel, the Intel logo, Transcede, SpeedStep, and Xeon are trademarks of Intel Corporation in the United States and other countries.
Figures
Figure 1. FlexRAN NB-IoT Block Diagram (Co-deployment with FlexRAN LTE)
Figure 2. FlexRAN Software Stack
Tables
Table 1. Acronyms
Table 2. Reference Documents and Resources
Revision History
Document Number 575823, Revision 1.1: Correct typographical, description, and formatting errors (April 2018)
1 Introduction
The FlexRAN NB-IoT eNB L1 User Guide describes the overall setup and components
required to demonstrate the capabilities and performance of a FlexRAN NB-IoT
implementation based on an Intel® Xeon® processor using Wind River* Open
Virtualization Platform 6 (OVP6).
FlexRAN stands for a Flexible and Programmable Platform for Software-Defined Radio
Access Networks.
The goal of this document is to enable users to quickly configure, build, and run
FlexRAN NB-IoT L1 applications on an Intel® Xeon® processor-based platform. The NB-
IoT L1 application runs in standalone mode and uses a different binary than FlexRAN
LTE.
1.1 Overview
Figure 1. FlexRAN NB-IoT Block Diagram (Co-deployment with FlexRAN LTE)
[Figure 1 shows VM-1 running the NB-IoT PHY tasks with a WLS interface, connected
over Ethernet-1 to a FerryBridge NB-IoT FPGA in bypass mode. The FerryBridge API
carries IQ data over CPRI at a 1.92 MHz sampling rate to an NB-IoT standard radio
with a commercial NB-IoT UE, or to a TM500 + D500 setup.]
One central processing unit (CPU) or virtual machine (VM) can be used for one cell of
20 MHz bandwidth and two carriers per 3GPP Release 10. The MaxCore* chassis is
configured to map a PCIe* slot with an Intel® 82599ES 10 Gigabit Ethernet controller to
an Intel® Xeon® D CPU, which in turn is connected via 10 Gigabit optical Ethernet to a Ferry
Bridge field programmable gate array (FPGA) that converts Ethernet protocols to the
Common Public Radio Interface (CPRI). The Ferry Bridge FPGA’s CPRI port then
connects to an AceAxis* Advanced Radio Tester (ART) remote radio head (RRH). Radio
Frequency (RF) ports of the RRH are connected to either commercial user equipment
(CUEs) or a Cobham* TM500 multi-UE emulator. The Artesyn* blade runs the eNodeB
function of a long-term evolution (LTE) network. The setup includes an Evolved Packet
Core (EPC) and a Linux* FTP server/Cobham* D500 to perform end-to-end testing data
of traffic over the LTE network.
Note: For more details, refer to the FlexRAN LTE User Guide (refer to Table 2).
Another CPU (or VM) can be used for FlexRAN NB-IoT, as described in this document.
The FlexRAN NB-IoT L1 application runs in standalone mode, with a peak
Unacknowledged Mode DL throughput of 68 Kbps, and all IFFT/FFT/PRACH
calculations are done by the NB-IoT L1 application. The Ferry Bridge NB-IoT runs in
bypass mode, receiving and sending NB-IoT time-domain IQ sample data (the sampling
rate is 1.92 million (M) samples per second). The Intel Ferry Bridge FPGA module does
not support the NB-IoT sampling rate, so customers need to update their own FPGA
module to support NB-IoT.
For an NB-IoT standard radio, contact the vendor for an NB-IoT band radio (for
standalone mode, the NB-IoT radio band is the same as the GSM band). The proposed
setup also includes a Polaris Networks* NB-IoT EPC and a Cobham* TM500
(supporting NB-IoT).
1.2 Hardware
The hardware configuration used for NB-IoT testing:
Artesyn* SharpServer* PCIe-7410 with two 8-core Intel® Xeon® processors at 2.1
GHz
Intel® 82599 10 Gigabit Ethernet Controller (NIC)
Ferry Bridge FPGA module (customers update their own FPGA modules)
RRH—off-the-shelf RRH (needs to support the NB-IoT standalone radio band and
1.92M sampling rate)
FTP server
Cobham* D500 traffic generator
User equipment (UE) side:
Commercial standalone NB-IoT user equipment (UE) module
Cobham* TM500 Multi-UE emulator (standalone NB-IoT)
1-Gigabit switch
1.3 Acronyms
Table 1. Acronyms
Term  Description
PF    Physical Function
RF    Radio Frequency
UE    User Equipment
VF    Virtual Function
VM    Virtual Machine
WR    Wind River*

Table 2. Reference Documents and Resources
Resource                                          Link
MaxCore* Platform                                 https://www.artesyn.com/computing/products/product/max-core
Advanced Radio Tester Product Fact Sheet          http://aceaxis.co.uk/website/wp-content/uploads/2016/07/AdvancedRadioTester_July2016.pdf
Intel® C++ Compiler in Intel® Parallel Studio XE  https://software.intel.com/en-us/c-compilers/ipsxe
DPDK 17.11                                        http://dpdk.org/browse/dpdk/snapshot/dpdk-17.11.tar.gz
USRP Hardware Driver and USRP Manual              http://files.ettus.com/manual/page_install.html
The following versions of OVP6 have been tested with the current release of NB-IoT
eNodeB:
Host OS Version: Wind River* OVP 6.0.0.25
Guest OS Version: Wind River* OVP 6.0.0.23
In general, other versions of Wind River* OVP can be used as well, because there is no
direct dependency between the NB-IoT eNodeB application and OVP components.
However, Intel recommends using a proven combination of versions to avoid
unexpected issues with the build and configuration of the FlexRAN application.
The default configuration of the PCIe switch on the Artesyn* MaxCore* chassis does not
support enabling the CONFIG_PCIEASPM option for PCIe Power Management. Use the
Linux* kernel boot option pcie_aspm=off for host and guest to disable PCIe power
management.
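For example, assuming the grub.cfg layout shown later in this guide, the option is
simply appended to the kernel command line of the boot entry (an illustrative sketch,
not the exact host configuration):
menuentry 'boot' {
    linux /vmlinuz LABEL=boot console=ttyS0,38400 root=/dev/ram0 pcie_aspm=off
    initrd /initrd
}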
2.1.1 Configure
PROJECT_HOME=`pwd`
export WIND_HOME=/home/user/wr-ovp6   # location where wr-ovp6 is installed
cd $WIND_HOME
mkdir -p sscache
export SSTATE_DIR=$WIND_HOME/sscache
mkdir -p ccache
export CCACHE_DIR=$WIND_HOME/ccache
export BSP=x86-64-kvm-guest
export ROOTFS=ovp-guest+initramfs-integrated
cd $PROJECT_HOME
export PROJECT=$PROJECT_HOME/ovp-guest
mkdir -p $PROJECT
cd $PROJECT
$WIND_HOME/wrlinux-6/wrlinux/configure \
--with-rcpl-version=0023 \
--enable-board=$BSP \
--enable-rootfs=$ROOTFS \
--enable-parallel-pkgbuilds=6 \
--enable-jobs=6 \
--enable-ccache=yes \
--with-ccache-dir=$WIND_HOME/ccache \
--with-sstate-dir=$WIND_HOME/sstate \
--enable-reconfig \
--enable-addons=wr-ovp \
--with-template=feature/debug,feature/gdb,feature/target-toolchain
2.1.2 Compile
# make
2.2.1 Configure
#!/bin/sh
PROJECT_HOME=`pwd`
export WIND_HOME=/home/user/wr-ovp6   # location where wr-ovp6 is installed
cd $WIND_HOME
mkdir -p sscache
export SSTATE_DIR=$WIND_HOME/sscache
mkdir -p ccache
export CCACHE_DIR=$WIND_HOME/ccache
export BSP=intel-x86-64
export ROOTFS=ovp-kvm+installer-support
cd $PROJECT_HOME
export PROJECT=$PROJECT_HOME/ovirt-node-ovp-guest
mkdir -p $PROJECT
cd $PROJECT
#configure ovp-host
$WIND_HOME/wrlinux-6/wrlinux/configure \
--with-rcpl-version=0025 \
--enable-board=$BSP \
--enable-rootfs=$ROOTFS \
--enable-addons=wr-ovp \
--with-layer=wr-kvm-binary-guest-images \
--with-kvm-guest-kernel=$PROJECT_HOME/ovp-guest/export/images/bzImage-x86-64-kvm-guest.bin \
--with-kvm-guest-img=$PROJECT_HOME/ovp-guest/export/images/wrlinux-image-initramfs-x86-64-kvm-guest.cpio.gz \
--enable-parallel-pkgbuilds=6 \
--enable-jobs=6 \
--enable-ccache=yes \
--with-ccache-dir=$WIND_HOME/ccache \
--with-sstate-dir=$WIND_HOME/sstate \
--enable-reconfig \
--enable-internet-download=yes \
--with-template=feature/debug,feature/gdb,feature/self-hosted,feature/target-toolchain,feature/kernel-tune,feature/libhugetlbfs,feature/analysis,feature/system-stats
2.2.2 Compile
# make
2.3.1 Configure
#!/bin/sh
PROJECT_HOME=`pwd`
export WIND_HOME=/home/user/wr-ovp6   # location where the wr-ovp6 SDK is installed
cd $WIND_HOME
mkdir -p sscache
export SSTATE_DIR=$WIND_HOME/sscache
mkdir -p ccache
export CCACHE_DIR=$WIND_HOME/ccache
export BSP=intel-x86-64
export KERNEL=standard
export ROOTFS=wr-installer
cd $PROJECT_HOME
export PROJECT=$PROJECT_HOME/ovirt-node-ovp-guest-installer
mkdir -p $PROJECT
cd $PROJECT
$WIND_HOME/wrlinux-6/wrlinux/configure \
--with-rcpl-version=0023 \
--enable-board=$BSP \
--enable-kernel=$KERNEL \
--enable-rootfs=$ROOTFS \
--enable-target-installer=yes \
--enable-bootimage=iso \
--with-installer-target-build=$PROJECT_HOME/ovirt-node-ovp-guest/export/images/wrlinux-image-ovp-kvm-intel-x86-64.ext3 \
--enable-parallel-pkgbuilds=6 \
--enable-jobs=6 \
--enable-ccache=yes \
--with-ccache-dir=$WIND_HOME/ccache \
--with-sstate-dir=$WIND_HOME/sstate \
--enable-reconfig
2.3.2 Compile
# make
# make usb-image
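The resulting installer image can then be written to a USB stick. A minimal sketch,
assuming the image is produced under export/images/ (the exact file name depends on
the build) and /dev/sdX is the USB device; verify the device name before writing:
# Write the USB installer image to a USB stick (paths are examples)
sudo dd if=export/images/<installer-image> of=/dev/sdX bs=4M conv=fsync
sync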
EFI/BOOT/grub.cfg
# Automatically created by OE
#serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
serial --speed 38400 --unit=0 --word=8 --parity=no --stop=1
default=boot
timeout=10
menuentry 'boot'{
#linux /vmlinuz LABEL=boot root=/dev/ram0
linux /vmlinuz LABEL=boot console=ttyS0,38400 root=/dev/ram0
initrd /initrd
}
Use the USB installer image to install the host image on the target hardware.
Installation of Wind River* images using the installer is similar to a typical Linux*
distribution installation process. The Wind River Linux User Guide, 6.0 (refer to Table 2)
provides additional information on how to install WR Linux using the installer image.
<Host>#cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.91-ovp-rt97-WR6.0.0.25_preempt-rt
root=/dev/sda2 rw console=ttyS0,38400 clock=pit rcu_nocbs=1-7
rcu_nocb_poll=1 isolcpus=1-7 irqaffinity=0 tsc=perfect selinux=0
enforcing=0 intel_iommu=on iommu=pt default_hugepagesz=1G
hugepagesz=1G hugepages=21 nohz_full=1-7 intel_pstate=disable
idle=poll noswap pci=pcie_bus_perf
Overall, the Intel® Xeon® processor running the eNodeB application requires two virtual
Ethernet (VE) ports—one for the host system to perform control and management of
VMs, and one for the guest S1/eGTP connection to the Evolved Packet Core (EPC).
For an RRH connection via Ferry Bridge, the dual port Intel® 82599ES 10 Gigabit
Ethernet controller has to be mapped to the CPU and configured in pass-through mode
to be used by the virtual machine.
From the master CPU on the MaxCore* Platform (slot 1 CPU 1), the following
commands need to be executed to correctly set up networking for the FlexRAN
application. Any given port number and slot number are provided as an example, and
users are expected to modify the settings according to the physical configuration of the
chassis being used. For more information, refer to the FlexRAN Reference Solution L1
User Guide (refer to Table 2).
After performing the commands, the system must be power cycled. The new settings
are applied on the next boot of the master CPU (slot 1 CPU 1).
The given configuration uses the slot 16 device as a port physically connected to the
external network (where the EPC is connected). Describing a configuration where the
EPC runs on the same MaxCore* chassis is outside the scope of this document.
ethtool -i eth0
driver: ixgbevf
version: 2.7.12-k
firmware-version:
bus-info: 0000:08:11.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no
cat /etc/network/interfaces
# The loopback interface
auto lo
iface lo inet loopback
# Wireless interfaces
iface wlan0 inet dhcp
wireless_mode managed
wireless_essid any
wpa-driver wext
wpa-conf /etc/wpa_supplicant.conf
Here, 08:11.2 is the control-plane and data-plane interface of the eNodeB application,
and 09:00.0 is the 10-Gigabit interface for the RRH connection. The corresponding ports
mapped to the VM can be configured via the QEMU* start-up script, as follows:
#cat /opt/vm/q_vm1_xd.sh
…
export VM_HOME=`pwd`
export AFFINITY_ARGS="taskset -c 1,2,3,4,5,6,7"
export QEMU=/usr/bin/qemu-system-x86_64
export DPDK_ARGS="-c 0x0f -n 4 --proc-type=secondary -- -enable-dpdk"
$AFFINITY_ARGS $QEMU \
-cpu host \
-nographic \
-k en-us \
-m 20480 \
-mem-path /mnt/huge \
-mem-prealloc \
-mlock \
-name GUEST_1 \
-enable-kvm \
-smp cpus=7,cores=7,threads=1,sockets=1 \
-vcpu 0,affinity=0x0002,prio=0 \
-vcpu 1,affinity=0x0004,prio=95 \
-vcpu 2,affinity=0x0008,prio=95 \
-vcpu 3,affinity=0x0010,prio=95 \
-vcpu 4,affinity=0x0020,prio=95 \
-vcpu 5,affinity=0x0040,prio=95 \
-vcpu 6,affinity=0x0080,prio=95 \
-kernel $VM_HOME/vm1/bzImage_3.10.91-ovp-rt97-WR6.0.0.25_preempt-rt \
-append 'root=/dev/vda ro console=ttyS0 isolcpus=1-6 irqaffinity=0 nohz_full=0-6 clocksource=tsc tsc=perfect selinux=0 enforcing=0 default_hugepagesz=1G hugepagesz=1G hugepages=10 noswap idle=poll' \
-drive file=$VM_HOME/vm1/wrlinux-image-ovp-kvm-x86-64-kvm-guest.ext3,if=virtio \
-drive file=$VM_HOME/vm1/disk_ph3.qcow2,if=none,id=drive-ide0-0-0,format=qcow2 \
-device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 \
-device pci-assign,host=0000:09:00.0,vcpuaffine \
-device pci-assign,host=0000:09:00.1,vcpuaffine \
-device pci-assign,host=0000:08:11.2,vcpuaffine
After the VM boots, the corresponding Ethernet port configuration has to be done for
the guest, as shown:
tm500-xeon-d-vm0:#cat /etc/network/interfaces
# /etc/network/interfaces -- configuration file for ifup(8), ifdown(8)
# Wireless interfaces
iface wlan0 inet dhcp
wireless_mode managed
wireless_essid any
wpa-driver wext
wpa-conf /etc/wpa_supplicant.conf
tm500-xeon-d-vm0:#ifconfig
eth1 Link encap:Ethernet HWaddr 02:01:00:10:01:07
inet addr:10.233.183.18 Bcast:10.233.183.255 Mask:255.255.252.0
inet6 addr: fe80::1:ff:fe10:107/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2883 errors:0 dropped:1 overruns:0 frame:0
TX packets:59 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:380166 (371.2 KiB) TX bytes:10244 (10.0 KiB)
4. Configure DPDK:
/opt/dpdk-17.11# cd ./tools/
/opt/dpdk-17.11/tools# ./dpdk-setup.sh
Select
[14] x86_64-native-linuxapp-icc
Select
[16] Insert IGB UIO module
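The same result can be achieved non-interactively. A sketch assuming DPDK 17.11 is
installed under /opt/dpdk-17.11 and the ICC toolchain is in the PATH:
# Build the ICC target, then load the UIO modules by hand
cd /opt/dpdk-17.11
make install T=x86_64-native-linuxapp-icc
sudo modprobe uio
sudo insmod ./x86_64-native-linuxapp-icc/kmod/igb_uio.ko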
The following shows the DPDK patch to compile 17.11 with OVP6 and IRQ mode in user
space:
            IXGBE_MAX_INTR_QUEUE_NUM);
-        return -ENOTSUP;
+        // return -ENOTSUP;
+        intr_vector = IXGBE_MAX_INTR_QUEUE_NUM;
     }
     if (rte_intr_efd_enable(intr_handle, intr_vector))
         return -1;
The following commands are used for downloading, building, and installing CMake:
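The exact commands depend on the environment; a typical from-source sequence (the
version and download URL are assumptions, any recent CMake release works) is:
# Download, build, and install CMake from source (version is an example)
wget https://cmake.org/files/v3.9/cmake-3.9.6.tar.gz
tar xzf cmake-3.9.6.tar.gz
cd cmake-3.9.6
./bootstrap
make
sudo make install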
To extract all components of the FlexRAN release, use extract.sh. For example:
/home/turner/work/flexran
├── bin/lte/l1/lte_phy_nbiot
├── build/lte/l1app_nbiot
├── doxygen
├── ferrybridge
├── framework
├── misc
├── sdk
├── source/lte/threads_nbiot
├── source/lte/fec_dec_nbiot
├── source/lte/api_nbiot
├── source/lte/dl_mod_nbiot
├── source/lte/ul_demod_nbiot
├── wls_libs
└── wls_mod
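1. Set the environment variables for the SDK build. Use the AVX2 block or the
AVX512 block, depending on the target CPU (see Step 5):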
export SDK_BUILD=build-avx2-icc
export WIRELESS_SDK_TARGET_ISA=avx2
export DIR_WIRELESS_SDK=${DIR_WIRELESS_SDK_ROOT}/${SDK_BUILD}
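For an AVX-512 target: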
export SDK_BUILD=build-avx512-icc
export WIRELESS_SDK_TARGET_ISA=avx512
export DIR_WIRELESS_SDK=${DIR_WIRELESS_SDK_ROOT}/${SDK_BUILD}
2. For a first-time installation, build the Ferry Bridge library, as follows:
cd ./ferrybridge/lib
make clean
make
After this step is completed in a new setup, there is no need to rebuild the Ferry
Bridge library in order to build the L1 FlexRAN application.
3. For a first-time installation, build the mlog library, as follows:
cd ./wls_lib/mlog
./build.sh
After this step is completed in a new setup, there is no need to rebuild the mlog
library.
4. For a first-time installation, build the wls interface module and library, as follows:
cd ./wls_mod/
./build.sh clean
./build.sh
After this step is completed for a new setup, there is no need to rebuild libwls.so
and wls.ko.
5. Build the wireless SDK component. The SDK supports compilation for either the
AVX2 or AVX512 system, depending on your environment variables, as per Step 1:
cd /home/turner/work/flexran/sdk/
./create-makefiles-linux.sh
cd build-avx2-icc
make install
cd build-avx512-icc
make install
cd /home/turner/work/framework/bbupool
make clean
make
Note: The default compilation with BBU_POOL=1 does not support compilation with
FEC_HW_ACCEL enabled.
cd ../build/l1app_nbiot
./build.sh xclean
./build.sh
/build/l1app_nbiot$ ./build.sh
Number of commandline: 0
Build using xHost params
RELEASEBUILD=1
BUILD_OPT: Host
DEV_DETECT: NON_AVX512
CPU_TYPE: 0
COMMAND_LINE=
====================================================
Building l1app:
BUILD_OPT = Host
RTE_TARGET = x86_64-native-linuxapp-icc
DEVICE = NON_AVX512
RTE_SDK = /opt/dpdk-17.11
DIR_WIRELESS_SDK = /home/turner/work/flexran/sdk/build-avx2-icc
DIR_WIRELESS_FW = /home/turner/work/flexran/sdk/framework
WCK_DIR =
BUILD = Release
BBU_POOL = 1
FEC_HW_ACCEL = 0
====================================================
====================================================
[BUILD] lib : physrc
[AR] libphysrc_r.a
====================================================
[BUILD] lib : auxlib
[AR] libauxlib_r.a
====================================================
./lte_phy/lte_phy_nbiot
|-- dpdk.sh
|-- l1.sh
|-- l1app
|-- phycfg.xml
`-- phycfg_timer.xml
export DIR_WIRELESS_SDK_ROOT=/home/turner/work/flexran/sdk
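# For an AVX2 target: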
export SDK_BUILD=build-avx2-icc
export WIRELESS_SDK_TARGET_ISA=avx2
export DIR_WIRELESS_SDK=${DIR_WIRELESS_SDK_ROOT}/${SDK_BUILD}
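# For an AVX-512 target: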
export SDK_BUILD=build-avx512-icc
export WIRELESS_SDK_TARGET_ISA=avx512
export DIR_WIRELESS_SDK=${DIR_WIRELESS_SDK_ROOT}/${SDK_BUILD}
cd /home/turner/work/flexran/sdk/
./create-makefiles-linux.sh
cd build-avx2-icc
make install
cd build-avx512-icc
make install
cd /home/turner/work/flexran/build/l1driver_nbiot
./build.sh xclean
./build.sh
/home/turner/work/flexran/lte_l1driver/nbiot/
#./config_host.sh
#./dpdk.sh
./q_vm1_xd.sh
# The above command starts the VM and redirects the VM console to the
# same shell where the script is launched.
Press Enter to get the login prompt.
4.5.3 Start L1
To start L1, copy the L1 binaries and start-up scripts to the VM. In the example, the
location of the L1 application is /home/turner/work/, and the location of the L2
application is /home/turner/work/rsys/bundle/.
The dpdk.sh script has to be updated with the correct PCIe bus information for a given
PCIe NIC, as follows:
#
00:04.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
#
#cat ./dpdk.sh
…
$RTE_SDK/tools/dpdk_nic_bind.py --bind=igb_uio 0000:00:04.0
…
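To verify the binding, the same dpdk_nic_bind.py script used above can report the
current driver assignment for each port, for example:
$RTE_SDK/tools/dpdk_nic_bind.py --status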
<ferryBridgeEthPort>1</ferryBridgeEthPort>
<!-- FB Synchronized CPRI ports [0 - no reSync REC & RE FPGA, 1 - reSync REC & REC FPGA] -->
<ferryBridgeSyncPorts>0</ferryBridgeSyncPorts>
<!-- FB Loopback Mode [0 - no optical loopback connected REC<->RE, 1 - optical loopback connected REC<->RE] -->
<ferryBridgeOptCableLoopback>0</ferryBridgeOptCableLoopback>
<radioCfg0PCIeEthDev>0000:00:04.0</radioCfg0PCIeEthDev>
<!-- DPDK: RX Thread core id [0-max core] -->
<radioCfg0DpdkRx>1</radioCfg0DpdkRx>
<!-- DPDK: TX Thread core id [0-max core] -->
<radioCfg0DpdkTx>2</radioCfg0DpdkTx>
<!-- Number of Tx Antenna [1, 2, 4] -->
<radioCfg0TxAnt>1</radioCfg0TxAnt>
<!-- Number of Rx Antenna [1, 2, 4] -->
<radioCfg0RxAnt>1</radioCfg0RxAnt>
<!-- Rx AGC configuration [0 - Rx AGC disabled, 1 - Rx AGC enabled (default for fpga release 1.3.1)] -->
<radioCfg0RxAgc>0</radioCfg0RxAgc>
<!-- Number of cells running on this port [1 - Cell, 2 - Cells] -->
<radioCfg0NumCell>1</radioCfg0NumCell>
<!-- First Phy instance ID mapped to this port [0-7 - valid range of PHY instance for first cell] -->
<radioCfg0Cell0PhyId>0</radioCfg0Cell0PhyId>
<!-- Second Phy instance ID mapped to this port [0-7 - valid range of PHY instance for second cell] -->
<radioCfg0Cell1PhyId>1</radioCfg0Cell1PhyId>
</RadioConfig0>
cd /home/turner/work/l1_sw/bin/lte/l1/lte_phy_nbiot
export RTE_SDK=/opt/dpdk-17.11
export DIR_WIRELESS_SDK=/home/turner/work/sdk
./l1.sh
Note: After starting the L1 application, wait for at least 10 seconds before starting the L2
application.
...
<!-- Frame Work Cores (Bit mask of all cores that are used for BBU Pool in decimal) -->
<bbuPoolCores>28</bbuPoolCores>
<!-- Frame Work Core Priority -->
<bbuPoolCorePriority>94</bbuPoolCorePriority>
<!-- Frame Work Core Policy 0: SCHED_FIFO 1: SCHED_RR -->
<bbuPoolCorePolicy>0</bbuPoolCorePolicy>
</Threads>
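The bbuPoolCores value is a decimal bit mask of core IDs: 28 is binary 11100, which
assigns cores 2, 3, and 4 to the BBU pool. A quick shell check:
# 28 decimal -> 11100 binary -> cores 2, 3, and 4
echo 'obase=2; 28' | bc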
4.5.4 Start L2
Starting L2 depends on the L2 vendor. Integration testing with the Radisys* L2 is
currently ongoing, so this section serves as a placeholder. For additional information
about starting L2, contact Radisys*.
===================================
0000:02:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=eth32 drv=ixgbe unused=igb_uio
0000:02:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=eth34 drv=ixgbe unused=igb_uio
0000:04:00.0 'I350 Gigabit Network Connection' if=eth31 drv=igb unused=igb_uio
0000:04:00.3 'I350 Gigabit Network Connection' if=eth30 drv=igb unused=igb_uio
0000:81:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=eth33 drv=ixgbe unused=igb_uio
0000:81:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=eth35 drv=ixgbe unused=igb_uio
---------------------------
Apply config..
cline_print_info:Incomming settings:
--version=6.2
--mac2PhyBatchApi=1
--phy2MacBatchApi=1
--successiveNoApi=15
--wls_dev_name=/dev/wls
--dlIqLog=0
--ulIqLog=0
--iqLogDumpToFile=0
--phyMlog=1
--phyStats=1
--radioEnable=1
--dpdkMemorySize=1024
--dpdkIrqMode=1
--ferryBridgeMode=0
--ferryBridgeEthPort=1
--ferryBridgeSyncPorts=0
--ferryBridgeOptCableLoopback=0
--radioCfg0PCIeEthDev=0000:08:00.0
--radioCfg0DpdkRx=1
--radioCfg0DpdkTx=2
--radioCfg0TxAnt=4
--radioCfg0RxAnt=4
--radioCfg0RxAgc=0
--radioCfg1PCIeEthDev=0000:00:05.0
--radioCfg1DpdkRx=5
--radioCfg1DpdkTx=6
--radioCfg1TxAnt=4
--radioCfg1RxAnt=4
--radioCfg1RxAgc=0
--radioCfg2PCIeEthDev=0000:00:06.0
--radioCfg2DpdkRx=5
--radioCfg2DpdkTx=6
--radioCfg2TxAnt=4
--radioCfg2RxAnt=4
--radioCfg2RxAgc=0
--radioCfg3PCIeEthDev=0000:00:07.0
--radioCfg3DpdkRx=5
--radioCfg3DpdkTx=6
--radioCfg3TxAnt=4
--radioCfg3RxAnt=4
--radioCfg3RxAgc=0
--radioCfg4PCIeEthDev=0000:00:08.0
--radioCfg4DpdkRx=5
--radioCfg4DpdkTx=6
--radioCfg4TxAnt=4
--radioCfg4RxAnt=4
--radioCfg4RxAgc=0
--radioPort0=0
--radioPort1=1
--radioPort2=0
--radioPort3=1
--taFiltEnable=1
--ircEnable=0
--mmseDisable=0
--pucchFormat2DetectThreshold=0
--prachDetectThreshold=100
--MlogSubframes=256
--MlogCores=8
--MlogSize=3084
--apiThread=2
--prachThread=4
--fftMainThread=3
--fftProc0Thread=3
--fftProc1Thread=4
--fftProc2Thread=5
--fftProc3Thread=6
--ifftMainThread=2
--ifftProc0Thread=2
--ifftProc1Thread=4
--ifftProc2Thread=5
--ifftProc3Thread=6
--dlMainThread=2
--dlProc0Thread=6
--dlProc1Thread=6
--dlProc2Thread=2
--dlProc3Thread=6
--ulMainThread=3
--ulProc0Thread=4
--ulProc1Thread=4
--ulProc2Thread=3
--ulProc3Thread=4
--wlsThread=2
--radioDpdkMaster=0
--systemThread=0
--timerThread=0
--singleCore=0
sys_init:Initialization
cmgr_init:initialization of console
System clock (rdtsc) resolution 2294693120 [Hz]
Ticks per us 2294 []
tlMlogInit: resource_freq: 2294, mlog_mask: 0xffffffff, filename: mlog
set mlog_mask = 0xffffffff
set filename = mlog.bin
MLogOpen: filename(mlog.bin) mlogSubframes (256), mlogCores(8), mlogSize(3084)
mlogSubframes (256), mlogCores(8), mlogSize(3084)
MLOG not opened!!!
MLog Storage: 0x7f6a05fc8010 -> 0x7f6a065cec3c
MLogInitializeMlogBuffer
The validation is based on IQ data verification for the downlink with a fixed TestMAC
test configuration. For the uplink, validation is based on uplink IQ data and a fixed
TestMAC test configuration. The tests cover the L2-L1 API and NPSS, NSSS, NPBCH,
NPRACH, NPDSCH, NPDCCH, and NPUSCH physical processing.
Note: The PHY code is ported directly from an NB-IoT small cell product that has
already passed an operator's field trial (end-to-end system, including UE and EPC)
together with a partner L2/L3. For FlexRAN NB-IoT, integration tests with the L2/L3
are ongoing (frequency Band 8).
Other scenarios are outside of the scope of this User Guide and are not supported.
More information on features and limitations can be found in the FlexRAN Reference
Solution Software v1.5.0 Release Notes (refer to Table 2).
5.3 eNodeB
The eNodeB configuration is performed via the configuration file wr_cfg.txt, which is
part of the Radisys* package for the L2 application.
Note: Radisys* L2 integration testing with the FlexRAN NB-IoT L1 is currently ongoing.
<Radio>
<radioEnable>1</radioEnable>
<dpdkMemorySize>6144</dpdkMemorySize>
<dpdkIrqMode>0</dpdkIrqMode>
<ferryBridgeMode>1</ferryBridgeMode>
<ferryBridgeEthPort>1</ferryBridgeEthPort>
<ferryBridgeSyncPorts>0</ferryBridgeSyncPorts>
<ferryBridgeOptCableLoopback>0</ferryBridgeOptCableLoopback>
5.4 EPC
FlexRAN NB-IoT is ported from the Intel® Transcede™ NB-IoT L1, which passed IODT
with the ZTE* commercial NB-IoT EPC. For an NB-IoT EPC simulator, contact
Polaris Networks* or Radisys*.
For now, Intel uses an NI* USRP as the integration test setup with the Radisys* L2/L3,
because Ferry Bridge does not support NB-IoT. The test radio band is Band 8. USRP
mezzanine cards support different band ranges; Intel suggests using the UBX-160 USRP
mezzanine card.
Note: The VM does not need to be restarted between eNodeB sessions. Restarting the L2 and
L1 applications is sufficient.
A Appendix
A.1 Recommended Kernel Configuration for Host and Guest OS
Wind River 6.0 build
recipes-kernel/linux/linux-windriver/config_baseline.cfg
CONFIG_LOCALVERSION="-WR6.0.0.25_preempt-rt"
#
# Power management and ACPI options
#
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
# CONFIG_HIBERNATION is not set
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
# CONFIG_PM_AUTOSLEEP is not set
# CONFIG_PM_WAKELOCKS is not set
# CONFIG_PM_RUNTIME is not set
CONFIG_PM=y
# CONFIG_PM_DEBUG is not set
# CONFIG_WQ_POWER_EFFICIENT_DEFAULT is not set
CONFIG_ACPI=y
CONFIG_ACPI_SLEEP=y
# CONFIG_ACPI_PROCFS is not set
# CONFIG_ACPI_PROCFS_POWER is not set
# CONFIG_ACPI_EC_DEBUGFS is not set
# CONFIG_ACPI_AC is not set
# CONFIG_ACPI_BATTERY is not set
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_VIDEO=m
CONFIG_ACPI_FAN=y
# CONFIG_ACPI_DOCK is not set
# CONFIG_ACPI_PROCESSOR is not set
# CONFIG_ACPI_IPMI is not set
CONFIG_ACPI_NUMA=y
CONFIG_ACPI_CUSTOM_DSDT_FILE=""
# CONFIG_ACPI_CUSTOM_DSDT is not set
#
# CPU Frequency scaling
#
# CONFIG_CPU_FREQ is not set
#
# CPU Idle
#
# CONFIG_CPU_IDLE is not set
# CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is not set
CONFIG_PCI_IOV=y
CONFIG_CRYPTO_ZLIB=y
CONFIG_CRYPTO_LZO=y
CPU Frequency scaling and power control on CPU and PCIe were disabled to guarantee
the response time for the real-time application.
The destination for the kernel sources in the VM can be configured as per Linux kernel
build scripts for any typical Linux system. For example, the following links can be set up
to allow a successful build:
root@tm500-xeon-d-vm0:/lib/modules/3.10.87-ovp-rt93-WR6.0.0.23_preempt-rt# ls -l
lrwxrwxrwx 1 root root 59 Dec 4 2015 build -> /hdd2/kernels/kernel-3.10.87-ovp-rt93-wr6.0.0.23-preempt-rt
lrwxrwxrwx 1 root root 59 Aug 12 22:21 source -> /hdd2/kernels/kernel-3.10.87-ovp-rt93-wr6.0.0.23-preempt-rt
A.3.1 pci_linux-suppress-cc-reset.patch
+ return 0;
+}
+
A.3.2 pci_linux-suppress-cc-save.patch
A.3.3 pci_qemu-pm-disable.patch
+    if (cap == PCI_CAP_ID_PM) {
+        printf(" Device %x: PCI_CAP_ID_PM(%d) skipped\n", d->devfn, cap);
+        return 0;
+    }
+
status = assigned_dev_pci_read_byte(d, PCI_STATUS);
if ((status & PCI_STATUS_CAP_LIST) == 0) {
return 0;
A.4.1 config_host.sh
#cat /opt/vm/config_host.sh
# *
# * Copyright 2009-2014 Intel Corporation All Rights Reserved.
# *
# * This program is free software; you can redistribute it and/or modify
# * it under the terms of version 2 of the GNU General Public License as
# * published by the Free Software Foundation.
# *
# * This program is distributed in the hope that it will be useful, but
# * WITHOUT ANY WARRANTY; without even the implied warranty of
# * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# * General Public License for more details.
# *
# * You should have received a copy of the GNU General Public License
# * along with this program; if not, write to the Free Software
# * Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
# * The full GNU General Public License is included in this distribution
# * in the file called LICENSE.GPL.
# *
# * Contact Information:
# * Intel Corporation
# *
/etc/init.d/glusterd stop
/etc/init.d/openvswitch-controller stop
/etc/init.d/openvswitch-switch stop
/etc/init.d/libvirtd stop
/etc/init.d/sanlock stop
pkill wdmd
/etc/init.d/nfsserver stop
service auditd stop # now
chkconfig auditd off # after reboot
/etc/init.d/auditd stop
pkill distccd
/etc/init.d/crond stop
ulimit -c unlimited
MACHINE_TYPE=`uname -m`
ulimit -c unlimited
echo 1 > /proc/sys/kernel/core_uses_pid
sysctl -w kernel.sched_rt_runtime_us=-1
sysctl -w kernel.sched_rt_period_us=-1
for c in $(ls -d /sys/devices/system/cpu/cpu[0-9]*); do echo performance > $c/cpufreq/scaling_governor; done
echo 0 > /proc/sys/kernel/nmi_watchdog
echo 1 > /sys/module/rcupdate/parameters/rcu_cpu_stall_suppress
else
echo "Machine type is not supported $MACHINE_TYPE"
exit -1
fi
exit 0
A.4.2 dpdk.sh
#cat /opt/vm/dpdk.sh
#! /bin/bash
export RTE_SDK=/opt/dpdk-17.11
export RTE_TARGET=x86_64-native-linuxapp-icc
#
# Unloads igb_uio.ko.
#
remove_igb_uio_module()
{
echo "Unloading any existing DPDK UIO module"
/sbin/lsmod | grep -s igb_uio > /dev/null
if [ $? -eq 0 ] ; then
sudo /sbin/rmmod igb_uio
fi
}
#
# Loads new igb_uio.ko (and uio module if needed).
#
load_igb_uio_module()
{
    if [ ! -f $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko ]; then
        echo "## ERROR: Target does not have the DPDK UIO Kernel Module."
        echo "   To fix, please try to rebuild target."
        return
    fi
    remove_igb_uio_module
    if [ -f /lib/modules/$(uname -r)/kernel/drivers/uio/uio.ko ] ; then
        echo "Loading uio module"
        sudo /sbin/modprobe uio
    fi
    # Load the freshly built DPDK igb_uio kernel module
    echo "Loading DPDK UIO module"
    sudo /sbin/insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
}
load_igb_uio_module
A.4.3 q_vm1_xd.sh
A detailed description of QEMU Hypervisor configuration is available in the Wind River*
OVP6 User Guide (refer to Table 2) and corresponding documentation.
#cat /opt/vm/q_vm1_xd.sh
export VM_HOME=`pwd`
export AFFINITY_ARGS="taskset -c 1,2,3,4,5,6,7"
export QEMU=/usr/bin/qemu-system-x86_64
export DPDK_ARGS="-c 0x0f -n 4 --proc-type=secondary -- -enable-dpdk"
$AFFINITY_ARGS $QEMU \
-cpu host \
-nographic \
-k en-us \
-m 20480 \
-mem-path /mnt/huge \
-mem-prealloc \
-mlock \
-name GUEST_1 \
-enable-kvm \
-smp cpus=7,cores=7,threads=1,sockets=1 \
-vcpu 0,affinity=0x0002,prio=0 \
-vcpu 1,affinity=0x0004,prio=95 \
-vcpu 2,affinity=0x0008,prio=95 \
-vcpu 3,affinity=0x0010,prio=95 \
-vcpu 4,affinity=0x0020,prio=95 \
-vcpu 5,affinity=0x0040,prio=95 \
-vcpu 6,affinity=0x0080,prio=95 \
-kernel $VM_HOME/vm1/bzImage_3.10.91-ovp-rt97-WR6.0.0.25_preempt-rt \
-append 'root=/dev/vda ro console=ttyS0 isolcpus=1-6 irqaffinity=0 nohz_full=0-6 clocksource=tsc tsc=perfect selinux=0 enforcing=0 default_hugepagesz=1G hugepagesz=1G hugepages=10 noswap idle=poll' \
-drive file=$VM_HOME/vm1/wrlinux-image-ovp-kvm-x86-64-kvm-guest.ext3,if=virtio \
-drive file=$VM_HOME/vm1/disk_ph3.qcow2,if=none,id=drive-ide0-0-0,format=qcow2 \
-device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 \
-device pci-assign,host=0000:09:00.0,vcpuaffine \
-device pci-assign,host=0000:09:00.1,vcpuaffine \
-device pci-assign,host=0000:08:11.2,vcpuaffine \
-fsdev local,security_model=passthrough,id=fsdev-fs0,path=/ \
-device virtio-9p-pci,id=fs0,fsdev=fsdev-fs0,mount_tag=hdd,bus=pci.0,addr=0x7
Note: The default OVP6 build does not include a virtualized HDD. It is possible to add this
feature using a standard virtualization technique.
Note: By default, the /bin/sh command is a symbolic link to /bin/dash. Redirect the
link so that /bin/sh points to /bin/bash:
# ln -s -f /bin/bash /bin/sh
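To confirm the change:
# /bin/sh should now point at /bin/bash
ls -l /bin/sh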