
Oracle® Communications

OC-CNE Installation Guide

Release 1.0
F16979-01
July 2019
Oracle Communications OC-CNE Installation Guide, Release 1.0

F16979-01

Copyright © 2019, Oracle and/or its affiliates. All rights reserved.

This software and related documentation are provided under a license agreement containing restrictions on use and
disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or
allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit,
perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation
of this software, unless required by law for interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find
any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of
the U.S. Government, then the following notice is applicable:

U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any
programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial
computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental
regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating
system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license
terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.

This software or hardware is developed for general use in a variety of information management applications. It is not
developed or intended for use in any inherently dangerous applications, including applications that may create a risk of
personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all
appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates
disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their
respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under
license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and
the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered
trademark of The Open Group.

This software or hardware and documentation may provide access to or information about content, products, and
services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all
warranties of any kind with respect to third-party content, products, and services unless otherwise set forth in an
applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be responsible for any loss,
costs, or damages incurred due to your access to or use of third-party content, products, or services, except as set forth in
an applicable agreement between you and Oracle.
Contents

1 Introduction
Glossary 1-1
Key terms 1-1
Key Acronyms and Abbreviations 1-2
Overview 1-3
OCCNE Installation Overview 1-3
Frame and Component Overview 1-4
Frame Overview 1-4
Host Designations 1-5
Node Roles 1-6
Transient Roles 1-7
Create OCCNE Instance 1-8
How to use this document 1-10
Documentation Admonishments 1-11
Locate Product Documentation on the Oracle Help Center Site 1-12
Customer Training 1-12
My Oracle Support 1-12
Emergency Response 1-13

2 Installation Prerequisites
Obtain Site Data and Verify Site Installation 2-1
Configure Artifact Acquisition and Hosting 2-1
Oracle eDelivery Artifact Acquisition 2-1
Third Party Artifacts 2-1
Populate the MetalLB Configuration 2-2

3 Install Procedure
Initial Configuration - Prepare a Minimal Boot Strapping Environment 3-1
Installation of Oracle Linux 7.5 on Bootstrap Host 3-1
Configure the Installer Bootstrap Host BIOS 3-8
Configure Top of Rack 93180YC-EX Switches 3-14

Configure Addresses for RMS iLOs, OA, EBIPA 3-27
Configure Legacy BIOS on Remaining Hosts 3-35
Configure Enclosure Switches 3-41
Bastion Host Installation 3-47
Install Host OS onto RMS2 from the Installer Bootstrap Host (RMS1) 3-48
Installation of the Bastion Host 3-57
Configuration of the Bastion Host 3-63
Software Installation Procedures - Automated Installation 3-72
Oracle Linux OS Installer 3-72
Install Backup Bastion Host 3-81
Database Tier Installer 3-82
OCCNE Kubernetes Installer 3-86
OCCNE Automated Initial Configuration 3-89

4 Post Installation Activities


Post Install Verification 4-1

A Artifacts
Repository Artifacts A-1
Docker Repository Requirements A-13
OCCNE YUM Repository Configuration A-14
OCCNE HTTP Repository Configuration A-16
OCCNE Docker Image Registry Configuration A-22

B Reference Procedures
Inventory File Template B-1
Inventory File Preparation B-2
OCCNE Artifact Acquisition and Hosting B-8
Installation PreFlight Checklist B-9
Installation Use Cases and Repository Requirements B-30
Topology Connection Tables B-46
Network Redundancy Mechanisms B-51
Install VMs for MySQL Nodes and Management Server B-58

List of Figures
1-1 Frame Overview 1-5
1-2 Host Designations 1-6
1-3 Node Roles 1-7
1-4 Transient Roles 1-8
1-5 OCCNE Installation Overview 1-9
1-6 Example of a Procedure Steps Used in This Document 1-11
B-1 Rackmount ordering B-10
B-2 Frame reference B-31
B-3 Setup the Notebook and USB Flash Drive B-32
B-4 Setup the Management Server B-33
B-5 Management Server Unique Connections B-34
B-6 Configure OAs B-35
B-7 Configure the Enc. Switches B-35
B-8 OceanSpray Download Path B-36
B-9 Install OS on CNE Nodes - Server boot instruction B-37
B-10 Install OS on CNE Nodes - Server boot process B-38
B-11 Update OS on CNE Nodes - Ansible B-39
B-12 Update OS on CNE Nodes - Yum pull B-40
B-13 Harden the OS B-41
B-14 Create the Guest B-42
B-15 Install the Cluster on CNE Nodes B-43
B-16 Install the Cluster on CNE Nodes - Pull in Software B-44
B-17 Execute Helm on Master Node B-45
B-18 Master Node Pulls from Repositories B-46
B-19 Blade Server NIC Pairing B-52
B-20 Rackmount Server NIC Pairing B-52
B-21 Logical Switch View B-54
B-22 OAM Uplink View B-55
B-23 Top of Rack Customer Uplink View B-56
B-24 OAM and Signaling Separation B-57
B-25 MySQL Cluster Topology B-59

List of Tables
1-1 Key Terms 1-1
1-2 Key Acronyms and Abbreviations 1-2
1-3 Admonishments 1-11
2-1 Oracle eDelivery Artifact Acquisition 2-1
2-2 Procedure to configure MetalLB pools and peers 2-2
3-1 Bootstrap Install Procedure 3-2
3-2 Procedure to configure the Installer Bootstrap Host BIOS 3-9
3-3 Procedure to configure Top of Rack 93180YC-EX Switches 3-14
3-4 Procedure to verify Top of Rack 93180YC-EX Switches 3-24
3-5 Procedure to configure Addresses for RMS iLOs, OA, EBIPA 3-27
3-6 Procedure to configure the Legacy BIOS on Remaining Hosts 3-36
3-7 Procedure to configure enclosure switches 3-42
3-8 Procedure to install the OL7 image onto the RMS2 via the installer bootstrap host 3-49
3-9 Procedure to Install the Bastion Host 3-58
3-10 Procedure to configure Bastion Host 3-64
3-11 Procedure to run the auto OS-installer container 3-73
3-12 Procedure to Install Backup Bastion Host 3-81
3-13 OCCNE Database Tier Installer 3-83
3-14 Procedure to install OCCNE Kubernetes 3-86
3-15 Procedure to install common services 3-89
4-1 OCCNE Post Install Verification 4-1
A-1 OL YUM Repository Requirements A-1
A-2 Docker Repository Requirements A-13
A-3 Steps to configure OCCNE HTTP Repository A-17
A-4 Steps to configure OCCNE Docker Image Registry A-23
B-1 Procedure for OCCNE Inventory File Preparation B-4
B-2 Enclosure Switch Connections B-10
B-3 ToR Switch Connections B-12
B-4 Rackmount Server Connections B-14
B-5 Complete Site Survey Subnet Table B-15
B-6 Complete Site Survey Host IP Table B-16
B-7 Complete VM IP Table B-17
B-8 Complete OA and Switch IP Table B-18
B-9 ToR and Enclosure Switches Variables Table (Switch Specific) B-20
B-10 Complete Site Survey Repository Location Table B-21

B-11 Enclosure Switch Connections B-47
B-12 ToR Switch Connections B-48
B-13 Management Server Connections B-51
B-14 Procedure to install VMs for MySQL Nodes and Management Server B-62

1 Introduction
This document details the procedure for installing an Oracle Communications Signaling,
Network Function Cloud Native Environment, referred to in these installation procedures
simply as OCCNE. The intended audiences for this document are Oracle engineers who work
with customers to install a Cloud Native Environment (CNE) on-site at customer facilities.
This document applies to version 1.0 of the OCCNE installation procedure.

Glossary
Key terms
The table below lists terms used in this document.

Table 1-1 Key Terms

Host: A computer running an instance of an operating system with an IP address. Hosts can be virtual or physical. The HP DL380 Gen10 Rack Mount Servers and BL460c Gen10 Blades are physical hosts. KVM based virtual machines are virtual hosts. Hosts are also referred to as nodes, machines, or computers.

Database Host: The Database (DB) Host is a physical machine that hosts guest virtual machines which in turn provide OCCNE's MySQL service and Database Management System (DBMS). The Database Hosts are comprised of the two Rack Mount Servers (RMSs) below the Top of Rack (ToR) switches. For some customers, these will be HP Gen10 servers.

Management Host: The Management Host is a physical machine in the frame that has a special configuration to support hardware installation and configuration of other components within a frame. For CNE, there is one machine with dedicated connectivity to out of band (OOB) interfaces on the Top of Rack switches. The OOB interfaces provide the connectivity needed to initialize the ToR switches. In OCCNE 1.0, the Management Host and Database Host roles are assigned to the same physical machine. When referring to a machine as a "Management Host", the context is with respect to its OOB connections, which are unique to the Management Host hardware.

Bastion Host: The Bastion Host provides general orchestration support for the site. The Bastion Host runs as a virtual machine on a Database Host and is sometimes referred to as the Management VM. During the install process, the Bastion Host is used to host the automation environment and execute the install automation. The install automation provisions and configures all other hosts, nodes, and switches within the frame. After the install process is completed, the Bastion Host continues to serve as the customer gateway to cluster operations and control.

Installer Bootstrap Host: As an early step in the site installation process, one of the hosts (which is eventually re-provisioned as a Database Server) is minimally provisioned to act as an Installer Bootstrap Host. The Installer Bootstrap Host has a very short lifetime, as its job is to provision the first Database Server. Later in the install process, the server being used to host the Bootstrap server is re-provisioned as another Database Server. The Installer Bootstrap Host is also referred to simply as the Bootstrap Host.


Node: A logical computing node in the system. A node is usually a networking endpoint. May or may not be virtualized or containerized. Database nodes refer to hosts dedicated primarily to running Database services. Kubernetes nodes refer to hosts dedicated primarily to running Kubernetes.

Master Node: Some nodes in the system (three RMSs in the middle of the equipment rack) are dedicated to providing Container management. These nodes are responsible for managing all of the containerized services (which run on the worker nodes).

Worker Node: Some nodes in the system (the blade servers at the bottom of the equipment rack) are dedicated to hosting Containerized software and providing the 5G application services.

Container: An encapsulated software service. All 5G applications and OAM functions are delivered as containerized software. The purpose of the OCCNE is to host containerized software providing 5G Network Functions and services.

Cluster: A collection of hosts and nodes dedicated to providing either Database or Containerized services and applications. The Database service is comprised of the collection of Database nodes and is managed by MySQL. The Container cluster is comprised of the collection of Master and Worker Nodes and is managed by Kubernetes.

Key Acronyms and Abbreviations


The table below lists abbreviations and acronyms specific to this document.

Table 1-2 Key Acronyms and Abbreviations

Acronym/Abbreviation/Term    Definition
5G NF 3GPP 5G Network Function
BIOS Basic Input Output System
CLI Command Line Interface
CNE Cloud Native Environment
DB Database
DBMS Database Management System
DHCP(D) Dynamic Host Configuration Protocol
DNS Domain Name Server
EBIPA Enclosure Bay IP Addressing
FQDN Fully Qualified Domain name
GUI Graphical User Interface
HDD Hard Disk Drive
HP Hewlett Packard
HPE Hewlett Packard Enterprise
HTTP HyperText Transfer Protocol
iLO HPE Integrated Lights-Out Management System
IP Internet Protocol; may be used as shorthand to refer to an IP layer 3 address.
IPv4 Internet Protocol version 4
IPv6 Internet Protocol version 6


IRF Intelligent Resilient Framework (IRF) is a proprietary software virtualization technology developed by H3C (3Com). Its core idea is to connect multiple network devices through physical IRF ports and perform necessary configurations, and then these devices are virtualized into a distributed device.
ISO International Organization for Standardization; typically used as shorthand to
refer to an ISO 9660 optical disk file system image
KVM Keyboard, Video, Mouse
K8s Shorthand alias for Kubernetes
MAC Media Access Control address
MBE Minimal Bootstrapping Environment
NFS Network File System
NTP Network Time Protocol
OA HP BladeSystem Onboard Administrator
OAM Operations, Administration, Maintenance
OCCNE Oracle Communications Signaling, Network Function Cloud Native
Environment
OS Operating System
OSDC Oracle Software Download Center
PKI Public Key Infrastructure
POAP PowerOn Auto Provisioning
PXE Pre-Boot Execution Environment
RAID Redundant Array of Independent Disks
RAM Random Access Memory
RBSU ROM Based Setup Utility
RMS Rack Mount Server
RPM Red Hat Package Manager
SAS Serial Attached SCSI
SSD Solid State Drive
TAR Short for Tape Archive, and sometimes referred to as tarball, a file that has the
TAR file extension is a file in the Consolidated Unix Archive format.
TLA Three Letter Acronym
TLD Top Level Domain
ToR Top of Rack - Colloquial term for the pair of Cisco 93180YC-EX switches
UEFI Unified Extensible Firmware Interface
URL Uniform Resource Locator
VM Virtual Machine
VSP Virtual Serial Port
YUM Yellowdog Updator, Modified (a Linux Package Manager)

Overview
OCCNE Installation Overview
The installation procedures in this document provision and configure an Oracle
Communications Signaling, Network Function Cloud Native Environment (OCCNE). Using


Oracle partners, the customer purchases the required hardware which is then configured and
prepared for installation by Oracle Consulting.
To aid with the provisioning, installation, and configuration of OCCNE, a collection of container-based utilities is used to automate much of the initial setup. These utilities are based on tools such as PXE, the Kubespray project, and Ansible:
• PXE helps reliably automate provisioning the hosts with a minimal operating system.
• Kubespray helps reliably install a base Kubernetes cluster, including all dependencies (like
etcd), using the Ansible provisioning tool.
• Ansible is used to deploy and manage a collection of operational tools (Common Services)
provided by open source third party products such as Prometheus, Grafana, ElasticSearch
and Kibana.
• Common services and functions such as load balancers and ingress controllers are deployed, configured, and managed as Helm packages (see the sketch after this list).
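The sketch below is purely illustrative and is not the OCCNE install automation itself: the chart repository URL, chart name, release name, namespace, and values file are hypothetical, and the flag order shown follows Helm 2-era syntax (Helm 3 places the release name before the chart).

# Register the customer-hosted chart repository (hypothetical URL).
$ helm repo add occne-charts http://bastion-host:8080/charts
$ helm repo update

# Install a common-service chart with site-specific overrides (hypothetical names).
$ helm install occne-charts/prometheus --name occne-prometheus --namespace occne-infra -f prometheus-values.yaml

# Confirm the release deployed.
$ helm status occne-prometheus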

Frame and Component Overview


The initial release of the OCCNE system provides support for on-prem deployment to a very
specific target environment consisting of a frame holding switches and servers. This section
describes the layout of the frame and describes the roles performed by the racked equipment.

Note:
In the installation process, some of the roles of servers change as the installation
procedure proceeds.

Frame Overview
The physical frame consists of an HP c-Class enclosure (with BL460c blade servers), five DL380 rack mount servers, and two Top of Rack (ToR) Cisco switches.


Figure 1-1 Frame Overview

Host Designations
Each physical server has a specific role designation within the CNE solution.


Figure 1-2 Host Designations

Node Roles
Along with the primary role of each host, a secondary role may be assigned. The secondary role
may be software related, or, in the case of the Bootstrap Host, hardware related, as there are
unique OOB connections to the ToR switches.


Figure 1-3 Node Roles

Transient Roles
Transient role is unique in that it has OOB connections to the ToR switches, which brings the
designation of Bootstrap Host. This role is only relevant during initial switch configuration and
disaster recovery of the switch. RMS1 also has a transient role as the Installer Bootstrap Host,
which is only relevant during initial install of the frame, and subsequent to getting an official
install on RMS2, this host is re-paved to its Storage Host role.


Figure 1-4 Transient Roles

Create OCCNE Instance


This section describes the steps and procedures required to create an OCCNE instance at a customer site. The following diagram shows the installation context:


Figure 1-5 OCCNE Installation Overview

The following is an overview of the basic install flow, provided for reference to convey the overall effort contained within these procedures:
1. Check that the hardware is on-site and properly cabled and powered up.
2. Pre-assemble the basic ingredients needed to perform a successful install:
a. Identify
i. Download and stage software and other configuration files using provided
manifests. Refer to Artifacts for manifests information.
ii. Identify the layer 2 (MAC) and layer 3 (IP) addresses for the equipment in the
target frame
iii. Identify the addresses of key external network services (e.g., NTP, DNS, etc.)

iv. Verify / Set all of the credentials for the target frame hardware to known settings
b. Prepare
i. Software Repositories: Load the various SW repositories (YUM, Helm, Docker,
etc.) using the downloaded software and configuration
ii. Configuration Files: Populate the hosts inventory file with credentials and layer 2
and layer 3 network information, switch configuration files with assigned IP
addresses, and yaml files with appropriate information.
3. Bootstrap the System:
a. Manually configure a Minimal Bootstrapping Environment (MBE); perform the minimal set of manual operations to enable networking and initial loading of a single Rack Mount Server - RMS1 - the transient Installer Bootstrap Host. In this procedure, a minimal set of packages needed to configure switches, iLOs, PXE boot environment, and provision RMS2 as an OCCNE Storage Host are installed.
b. Using the newly constructed MBE, automatically create the first (complete)
Management VM on RMS2. This freshly installed Storage Host will include a virtual
machine for hosting the Bastion Host.
c. Using the newly constructed Bastion Host on RMS2, automatically deploy and
configure the OCCNE on the other servers in the frame
4. Final Steps
a. Perform post installation checks
b. Perform recommended security hardening steps

Cluster Bootstrapping Overview


This install procedure targets installing OCCNE onto new hardware with no networking configuration applied to the switches and no operating systems provisioned. Therefore, the initial step in the installation process is to provision RMS1 (see Figure 1-5) as a temporary Installer Bootstrap Host. The Bootstrap Host is configured with a minimal set of packages needed to
Bootstrap Host. The Bootstrap Host is configured with a minimal set of packages needed to
configure switches, iLOs, PXE boot environment, and provision RMS2 as an OCCNE Storage
Host. A virtual Bastion Host is also provisioned on RMS2. The Bastion Host is then used to
provision (and in the case of the Bootstrap Host, re-provision) the remaining OCCNE hosts,
install Kubernetes, Database services, and Common Services running within the Kubernetes
cluster.

How to use this document


Although this document is primarily to be used as an initial installation guide, its secondary
purpose is to be used as a reference for Disaster Recovery procedures.
When executing this document for either purpose, there are a few points which help to ensure
that the user understands the author’s intent. These points are as follows:
1. Before beginning a procedure, completely read the instructional text (it will appear
immediately after the Section heading for each procedure) and all associated procedural
WARNINGS or NOTES.
2. Before execution of a STEP within a procedure, completely read the left and right columns
including any STEP specific WARNINGS or NOTES.
If a procedural STEP fails to execute successfully, STOP and contact Oracle's Customer Service for assistance before attempting to continue. Refer to My Oracle Support for information on contacting Oracle Customer Support.


Figure 1-6 Example of a Procedure Steps Used in This Document

Documentation Admonishments
Admonishments are icons and text throughout this manual that alert the reader to assure
personal safety, to minimize possible service interruptions, and to warn of the potential for
equipment damage.

Table 1-3 Admonishments

Icon Description
Danger:
(This icon and text indicate the possibility of
personal injury.)

Warning:
(This icon and text indicate the possibility of
equipment damage.)

Caution:
(This icon and text indicate the possibility of
service interruption.)


Locate Product Documentation on the Oracle Help Center Site
Oracle Communications customer documentation is available on the web at the Oracle Help
Center site, http://docs.oracle.com. You do not have to register to access these documents.
Viewing these files requires Adobe Acrobat Reader, which can be downloaded at http://www.adobe.com.
1. Access the Oracle Help Center site at http://docs.oracle.com.
2. Click Industries.
3. Under the Oracle Communications subheading, click Oracle Communications
documentation link.
The Communications Documentation page displays.
4. Click on your product and then the release number.
A list of the documentation set for the selected product and release displays.
5. To download a file to your location, right-click the PDF link, select Save target as (or
similar command based on your browser), and save to a local folder.

Customer Training
Oracle University offers training for service providers and enterprises. Visit our web site to
view, and register for, Oracle Communications training at http://education.oracle.com/communication.
To obtain contact phone numbers for countries or regions, visit the Oracle University Education
web site at www.oracle.com/education/contacts.

My Oracle Support
My Oracle Support (https://support.oracle.com) is your initial point of contact for all product
support and training needs. A representative at Customer Access Support can assist you with
My Oracle Support registration.
Call the Customer Access Support main number at 1-800-223-1711 (toll-free in the US), or call
the Oracle Support hotline for your local country from the list at http://www.oracle.com/us/support/contact/index.html. When calling, make the selections in the sequence shown below on the Support telephone menu:
1. Select 2 for New Service Request.
2. Select 3 for Hardware, Networking and Solaris Operating System Support.
3. Select one of the following options:
• For Technical issues such as creating a new Service Request (SR), select 1.
• For Non-technical issues such as registration or assistance with My Oracle Support,
select 2.
You are connected to a live agent who can assist you with My Oracle Support registration and
opening a support ticket.


My Oracle Support is available 24 hours a day, 7 days a week, 365 days a year.

Emergency Response
In the event of a critical service situation, emergency response is offered by the Customer
Access Support (CAS) main number at 1-800-223-1711 (toll-free in the US), or by calling the
Oracle Support hotline for your local country from the list at http://www.oracle.com/us/support/contact/index.html. The emergency response provides immediate coverage, automatic
escalation, and other features to ensure that the critical situation is resolved as rapidly as
possible.
A critical situation is defined as a problem with the installed equipment that severely affects
service, traffic, or maintenance capabilities, and requires immediate corrective action. Critical
situations affect service and/or system operation resulting in one or several of these situations:
• A total system failure that results in loss of all transaction processing capability
• Significant reduction in system capacity or traffic handling capability
• Loss of the system’s ability to perform automatic system reconfiguration
• Inability to restart a processor or the system
• Corruption of system databases that requires service affecting corrective actions
• Loss of access for maintenance or recovery operations
• Loss of the system ability to provide any required critical or major trouble notification
Any other problem severely affecting service, capacity/traffic, billing, and maintenance
capabilities may be defined as critical by prior discussion and agreement with Oracle.

2 Installation Prerequisites
Complete the procedures outlined in this section before moving on to the Install Procedures
section. OCCNE installation procedures require certain artifacts and information to be made
available prior to executing installation procedures. This section addresses these prerequisites.

Obtain Site Data and Verify Site Installation


Execute the procedure to obtain site survey data (IP address allocations, repository locations,
etc), verify the frame configuration, and obtain important files used in installation procedures.

Configure Artifact Acquisition and Hosting


OCCNE requires artifacts from Oracle eDelivery and certain open-source projects. OCCNE deployment environments are not expected to have direct internet access. Thus, customer-provided intermediate repositories are necessary for the OCCNE installation process, and the OCCNE dependencies must be loaded into them. This section addresses the artifacts that need to be present in these repositories.
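As a hedged illustration of what loading these dependencies looks like for container images (a minimal sketch only; the archive name, image tag, and registry address below are hypothetical, and the authoritative artifact list is in the Artifacts appendix), images can be staged into a customer-provided intermediate Docker registry with standard Docker commands:

# Load an image archive that was downloaded on an internet-connected staging host.
$ docker load -i prometheus_v2.x.x.tar

# Re-tag the image for the customer's intermediate registry and push it there.
$ docker tag prom/prometheus:v2.x.x customer-registry.example.com:5000/prom/prometheus:v2.x.x
$ docker push customer-registry.example.com:5000/prom/prometheus:v2.x.x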

Oracle eDelivery Artifact Acquisition


The following artifacts require download from eDelivery and/or OHC.

Table 2-1 Oracle eDelivery Artifact Acquisition

Artifact (File Type): Description

occne-images-1.0.1.tgz (Tar GZ): OCCNE Installers (Docker images). Available in the Docker Registry.
v980756-01.zip (Zip of tar file): Zip file of MySQL Cluster Manager 1.4.7+Cluster. Available in the File repository.
v975367-01.iso (ISO): OL7 ISO. Available in the File repository.
Install Docs (PDFs): This document, explaining the install procedures. The documents are available on OHC.
Templates (Config files, .conf/.ini): Switch config files and hosts.ini file templates from OHC. Local media.

Third Party Artifacts


OCCNE dependencies that come from open-source software must be available in repositories reachable by the OCCNE installation tools. For an accounting of the third-party artifacts needed for this installation, refer to the Artifacts appendix.


Populate the MetalLB Configuration


Introduction
The MetalLB configMap file (mb_configmap.yaml) contains the manifest for the MetalLB configMap, which defines the BGP peers and address pools for MetalLB. This file (mb_configmap.yaml) should be placed in the same directory (/var/occne/<cluster_name>) as the hosts.ini file.
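For example (a minimal sketch; <cluster_name> is the site-specific cluster directory name, and the source path of the edited file is assumed):

$ mkdir -p /var/occne/<cluster_name>
$ cp mb_configmap.yaml /var/occne/<cluster_name>/
# The directory should now contain both hosts.ini and mb_configmap.yaml.
$ ls /var/occne/<cluster_name>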

Table 2-2 Procedure to configure MetalLB pools and peers

Step 1: Add BGP peers and address groups
Referring to the data collected in the Preflight Checklist, add BGP peers (ToRswitchA_Platform_IP, ToRswitchB_Platform_IP) and address groups for each address pool. Address-pools lists the IP addresses that MetalLB is allowed to allocate.

Step 2: Edit the mb_configmap.yaml file
Edit the mb_configmap.yaml file with the site-specific values found in the Preflight Checklist.
Note: The name "signaling" is prone to different spellings (UK vs US), therefore pay special attention to how this signaling pool is referenced.

configInline:
  peers:
  - peer-address: <ToRswitchA_Platform_IP>
    peer-asn: 64501
    my-asn: 64512
  - peer-address: <ToRswitchB_Platform_IP>
    peer-asn: 64501
    my-asn: 64512
  address-pools:
  - name: signaling
    protocol: bgp
    auto-assign: false
    addresses:
    - '<MetalLB_Signal_Subnet_With_Prefix>'
  - name: oam
    protocol: bgp
    auto-assign: false
    addresses:
    - '<MetalLB_OAM_Subnet_With_Prefix>'

3 Install Procedure

Initial Configuration - Prepare a Minimal Boot Strapping Environment

In the first step of the installation, a minimal bootstrapping environment is established to support the automated installation of the CNE environment. The steps in this section provide the details necessary to establish this minimal bootstrap environment on the Installer Bootstrap Host using a Keyboard, Video, Mouse (KVM) connection.

Installation of Oracle Linux 7.5 on Bootstrap Host


This procedure outlines the steps for installing OL7 onto the OCCNE Installer Bootstrap Host. This host is used to configure the networking throughout the system and to install OL7 onto RMS2. The Bootstrap server is re-paved as a Database Host in a later procedure.

Prerequisites
1. USB drive of sufficient size to hold the ISO (approximately 5 GB)
2. Oracle Linux 7.x iso
3. YUM repository file
4. Keyboard, Video, Mouse (KVM)

Limitations and Expectations


1. The configuration of the Installer Bootstrap Host is meant to be quick and easy, without much attention to the appropriate OS configuration. The Installer Bootstrap Host is re-paved with the appropriate OS configuration for cluster and DB operation at a later stage of installation. The Installer Bootstrap Host needs only a Linux OS and some basic networking to get the installation process started.
2. All steps in this procedure are performed using Keyboard, Video, Mouse (KVM).

References
1. Oracle Linux 7 Installation guide: https://docs.oracle.com/cd/E52668_01/E54695/html/index.html
2. HPE Proliant DL380 Gen10 Server User Guide


Bootstrap Install Procedure

Table 3-1 Bootstrap Install Procedure

Step # Procedure Description


Step 1: Create Bootable USB Media
1. Download the Oracle Linux ISO.
Download the Oracle Linux ISO from OHC onto a user-accessible location (e.g., the installer's notebook). The exact details on how to perform this step are specific to the user's equipment.
2. Push the OL ISO image onto the USB Flash Drive.
Since the installer's notebook may be Windows or Linux OS-based, the user executing this procedure determines the appropriate details to execute this task. For a Linux-based notebook, insert a USB Flash Drive of the appropriate size into a laptop (or some other Linux host where the iso can be copied to), and run the dd command to create a bootable USB drive with the Oracle Linux 7 iso.

$ dd if=<path to ISO> of=<USB device path> bs=1048576

Example (assuming the USB is on /dev/sdf and the iso file is at /var/occne):

$ dd if=/var/occne/OracleLinux-7.5-x86_64-disc1.iso of=/dev/sdf bs=1048576

Step 2: Install OL7 on the Installer Bootstrap Host
1. Connect a Keyboard, Video, and Mouse (KVM) into the Installer Bootstrap Host's monitor and USB ports.
2. Plug the USB flash drive containing the bootable iso into an
available USB port on the Bootstrap host (usually in the front panel).
3. Reboot the host by momentarily pressing the power button on the
host's front panel. The button will go yellow. If it holds at yellow,
press the button again. The host should auto-boot to the USB flash
drive.
Note: If the host was previously configured and the USB is not a
bootable path in the boot order, it may not boot successfully.
4. If the host does not boot to the USB, repeat step 3, and interrupt the
boot process by pressing F11 which brings up the Boot Menu. If the
host has been recently booted with an OL, the Boot Menu will
display Oracle Linux at the top of the list. Select Generic USB Boot
as the first boot device and proceed.
5. The host attempts to boot from the USB. The following menu is
displayed on the screen. Select Test this media & install Oracle
Linux 7.x and hit ENTER. This begins the verification of the media
and the boot process.
After the verification reaches 100%, the following Welcome screen
is displayed. When prompted for the language to use, select the
default setting: English (United States) and hit Continue in the
lower left corner.
6. The INSTALLATION SUMMARY page is displayed. The following settings are expected:
a. LANGUAGE SUPPORT: English (United States)
b. KEYBOARD: English (US)
c. INSTALLATION SOURCE: Local Media
d. SOFTWARE SELECTION: Minimal Install
INSTALLATION DESTINATION should display No disks
selected. Select INSTALLATION DESTINATION to indicate
the drive to install the OS on.
Select the first HDD drive (in this case that would be the first one
listed) and select DONE in the upper right corner. If a dialog
appears indicating there is not enough free space (which might mean
an OS has already been installed), select the Reclaim space button.
Another dialog appears. Select the Delete all button and the
Reclaim space button again. Select DONE to return to the
INSTALLATION SUMMARY screen.

7. Select DONE. This returns to the INSTALLATION SUMMARY page.
8. At the INSTALLATION SUMMARY screen, select Begin
Installation. The CONFIGURATION screen is displayed.
9. At the CONFIGURATION screen, select ROOT PASSWORD.

Enter a root password appropriate for this installation. It is good practice to use a customer-provided secure password to minimize the risk of the host being compromised during installation.

10. At the conclusion of the install, remove the USB and select Reboot
to complete the install and boot to the OS on the host. At the end of
the boot, the login prompt appears.


Step 3: Install Additional Packages
Additional packages are needed to complete the installation and move on to the next step in the overall procedure. These additional packages are available within the OL install media on the USB. To install these packages, a YUM repo file is configured to use the install media. The additional packages to install are:
• dnsmasq
• dhcp
• xinetd
• tftp-server
• dos2unix
• nfs-utils
1. Login with the root user and password configured above.
2. Create the mount directory:
$ mkdir /media/usb
3. Insert the USB into an available USB port (usually the front USB
port) of the Installer Bootstrap Host.
4. Find and mount the USB partition.
Typically the USB device is enumerated as /dev/sda but that is not
always the case. Use the lsblk command to find the USB device.
An example lsblk output is below. The capacity of the USB drive
is expected to be approximately 30GiB, therefore the USB drive is
enumerated as device /dev/sda in the example below:

$ lsblk
sdd 8:48 0 894.3G 0 disk
sde 8:64 0 1.7T 0 disk
sdc 8:32 0 894.3G 0 disk
├─sdc2 8:34 0 1G 0 part /boot
├─sdc3 8:35 0 893.1G 0 part
│ ├─ol-swap 252:1 0 4G 0 lvm [SWAP]
│ ├─ol-home 252:2 0 839.1G 0 lvm /home
│ └─ol-root 252:0 0 50G 0 lvm /
└─sdc1 8:33 0 200M 0 part /boot/efi
sda 8:0 1 29.3G 0 disk
├─sda2 8:2 1 8.5M 0 part
└─sda1 8:1 1 4.3G 0 part

The dmesg command also provides information about how the operating system enumerates devices. In the example below, the dmesg output indicates the USB drive is enumerated as device /dev/sda.
Note: The output is shortened here for display purposes.
$ dmesg
...
[8850.211757] usb-storage 2-6:1.0: USB Mass Storage
device detected
[8850.212078] scsi host1: usb-storage 2-6:1.0
[8851.231690] scsi 1:0:0:0: Direct-Access
SanDisk Cruzer Glide 1.00 PQ: 0 ANSI: 6
[8851.232524] sd 1:0:0:0: Attached scsi generic sg0

type 0
[8851.232978] sd 1:0:0:0: [sda] 61341696 512-byte
logical blocks: (31.4 GB/29.3 GiB)
[8851.234598] sd 1:0:0:0: [sda] Write Protect is off
[8851.234600] sd 1:0:0:0: [sda] Mode Sense: 43 00
00 00
[8851.234862] sd 1:0:0:0: [sda] Write cache:
disabled, read cache: enabled, doesn't support DPO
or FUA
[8851.255300] sda: sda1 sda2
...

The USB device should contain at least two partitions. One is the boot partition and the other is the install media. The install media is the larger of the two partitions. To find information about the partitions, use the fdisk command to list the partitions on the USB device. Use the device name discovered via the steps outlined above. In the examples above, the USB device is /dev/sda.

$ fdisk -l /dev/sda
Disk /dev/sda: 31.4 GB, 31406948352 bytes, 61341696
sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512
bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x137202cf

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           0     8929279     4464640    0  Empty
/dev/sda2            3076       20503        8714   ef  EFI (FAT-12/16/32)

In the example output above, the /dev/sda2 partition is the EFI boot partition. Therefore the install media files are on /dev/sda1. Use the mount command to mount the install media file system. The same command without any options is used to verify the device is mounted to /media/usb.

$ mount /dev/sda1 /media/usb

$ mount
...
/dev/sda1 on /media/usb type iso9660
(ro,relatime,nojoliet,check=s,map=n,blocksize=2048)
5. Create a yum config file to install packages from local install
media.
Create a repo file /etc/yum.repos.d/Media.repo with the
following information:

[ol7_base_media]
name=Oracle Linux 7 Base Media
baseurl=file:///media/usb
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
6. Disable the default public yum repo. This is done by renaming the
current .repo file to end with something other than .repo.
Adding .disabled to the end of the file name is standard.
Note: This can be left in this state as the Installer Bootstrap Host is
re-paved in a later procedure.
$ mv /etc/yum.repos.d/public-yum-ol7.repo /etc/yum.repos.d/public-yum-ol7.repo.disabled
7. Use the yum repolist command to check the repository configuration.
The output of yum repolist should look like the example below. Verify there are no errors regarding unreachable yum repos.

$ yum repolist
Loaded plugins: langpacks, ulninfo
repo id              repo name                        status
ol7_base_media       Oracle Linux 7 Base Media        5,134

repolist: 5,134
8. Use yum to install the additional packages from the USB repo.
$ yum install dnsmasq
$ yum install dhcp
$ yum install xinetd
$ yum install tftp-server
$ yum install dos2unix
$ yum install nfs-utils
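Note: Equivalently (a convenience only, not part of the original procedure), the same packages can be installed in a single yum transaction:

$ yum -y install dnsmasq dhcp xinetd tftp-server dos2unix nfs-utils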
9. Verify installation of dhcp, xinetd, and tftp-server.
Note: Currently dnsmasq is not being used. The verification of tftp
makes sure the tftp file is included in the /etc/xinetd.d directory.
Installation/Verification does not include actually starting any of the
services. Service configuration/starting is performed in a later
procedure.
Verify dhcp is installed:
-------------------------
$ cd /etc/dhcp
$ ls
dhclient.d dhclient-exit-hooks.d dhcpd6.conf
dhcpd.conf scripts

Verify xinetd is installed:

---------------------------
$ cd /etc/xinetd.d
$ ls
chargen-dgram chargen-stream daytime-dgram
daytime-stream discard-dgram discard-stream
echo-dgram echo-stream tcpmux-server time-dgram
time-stream

Verify tftp is installed:


-------------------------
$ cd /etc/xinetd.d
$ ls
chargen-dgram chargen-stream daytime-dgram
daytime-stream discard-dgram discard-stream
echo-dgram echo-stream tcpmux-server tftp time-dgram time-stream
10. Unmount the USB and remove it from the host. The mount command can be used to verify the USB is no longer mounted to /media/usb.

$ umount /media/usb

$ mount
Verify that /dev/sda1 is no longer shown as mounted
to /media/usb.
11. This procedure is complete.

Configure the Installer Bootstrap Host BIOS


Introduction
These procedures define the steps necessary to make the Legacy BIOS changes on the Bootstrap host using the KVM. Some of the procedures in this section require a reboot of the system; where this is the case, it is indicated in the procedure.

Prerequisites
Procedure OCCNE Installation of Oracle Linux 7.5 on Bootstrap Host is complete.

Limitations and Expectations


1. Applies to HP Gen10 iLO 5 only.
2. The procedures listed here apply to the Bootstrap host only.


Steps to OCCNE Configure the Installer Bootstrap Host BIOS

Table 3-2 Procedure to configure the Installer Bootstrap Host BIOS

Step # Procedure Description


Step 1: Expose the System Configuration Utility
This procedure details how to expose the HP iLO 5 System Configuration Utility main page from the KVM. It does not provide instructions on how to connect the console as these may be different on each installation.
1. After making the proper connections for the KVM on the back
of the Bootstrap host to have access to the console, the user
should reboot the host by momentarily pressing the power
button on the front of the Bootstrap host.
2. Expose the HP Proliant DL380 Gen10 System Utilities.
Once the remote console has been exposed, the system must be
reset to force it through the restart process. When the initial
window is displayed, hit the F9 key repeatedly. Once the F9 is
highlighted at the lower left corner of the remote console, it
should eventually bring up the main System Utility.
3. The System Utilities screen is exposed in the remote console.

Step 2: Change over from UEFI Booting Mode to Legacy BIOS Booting Mode
Should the System Utility default the boot mode to UEFI, or should the boot mode have been changed to UEFI, it is necessary to switch the boot mode to Legacy.
1. Expose the System Configuration Utility by following Step 1.
2. Select System Configuration.
3. Select BIOS/Platform Configuration (RBSU).
4. Select Boot Options.
If the Boot Mode is set to UEFI Mode then this procedure
should be used to change it to Legacy BIOS Mode.
Note: The server reset must go through an attempt to boot
before the changes will actually apply.
5. The user is prompted to select the Reboot Required popup
dialog. This will drop back into the boot process. The boot must
go into the process of actually attempting to boot from the boot
order. This should fail since the disks have not been installed at
this point. The System Utility can be accessed again.
6. After the reboot and the user re-enters the System Utility, the
Boot Options page should appear.
7. Select F10: Save if it's desired to save and stay in the utility or
select the F12: Save and Exit if its desired to save and exit to
complete the current boot process.


Step 3: Adding a New User Account
This procedure provides the steps required to add a new user account to the server iLO 5 interface.
Note: This user must match the pxe_install_lights_out_usr fields as provided in the hosts inventory files created using the template: OCCNE Inventory File Preparation.
1. Expose the System Utility by following Step 1.
2. Select System Configuration.
3. Select iLO 5 Configuration Utility.
4. Select User Management, and then Add User.
5. Select the appropriate permissions. For the root user set all
permissions to YES. Enter root as New User Name and Login
Name fields, and enter <password> in the Password field.
6. Select F10: Save to save and stay in the utility or select the F12:
Save and Exit to save and exit, to complete the current boot
process.


Step 4: Force PXE to boot from the first Embedded FlexibleLOM HPE Ethernet 10Gb 2-port Adapter
During host PXE, the DHCP DISCOVER requests from the hosts must be broadcast over the 10Gb port. This procedure provides the steps necessary to configure the broadcast to use the 10Gb ports before it attempts to use the 1Gb ports. Moving the 10Gb port up in the search order helps to speed up the response from the host servicing the DHCP DISCOVER. Enclosure blades have two 10GbE NICs which default to being configured for PXE booting. The RMSs are re-configured to use the PCI NICs using this procedure.
1. Expose the System Utility by following Step 1.
2. Select System Configuration.
3. Select BIOS/Platform Configuration (RBSU).
4. Select Boot Options.
This menu defines the boot mode which should be set to Legacy
BIOS Mode, the UEFI Optimized Boot which should be
disabled, and the Boot Order Policy which should be set to
Retry Boot Order Indefinitely (this means it will keep trying to
boot without ever going to disk). In this screen select Legacy
BIOS Boot Order. If not in Legacy BIOS Mode, please follow
procedure 2.2 Change over from UEFI Booting Mode to Legacy
BIOS Booting Mode to set the Configuration Utility to Legacy
BIOS Mode.
5. Select Legacy BIOS Boot Order
This page defines the legacy BIOS boot order. This includes the
list of devices from which the server will listen for the DHCP
OFFER (includes the reserved IPv4) after the PXE DHCP
DISCOVER message is broadcast out from the server.
In the default view, the 10Gb Embedded FlexibleLOM 1 Port 1
is at the bottom of the list. When the server begins the scan for
the response, it scans down this list until it receives the
response. Each NIC will take a finite amount of time before the
server gives up on that NIC and attempts another in the list.
Moving the 10Gb port up on this list should decrease the time
that is required to finally process the DHCP OFFER.
To move an entry, select that entry, hold down the first mouse
button and move the entry up in the list below the entry it must
reside under.
6. Move the 10 Gb Embedded FlexibleLOM 1 Port 1 entry up
above the 1Gb Embedded LOM 1 Port 1 entry.
7. Select F10: Save to save and stay in the utility or select the F12:
Save and Exit to save and exit, to complete the current boot
process.


Step 5: Enabling Virtualization
This procedure provides the steps required to enable virtualization on a given Bare Metal Server. Virtualization can be configured using the default settings or via the Workload Profiles.
1. Verifying Default Settings
a. Expose the System Configuration Utility by following Step
1.
b. Select System Configuration.
c. Select BIOS/Platform Configuration (RBSU)
d. Select Virtualization Options
This screen displays the settings for the Intel(R)
Virtualization Technology (IntelVT), Intel(R) VT-d, and
SR-IOV options (Enabled or Disabled). The default value for each option is Enabled.

e. Select F10: Save to save and stay in the utility or select the
F12: Save and Exit to save and exit, to complete the
current boot process.

Step 6: Disable RAID Configurations
1. Expose the System Configuration Utility by following Step 1.
2. Select System Configuration.
3. Select Embedded RAID 1 : HPE Smart Array P408i-a SR Gen10.
4. Select Array Configuration.
5. Select Manage Arrays.
6. Select Array A (or any designated Array Configuration if there
are more than one).
7. Select Delete Array.
8. Select Submit Changes.
9. Select F10: Save to save and stay in the utility or select the F12:
Save and Exit to save and exit, to complete the current boot
process.


Step 7: Enable the Primary Boot Device
This procedure provides the steps necessary to configure the primary bootable device for a given Gen10 Server. In this case the RMS includes two Hard Disk Drives (HDDs). Some configurations may also include two Solid State Drives (SSDs). The SSDs are not to be selected for this configuration. Only the primary bootable device is set in this procedure since RAID is being disabled. The secondary bootable device remains as Not Set.
1. Expose the System Configuration Utility by following Step 1.
2. Select System Configuration.
3. Select Embedded RAID 1 : HPE Smart Array P408i-a SR Gen10.
4. Select Set Bootable Device(s) for Legacy Boot Mode. If the
boot devices are not set then it will display Not Set for the
primary and secondary devices.
5. Select Select Bootable Physical Drive.
6. Select Port 1| Box:3 Bay:1 Size:1.8 TB SAS HP
EG00100JWJNR.
Note: This example includes two HDDs and two SSDs. The
actual configuration may be different.
7. Select Set as Primary Bootable Device.
8. Select Back to Main Menu.
This will return to the HPE Smart Array P408i-a SR Gen10
menu. The secondary bootable device is left as Not Set.
9. Select F10: Save to save and stay in the utility or select the F12:
Save and Exit to save and exit, to complete the current boot
process.

Step 8: Configure the iLO 5 Static IP Address
When configuring the Bootstrap host, the static IP address for the iLO 5 must be configured.
Note: This procedure requires a reboot after completion.
1. Expose the System Configuration Utility by following Step 1.
2. Select System Configuration.
3. Select iLO 5 Configuration Utility.
4. Select Network Options.
5. Enter the IP Address, Subnet Mask, and Gateway IP Address
fields provided in OCCNE 1.0 Installation PreFlight Checklist.
6. Select F12: Save and Exit to complete the current boot process.
A reboot is required when setting the static IP for the iLO 5. A
warning appears indicating that the user must wait 30 seconds
for the iLO to reset and then a reboot is required. A prompt
appears requesting a reboot. Select Reboot.
7. Once the reboot is complete, the user can re-enter the System
Utility and verify the settings if necessary.


Configure Top of Rack 93180YC-EX Switches


Introduction
This procedure provides the steps required to initialize and configure Cisco 93180YC-EX
switches as per the topology defined in Physical Network Topology Design.

Note:
All instructions in this procedure are executed from the Bootstrap Host.

Prerequisites
1. Procedure OCCNE Installation of Oracle Linux 7.5 on Bootstrap Host has been completed.
2. The switches are in factory default state.
3. The switches are connected as per OCCNE 1.0 Installation PreFlight Checklist. Customer uplinks are not active until outside traffic is necessary.
4. DHCP, XINETD, and TFTP are already installed on the Bootstrap host but are not
configured.
5. The Utility USB is available containing the necessary files as per: OCCNE 1.0 Installation
PreFlight checklist: Create Utility USB.
Limitations/Expectations
All steps are executed from a Keyboard, Video, Mouse (KVM) connection.
References
https://github.com/datacenter/nexus9000/blob/master/nx-os/poap/poap.py

Procedures

Configuration

Table 3-3 Procedure to configure Top of Rack 93180YC-EX Switches

Step # Procedure Description


Step 1: Login to the Bootstrap host as root
Using the KVM, login to the Bootstrap host as root.
Note: All instructions in this procedure are executed from the Bootstrap Host.
Step 2: Insert and mount the Utility USB
Insert and mount the Utility USB that contains the configuration and script files. Verify the files are listed on the USB using the ls /media/usb command.
Note: Instructions for mounting the USB can be found in: OCCNE Installation of Oracle Linux 7.5 on Bootstrap Host : Install Additional Packages. Only steps 2 and 3 need to be followed in that procedure.
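For convenience, the relevant commands from that procedure look like the following (the device name /dev/sdb1 is only an example; confirm the actual Utility USB partition with lsblk before mounting):

$ mkdir -p /media/usb
$ mount /dev/sdb1 /media/usb
$ ls /media/usb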


3. Create bridge Create bridge interface to connect both management ports and setup the
interface management bridge to support switch initialization.
Note: <CNE_Management_IP_With_Prefix> is from OCCNE 1.0
Installation PreFlight Checklist : Complete Site Survey Host IP Table. Row
1 CNE Management IP Addresess (VLAN 4) column.
<ToRSwitch_CNEManagementNet_VIP> is from OCCNE 1.0 Installation
PreFlight Checklist : Complete OA and Switch IP Table.
$ nmcli con add con-name mgmtBridge type bridge ifname
mgmtBridge
$ nmcli con add type bridge-slave ifname eno2 master
mgmtBridge
$ nmcli con add type bridge-slave ifname eno3 master
mgmtBridge
$ nmcli con mod mgmtBridge ipv4.method manual
ipv4.addresses 192.168.2.11/24
$ nmcli con up mgmtBridge

$ nmcli con add type team con-name team0 ifname team0


team.runner lacp
$ nmcli con add type team-slave con-name team0-slave-1
ifname eno5 master team0
$ nmcli con add type team-slave con-name team0-slave-2
ifname eno6 master team0
$ nmcli con mod team0 ipv4.method manual ipv4.addresses
172.16.3.4/24
$ nmcli con add con-name team0.4 type vlan id 4 dev team0
$ nmcli con mod team0.4 ipv4.method manual ipv4.addresses
<CNE_Management_IP_Address_With_Prefix> ipv4.gateway
<ToRswitch_CNEManagementNet_VIP>
$ nmcli con up team0.4
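As an optional sanity check before moving on (a minimal sketch, assuming the connection names shown above were used unchanged), the new bridge and team connections and the VLAN 4 address can be confirmed from the Bootstrap host:

$ nmcli con show --active | grep -E 'mgmtBridge|team0'
$ ip -4 addr show team0.4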



4. Edit the /etc/ Edit the /etc/xinetd.d/tftp file to enable TFTP service. Change the disable
xinetd.d/tftp option to no, if it is set to yes.
file
$ vi /etc/xinetd.d/tftp
# default: off
# description: The tftp server serves files using the
trivial file transfer \
# protocol. The tftp protocol is often used to
boot diskless \
# workstations, download configuration files to
network-aware printers, \
# and to start the installation process for some
operating systems.
service tftp
{
socket_type = dgram
protocol = udp
wait = yes
user = root
server = /usr/sbin/in.tftpd
server_args = -s /var/lib/tftpboot
disable = no
per_source = 11
cps = 100 2
flags = IPv4
}

5. Enable tftp on the Bootstrap host.
$ systemctl start tftp
$ systemctl enable tftp
Verify tftp is active and enabled:
$ systemctl status tftp
$ ps -elf | grep tftp
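Optionally, the TFTP service can be exercised end to end from the Bootstrap host itself (a hedged example; it assumes the standard tftp client package is installed and uses a throw-away test file name):

$ echo tftp-test > /var/lib/tftpboot/tftp-test.txt
$ cd /tmp && tftp 127.0.0.1 -c get tftp-test.txt && cat tftp-test.txt
tftp-test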

6. Copy the Copy the dhcpd.conf file from the Utility USB in OCCNE 1.0 Installation
dhcpd.conf PreFlight checklist : Create the dhcpd.conf File to the /etc/dhcp/ directory.
file
$ cp /media/usb/dhcpd.conf /etc/dhcp/

7. Restart and enable the dhcpd service.
$ /bin/systemctl restart dhcpd.service
$ /bin/systemctl enable dhcpd.service
Use the systemctl status dhcpd command to verify active
and enabled.
$ systemctl status dhcpd
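If dhcpd fails to start, the configuration copied from the Utility USB can be syntax-checked directly with the ISC dhcpd test option (shown here as a troubleshooting aid, not a required step):

$ dhcpd -t -cf /etc/dhcp/dhcpd.conf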

8. Copy the Copy the switch configuration and script files from the Utility USB to
switch directory /var/lib/tftpboot/.
configuration
and script $ cp /media/usb/93180_switchA.cfg /var/lib/tftpboot/.
files $ cp /media/usb/93180_switchB.cfg /var/lib/tftpboot/.
$ cp /media/usb/poap_nexus_script.py /var/lib/tftpboot/.



9. Copy the Copy the ifcfg template files to /tmp directory for later use.
ifcfg template
files $ cp /media/usb/ifcfg-vlan /tmp
$ cp /media/usb/ifcfg-bridge /tmp

10. Modify POAP script file: Make the following change for the first server information. The username and password are the credentials used to log in to the Bootstrap host.

$ vi /var/lib/tftpboot/poap_nexus_script.py
Host name and user credentials
options = {
"username": "<username>",
"password": "<password>",
"hostname": "192.168.2.11",
"transfer_protocol": "scp",
"mode": "serial_number",
"target_system_image": "nxos.9.2.3.bin",
}

Note: The version nxos.9.2.3.bin is used by default. If a different version is to be used, modify the "target_system_image" value with the new version.

11. Modify Modify POAP script file md5sum by executing the md5Poap.sh script from
POAP script the Utility USB created from OCCNE 1.0 Installation PreFlight checklist :
file Create the md5Poap Bash Script.
$ cd /var/lib/tftpboot/
$ /bin/bash md5Poap.sh
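The Cisco-supplied POAP script typically carries its own checksum in a "#md5sum=" header line, which is what md5Poap.sh regenerates. If the script in use follows that convention, the update can be spot-checked (an optional, hedged check):

$ grep '#md5sum=' /var/lib/tftpboot/poap_nexus_script.py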

12. Create the The serial number is located on a pullout card on the back of the switch in
files the left most power supply of the switch.
necessary to
configure the
ToR switches
using the
serial number
from the
switch.



13. Copy the /var/lib/tftpboot/93180_switchA.cfg into a file called /var/lib/tftpboot/conf.<switchA serial number>. Modify the switch specific values in the /var/lib/tftpboot/conf.<switchA serial number> file, including all the values in the curly braces, as in the following code block. These values are contained in OCCNE 1.0 Installation PreFlight checklist : ToR and Enclosure Switches Variables Table (Switch Specific) and OCCNE 1.0 Installation PreFlight Checklist : Complete OA and Switch IP Table. Modify these values with the following sed commands, or use an editor such as vi.

$ sed -i 's/{switchname}/<switch_name>/' conf.<switchA serial number>
$ sed -i 's/{admin_password}/<admin_password>/'
conf.<switchA serial number>
$ sed -i 's/{user_name}/<user_name>/' conf.<switchA
serial number>
$ sed -i 's/{user_password}/<user_password>/'
conf.<switchA serial number>
$ sed -i 's/{ospf_md5_key}/<ospf_md5_key>/' conf.<switchA
serial number>
$ sed -i 's/{OSPF_AREA_ID}/<ospf_area_id>/' conf.<switchA
serial number>

$ sed -i 's/{NTPSERVER1}/<NTP_server_1>/' conf.<switchA


serial number>
$ sed -i 's/{NTPSERVER2}/<NTP_server_2>/' conf.<switchA
serial number>
$ sed -i 's/{NTPSERVER3}/<NTP_server_3>/' conf.<switchA
serial number>
$ sed -i 's/{NTPSERVER4}/<NTP_server_4>/' conf.<switchA
serial number>
$ sed -i 's/{NTPSERVER5}/<NTP_server_5>/' conf.<switchA
serial number>

Note: If fewer than 5 NTP servers are available, delete the extra NTP server lines with a command such as:
$ sed -i '/{NTPSERVER5}/d' conf.<switchA serial number>

Note: A different delimiter (#) is used in the following commands due to the '/' character in the variable values.
$ sed -i
's#{ALLOW_5G_XSI_LIST_WITH_PREFIX_LEN}#<MetalLB_Signal_Sub
net_With_Prefix>#g' conf.<switchA serial number>
$ sed -i
's#{CNE_Management_SwA_Address}#<ToRswitchA_CNEManagementN
et_IP>#g' conf.<switchA serial number>
$ sed -i
's#{CNE_Management_SwB_Address}#<ToRswitchB_CNEManagementN
et_IP>#g' conf.<switchA serial number>
$ sed -i
's#{CNE_Management_Prefix}#<CNEManagementNet_Prefix>#g'
conf.<switchA serial number>
$ sed -i
's#{SQL_replication_SwA_Address}#<ToRswitchA_SQLreplicatio
nNet_IP>#g' conf.<switchA serial number>
$ sed -i


's#{SQL_replication_SwB_Address}#<ToRswitchB_SQLreplicatio
nNet_IP>#g' conf.<switchA serial number>
$ sed -i
's#{SQL_replication_Prefix}#<SQLreplicationNet_Prefix>#g'
conf.<switchA serial number>
$ ipcalc -n <ToRswitchA_SQLreplicationNet_IP>/<SQLreplicationNet_Prefix> | awk -F'=' '{print $2}'
$ sed -i 's/{SQL_replication_Subnet}/<output from ipcalc command as SQL_replication_Subnet>/' conf.<switchA serial number>

$ sed -i 's/{CNE_Management_VIP}/
<ToRswitch_CNEManagementNet_VIP>/g' conf.<switchA serial
number>
$ sed -i 's/{SQL_replication_VIP}/
<ToRswitch_SQLreplicationNet_VIP>/g' conf.<switchA serial
number>
$ sed -i 's/{OAM_UPLINK_CUSTOMER_ADDRESS}/
<ToRswitchA_oam_uplink_customer_IP>/' conf.<switchA
serial number>

$ sed -i 's/{OAM_UPLINK_SwA_ADDRESS}/
<ToRswitchA_oam_uplink_IP>/g' conf.<switchA serial number>
$ sed -i 's/{SIGNAL_UPLINK_SwA_ADDRESS}/
<ToRswitchA_signaling_uplink_IP>/g' conf.<switchA serial
number>
$ sed -i 's/{OAM_UPLINK_SwB_ADDRESS}/
<ToRswitchB_oam_uplink_IP>/g' conf.<switchA serial number>
$ sed -i 's/{SIGNAL_UPLINK_SwB_ADDRESS}/
<ToRswitchB_signaling_uplink_IP>/g' conf.<switchA serial
number>
$ ipcalc -n <ToRswitchA_signaling_uplink_IP>/30 | awk -
F'=' '{print $2}'
$ sed -i 's/{SIGNAL_UPLINK_SUBNET}/<output from ipcalc
command as signal_uplink_subnet>/' conf.<switchA serial
number>

$ ipcalc -n <ToRswitchA_SQLreplicationNet_IP>/<SQLreplicationNet_Prefix> | awk -F'=' '{print $2}'
$ sed -i 's/{MySQL_Replication_SUBNET}/<output from the above ipcalc command appended with prefix>/' conf.<switchA serial number>

Note: The version nxos.9.2.3.bin is used by default and hard-coded in the conf files. If a different version is to be used, run the following command:
$ sed -i 's/nxos.9.2.3.bin/<nxos_version>/' conf.<switchA serial number>

Note: access-list Restrict_Access_ToR
The following line allows one access server to reach the switch management and SQL VLAN addresses while all other access is denied. If this is not needed, delete the line. If more servers need access, add a similar line for each.


$ sed -i 's/{Allow_Access_Server}/<Allow_Access_Server>/'
conf.<switchA serial number>
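For reference, the ipcalc/awk pattern used in this step only extracts the network address for a given IP and prefix. A worked example with made-up values (the real addresses come from the preflight checklist):

$ ipcalc -n 172.16.5.10/24
NETWORK=172.16.5.0
$ ipcalc -n 172.16.5.10/24 | awk -F'=' '{print $2}'
172.16.5.0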



14. Copy the /var/lib/tftpboot/93180_switchB.cfg into a file called /var/lib/tftpboot/conf.<switchB serial number>. Modify the switch specific values in the /var/lib/tftpboot/conf.<switchB serial number> file, including: hostname, username/password, oam_uplink IP address, signaling_uplink IP address, access-list ALLOW_5G_XSI_LIST permit address, prefix-list ALLOW_5G_XSI. These values are contained in OCCNE 1.0 Installation PreFlight checklist : ToR and Enclosure Switches Variables Table and OCCNE 1.0 Installation PreFlight Checklist : Complete OA and Switch IP Table.

$ sed -i 's/{switchname}/<switch_name>/' conf.<switchB serial number>
$ sed -i 's/{admin_password}/<admin_password>/'
conf.<switchB serial number>
$ sed -i 's/{user_name}/<user_name>/' conf.<switchB
serial number>
$ sed -i 's/{user_password}/<user_password>/'
conf.<switchB serial number>
$ sed -i 's/{ospf_md5_key}/<ospf_md5_key>/' conf.<switchB
serial number>
$ sed -i 's/{OSPF_AREA_ID}/<ospf_area_id>/' conf.<switchB
serial number>

$ sed -i 's/{NTPSERVER1}/<NTP_server_1>/' conf.<switchB


serial number>
$ sed -i 's/{NTPSERVER2}/<NTP_server_2>/' conf.<switchB
serial number>
$ sed -i 's/{NTPSERVER3}/<NTP_server_3>/' conf.<switchB
serial number>
$ sed -i 's/{NTPSERVER4}/<NTP_server_4>/' conf.<switchB
serial number>
$ sed -i 's/{NTPSERVER5}/<NTP_server_5>/' conf.<switchB
serial number>

Note: If fewer than 5 NTP servers are available, delete the extra NTP server lines with a command such as:
$ sed -i '/{NTPSERVER5}/d' conf.<switchB serial number>

Note: A different delimiter (#) is used in the following commands due to the '/' character in the variable values.
$ sed -i
's#{ALLOW_5G_XSI_LIST_WITH_PREFIX_LEN}#<MetalLB_Signal_Sub
net_With_Prefix>#g' conf.<switchB serial number>
$ sed -i
's#{CNE_Management_SwA_Address}#<ToRswitchA_CNEManagementN
et_IP>#g' conf.<switchB serial number>
$ sed -i
's#{CNE_Management_SwB_Address}#<ToRswitchB_CNEManagementN
et_IP>#g' conf.<switchB serial number>
$ sed -i
's#{CNE_Management_Prefix}#<CNEManagementNet_Prefix>#g'
conf.<switchB serial number>
$ sed -i
's#{SQL_replication_SwA_Address}#<ToRswitchA_SQLreplicatio
nNet_IP>#g' conf.<switchB serial number>
$ sed -i
's#{SQL_replication_SwB_Address}#<ToRswitchB_SQLreplicatio


nNet_IP>#g' conf.<switchB serial number>


$ sed -i
's#{SQL_replication_Prefix}#<SQLreplicationNet_Prefix>#g'
conf.<switchB serial number>
$ ipcalc -n <ToRswitchB_SQLreplicationNet_IP>/<SQLreplicationNet_Prefix> | awk -F'=' '{print $2}'
$ sed -i 's/{SQL_replication_Subnet}/<output from ipcalc command as SQL_replication_Subnet>/' conf.<switchB serial number>

$ sed -i 's/{CNE_Management_VIP}/
<ToRswitch_CNEManagementNet_VIP>/' conf.<switchB serial
number>
$ sed -i 's/{SQL_replication_VIP}/
<ToRswitch_SQLreplicationNet_VIP>/' conf.<switchB serial
number>
$ sed -i 's/{OAM_UPLINK_CUSTOMER_ADDRESS}/
<ToRswitchB_oam_uplink_customer_IP>/' conf.<switchB
serial number>

$ sed -i 's/{OAM_UPLINK_SwA_ADDRESS}/<ToRswitchA_oam_uplink_IP>/g' conf.<switchB serial number>
$ sed -i 's/{SIGNAL_UPLINK_SwA_ADDRESS}/<ToRswitchA_signaling_uplink_IP>/g' conf.<switchB serial number>
$ sed -i 's/{OAM_UPLINK_SwB_ADDRESS}/
<ToRswitchB_oam_uplink_IP>/g' conf.<switchB serial number>
$ sed -i 's/{SIGNAL_UPLINK_SwB_ADDRESS}/
<ToRswitchB_signaling_uplink_IP>/g' conf.<switchB serial
number>
$ ipcalc -n <ToRswitchB_signaling_uplink_IP>/30 | awk -
F'=' '{print $2}'
$ sed -i 's/{SIGNAL_UPLINK_SUBNET}/<output from ipcalc
command as signal_uplink_subnet>/' conf.<switchB serial
number>

Note: The version nxos.9.2.3.bin is used by default and hard-coded in the conf files. If a different version is to be used, run the following command:
$ sed -i 's/nxos.9.2.3.bin/<nxos_version>/' conf.<switchB serial number>

Note: access-list Restrict_Access_ToR
The following line allows one access server to reach the switch management and SQL VLAN addresses while all other access is denied. If this is not needed, delete the line. If more servers need access, add a similar line for each.
$ sed -i 's/{Allow_Access_Server}/<Allow_Access_Server>/'
conf.<switchB serial number>
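After completing the substitutions for both switches, a quick check that no template placeholders were left behind can save a failed POAP run (an optional verification; no output means every {variable} was replaced):

$ grep -n '{[A-Za-z_0-9]*}' /var/lib/tftpboot/conf.<switchA serial number> /var/lib/tftpboot/conf.<switchB serial number>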



15. Generate the md5 checksum for each conf file in /var/lib/tftpboot and copy it into a new file called conf.<switchA/B serial number>.md5.

$ md5sum conf.<switchA serial number> > conf.<switchA serial number>.md5
$ md5sum conf.<switchB serial number> > conf.<switchB serial number>.md5
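The generated .md5 files can be verified in place before the switches attempt to download them (optional; md5sum -c re-reads each file and reports OK on a match):

$ cd /var/lib/tftpboot
$ md5sum -c conf.<switchA serial number>.md5 conf.<switchB serial number>.md5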
16. Verify the /var/lib/tftpboot directory has the correct files. Make sure the file permissions are set as given below.
Note: The ToR switches are constantly attempting to find and execute the poap_nexus_script.py script, which uses tftp to load and install the configuration files.

$ ls -l /var/lib/tftpboot/
total 1305096
-rw-r--r--. 1 root root 7161 Mar 25 15:31
conf.<switchA serial number>
-rw-r--r--. 1 root root 51 Mar 25 15:31
conf.<switchA serial number>.md5
-rw-r--r--. 1 root root 7161 Mar 25 15:31
conf.<switchB serial number>
-rw-r--r--. 1 root root 51 Mar 25 15:31
conf.<switchB serial number>.md5
-rwxr-xr-x. 1 root root 75856 Mar 25 15:32
poap_nexus_script.py

17. Disable firewalld.
$ systemctl stop firewalld
$ systemctl disable firewalld

To verify:
$ systemctl status firewalld

Once this is complete, the ToR Switches will attempt to boot from the
tftpboot files automatically. Eventually the verification steps can be
executed below. It may take about 5 minutes for this to complete.

Verification


Table 3-4 Procedure to verify Top of Rack 93180YC-EX Switches

Step # Procedure Description


1. After the ToR Note: Wait till the device responds.
switches
configured, $ ping 192.168.2.1
ping the PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
switches from 64 bytes from 192.168.2.1: icmp_seq=1 ttl=255 time=0.419
bootstrap ms
server. The 64 bytes from 192.168.2.1: icmp_seq=2 ttl=255 time=0.496
switches ms
mgmt0 64 bytes from 192.168.2.1: icmp_seq=3 ttl=255 time=0.573
interfaces are ms
configured 64 bytes from 192.168.2.1: icmp_seq=4 ttl=255 time=0.535
with the IP ms
addresses ^C
which are in --- 192.168.2.1 ping statistics ---
the conf files. 4 packets transmitted, 4 received, 0% packet loss, time
3000ms
rtt min/avg/max/mdev = 0.419/0.505/0.573/0.063 ms
$ ping 192.168.2.2
PING 192.168.2.2 (192.168.2.2) 56(84) bytes of data.
64 bytes from 192.168.2.2: icmp_seq=1 ttl=255 time=0.572
ms
64 bytes from 192.168.2.2: icmp_seq=2 ttl=255 time=0.582
ms
64 bytes from 192.168.2.2: icmp_seq=3 ttl=255 time=0.466
ms
64 bytes from 192.168.2.2: icmp_seq=4 ttl=255 time=0.554
ms
^C
--- 192.168.2.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time
3001ms
rtt min/avg/max/mdev = 0.466/0.543/0.582/0.051 ms



2. Attempt to ssh
$ ssh plat@192.168.2.1
to the switches
The authenticity of host '192.168.2.1 (192.168.2.1)'
with the
can't be established.
username/
RSA key fingerprint is SHA256:jEPSMHRNg9vejiLcEvw5qprjgt
password
+4ua9jucUBhktH520.
provided in
RSA key fingerprint is MD5:02:66:3a:c6:81:65:20:2c:6e:cb:
the conf files.
08:35:06:c6:72:ac.
Are you sure you want to continue connecting (yes/no)?
yes
Warning: Permanently added '192.168.2.1' (RSA) to the
list of known hosts.
User Access Verification
Password:

Cisco Nexus Operating System (NX-OS) Software


TAC support: http://www.cisco.com/tac
Copyright (C) 2002-2019, Cisco and/or its affiliates.
All rights reserved.
The copyrights to certain works contained in this
software are
owned by other third parties and used and distributed
under their own
licenses, such as open source. This software is
provided "as is," and unless
otherwise stated, there is no warranty, express or
implied, including but not
limited to warranties of merchantability and fitness for
a particular purpose.
Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or
GNU General Public License (GPL) version 3.0 or the GNU
Lesser General Public License (LGPL) Version 2.1 or
Lesser General Public License (LGPL) Version 2.0.
A copy of each such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://opensource.org/licenses/gpl-3.0.html and
http://www.opensource.org/licenses/lgpl-2.1.php and
http://www.gnu.org/licenses/old-licenses/library.txt.
#



3. Verify the
$ show running-config
running-config
!Command: show running-config
has all
!Running configuration last done at: Mon Apr 8 17:39:38
expected
2019
configurations
!Time: Mon Apr 8 18:30:17 2019
in the conf file
version 9.2(3) Bios:version 07.64
using the
hostname 12006-93108A
show
vdc 12006-93108A id 1
running-
limit-resource vlan minimum 16 maximum 4094
config
limit-resource vrf minimum 2 maximum 4096
command.
limit-resource port-channel minimum 0 maximum 511
limit-resource u4route-mem minimum 248 maximum 248
limit-resource u6route-mem minimum 96 maximum 96
limit-resource m4route-mem minimum 58 maximum 58
limit-resource m6route-mem minimum 8 maximum 8
feature scp-server
feature sftp-server
cfs eth distribute
feature ospf
feature bgp
feature interface-vlan
feature lacp
feature vpc
feature bfd
feature vrrpv3
....
....
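In addition to reviewing the running configuration, a few NX-OS status commands give a quick health check of the vPC, port-channel, and OSPF state that the conf files enable (optional checks; exact output depends on the site topology):

# show vpc brief
# show port-channel summary
# show ip ospf neighbors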

4. Un-mount the Utility USB and remove it (umount /media/usb), then connect or enable the customer uplink.
5. Verify the
$ ping <ToRSwitch_CNEManagementNet_VIP>
RMS1 can
PING <ToRSwitch_CNEManagementNet_VIP>
ping the
(<ToRSwitch_CNEManagementNet_VIP>) 56(84) bytes of data.
CNE_Manage
64 bytes from <ToRSwitch_CNEManagementNet_VIP>:
ment VIP
icmp_seq=2 ttl=255 time=1.15 ms
64 bytes from <ToRSwitch_CNEManagementNet_VIP>:
icmp_seq=3 ttl=255 time=1.11 ms
64 bytes from <ToRSwitch_CNEManagementNet_VIP>:
icmp_seq=4 ttl=255 time=1.23 ms
^C
--- 10.75.207.129 ping statistics ---
4 packets transmitted, 3 received, 25% packet loss, time
3019ms
rtt min/avg/max/mdev = 1.115/1.168/1.237/0.051 ms



6. Verify the
RMS1 can be
$ ssh root@<CNE_Management_IP_Address>
accessed from
Using username "root".
laptop. Use
root@<CNE_Management_IP_Address>'s password:<root
application
password>
such as putty
Last login: Mon May 6 10:02:01 2019 from 10.75.9.171
etc to ssh to
[root@RMS1 ~]#
RMS1.

Configure Addresses for RMS iLOs, OA, EBIPA


Introduction
This procedure is used to configure RMS iLO addresses and add a new user account for each
RMS other than the Bootstrap Host. When the RMSs are shipped and out of box after hardware
installation and powerup, the RMSs are in a factory default state with the iLO in DHCP mode
waiting for DHCP service. DHCP is used to configure the ToR switches, OAs, Enclosure
switches, and blade server iLOs, so DHCP can be used to configure RMS iLOs as well.

Prerequisites
Procedure OCCNE Configure Top of Rack 93180YC-EX Switches has been completed.

Limitations/Expectations
All steps are executed from the ssh session of the Bootstrap server.

References
HPE BladeSystem Onboard Administrator User Guide

Steps to configure Addresses for RMS iLOs, OA, EBIPA

Table 3-5 Procedure to configure Addresses for RMS iLOs, OA, EBIPA

Step # Procedure Description


1. Setup team0.2 interface
$ nmcli con add con-name team0.2 type vlan id 2 dev team0
$ nmcli con mod team0.2 ipv4.method manual ipv4.addresses 192.168.20.11/24
$ nmcli con up team0.2

2. Subnet and conf file address: The /etc/dhcp/dhcpd.conf file should already have been configured in procedure OCCNE Configure Top of Rack 93180YC-EX Switches and dhcpd started/enabled on the bootstrap server. The second subnet 192.168.20.0 is used to assign addresses for the OA and RMS iLOs. The "next-server 192.168.20.11" option is the same as the server team0.2 IP address.



3. Display the dhcpd leases file at /var/lib/dhcpd/dhcpd.leases. The DHCPD lease file will display the DHCP addresses for all RMS iLOs and Enclosure OAs.

$ cat /var/lib/dhcpd/dhcpd.leases
# The format of this file is documented in the dhcpd.leases(5) manual page.
# This lease file was written by isc-dhcp-4.2.5
lease 192.168.20.101 {
starts 4 2019/03/28 22:05:26;
ends 4 2019/03/28 22:07:26;
tstp 4 2019/03/28 22:07:26;
cltt 4 2019/03/28 22:05:26;
binding state free;
hardware ethernet 48:df:37:7a:41:60;
}
lease 192.168.20.103 {
starts 4 2019/03/28 22:05:28;
ends 4 2019/03/28 22:07:28;
tstp 4 2019/03/28 22:07:28;
cltt 4 2019/03/28 22:05:28;
binding state free;
hardware ethernet 48:df:37:7a:2f:70;
}
lease 192.168.20.102 {
starts 4 2019/03/28 22:05:16;
ends 4 2019/03/28 23:03:29;
tstp 4 2019/03/28 23:03:29;
cltt 4 2019/03/28 22:05:16;
binding state free;
hardware ethernet 48:df:37:7a:40:40;
}
lease 192.168.20.106 {
starts 5 2019/03/29 11:14:04;
ends 5 2019/03/29 14:14:04;
tstp 5 2019/03/29 14:14:04;
cltt 5 2019/03/29 11:14:04;
binding state free;
hardware ethernet b8:83:03:47:5f:14;
uid "\000\270\203\003G_\024\000\000\000";
}
lease 192.168.20.105 {
starts 5 2019/03/29 12:56:23;
ends 5 2019/03/29 15:56:23;
tstp 5 2019/03/29 15:56:23;
cltt 5 2019/03/29 12:56:23;
binding state free;
hardware ethernet b8:83:03:47:5e:54;
uid "\000\270\203\003G^T\000\000\000";
}
lease 192.168.20.104 {
starts 5 2019/03/29 13:08:21;
ends 5 2019/03/29 16:08:21;
tstp 5 2019/03/29 16:08:21;
cltt 5 2019/03/29 13:08:21;
binding state free;
hardware ethernet b8:83:03:47:64:9c;
uid "\000\270\203\003Gd\234\000\000\000";


}
lease 192.168.20.108 {
starts 5 2019/03/29 09:57:02;
ends 5 2019/03/29 21:57:02;
tstp 5 2019/03/29 21:57:02;
cltt 5 2019/03/29 09:57:02;
binding state active;
next binding state free;
rewind binding state free;
hardware ethernet fc:15:b4:1a:ea:05;
uid "\001\374\025\264\032\352\005";
client-hostname "OA-FC15B41AEA05";
}
lease 192.168.20.107 {
starts 5 2019/03/29 12:02:50;
ends 6 2019/03/30 00:02:50;
tstp 6 2019/03/30 00:02:50;
cltt 5 2019/03/29 12:02:50;
binding state active;
next binding state free;
rewind binding state free;
hardware ethernet 9c:b6:54:80:d7:d7;
uid "\001\234\266T\200\327\327";
client-hostname "SA-9CB65480D7D7";
}
server-duid "\000\001\000\001$#
\364\344\270\203\003Gim";
lease 192.168.20.107 {
starts 5 2019/03/29 18:09:47;
ends 6 2019/03/30 06:09:47;
cltt 5 2019/03/29 18:09:47;
binding state active;
next binding state free;
rewind binding state free;
hardware ethernet 9c:b6:54:80:d7:d7;
uid "\001\234\266T\200\327\327";
client-hostname "SA-9CB65480D7D7";
}
lease 192.168.20.108 {
starts 5 2019/03/29 18:09:54;
ends 6 2019/03/30 06:09:54;
cltt 5 2019/03/29 18:09:54;
binding state active;
next binding state free;
rewind binding state free;
hardware ethernet fc:15:b4:1a:ea:05;
uid "\001\374\025\264\032\352\005";
client-hostname "OA-FC15B41AEA05";
}
lease 192.168.20.106 {
starts 5 2019/03/29 18:10:04;
ends 5 2019/03/29 21:10:04;
cltt 5 2019/03/29 18:10:04;
binding state active;


next binding state free;


rewind binding state free;
hardware ethernet b8:83:03:47:5f:14;
uid "\000\270\203\003G_\024\000\000\000";
client-hostname "ILO2M2909004B";
}
lease 192.168.20.104 {
starts 5 2019/03/29 18:10:35;
ends 5 2019/03/29 21:10:35;
cltt 5 2019/03/29 18:10:35;
binding state active;
next binding state free;
rewind binding state free;
hardware ethernet b8:83:03:47:64:9c;
uid "\000\270\203\003Gd\234\000\000\000";
client-hostname "ILO2M2909004F";
}
lease 192.168.20.105 {
starts 5 2019/03/29 18:10:40;
ends 5 2019/03/29 21:10:40;
cltt 5 2019/03/29 18:10:40;
binding state active;
next binding state free;
rewind binding state free;
hardware ethernet b8:83:03:47:5e:54;
uid "\000\270\203\003G^T\000\000\000";
client-hostname "ILO2M29090048";

4. Access RMS iLO from the DHCP address with the default Administrator password. From the above dhcpd.leases file, find the IP address for the iLO name; the default username is Administrator, and the password is on the label which can be pulled out from the front of the server.
Note: The DNS Name on the pull-out label should be used to match the physical machine with the iLO IP, since the same default DNS Name from the pull-out label is displayed upon logging in to the iLO command line interface, as shown in the example below.

$ ssh Administrator@192.168.20.104
Administrator@192.168.20.104's password:
User:Administrator logged-in to
ILO2M2909004F.labs.nc.tekelec.com(192.168.20.104 /
FE80::BA83:3FF:FE47:649C)
iLO Standard 1.37 at Oct 25 2018
Server Name:
Server Power: On
5. Create RMS iLO
</>hpiLO-> create /map1/accounts1 username=root
new user. Create new
password=TklcRoot
user with customized
group=admin,config,oemHPE_rc,oemHPE_power,oemHPE_vm
username and
status=0
password.
status_tag=COMMAND COMPLETED
Tue Apr 2 20:08:30 2019
User added successfully.



6. Disable the DHCP
</>hpiLO-> set /map1/dhcpendpt1 EnabledState=NO
before able to setup
status=0
static IP. Setup static
status_tag=COMMAND COMPLETED
failed before DHCP
Tue Apr 2 20:04:53 2019
is disabled.
Network settings change applied.
Settings change applied, iLO 5 will now be reset.
Logged Out: It may take several minutes before you
can log back in.
CLI session stopped
packet_write_wait: Connection to 192.168.20.104
port 22: Broken pipe

7. Setup RMS iLO


$ ssh <new username>@192.168.20.104
static IP address.
<new username>@192.168.20.104's password: <new
After a while after
password>
previous step, can
User: logged-in to
login back with the
ILO2M2909004F.labs.nc.tekelec.com(192.168.20.104 /
same address(which
FE80::BA83:3FF:FE47:649C)
is static IP now) and
iLO Standard 1.37 at Oct 25 2018
new username/
Server Name:
password. If don't
Server Power: On
want to use the same
address, go to next
</>hpiLO-> set /map1/enetport1/lanendpt1/ipendpt1
step to change the IP
IPv4Address=192.168.20.122 SubnetMask=255.255.255.0
address.
status=0
status_tag=COMMAND COMPLETED
Tue Apr 2 20:22:23 2019

Network settings change applied.


Settings change applied, iLO 5 will now be reset.
Logged Out: It may take several minutes before you
can log
back in.

CLI session stopped

packet_write_wait: Connection to 192.168.20.104


port 22:
Broken pipe
#



8. Set EBIPA addresses
Set address for each enclosure switch, note the
for InterConnect
last number 1 or 2 is the interconnect bay number.
Bays (Enclosure
Switches). After
OA-FC15B41AEA05> set ebipa interconnect
login to OA, set
192.168.20.133 255.255.255.0 1
EBIPA addressed for
Entering anything other than 'YES' will result in
the two enclosure
the command not executing.
switches. The
It may take each interconnect several minutes to
addresses have to be
acquire the new settings.
in the subnet with
Are you sure you want to change the IP address for
server team0.2
the specified
address in order for
interconnect bays? yes
TFTP to work.
Successfully set 255.255.255.0 as the netmask for
interconnect bays.
Successfully set interconnect bay # 1 to IP
address 192.168.20.133
For the IP addresses to be assigned EBIPA must be
enabled.

OA-FC15B41AEA05> set ebipa interconnect


192.168.20.134 255.255.255.0 2
Entering anything other than 'YES' will result in
the command not executing.
It may take each interconnect several minutes to
acquire the new settings.
Are you sure you want to change the IP address for
the specified
interconnect bays? yes
Successfully set 255.255.255.0 as the netmask for
interconnect bays.
Successfully set interconnect bay # 2 to IP
address 192.168.20.134
For the IP addresses to be assigned EBIPA must be
enabled.



9. Set EBIPA addresses
OA-FC15B41AEA05> set ebipa server 192.168.20.141
for Blade Servers.
255.255.255.0 1-16
Set EBIPA addressed
Entering anything other than 'YES' will result in
for all the blade
the command not executing.
servers. The
Changing the IP address for device (iLO) bays that
addresses are in the
are enabled
same subnet with
causes the iLOs in those bays to be reset.
first server team0.2
Are you sure you want to change the IP address for
address and
the specified
enclosure switches.
device (iLO) bays? YES
Successfully set 255.255.255.0 as the netmask for
device (iLO) bays.
Successfully set device (iLO) bay # 1 to IP
address 192.168.20.141
Successfully set device (iLO) bay # 2 to IP
address 192.168.20.142
Successfully set device (iLO) bay # 3 to IP
address 192.168.20.143
Successfully set device (iLO) bay # 4 to IP
address 192.168.20.144
Successfully set device (iLO) bay # 5 to IP
address 192.168.20.145
Successfully set device (iLO) bay # 6 to IP
address 192.168.20.146
Successfully set device (iLO) bay # 7 to IP
address 192.168.20.147
Successfully set device (iLO) bay # 8 to IP
address 192.168.20.148
Successfully set device (iLO) bay # 9 to IP
address 192.168.20.149
Successfully set device (iLO) bay #10 to IP
address 192.168.20.150
Successfully set device (iLO) bay #11 to IP
address 192.168.20.151
Successfully set device (iLO) bay #12 to IP
address 192.168.20.152
Successfully set device (iLO) bay #13 to IP
address 192.168.20.153
Successfully set device (iLO) bay #14 to IP
address 192.168.20.154
Successfully set device (iLO) bay #15 to IP
address 192.168.20.155
Successfully set device (iLO) bay #16 to IP
address 192.168.20.156
For the IP addresses to be assigned EBIPA must be
enabled.
OA-FC15B41AEA05>
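Before continuing, the EBIPA settings entered above can be reviewed from the OA prompt, keeping in mind that EBIPA must be enabled for the addresses to be applied (a hedged example; exact command syntax can vary slightly between OA firmware releases):

OA-FC15B41AEA05> SHOW EBIPA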



10. Add New User for
OA-FC15B41AEA05> ADD USER <username>
OA. Create new user,
New Password: ********
set access level as
Confirm : ********
ADMINISTRATOR,
User "<username>" created.
and assign access to
You may set user privileges with the 'SET USER
all blades and OAs.
ACCESS' and 'ASSIGN' commands.
After that, the
username and
OA-FC15B41AEA05> set user access <username>
password can be used
ADMINISTRATOR
to access OAs.
"<username>" has been given administrator level
privileges.

OA-FC15B41AEA05> ASSIGN SERVER ALL <username>

<username> has been granted access to the valid


requested bay(s)

OA-FC15B41AEA05> ASSIGN OA <username>

<username> has been granted access to the OA.

11. From OA, go to each


OA-FC15B41AEA05> connect server 4
blade with "connect
server <bay
Connecting to bay 4 ...
number>", add New
User:OAtmp-root-5CBF2E61 logged-in to
User for each blade.
ILO2M290605KP.(192.168.20.144 /
FE80::AF1:EAFF:FE89:460)
iLO Standard Blade Edition 1.37 at Oct 25 2018
Server Name:
Server Power: On

</>hpiLO->

</>hpiLO-> create /map1/accounts1 username=root


password=TklcRoot
group=admin,config,oemHPE_rc,oemHPE_power,oemHPE_vm

status=2
status_tag=COMMAND PROCESSING FAILED
error_tag=COMMAND SYNTAX ERROR
Tue Apr 23 16:18:58 2019
User added successfully.



12. Change to static IP on OA: In order not to rely on DHCP and to keep the OA address stable, change to a static IP.
Note: After the following change, the OA session will hang because of the address change; have another server session ready to ssh with the new IP address and the new root user.

OA-FC15B41AEA05> SET IPCONFIG STATIC 1 192.168.20.131 255.255.255.0
Static IP settings successfully updated.
These setting changes will take effect immediately.

OA-FC15B41AEA05> SET IPCONFIG STATIC 2


192.168.20.132 255.255.255.0
Static IP settings successfully updated.
These setting changes will take effect immediately.
OA-FC15B41AEA05>

Configure Legacy BIOS on Remaining Hosts


These procedures define the steps necessary to configure additional Legacy BIOS for all hosts
in OCCNE 1.0. This includes steps that cannot be performed from the HP iLO 5 CLI prompt
such as RAID configuration, changing the boot mode, and setting the primary and secondary
boot devices.

Note:
The procedures in this document apply to the HP iLO console accessed via KVM. Each
procedure is executed in the order listed.

Prerequisites
Procedure OCCNE Configure Addresses for RMS iLOs, OA, EBIPA is complete.

Limitations and Expectations


1. Applies to HP iLO 5 only.
2. Should the System Utility indicate (or default to) UEFI booting, the user must go through the steps to reset booting back to Legacy BIOS mode by following the step: Change over from UEFI Booting Mode to Legacy BIOS Booting Mode in Table 3-6.
3. The procedures listed here apply to both Gen10 DL380 RMSs and Gen10 BL460c Blades
in a C7000 enclosure.
4. Access to the enclosure blades in these procedures is via the Bootstrap host using SSH on
the KVM. This is possible because the prerequisites are complete. If the prerequisites are
not completed before executing this procedure, the enclosure blades are only accessible via
the KVM connected directly to the active OA. In this case the mouse is not usable and
screen manipulations are performed using the keyboard ESC and directional keys.
5. This procedure does NOT apply to the Bootstrap Host.


References
1. HPE iLO 5 User Guide 1.15
2. UEFI System Utilities User Guide for HPE ProLiant Gen10 Servers and HPE Synergy
3. UEFI Workload-based Performance and Tuning Guide for HPE ProLiant Gen10 Servers
and HPE Synergy
4. HPE BladeSystem Onboard Administrator User Guide
5. OCCNE Inventory File Preparation

Steps to Configure the Legacy BIOS on Remaining Hosts

Table 3-6 Procedure to configure the Legacy BIOS on Remaining Hosts

Step # Procedure Description


Expose the Expose the System Utility screen to the user for a RMS host on the
1. System KVM. It does not provide instructions on how to connect the KVM as
Configuration this may be different on each installation.
Utility on a RMS
1. Once the remote console has been exposed, the system must be
Host
reset by manually pressing the power button on the front of the
RMS host to force it through the restart process. When the initial
window is displayed, hit the F9 key repeatedly. Once the F9 is
highlighted at the lower left corner of the remote console, it should
eventually bring up the main System Utility.
2. The System Utilities screen is exposed in the remote console.



Expose the 1. The blades are maintained via the OAs in the enclosure. Because
2. System Utility each blade iLO has already been assigned an IP address from the
for an Enclosure prerequisites, the blades can each be reached using SSH from the
Blade Bootstrap host login shell on the KVM.
a. SSH to the blade using the iLO IP address and the root user
and password. This brings up the HP iLO prompt.
$ ssh root@<blade_ilo_ip_address>
Using username "root".
Last login: Fri Apr 19 12:24:56 2019 from
10.39.204.17
[root@localhost ~]# ssh root@192.168.20.141
root@192.168.20.141's password:
User:root logged-in to ILO2M290605KM.
(192.168.20.141 / FE80::AF1:EAFF:FE89:35E)
iLO Standard Blade Edition 1.37 at Oct 25 2018
Server Name:
Server Power: On

</>hpiLO->
b. Use VSP to connect to the blade remote console.
</>hpiLO->vsp
c. Power cycle the blade to bring up the System Utility for that
blade.
Note: The System Utility is a text based version of that
exposed on the RMS via the KVM. The user must use the
directional (arrow) keys to manipulate between selections,
ENTER key to select, and ESC to go back from the current
selection.
d. Access the System Utility by hitting ESC 9.
2. Enabling Virtualization
This procedure provides the steps required to enable virtualization
on a given Bare Metal Server. Virtualization can be configured
using the default settings or via the default Workload Profiles.
Verifying Default Settings
a. Expose the System Utility by following step 1 or 2 depending
on the hardware being configured.
b. Select System Configuration
c. Select BIOS/Platform Configuration (RBSU)
d. Select Virtualization Options
This view displays the settings for the Intel(R) Virtualization
Technology (IntelVT), Intel(R) VT-d, and SR-IOV options
(Enabled or Disabled). The default value for each option is Enabled.
e. Select F10 if it is desired to save and stay in the utility or select
the F12 if it is desired to save and exit to continue the current
boot process.



Change over 1. Expose the System Utility by following step 1 or 2 depending on
3. from UEFI the hardware being configured.
Booting Mode to
Legacy BIOS 2. Select System Configuration
Booting Mode
3. Select BIOS/Platform Configuration (RBSU)
4. Select Boot Options.
This menu defines the boot mode.
If the Boot Mode is set to UEFI Mode then continue this
procedure. Otherwise there is no need to make any of the changes
below.
5. Select Boot Mode
This generates a warning indicating the following:
Boot Mode changes require a system reboot in order
to take effect. Changing the Boot Mode can impact
the ability of the server to boot the installed
operating system. An operating system is installed
in the same mode as the platform during the
installation. If the Boot Mode does not match the
operating system installation, the system cannot
boot. The following features require that the
server be configured for UEFI Mode: Secure Boot,
IPv6 PXE Boot, Boot > 2.2 TB Disks in AHCI SATA
Mode, and Smart Array SW RAID.

Hit the ENTER key and two selections appear: UEFI


Mode(highlighted) and Legacy BIOS Mode
6. Use the down arrow key to select Legacy BIOS Mode and hit the
ENTER. The screen indicates: A reboot is required for the
Boot Mode changes.
7. Hit F12. This displays the following: Changes are pending. Do
you want to save changes? Press 'Y" to save and exit,
'N' to discard and stay, or 'ESC' to cancel.
8. Hit the y key and an additional warning appears indicating: System
configuration changed. A system reboot is required.
Press ENTER to reboot the system.
9. a. Hit ENTER to force a reboot.
Note: The boot must go into the process of actually trying to
boot from the boot devices using the boot order (not just go
back through initialization and access the System Utility
again). The boot should fail and the System Utility can be
accessed again to continue any further changes needed.
b. After the reboot, hit the ESC 9 key sequence to re-enter the System Utility. Select System Configuration->BIOS/Platform Configuration (RBSU)->Boot Options. Verify the Boot Mode is set to Legacy BIOS Mode and UEFI Optimized Boot is set to Disabled.
10. Select F10 if it is desired to save and stay in the utility or select the
F12 if it is desired to save and exit to complete the current boot
process.



Force PXE to 1. Expose the System Utility by following step 1 or 2 depending on
4. boot from the the hardware being configured.
first Embedded
FlexibleLOM 2. Select System Configuration.
HPE Ethernet
3. Select BIOS/Platform Configuration (RBSU) .
10Gb 2-port
Adapter 4. Select Boot Options. This menu defines the boot mode.
5. Confirm the following settings: Boot Mode is Legacy BIOS Mode (UEFI Optimized Boot disabled), and Boot Order Policy is Retry Boot Order Indefinitely (this means it keeps trying to boot without ever going to disk). If not in Legacy BIOS Mode, follow the step Change over from UEFI Booting Mode to Legacy BIOS Booting Mode.
6. Select Legacy BIOS Boot Order In the default view, the 10Gb
Embedded FlexibleLOM 1 Port 1 is at the bottom of the list.
7. Move the 10 Gb Embedded FlexibleLOM 1 Port 1 entry up above
the 1Gb Embedded LOM 1 Port 1 entry. To move an entry press the
'+' key to move an entry higher in the boot list and the '-' key to
move an entry lower in the boot list. Use the arrow keys to navigate
through the Boot Order list.
8. Select F10 if it is desired to save and stay in the utility or select F12 if it is desired to save and exit to continue the current boot process.

Enabling This procedure provides the steps required to enable virtualization on a


5. Virtualization given Bare Metal Server. Virtualization can be configured using the
default settings or via the Workload Profiles.
Verifying Default Settings
1. Expose the System Utility by following step 1 or 2 depending on
the hardware being configured.
2. Select System Configuration
3. Select BIOS/Platform Configuration (RBSU)
4. Select Virtualization Options
This view displays the settings for the Intel(R) Virtualization
Technology (IntelVT), Intel(R) VT-d, and SR-IOV options
(Enabled or Disabled). The default value for each option is Enabled.
5. Select F10 if it is desired to save and stay in the utility or select the
F12 if it is desired to save and exit to continue the current boot
process.



Disable RAID OCCNE does not currently support any RAID configuration. Follow this
6. Configurations procedure to disable RAID settings if the default settings of the System
Utility include any RAID configuration(s).
Note: There may be more than one RAID Array set up. This procedure
should be repeated for any RAID configuration.
1. Expose the System Utility by following step 1 or 2 depending on
the hardware being configured.
2. Select System Configuration.
3. Select Embedded RAID 1 : HPE Smart Array P408i-a SR Gen
10.
4. Select Array Configuration.
5. Select Manage Arrays.
6. Select Array A (or any designated Array Configuration if there
are more than one).
7. Select Delete Array. A warning is displayed indicating the
following:
Deletes an Array. All the data on the logical
drives that are part of deleted array will be lost.
Also if the deleted array is the only one on the
controller, the controller settings will be erased
and its default configuration is restored.
8. Hit ENTER, the changes are submitted and Delete Array
Successful is displayed.
9. Hit ENTER to go back to the main menu for the HPE Smart Array.
10. Select F10 if it is desired to save and stay in the utility or select F12 if it is desired to save and exit to continue the current boot process.



7. Enable the Primary and Secondary Boot Devices: This step provides the steps necessary to configure the primary and secondary bootable devices for a Gen10 Server.
Note: There can be multiple configurations of hardware drives on the server that include both Hard Drives (HDD) and Solid State Hard Drives
(SSD). SSDs are indicated by SATA-SSD ATA in the drive description.
The commands below include two HDDs and two SSDs. The SSDs are
not to be selected for this configuration. The actual selections may be
different based on the hardware being updated.
1. Expose the System Utility by following step 1 or 2 depending on
the hardware being configured.
2. Select System Configuration.
3. Select Embedded RAID 1 : HPE Smart Array P408i-a SR Gen
10.
4. Select Set Bootable Device(s) for Legacy Boot Mode.
If the boot devices are not set then Not Set is displayed for the
primary and secondary devices.
5. Examine the list of available hardware drives. If one or more HDDs
are available, continue with this procedure.
Note: A single drive can be set as both the primary and secondary
boot device but that is not part of this configuration.
6. Select Bootable Physical Drive
7. Select Port 1| Box:3 Bay:1 Size:1.8 TB SAS HP
EG00100JWJNR. Note: This example includes two HDDs and
two SSDs. The actual configuration may be different.
8. Select Set as Primary Bootable Device.
9. Hit ENTER.
Note: There is no need to set the secondary boot device. Leave it as
Not Set.
10. Hit the ESC key to back out to the System Utilitiesmenu.
11. Select F10 if it is desired to save and stay in the utility or select F12 if it is desired to save and exit to continue the current boot process.

Configure Enclosure Switches


Introduction
This procedure is used to configure the 6127XLG enclosure switches.
Prerequisites
• Procedure OCCNE Configure Top of Rack 93180YC-EX Switches has been completed.
• Procedure OCCNE Configure Addresses for RMS iLOs, OA, EBIPA has been completed.
• The Utility USB is available containing the necessary files as per: OCCNE 1.0 Installation
PreFlight checklist: Create Utility USB.
Limitations/Expectations


All steps are executed from a Keyboard, Video, Mouse (KVM) connection.
References
1. https://support.hpe.com/hpsc/doc/public/display?docId=c04763537

Procedure

Table 3-7 Procedure to configure enclosure switches

Step # Procedure Description


1. Copy the 6127XLG configuration file: Copy the 6127XLG configuration file on the Utility USB (see OCCNE 1.0 Installation PreFlight checklist : Create the OA 6127XLG Switch Configuration File) to the /var/lib/tftpboot directory on the Installer Bootstrap Host and verify that it exists with the correct permissions.
$ cp /media/usb/6127xlg_irf.cfg /var/lib/tftpboot/
6127xlg_irf.cfg

$ ls -l /var/lib/tftpboot/

total 1305096

-rw-r--r--. 1 root root 311 Mar 25 08:41


6127xlg_irf.cfg

2. Modify the These values are contained at OCCNE 1.0 Installation PreFlight
switch specific checklist : Create the OA 6127XLG Switch Configuration File from
values in column Enclosure_Switch.
the /var/lib/
tftpboot/
6127xlg_irf.cfg $ cd /var/lib/tftpboot
file. $ sed -i 's/{switchname}/<switch_name>/' 6127xlg_irf.cfg
$ sed -i 's/{admin_password}/<admin_password>/'
6127xlg_irf.cfg
$ sed -i 's/{user_name}/<user_name>/' 6127xlg_irf.cfg
$ sed -i 's/{user_password}/<user_password>/'
6127xlg_irf.cfg



3. Access the InterConnect Bay1 6127XLG: Access the InterConnect Bay1 6127XLG switch to configure the IRF (Intelligent Resilient Framework).
Note: On a new switch the user is presented with a setup loop when connecting to the console and must type CTRL_C or CTRL_D to break out of the loop.
Note: When saving the config, the following prompt is received: "The current configuration will be written to the device. Are you sure? [Y/N]:". Answer Y; then "Please input the file name(*.cfg)[flash:/startup.cfg] (To leave the existing filename unchanged, press the enter key):" is displayed. The user can leave the default startup.cfg unchanged, or change it to another name. The cfg file is used for the next reboot.

$ ssh <oa username>@<oa address>

If it shows standby, ssh to the other OA address.

OA-FC15B41AEA05> connect interconnect 1

....

<HPE>system-view

System View: return to User View with Ctrl+Z.

(Note: Run the following commands:)

irf member 1 priority 32

interface range Ten-GigabitEthernet 1/0/17 to Ten-


GigabitEthernet 1/0/20

shutdown

quit

irf-port 1/1

port group interface Ten-GigabitEthernet1/0/17

port group interface Ten-GigabitEthernet1/0/18

port group interface Ten-GigabitEthernet1/0/19

port group interface Ten-GigabitEthernet1/0/20

quit

interface range Ten-GigabitEthernet 1/0/17 to Ten-


GigabitEthernet 1/0/20


undo shutdown

quit

save

irf-port-configuration active

4. Access the Access the InterConnect Bay2 6127XLG switch to re-number to IRF 2.
InterConnect
Bay2 6127XLG OA-FC15B41AEA05> connect interconnect 2

....

<HPE>system-view

System View: return to User View with Ctrl+Z.

[HPE] irf member 1 renumber 2

Renumbering the member ID may result in configuration


change or loss. Continue?[Y/N]Y

[HPE]save

The current configuration will be written to the


device. Are you sure? [Y/N]:Y

Please input the file name(*.cfg)[flash:/startup.cfg]

(To leave the existing filename unchanged, press the


enter key):

Validating file. Please wait...

Saved the current configuration to mainboard device


successfully.

[HPE]quit

<HPE>reboot

Start to check configuration with next startup


configuration file, please wait.........DONE!

This command will reboot the device. Continue? [Y/N]:Y

Now rebooting, please wait...

System is starting...



5. Configure the After rebooting, the interfaces will begin with number 2 such as Ten-
IRF on Bay2 GigabitEthernet2/0/17, Ten-GigabitEthernet2/1/5. Run the following
6127XLG commands:
switch
system-view

interface range Ten-GigabitEthernet 2/0/17 to Ten-


GigabitEthernet 2/0/20

shutdown

quit

irf-port 2/2

port group interface Ten-GigabitEthernet2/0/17

port group interface Ten-GigabitEthernet2/0/18

port group interface Ten-GigabitEthernet2/0/19

port group interface Ten-GigabitEthernet2/0/20

quit

interface range Ten-GigabitEthernet 2/0/17 to Ten-


GigabitEthernet 2/0/20

undo shutdown

quit

save

irf-port-configuration active

6. Run "reboot"
<HPE>reboot
command on
Start to check configuration with next startup
both switches
configuration file, please wait.........DONE!
This command will reboot the device. Continue? [Y/N]:Y
Now rebooting, please wait...

System is starting...



7. Verify the IRF for the 6127XLG switches: When the reboot is finished, verify that the IRF is working with both members and the ports from the previous two switches, which now form an IRF and act as one switch.
<HPE>system-view

System View: return to User View with Ctrl+Z.

[HPE]display irf configuration

MemberID NewID IRF-Port1 IRF-


Port2

1 1 Ten-GigabitEthernet1/0/17 disable

Ten-GigabitEthernet1/0/18

Ten-GigabitEthernet1/0/19

Ten-GigabitEthernet1/0/20

2 2 disable Ten-
GigabitEthernet2/0/17

Ten-
GigabitEthernet2/0/18

Ten-
GigabitEthernet2/0/19

Ten-
GigabitEthernet2/0/20

[HPE]



8. Configure the
IRF switch with
<HPE>tftp 192.168.20.11 get 6127xlg_irf.cfg startup.cfg
predefined
configuration
startup.cfg already exists. Overwrite it? [Y/N]:Y
file.
Press CTRL+C to abort.

% Total % Received % Xferd Average Speed Time Time


Time Current

Dload Upload Total Spent Left Speed

100 9116 100 9116 0 0 167k 0 --:--:-- --:--:-- --:--:--


178k

<HPE>system-view

System View: return to User View with Ctrl+Z.

[HPE]configuration replace file flash:/startup.cfg

Current configuration will be lost, save current


configuration? [Y/N]:N

Now replacing the current configuration. Please wait ...

Succeeded in replacing current configuration with the


file flash:/startup.cfg.

[<switch_name>]save flash:/startup.cfg

The current configuration will be saved to flash:/


startup.cfg. Continue? [Y/N]:Y

flash:/startup.cfg exists, overwrite? [Y/N]:Y

Now saving current configuration to the device.

Saving configuration flash:/startup.cfg.Please wait...

Configuration is saved to device successfully.

[<switch_name>]
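Once the predefined configuration has been applied, a quick check that the switch took the new identity and saved it for the next boot can be run from the same session (optional; standard Comware display commands):

[<switch_name>] display current-configuration | include sysname
[<switch_name>] display startup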

Bastion Host Installation


This section outlines the use of the Installer Bootstrap Host to provision RMS2 with an
operating system and configure it to fulfill the role of Database Host. Subsequently, steps are
provided to provision virtual machines that run MySQL services, DBMS, and serve the role as


Bastion Host. After the Bastion Host is provisioned, it is used to complete the installation of
OCCNE.

Install Host OS onto RMS2 from the Installer Bootstrap Host


(RMS1)
Introduction
These procedures provide the steps required to install the OL7 image onto the RMS2 via the
Installer Bootstrap Host using a occne/os_install container. Once completed, RMS2 includes all
necessary rpm updates and tools necessary to Install the Bastion Host.

Prerequisites
• All procedures in OCCNE 1.0 - Installation Procedure : OCCNE Initial Configuration are
complete.
• The Utility USB is available containing the necessary files as mentioned in OCCNE 1.0
Installation PreFlight checklist.

Limitations and Expectations


All steps are executable from a SSH application (putty) connected laptop accessible via the
Management Interface.


Procedures

Table 3-8 Procedure to install the OL7 image onto the RMS2 via the installer bootstrap
host

Step # Procedure Description


1. Copy the This procedure is used to provide the steps for copying all supporting
Necessary Files files from the Utility USB to the appropriate directories so that the OS
from the Utility Install Container successfully installs OL7 onto RMS2.
USB to Support the Note: The cluster_name field is derived from the
OS Install occne_cluster_name field in the hosts.ini file.
1. Create the directories needed on the Installer Bootstrap Host.

$ mkdir /var/occne
$ mkdir /var/occne/<cluster_name>
$ mkdir /var/occne/<cluster_name>/yum.repos.d
2. Mount the Utility USB.
Note: Instructions for mounting a USB in Linux are at: OCCNE
Installation of Oracle Linux 7.5 on Bootstrap Host : Install
Additional Packages. Only follow steps 1-4 to mount the USB.
3. Copy the hosts.ini file (created using procedure: OCCNE
Inventory File Preparation) into the /var/occne/<cluster_name>/
directory. This hosts.ini file defines RMS2 to the OS Installer
Container running the os-install image downloaded from the
repo.
$ cp /media/usb/hosts.ini /var/occne/
<cluster_name>/hosts.ini
4. Update the hosts.ini file to include the ToR host_net (vlan3) VIP
for NTP clock synchronization. Use the ToR VIP address as
defined in procedure: OCCNE 1.0 Installation PreFlight
Checklist : Complete OA and Switch IP SwitchTable as the NTP
source.
$ vim /var/occne/<cluster_name>/hosts.ini

Update the ntp_server field with the VIP address.


5. Copy the customer specific ol7-mirror.repo and the docker-ce-
stable repo on the Utility USB to the Installer Bootstrap Host.
This is the .repo file created by the customer that provides access
to the onsite (within their network) repositories needed to
complete the full deployment of OCCNE 1.0 and to install
docker-ce onto the Installer Bootstrap Host.

$ cp /media/usb/ol7-mirror.repo /var/occne/
<cluster_name>/yum.repos.d/ol7-mirror.repo
$ cp /media/usb/ol7-mirror.repo /etc/yum.repos.d/
ol7-mirror.repo
$ cp /media/usb/docker-ce-stable.repo /etc/
yum.repos.d/docker-ce-stable.repo
6. If still enabled from procedure: OCCNE Installation of Oracle
Linux 7.5 on Bootstrap Host, the /etc/yum.repos.d/Media.repo is
to be disabled.


$ mv /etc/yum.repos.d/Media.repo /etc/yum.repos.d/
Media.repo.disable
7. Copy the updated version of the kickstart configuration file
to /var/occne/<cluster_name> directory.
$ cp /media/usb/occne-ks.cfg.j2.new /var/occne/
<cluster_name>/occne-ks.cfg.j2.new

2. Copy the OL7 ISO to the Installer Bootstrap Host
   The iso file should be accessible from a Customer Site Specific repository. It is accessible
   because the ToR switch configurations were completed in procedure: OCCNE Configure Top of
   Rack 93180YC-EX Switches.
   Copy the OL7 ISO file to the /var/occne directory on RMS1. The example below uses
   OracleLinux-7.5-x86_64-disc1.iso. Note: If the user copies this ISO from their laptop then they
   must use an application like WinSCP pointing to the Management Interface IP.
$ scp <usr>@<site_specific_address>:/<path_to_iso>/
OracleLinux-7.5-x86_64-disc1.iso /var/occne/
OracleLinux-7.5-x86_64-disc1.iso

3. Install Docker onto the Installer Bootstrap Host
   Use YUM to install docker-ce onto the Installer Bootstrap Host. YUM should use the existing
   <customer_specific_repo_file>.repo in the /etc/yum.repos.d directory.
$ yum install docker-ce-18.06.1.ce-3.el7.x86_64



4. Set up access to the Docker Registry on the Installer Bootstrap Host
   1. Add an entry to the /etc/hosts file on the Installer Bootstrap Host to provide a name
      mapping for the docker registry, using the hosts.ini file fields occne_private_registry and
      occne_private_registry_address in OCCNE Inventory File Preparation.
      <occne_private_registry_address> <occne_private_registry>
      Example: 10.75.200.217 reg-1
2. Create the /etc/docker/daemon.json file on the Installer
Bootstrap Host. Add an entry for the insecure-registries for the
docker registry.
$ mkdir /etc/docker
$ vi /etc/docker/daemon.json
Enter the following:

{
  "insecure-registries": ["<occne_private_registry>:<occne_private_registry_port>"]
}

Example:

$ cat /etc/docker/daemon.json
{
  "insecure-registries": ["reg-1:5000"]
}

To Verify:

$ ping <occne_private_registry>

Example:

# ping reg-1
PING reg-1 (10.75.200.217) 56(84) bytes of data.
64 bytes from reg-1 (10.75.200.217): icmp_seq=1 ttl=61 time=0.248 ms
64 bytes from reg-1 (10.75.200.217): icmp_seq=2 ttl=61 time=0.221 ms
64 bytes from reg-1 (10.75.200.217): icmp_seq=3 ttl=61 time=0.239 ms
3. Create the docker service http-proxy.conf file.


$ mkdir -p /etc/systemd/system/docker.service.d/

$ vi /etc/systemd/system/docker.service.d/http-
proxy.conf

Add the following:

[Service]
Environment="NO_PROXY=<occne_private_registry_address>,<occne_private_registry>,127.0.0.1,localhost"

Example:

[Service]
Environment="NO_PROXY=10.75.200.217,reg-1,127.0.0.1,localhost"
4. Start the docker daemon
$ systemctl daemon-reload
$ systemctl restart docker
$ systemctl enable docker

Verify docker is running:


$ ps -elf | grep docker
$ systemctl status docker

5. Setup NFS on the Installer Bootstrap Host
   Run the following commands (assumes nfs-utils has already been installed in procedure:
   OCCNE Installation of Oracle Linux 7.5 on Bootstrap Host : Install Additional Packages).
   Note: The IP address used in the echo command is the Platform VLAN IP Address (VLAN 3)
   of the Bootstrap Host (RMS 1) as given in: OCCNE 1.0 Installation PreFlight Checklist :
   Complete Site Survey Host Table.
   $ echo '/var/occne 172.16.3.4/24(ro,no_root_squash)' >> /etc/exports
$ systemctl start nfs-server
$ systemctl enable nfs-server
Verify nfs is running:
$ ps -elf | grep nfs
$ systemctl status nfs-server
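
The following optional sanity check, not part of the formal procedure, lists the exports
published by the Bootstrap Host so that a typo in /etc/exports is caught before the OS install
container tries to mount /var/occne. The exportfs and showmount commands are standard
nfs-utils tools; the 172.16.3.4 address is the example Platform VLAN IP used above.

$ exportfs -v              # should list /var/occne with the (ro,no_root_squash) options written above
$ showmount -e 172.16.3.4  # shows the export table as a remote client would see it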

6. Set up the Boot Loader on the Installer Bootstrap Host
   Execute the following commands:
   $ mkdir -p /var/occne/pxelinux
   $ mount -t iso9660 -o loop /var/occne/OracleLinux-7.5-x86_64-disc1.iso /mnt
   $ cp /mnt/isolinux/initrd.img /var/occne/pxelinux
   $ cp /mnt/isolinux/vmlinuz /var/occne/pxelinux



7. Verify and Set the PXE Configuration File Permissions on the Installer Bootstrap Host
   Each file configured in the step above must be open for read and write permissions.
   $ chmod 777 /var/occne/pxelinux
   $ chmod 777 /var/occne/pxelinux/vmlinuz
   $ chmod 777 /var/occne/pxelinux/initrd.img

8. Disable DHCP and TFTP on the Installer Bootstrap Host
   The TFTP and DHCP services on the Installer Bootstrap Host may still be running. These
   services must be disabled.
   $ systemctl stop dhcpd
   $ systemctl disable dhcpd
   $ systemctl stop tftp
   $ systemctl disable tftp

9. Disable SELINUX
   SELINUX must be set to permissive mode. In order to successfully set the SELINUX mode, a
   reboot of the system is required. The getenforce command is used to determine the status of
   SELINUX.
   $ getenforce
   Enforcing
   If the output of this command displays Enforcing, change it to permissive by editing
   the /etc/selinux/config file.
   $ vi /etc/selinux/config
   Change the SELINUX variable to permissive:
   SELINUX=permissive
   Save the file.
   Reboot the system: reboot
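
   As an alternative to editing /etc/selinux/config by hand, the same change can be scripted.
   This is an optional sketch, not part of the formal procedure; it assumes the stock config file
   layout where the mode is set by a single SELINUX= line.

   $ sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config  # persistent setting, takes effect after reboot
   $ setenforce 0                                                           # optional: switch the running system to permissive immediately
   $ getenforce                                                             # should now report Permissive
   $ reboot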



10. Execute the OS Install on RMS2 from the Installer Bootstrap Host
    This step requires executing docker run for four different Ansible tags. An optional combined
    sketch of the yum_update, datastore, and hardening stages is provided after this table.
    Note: The initial OS install is performed from the OS install container running bash because
    the new kickstart configuration file must be copied over the existing configuration prior to
    executing the Ansible playbook.
    1. Run the docker command below to install the OS onto RMS2. This first command installs
       the OS while the subsequent commands (steps) set up the environment for yum repository
       support, datastore, and security. These commands must be executed in the order listed.
       This command can take up to 30 minutes to complete.

$ docker run -it --rm --network host --cap-add=NET_ADMIN \
  -v /var/occne/<cluster_name>/:/host -v /var/occne/:/var/occne:rw \
  <image_name>:<image_tag> bash

Example:

$ docker run -it --rm --network host --cap-add=NET_ADMIN \
  -v /var/occne/rainbow/:/host -v /var/occne/:/var/occne:rw \
  reg-1:5000/os_install:1.0.1 bash
2. From the container, copy the /var/occne/occne-ks.cfg.j2.new file
(which is mounted to the /host directory on the container) over
the existing /install/os-install/roles/pxe_config/templates/ks/
occne-ks.cfg.j2 file.
$ cp /host/occne-ks.cfg.j2.new /install/roles/
pxe_config/templates/ks/occne-ks.cfg.j2
3. Install the OS onto each host using the ansible command
indicated below. This command installs the OS while the
subsequent commands (steps) set up the environment for yum
repository support, datastore, and security. This command can
take up to 30 minutes to complete.
$ ansible-playbook -i /host/hosts.ini --become --become-user=root \
  --private-key /host/.ssh/occne_id_rsa /install/os-install.yaml \
  --limit <RMS2 db node from hosts.ini file>,localhost \
  --skip-tags "ol7_hardening,datastore,yum_update"

Example:

$ ansible-playbook -i /host/hosts.ini --become --become-user=root \
  --private-key /host/.ssh/occne_id_rsa /install/os-install.yaml \
  --limit db-2.rainbow.lab.us.oracle.com,localhost \
  --skip-tags "ol7_hardening,datastore,yum_update"
4. Configure db-2 management interface.


The <vlan_4_ip_address> is from OCCNE 1.0 Installation PreFlight Checklist : Complete Site
Survey Host IP Table.
The <ToRswitch_CNEManagementNet_VIP> is from OCCNE 1.0 Installation PreFlight
Checklist : ToR and Enclosure Switches Variables Table (Switch Specific).
$ scp /tmp/ifcfg-* root@<db-2 host_net address>:/
tmp
$ ssh root@<db-2 host_net address>

$sudo su
$ cd /etc/sysconfig/network-scripts/

$ cp /tmp/ifcfg-vlan ifcfg-team0.4
$ sed -i 's/{BRIDGE_NAME}/vlan4-br/g' ifcfg-
team0.4
$ sed -i 's/{PHY_DEV}/team0/g' ifcfg-team0.4
$ sed -i 's/{VLAN_ID}/4/g' ifcfg-team0.4
$ sed -i 's/{IF_NAME}/team0.4/g' ifcfg-team0.4
$ echo "BRIDGE=vlan4-br" >> ifcfg-team0.4

$ cp /tmp/ifcfg-bridge ifcfg-vlan4-br
$ sed -i 's/{BRIDGE_NAME}/vlan4-br/g' ifcfg-vlan4-
br
$ sed -i 's/DEFROUTE=no/DEFROUTE=yes/g' ifcfg-
vlan4-br
$ sed -i 's/{IP_ADDR}/<vlan_4_ip_address>/g'
ifcfg-vlan4-br
$ sed -i 's/{PREFIX_LEN}/29/g' ifcfg-vlan4-br
$ echo "GATEWAY=<ToRswitch_CNEManagementNet_VIP>"
>> ifcfg-vlan4-br

$ service network restart


5. Execute yum-update using docker.
   This step disables any .repo files currently existing in the /etc/yum.repos.d directory on
   RMS2 after the OS install. It then copies any .repo files in the /var/occne/<cluster_name>
   directory into /etc/yum.repos.d and sets up the customer repo access.
$ docker run --rm --network host --cap-
add=NET_ADMIN -v /var/occne/<cluster_name>/:/host
-v /var/occne/:/var/occne:rw -e "OCCNEARGS=--
limit <RMS2 db node from hosts.ini
file>,localhost --tags yum_update"
<image_name>:<image_tag>

Example:

$ docker run -it --rm --network host --cap-add=NET_ADMIN \
  -v /var/occne/rainbow/:/host -v /var/occne/:/var/occne:rw \
  -e "OCCNEARGS=--limit db-2.rainbow.lab.us.oracle.com,localhost --tags yum_update" \
  reg-1:5000/os_install:1.0.1
6. Check the /etc/yum.repos.d directory on RMS2 for non-disabled repo files. These files
   should be disabled. The only file that should be enabled is the customer specific .repo file
   that was set in the /var/occne/<cluster_name>/yum.repos.d directory on RMS1. If any of
   these files are not disabled then each file must be renamed as <filename>.repo.disabled.
$ cd /etc/yum.repos.d
$ ls

Check for any files other than the customer specific .repo file that are not listed as disabled.
If any exist, disable them using the following command:

$ mv <filename>.repo <filename>.repo.disabled
7. Execute datastore using docker.
   $ docker run --rm --network host --cap-add=NET_ADMIN \
     -v /var/occne/<cluster_name>/:/host -v /var/occne/:/var/occne:rw \
     -e "OCCNEARGS=--limit <RMS2 db node from hosts.ini file>,localhost --tags datastore" \
     <image_name>:<image_tag>

Example:

$ docker run -it --rm --network host --cap-add=NET_ADMIN \
  -v /var/occne/rainbow/:/host -v /var/occne/:/var/occne:rw \
  -e "OCCNEARGS=--limit db-2.rainbow.lab.us.oracle.com,localhost --tags datastore" \
  reg-1:5000/os_install:1.0.1
8. Execute the OL7 hardening using docker.
Note: The two extra-vars included in the command are not used
in the context of this command but need to be there to set the
values to something other than an empty string.
$ docker run --rm --network host --cap-
add=NET_ADMIN -v /var/occne/<cluster_name>/:/host
-v /var/occne/:/var/occne:rw -e "OCCNEARGS=--
limit <RMS2 db node from hosts.ini
file>,localhost --tags ol7_hardening --extra-vars
ansible_env=172.16.3.4 --extra-vars
http_proxy=172.16.3.4" <image_name>:<image_tag>

Example:

$ docker run -it --rm --network host --cap-add=NET_ADMIN \
  -v /var/occne/rainbow/:/host -v /var/occne/:/var/occne:rw \
  -e "OCCNEARGS=--limit db-2.rainbow.lab.us.oracle.com,localhost --tags ol7_hardening --extra-vars ansible_env=172.16.3.4 --extra-vars http_proxy=172.16.3.4" \
  reg-1:5000/os_install:1.0.1
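
The last three container invocations in the table above differ only in the Ansible tags passed
through OCCNEARGS. The sketch below is an optional convenience wrapper, not an
Oracle-delivered script; it assumes the same image, cluster directory, and target node used in
the examples, and simply repeats the documented docker run command per tag so the stages
execute in the documented order. The initial OS install (sub-steps 1 through 3 of step 10)
remains interactive and is not included here.

#!/bin/bash
# Hypothetical wrapper around the three post-install os_install stages documented above.
# Adjust CLUSTER, IMAGE, and TARGET to match the site hosts.ini before use.
set -e
CLUSTER=rainbow                          # directory name under /var/occne (from the examples above)
IMAGE=reg-1:5000/os_install:1.0.1        # os_install image in the site docker registry
TARGET=db-2.rainbow.lab.us.oracle.com    # RMS2 db node from the hosts.ini file

run_stage() {
  docker run --rm --network host --cap-add=NET_ADMIN \
    -v /var/occne/${CLUSTER}/:/host -v /var/occne/:/var/occne:rw \
    -e "OCCNEARGS=--limit ${TARGET},localhost $1" \
    "${IMAGE}"
}

run_stage "--tags yum_update"
run_stage "--tags datastore"
run_stage "--tags ol7_hardening --extra-vars ansible_env=172.16.3.4 --extra-vars http_proxy=172.16.3.4"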

Installation of the Bastion Host


This procedure details the steps necessary to install the Bastion Host onto RMS2 during initial
installation.

Prerequisites
1. Procedure OCCNE 1.0 - Installation Procedure : Install OL7 onto the Management Host
   has been completed.
2. All the host servers where this VM is created are captured in the OCCNE Inventory File
   Template.
3. Host names, IP addresses, and network information assigned to this VM are captured in the
   OCCNE 1.0 Installation PreFlight Checklist.
4. The Utility USB is available containing the necessary files as per: OCCNE 1.0 Installation
   PreFlight checklist : Miscellaneous Files.

Limitations and Expectations

1. All steps are executed from an SSH application (such as PuTTY) on a laptop with access to
   the Management Interface.
2. The OL7 Linux iso must be available either on RMS2 in /var/occne, or it can be obtained
   from the Customer Specific Repository, which is accessible via the Management Interface
   on RMS2 from a laptop (using WinSCP or some other application).

References
1. https://linux.die.net/man/1/virt-install
2. https://linuxconfig.org/how-to-create-and-manage-kvm-virtual-machines-from-cli
3. https://www.cyberciti.biz/faq/kvm-install-centos-redhat-using-kickstart-ks-cfg/
4. https://opensource.com/business/16/9/linux-users-guide-lvm

Steps to Install the Bastion Host


These procedures detail the steps required to install the Bastion Host (Management VM) onto
RMS2. All commands are executed from RMS2. RMS2 is accessible from RMS1 via SSH.


Table 3-9 Procedure to Install the Bastion Host

Step # Procedure Description


1. Login to RMS2 from RMS1
   Login using the admusr account and the private key generated when the OS Install was
   completed on RMS2. Sudo to root after logging in.
   $ ssh -i /var/occne/rainbow.lab.us.oracle.com/.ssh/occne_id_rsa admusr@172.16.3.5
$ sudo su -

2. Install Necessary RPMs
   Install the following RPMs onto RMS2.
   $ yum install qemu-kvm libvirt libvirt-python libguestfs-tools virt-install -y



3. Configure the Kickstart file
   The initial setup includes steps to configure the kickstart file and create the bridge necessary
   for the VM to network to the system hosts.
1. Mount the Utility USB.
Note: Instructions for mounting a USB in Linux are at: OCCNE
Installation of Oracle Linux 7.5 on Bootstrap Host : Install
Additional Packages. Only follow steps 1-4 to mount the USB.
2. Copy the kickstart file from the Utility USB to the /tmp directory as
bastion_host.ks on RMS2.
Note: The /tmp location is highly volatile and may be cleaned out
on reboot. It is strongly recommended to put this somewhere else for
safe keeping. It can always be downloaded again.
$ cp /media/usb/bastion_host.ks /tmp/bastion_host.ks
3. Update the kickstart file using the following commands to set the
following file variables:
a. BASTION_VLAN2_IP
b. BASTION_VLAN3_IP
c. BASTION_VLAN4_IP
d. BASTION_VLAN4_MASK
e. GATEWAYIP
f. NODEHOSTNAME
g. NTPSERVERIPS
h. NAMESERVERIPS
i. HTTP_PROXY
j. PUBLIC_KEY
Note: HTTP_PROXY in the commands below requires only the host portion of the URL, as the
http:// prefix is provided in the sed command. If a proxy is not needed, this variable must still
be set to something, as it cannot be left blank; in that case, set it to an unused IP address.
$ sed -i 's/GATEWAYIP/<gateway_ip>/g' /tmp/
bastion_host.ks
$ sed -i 's/BASTION_VLAN2_IP/
<bastion_vlan2_ip>/g' /tmp/bastion_host.ks
$ sed -i 's/BASTION_VLAN3_IP/
<bastion_vlan3_ip>/g' /tmp/bastion_host.ks
$ sed -i 's/BASTION_VLAN4_IP/
<bastion_vlan4_ip>/g' /tmp/bastion_host.ks
$ sed -i 's/BASTION_VLAN4_MASK/
<bastion_vlan4_mask>/g' /tmp/bastion_host.ks
$ sed -i 's/NODEHOSTNAME/<node_host_name>/g' /tmp/
bastion_host.ks
$ sed -i 's/NAMESERVERIPS/<nameserver_ip>/g' /tmp/
bastion_host.ks
$ sed -i 's/NTPSERVERIPS/
<ToRswitch_Platform_VIP>/g' /tmp/bastion_host.ks


$ sed -i 's/HTTP_PROXY/http:\/\/
<http_proxy>/g' /tmp/bastion_host.ks
$ sed -e '/PUBLIC_KEY/{' -e 'r /home/admusr/.ssh/
authorized_keys' -e 'd' -e '}' -i /tmp/
bastion_host.ks
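
Before using the kickstart file, it can be worth confirming that no placeholder tokens were left
unsubstituted, for example if one of the sed commands above was skipped. This is an optional
check using grep against the placeholder names listed above; it is not part of the formal
procedure.

$ grep -nE 'BASTION_VLAN|GATEWAYIP|NODEHOSTNAME|NTPSERVERIPS|NAMESERVERIPS|HTTP_PROXY|PUBLIC_KEY' /tmp/bastion_host.ks \
    && echo "WARNING: unsubstituted placeholders remain" \
    || echo "all placeholders replaced"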

4. Configure Networking
   The networking required to interface with the Bastion Host is handled by executing the
   following command set:
$ sudo su
$ cd /etc/sysconfig/network-scripts/

$ sed -i '/IPADDR/d' ifcfg-team0


$ sed -i '/PREFIX/d' ifcfg-team0
$ sed -i '/GATEWAY/d' ifcfg-team0
$ sed -i '/DEFROUTE="yes"/d' ifcfg-team0
$ echo "BRIDGE=teambr0" >> ifcfg-team0

$ cp /tmp/ifcfg-bridge ifcfg-teambr0
$ sed -i 's/{BRIDGE_NAME}/teambr0/g' ifcfg-teambr0
$ sed -i 's/{IP_ADDR}/172.16.3.5/g' ifcfg-teambr0
$ sed -i 's/{PREFIX_LEN}/24/g' ifcfg-teambr0
$ sed -i '/NM_CONTROLLED/d' ifcfg-teambr0

$ cp /tmp/ifcfg-vlan ifcfg-team0.2
$ sed -i 's/{BRIDGE_NAME}/vlan2-br/g' ifcfg-team0.2
$ sed -i 's/{PHY_DEV}/team0/g' ifcfg-team0.2
$ sed -i 's/{VLAN_ID}/2/g' ifcfg-team0.2
$ sed -i 's/{IF_NAME}/team0.2/g' ifcfg-team0.2
$ echo "BRIDGE=vlan2-br" >> ifcfg-team0.2

$ cp /tmp/ifcfg-bridge ifcfg-vlan2-br
$ sed -i 's/{BRIDGE_NAME}/vlan2-br/g' ifcfg-vlan2-br
$ sed -i 's/{IP_ADDR}/192.168.20.12/g' ifcfg-vlan2-br
$ sed -i 's/{PREFIX_LEN}/24/g' ifcfg-vlan2-br

$ service network restart
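
A quick way to confirm the resulting interface layout before moving on is shown below. This
is an optional check; the interface names (team0, team0.2, teambr0, vlan2-br) are the ones
created by the commands above.

$ ip -br addr show    # teambr0 and vlan2-br should hold the IP addresses; team0 and team0.2 should be UP with no address
$ bridge link show    # team0 should appear as a port of teambr0, team0.2 as a port of vlan2-br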



5. Copy and Mount the Oracle Linux ISO
   1. Create the /var/occne directory on RMS2 if not already existing.
      $ mkdir /var/occne
2. Verify the OL7 iso file is available from the previous procedure:
OCCNE Install Host OS onto RMS2 from the Installer Bootstrap
Host - RMS1 in the /var/occne directory. From RMS2, SCP the
Oracle Linux ISO from RMS1 into the /var/occne directory on
RMS2 and verify the permissions are set to 0644. The file should be
in the /var/occne directory on RMS1. If the file is not on RMS1 it
must be downloaded from the customer specific site where the OL is
maintained onto RMS2.
Note: The example below uses OracleLinux-7.5-x86_64-
disc1.iso. If the user copies this ISO from their laptop then they
must use an application like WinSCP pointing to the Management
Interface IP.
$ scp root@172.16.3.4:/var/occne/
<iso_file_name>.iso /var/occne/.
$ chmod 644 /var/occne/<iso_file_name>.iso



6. Update the qemu.conf File
   1. Un-comment the user and group fields in the /etc/libvirt/qemu.conf file on RMS2.
      $ vim /etc/libvirt/qemu.conf

      Update fields:
      # Some examples of valid values are:
      #
      #   user = "qemu"   # A user named "qemu"
      #   user = "+0"     # Super user (uid=0)
      #   user = "100"    # A user named "100" or a user with uid=100
      #
      user = "root"

      # The group for QEMU processes run by the system instance. It can be
      # specified in a similar way to user.
      group = "root"
   2. Restart the libvirtd service on RMS2.
      Note: After the restart the service should become enabled. If an error similar to the
      following is displayed, it can be ignored for now. A bug story has been opened to address
      this in a later release.
      Jun 01 16:13:14 db-2.odyssey.morrisville.us.lab.oracle.com systemd[1]: Starting Virtualization daemon...
      Jun 01 16:13:14 db-2.odyssey.morrisville.us.lab.oracle.com systemd[1]: Started Virtualization daemon.
      Jun 01 16:13:15 db-2.odyssey.morrisville.us.lab.oracle.com dnsmasq[39538]: read /etc/hosts - 2 addresses
      Jun 01 16:13:15 db-2.odyssey.morrisville.us.lab.oracle.com dnsmasq[39538]: failed to load names from /var/lib/libvirt/dnsmasq/default.addnhosts: P...enied
      Jun 01 16:13:15 db-2.odyssey.morrisville.us.lab.oracle.com dnsmasq[39538]: cannot read /var/lib/libvirt/dnsmasq/default.hostsfile: Permission denied
      Hint: Some lines were ellipsized
$ systemctl daemon-reload
$ systemctl restart libvirtd
$ systemctl enable libvirtd

To Verify:
$ systemctl status libvirtd



7. Create the Bastion Host VM
   1. Execute the virt-install command on RMS2.
      $ virt-install --name bastion_host --memory 8192 --vcpus 2 \
          --metadata description="Bastion Host" \
          --autostart --location /var/occne/OracleLinux-7.5-x86_64-disc1.iso \
          --initrd-inject=/tmp/bastion_host.ks --os-variant ol7.5 \
          --extra-args "ks=file:/bastion_host.ks console=tty0 console=ttyS0,115200" \
          --disk path=/var/lib/libvirt/images/bastion_host.qcow2,size=300 \
          --network bridge=teambr0 --network bridge=vlan2-br --network bridge=vlan4-br \
          --graphics none
2. After the VM creation completes, the login prompt appears which
allows the user to login to the Bastion Host.
   3. After logging out of the VM, press CTRL + 5 to exit from the virsh console.

8. Un-mount the Utility USB
   Use the umount command to un-mount the Utility USB and remove it from the USB port.
$ umount /media/usb
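
Once the VM is created, the standard libvirt tooling can be used from RMS2 to check and
manage it. These are generic virsh commands shown as an optional reference; the VM name
bastion_host matches the --name used in the virt-install command above.

$ virsh list --all                 # bastion_host should show as "running"
$ virsh dominfo bastion_host       # memory, vCPU and state summary
$ virsh autostart bastion_host     # confirm/enable autostart (also set by --autostart above)
$ virsh console bastion_host       # reattach to the serial console; exit again with CTRL + 5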

Configuration of the Bastion Host


Introduction
This procedure details the steps necessary to configure the Bastion Host onto RMS2 during
initial installation. This VM is used for host provisioning, MySQL Cluster, and installing the
hosts with kubernetes and the common services.

Prerequisites
1. Procedure OCCNE Installation of the Bastion Host has been completed.
2. All the host servers where this VM is created are captured in OCCNE Inventory File
   Preparation.
3. Host names, IP addresses, and network information assigned to this VM are captured in the
   OCCNE 1.0 Installation PreFlight Checklist.
4. Yum repository mirror is set up and accessible by the Bastion Host.
5. Http server is set up and has the kubernetes binaries and helm charts at an address that is
   accessible by the Bastion Host.
6. Docker registry is set up at an address that is reachable by the Bastion Host.
7. This document assumes that an apache http server (as part of the mirror creation) is created
   outside of the Bastion Host and serves the yum mirror, helm charts, and Kubernetes binaries.
   (This can be different, so the directories to which static content is copied on the Bastion
   Host must be verified before starting the rsync procedure.)

Limitations and Expectations
All steps are executed from an SSH application (such as PuTTY) on a laptop with access to the
Management Interface.

References
1. https://docs.docker.com/registry/deploying/
2. https://computingforgeeks.com/how-to-configure-ntp-server-using-chrony-on-rhel-8/
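
Before starting the procedure below, it can save time to confirm that the external yum mirror,
http server, and docker registry listed in the prerequisites are reachable from the Bastion
Host. The following is an optional, hedged sketch; the addresses and paths are placeholders to
be replaced with the site-specific values, and the /v2/_catalog path is the standard Docker
Registry v2 listing endpoint.

$ curl -sf http://<yum_mirror_address>/yum/ -o /dev/null && echo "yum mirror reachable"
$ curl -sf http://<http_server_address>/helm/ -o /dev/null && echo "helm/binaries http server reachable"
$ curl -sf http://<docker_registry_address>:<port>/v2/_catalog && echo "docker registry reachable"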

Procedure
These procedures detail the steps required to configure the existing Bastion Host (Management
VM).

Table 3-10 Procedure to configure Bastion Host

Step # Procedure Description
1. Create the /var/occne/<cluster_name> directory on the Bastion Host
   Create the directory using the occne_cluster_name variable contained in the hosts.ini file.
   $ mkdir /var/occne
   $ mkdir /var/occne/<cluster_name>
2. Copy the hosts.ini file to the /var/occne/<cluster_name> directory
   Copy the hosts.ini file (created using procedure: OCCNE Inventory File Preparation) into
   the /var/occne/<cluster_name>/ directory from RMS1 (this procedure assumes the same
   hosts.ini file is being used here as was used to install the OS onto RMS2 from RMS1. If not,
   then the hosts.ini file must be retrieved from the Utility USB mounted onto RMS2 and copied
   from RMS2 to the Bastion Host).
   This hosts.ini file defines each host to the OS Installer Container running the os-install image
   downloaded from the repo.
   $ scp root@172.16.3.4:/var/occne/<cluster_name>/hosts.ini /var/occne/<cluster_name>/hosts.ini

   The current sample hosts.ini file requires a "/" to be added to the entry for the
   occne_helm_images_repo. Using vim (or vi), edit the hosts.ini file and add the "/" to the
   occne_helm_images_repo entry.
   occne_helm_images_repo='bastion-1:5000 -> occne_helm_images_repo='bastion-1:5000/

3. Check and Disable Firewall
   Check the status of the firewall. If active, then disable it.
   $ systemctl status firewalld
   $ systemctl stop firewalld
   $ systemctl disable firewalld

To verify:
$ systemctl status firewalld

4. Set up Binaries, Helm Charts and Docker Registry on Bastion Host VM
   1. Create the local YUM repo mirror file in /etc/yum.repos.d and add the docker repo mirror.
      Follow procedure: OCCNE Artifact Acquisition and Hosting.
   2. Disable the public repo.
      $ mv /etc/yum.repos.d/public-yum-ol7.repo /etc/yum.repos.d/public-yum-ol7.repo.disabled

      Install necessary packages from the yum mirror on the Bastion Host:
      $ yum install rsync
      $ yum install createrepo yum-utils
      $ yum install docker-ce-18.06.1.ce-3.el7.x86_64
      $ yum install nfs-utils
      $ yum install httpd
      $ yum install chrony -y

Install curl with http2 support:


Get curl on server accessible by Bastion Host
$ mkdir curltar
$ cd curltar
$ wget https://curl.haxx.se/download/curl-7.63.0.tar.gz --
no-check-certificate

Login to Bastion Host and run the following commands:


Create a temporary directory on the bastion host. It does
not really matter where this directory is created but it
must have read/write/execute privileges.
$ mkdir /var/occne/<cluster_name>/tmp
$ yum install -y nghttp2
$ rsync -avzh <login-username>@<IP address of server with
curl tar>:curltar /var/occne/<cluster_name>/tmp
$ cd /var/occne/<cluster_name>/tmp
$ tar xzf curl-7.63.0.tar.gz
$ rm -f curl-7.63.0.tar.gz
$ cd curl-7.63.0
$ ./configure --with-nghttp2 --prefix=/usr/local --with-ssl
$ make && sudo make install
$ sudo ldconfig
3. Copy the yum mirror contents from the remote server where the yum mirror
is deployed, this can be done in the following way:
Get the ip address of the yum mirror from the yum repo file.
Create an apache http server on Bastion host.
$ systemctl start httpd
$ systemctl enable httpd
$ systemctl status httpd
4. Retrieve the latest rpm's from the yum mirror to /var/www/yum on the
Bastion host using reposync:
Run following repo sync commands to get latest packages on Bastion host


$ reposync -g -l -d -m --repoid=local_ol7_x86_64_addons --
newest-only --download-metadata --download_path=/var/www/
html/yum/OracleLinux/OL7/
$ reposync -g -l -d -m --repoid=local_ol7_x86_64_UEKR5 --
newest-only --download-metadata --download_path=/var/www/
html/yum/OracleLinux/OL7/
$ reposync -g -l -d -m --repoid=local_ol7_x86_64_developer
--newest-only --download-metadata --download_path=/var/www/
html/yum/OracleLinux/OL7/
$ reposync -g -l -d -m --
repoid=local_ol7_x86_64_developer_EPEL --newest-only --
download-metadata --download_path=/var/www/html/yum/
OracleLinux/OL7/
$ reposync -g -l -d -m --repoid=local_ol7_x86_64_ksplice --
newest-only --download-metadata --download_path=/var/www/
html/yum/OracleLinux/OL7/
$ reposync -g -l -d -m --repoid=local_ol7_x86_64_latest --
newest-only --download-metadata --download_path=/var/www/
html/yum/OracleLinux/OL7/

After the above execution, you will be able to see the directory structure with all the repo ids
in /var/www/html/yum/OracleLinux/OL7/. Rename the repositories in the OL7/ directory.
Note: download_path can be changed according to the folder structure required. Change the
names of the copied over folders to match the base url.
$ cd /var/www/html/yum/OracleLinux/OL7/
$ mv local_ol7_x86_64_addons addons
$ mv local_ol7_x86_64_UEKR5 UEKR5
$ mv local_ol7_x86_64_developer developer
$ mv local_ol7_x86_64_developer_EPEL developer_EPEL
$ mv local_ol7_x86_64_ksplice ksplice
$ mv local_ol7_x86_64_latest latest

Run following createrepo commands to create repo data for each repository
channel on Bastion host yum mirror:
$ createrepo -v /var/www/html/yum/OracleLinux/OL7/addons
$ createrepo -v /var/www/html/yum/OracleLinux/OL7/UEKR5
$ createrepo -v /var/www/html/yum/OracleLinux/OL7/developer
$ createrepo -v /var/www/html/yum/OracleLinux/OL7/
developer_EPEL
$ createrepo -v /var/www/html/yum/OracleLinux/OL7/ksplice
$ createrepo -v /var/www/html/yum/OracleLinux/OL7/latest
5. Get Docker-ce and gpg key from mirror
Execute the following rsync command:
$ rsync -avzh <login-username>@<IP address of repo
server>:<centos folder directory path> /var/www/html/yum/
6. Create http repository configuration to retrieve Kubernetes binaries and helm
binaries/charts on Bastion Host if on a different server
Given that the Kubernetes binaries have been created outside of bastion host
as part of the procedure of setting up artifacts and repositories, kubernetes/


helm binaries and helm charts have to be copied using rsync command to the
bastion host. Example below should copy all of the contents from a folder to
the static content render folder of the http server on the bastion host:
$ rsync -avzh <login-username>@<IP address of repo
server>:<copy from directory address> /var/www/html

Note: Above is an example directory for an apache folder, if there is another


http server running, the directory may be different
7. Setup Helm and initiate on Bastion Host
Get Helm version on server accessible by Bastion Host
$ mkdir helmtar
$ cd helmtar
$ wget https://storage.googleapis.com/kubernetes-helm/helm-
v2.9.1-linux-amd64.tar.gz

Login to Bastion Host and run the following commands:


Create a temporary directory on the bastion host. It does
not really matter where this directory is created but it
must have read/write/execute privileges.
$ mkdir /var/occne/<cluster_name>/tmp1
$ rsync -avzh <login-username>@<IP address of repo
server>:helmtar /var/occne/<cluster_name>/tmp1
$ cd /var/occne/<cluster_name>/tmp1
$ tar -xvf helm-v2.9.1-linux-amd64.tar.gz
$ rm -f helm-v2.9.1-linux-amd64.tar.gz
$ mv linux-amd64 helm
$ cd helm

# Run the following command in the charts directory of the


http server on bastion host to create index.yaml file so
that helm chart can be initialized
$ helm repo index
<path_to_helm_charts_directory_bastion_host>
# initialize helm
$ ./helm init --client-only --stable-repo-url
<bastion_host_occne_helm_stable_repo_url>

5. Create a docker registry on Bastion Host
   1. Pull the registry image from the Docker registry onto the Bastion Host to run a registry
      locally. Add the server registry IP and port to the /etc/docker/daemon.json file. Create the
      file if not currently existing.
      {
        "insecure-registries" : ["<server_docker_registry_address>:<port>"]
      }
2. Start docker: Start the docker daemon.
$ systemctl daemon-reload
$ systemctl restart docker
$ systemctl enable docker

Verify docker is running:


$ ps -elf | grep docker
$ systemctl status docker

While creating the docker registry on a server outside of the bastion host,
there is no tag added to the registry image and the image is also not added to
the docker registry repository of that server. Manually tag the registry image
and push it as one of the repositories on the docker registry server:
$ docker tag registry:<tag> <docker_registry_address>:<port>/registry:<tag>
3. Push the tagged registry image customer to docker registry repository on
server accessible by Bastion Host:
$ docker push <docker_registry_address>:<port>/
registry:<tag>
4. Login into Bastion host and pull the registry image onto Bastion Host from
customer registry setup on server outside of bastion host
$ docker pull --all-tags <docker_registry_address>:<port>/
registry
5. Run Docker registry on Bastion Host
$ docker run -d -p 5000:5000 --restart=always --name
registry registry:<tag>

This runs the docker registry local to Bastion host on port 5000.
6. Get docker images from docker registry to Bastion Host docker registry
Pull all the docker images from Docker Repository Requirements to the local
Bastion Host repository:
$ docker pull --all-tags <docker_registry_address>:<port>/
<image_names_from_attached_list>

Note: If following error is encountered during the pull of images "net/http:


request canceled (Client.Timeout exceeded while awaiting headers)" from the
internal docker registry, edit http-proxy.conf and add the docker registry
address to NO_PROXY environment variable


7. Tag Images
$ docker tag <docker_registry_address>:<port>/
<imagename>:<tag>
<bastion_host_docker_registry_address>:<port>/
<image_names_from_attached_list>

Example:
$ docker tag 10.75.207.133:5000/jaegertracing/jaeger-
collector:1.9.0 10.75.216.125:5000/jaegertracing/jaeger-
collector
8. Push the images to local Docker Registry created on the Bastion host
Create a daemon.json file in /etc/docker directory and add the following to it:
{
"insecure-registries" :
["<bastion_host_docker_registry_address>:<port>"]
}

Restart docker:

$ systemctl daemon-reload
$ systemctl restart docker
$ systemctl enable docker

To verify:
$ systemctl status docker
$ docker push
<bastion_host_docker_registry_address>:<port>/
<image_names_from_attached_list>
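
Sub-steps 6 through 8 above repeat the same pull/tag/push sequence for every image. The loop
below is an optional convenience sketch, not an Oracle-delivered script. It assumes a
plain-text file (images.txt, one <image>:<tag> per line, taken from the Docker Repository
Requirements list), and the two registry addresses are placeholders to be replaced with the
site-specific values.

#!/bin/bash
# Hypothetical helper to mirror a list of images into the Bastion Host registry.
SRC_REG=<docker_registry_address>:<port>             # site registry holding the images
DST_REG=<bastion_host_docker_registry_address>:5000  # local registry started earlier in this step

while read -r IMG; do
  [ -z "${IMG}" ] && continue          # skip blank lines in images.txt
  docker pull "${SRC_REG}/${IMG}"
  docker tag  "${SRC_REG}/${IMG}" "${DST_REG}/${IMG}"
  docker push "${DST_REG}/${IMG}"
done < images.txt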

6. Setup NFS on the Bastion Host
   Run the following commands:
   $ echo '/var/occne 172.16.3.100/24(ro,no_root_squash)' >> /etc/exports
   $ systemctl start nfs-server
   $ systemctl enable nfs-server

Verify nfs is running:


$ ps -elf | grep nfs
$ systemctl status nfs-server

7. Setup the Bastion Host to clock off the ToR Switch
   The ToR acts as the NTP source for all hosts.
   Update the chrony.conf file with the source NTP server by adding the VIP address of the ToR
   switch from: OCCNE 1.0 Installation PreFlight Checklist : Complete OA and Switch IP
   SwitchTable as the NTP source.
   $ vim /etc/chrony.conf

Add the following line at the end of the file:


server 172.16.3.1

chrony was installed in the first step of this procedure. Enable the service.
$ systemctl enable --now chronyd
$ systemctl status chronyd


Execute the chronyc sources -v command to display the current status of NTP on
the Bastion Host. The S field should be set to * indicating NTP sync.
$ chronyc sources -v
210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 172.16.3.1                    4   9   377   381  -1617ns[ +18us] +/-   89ms

Edit the /var/occne/<cluster_name>/hosts.ini file to include the ToR Switch IP as the NTP
server host.
$ vim /var/occne/<cluster_name>/hosts.ini

Change field: ntp_server='<ToR Switch IP>'
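
In addition to chronyc sources -v shown above, chronyc tracking gives a one-screen summary of
the current offset and stratum, and a quick grep confirms the hosts.ini edit took effect. Both
are optional checks, not part of the formal procedure.

$ chronyc tracking                                      # Reference ID should be 172.16.3.1, Leap status "Normal"
$ grep ntp_server /var/occne/<cluster_name>/hosts.ini   # should show ntp_server='<ToR Switch IP>'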


Software Installation Procedures - Automated Installation

Using either of the management hosts created in the previous procedures, the installer performs
a sequence of procedures to complete the automated installation of the CNE environment.
The following procedures provide the necessary steps to install the different images required
during OCCNE Software Installation.

Oracle Linux OS Installer


This procedure details the steps required to configure the bare metal host servers using the
OCCNE os_install image, running within an os-install docker container, which provides a PXE
based installer pre-configured to aid in the provisioning of the required hosts. This procedure
requires the use of an inventory file (hosts.ini) which provides the installer with all the
necessary information about the cluster (an illustrative fragment is shown below).
These procedures provide the steps required to install the OL7 image onto all hosts via the
Bastion Host using an occne/os_install container. Once completed, all hosts include all
necessary rpm updates and tools necessary to run the k8-install procedure.
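
For orientation, the fragment below illustrates the kind of entries the hosts.ini inventory
carries. It is a simplified illustration assembled from the example values used elsewhere in
this chapter, not a complete or authoritative template; in particular, the [all:vars] group name
is a standard Ansible convention assumed here, and the full set of groups and variables comes
from OCCNE Inventory File Preparation.

[host_hp_gen_10]
k8s-1.rainbow.lab.us.oracle.com ansible_host=172.16.3.6 ilo=192.168.20.123 mac=48-df-37-7a-41-60
db-2.rainbow.lab.us.oracle.com  ansible_host=172.16.3.5 ilo=192.168.20.122 mac=48-df-37-7a-40-40

[all:vars]
occne_cluster_name=rainbow.lab.us.oracle.com
ntp_server='172.16.3.1'
occne_private_registry=registry
occne_private_registry_address='10.75.207.133'
occne_private_registry_port=5000
occne_k8s_binary_repo='http://10.75.207.133/binaries/'
occne_helm_stable_repo_url='http://10.75.207.133/helm/'
occne_helm_images_repo='10.75.207.133:5000/'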
Prerequisites:
1. All procedures in OCCNE Installation of the Bastion Host are complete.
2. The Utility USB is available containing the necessary files as per: OCCNE 1.0 Installation
PreFlight checklist : Miscellaneous Files.

Limitations and Expectations
All steps are executed from an SSH application (such as PuTTY) on a laptop with access to the
Management Interface.
References
https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html


Table 3-11 Procedure to run the auto OS-installer container

Step # Procedure Description


1. Initial Configuration on the Bastion Host to Support the OS Install
   This procedure provides the steps for creating directories and copying all supporting files to
   the appropriate directories on the Bastion Host so that the OS Install Container successfully
   installs OL7 onto each host.
   Note: The cluster_name field is derived from the hosts.ini file field: occne_cluster_name.
1. Log into the Bastion Host using the IP supplied from: OCCNE 1.0
Installation PreFlight Checklist : Complete VM IP Table
2. Create the directories needed on the Bastion Host.
$ mkdir /var/occne/<cluster_name>/yum.repos.d
3. Update the repository fields in the hosts.ini file to reflect the changes
from procedure: OCCNE Configuration of the Bastion Host . The
fields listed must reflect the new Bastion Host IP (172.16.3.100) and
the names of the repositories.
$ vim /var/occne/<cluster_name>/hosts.ini

Update the following fields with the new values from


the configuration of the Bastion Host.
ntp_server
occne_private_registry
occne_private_registry_address
occne_private_registry_port
occne_k8s_binary_repo
occne_helm_stable_repo_url
occne_helm_images_repo
docker_rh_repo_base_url
docker_rh_repo_gpgkey

Comment out the following lines:


#http_proxy=<proxy_url>
#https_proxy=<proxy_url>

Example:
ntp_server='172.16.3.1'
occne_private_registry=registry
occne_private_registry_address='10.75.207.133'
occne_private_registry_port=5000
occne_k8s_binary_repo='http://10.75.207.133/
binaries/'
occne_helm_stable_repo_url='http://10.75.207.133/
helm/'
occne_helm_images_repo='10.75.207.133:5000/'
docker_rh_repo_base_url=http://10.75.207.133/yum/
centos/7/updates/x86_64/
docker_rh_repo_gpgkey=http://10.75.207.133/yum/
centos/RPM-GPG-CENTOS



2. Retrieve the updated version of the Kickstart Configuration file
   The initial install of the OS requires an updated version of the kickstart configuration file.
   This file is maintained on the Oracle OHC software download site and must be copied to the
   appropriate folder prior to executing the initial OS install steps below.
   1. Mount the Utility USB on RMS2.
      Note: Instructions for mounting a USB in Linux are at: OCCNE Installation of Oracle
      Linux 7.5 on Bootstrap Host : Install Additional Packages. Only follow steps 1-4 to mount
      the USB.
2. Copy the kickstart file from the Utility USB to the /tmp on RMS2.
$ cp /media/usb/occne-ks.cfg.j2.new /tmp/occne-
ks.cfg.j2.new
3. On the Bastion Host, copy the kickstart configuration file from
RMS2 to the Bastion Host.
$ scp root@172.16.3.5:/tmp/occne-ks.cfg.j2.new /var/
occne/<clust_name>/occne-ks.cfg.j2.new

3. Copy the OL7 ISO to the Bastion Host
   The iso file is normally accessible from a Customer Site Specific repository. It is accessible
   because the ToR switch configurations were completed in procedure: OCCNE Configure Top
   of Rack 93180YC-EX Switches. For this procedure the file has already been copied to
   the /var/occne directory on RMS2 and can be copied to the same directory on the Bastion
   Host.
   Copy the OL7 ISO file from RMS2 to the /var/occne directory. The example below uses
   OracleLinux-7.5-x86_64-disc1.iso.
   Note: If the user copies this ISO from their laptop then they must use an application like
   WinSCP pointing to the Management Interface IP.
$ scp root@172.16.3.5:/var/occne/OracleLinux-7.5-x86_64-
disc1.iso /var/occne/OracleLinux-7.5-x86_64-disc1.iso

4. Set up the Boot Loader on the Bastion Host
   Execute the following commands:
   Note: The iso can be unmounted after the files have been copied, if the user wishes to do so,
   using the command: umount /mnt.

$ mkdir -p /var/occne/pxelinux
$ mount -t iso9660 -o loop /var/occne/OracleLinux-7.5-
x86_64-disc1.iso /mnt
$ cp /mnt/isolinux/initrd.img /var/occne/pxelinux
$ cp /mnt/isolinux/vmlinuz /var/occne/pxelinux

5. Verify and Set the PXE Configuration File Permissions on the Bastion Host
   Each file configured in the step above must be open for read and write permissions.
   $ chmod 777 /var/occne/pxelinux
   $ chmod 777 /var/occne/pxelinux/vmlinuz
   $ chmod 777 /var/occne/pxelinux/initrd.img



6. Copy and Update .repo files
   1. The customer specific Oracle Linux .repo file on the bastion_host must be copied to
      the /var/occne/<cluster_name>/yum.repos.d directory and updated to reflect the URL to
      the bastion host. This file is transferred to the /etc/yum.repos.d directory on the host by
      ansible after the host has been installed but before the actual yum update is performed.
      $ cp /etc/yum.repos.d/<customer_OL7_specific.repo> /var/occne/<cluster_name>/yum.repos.d/.
   2. Edit each .repo file in the /var/occne/<cluster_name>/yum.repos.d directory and update
      the baseurl IP of the repo to reflect the IP of the bastion_host.
      $ vim /var/occne/<cluster_name>/yum.repos.d/<repo_name>.repo

Example:

[local_ol7_x86_64_UEKR5]
name=Unbreakable Enterprise Kernel Release 5 for
Oracle Linux 7 (x86_64)
baseurl=http://10.75.155.195/yum/OracleLinux/OL7/
UEKR5/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
enabled=1
proxy=_none_

Change the IP address of the baseurl IP: 10.75.155.195 to the bastion host ip: 172.16.3.100.

The URL may have to change based on the configuration of the customer repos. That cannot
be indicated in this procedure.



7. Execute the OS Install on the Hosts from the Bastion Host
   This procedure requires executing docker run for four different Ansible tags.
   Note: The <image_name>:<image_tag> represents the images in the docker image registry
   accessible by the Bastion Host.
Note: The initial OS install is performed from the OS install container
running bash because the new kickstart configuration file must be copied
over the existing configuration prior to executing the Ansible playbook.
Note: The <limit_filter> value used in the commands below is based on
the settings in the hosts.ini file. See reference 1 above. An example used
in this procedure is host_hp_gen_10[0:7]. The example hosts.ini file
includes a grouping referred to as host_hp_gen_10. This can be considered
an array of hosts with the 0:7 indicating the hosts (or indexes into that
array) to include in the execution of the ansible commands listed below. In
this example, the hosts would be all the k8s nodes (RMS3-5 and blades
1-4), db-1 (RMS1) and not include db-2 (RMS2). The settings at a
customer site are specific to the site hosts.ini file and those used in this
procedure are presented here as an example only. An example is illustrated
below:
Example section from a hosts.ini file:
[host_hp_gen_10]
k8s-1.rainbow.lab.us.oracle.com ansible_host=172.16.3.6  ilo=192.168.20.123 mac=48-df-37-7a-41-60
k8s-2.rainbow.lab.us.oracle.com ansible_host=172.16.3.7  ilo=192.168.20.124 mac=48-df-37-7a-2f-60
k8s-3.rainbow.lab.us.oracle.com ansible_host=172.16.3.8  ilo=192.168.20.125 mac=48-df-37-7a-2f-70
k8s-4.rainbow.lab.us.oracle.com ansible_host=172.16.3.11 ilo=192.168.20.141 mac=d0-67-26-b1-8c-50
k8s-5.rainbow.lab.us.oracle.com ansible_host=172.16.3.12 ilo=192.168.20.142 mac=d0-67-26-ac-4a-30
k8s-6.rainbow.lab.us.oracle.com ansible_host=172.16.3.13 ilo=192.168.20.143 mac=d0-67-26-c8-88-30
k8s-7.rainbow.lab.us.oracle.com ansible_host=172.16.3.14 ilo=192.168.20.144 mac=20-67-7c-08-94-40
db-1.rainbow.lab.us.oracle.com  ansible_host=172.16.3.4  ilo=192.168.20.121 mac=48-df-37-7a-41-50
db-2.rainbow.lab.us.oracle.com  ansible_host=172.16.3.5  ilo=192.168.20.122 mac=48-df-37-7a-40-40
In the above example, host_hp_gen_10[0:7] would be all of the above hosts except for db-2
(RMS2), which is index 8 starting at index 0. So using a limit filter of [0:7] would install all
hosts except RMS2. A sketch showing how to preview which hosts a limit filter selects is
provided after this table.
1. Run the docker command below to create a container running bash.
This command must include the -it option and the bash executable at
the end of the command. After execution of this command the user
prompt will be running within the container.
$ docker run -it --rm --network host --cap-
add=NET_ADMIN -v /var/occne/<cluster_name>/:/host -
v /var/occne/:/var/occne:rw <image_name>:<image_tag>
bash


Example:

$ docker run -it --rm --network host --cap-add=NET_ADMIN \
  -v /var/occne/rainbow.lab.us.oracle.com/:/host -v /var/occne/:/var/occne:rw \
  10.75.200.217:5000/os_install:1.0.1 bash
2. From the container, copy the /var/occne/occne-ks.cfg.j2.new file
(which is mounted to the /host directory on the container) over the
existing /install/os-install/roles/pxe_config/templates/ks/occne-
ks.cfg.j2 file.
$ cp /host/occne-ks.cfg.j2.new /install/roles/pxe_config/templates/ks/occne-ks.cfg.j2
3. Install the OS onto each host using the ansible command indicated
below. This command installs the OS while the subsequent
commands (steps) set up the environment for yum repository support,
datastore, and security. This command can take up to 30 minutes to
complete.
$ ansible-playbook -i /host/hosts.ini --become --
become-user=root --private-key /host/.ssh/
occne_id_rsa /install/os-install.yaml --limit
<limit_filter>,localhost --skip-tags
"ol7_hardening,datastore,yum_update"

Example:

$ ansible-playbook -i /host/hosts.ini --become --become-user=root \
  --private-key /host/.ssh/occne_id_rsa /install/os-install.yaml \
  --limit host_hp_gen_10[0:7],localhost \
  --skip-tags "ol7_hardening,datastore,yum_update"

Note: This ansible task times out in 35 minutes. If a timeout condition occurs on any set of
the given hosts (usually just the blades indicated as k8s-4 through k8s-7), this can be caused
by the Linux boot process taking too long to complete the reboot at the end of the
installation. This does not mean the install failed. If the following conditions appear, the
blades can be rebooted to force up the login prompt and the installation process can continue.
The install task display may look something like the following, showing 1 or more blades (in
this example k8s-4 and k8s-5 failed to complete the task due to the lockout issue):
.
.
.
TASK [pxe_install : PXE boot a blade. Will reboot
even if currently powered on.]
*****************************************************
*
changed: [k8s-3.rainbow.lab.us.oracle.com ->


localhost]
changed: [k8s-1.rainbow.lab.us.oracle.com ->
localhost]
changed: [k8s-2.rainbow.lab.us.oracle.com ->
localhost]
changed: [k8s-5.rainbow.lab.us.oracle.com ->
localhost]
changed: [k8s-4.rainbow.lab.us.oracle.com ->
localhost]
changed: [k8s-6.rainbow.lab.us.oracle.com ->
localhost]
changed: [k8s-7.rainbow.lab.us.oracle.com ->
localhost]
changed: [db-1.rainbow.lab.us.oracle.com ->
localhost]

TASK [pxe_install : Wait for hosts to come online]


*****************************************************
*******************************
ok: [k8s-1.rainbow.lab.us.oracle.com]
ok: [k8s-6.rainbow.lab.us.oracle.com]
ok: [k8s-3.rainbow.lab.us.oracle.com]
ok: [k8s-2.rainbow.lab.us.oracle.com]
ok: [k8s-7.rainbow.lab.us.oracle.com]
ok: [db-1.rainbow.lab.us.oracle.com]
fatal: [k8s-5.rainbow.lab.us.oracle.com]: FAILED! =>
{"changed": false, "elapsed": 2100, "msg": "Timeout
when waiting for 172.16.3.12:22"}
fatal: [k8s-4.rainbow.lab.us.oracle.com]: FAILED! =>
{"changed": false, "elapsed": 2101, "msg": "Timeout
when waiting for 172.16.3.11:22"}

Accessing the install process via a KVM or via VSP should display
the following last few lines.
.
.
.
[ OK ] Stopped Remount Root and Kernel File
Systems.
Stopping Remount Root and Kernel File
Systems...
[ OK ] Stopped Create Static Device Nodes in /dev.
Stopping Create Static Device Nodes in /
dev...
[ OK ] Started Restore /run/initramfs.
[ OK ] Reached target Shutdown.

Reboot the "stuck" blades using the KVM, the ssh session to the HP
ILO, or via the power button on the blade. This example shows how
to use the HP ILO to reboot the blade (using blade 1 or k8s-4)
Login to the blade ILO:
$ ssh root@192.168.20.121 using the root credentials


</>hpiLO->

Once the prompt is displayed, issue the following command:
</>hpiLO-> reset /system1 hard

Go to the VSP console to watch the reboot process and wait for the login prompt to appear:
</>hpiLO-> vsp
4. Execute the OS Install yum-update on the Hosts from the Bastion
Host. This step disables any existing .repo files that are currently
existing in directory /etc/yum.repos.d on Host after the OS Install. It
then copies any .repo files in the /var/occne/<cluster_name> directory
into the /etc/yum.repos.d and sets up the customer repo access.
$ docker run --rm --network host --cap-add=NET_ADMIN
-v /var/occne/<cluster_name>/:/host -v /var/
occne/:/var/occne:rw -e "OCCNEARGS=--limit
<limit_filter>,localhost --tags yum_update"
<image_name>:<image_tag>

Example:

$ docker run --rm --network host --cap-add=NET_ADMIN \
  -v /var/occne/rainbow.lab.us.oracle.com/:/host -v /var/occne/:/var/occne:rw \
  -e "OCCNEARGS=--limit host_hp_gen_10[0:7],localhost --tags yum_update" \
  10.75.200.217:5000/os_install:1.0.1
5. Check the /etc/yum.repos.d directory on each host for non-disabled repo files. These files
   should be disabled. The only file that should be enabled is the customer specific .repo file
   that was set in the /var/occne/<cluster_name>/yum.repos.d directory on the Bastion Host.
   If any of these files are not disabled then each file must be renamed as
   <filename>.repo.disabled.
$ cd /etc/yum.repos.d
$ ls

Check for any files other than the customer specific .repo file that are not listed as disabled.
If any exist, disable them using the following command:

$ mv <filename>.repo <filename>.repo.disabled

Example:
$ mv oracle-linux-ol7.repo oracle-linux-
ol7.repo.disabled
$ mv uek-ol7.repo uek-ol7.repo.disabled
$ mv virt-ol7.repo virt-ol7.repo.disabled
6. Execute the OS Install datastore on the Hosts from the Bastion Host


$ docker run --rm --network host --cap-add=NET_ADMIN \
  -v /var/occne/<cluster_name>/:/host -v /var/occne/:/var/occne:rw \
  -e "OCCNEARGS=--limit <limit_filter>,localhost --tags datastore" \
  <image_name>:<image_tag>

Example:

$ docker run --rm --network host --cap-add=NET_ADMIN \
  -v /var/occne/rainbow.lab.us.oracle.com/:/host -v /var/occne/:/var/occne:rw \
  -e "OCCNEARGS=--limit host_hp_gen_10[0:7],localhost --tags datastore" \
  10.75.200.217:5000/os_install:1.0.1
7. Execute the OS Install OL7 Security Hardening on the Hosts from
the Bastion Host. This step performs a set of security hardening steps
on the OS after it has been installed.
Note: The two extra-vars included in the command are not used in
the context of this command but need to be there to set the values to
something other than an empty string.
$ docker run --rm --network host --cap-add=NET_ADMIN
-v /var/occne/<cluster_name>/:/host -v /var/
occne/:/var/occne:rw -e "OCCNEARGS=--limit
<limit_filter>,localhost --tags ol7_hardening --
extra-vars ansible_env=172.16.3.4 --extra-vars
http_proxy=172.16.3.4" <image_name>:<image_tag>

Example:

$ docker run -it --rm --network host --cap-add=NET_ADMIN \
  -v /var/occne/rainbow.lab.us.oracle.com/:/host -v /var/occne/:/var/occne:rw \
  -e "OCCNEARGS=--limit host_hp_gen_10[0:7],localhost --tags ol7_hardening --extra-vars ansible_env=172.16.3.4 --extra-vars http_proxy=172.16.3.4" \
  10.75.200.217:5000/os_install:1.0.1

8. Re-instantiate the management link bridge on RMS1
   1. Run the following commands on the RMS1 host OS:
      $ sudo su
      $ nmcli con add con-name mgmtBridge type bridge ifname mgmtBridge
$ nmcli con add type bridge-slave ifname eno2 master
mgmtBridge
$ nmcli con add type bridge-slave ifname eno3 master
mgmtBridge
$ nmcli con mod mgmtBridge ipv4.method manual
ipv4.addresses 192.168.2.11/24
$ nmcli con up mgmtBridge
2. Verify access to the ToR switches' management ports.
$ ping 192.168.2.1
$ ping 192.168.2.2
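
Because the --limit filter decides which servers get re-imaged, it is worth previewing the
selection before running the container. The check below is optional; ansible's --list-hosts
flag only prints the matched hosts and does not touch them. It can be run from inside the
os_install container (where ansible is available) against the mounted hosts.ini.

$ ansible -i /host/hosts.ini 'host_hp_gen_10[0:7]' --list-hosts    # prints the hosts the limit filter would select
$ ansible-playbook -i /host/hosts.ini /install/os-install.yaml \
    --limit 'host_hp_gen_10[0:7],localhost' --list-hosts           # same preview, using the playbook itself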


Install Backup Bastion Host


Introduction
This procedure details the steps necessary to install the Backup Management VM in the Storage
Host (RMS1) and to back up the data from the Management VM in the Storage Host (RMS2)
into the Backup Management VM in the Storage Host (RMS1). After the first Storage Host is
reinstalled, the procedure below is used to create the Backup Management VM in that Storage
Host. The Management VM in the first Storage Host (RMS1) is a backup for the Management
VM in the second Storage Host (RMS2).

Prerequisites
1. Management VM is present in the Storage Host (RMS2).
2. Storage Host (RMS1) is reinstalled using the os-install container.
3. First and Second Storage Hosts are defined in the OCCNE Inventory File Template.
4. Host names, IP addresses, and network information assigned to this Management VM are
   captured in the OCCNE 1.0 Installation PreFlight checklist.
Expectations
1. Management VM in the first Storage Host is a backup for the Management VM in the
   second Storage Host.
2. All the required config files and data configured in the Backup Management VM in the first
   Storage Host (RMS1) are copied from the Management VM in the second Storage Host (RMS2).

References
1. https://linux.die.net/man/1/virt-install
2. https://linuxconfig.org/how-to-create-and-manage-kvm-virtual-machines-from-cli
3. https://www.cyberciti.biz/faq/kvm-install-centos-redhat-using-kickstart-ks-cfg/
4. https://opensource.com/business/16/9/linux-users-guide-lvm

Table 3-12 Procedure to Install Backup Bastion Host

Step # Procedure Description


1. Create the Backup Management VM in Storage Host (RMS1)
   1. Login to the Storage Host (RMS1).
   2. Make sure the "/tmp/bastion_host.ks" kickstart file exists in the Storage Host.
   3. Follow the procedure OCCNE Installation of the Bastion Host for creating the
      Management VM.
   4. Follow the procedure OCCNE Configuration of the Bastion Host for configuring the
      Management VM.



2. Backup the cluster directory to the Backup Management VM
   The Backup Management VM is created to provide HA; that is, if the Management VM fails and is not available, the Backup Management VM is used. After the Backup Management VM is created on the first Storage Host (RMS1), copy all the configuration files and SSH keys from the Management VM (second Storage Host, RMS2) to the Backup Management VM (first Storage Host, RMS1).
   1. Copy the "/var/occne/" directory from the Management VM (second Storage Host) to the Backup Management VM.
      a. Login to the Management VM on the second Storage Host.
      b. Copy the "/var/occne/<cluster_name>" directory to the Backup Management VM on the first Storage Host. This directory contains all the required config files, the hosts.ini inventory file, SSH keys, the Artifacts directory, the MySQL software, the Oracle Linux ISO, and so on.
         $ sudo su
         $ cd /var/occne
         $ scp -r -i ./<cluster_name>/.ssh/occne_id_rsa /var/occne/<cluster_name> admusr@10.75.216.XXX:/var/occne/
   <cluster_name> is the directory created in /var/occne for installing the OS on all the host servers.
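After the copy completes, it can be useful to confirm that both Management VMs hold the same cluster directory contents. The following is an optional sketch, run from the Management VM on RMS2, assuming the same <cluster_name> directory, the occne_id_rsa key, and the placeholder backup VM address used in the scp example above:

   $ find /var/occne/<cluster_name> -type f | wc -l
   $ ssh -i /var/occne/<cluster_name>/.ssh/occne_id_rsa admusr@10.75.216.XXX "find /var/occne/<cluster_name> -type f | wc -l"
   # The two file counts should match; spot-check key files such as hosts.ini and the contents of the .ssh directory.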

Database Tier Installer


This procedure documents the steps for installing the MySQL Cluster on VMs. The VMs are created manually using the virt-install CLI tool; the MySQL Cluster is installed using the db-install docker container.
Installing the MySQL Cluster on these VMs requires an inventory file (hosts.ini) in which all the MySQL node IP addresses are configured. This inventory file provides the db-install docker container with all the necessary information about the MySQL Cluster (an illustrative sketch of the relevant inventory groups follows below).
The MySQL Cluster is installed using the MySQL Cluster Manager binary release, which includes the MySQL NDB Cluster software. Download the MySQL Cluster Manager version specified in the OCCNE 1.0 Installation PreFlight Checklist.
In the OCCNE platform, all NFs need a database to store application data, so the MySQL Cluster is installed to store all application and configuration data for the NFs. For installing the MySQL Cluster, VMs are created on the Kubernetes master nodes and Database Servers as configured in the OCCNE Inventory File Template file.
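For orientation only, a database-tier fragment of hosts.ini might look like the sketch below. The group names, host names, and addresses are illustrative assumptions; the authoritative group layout is defined in the OCCNE Inventory File Template and the site PreFlight Checklist.

   # Hypothetical inventory sketch; group names and addresses are placeholders
   [mysql_mgmt_nodes]
   db-mgmt-1 ansible_host=172.16.3.41
   db-mgmt-2 ansible_host=172.16.3.42

   [mysql_data_nodes]
   db-data-1 ansible_host=172.16.3.43
   db-data-2 ansible_host=172.16.3.44

   [mysql_sql_nodes]
   db-sql-1 ansible_host=172.16.3.45
   db-sql-2 ansible_host=172.16.3.46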

Prerequisites
Below is the list of prerequisites required for creating the VMs and installing the MySQL Cluster.
1. The VMs needed for installing the MySQL Cluster are created as part of the VM creation procedure OCCNE Install VMs for MySQL Nodes and Management Server.
2. SSH keys are generated during host provisioning in the /var/occne/<cluster_name> directory. These SSH keys are configured on the VMs as part of OCCNE Install VMs for MySQL Nodes and Management Server, so that the db-install container can install the MySQL Cluster software on them.
3. The host running the docker image must have docker installed.
4. A defined and installed site hosts.ini inventory file must also be present.
5. Download the MySQL Cluster Manager software as specified in the OCCNE 1.0 Installation PreFlight Checklist and place it in the /var/occne directory on the Bastion Host (Management VM).

Limitations and Expectations

1. The db-install container deploys the MySQL Cluster on these VMs according to the configuration provided in the OCCNE Inventory File Template file.
2. The steps below install the different MySQL Cluster node types (Management nodes, Data nodes, and SQL nodes) on these VMs.

References
1. MySQL NDB Cluster : https://dev.mysql.com/doc/refman/5.7/en/mysql-cluster.html
2. MySQL Cluster Manager: https://dev.mysql.com/doc/mysql-cluster-manager/1.4/en/

Steps to perform OCCNE Database Tier Installer

Table 3-13 OCCNE Database Tier Installer

Step # Procedure Description


1  Login to the Management Node
   Login to the Management Node using the IP address noted in the OCCNE 1.0 Installation PreFlight Checklist.

2  Configure the occne_mysqlndb_DataMemory variable in the hosts.ini file
   Check the /var/occne/<cluster_name> directory created during the OS install procedure, as specified in the OCCNE Oracle Linux OS Installer. This directory contains the hosts.ini inventory file and the SSH keys generated during os-install, which the db-install container uses to install the MySQL Cluster.
   Configure the occne_mysqlndb_DataMemory variable in the hosts.ini file as documented in OCCNE Inventory File Preparation; the value for this variable can be obtained from OCCNE Install VMs for MySQL Nodes and Management Server. A minimal example of the setting is shown below.
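For illustration, the variable is a simple key=value entry in hosts.ini, typically under the [occne:vars] section (OCCNE Inventory File Preparation defines the exact location). The value below is an assumption and must be replaced with the size calculated in OCCNE Install VMs for MySQL Nodes and Management Server:

   [occne:vars]
   ...
   occne_mysqlndb_DataMemory=8G    # illustrative value only
   ...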
3  Note down the db install container name
   Note down the db install container name as specified in the manifest.
   Container Name: db_install_container_name = db_install:0.1.0-beta.3
   Note: This container name is used in the next step when running the db install container, which installs the MySQL Cluster.


4  Run db-install container
   The db-install container installs the MySQL Cluster on the VMs configured in the hosts.ini inventory file. All the above steps should be performed before running the db-install container. Replace <customer_repo_location> and <db_install_container_name> in the docker command and docker-compose.yaml file below.
   1. Using docker
      $ docker run -it --network host --cap-add=NET_ADMIN \
        -v /var/occne/<cluster_name>:/host \
        -v /var/occne:/var/occne:rw \
        <customer_repo_location>/<db_install_container_name>

      For example:
      $ docker run -it --network host --cap-add=NET_ADMIN \
        -v /var/occne/rainbow/:/host \
        -v /var/occne:/var/occne:rw \
        reg-1:5000/db_install:1.0.1
   2. Using docker-compose
      a. Create a docker-compose.yaml file in the /var/occne/<cluster_name> directory.
         $ vi docker-compose.yaml
         db_install_<cluster_name>:
           net: host
           stdin_open: true
           tty: true
           image: <customer_repo_location>/<db_install_container_name>
           container_name: <cluster_name>_db_installer
           cap_add:
             - NET_ADMIN
           volumes:
             - /var/occne/<cluster_name>:/host
             - /var/occne:/var/occne:rw

         Note: In the docker-compose.yaml file above, <cluster_name> should be replaced with the cluster directory name.
      b. Run the docker-compose yaml file
         $ docker-compose run --rm db_install_<cluster_name>

         For example, if the directory was created as OccneCluster, then <cluster_name> should be replaced with "OccneCluster":
         $ docker-compose run --rm db_install_OccneCluster

   The db_install container takes around 5 to 10 minutes to install the MySQL Cluster nodes on these VMs. After the db_install container completes, MySQL is installed on the VMs as configured in the hosts.ini file.

5  Test the MySQL Cluster
   Test the MySQL Cluster by executing the following command:
   $ docker run -it --network host --cap-add=NET_ADMIN \
     -v /var/occne/<cluster_name>:/host \
     -v /var/occne:/var/occne:rw \
     <customer_repo_location>/<db_install_container_name> \
     /test/cluster_test 0

   For example:
   $ docker run -it --network host --cap-add=NET_ADMIN \
     -v /var/occne/rainbow:/host \
     -v /var/occne:/var/occne:rw \
     reg-1:5000/db_install:1.0.1 \
     /test/cluster_test
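As an additional manual check, the NDB management client can report the cluster topology directly. This is a sketch only; it assumes the ndb_mgm client is available on a MySQL management-node VM and that the management server listens on its default port:

   $ ndb_mgm -e show
   # All data nodes and SQL/API nodes should be reported as connected; any node shown as
   # "not connected" indicates an installation or network problem to investigate.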

6  Login to each of the MySQL SQL nodes and change the MySQL root user password
   As part of the MySQL Cluster installation, the db_install container generates a random, expired password for the MySQL root user on each SQL node. This password is stored in the /var/occnedb/mysqld_expired.log file, so login to each of the MySQL SQL nodes and change the MySQL root user password.
   1. Login to the MySQL SQL node VM.
   2. Login to the mysql client as the root user.
      $ sudo su
      $ mysql -h 127.0.0.1 -uroot -p
   3. Enter the expired random password for the mysql root user stored in the /var/occnedb/mysqld_expired.log file:
      $ mysql -h 127.0.0.1 -uroot -p
      Enter password:
   4. Change the root password:
      mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY '<NEW_PASSWORD>';
      mysql> FLUSH PRIVILEGES;
   Perform this step for all the remaining SQL nodes.
   Note: Here <NEW_PASSWORD> is the new password for the mysql root user.
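To confirm that the new password took effect on a node, a quick re-login with a trivial query can be used; this is an optional sketch using the same local connection as above:

   $ mysql -h 127.0.0.1 -uroot -p -e "SELECT VERSION(); SHOW STATUS LIKE 'Ndb_number_of_ready_data_nodes';"
   # Enter the new password when prompted. The statements should run without an expired-password
   # error, and the Ndb status variable should report the expected number of ready data nodes.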


OCCNE Kubernetes Installer


These procedures provide the steps required to install Kubernetes onto all hosts from the Bastion Host using the occne/k8s_install container. Once completed, the configure procedure can be run.

Prerequisites
1. All host servers targeted by this installation are captured in OCCNE Inventory File Preparation.
2. The host names, IP addresses, and network information assigned to these hosts are captured in the OCCNE 1.0 Installation PreFlight Checklist.
3. The cluster inventory file and SSH keys are present in the <cluster_name> folder in the /var/occne directory.
4. A docker image for 'k8s_install' must be available in the docker registry accessible by the Bastion Host. See OCCNE 1.0 - Installation Procedure.

Limitations and Expectations

All steps are executable from a laptop running an SSH application (for example, PuTTY) with access to the Management Interface.

Steps to Perform OCCNE Kubernetes Installer

Table 3-14 Procedure to install OCCNE Kubernetes

Step # Procedure Description


1  Initial Configuration on the Bastion Host to Support the Kubernetes Install
   1. Log into the Bastion Host using the IP supplied from OCCNE 1.0 Installation PreFlight Checklist: Complete VM IP Table.
   2. Verify that the entries in the hosts.ini file for occne_private_registry, occne_private_registry_address, occne_private_registry_port, and occne_k8s_binary_repo are correct. The fields listed must reflect the new Bastion Host IP and the names of the repositories correctly.


2  Execute the Kubernetes Install on the Hosts from the Bastion Host
   Note:
   The <cluster_name> field is derived from the hosts.ini file field occne_cluster_name.
   The <image_name>:<image_tag> represent the images in the Bastion Host docker image registry as set up in procedure OCCNE Configuration of the Bastion Host.

   Create a file named repo_remove.yaml in the /var/occne/<cluster_name> directory with the following content:

   - hosts: k8s-cluster
     tasks:
       - name: Clean artifact path
         file:
           state: absent
           path: "/etc/yum.repos.d/docker.repo"

   Start the k8s_install container with a bash shell:

   $ docker run --rm -it --network host --cap-add=NET_ADMIN \
     -v /var/occne/<cluster_name>/:/host \
     -v /var/occne/:/var/occne:rw \
     -e "OCCNEARGS=<k8s_args>" \
     <docker_registry>/<image_name>:<image_tag> bash

   For example:

   $ docker run --rm -it --network host --cap-add=NET_ADMIN \
     -v /var/occne/rainbow.lab.us.oracle.com/:/host \
     -v /var/occne/:/var/occne:rw \
     -e OCCNEARGS="-vv" \
     10.75.200.217:5000/k8s_install:1.0.1 bash

   Run the following commands within the bash shell of the container:

   $ sed -i /kubespray/cluster.yml -re '47,57d'
   $ sed -i /kubespray/roles/container-engine/docker/templates/rh_docker.repo.j2 -re '10,17d'

   Run the Kubernetes install playbook:

   $ ansible-playbook -i /kubespray/inventory/occne/hosts.ini --become --become-user=root \
     --private-key /host/.ssh/occne_id_rsa /occne/kubespray/cluster.yml -vvvvv

   Then run the repo_remove.yaml playbook created above, for example:

   $ ansible-playbook -i /kubespray/inventory/occne/hosts.ini --become --become-user=root \
     --private-key /host/.ssh/occne_id_rsa /var/occne/rainbow.lab.us.oracle.com/repo_remove.yaml

   %% Run the exit command below to exit the bash shell of the container
   $ exit


3  Update the $PATH Environment Variable to access the kubectl command from the kubectl.sh script
   On the Bastion Host, edit the /root/.bash_profile file and update the PATH variable in that file.

   %% On the Bastion Host edit the file /root/.bash_profile.

   # .bash_profile

   # Get the aliases and functions
   if [ -f ~/.bashrc ]; then
       . ~/.bashrc
   fi

   # User specific environment and startup programs

   PATH=$PATH:$HOME/bin

   export PATH

   %% Update the PATH variable as follows:

   PATH=$PATH:$HOME/bin:/var/occne/<cluster_name>/artifacts

   %% Save the file and source the .bash_profile file:

   source /root/.bash_profile

   %% Execute the following to verify the $PATH has been updated:

   echo $PATH
   /usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/root/bin:/var/occne/rainbow.lab.us.oracle.com/artifacts

   %% Make sure the permissions on the /var/occne/rainbow.lab.us.oracle.com/artifacts/kubectl.sh and /var/occne/rainbow.lab.us.oracle.com/artifacts/kubectl files are set correctly:

   -rwxr-xr-x. 1 root root 248122280 May 30 18:23 kubectl
   -rwxr-xr-x. 1 root root       112 May 30 18:44 kubectl.sh

   %% If not, run the following command:

   chmod +x kubectl

4  Run Kubernetes Cluster Tests
   To verify the Kubernetes installation, run the k8s_install container with the /test/cluster_test argument (a hedged example follows below).
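   The exact test invocation is not shown in this step. The sketch below assumes the k8s_install test is run the same way as the db_install test earlier in this chapter, that is, by mounting the cluster directory and passing /test/cluster_test to the container; substitute the real registry, image name, and tag from the Bastion Host registry:

   $ docker run -it --rm --network host --cap-add=NET_ADMIN \
     -v /var/occne/<cluster_name>/:/host \
     -v /var/occne/:/var/occne:rw \
     <docker_registry>/k8s_install:<image_tag> /test/cluster_test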


OCCNE Automated Initial Configuration

Introduction
Common Services refers to the collection of components deployed to OCCNE that provide logging, tracing, and metrics collection for the cluster. They are used to monitor the cluster and to raise alerts when an anomaly occurs or a failure is imminent. The procedure below is used to install the common services.

Prerequisites
1. All procedures in OCCNE Kubernetes Installer is complete.
2. The host running the docker image must have docker installed. Refer to Install VMs for
MySQL Nodes and Management Server for more information.
3. A defined and installed site hosts.ini file should also be present. Check OCCNE Inventory
File Preparation for instructions for developing this file.
4. A docker image named 'occne/configure' must be available in the customer repository.
OCCNE 1.0 - Installation Procedure

Procedure Steps

Table 3-15 Procedure to install common services

Step # Procedure Description


1.  Configure variables
    The following variables should be configured according to customer needs; they can be modified to point to customer-specific repositories, along with any other configuration needed in the hosts.ini file as documented in the OCCNE Inventory File Template (an illustrative snippet follows below):
    1. <occne_helm_stable_repo_url>
    2. <occne_helm_images_repo>
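    For illustration, these variables are set in the [occne:vars] section of hosts.ini. The values below are assumptions modeled on the repository example in Appendix A (the occne_helm_images_repo value in particular is only a placeholder) and must be replaced with the customer repository details:

    [occne:vars]
    ...
    occne_helm_stable_repo_url='http://winterfell:8082/charts/'    # note the trailing slash
    occne_helm_images_repo='winterfell:5000/occne'                 # placeholder registry path
    ...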

2.  Run configure image
    Run the configure image using the command below. After the "configure:" keyword, put the tag of your latest pulled image.

    $ docker run --rm -v /<PATH_TO_CLUSTER>/<CLUSTER_NAME>:/host \
      <CUSTOMER-PROVIDED_REPOSITORY_LOCATION>/occne/configure:<RELEASE_TAG>

    Example:

    $ docker run --rm --network host --cap-add=NET_ADMIN \
      -v /var/occne/rainbow.lab.us.oracle.com/:/host \
      -v /var/occne/:/var/occne:rw \
      -e "OCCNEARGS=--limit host_hp_gen_10[0:7],localhost" \
      10.75.200.217:5000/configure:1.0.1

    Note: Replace <RELEASE_TAG> after the "configure:" image name with the latest build tag.


3.  Verify the services
    After the above command successfully completes, the services deployed and exposed can be verified using the commands below.

    $ /var/occne/<cluster_name>/artifacts/kubectl.sh get service --namespace=<NAMESPACE_VALUE>
    or
    $ /var/occne/<cluster_name>/artifacts/kubectl.sh get service --all-namespaces

    Example:
    $ /var/occne/<cluster_name>/artifacts/kubectl.sh get service --all-namespaces
    $ /var/occne/<cluster_name>/artifacts/kubectl.sh get service --namespace=default
    $ /var/occne/<cluster_name>/artifacts/kubectl.sh get service --namespace=kube-system
    $ /var/occne/<cluster_name>/artifacts/kubectl.sh get service --namespace=occne-infra


4.  Sample Output
    To verify that the above command executed successfully and that the services were installed properly, the output shown below can be used as a reference.

    NAMESPACE     NAME                                            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                               AGE
    default       kubernetes                                      ClusterIP      10.233.0.1      <none>          443/TCP                               20h
    kube-system   coredns                                         ClusterIP      10.233.0.3      <none>          53/UDP,53/TCP,9153/TCP                16h
    kube-system   tiller-deploy                                   ClusterIP      10.233.26.235   <none>          44134/TCP                             16h
    occne-infra   occne-elastic-elasticsearch-client              ClusterIP      10.233.57.169   <none>          9200/TCP                              106s
    occne-infra   occne-elastic-elasticsearch-discovery           ClusterIP      None            <none>          9300/TCP                              106s
    occne-infra   occne-elastic-exporter-elasticsearch-exporter   ClusterIP      10.233.39.27    <none>          9108/TCP                              104s
    occne-infra   occne-grafana                                   LoadBalancer   10.233.31.105   10.75.163.132   80:32257/TCP                          90s
    occne-infra   occne-kibana                                    LoadBalancer   10.233.12.112   10.75.163.128   80:30871/TCP                          101s
    occne-infra   occne-metrics-server                            ClusterIP      10.233.50.144   <none>          443/TCP                               87s
    occne-infra   occne-prometheus-alertmanager                   LoadBalancer   10.233.36.86    10.75.163.130   80:31101/TCP                          93s
    occne-infra   occne-prometheus-alertmanager-headless          ClusterIP      None            <none>          80/TCP,6783/TCP                       93s
    occne-infra   occne-prometheus-kube-state-metrics             ClusterIP      None            <none>          80/TCP                                93s
    occne-infra   occne-prometheus-node-exporter                  ClusterIP      10.233.31.99    <none>          9100/TCP                              91s
    occne-infra   occne-prometheus-pushgateway                    ClusterIP      10.233.49.128   <none>          9091/TCP                              93s
    occne-infra   occne-prometheus-server                         LoadBalancer   10.233.19.225   10.75.163.131   80:31511/TCP                          93s
    occne-infra   occne-tracer-jaeger-agent                       ClusterIP      10.233.45.62    <none>          5775/UDP,6831/UDP,6832/UDP,5778/TCP   99s
    occne-infra   occne-tracer-jaeger-collector                   ClusterIP      10.233.58.112   <none>          14267/TCP,14268/TCP,9411/TCP          99s
    occne-infra   occne-tracer-jaeger-query                       LoadBalancer   10.233.43.110   10.75.163.129   80:31319/TCP                          99s

5.  Remove configuration, if a node fails
    If a node fails during configuration deployment, the configuration can be removed using the following:

    $ docker run --rm --network host --cap-add=NET_ADMIN \
      -v /var/occne/<cluster_name>/:/host \
      -v /var/occne/:/var/occne:rw \
      -e "OCCNEARGS=--limit <host_filter>,localhost --tags remove" \
      <repo>/configure:<tag>

    Example:
    $ docker run --rm --network host --cap-add=NET_ADMIN \
      -v /var/occne/rainbow.lab.us.oracle.com/:/host \
      -v /var/occne/:/var/occne:rw \
      -e "OCCNEARGS=--limit host_hp_gen_10[0:7],localhost --tags remove" \
      10.75.200.217:5000/configure:1.0.1

4
Post Installation Activities

Post Install Verification


Introduction
This procedure verifies the installation of CNE Common Services on all nodes hosting the cluster. Several UI endpoints are installed with the common services, such as Kibana, Grafana, Prometheus Server, and Alertmanager; the steps below launch these UI endpoints and verify that the services are installed and working properly.

Prerequisites
1. Common services have been installed on all nodes hosting the cluster.
2. Gather the list of cluster names and version tags for the docker images that were used during install.
3. All cluster nodes and service pods should be up and running.
4. Commands are required to be run on the Management server.
5. Any modern (HTML5 compliant) browser with network connectivity to CNE.

Table 4-1 OCCNE Post Install Verification

Step No.  Procedure  Description

1.  Run the commands to get the load-balancer IP address and port number for the Kibana Web Interface.

    # LoadBalancer IP address of the kibana service is retrieved with the below command
    $ export KIBANA_LOADBALANCER_IP=$(kubectl get services occne-kibana --namespace occne-infra -o jsonpath="{.status.loadBalancer.ingress[*].ip}")

    # LoadBalancer port number of the kibana service is retrieved with the below command
    $ export KIBANA_LOADBALANCER_PORT=$(kubectl get services occne-kibana --namespace occne-infra -o jsonpath="{.spec.ports[*].port}")

    # Complete url for accessing kibana in an external browser
    $ echo http://$KIBANA_LOADBALANCER_IP:$KIBANA_LOADBALANCER_PORT
    http://10.75.182.51:80

    Launch the browser and navigate to http://$KIBANA_LOADBALANCER_IP:$KIBANA_LOADBALANCER_PORT (e.g. http://10.75.182.51:80 in the example above) received in the output of the above commands.

2.  Using Kibana, verify Log and Tracer data is stored in Elasticsearch
    1. Navigate to the "Management" tab in Kibana.
    2. Click on "Index Patterns". You should be able to see the two patterns below, which confirms that Log and Tracer data have been stored in Elasticsearch successfully.
       a. jaeger-*
       b. logstash-*
    3. Type logstash* in the index pattern field and wait for a few seconds.
    4. Verify the "Success" message and that the index pattern "logstash-YYYY.MM.DD" appears. Click on "Next step".
    5. Select "I don't want to use the Time Filter" and click on "Create index pattern".
    6. Ensure the web page showing the indices appears in the main viewer frame.
    7. Click on the "Discover" tab; you should be able to view raw Log records.
    8. Repeat steps 3-6 using "jaeger*" instead of "logstash*" to ensure the Tracer data is stored in Elasticsearch.

3.  Verify Elasticsearch cluster health
    1. Navigate to "Dev Tools" in Kibana.
    2. Enter the command "GET _cluster/health" and press the green arrow. You should see the status as "green" on the right side of the screen. (A command-line alternative is sketched below.)
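    For a command-line alternative from the Management server, the Elasticsearch client service can be reached through a temporary port-forward. This is a sketch only, assuming kubectl access and the occne-elastic-elasticsearch-client service name shown in the sample output earlier:

    $ kubectl port-forward svc/occne-elastic-elasticsearch-client 9200:9200 --namespace occne-infra &
    $ curl -s http://127.0.0.1:9200/_cluster/health?pretty      # "status" : "green" indicates a healthy cluster
    $ kill %1                                                   # stop the port-forward when done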

4.  Verify Prometheus Alertmanager is accessible
    1. Run the below commands to get the load-balancer IP address and port number for the Prometheus Alertmanager Web Interface.

       # LoadBalancer IP address of the alertmanager service is retrieved with the below command
       $ export ALERTMANAGER_LOADBALANCER_IP=$(kubectl get services occne-prometheus-alertmanager --namespace occne-infra -o jsonpath="{.status.loadBalancer.ingress[*].ip}")

       # LoadBalancer port number of the alertmanager service is retrieved with the below command
       $ export ALERTMANAGER_LOADBALANCER_PORT=$(kubectl get services occne-prometheus-alertmanager --namespace occne-infra -o jsonpath="{.spec.ports[*].port}")

       # Complete url for accessing alertmanager in an external browser
       $ echo http://$ALERTMANAGER_LOADBALANCER_IP:$ALERTMANAGER_LOADBALANCER_PORT
       http://10.75.182.53:80

    2. Launch the browser and navigate to http://$ALERTMANAGER_LOADBALANCER_IP:$ALERTMANAGER_LOADBALANCER_PORT (e.g. http://10.75.182.53:80 in the example above) received in the output of the above commands. Ensure the Alertmanager GUI is accessible.

5.  Verify metrics are scraped and stored in the Prometheus server
    1. Run the below commands to get the load-balancer IP address and port number for the Prometheus Server Web Interface.

       # LoadBalancer IP address of the prometheus service is retrieved with the below command
       $ export PROMETHEUS_LOADBALANCER_IP=$(kubectl get services occne-prometheus-server --namespace occne-infra -o jsonpath="{.status.loadBalancer.ingress[*].ip}")

       # LoadBalancer port number of the prometheus service is retrieved with the below command
       $ export PROMETHEUS_LOADBALANCER_PORT=$(kubectl get services occne-prometheus-server --namespace occne-infra -o jsonpath="{.spec.ports[*].port}")

       # Complete url for accessing prometheus in an external browser
       $ echo http://$PROMETHEUS_LOADBALANCER_IP:$PROMETHEUS_LOADBALANCER_PORT
       http://10.75.182.54:80

    2. Launch the browser and navigate to http://$PROMETHEUS_LOADBALANCER_IP:$PROMETHEUS_LOADBALANCER_PORT (e.g. http://10.75.182.54:80 in the example above) received in the output of the above commands. Ensure the Prometheus server GUI is accessible.
    3. Select "up" from the "insert metric at cursor" drop-down and click on the "Execute" button.
    4. The entries present under the Element section are scrape endpoints, and the Value section shows the corresponding status (1 for up, 0 for down). Ensure all the scrape endpoints have a value of 1, meaning up and running. (An API-based check is sketched below.)
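    The same check can be scripted against the Prometheus HTTP API from the Management server; a minimal sketch, reusing the environment variables exported above:

    $ curl -s "http://$PROMETHEUS_LOADBALANCER_IP:$PROMETHEUS_LOADBALANCER_PORT/api/v1/query?query=up" | python -m json.tool
    # Each entry under "result" is a scrape target; a sample value of "1" means that target is up.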

6.  Verify Alerts are configured
    1. Navigate to the Alerts tab of the Prometheus server GUI, or navigate directly to the URL http://$PROMETHEUS_LOADBALANCER_IP:$PROMETHEUS_LOADBALANCER_PORT/alerts, using the <PROMETHEUS_LOADBALANCER_IP> and <PROMETHEUS_LOADBALANCER_PORT> values obtained in the previous step.
    2. If the configured alert rules are listed in the "Alerts" tab of the Prometheus GUI, then alerts are configured properly. (An API-based listing is sketched below.)
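    Alert rules can also be listed without the GUI through the Prometheus rules API; a minimal sketch, again reusing the exported variables (the rule names returned depend on the deployed configuration, so none are assumed here):

    $ curl -s "http://$PROMETHEUS_LOADBALANCER_IP:$PROMETHEUS_LOADBALANCER_PORT/api/v1/rules" | python -m json.tool | grep '"name"'
    # A non-empty list of rule names indicates the alerting rules were loaded.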

7.  Verify Grafana is accessible and change the default password for the admin user
    1. Run the below commands to get the load-balancer IP address and port number for the Grafana Web Interface.

       # LoadBalancer IP address of the grafana service is retrieved with the below command
       $ export GRAFANA_LOADBALANCER_IP=$(kubectl get services occne-grafana --namespace occne-infra -o jsonpath="{.status.loadBalancer.ingress[*].ip}")

       # LoadBalancer port number of the grafana service is retrieved with the below command
       $ export GRAFANA_LOADBALANCER_PORT=$(kubectl get services occne-grafana --namespace occne-infra -o jsonpath="{.spec.ports[*].port}")

       # Complete url for accessing grafana in an external browser
       $ echo http://$GRAFANA_LOADBALANCER_IP:$GRAFANA_LOADBALANCER_PORT
       http://10.75.182.55:80

    2. Launch the browser and navigate to http://$GRAFANA_LOADBALANCER_IP:$GRAFANA_LOADBALANCER_PORT (e.g. http://10.75.182.55:80 in the example above) received in the output of the above commands. Ensure the Grafana GUI is accessible. The default username and password is admin/admin for first-time access.
    3. At the first connection to the Grafana dashboard, a 'Change Password' screen appears. Change the password to the customer-provided credentials.
       Note: Grafana data is not persisted, so if the Grafana service is restarted for some reason the change password screen appears again.
    4. Grafana dashboards are accessed after changing the default password in the above step.
    5. Click on "New dashboard".
    6. Click on "Add Query".
    7. From the "Queries to" drop-down, select "Prometheus" as the data source. The presence of the "Prometheus" entry in the "Queries to" drop-down ensures Grafana is connected to the Prometheus time-series database.
    8. In the Query section, enter sum by(__name__) ({kubernetes_namespace="occne-infra"}), then click anywhere outside of the textbox and wait a few seconds. Ensure the graph appears in the top section of the page. This query shows all the metrics, and the number of entries in each metric over the time span, originating from the kubernetes namespace 'occne-infra'. Any valid PromQL query can be entered in the Add Query section; further examples are sketched below, and the Prometheus documentation provides more details about PromQL.
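    The following PromQL examples expand on step 8. Here $metricnamefromlist is a placeholder for any metric name taken from the list produced by the first query, not a real metric:

    # Total number of samples for one metric across the whole cluster
    sum($metricnamefromlist)

    # The same metric broken out per pod within the occne-infra namespace
    sum by (kubernetes_pod_name) ($metricnamefromlist{kubernetes_namespace="occne-infra"})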

A
Artifacts
The following appendices outline procedures referenced by one or more install procedures.
These procedures may be conditionally executed based on customer requirements or to address
certain deployment environments.

Repository Artifacts
OL YUM Repository Requirements

The following manifest includes the current list of RPMs that have been tested with a fully
configured system.

Table A-1 OL YUM Repository Requirements

Num RPM Name/Version


1 nspr-4.19.0-1.el7_5.x86_64
2 gpgme-1.3.2-5.el7.x86_64
3 zlib-1.2.7-18.el7.x86_64
4 elfutils-libelf-0.172-2.el7.x86_64
5 numactl-libs-2.0.9-7.el7.x86_64
6 perl-macros-5.16.3-294.el7_6.x86_64
7 quota-nls-4.01-17.el7.noarch
8 GeoIP-1.5.0-13.el7.x86_64
9 lz4-1.7.5-2.0.1.el7.x86_64
10 qrencode-libs-3.4.1-3.el7.x86_64
11 ncurses-libs-5.9-14.20130511.el7_4.x86_64
12 sg3_utils-libs-1.37-17.el7.x86_64
13 libss-1.42.9-13.el7.x86_64
14 info-5.1-5.el7.x86_64
15 libselinux-utils-2.5-14.1.el7.x86_64
16 openssl-libs-1.0.2k-16.0.1.el7.x86_64
17 sed-4.2.2-5.el7.x86_64
18 python-libs-2.7.5-76.0.1.el7.x86_64
19 polkit-pkla-compat-0.1-4.el7.x86_64
20 libdb-5.3.21-24.el7.x86_64
21 glib2-2.56.1-2.el7.x86_64
22 iputils-20160308-10.el7.x86_64
23 libcap-ng-0.7.5-4.el7.x86_64
24 rhnlib-2.5.65-8.0.1.el7.noarch
25 libffi-3.0.13-18.el7.x86_64
26 python-firewall-0.5.3-5.el7.noarch

27 which-2.20-7.el7.x86_64
28 python-perf-3.10.0-957.5.1.el7.x86_64
29 expat-2.1.0-10.el7_3.x86_64
30 bind-libs-9.9.4-73.el7_6.x86_64
31 libidn-1.28-4.el7.x86_64
32 nss-pem-1.0.3-5.el7.x86_64
33 xmlrpc-c-1.32.5-1905.svn2451.el7.x86_64
34 libssh2-1.4.3-12.el7.x86_64
35 rpm-4.11.3-35.el7.x86_64
36 dmraid-1.0.0.rc16-28.el7.x86_64
37 gdbm-1.10-8.el7.x86_64
38 rpm-python-4.11.3-35.el7.x86_64
39 usb_modeswitch-data-20170806-1.el7.noarch
40 perl-Pod-Perldoc-3.20-4.el7.noarch
41 rhn-client-tools-2.0.2-24.0.5.el7.x86_64
42 perl-Pod-Usage-1.63-3.el7.noarch
43 dmidecode-3.1-2.el7.x86_64
44 perl-Storable-2.45-3.el7.x86_64
45 libteam-1.27-5.el7.x86_64
46 perl-constant-1.27-2.el7.noarch
47 libsmartcols-2.23.2-59.el7.x86_64
48 perl-Socket-2.010-4.el7.x86_64
49 util-linux-2.23.2-59.el7.x86_64
50 perl-PathTools-3.40-5.el7.x86_64
51 kmod-20-23.0.1.el7.x86_64
52 systemd-219-62.0.4.el7_6.5.x86_64
53 polkit-0.112-18.0.1.el7_6.1.x86_64
54 plymouth-0.8.9-0.31.20140113.0.1.el7.x86_64
55 libreport-python-2.1.11-42.0.1.el7.x86_64
56 yum-plugin-ulninfo-0.2-13.el7.noarch
57 boost-system-1.53.0-27.el7.x86_64
58 grub2-tools-2.02-0.76.0.3.el7.1.x86_64
59 cronie-anacron-1.4.11-20.el7_6.x86_64
60 virt-what-1.18-4.el7.x86_64
61 libnfnetlink-1.0.1-4.el7.x86_64
62 policycoreutils-2.5-29.0.1.el7_6.1.x86_64
63 keyutils-libs-1.5.8-3.el7.x86_64
64 dracut-network-033-554.0.3.el7.x86_64
65 desktop-file-utils-0.23-1.el7.x86_64
66 libreport-cli-2.1.11-42.0.1.el7.x86_64
67 abrt-dbus-2.1.11-52.0.1.el7.x86_64
68 pinfo-0.6.10-9.el7.x86_64

69 hunspell-en-GB-0.20121024-6.el7.noarch
70 abrt-addon-pstoreoops-2.1.11-52.0.1.el7.x86_64
71 acl-2.2.51-14.el7.x86_64
72 lvm2-libs-2.02.180-10.0.1.el7_6.3.x86_64
73 quota-4.01-17.el7.x86_64
74 libdwarf-20130207-4.el7.x86_64
75 kernel-3.10.0-957.5.1.el7.x86_64
76 setuptool-1.19.11-8.el7.x86_64
77 make-3.82-23.el7.x86_64
78 teamd-1.27-5.el7.x86_64
79 usbutils-007-5.el7.x86_64
80 libxcb-1.13-1.el7.x86_64
81 lshw-B.02.18-12.el7.x86_64
82 libmodman-2.0.1-8.el7.x86_64
83 kmod-kvdo-6.1.1.125-5.0.1.el7.x86_64
84 psacct-6.6.1-13.el7.x86_64
85 boost-date-time-1.53.0-27.el7.x86_64
86 vim-common-7.4.160-5.el7.x86_64
87 biosdevname-0.7.3-1.el7.x86_64
88 hardlink-1.0-19.el7.x86_64
89 vim-enhanced-7.4.160-5.el7.x86_64
90 smartmontools-6.5-1.el7.x86_64
91 libXau-1.0.8-2.1.el7.x86_64
92 sssd-client-1.16.2-13.el7_6.5.x86_64
93 aic94xx-firmware-30-6.el7.noarch
94 NetworkManager-tui-1.12.0-8.el7_6.x86_64
95 p11-kit-trust-0.23.5-3.el7.x86_64
96 libreport-plugin-mailx-2.1.11-42.0.1.el7.x86_64
97 libdrm-2.4.91-3.el7.x86_64
98 gzip-1.5-10.el7.x86_64
99 microcode_ctl-2.1-47.0.2.el7.x86_64
100 bash-completion-2.1-6.el7.noarch
101 shared-mime-info-1.8-4.el7.x86_64
102 at-3.1.13-24.el7.x86_64
103 blktrace-1.0.5-8.el7.x86_64
104 libxml2-python-2.9.1-6.0.1.el7_2.3.x86_64
105 dracut-config-rescue-033-554.0.3.el7.x86_64
106 rhn-setup-2.0.2-24.0.5.el7.x86_64
107 ntsysv-1.7.4-1.el7.x86_64
108 bind-utils-9.9.4-73.el7_6.x86_64
109 nano-2.3.1-10.el7.x86_64
110 e2fsprogs-1.42.9-13.el7.x86_64

111 man-db-2.6.3-11.el7.x86_64
112 perl-Pod-Escapes-1.04-294.el7_6.noarch
113 setserial-2.17-33.el7.x86_64
114 python-slip-0.4.0-4.el7.noarch
115 strace-4.12-9.el7.x86_64
116 pyxattr-0.5.1-5.el7.x86_64
117 rfkill-0.4-10.el7.x86_64
118 iwl1000-firmware-39.31.5.1-999.1.el7.noarch
119 iwl135-firmware-18.168.6.1-999.1.el7.noarch
120 iwl6050-firmware-41.28.5.1-999.1.el7.noarch
121 iwl6000g2a-firmware-17.168.5.3-999.1.el7.noarch
122 iwl2030-firmware-18.168.6.1-999.1.el7.noarch
123 grub2-common-2.02-0.76.0.3.el7.1.noarch
124 grub2-pc-modules-2.02-0.76.0.3.el7.1.noarch
125 glibc-common-2.17-260.0.15.el7_6.3.x86_64
126 passwd-0.79-4.el7.x86_64
127 pygpgme-0.3-9.el7.x86_64
128 filesystem-3.2-25.el7.x86_64
129 basesystem-10.0-7.0.1.el7.noarch
130 libpipeline-1.2.3-3.el7.x86_64
131 kernel-uek-firmware-4.1.12-112.16.4.el7uek.noarch
132 gpm-libs-1.20.7-5.el7.x86_64
133 ncurses-base-5.9-14.20130511.el7_4.noarch
134 libutempter-1.1.6-4.el7.x86_64
135 libdaemon-0.14-7.el7.x86_64
136 pcre-8.32-17.el7.x86_64
137 xz-libs-5.2.2-1.el7.x86_64
138 libxml2-2.9.1-6.0.1.el7_2.3.x86_64
139 popt-1.13-16.el7.x86_64
140 bzip2-libs-1.0.6-13.el7.x86_64
141 readline-6.2-10.el7.x86_64
142 libattr-2.4.46-13.el7.x86_64
143 libacl-2.2.51-14.el7.x86_64
144 gawk-4.0.2-4.el7_3.1.x86_64
145 dbus-glib-0.100-7.el7.x86_64
146 libgpg-error-1.12-3.el7.x86_64
147 libusbx-1.0.21-1.el7.x86_64
148 cpio-2.11-27.el7.x86_64
149 libtar-1.2.11-29.el7.x86_64
150 json-c-0.11-4.el7_0.x86_64
151 sqlite-3.7.17-8.el7.x86_64
152 lua-5.1.4-15.el7.x86_64

153 usermode-1.111-5.el7.x86_64
154 groff-base-1.22.2-8.el7.x86_64
155 hunspell-1.3.2-15.el7.x86_64
156 dmraid-events-1.0.0.rc16-28.el7.x86_64
157 perl-parent-0.225-244.el7.noarch
158 usb_modeswitch-2.5.1-1.el7.x86_64
159 perl-podlators-2.5.1-3.el7.noarch
160 python-slip-dbus-0.4.0-4.el7.noarch
161 kernel-3.10.0-862.el7.x86_64
162 perl-Encode-2.51-7.el7.x86_64
163 python-hwdata-1.7.3-4.el7.noarch
164 perl-threads-1.87-4.el7.x86_64
165 perl-Filter-1.49-3.el7.x86_64
166 perl-Time-HiRes-1.9725-3.el7.x86_64
167 perl-threads-shared-1.43-6.el7.x86_64
168 perl-Time-Local-1.2300-2.el7.noarch
169 perl-Carp-1.26-244.el7.noarch
170 perl-File-Path-2.09-2.el7.noarch
171 perl-Pod-Simple-3.28-4.el7.noarch
172 fxload-2002_04_11-16.el7.x86_64
173 pciutils-libs-3.5.1-3.el7.x86_64
174 alsa-tools-firmware-1.1.0-1.el7.x86_64
175 libmnl-1.0.3-7.el7.x86_64
176 python-pyudev-0.15-9.el7.noarch
177 libnl3-cli-3.2.28-4.el7.x86_64
178 plymouth-scripts-0.8.9-0.31.20140113.0.1.el7.x86_64
179 p11-kit-0.23.5-3.el7.x86_64
180 libedit-3.0-12.20121213cvs.el7.x86_64
181 rhnsd-5.0.13-10.0.1.el7.x86_64
182 libnl-1.1.4-3.el7.x86_64
183 yum-rhn-plugin-2.0.1-10.0.1.el7.noarch
184 newt-0.52.15-4.el7.x86_64
185 sysvinit-tools-2.88-14.dsf.el7.x86_64
186 libestr-0.1.9-2.el7.x86_64
187 yajl-2.0.4-4.el7.x86_64
188 libtiff-4.0.3-27.el7_3.x86_64
189 hostname-3.13-3.el7.x86_64
190 lzo-2.06-8.el7.x86_64
191 libnetfilter_conntrack-1.0.6-1.el7_3.x86_64
192 iproute-4.11.0-14.el7.x86_64
193 boost-thread-1.53.0-27.el7.x86_64
194 less-458-9.el7.x86_64

195 libdb-utils-5.3.21-24.el7.x86_64
196 bzip2-1.0.6-13.el7.x86_64
197 libpng-1.5.13-7.el7_2.x86_64
198 mozjs17-17.0.0-20.el7.x86_64
199 libconfig-1.4.9-5.el7.x86_64
200 libproxy-0.4.11-11.el7.x86_64
201 gmp-6.0.0-15.el7.x86_64
202 libverto-0.2.5-4.el7.x86_64
203 pth-2.0.7-23.el7.x86_64
204 libyaml-0.1.4-11.el7_0.x86_64
205 lsscsi-0.27-6.el7.x86_64
206 libtasn1-4.10-1.el7.x86_64
207 cracklib-2.9.0-11.el7.x86_64
208 pkgconfig-0.27.1-4.el7.x86_64
209 newt-python-0.52.15-4.el7.x86_64
210 cyrus-sasl-lib-2.1.26-23.el7.x86_64
211 pygobject2-2.28.6-11.el7.x86_64
212 cracklib-dicts-2.9.0-11.el7.x86_64
213 pam-1.1.8-22.el7.x86_64
214 python-augeas-0.5.0-2.el7.noarch
215 python-iniparse-0.4-9.el7.noarch
216 gettext-0.19.8.1-2.el7.x86_64
217 python-gobject-base-3.22.0-1.el7_4.1.x86_64
218 python-chardet-2.2.1-1.el7_1.noarch
219 python-configobj-4.7.2-7.el7.noarch
220 PyYAML-3.10-11.el7.x86_64
221 m2crypto-0.21.1-17.el7.x86_64
222 fipscheck-lib-1.4.1-6.el7.x86_64
223 xmlrpc-c-client-1.32.5-1905.svn2451.el7.x86_64
224 bash-4.2.46-31.el7.x86_64
225 nss-util-3.36.0-1.1.el7_6.x86_64
226 libselinux-2.5-14.1.el7.x86_64
227 audit-libs-2.8.4-4.el7.x86_64
228 libcom_err-1.42.9-13.el7.x86_64
229 augeas-libs-1.4.0-6.el7_6.1.x86_64
230 libstdc++-4.8.5-36.0.1.el7.x86_64
231 perl-libs-5.16.3-294.el7_6.x86_64
232 libsemanage-2.5-14.el7.x86_64
233 file-libs-5.11-35.el7.x86_64
234 findutils-4.5.11-6.el7.x86_64
235 setup-2.8.71-10.el7.noarch
236 ethtool-4.8-9.el7.x86_64

237 libjpeg-turbo-1.2.90-6.el7.x86_64
238 dyninst-9.3.1-2.el7.x86_64
239 e2fsprogs-libs-1.42.9-13.el7.x86_64
240 kmod-libs-20-23.0.1.el7.x86_64
241 vim-minimal-7.4.160-5.el7.x86_64
242 ca-certificates-2018.2.22-70.0.el7_5.noarch
243 coreutils-8.22-23.0.1.el7.x86_64
244 libblkid-2.23.2-59.el7.x86_64
245 python-2.7.5-76.0.1.el7.x86_64
246 libmount-2.23.2-59.el7.x86_64
247 grubby-8.28-25.0.1.el7.x86_64
248 pyOpenSSL-0.13.1-4.el7.x86_64
249 python-dmidecode-3.12.2-3.el7.x86_64
250 python-urlgrabber-3.10-9.el7.noarch
251 python-ethtool-0.8-7.el7.x86_64
252 sos-3.6-13.0.1.el7_6.noarch
253 gdb-7.6.1-114.el7.x86_64
254 openssl-1.0.2k-16.0.1.el7.x86_64
255 libtirpc-0.2.4-0.15.el7.x86_64
256 oracle-logos-70.0.3-4.0.9.el7.noarch
257 nss-3.36.0-7.1.el7_6.x86_64
258 nss-tools-3.36.0-7.1.el7_6.x86_64
259 libcurl-7.29.0-51.el7.x86_64
260 rpm-libs-4.11.3-35.el7.x86_64
261 openldap-2.4.44-21.el7_6.x86_64
262 rpm-build-libs-4.11.3-35.el7.x86_64
263 yum-3.4.3-161.0.1.el7.noarch
264 oraclelinux-release-el7-1.0-5.el7.x86_64
265 iptables-1.4.21-28.el7.x86_64
266 libfprint-0.8.2-1.el7.x86_64
267 kernel-tools-libs-3.10.0-957.5.1.el7.x86_64
268 iw-4.3-2.el7.x86_64
269 ipset-libs-6.38-3.el7_6.x86_64
270 lm_sensors-libs-3.4.0-6.20160601gitf9185e5.el7.x86_64
271 procps-ng-3.3.10-23.el7.x86_64
272 device-mapper-1.02.149-10.0.1.el7_6.3.x86_64
273 device-mapper-libs-1.02.149-10.0.1.el7_6.3.x86_64
274 dracut-033-554.0.3.el7.x86_64
275 elfutils-libs-0.172-2.el7.x86_64
276 dbus-libs-1.10.24-12.0.1.el7.x86_64
277 dbus-1.10.24-12.0.1.el7.x86_64
278 satyr-0.13-15.el7.x86_64

279 initscripts-9.49.46-1.0.1.el7.x86_64
280 device-mapper-event-libs-1.02.149-10.0.1.el7_6.3.x86_64
281 libreport-2.1.11-42.0.1.el7.x86_64
282 grub2-tools-minimal-2.02-0.76.0.3.el7.1.x86_64
283 libstoragemgmt-python-1.6.2-4.el7.noarch
284 libstoragemgmt-python-clibs-1.6.2-4.el7.x86_64
285 cronie-1.4.11-20.el7_6.x86_64
286 dhcp-libs-4.2.5-68.0.1.el7_5.1.x86_64
287 selinux-policy-3.13.1-229.0.3.el7_6.9.noarch
288 dhclient-4.2.5-68.0.1.el7_5.1.x86_64
289 kexec-tools-2.0.15-21.0.3.el7.x86_64
290 xdg-utils-1.1.0-0.17.20120809git.el7.noarch
291 grub2-pc-2.02-0.76.0.3.el7.1.x86_64
292 libreport-web-2.1.11-42.0.1.el7.x86_64
293 abrt-2.1.11-52.0.1.el7.x86_64
294 abrt-python-2.1.11-52.0.1.el7.x86_64
295 abrt-addon-vmcore-2.1.11-52.0.1.el7.x86_64
296 abrt-tui-2.1.11-52.0.1.el7.x86_64
297 device-mapper-event-1.02.149-10.0.1.el7_6.3.x86_64
298 lvm2-2.02.180-10.0.1.el7_6.3.x86_64
299 NetworkManager-1.12.0-8.el7_6.x86_64
300 fprintd-0.8.1-2.el7.x86_64
301 openssh-clients-7.4p1-16.el7.x86_64
302 abrt-addon-python-2.1.11-52.0.1.el7.x86_64
303 authconfig-6.2.8-30.el7.x86_64
304 elfutils-0.172-2.el7.x86_64
305 abrt-cli-2.1.11-52.0.1.el7.x86_64
306 kernel-uek-4.1.12-112.16.4.el7uek.x86_64
307 libsss_nss_idmap-1.16.2-13.el7_6.5.x86_64
308 pciutils-3.5.1-3.el7.x86_64
309 kernel-uek-4.1.12-124.26.1.el7uek.x86_64
310 kbd-legacy-1.15.5-15.el7.noarch
311 vim-filesystem-7.4.160-5.el7.x86_64
312 rsync-3.1.2-4.el7.x86_64
313 libX11-common-1.6.5-2.el7.noarch
314 gdk-pixbuf2-2.36.12-3.el7.x86_64
315 firewalld-0.5.3-5.el7.noarch
316 vdo-6.1.1.125-3.el7.x86_64
317 ntpdate-4.2.6p5-28.0.1.el7.x86_64
318 abrt-console-notification-2.1.11-52.0.1.el7.x86_64
319 fprintd-pam-0.8.1-2.el7.x86_64
320 grub2-2.02-0.76.0.3.el7.1.x86_64

321 sysstat-10.1.5-17.el7.x86_64
322 rpcbind-0.2.0-47.el7.x86_64
323 tuned-2.10.0-6.el7.noarch
324 yum-langpacks-0.4.2-7.el7.noarch
325 rng-tools-6.3.1-3.el7.x86_64
326 chrony-3.2-2.0.1.el7.x86_64
327 cyrus-sasl-plain-2.1.26-23.el7.x86_64
328 irqbalance-1.0.8-1.el7.x86_64
329 mtr-0.85-7.el7.x86_64
330 mdadm-4.1-rc1_2.el7.x86_64
331 btrfs-progs-4.9.1-1.0.2.el7.x86_64
332 hwdata-0.252-9.1.el7.x86_64
333 tcsh-6.18.01-15.el7.x86_64
334 ledmon-0.90-1.el7.x86_64
335 cryptsetup-2.0.3-3.el7.x86_64
336 hunspell-en-0.20121024-6.el7.noarch
337 kernel-tools-3.10.0-957.5.1.el7.x86_64
338 rhn-check-2.0.2-24.0.5.el7.x86_64
339 bc-1.06.95-13.el7.x86_64
340 systemtap-runtime-3.3-3.el7.x86_64
341 unzip-6.0-19.el7.x86_64
342 gobject-introspection-1.56.1-1.el7.x86_64
343 time-1.7-45.el7.x86_64
344 libselinux-python-2.5-14.1.el7.x86_64
345 xfsprogs-4.5.0-18.0.1.el7.x86_64
346 alsa-lib-1.1.6-2.el7.x86_64
347 mariadb-libs-5.5.60-1.el7_5.x86_64
348 rdate-1.4-25.el7.x86_64
349 sg3_utils-1.37-17.el7.x86_64
350 bridge-utils-1.5-9.el7.x86_64
351 iprutils-2.4.16.1-1.el7.x86_64
352 libgomp-4.8.5-36.0.1.el7.x86_64
353 scl-utils-20130529-19.el7.x86_64
354 dosfstools-3.0.20-10.el7.x86_64
355 rootfiles-8.1-11.el7.noarch
356 iwl3160-firmware-22.0.7.0-999.1.el7.noarch
357 ivtv-firmware-20080701-26.el7.noarch
358 iwl7265-firmware-22.0.7.0-999.1.el7.noarch
359 iwl105-firmware-18.168.6.1-999.1.el7.noarch
360 iwl7260-firmware-22.0.7.0-999.1.el7.noarch
361 iwl100-firmware-39.31.5.1-999.1.el7.noarch
362 man-pages-3.53-5.el7.noarch

363 iwl5000-firmware-8.83.5.1_1-999.1.el7.noarch
364 iwl6000-firmware-9.221.4.1-999.1.el7.noarch
365 iwl6000g2b-firmware-17.168.5.2-999.1.el7.noarch
366 words-3.0-22.el7.noarch
367 iwl4965-firmware-228.61.2.24-999.1.el7.noarch
368 gpg-pubkey-ec551f03-53619141
369 iwl2000-firmware-18.168.6.1-999.1.el7.noarch
370 libgcc-4.8.5-36.0.1.el7.x86_64
371 NetworkManager-config-server-1.12.0-8.el7_6.noarch
372 redhat-release-server-7.6-4.0.1.el7.x86_64
373 libreport-filesystem-2.1.11-42.0.1.el7.x86_64
374 kbd-misc-1.15.5-15.el7.noarch
375 nss-softokn-freebl-3.36.0-5.0.1.el7_5.x86_64
376 glibc-2.17-260.0.15.el7_6.3.x86_64
377 libsepol-2.5-10.el7.x86_64
378 python-pycurl-7.19.0-19.el7.x86_64
379 langtable-0.0.31-3.el7.noarch
380 libuuid-2.23.2-59.el7.x86_64
381 libndp-1.2-7.el7.x86_64
382 langtable-data-0.0.31-3.el7.noarch
383 oraclelinux-release-7.6-1.0.15.el7.x86_64
384 libpcap-1.5.3-11.el7.x86_64
385 perl-5.16.3-294.el7_6.x86_64
386 ustr-1.0.4-16.el7.x86_64
387 file-5.11-35.el7.x86_64
388 sgpio-1.2.0.10-13.el7.x86_64
389 nss-softokn-3.36.0-5.0.1.el7_5.x86_64
390 jasper-libs-1.900.1-33.el7.x86_64
391 freetype-2.8-12.el7_6.1.x86_64
392 tar-1.26-35.el7.x86_64
393 chkconfig-1.7.4-1.el7.x86_64
394 krb5-libs-1.15.1-37.el7_6.x86_64
395 grep-2.20-3.el7.x86_64
396 shadow-utils-4.1.5.1-25.el7.x86_64
397 libcap-2.22-9.el7.x86_64
398 linux-firmware-20181031-999.1.git1baa3486.el7.noarch
399 crontabs-1.11-6.20121102git.el7.noarch
400 python-linux-procfs-0.4.9-4.el7.noarch
401 dbus-python-1.1.1-9.el7.x86_64
402 libgcrypt-1.5.3-14.el7.x86_64
403 python2-futures-3.1.1-5.el7.noarch
404 libnl3-3.2.28-4.el7.x86_64

405 bind-libs-lite-9.9.4-73.el7_6.x86_64
406 os-prober-1.58-9.0.1.el7.x86_64
407 tcp_wrappers-libs-7.6-77.el7.x86_64
408 binutils-2.27-34.base.0.1.el7.x86_64
409 openssh-7.4p1-16.el7.x86_64
410 diffutils-3.3-4.el7.x86_64
411 nss-sysinit-3.36.0-7.1.el7_6.x86_64
412 xz-5.2.2-1.el7.x86_64
413 curl-7.29.0-51.el7.x86_64
414 hunspell-en-US-0.20121024-6.el7.noarch
415 gnupg2-2.0.22-5.el7_5.x86_64
416 perl-HTTP-Tiny-0.033-3.el7.noarch
417 yum-utils-1.1.31-50.0.1.el7.noarch
418 perl-Text-ParseWords-3.29-4.el7.noarch
419 pixman-0.34.0-1.el7.x86_64
420 libpciaccess-0.14-1.el7.x86_64
421 libsss_idmap-1.16.2-13.el7_6.5.x86_64
422 perl-Exporter-5.68-3.el7.noarch
423 ipset-6.38-3.el7_6.x86_64
424 perl-Scalar-List-Utils-1.27-248.el7.x86_64
425 kpartx-0.4.9-123.el7.x86_64
426 perl-File-Temp-0.23.01-3.el7.noarch
427 cryptsetup-libs-2.0.3-3.el7.x86_64
428 ebtables-2.0.10-16.el7.x86_64
429 perl-Getopt-Long-2.40-3.el7.noarch
430 systemd-libs-219-62.0.4.el7_6.5.x86_64
431 alsa-firmware-1.0.28-2.el7.noarch
432 elfutils-default-yama-scope-0.172-2.el7.noarch
433 plymouth-core-libs-0.8.9-0.31.20140113.0.1.el7.x86_64
434 libassuan-2.1.0-3.el7.x86_64
435 systemd-sysv-219-62.0.4.el7_6.5.x86_64
436 python-gudev-147.2-7.el7.x86_64
437 libunistring-0.9.3-9.el7.x86_64
438 abrt-libs-2.1.11-52.0.1.el7.x86_64
439 slang-2.2.4-11.el7.x86_64
440 libstoragemgmt-1.6.2-4.el7.x86_64
441 jansson-2.10-1.el7.x86_64
442 NetworkManager-libnm-1.12.0-8.el7_6.x86_64
443 jbigkit-libs-2.0-11.el7.x86_64
444 libaio-0.3.109-13.el7.x86_64
445 dhcp-common-4.2.5-68.0.1.el7_5.1.x86_64
446 device-mapper-persistent-data-0.7.3-3.el7.x86_64

447 grub2-tools-extra-2.02-0.76.0.3.el7.1.x86_64
448 libreport-plugin-ureport-2.1.11-42.0.1.el7.x86_64
449 pm-utils-1.4.1-27.el7.x86_64
450 abrt-addon-kerneloops-2.1.11-52.0.1.el7.x86_64
451 tcp_wrappers-7.6-77.el7.x86_64
452 abrt-addon-xorg-2.1.11-52.0.1.el7.x86_64
453 attr-2.4.46-13.el7.x86_64
454 wpa_supplicant-2.6-12.el7.x86_64
455 pinentry-0.8.1-17.el7.x86_64
456 systemd-python-219-62.0.4.el7_6.5.x86_64
457 openssh-server-7.4p1-16.el7.x86_64
458 abrt-addon-ccpp-2.1.11-52.0.1.el7.x86_64
459 mlocate-0.26-8.el7.x86_64
460 ncurses-5.9-14.20130511.el7_4.x86_64
461 kernel-uek-firmware-4.1.12-124.26.1.el7uek.noarch
462 snappy-1.1.0-3.el7.x86_64
463 firewalld-filesystem-0.5.3-5.el7.noarch
464 libX11-1.6.5-2.el7.x86_64
465 libseccomp-2.3.1-3.el7.x86_64
466 kbd-1.15.5-15.el7.x86_64
467 NetworkManager-team-1.12.0-8.el7_6.x86_64
468 libsysfs-2.1.0-16.el7.x86_64
469 selinux-policy-targeted-3.13.1-229.0.3.el7_6.9.noarch
470 tcpdump-4.9.2-3.el7.x86_64
471 audit-2.8.4-4.el7.x86_64
472 crda-3.18_2018.05.31-4.el7.x86_64
473 xfsdump-3.1.7-1.el7.x86_64
474 net-tools-2.0-0.24.20131004git.el7.x86_64
475 python-decorator-3.4.0-3.el7.noarch
476 libgudev1-219-62.0.4.el7_6.5.x86_64
477 parted-3.1-29.0.1.el7.x86_64
478 libpwquality-1.2.3-5.el7.x86_64
479 sudo-1.8.23-3.el7.x86_64
480 zip-3.0-11.el7.x86_64
481 python-six-1.9.0-2.el7.noarch
482 libcroco-0.6.12-4.el7.x86_64
483 ed-1.9-4.el7.x86_64
484 gettext-libs-0.19.8.1-2.el7.x86_64
485 logrotate-3.8.6-17.el7.x86_64
486 traceroute-2.0.22-2.el7.x86_64
487 yum-metadata-parser-1.1.4-10.el7.x86_64
488 wget-1.14-18.el7.x86_64

489 uname26-1.0-1.el7.x86_64
490 python-kitchen-1.1.1-5.el7.noarch
491 lsof-4.87-6.el7.x86_64
492 pyliblzma-0.5.3-11.el7.x86_64
493 libfastjson-0.99.4-3.el7.x86_64
494 langtable-python-0.0.31-3.el7.noarch
495 man-pages-overrides-7.6.2-1.el7.x86_64
496 python-schedutils-0.4-6.el7.x86_64
497 emacs-filesystem-24.3-22.el7.noarch
498 iwl3945-firmware-15.32.2.9-999.1.el7.noarch
499 redhat-indexhtml-7-13.0.1.el7.noarch
500 mailx-12.5-19.el7.x86_64
501 iwl5150-firmware-8.24.2.2-999.1.el7.noarch
502 rsyslog-8.24.0-34.el7.x86_64
503 fipscheck-1.4.1-6.el7.x86_64
504 bind-license-9.9.4-73.el7_6.noarch
505 tzdata-2018i-1.el7.noarch
506 libuser-0.60-9.el7.x86_64

Docker Repository Requirements


The following manifest includes the current list of docker containers used by OCCNE Common
services as of 12 Mar 2019 that have been tested with a fully configured system.

Table A-2 Docker Repository Requirements

Num Docker Name/Version


1 docker.elastic.co/elasticsearch/elasticsearch-oss:6.6.0
2 busybox:latest
3 quay.io/pires/docker-elasticsearch-curator:5.5.4
4 justwatch/elasticsearch_exporter:1.0.2
5 gcr.io/google-containers/fluentd-elasticsearch:v2.3.2
6 grafana/grafana:6.0.0
7 appropriate/curl:latest
8 busybox:1.30.0
9 docker.elastic.co/kibana/kibana-oss:6.6.0
10 metallb/controller:v0.7.3
11 metallb/speaker:v0.7.3
12 prom/alertmanager:v0.15.3
13 jimmidyson/configmap-reload:v0.2.2
14 quay.io/coreos/kube-state-metrics:v1.5.0
15 prom/node-exporter:v0.17.0

16 prom/prometheus:v2.7.1
17 prom/pushgateway:v0.6.0
18 quay.io/prometheus/node-exporter:v0.17.0
19 jaegertracing/example-hotrod:latest
20 jaegertracing/jaeger-cassandra-schema:latest
21 jaegertracing/jaeger-agent:latest
22 jaegertracing/jaeger-collector:latest
23 jaegertracing/jaeger-query:latest
24 jaegertracing/spark-dependencies:latest
25 quay.io/external_storage/local-volume-provisioner:v2.2.0

OCCNE YUM Repository Configuration


To perform an installation without internet access, create a local YUM mirror with the OL7
latest, epel, and addons repositories used by the "OS installation" process. Additionally a local
repository is needed to hold the version of the docker-ce RPM used by the "Kubernetes
installer" process. Repository files will need to be created to reference these local YUM
repositories, and placed on the necessary machines (those which run the OCCNE installation
Docker instances).
Pre-requisites
1. Local YUM mirror repository for the OL7 'latest', 'epel', and 'addons' repositories. Directions here: https://www.oracle.com/technetwork/articles/servers-storage-admin/yum-repo-setup-1659167.html
2. Local YUM repository holding the required docker-ce RPM
3. Subscribe to following channels while creating the yum mirror from uln:
[ol7_x86_64_UEKR5]
[ol7_x86_64_ksplice]
[ol7_x86_64_latest]
[ol7_x86_64_addons]
[ol7_x86_64_developer]

For reference, view the yum mirroring instructions here: https://www.oracle.com/technetwork/articles/servers-storage-admin/yum-repo-setup-1659167.html
1. Create OL7 repository mirror repo.
Below is an example of a repository file providing the details on a mirror with the
necessary repositories. This repository file would be placed on the machine that will run
the OCCNE deployment containers.
/etc/yum.repos.d/ol7-mirror.repo

[local_ol7_x86_64_UEKR5]
name=Unbreakable Enterprise Kernel Release 5 for Oracle Linux 7 (x86_64)
baseurl=http://10.75.155.195/yum/OracleLinux/OL7/UEKR5/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
enabled=1

proxy=_none_

[local_ol7_x86_64_latest]
name=Oracle Linux 7 Latest (x86_64)
baseurl=http://10.75.155.195/yum/OracleLinux/OL7/latest/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
enabled=1
proxy=_none_

[local_ol7_x86_64_addons]
name=Oracle Linux 7 Addons (x86_64)
baseurl=http://10.75.155.195/yum/OracleLinux/OL7/addons/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
enabled=1
proxy=_none_

[local_ol7_x86_64_ksplice]
name=Ksplice for Oracle Linux 7 (x86_64)
baseurl=http://10.75.155.195/yum/OracleLinux/OL7/ksplice/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
enabled=1
proxy=_none_

[local_ol7_x86_64_developer]
name=Packages for creating test and development environments for Oracle
Linux 7 (x86_64)
baseurl=http://10.75.155.195/yum/OracleLinux/OL7/developer/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
enabled=1
proxy=_none_

[local_ol7_x86_64_developer_EPEL]
name=EPEL Packages for creating test and development environments for Oracle
Linux 7 (x86_64)
baseurl=http://10.75.155.195/yum/OracleLinux/OL7/developer/EPEL/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
enabled=1
proxy=_none_
2. Create Docker CE repository repo.
Below is an example of a repository file providing the details on a repository with the
necessary docker-ce package.
/etc/yum.repos.d/docker-ce-stable.repo

[local_docker-ce-stable]
name=Docker CE Stable (x86_64)
baseurl=http://10.75.155.195/yum/centos/7/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
enabled=1

The docker RPM placed into the repository should be docker-ce version 18.06.1.ce-3.el7 for x86_64, downloaded from: https://download.docker.com/linux/centos/7/x86_64/stable
The gpg key for the RPM can be downloaded from: https://download.docker.com/linux/centos/gpg

OCCNE HTTP Repository Configuration


Introduction
To perform an installation without the system needing access to the internet, a local HTTP
repository must be created and provisioned with the necessary files. These files are used to
provide the binaries for Kubernetes installation, as well as the Helm charts used during
Common Services installation.
Prerequisites
1. Docker is set up and docker commands can be run by the target system.
2. An HTTP server that is reachable by the target system, for example Nginx running in a docker container:
   $ docker run --name mynginx1 -p <port>:<port> -d nginx

   More information on configuring and installing Nginx using docker can be found here:
   https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-docker/
   OR
   Use the html directory of the Apache HTTP server created while setting up the yum mirror to perform the tasks listed below. Note: Create new directories for the kubernetes binaries and helm charts in the html folder.

Procedure Steps

Table A-3 Steps to configure OCCNE HTTP Repository

Steps Procedure Description


1. Retrieve Kubernetes Binaries
   The Kubernetes installer requires access to an HTTP server from which it can download the proper version of a set of binary files. To provision an internal HTTP repository, one will need to obtain these files from the internet and place them at a known location on the internal HTTP server.
   The following script retrieves the proper binaries and places them in a directory named 'binaries' under the command-line specified directory. This 'binaries' directory then needs to be placed on the HTTP server where it can be served up, with the URL identified in the cluster's hosts.ini inventory file (see below).
   deploy/retrieve_k8s_bins.sh

#!/bin/bash
################################################################################
#                                                                              #
# Copyright (c) 2019 Oracle and/or its affiliates. All rights reserved.        #
#                                                                              #
################################################################################

usage() {
    echo "Retrieve kubespray binaries for a private HTTP repo." 2>&1
    echo "Expected 1 argument: webroot-directory " 2>&1
    exit 1
}

[[ "$#" -ne "1" ]] && usage

#
# Kubespray Binaries

kube_version='v1.12.5'         # k8s_install/kubespray/roles/download/defaults/main.yaml
kubeadm_version=$kube_version  # k8s_install/kubespray/roles/download/defaults/main.yaml
image_arch='amd64'             # k8s_install/kubespray/roles/download/defaults/main.yaml
etcd_version='v3.2.24'         # k8s_install/kubespray/roles/download/defaults/main.yaml
cni_version='v0.6.0'           # k8s_install/kubespray/roles/download/defaults/main.yaml

startdir=$pwd

mkdir -p $1/binaries/$kube_version

wget -P $1/binaries/$kube_version https://storage.googleapis.com/kubernetes-release/release/${kubeadm_version}/bin/linux/${image_arch}/kubeadm
wget -P $1/binaries/$kube_version https://storage.googleapis.com/kubernetes-release/release/${kube_version}/bin/linux/amd64/hyperkube
wget -P $1/binaries https://github.com/coreos/etcd/releases/download/${etcd_version}/etcd-${etcd_version}-linux-amd64.tar.gz
wget -P $1/binaries https://github.com/containernetworking/plugins/releases/download/$cni_version/cni-plugins-${image_arch}-${cni_version}.tgz

2. Run the script


$ retrieve_k8s_bins.sh <directoryname>


3. Retrieve Helm binaries and charts
   The Configuration installer requires access to an HTTP server from which it can download the proper version of a set of Helm charts for the common services. To provision an internal HTTP repository, one will need to obtain these charts from the internet and place them at a known location on the internal HTTP server.
   deploy/helm_images.txt

   ################################################################################
   #                                                                              #
   # Copyright (c) 2019 Oracle and/or its affiliates. All rights reserved.        #
   #                                                                              #
   ################################################################################

   # chart-name chart-version
   stable/elasticsearch 1.27.2
   stable/elasticsearch-curator 1.2.1
   stable/elasticsearch-exporter 1.1.2
   stable/fluentd-elasticsearch 2.0.7
   stable/grafana 3.3.8
   stable/kibana 3.0.0
   stable/metallb 0.8.4
   stable/prometheus 8.8.0
   stable/prometheus-node-exporter 1.3.0
   stable/metrics-server 2.5.1
   incubator/jaeger 0.8.3

   # this one is part of the configure code-base, so not pulled. There is an image associated in the docker image repo.
   # storage/occne-local/helm/provisioner 2.3.0


4. Retrieve the proper Helm binary
   The following script retrieves the proper Helm binary, runs it to retrieve the necessary charts, and places them in a directory named 'charts' under the command-line specified directory. This 'charts' directory then needs to be placed on the HTTP server where it can be served up, with the URL identified in the cluster's hosts.ini inventory file (see below).
   deploy/retrieve_helm.sh

#!/bin/bash
################################################################################
#                                                                              #
# Copyright (c) 2019 Oracle and/or its affiliates. All rights reserved.        #
#                                                                              #
################################################################################

usage() {
    echo "Retrieve helm charts for a private HTTP repo." 2>&1
    echo "Expected 1 argument: webroot-directory " 2>&1
    echo "run with image list piped in: $0 webroot-directory < helm_images.txt" 2>&1
    exit 1
}

[[ "$#" -ne "1" ]] && usage

startdir=$PWD
mkdir -p $1/charts

# helm_version='v2.11.0'   # k8s_install/kubespray/roles/download/defaults/main.yaml configure/readme.md
helm_version='v2.9.1'      # configure/Dockerfile (and in environment)

# retrieve the helm binary
wget https://storage.googleapis.com/kubernetes-helm/helm-${helm_version}-linux-amd64.tar.gz

# extract helm to ./helm/
tar -xvf helm-${helm_version}-linux-amd64.tar.gz
rm -f helm-${helm_version}-linux-amd64.tar.gz
mv linux-amd64 helm
cd helm

# initialize helm, add repositories, and update

helm init --client-only

helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
helm repo add kiwigrid https://kiwigrid.github.io
helm repo update

# fetch archives of helm charts and place them in the proper directory for the
# local HTTP repository
HELMCHART_DIR=$1/charts
echo "fetching charts..."
# regular expression to match for valid line (variable as in-line regex is
# sometimes parsed differently in different bash versions)
re='^(\S+)\s+(\S+)'
while read line; do
    if [[ ${line} =~ ^'#'(.*) ]]; then
        # comment, just echo it
        echo "${BASH_REMATCH[0]}"
    elif [[ ${line} =~ ^'`'(.*) ]]; then
        # markdown code delimiter, ignore
        :
    elif [[ ${line} =~ ${re} ]]; then
        echo "Retrieving chart='${BASH_REMATCH[1]}' version='${BASH_REMATCH[2]}'"
        helm fetch ${BASH_REMATCH[1]} --version=${BASH_REMATCH[2]} -d ${HELMCHART_DIR}
    fi
done

echo "completed fetching charts"

cd $startdir

5. Run the script


$ retrieve_helm.sh <directoryname> < helm_images.txt
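Once the 'charts' directory has been copied to the HTTP server webroot, a quick spot check is to request one of the fetched chart archives. This is only a sketch; the host name winterfell and port 8082 are the example values used in the next step, and elasticsearch-1.27.2.tgz is the archive produced by fetching stable/elasticsearch 1.27.2 above:

$ curl -sI http://winterfell:8082/charts/elasticsearch-1.27.2.tgz    # expect an HTTP 200 response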


6. Update inventory file with URLs

The hosts.ini inventory file for the cluster needs to have a few variables set in the [occne:vars] section to direct the installation logic to the repository directories populated above. In this example the HTTP server is winterfell on port 8082.
Note: the helm repo URL has a trailing slash; the k8s repo URL does NOT.
hosts.ini

...
[occne:vars]
...
occne_k8s_binary_repo='http://winterfell:8082/binaries'
occne_helm_stable_repo_url='http://winterfell:8082/charts/'
...

OCCNE Docker Image Registry Configuration


Introduction
To perform an installation without the system needing access to the internet, a local Docker
registry must be created, and provisioned with the necessary docker images. These docker
images are used to populate the Kubernetes pods once Kubernetes is installed, as well as
providing the services installed during Common Services installation.

Prerequisites
1. Docker is installed and docker commands can be run
2. Create a local docker registry accessible from the target of the installation:
$ docker run -d -p <port>:<port> --restart=always --name <registryname> registry:2

(For more directions refer to: https://docs.docker.com/registry/deploying/)

3. Make sure the docker registry is running under the registry name provided:
$ docker ps
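As a concrete sketch of the prerequisite commands above (the port 5000 and the name occne-registry are example values, not values mandated by this guide):

$ docker run -d -p 5000:5000 --restart=always --name occne-registry registry:2
$ docker ps --filter name=occne-registry
$ curl http://localhost:5000/v2/_catalog      # a fresh registry returns {"repositories":[]}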

References
https://docs.docker.com/registry/deploying/
https://docs.docker.com/registry/configuration/

Procedure Steps

Table A-4 Steps to configure OCCNE Docker Image Registry

Steps Procedure Description


1. Provision the registry with the necessary images

On a machine that can reach the internet AND reach the registry, populate the registry with the following images. The images are listed in the text file deploy/docker_images.txt, included here; get the file and put it in a docker_images.txt file.
################################################################################
#                                                                              #
# Copyright (c) 2019 Oracle and/or its affiliates. All rights reserved.        #
#                                                                              #
################################################################################

#
# Kubespray Images

k8s.gcr.io/addon-resizer:1.8.3
coredns/coredns:1.2.6
gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.3.0
quay.io/calico/kube-controllers:v3.1.3
quay.io/calico/node:v3.1.3
quay.io/calico/cni:v3.1.3
quay.io/calico/ctl:v3.1.3
gcr.io/google-containers/kube-apiserver:v1.12.5
gcr.io/google-containers/kube-controller-manager:v1.12.5
gcr.io/google-containers/kube-proxy:v1.12.5
gcr.io/google-containers/kube-scheduler:v1.12.5
nginx:1.13
quay.io/external_storage/local-volume-provisioner:v2.2.0
gcr.io/kubernetes-helm/tiller:v2.11.0
lachlanevenson/k8s-helm:v2.11.0
quay.io/jetstack/cert-manager-controller:v0.5.2
gcr.io/google-containers/pause:3.1
gcr.io/google_containers/pause-amd64:3.1
quay.io/coreos/etcd:v3.2.24

#
# Common Services Helm Chart Images

quay.io/pires/docker-elasticsearch-curator:5.5.4
docker.elastic.co/elasticsearch/elasticsearch-oss:6.7.0
justwatch/elasticsearch_exporter:1.0.2
grafana/grafana:6.1.6

docker.elastic.co/kibana/kibana-oss:6.7.0
gcr.io/google-containers/fluentd-elasticsearch:v2.3.2
metallb/controller:v0.7.3
metallb/speaker:v0.7.3
jimmidyson/configmap-reload:v0.2.2
quay.io/coreos/kube-state-metrics:v1.5.0
quay.io/prometheus/node-exporter:v0.17.0
prom/pushgateway:v0.6.0
prom/alertmanager:v0.15.3
prom/prometheus:v2.7.1
jaegertracing/jaeger-agent:1.9.0
jaegertracing/jaeger-collector:1.9.0
jaegertracing/jaeger-query:1.9.0
gcr.io/google_containers/metrics-server-amd64:v0.3.1


2. Create a script named 'retrieve_docker.sh'

deploy/retrieve_docker.sh

#!/bin/bash
################################################################################
#                                                                              #
# Copyright (c) 2019 Oracle and/or its affiliates. All rights reserved.        #
#                                                                              #
################################################################################

usage() {
    echo "Pull, tag, and push images to a private image repo." 2>&1
    echo "Expected 1 argument: repo_name:port " 2>&1
    echo "run with image list piped in: $0 repo_name:port < docker_images.txt" 2>&1
    exit 1
}

[[ "$#" -ne "1" ]] && usage

#
# Kubespray Images

while read line; do
    if [[ $line =~ ^'#'(.*) ]]; then
        # comment, ignore
        echo "${BASH_REMATCH[1]}"
    elif [[ $line =~ ^'`'(.*) ]]; then
        # markdown code delimiter, ignore
        echo "markdown"
    elif [[ ! -z "$line" ]]; then
        echo "Provisioning $line"
        docker pull $line
        docker tag $line $1/$line
        docker push $1/$line
    fi
done

This can be facilitated by using the above script, such as this example:

$ retrieve_docker.sh repositoryaddr:port < occne/deploy/docker_images.txt
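If the private registry is exposed over plain HTTP (as in the registry:2 example above), the docker daemon on the machine running this script may refuse to push until the registry is listed as insecure. The following is a sketch only, using the example address repositoryaddr:port; the actual policy (insecure registry versus TLS) is a site decision and is not mandated by this guide:

# /etc/docker/daemon.json on the machine running retrieve_docker.sh
{
  "insecure-registries": ["repositoryaddr:port"]
}

$ sudo systemctl restart docker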


3. Verify the list of repositories in the docker registry

Access the endpoint <dockerregistryhostip>:<dockerregistryport>/v2/_catalog using a browser, or using curl:
$ curl http://dockerregistryhostip:5000/v2/_catalog

Sample Result:
{"repositories":["coredns/coredns","docker.elastic.co/elasticsearch/elasticsearch-oss",
"docker.elastic.co/kibana/kibana-oss","gcr.io/google-containers/fluentd-elasticsearch",
"gcr.io/google-containers/kube-apiserver","gcr.io/google-containers/kube-controller-manager",
"gcr.io/google-containers/kube-proxy","gcr.io/google-containers/kube-scheduler",
"gcr.io/google-containers/pause","gcr.io/google_containers/cluster-proportional-autoscaler-amd64",
"gcr.io/google_containers/metrics-server-amd64","gcr.io/google_containers/pause-amd64",
"gcr.io/kubernetes-helm/tiller","grafana/grafana","jaegertracing/jaeger-agent",
"jaegertracing/jaeger-collector","jaegertracing/jaeger-query","jimmidyson/configmap-reload",
"justwatch/elasticsearch_exporter","k8s.gcr.io/addon-resizer","lachlanevenson/k8s-helm",
"metallb/controller","metallb/speaker","nginx","prom/alertmanager","prom/prometheus",
"prom/pushgateway","quay.io/calico/cni","quay.io/calico/ctl","quay.io/calico/kube-controllers",
"quay.io/calico/node","quay.io/coreos/etcd","quay.io/coreos/kube-state-metrics",
"quay.io/external_storage/local-volume-provisioner","quay.io/jetstack/cert-manager-controller",
"quay.io/pires/docker-elasticsearch-curator","quay.io/prometheus/node-exporter"]}


4. Set hosts.ini variables

The hosts.ini inventory file for the cluster needs to have a few variables set in the [occne:vars] section to direct the installation logic to the registry. These variables need to be set to match your docker registry configuration:
hosts.ini

...
[occne:vars]
...
occne_private_registry=winterfell
occne_private_registry_address='10.75.216.114'
occne_private_registry_port=5002
occne_helm_images_repo='winterfell:5002'
...
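As a quick sanity check (a sketch only, using the example registry values above; nginx:1.13 is one of the images pushed by retrieve_docker.sh), a host that can resolve and reach the registry should be able to pull a retagged image back out of it:

$ docker pull winterfell:5002/nginx:1.13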

5. If an error is encountered during execution of the retrieve_docker.sh script

If a 500 error with a message stating 'no space left' is encountered during a run of the bash script listed above, use the following commands and re-run to see if the error is fixed:
Docker clean up commands
$ docker ps --filter status=dead --filter status=exited -aq | xargs -r docker rm -v
$ docker images --no-trunc | grep '<none>' | awk '{ print $3 }' | xargs -r docker rmi
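On Docker 1.13 and later, docker system prune performs a similar cleanup of stopped containers and dangling images in a single command; this is an optional alternative, not a step required by this guide:

$ docker system prune -f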

B
Reference Procedures

Inventory File Template


The hosts.ini file contains the inventory used by the various OCCNE deployment containers
that will instantiate the OCCNE cluster.

Template example
The inventory is composed of multiple groups (indicated by bracketed strings):
• local: OCCNE ansible use. Do not modify.
• occne: list of servers in the OCCNE cluster that will be installed by the os_install
container.
• k8s-cluster: list of servers in the kubernetes cluster.
• kube-master: list of servers that will be provisioned as kubernetes master nodes by the
k8s_install container.
• kube-node: list of servers that will be provisioned as kubernetes worker nodes by the
k8s_install container.
• etcd: list of servers that will be provisioned as part of kubernetes etcd cluster by the
k8s_install container.
• data_store: list of servers that will host the VMs of the MySQL database cluster; the
os_install container will install KVM on them.
• occne:vars: list of occne environment variables. Values for variables are required. See
below for description.

OCCNE Variables

Variable Definitions
occne_cluster_name    k8s cluster name
nfs_host    IP address of the OS install nfs host (the host running the os_install container)
nfs_path    path to the mounted OS install media on the nfs host. This should always be set to /var/occne/
subnet_ipv4    subnet of IP addresses available for hosts in the OCCNE cluster
subnet_cidr    subnet_ipv4 in CIDR notation format
netmask    subnet_ipv4 netmask
broadcast_address    broadcast address on the OCCNE cluster on which the pxe server will listen
default_route    default router in the OCCNE cluster
next_server    IP address of the TFTP server used for pxe boot (the host running the os_install container)
name_server    DNS name server for the OCCNE cluster
ntp_server    NTP server for the OCCNE cluster
http_proxy    HTTP Proxy server
https_proxy    HTTPS Proxy server
occne_private_registry    OCCNE private docker registry
occne_private_registry_address    OCCNE private docker registry address
occne_private_registry_port    OCCNE private docker registry port
metallb_peer_address    address of the BGP router peer that metalLB connects to
metallb_default_pool_protocol    protocol used by metalLB to announce allocated IP addresses
metallb_default_pool_addresses    range of IP addresses to be allocated by metalLB from the default pool
pxe_install_lights_out_usr    iLO user
pxe_install_lights_out_passwd    iLO user password
pxe_config_metrics_persist_size    (optional) Logical volume size for Metrics persistent storage; overrides the default of 500G
pxe_config_es_data_persist_size    (optional) Logical volume size for ElasticSearch data persistent storage; overrides the default of 500G
pxe_config_es_master_persist_size    (optional) Logical volume size for ElasticSearch master persistent storage; overrides the default of 500G

Inventory File Preparation


Introduction
OCCNE Installation automation uses information within an OCCNE Inventory file to provision
servers and virtual machines, install cloud native components, as well as configure all of the
components within the cluster such that they constitute a cluster conformant to the OCCNE
platform specifications. To assist with the creation of the OCCNE Inventory, a boilerplate
OCCNE Inventory is provided. The boilerplate inventory file requires the input of site-specific
information.
This document outlines the procedure for taking the OCCNE Inventory boilerplate and creating
a site specific OCCNE Inventory file usable by the OCCNE Install Procedures.

Inventory File Overview


The inventory file is an Initialization (INI) formatted file. The basic elements of an inventory
file are hosts, properties, and groups.
A host is defined as a Fully Qualified Domain Name (FQDN). Properties are defined as
key=value pairs.
A property applies to a specific host when it appears on the same line as the host.
Square brackets define group names. For example [host_hp_gen_10] defines the group of
physical HP Gen10 machines. There is no explicit "end of group" delimiter, rather group
definitions end at the next group declaration or the end of the file. Groups can not be nested.
A property applies to an entire group when it is defined under a group heading not on the same
line as a host.

Groups of groups are formed using the children keyword. For example, the [occne:children]
creates an occne group comprised of several other groups.
Inline comments are not allowed.
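As an illustrative sketch of these elements (the host name and values below are placeholders, not taken from a real site; the complete boilerplate appears later in this appendix):

[host_hp_gen_10]
k8s-1.example.oracle.com ansible_host=10.0.0.11 ilo=192.168.20.121 mac=aa-bb-cc-dd-ee-01

[occne:vars]
occne_cluster_name=example.oracle.com

[occne:children]
host_hp_gen_10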
The OCCNE Inventory file is composed of several groups:
• host_hp_gen_10: list of all physical hosts in the OCCNE cluster. Each host in this group
must also have several properties defined (outlined below)
– ansible_host: The IP address for the host's teamed primary interface. The occne/
os_install container uses this IP to configure a static IP for a pair of teamed interfaces
when the hosts are provisioned.
– ilo: The IP address of the host's iLO interface. This IP is manually configured as part
of the OCCNE Configure Addresses for RMS iLOs, OA, EBIPA process.
– mac: The MAC address of the host's network bootable interface. This is typically eno5
for Gen10 RMS hardware and eno1 for Gen10 bladed hardware. MAC addresses must
use all lowercase alphanumeric values with a dash as the separator
• host_kernel_virtual: list of all virtual hosts in the OCCNE cluster. Each host in this group
must have the same properties defined as above, with the exception of the ilo property
• occne:children: Do not modify the children of the occne group
• occne:vars: This is a list of variables representing configurable site-specific data. While
some variables are optional, the ones listed in the boilerplate should be defined with valid
values. If a given site does not have applicable data to fill in for a variable, the OCCNE
installation or engineering team should be consulted. Individual variable values are
explained in subsequent sections.
• data_store: list of Storage Hosts
• kube-master: list of Master Node hosts where kubernetes master components run.
• etcd: list of hosts that compose the etcd cluster; set to the same list of nodes as the
kube-master group. Should always be an odd number.
• kube-node: list of Worker Nodes. Worker Nodes are where kubernetes pods run and should
be comprised of the bladed hosts.
• k8s-cluster:children: do not modify the children of k8s-cluster
Data Tier Groups
The MySQL service is comprised of several nodes running on virtual machines on RMS hosts.
This collection of hosts is referred to as the MySQL Cluster. Each host in the MySQL Cluster
requires a NodeID parameter. Each host in the MySQL cluster is required to have a NodeID
value that is unique across the MySQL cluster. Additional parameter range limitations are
outlined below.
• mysqlndb_mgm_nodes: list of MySQL Management nodes. In OCCNE 1.0 this group
consists of three virtual machines distributed equally among the kube-master nodes. These
nodes must have a NodeId parameter defined
– NodeId: Parameter must be unique across the MySQL Cluster and have a value
between 49 and 255.
• mysqlndb_data_nodes: List of MySQL Data nodes. In OCCNE 1.0 this group consists of
four virtual machines distributed equally among the Storage Hosts. Requires a NodeId
parameter.

– NodeId: Parameter must be unique across the MySQL Cluster and have a value
between 1 and 48.
• mysqlndb_sql_nodes: List of MySQL SQL nodes. In OCCNE 1.0 this group consists of two
virtual machines distributed equally among the Storage Hosts. Requires a NodeId
parameter.
– NodeId: Parameter must be unique across the MySQL Cluster and have a value
between 49 and 255.
• mysqlndb_all_nodes:children: Do not modify the children of the mysqlndb_all_nodes group.
• mysqlndb_all_nodes:vars: Do not modify the variables in this group.
Prerequisites
Prior to initiating the procedure steps, the Inventory Boilerplate should be copied to a system
where it can be edited and saved for future use. Eventually the hosts.ini file needs to be
transferred to OCCNE servers.
References
1. Ansible Inventory Intro

Procedure Steps

Table B-1 Procedure for OCCNE Inventory File Preparation

Step # Procedure Description


1. OCCNE Cluster Name

In order to provide each OCCNE host with a unique FQDN, the first step in composing the OCCNE Inventory is to create an OCCNE Cluster domain suffix. The OCCNE Cluster domain suffix starts with a Top-level Domain (TLD). The structure of a TLD is maintained by various government and commercial authorities. Additional domain name levels help identify the cluster and are added to help convey additional meaning. OCCNE suggests adding at least one "ad hoc" identifier and at least one "geographic" and "organizational" identifier.
Geographic and organizational identifiers may be multiple levels deep.
An example OCCNE Cluster Name using the following identifiers is below:
• Ad hoc Identifier: atlantic
• Organizational Identifier: lab1
• Organizational Identifier: research
• Geographical Identifier (State of North Carolina): nc
• Geographical Identifier (Country of United States): us
• TLD: oracle.com
Example OCCNE Cluster name: atlantic.lab1.research.nc.us.oracle.com
2. Create host_hp_gen_10 and host_kernel_virtual group lists

Using the OCCNE Cluster domain suffix created above, fill out the inventory boilerplate with the list of hosts in the host_hp_gen_10 and host_kernel_virtual groups.
The recommended host name prefix for nodes in the host_hp_gen_10 group is "k8s-x" where x is a number 1 to N. Kubernetes "master" and "worker" nodes should not be differentiated using the host name. The recommended host name prefix for nodes in the host_kernel_virtual group is "db-x" where x is a number 1 to N. MySQL Cluster nodes should not be differentiated using host names.


3. Edit occne:vars

Edit the values in the occne:vars group to reflect site specific data. Values in the occne:vars group are defined below:
• occne_cluster_name: set to the OCCNE Cluster Name generated in step 1 above.
• nfs_host: set to the IP of the bastion host
• nfs_path: set to the location of the nfs root created on the bastion host
• subnet_ipv4: set to the subnet of the network used to assign IPs for OCCNE hosts
• subnet_cidr: this does not appear to be used, so it does not need to be included. If it does need to be included, set it to the CIDR notation for the subnet, for example /24
• netmask: set appropriately for the network used to assign IPs for OCCNE hosts
• broadcast_address: set appropriately for the network used to assign IPs for OCCNE hosts
• default_route: set to the IP of the ToR switch
• next_server: set to the IP of the bastion host
• name_server: set to the IP of the bastion host
• ntp_server: set to the IP of the ToR switch
• http_proxy/https_proxy: set the http proxy.
• occne_private_registry: set to the non-FQDN name of the docker registry used by worker nodes to pull docker images from.
Note: It is ok if this name is not in DNS, or if DNS is not available. The IP and Port settings are used to configure this registry on each host, placing the name and IP in each host's /etc/hosts file, ensuring the name resolves to an IP.
• occne_private_registry_address: set to the IP of the docker registry above
• occne_private_registry_port: set to the Port of the docker registry above
• helm_stable_repo_url: set to the url of the local helm repo
• occne_helm_stable_repo_url: set to the url of the local helm repo
• occne_helm_images_repo: set to the url where images referenced in helm charts reside
• pxe_install_lights_out_usr: set to the user name configured for iLO admins on each host in the OCCNE Frame
• pxe_install_lights_out_passwd: set to the password configured for iLO admins on each host in the OCCNE Frame
• occne_k8s_binary_repo: set to the IP of bastion-1 and the port configured
• docker_rh_repo_base_url: set to the URL of the repo containing the docker RPMs
• docker_rh_repo_gpgkey: set to the URL of the gpgkey in the docker yum repo
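Once the site-specific values have been filled in, the inventory structure can be reviewed on any machine with Ansible installed. This is a sketch only, not a step required by this procedure; ansible-inventory is part of Ansible 2.4 and later:

$ ansible-inventory -i hosts.ini --graph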

OCCNE Inventory Boilerplate


################################################################################
# #
# Copyright (c) 2019 Oracle and/or its affiliates. All rights reserved. #
# #
################################################################################

################################################################################
# EXAMPLE OCCNE Cluster hosts.ini file. Defines OCCNE deployment variables
# and targets.

################################################################################
# Definition of the host node local connection for Ansible control,
# do not change
[local]
127.0.0.1 ansible_connection=local

################################################################################
# This is a list of all of the nodes in the targeted deployment system with the
# IP address to use for Ansible control during deployment.
# For bare metal hosts, the IP of the ILO is used for driving reboots.
# Host MAC addresses is used to identify nodes during PXE-boot phase of the
# os_install process.
# MAC addresses must be lowercase and delimited with a dash "-"

[host_hp_gen_10]
k8s-1.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-2.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-3.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-4.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-5.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-6.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-7.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-1.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-2.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx

[host_kernel_virtual]
db-3.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-4.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-5.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-6.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-7.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-8.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-9.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-10.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx

###############################################################################
# Node grouping of which nodes are in the occne system
[occne:children]
host_hp_gen_10
host_kernel_virtual
k8s-cluster
data_store

###############################################################################
# Variables that define the OCCNE environment and specify target configuration.
[occne:vars]

occne_cluster_name=foo.lab.us.oracle.com
nfs_host=10.75.216.xx
nfs_path=/var/occne
subnet_ipv4=10.75.216.0
subnet_cidr=/25
netmask=255.255.255.128
broadcast_address=10.75.216.127
default_route=10.75.216.1
next_server=10.75.216.114
name_server='10.75.124.245,10.75.124.246'
ntp_server='10.75.124.245,10.75.124.246'
http_proxy=http://www-proxy.us.oracle.com:80
https_proxy=http://www-proxy.us.oracle.com:80
occne_private_registry=bastion-1
occne_private_registry_address='10.75.216.xx'
occne_private_registry_port=5000
metallb_peer_address=10.75.216.xx
metallb_default_pool_protocol=bgp
metallb_default_pool_addresses='10.75.xxx.xx/xx'
pxe_install_lights_out_usr=root
pxe_install_lights_out_passwd=TklcRoot
occne_k8s_binary_repo='http://bastion-1:8082/binaries'
helm_stable_repo_url='http://<bastion-1 IP addr>:<port>/charts/stable/'
occne_helm_stable_repo_url='http://<bastion-1 IP addr>:<port>/charts/stable/'
occne_helm_images_repo='bastion-1:5000'
docker_rh_repo_base_url=http://<bastion-1 IP addr>/yum/centos/7/updates/x86_64/
docker_rh_repo_gpgkey=http://<bastion-1 IP addr>/yum/centos/RPM-GPG-CENTOS

###############################################################################
# Node grouping of which nodes are in the occne data_store
[data_store]
db-1.foo.lab.us.oracle.com
db-2.foo.lab.us.oracle.com

###############################################################################
# Node grouping of which nodes are to be Kubernetes master nodes (must be at least 2)
[kube-master]
k8s-1.foo.lab.us.oracle.com
k8s-2.foo.lab.us.oracle.com
k8s-3.foo.lab.us.oracle.com

################################################################################
# Node grouping specifying which nodes are Kubernetes etcd data.
# An odd number of etcd nodes is required.
[etcd]
k8s-1.foo.lab.us.oracle.com
k8s-2.foo.lab.us.oracle.com
k8s-3.foo.lab.us.oracle.com

################################################################################
# Node grouping specifying which nodes are Kubernetes worker nodes.
# A minimum of two worker nodes is required.
[kube-node]
k8s-4.foo.lab.us.oracle.com
k8s-5.foo.lab.us.oracle.com
k8s-6.foo.lab.us.oracle.com
k8s-7.foo.lab.us.oracle.com

# Node grouping of which nodes are to be in the OC-CNE Kubernetes cluster


[k8s-cluster:children]

kube-node
kube-master

################################################################################
# The following node groupings are for MySQL NDB cluster
# installation under control of MySQL Cluster Manager

################################################################################
# NodeId should be unique across the cluster, each node should be assigned with
# the unique NodeId, this id will control which data nodes should be part of
# different node groups. For Management nodes, NodeId should be between 49 to
# 255 and should be assigned with unique NodeId with in MySQL cluster.
[mysqlndb_mgm_nodes]
db-3.foo.lab.us.oracle.com NodeId=49
db-4.foo.lab.us.oracle.com NodeId=50

###############################################################################
# For data nodes, NodeId should be between 1 to 48, NodeId will be used to
# group the data nodes among different Node Groups.
[mysqlndb_data_nodes]
db-5.foo.lab.us.oracle.com NodeId=1
db-6.foo.lab.us.oracle.com NodeId=2
db-7.foo.lab.us.oracle.com NodeId=3
db-8.foo.lab.us.oracle.com NodeId=4

################################################################################
# For SQL nodes, NodeId should be between 49 to 255 and should be assigned with
# unique NodeId with in MySQL cluster.
[mysqlndb_sql_nodes]
db-9.foo.lab.us.oracle.com NodeId=56
db-10.foo.lab.us.oracle.com NodeId=57

################################################################################
# Node grouping of all of the nodes involved in the MySQL cluster
[mysqlndb_all_nodes:children]
mysqlndb_mgm_nodes
mysqlndb_data_nodes
mysqlndb_sql_nodes

################################################################################
# MCM and NDB cluster variables can be defined here to override the values.
[mysqlndb_all_nodes:vars]
occne_mysqlndb_NoOfReplicas=2
occne_mysqlndb_DataMemory=12G

OCCNE Artifact Acquisition and Hosting


Introduction
The OCCNE deployment containers require access to a number of resources that are usually
downloaded from the internet. For cases where the target system is isolated from the internet,
locally available repositories may be used. These repositories require provisioning with the
proper files and versions, and some of the cluster configuration needs to be updated to allow
the installation containers to locate these local repositories.
• A local YUM repository is needed to hold a mirror of a number of OL7 repositories, as
well as the version of docker-ce that is required by OCCNE's Kubernetes deployment
• A local HTTP repository is needed to hold Kubernetes binaries and Helm charts

• A local Docker registry is needed to hold the proper Docker images to support the
containers that run Kubernetes and the common services that Kubernetes will manage
• A copy of the OS installation media (Oracle Linux ISO) for OS installation
• A copy of the MySQL installation media for the database nodes

Installation PreFlight Checklist


Introduction
This procedure identifies the pre-conditions necessary to begin installation of a CNE frame.
This procedure is to be referenced by field install personnel to ensure the frame is properly
assembled and the inventory of needed artifacts are present before installation activities are
attempted.
Prerequisites
The primary function of this procedure is to identify the prerequisites necessary for installation
to begin.
Confirm Hardware Installation
Confirm hardware components are installed in the frame and connected as per the tables below
Rackmount ordering (frame not to scale)

Figure B-1 Rackmount ordering

Enclosure, ToR, and RMS Connections


OCCNE frame installation is expected to be complete prior to executing any software
installation. This section provides reference to prove the frame installation is completed as
expected by software installation tools.
Enclosure Switch Connections
The HP 6127XLG switch (https://www.hpe.com/us/en/product-catalog/servers/server-interconnects/pip.hpe-6127xlg-blade-switch.8699023.html) will have 4x10GE fiber (or DAC)
connections between it and the respective ToR switches' SFP+ ports.

Table B-2 Enclosure Switch Connections

Switch Port Name/ID (From)    Destination (To)    Cable Type    Module Required
Internal 1 Blade 1, NIC (1 for IObay1, 2 for IObay2) Internal None
Internal 2 Blade 2, NIC (1 for IObay1, 2 for IObay2) Internal None
Internal 3 Blade 3, NIC (1 for IObay1, 2 for IObay2) Internal None
Internal 4 Blade 4, NIC (1 for IObay1, 2 for IObay2) Internal None

Internal 5 Blade 5, NIC (1 for IObay1, 2 for IObay2) Internal None
Internal 6 Blade 6, NIC (1 for IObay1, 2 for IObay2) Internal None
Internal 7 Blade 7, NIC (1 for IObay1, 2 for IObay2) Internal None
Internal 8 Blade 8, NIC (1 for IObay1, 2 for IObay2) Internal None
Internal 9 Blade 9, NIC (1 for IObay1, 2 for IObay2) Internal None
Internal 10 Blade 10, NIC (1 for IObay1, 2 for IObay2) Internal None
Internal 11 Blade 11, NIC (1 for IObay1, 2 for IObay2) Internal None
Internal 12 Blade 12, NIC (1 for IObay1, 2 for IObay2) Internal None
Internal 13 Blade 13, NIC (1 for IObay1, 2 for IObay2) Internal None
Internal 14 Blade 14, NIC (1 for IObay1, 2 for IObay2) Internal None
Internal 15 Blade 15, NIC (1 for IObay1, 2 for IObay2) Internal None
Internal 16 Blade 16, NIC (1 for IObay1, 2 for IObay2) Internal None
External 1 Uplink 1 to ToR Switch (A for IObay1, B for IObay2) Fiber (multi-mode) 10GE Fiber
External 2 Uplink 2 to ToR Switch (A for IObay1, B for IObay2) Fiber (multi-mode) 10GE Fiber
External 3 Uplink 3 to ToR Switch (A for IObay1, B for IObay2) Fiber (multi-mode) 10GE Fiber
External 4 Uplink 4 to ToR Switch (A for IObay1, B for IObay2) Fiber (multi-mode) 10GE Fiber
External 5 Not Used None None
External 6 Not Used None None
External 7 Not Used None None
External 8 Not Used None None
Internal 17 Crosslink to IObay (2 for IObay1, 1 for IObay2) Internal None
Internal 18 Crosslink to IObay (2 for IObay1, 1 for IObay2) Internal None
Management OA Internal None

ToR Switch Connections


This section contains the point to point connections for the switches. The switches in the
solution will follow the naming scheme of "Switch<series number>", i.e. Switch1, Switch2,
etc; where Switch1 is the first switch in the solution, and switch2 is the second. These two form
a redundant pair. The switch datasheet is linked here: https://www.cisco.com/c/en/us/products/
collateral/switches/nexus-9000-series-switches/datasheet-c78-736651.html.

The first switch in the solution will serve to connect each server's first NIC in their respective
NIC pairs to the network. The next switch in the solution will serve to connect each server's
redundant (2nd) NIC in their respective NIC pairs to the network.

Table B-3 ToR Switch Connections

Switch Port Name/ID (From)    From Switch 1 to Destination    From Switch 2 to Destination    Cable Type    Module Required
1 RMS 1, FLOM NIC 1 RMS 1, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC
2 RMS 1, iLO RMS 2, iLO CAT 5e or 6A 1GE Cu SFP
3 RMS 2, FLOM NIC 1 RMS 2, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC
4 RMS 3, FLOM NIC 1 RMS 3, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC
5 RMS 3, iLO RMS 4, iLO CAT 5e or 6A 1GE Cu SFP
6 RMS 4, FLOM NIC 1 RMS 4, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC
7 RMS 5, FLOM NIC 1 RMS 5, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC
8 RMS 5, iLO RMS 6, iLO CAT 5e or 6A 1GE Cu SFP
9 RMS 6, FLOM NIC 1 RMS 6, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC
10 RMS 7, FLOM NIC 1 RMS 7, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC
11 RMS 7, iLO RMS 8, iLO CAT 5e or 6A 1GE Cu SFP
12 RMS 8, FLOM NIC 1 RMS 8, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC
13 RMS 9, FLOM NIC 1 RMS 9, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC
14 RMS 9, iLO RMS 10, iLO CAT 5e or 6A 1GE Cu SFP
15 RMS 10, FLOM NIC RMS 10, FLOM NIC 2 Cisco 10GE Integrated in
1 DAC DAC
16 RMS 11, FLOM NIC RMS 11, FLOM NIC 2 Cisco 10GE Integrated in
1 DAC DAC
17 RMS 11, iLO RMS 12, iLO CAT 5e or 6A 1GE Cu SFP
18 RMS 12, FLOM NIC RMS 12, FLOM NIC 2 Cisco 10GE Integrated in
1 DAC DAC
19 Enclosure 6, OA 1, Enclosure 6, OA 2, Mngt CAT 5e or 6A 1GE Cu SFP
Mngt
20 Enclosure 6, IOBay 1, Enclosure 6, IOBay 2, Cisco 10GE Integrated in
Port 17 Port 17 DAC DAC
21 Enclosure 6, IOBay 1, Enclosure 6, IOBay 2, Cisco 10GE Integrated in
Port 18 Port 18 DAC DAC
22 Enclosure 6, IOBay 1, Enclosure 6, IOBay 2, Cisco 10GE Integrated in
Port 19 Port 19 DAC DAC
23 Enclosure 6, IOBay 1, Enclosure 6, IOBay 2, Cisco 10GE Integrated in
Port 20 Port 20 DAC DAC
24 Enclosure 5, OA 1, Enclosure 5, OA 2, Mngt CAT 5e or 6A 1GE Cu SFP
Mngt

25 Enclosure 5, IOBay 1, Enclosure 5, IOBay 2, Cisco 10GE Integrated in
Port 17 Port 17 DAC DAC
26 Enclosure 5, IOBay 1, Enclosure 5, IOBay 2, Cisco 10GE Integrated in
Port 18 Port 18 DAC DAC
27 Enclosure 5, IOBay 1, Enclosure 5, IOBay 2, Cisco 10GE Integrated in
Port 19 Port 19 DAC DAC
28 Enclosure 5, IOBay 1, Enclosure 5, IOBay 2, Cisco 10GE Integrated in
Port 20 Port 20 DAC DAC
29 Enclosure 4, OA 1, Enclosure 4, OA 2, Mngt CAT 5e or 6A 1GE Cu SFP
Mngt
30 Enclosure 4, IOBay 1, Enclosure 4, IOBay 2, Cisco 10GE Integrated in
Port 17 Port 17 DAC DAC
31 Enclosure 4, IOBay 1, Enclosure 4, IOBay 2, Cisco 10GE Integrated in
Port 18 Port 18 DAC DAC
32 Enclosure 4, IOBay 1, Enclosure 4, IOBay 2, Cisco 10GE Integrated in
Port 19 Port 19 DAC DAC
33 Enclosure 4, IOBay 1, Enclosure 4, IOBay 2, Cisco 10GE Integrated in
Port 20 Port 20 DAC DAC
34 Enclosure 3, OA 1, Enclosure 3, OA 2, Mngt CAT 5e or 6A 1GE Cu SFP
Mngt
35 Enclosure 3, IOBay 1, Enclosure 3, IOBay 2, Cisco 10GE Integrated in
Port 17 Port 17 DAC DAC
36 Enclosure 3, IOBay 1, Enclosure 3, IOBay 2, Cisco 10GE Integrated in
Port 18 Port 18 DAC DAC
37 Enclosure 3, IOBay 1, Enclosure 3, IOBay 2, Cisco 10GE Integrated in
Port 19 Port 19 DAC DAC
38 Enclosure 3, IOBay 1, Enclosure 3, IOBay 2, Cisco 10GE Integrated in
Port 20 Port 20 DAC DAC
39 Enclosure 2, OA 1, Enclosure 2, OA 2, Mngt CAT 5e or 6A 1GE Cu SFP
Mngt
40 Enclosure 2, IOBay 1, Enclosure 2, IOBay 2, Cisco 10GE Integrated in
Port 17 Port 17 DAC DAC
41 Enclosure 2, IOBay 1, Enclosure 2, IOBay 2, Cisco 10GE Integrated in
Port 18 Port 18 DAC DAC
42 Enclosure 2, IOBay 1, Enclosure 2, IOBay 2, Cisco 10GE Integrated in
Port 19 Port 19 DAC DAC
43 Enclosure 2, IOBay 1, Enclosure 2, IOBay 2, Cisco 10GE Integrated in
Port 20 Port 20 DAC DAC
44 Enclosure 1, OA 1, Enclosure 1, OA 2, Mngt CAT 5e or 6A 1GE Cu SFP
Mngt
45 Enclosure 1, IOBay 1, Enclosure 1, IOBay 2, Cisco 10GE Integrated in
Port 17 Port 17 DAC DAC
46 Enclosure 1, IOBay 1, Enclosure 1, IOBay 2, Cisco 10GE Integrated in
Port 18 Port 18 DAC DAC
47 Enclosure 1, IOBay 1, Enclosure 1, IOBay 2, Cisco 10GE Integrated in
Port 19 Port 19 DAC DAC

48 Enclosure 1, IOBay 1, Enclosure 1, IOBay 2, Cisco 10GE Integrated in
Port 20 Port 20 DAC DAC
49 Mate Switch, Port 49 Mate Switch, Port 49 Cisco 40GE Integrated in
DAC DAC
50 Mate Switch, Port 50 Mate Switch, Port 50 Cisco 40GE Integrated in
DAC DAC
51 OAM Uplink to OAM Uplink to 40GE (MM or 40GE QSFP
Customer Customer SM) Fiber
52 Signaling Uplink to Signaling Uplink to 40GE (MM or 40GE QSFP
Customer Customer SM) Fiber
53 Unused Unused
54 Unused Unused
Management RMS 1, NIC 2 (1GE) RMS 1, NIC 3 (1GE) CAT5e or CAT None (RJ45
(Ethernet) 6A port)
Management Unused Unused None None
(Serial)

Rackmount Server Connections


Server quickspecs can be found here: https://h20195.www2.hpe.com/v2/getdocument.aspx?docname=a00008180enw
The HP DL380 Gen10 RMS will be configured with an iLO, a 4x1GE LOM, and a 2x10GE
SFP+ FLOM.
• iLO. The integrated Lights Out management interface (iLO) contains an ethernet out of
band management interface for the server. This connection is 1GE RJ45.
• 4x1GE LOM. For most servers in the solution, their 4x1GE LOM ports will be unused.
The exception is the first server in the first frame. This server will serve as the
management server for the ToR switches. In this case, the server will use 2 of the LOM
ports to connect to ToR switches' respective out of band ethernet management ports. These
connections will be 1GE RJ45 (CAT 5e or CAT 6).
• 2x10GE FLOM. Every server will be equipped with a 2x10GE Flex LOM card (or
FLOM). These will be for in-band, or application and solution management traffic. These
connections are 10GE fiber (or DAC) and will terminate to the ToR switches' respective
SFP+ ports.
All RMS in the frame will only use the 10GE FLOM connections, except for the "management
server", the first server in the frame, which will have some special connections as listed below.

Table B-4 Rackmount Server Connections

Server Interface    Destination    Cable Type    Module Required    Notes
Base NIC1 (1GE)    Unused    None    None

Base NIC2 (1GE)    Switch1A Ethernet Mngt    CAT5e or 6a    None    Switch Initialization
Base NIC3 (1GE)    Switch1B Ethernet Mngt    CAT5e or 6a    None    Switch Initialization
Base NIC4 (1GE)    Unused    None    None
FLOM NIC1    Switch1A Port 1    Cisco 10GE DAC    Integrated in DAC    OAM, Signaling, Cluster
FLOM NIC2    Switch1B Port 1    Cisco 10GE DAC    Integrated in DAC    OAM, Signaling, Cluster
USB Port1    USB Flash Drive    None    None    Bootstrap Host Initialization Only (temporary)
USB Port2    Keyboard    USB    None    Bootstrap Host Initialization Only (temporary)
USB Port3    Mouse    USB    None    Bootstrap Host Initialization Only (temporary)
Monitor Port    Video Monitor    DB15    None    Bootstrap Host Initialization Only (temporary)

OCCNE Required Artifacts Are Accessible


Ensure the artifacts listed in the Artifacts section are available in repositories accessible from the OCCNE Frame.
Keyboard, Video, Mouse (KVM) Availability
The beginning stage of installation requires a local KVM for installing the bootstrap
environment.

Procedure

Complete Site Survey Subnet Table


Table values that are prefilled are fixed in the topology and do not need to be changed. Blank
values indicate that customer engagement is needed to determine the appropriate value.

Table B-5 Complete Site Survey Subnet Table

Sl No.    Network Description    Subnet Allocation    Bitmask    VLAN ID    Gateway Address
1 iLO/OA Network 192.168.20.0 24 2 N/A
2 Platform Network 172.16.3.0 24 3 172.16.3.1
3 Switch Configuration Network 192.168.2.0 29 N/A N/A
4 Management Network - Bastion Hosts 29 4
5 Signaling Network - MySQL Replication 29 5
6 OAM Pool - metalLB pool for common services 27 N/A N/A (BGP redistribution)

7 Signaling Pool - metalLB pool for 5G NFs N/A N/A (BGP redistribution)
8 Other metalLB pools (Optional) N/A N/A (BGP redistribution)
9 Other metalLB pools (Optional) N/A N/A (BGP redistribution)
10 Other metalLB pools (Optional) N/A N/A (BGP redistribution)
11 ToR Switch A OAM Uplink Subnet N/A
12 ToR Switch B OAM Uplink Subnet N/A
13 ToR Switch A Signaling Uplink Subnet N/A
14 ToR Switch B Signaling Uplink Subnet N/A
15 ToR Switch A/B Crosslink Subnet (OSPF link) 100

Complete Site Survey Host IP Table


Table values that are prefilled are fixed in the topology and do not need to be changed. Blank
values indicate that customer engagement is needed to determine the appropriate value.

Table B-6 Complete Site Survey Host IP Table

Sl No.    Component/Resource    Platform VLAN IP Address (VLAN 3)    iLO VLAN IP Address (VLAN 2)    CNE Management IP Address (VLAN 4)    Device iLO IP    MAC Address of Primary NIC    Notes
1 RMS 1 Host IP 172.16.3.4 192.168.20.1 192.168.20. Eno5:
1 121
2 RMS 2 Host IP 172.16.3.5 192.168.20.1 192.168.20. Eno5:
2 122
3 RMS 3 Host IP 172.16.3.6 N/A N/A 192.168.20. Eno5:
123
4 RMS 4 Host IP 172.16.3.7 N/A N/A 192.168.20. Eno5:
124
5 RMS 5 Host IP 172.16.3.8 N/A N/A 192.168.20. Eno5:
125
6 Enclosure 1 Bay 1 172.16.3.11 N/A N/A 192.168.20. Eno1:
Host IP 141
7 Enclosure 1 Bay 2 172.16.3.12 N/A N/A 192.168.20. Eno1:
Host IP 142
8 Enclosure 1 Bay 3 172.16.3.13 N/A N/A 192.168.20. Eno1:
Host IP 143
9 Enclosure 1 Bay 4 172.16.3.14 N/A N/A 192.168.20. Eno1:
Host IP 144

10 Enclosure 1 Bay 5 172.16.3.15 N/A N/A 192.168.20. Eno1:
Host IP 145
11 Enclosure 1 Bay 6 172.16.3.16 N/A N/A 192.168.20. Eno1:
Host IP 146
12 Enclosure 1 Bay 7 172.16.3.17 N/A N/A 192.168.20. Eno1:
Host IP 147
13 Enclosure 1 Bay 8 172.16.3.18 N/A N/A 192.168.20. Eno1:
Host IP 148
14 Enclosure 1 Bay 9 172.16.3.19 N/A N/A 192.168.20. Eno1:
Host IP 149
15 Enclosure 1 Bay 172.16.3.20 N/A N/A 192.168.20. Eno1:
10 Host IP 150
16 Enclosure 1 Bay 172.16.3.21 N/A N/A 192.168.20. Eno1:
11 Host IP 151
17 Enclosure 1 Bay 172.16.3.22 N/A N/A 192.168.20. Eno1:
12 Host IP 152
18 Enclosure 1 Bay 172.16.3.23 N/A N/A 192.168.20. Eno1:
13 Host IP 153
19 Enclosure 1 Bay 172.16.3.24 N/A N/A 192.168.20. Eno1:
14 Host IP 154
20 Enclosure 1 Bay 172.16.3.25 N/A N/A 192.168.20. Eno1:
15 Host IP 155
21 Enclosure 1 Bay 172.16.3.26 N/A N/A 192.168.20. Eno1:
16 Host IP 156

Complete VM IP Table
Table values that are prefilled are fixed in the topology and do not need to be changed. Blank
values indicate that customer engagement is needed to determine the appropriate value.

Table B-7 Complete VM IP Table

Sl No.    Component/Resource    Platform VLAN IP Address (VLAN 3)    iLO VLAN IP Address (VLAN 2)    CNE Management IP Address (VLAN 4)    SQL Replication IP Address (VLAN 5)    Notes
1 Bastion Host 1 172.16.3.100 192.168.20.100 N/A
2 Bastion Host 2 172.16.3.101 192.168.20.101 N/A
3 MySQL SQL Node 1 172.16.3.102 N/A N/A
4 MySQL SQL Node 2 172.16.3.103 N/A N/A

Complete OA and Switch IP Table

Table values that are prefilled are fixed in the topology and do not need to be changed. Blank
values indicate that customer engagement is needed to determine the appropriate value.

Table B-8 Complete OA and Switch IP Table

Sl No.    Procedure Reference Variable Name    Description    IP Address    VLAN ID    Notes
1 N/A Enclosure 1 IObay1 192.168.20.133 N/A
2 N/A Enclosure 1 IObay2 192.168.20.134 N/A
3 N/A Enclosure 1 OA1 192.168.20.131 N/A
4 N/A Enclosure 1 OA2 192.168.20.132 N/A
5 ToRswitchA_Platform_IP Host Platform 172.16.3.2 3
Network
6 ToRswitchB_Platform_IP Host Platform 172.16.3.3 3
Network
7 ToRswitch_Platform_VIP Host Platform 172.16.3.1 3 This address is also
Network Default used as the source NTP
Gateway address for all servers.
8 ToRswitchA_CNEManagement Bastion Host 4 Address needs to be
Net_IP Network without prefix length,
such as 10.25.100.2
9 ToRswitchB_CNEManagement Bastion Host 4 Address needs to be
Net_IP Network without prefix length,
such as 10.25.100.3
1 ToRswitch_CNEManagementN Bastion Host 4 No prefix length,
0 et_VIP Network Default address only for VIP
Gateway
1 CNEManagementNet_Prefix Bastion Host 4 number only such as
1 Network Prefix 29
Length
1 ToRswitchA_SQLreplicationNe SQL Replication 5 Address needs to be
2 t_IP Network with prefix length,
such as 10.25.200.2
1 ToRswitchB_SQLreplicationNe SQL Replication 5 Address needs to be
3 t_IP Network with prefix length,
such as 10.25.200.3
1 ToRswitch_SQLreplicationNet_ SQL Replication 5 No prefix length,
4 VIP Network Default address only for VIP
Gateway
1 SQLreplicationNet_Prefix SQL Replication 5 number only such as
5 Network Prefix 28
Length
1 ToRswitchA_oam_uplink_custo ToR Switch A OAM N/A No prefix length in
6 mer_IP uplink route path to address, static to be /30
customer network
1 ToRswitchA_oam_uplink_IP ToR Switch A OAM N/A No prefix length in
7 uplink IP address, static to be /30

1 ToRswitchB_oam_uplink_custo ToR Switch B OAM N/A No prefix length in
8 mer_IP uplink route path to address, static to be /30
customer network
1 ToRswitchB_oam_uplink_IP ToR Switch B OAM N/A No prefix length in
9 uplink IP address, static to be /30
2 ToRswitchA_signaling_uplink_ ToR Switch A N/A No prefix length in
0 customer_IP Signaling uplink address, static to be /30
route path to
customer network
2 ToRswitchA_signaling_uplink_ ToR Switch A N/A No prefix length in
1 IP Signaling uplink IP address, static to be /30
2 ToRswitchB_signaling_uplink_ ToR Switch B N/A No prefix length in
2 customer_IP Signaling uplink address, static to be /30
route path to
customer network
2 ToRswitchB_signaling_uplink_ ToR Switch B N/A No prefix length in
3 IP Signaling uplink IP address, static to be /30
24 ToRswitchA_mngt_IP    ToR Switch A Out of Band Management IP    192.168.2.1    N/A
25 ToRswitchB_mngt_IP    ToR Switch B Out of Band Management IP    192.168.2.2    N/A
26 MetalLB_Signal_Subnet_With_Prefix    ToR Switch route provisioning for metalLB    N/A    From Section 2.1
27 MetalLB_OAM_Subnet_With_Prefix    ToR Switch route provisioning for metalLB    N/A    From Section 2.1
28 Allow_Access_Server    IP address of an external management server allowed to access the ToR switches    The access-list Restrict_Access_ToR denies all direct external access to the ToR switch VLAN interfaces; if troubleshooting or management requires direct access from outside, allow a specific server here. If this is not needed, delete this line from the switch configuration file; if more than one server is needed, add similar lines.
29 SNMP_Trap_Receiver_Address    IP address of the SNMP trap receiver

30 SNMP_Community_String    SNMP v2c community string. For simplicity, use the same community string for snmpget and SNMP traps.

ToR and Enclosure Switches Variables Table (Switch Specific)


Table values that are prefilled are fixed in the topology and do not need to be changed. Blank
values indicate that customer engagement is needed to determine the appropriate value.

Table B-9 ToR and Enclosure Switches Variables Table (Switch Specific)

Key/Variable Name    ToR_SwitchA Value    ToR_SwitchB Value    Enclosure_Switch1 Value    Enclosure_Switch2 Value    Notes
1 switch_nam N/A (This switch Customer defined switch name for
e will assume the each switch.
name of
Enclosure_Switch1
after IRF is applied
in configuration
procedures)
2 admin_pass Password for admin user. Strong
word password requirement: Length
should be at least 8 characters
Contain characters from at least
three of the following classes: lower
case letters, upper case letters, digits
and special characters. No '?' as
special character due to not working
on switches. No '/' as special
character due to the procedures.
3 user_name Customer defined user.
4 user_passwo Password for <user_name> Strong
rd password requirement: Length
should be at least 8 characters.
Contain characters from at least
three of the following classes: lower
case letters, upper case letters, digits
and special characters. No '?' as
special character due to not working
on switches. No '/' as special
character due to the procedures.
5 ospf_md5_k N/A N/A The key has to be same on all ospf
ey interfaces on ToR switches and
connected customer switches
6 ospf_area_id N/A N/A The number as OSPF area id.

7 nxos_versio N/A N/A The version nxos.9.2.3.bin is used
n by default and hard-coded in the
configuration template files. If the
installed ToR switches use a
different version, record the version
here. The installation procedures
will reference this variable and value
to update a configuration template
file.

Complete Site Survey Repository Location Table

Table B-10 Complete Site Survey Repository Location Table

Repository Location Override Value


Yum Repository
Docker Registry
MySQL Location
Helm Repository

Set up the Host Inventory File (hosts.ini)


Execute the Inventory File Preparation Procedure to populate the inventory file.
Assemble 2 USB Flash Drives
Given that the bootstrap environment isn't connected to the network until the ToR switches are
configured, it is necessary to provide the bootstrap environment with certain software via USB
flash drives to begin the install process.
One flash drive will be used to install an OS on the Installer Bootstrap Host. The setup of this
USB will be handled in a different procedure. This flash drive should have approximately 6GB
capacity.
Another flash drive will be used to transfer necessary configuration files to the Installer
Bootstrap Host once it has been setup with an OS. This flash drive should have approximately
6GB capacity.
Create the Utility USB
This Utility USB flash drive is used to transfer configuration and script files to the Bootstrap
Host during initial installation. This USB must include enough space to accommodate all the
necessary files listed below (approximately 6GB).

Note:

• The instructions listed here are for a linux host. Instructions to do this on a PC can
be obtained from the Web if needed. The mount instructions are for a Linux
machine.
• When creating these files on a USB from Windows (using notepad or some other
Windows editor), the files may contain control characters that are not recognized
when using in a Linux environment. Usually this includes a ^M at the end of each
line. These control characters can be removed by using the dos2unix command in
Linux with the file: dos2unix <filename>.
• When copying the files to this USB, make sure the USB is formatted as FAT32.
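If the USB needs to be (re)formatted as FAT32 from a Linux host, a minimal sketch is shown below; the device name /dev/sdX1 is a placeholder that must be confirmed (for example with lsblk) before running the command, as formatting the wrong device destroys its data:

$ lsblk
$ sudo mkfs.vfat -F 32 /dev/sdX1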

Miscellaneous Files
This procedure details any miscellaneous files that need to be copied to the Utility USB.
1. Copy the hosts.ini file from step 2.7 onto the Utility USB.
2. Copy the ol7-mirror.repo file from the customer's OL YUM mirror instance onto the
Utility USB. Reference procedure: OCCNE YUM Repository Configuration
3. Copy the docker-ce-stable.repo file from procedure: OCCNE YUM Repository
Configuration onto the Utility USB.
4. Copy the following switch configuration template files from OHC to the Utility USB:
a. 93180_switchA.cfg
b. 93180_switchB.cfg
c. 6127xlg_irf.cfg
d. ifcfg-vlan
e. ifcfg-bridge
5. Copy VM kickstart template file bastion_host.ks from OHC onto the Utility USB.
6. Copy the occne-ks.cfg.j2.new file from OHC into the Utility USB.
Copy and Edit the poap.py Script
This procedure is used to download and prepare the poap_nexus_script.py script that will be needed in procedure: OCCNE
Configure Top of Rack 93180YC-EX Switches.
1. Mount the Utility USB.

Note:
Instructions for mounting a USB in linux are at: OCCNE Installation of Oracle
Linux 7.5 on Bootstrap Server : Install Additional Packages. Only follow steps 1-3
to mount the USB.

2. cd to the mounted USB directory.


3. Download the poap.py script straight to the USB. The file can be obtained on any Linux server or laptop using the following command:

wget https://raw.githubusercontent.com/datacenter/nexus9000/master/nx-os/poap/poap.py
4. Rename the poap.py script to poap_nexus_script.py.
mv poap.py poap_nexus_script.py
5. The switch firmware version is handled before the installation procedure, so the POAP script does not need to handle it. Comment out the firmware-handling lines at lines 1931-1944:
vi poap_nexus_script.py

# copy_system()
# if single_image is False:
#     copy_kickstart()
# signal.signal(signal.SIGTERM, sig_handler_no_exit)
# # install images
# if single_image is False:
#     install_images()
# else:
#     install_images_7_x()
# # Cleanup midway images if any
# cleanup_temp_images()

Create the dhcpd.conf File


This procedure is used to create the dhcpd.conf file that will be needed in procedure: OCCNE
Configure Top of Rack 93180YC-EX Switches.
1. Edit file: dhcpd.conf.
2. Copy the following contents to that file and save it on the USB.
# DHCP Server Configuration file.
# see /usr/share/doc/dhcp*/dhcpd.conf.example
# see dhcpd.conf(5) man page

subnet 192.168.2.0 netmask 255.255.255.0 {
    range 192.168.2.101 192.168.2.102;
    default-lease-time 10800;
    max-lease-time 43200;
    allow unknown-clients;
    filename "poap_nexus_script.py";
    option domain-name-servers 192.168.2.11;
    option broadcast-address 192.168.2.255;
    option tftp-server-name "192.168.2.11";
    option routers 192.168.2.11;
    next-server 192.168.2.11;
}

subnet 192.168.20.0 netmask 255.255.255.0 {
    range 192.168.20.101 192.168.20.120;
    default-lease-time 10800;
    max-lease-time 43200;
    allow unknown-clients;
    option domain-name-servers 192.168.20.11;
    option broadcast-address 192.168.20.255;
    option tftp-server-name "192.168.20.11";
    option routers 192.168.20.11;
    next-server 192.168.20.11;
}
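
Optionally, if the dhcp package is installed on the Linux host used to prepare the USB, the
syntax of the new file can be sanity-checked before it is copied; this is a suggested check, not
part of the switch configuration procedure itself.

# Test-parse the configuration file without starting a server.
dhcpd -t -cf ./dhcpd.conf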

Create the md5Poap Bash Script


This procedure creates a small script containing the sed commands that update the md5
checksum embedded in poap_nexus_script.py, and copies it to the USB.
This script is needed in procedure: OCCNE Configure Top of Rack 93180YC-EX Switches.
1. Edit file: md5Poap.sh
2. Copy the following contents to that file and save it on the USB.
#!/bin/bash
f=poap_nexus_script.py ; cat $f | sed '/^#md5sum/d' > $f.md5 ;
sed -i "s/^#md5sum=.*/#md5sum=\"$(md5sum $f.md5 | sed 's/ .*//')\"/" $f
Create the Bastion Host Kickstart File
This procedure is used to create the Bastion Host kickstart file. The file can be copied exactly
as written.
The file is used in procedure: OCCNE Installation of the Bastion Host.
Copy the following contents to the Utility USB as bastion_host.ks.

Note:
This file includes some variables that must be updated when used in procedure:
OCCNE Installation of the Bastion Host.


Note:
The steps to update those variables are contained in that procedure.

#version=DEVEL

# System authorization information

auth --enableshadow --passalgo=sha512

repo --name="Server-HighAvailability" --baseurl=file:///run/install/repo/addons/


HighAvailability

repo --name="Server-ResilientStorage" --baseurl=file:///run/install/repo/addons/


ResilientStorage

# Use CDROM installation media

cdrom

# Use text mode install

text

# Run the Setup Agent on first boot

firstboot --enable

ignoredisk --only-use=sda

# Keyboard layouts

keyboard --vckeymap=us --xlayouts=''

# System language

lang en_US.UTF-8

# Network information

network --bootproto=static --device=ens3 --ip=BASTION_VLAN3_IP --nameserver=NAMESERVERIPS --netmask=255.255.255.0 --ipv6=auto --activate

network --bootproto=static --device=ens4 --ip=BASTION_VLAN2_IP --netmask=255.255.255.0 --ipv6=auto --activate

network --bootproto=static --device=ens5 --gateway=GATEWAYIP --ip=BASTION_VLAN4_IP --netmask=BASTION_VLAN4_MASK --ipv6=auto --activate

network --hostname=NODEHOSTNAME

# Root password

rootpw --iscrypted $6$etqyspJhPUG440VO$0FqnB.agxmnDqb.Bh0sSLhq7..t37RwUZr7SlVmIBvMmWVoUjb2DJJ2f4VlrW9RdfVi.IDXxd2/Eeo41FCCJ01

# System services

services --enabled="chronyd"

# Do not configure the X Window System

skipx

# System timezone

timezone Etc/GMT --isUtc --ntpservers=NTPSERVERIPS

user --groups=wheel --name=admusr --password=$6$etqyspJhPUG440VO$0FqnB.agxmnDqb.Bh0sSLhq7..t37RwUZr7SlVmIBvMmWVoUjb2DJJ2f4VlrW9RdfVi.IDXxd2/Eeo41FCCJ01 --iscrypted --gecos="admusr"

# System bootloader configuration

bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=sda

#autopart --type=lvm

# Partition clearing information

clearpart --all --initlabel --drives=sda

# Disk partitioning information

part /boot --fstype="xfs" --ondisk=sda --size=1024

part pv.11 --size 1 --grow --ondisk=sda

volgroup ol pv.11

logvol / --fstype="xfs" --size=20480 --name=root --vgname=ol

logvol /var --fstype="xfs" --size=1 --grow --name=var --vgname=ol

%packages

@^minimal

@compat-libraries

@base

@core

@debugging

@development

chrony

kexec-tools


%end

%addon com_redhat_kdump --enable --reserve-mb='auto'

%end

%anaconda

pwpolicy root --minlen=6 --minquality=1 --notstrict --nochanges --notempty

pwpolicy user --minlen=6 --minquality=1 --notstrict --nochanges --emptyok

pwpolicy luks --minlen=6 --minquality=1 --notstrict --nochanges --notempty

%end

%post --log=/root/occne-ks.log

echo "===================== Running Post Configuration ======================="

# Set shell editor to vi

echo set -o vi >> /etc/profile.d/sh.local

# selinux set to permissive

setenforce permissive

sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/selinux/config

# Set sudo to nopassword

sed --in-place 's/^#\s*\(%wheel\s\+ALL=(ALL)\s\+NOPASSWD:\s\+ALL\)/\1/' /etc/sudoers

echo "proxy=HTTP_PROXY" >> /etc/yum.conf

# Configure keys for admusr

mkdir -m0700 /home/admusr/.ssh/


chown admusr:admusr /home/admusr/.ssh

cat <<EOF >/home/admusr/.ssh/authorized_keys

PUBLIC_KEY

EOF

echo "Configuring SSH..."

cp /etc/ssh/sshd_config /etc/ssh/sshd_config.orig && \

sed -i 's/#Protocol 2/Protocol 2/' /etc/ssh/sshd_config && \

sed -i 's/#LogLevel.*/LogLevel INFO/' /etc/ssh/sshd_config && \

sed -i 's/X11Forwarding yes/X11Forwarding no/' /etc/ssh/sshd_config && \

sed -i 's/#MaxAuthTries.*/MaxAuthTries 4/' /etc/ssh/sshd_config && \

sed -i 's/#IgnoreRhosts.*/IgnoreRhosts yes/' /etc/ssh/sshd_config

if [ `grep HostBasedAuthentication /etc/ssh/sshd_config | wc -l` -lt 1 ]; then

echo 'HostBasedAuthentication no' >> /etc/ssh/sshd_config

fi

sed -i 's/#PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config && \

sed -i 's/PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config && \

sed -i 's/#PermitEmptyPasswords.*/PermitEmptyPasswords no/' /etc/ssh/sshd_config && \
sed -i 's/#PermitUserEnvironment.*/PermitUserEnvironment no/' /etc/ssh/sshd_config && \
sed -i 's/PermitUserEnvironment.*/PermitUserEnvironment no/' /etc/ssh/sshd_config

if [ `grep -i 'Ciphers aes128-ctr,aes192-ctr,aes256-ctr' /etc/ssh/sshd_config | wc -l` -lt 1 ]; then

echo 'Ciphers aes128-ctr,aes192-ctr,aes256-ctr' >> /etc/ssh/sshd_config

if [ $? -ne 0 ]; then

echo " ERROR: echo 1 failed"


fi

fi

if [ `grep '^MACs' /etc/ssh/sshd_config | wc -l` -lt 1 ]; then

echo 'MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com' >> /etc/ssh/sshd_config

if [ $? -ne 0 ]; then

echo " ERROR: echo 2 failed"

fi

fi

sed -i 's/#ClientAliveInterval.*/ClientAliveInterval 300/' /etc/ssh/sshd_config

sed -i 's/#ClientAliveCountMax.*/ClientAliveCountMax 0/' /etc/ssh/sshd_config

sed -i 's/#Banner.*/Banner \/etc\/issue.net/' /etc/ssh/sshd_config

egrep -q "^(\s*)LoginGraceTime\s+\S+(\s*#.*)?\s*$" /etc/ssh/sshd_config && sed -ri "s/^(\s*)LoginGraceTime\s+\S+(\s*#.*)?\s*$/\1LoginGraceTime 60\2/" /etc/ssh/sshd_config || echo "LoginGraceTime 60" >> /etc/ssh/sshd_config

echo 'This site is for the exclusive use of Oracle and its authorized customers
and partners. Use of this site by customers and partners is subject to the Terms
of Use and Privacy Policy for this site, as well as your contract with Oracle.
Use of this site by Oracle employees is subject to company policies, including
the Code of Conduct. Unauthorized access or breach of these terms may result in
termination of your authorization to use this site and/or civil and criminal
penalties.' > /etc/issue

echo 'This site is for the exclusive use of Oracle and its authorized customers
and partners. Use of this site by customers and partners is subject to the Terms
of Use and Privacy Policy for this site, as well as your contract with Oracle.
Use of this site by Oracle employees is subject to company policies, including
the Code of Conduct. Unauthorized access or breach of these terms may result in
termination of your authorization to use this site and/or civil and criminal
penalties.' > /etc/issue.net

%end

reboot
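
For illustration only, the site-specific placeholders in bastion_host.ks are typically replaced
with sed before the file is used; the values below are hypothetical and the authoritative
substitution steps are in procedure: OCCNE Installation of the Bastion Host.

# Example values only -- substitute real site data per the Bastion Host procedure.
# PUBLIC_KEY and HTTP_PROXY are handled the same way.
sed -i 's/BASTION_VLAN3_IP/172.16.3.100/g'       bastion_host.ks
sed -i 's/BASTION_VLAN2_IP/192.168.2.100/g'      bastion_host.ks
sed -i 's/NAMESERVERIPS/10.10.10.1/g'            bastion_host.ks
sed -i 's/GATEWAYIP/10.75.216.1/g'               bastion_host.ks
sed -i 's/BASTION_VLAN4_IP/10.75.216.100/g'      bastion_host.ks
sed -i 's/BASTION_VLAN4_MASK/255.255.255.128/g'  bastion_host.ks
sed -i 's/NODEHOSTNAME/bastion-1.example.com/g'  bastion_host.ks
sed -i 's/NTPSERVERIPS/10.10.10.3/g'             bastion_host.ks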


Installation Use Cases and Repository Requirements


Goals
• Identify the parameters and use case for initial setup and sustained support of an On-prem
CNE
• Identify what components need access to software repositories, and how they will be
accessed

Background and strategic fit


The installation process will assume a software delivery model of "Indirect Internet
Connection". This model allows for a more rapid time to market for initial deployment and
security update of the CNE. However, this model creates situations during the install process
that require careful explanation and walk-through. Thus, the need for this page.

Requirements
• Installer notebooks may be used to access resources; however, the following limitations
will need to be considered:
– The installer notebook may not arrive on site with Oracle IP, such as source code or
install tools
– The installer notebook may not have customer sensitive material stored on it, such as
access credentials
• Initial install may require trained personnel to be on site; however, DR of any individual
component should not require trained software personnel to be local to the installing device
– Physical rackmounting and cabling of replacement equipment should be performed by
customer or contractor personnel; but software configuration and restoration of
services should not require personnel to be sent to site.
• An Oracle Linux YUM repository, a Docker registry, and a Helm repository are configured
and available to the CNE frame for installation activities. Oracle will define what artifacts
need to be in these repositories; it is the customer's responsibility to pull the artifacts into
repositories reachable by the OCCNE frame (an illustrative repo file sketch follows this list).
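
As an illustration of the kind of repository definition the frame consumes, a hypothetical
ol7-mirror.repo pointing at a customer YUM mirror might look like the sketch below; the
actual file and baseurl come from procedure: OCCNE YUM Repository Configuration.

# ol7-mirror.repo -- illustrative only; baseurl is a placeholder.
[ol7_mirror_latest]
name=Oracle Linux 7 Latest (customer mirror)
baseurl=http://yum-mirror.customer.example/ol7/latest/x86_64/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle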

User Interaction and Design


This section walks through expected installation steps for the CNE given the selected software
delivery model.

CNE Overview

CNE Frame Overview


For reference regarding installation practices, it is useful to understand the hardware layout
involved with the CNE deployment.


Figure B-2 Frame reference

Problem Statement
A solution is needed to initialize the frame with an OS, a Kubernetes cluster, and a set of
common services for 5G NFs to be deployed into. How the frame is brought from
manufacturing default state to configured and operational state is the topic of this page.
Manufacturing Default State characteristics/assumptions:
• Frame components are "racked and stacked", with power and network connections in place
• Frame ToR switches are not connected to the customer network until they are configured
(alternatively, the links can be disabled from the customer side)
• An installer is on-site
• An installer has a notebook and a USB flash drive with which to configure at the first
server in the frame
• An installer's notebook has access to the repositories setup by the customer

CNE Installation Preparation


Setting up the Notebook


The installer notebook is considered to be an Oracle asset. As such, it will have limitations
applied as mentioned above. The notebook will be used to access the customer instantiated
repositories to pull down the OL iso and apply it to a USB flash drive. Steps involved in
creating the bootable USB drive will be dependent upon the OS on the notebook (for example,
Rufus can be used for a Windows PC, or "dd" command can be used for a Linux PC).

Figure B-3 Setup the Notebook and USB Flash Drive

CNE Installation - Setup the Management Server and Switches


Install OS on a "Bootstrap" Server
The 1st RMS in the frame will be temporarily used as a bootstrap server, whereby a manual
method of initial OS install will be applied to start a "standard" process for installing the frame.
The activity performed by this "bootstrap" server should be minimized to get to a standard "in-
frame configuration platform" as soon as possible. The bootstrap server should be re-paved to
an "official" configuration as soon as possible. This means the "bootstrap" server will facilitate
the configuration of the ToR switches, and the configuration of a Management VM. Once these
two items have been completed, and the management VM is accessible from outside the frame,
the "bootstrap" server will have fulfilled its purpose and can then be re-paved.
The figure below is very busy with information. Here are the key takeaways:
• The ToR switch uplinks are disabled or disconnected, as the ToR is not yet configured.
This prevents nefarious network behavior due to redundant connections to unconfigured
switches.


– Until the ToR switches are configured, there is no connection to the customer
repositories.
• The red server is special in that it has connections to ToR out of band interfaces (not
shown).
• The red server is installed via USB flash drive and local KVM (Keyboard, Video, Mouse).

Figure B-4 Setup the Management Server

Setup the Switches


Setup Switch Configuration Services
Configure DHCP, tftp, and network interfaces to support ToR switch configuration activities.
For the initial effort of CNE 1.0, this process is expected to be manual, without the need for
files to be delivered to the field. Reference configuration files will be made available through
documentation. If any files are needed from internet sources, they will be claimed as a


dependency in the customer repositories and will be delivered by USB to the bootstrap server,
similar to the OL iso.

Figure B-5 Management Server Unique Connections

Configure the Enclosure Access


Using the Enclosure Insight Display, configure an IP address for the enclosure.

Configure the OA EBIPA


From the management server, use an automated method, manual procedure, or configuration
file to push configuration to the OA, in particular, the EBIPA information for the Compute and
IO Bays' management interfaces.


Figure B-6 Configure OAs

Configure the Enclosure Switches


Update switch configuration templates and/or tools with site specific information. Using the
switch installation scripts or templates, push the configuration to the switches.

Figure B-7 Configure the Enc. Switches

Engage Customer Downlinks to Frame


At this point, the management server and switches are configured and can be joined to the
customer network. Enable the customer uplinks.
Setup Installation Tools
With all frame networking assets (ToR and Enclosure switches) configured and online, the rest
of the frame can be setup from the management server.


Install OceanSpray Tools


Install the OceanSpray solution on the Management Server: Host OS Provisioner, Kubespray
Installer, Configurator (Helm installer). This will require the management server to pull from
the customer-provided docker registry.

Figure B-8 OceanSpray Download Path

Configure site specific details in configuration files


Where appropriate, update configuration files with site specific data (hosts.ini, config maps,
etc).
Install the Host OS on All Compute Nodes

Perform Host OS installations


Run Host OS Provisioner against all compute nodes (Master nodes, worker nodes, DB nodes).
Ansible Interacts with Server iLOs to Perform PXE Boot
Over an iLO network, Ansible communicates with the server iLOs to instruct the servers to
reboot and look for a network boot option. In the figure below, note that the iLO network is
considered to be a private local network. This isn't a functional requirement; however, it does
limit attack vectors. The only potential reason to make this network public would be for
sending alarms or telemetry to external NMS stations, and that same telemetry and those alerts
are expected to be fed into the cluster. Thus, the iLOs are intended to stay private. A conceptual
sketch of the kind of out-of-band instruction involved follows.
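
The sketch below uses ipmitool as a stand-in for the Ansible-driven iLO interaction; the real
flow is executed by the os-install container, and the address and credentials are placeholders.

# Conceptual stand-in only: instruct a server (via its iLO) to PXE boot on next reset.
ipmitool -I lanplus -H <ilo-ip> -U <ilo-user> -P <ilo-password> chassis bootdev pxe
ipmitool -I lanplus -H <ilo-ip> -U <ilo-user> -P <ilo-password> power cycle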


Figure B-9 Install OS on CNE Nodes - Server boot instruction

Servers Install Host OS


Servers boot by sending DHCP requests out of their available NICs. The broadcasts sent out the
10GE NICs are answered by the host OS provisioner setup on the management server. The
management server provides the DHCP address, a boot loader, a kickstart file and an OL ISO
via NFS (a change in a future release should move this operation to HTTP).
At the end of this installation process, the servers should reboot.


Figure B-10 Install OS on CNE Nodes - Server boot process

Package Update
At this point, each server's host OS is installed, ideally from the latest OL release. If this was
done from a released ISO, then this step involves updating to the latest errata. If the previous
step already pulled in the latest package offering, then this step is already taken care of.
Ansible triggers servers to do a Yum update
Ansible playbooks interact with servers to instruct them to perform a Yum update.
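A minimal sketch of such a play is shown below, assuming a hypothetical inventory group
name; the shipped playbooks differ in structure.

# Sketch only -- not the delivered OCCNE playbook; the group name is hypothetical.
- hosts: occne_hosts
  become: true
  tasks:
    - name: Update all packages from the configured YUM mirror
      yum:
        name: '*'
        state: latest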


Figure B-11 Update OS on CNE Nodes - Ansible

Servers do a Yum update


Up to this point, the host OS management network could have been a private network without
access to the outside world. At this point, the servers have to reach out to the defined
repositories to access the Yum repository. Implementation can choose to either provide public
addresses on the host OS instances, or a NAT function can be employed on the routers to hide
the host OS network topology. If a NAT is used, it is expected to be a 1 to n NAT, rather than a
1 to 1. Further, ACLs can be added to prevent any other type of communication in or out of the
frame on this network.
At the end of this installation process, the servers should reboot.


Figure B-12 Update OS on CNE Nodes - Yum pull

Harden the OS
Ansible instructs the servers to run a script to harden the OS.


Figure B-13 Harden the OS

Install VMs as Needed


Some hosts in the CNE solution are to have VMs to address certain functionality, such as the
DB service. The management server has a dual role of hosting the configuration aspects as well
as hosting a DB data node VM. The K8s master nodes are to host a DB management node VM.
This section shows the installation process for this activity.

Create the Guests


Ansible creates the guests on the target hosts.


Figure B-14 Create the Guest

Install the Guest OS


Following a similar process to sections 2.5.1-2.5.3, the VM OS is installed, updated, and
hardened. The details of how this is done are slightly different from the host OS, as an iLO
connection is not necessary; however, the steps are similar enough that they are not detailed here.

Install MySQL
Execute Ansible Playbooks from DB Installer Container
Ansible playbooks are executed from the DB installer container against the DB nodes to install
and configure MySQL.

Install Kubernetes on CNE Nodes


Customize Configuration Files
If needed, customize site-specific or deployment specific files.
Run Kubespray Installer
For each master and worker node, install the cluster.
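For orientation, a typical Kubespray invocation looks like the sketch below; the actual inventory
path and variables are supplied by the OCCNE install containers.

# Sketch only -- paths are placeholders.
ansible-playbook -i inventory/occne/hosts.ini --become cluster.yml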
Ansible/Kubespray Reaches Out to Servers to Perform Install


Figure B-15 Install the Cluster on CNE Nodes

Servers Reach Out to Repos and Install Software


This is the 2nd instance where the host OS interfaces need to reach a distant repo. Thus,
another NAT traversal is needed. Any ACL restricting access in/out of the solution needs to
account for this traffic.


Figure B-16 Install the Cluster on CNE Nodes - Pull in Software

Configure Common Services on CNE Cluster


Customize Site or Deployment Specific Files
If needed, customize site or deployment specific files, such as values files.
Run Configurator on Kubernetes Nodes
Install the Common Services using Helm install playbooks. Kubernetes will ensure appropriate
distribution of all Common Services in the cluster.
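As an illustration (Helm 2 syntax, as used at the time of this release), a single common service
might be installed as shown below; the actual charts, release names and values files are provided
by the Configurator container.

# Illustrative only -- chart, release name and values file are placeholders.
helm install stable/prometheus --name occne-prometheus -f prometheus-values.yaml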
Ansible Connects to K8s to Run Helm


Figure B-17 Execute Helm on Master Node

Helm Pulls Needed Items from Repositories


In this step, the Cluster IP sources the communication to pull from Helm repositories and
Docker registries to install the needed services. The Values files used by Helm are provided in
the Configurator container.


Figure B-18 Master Node Pulls from Repositories

Topology Connection Tables


Enclosure Connections

Blade Server Connections


The HP BL460 Gen10 Blade Server has a base set of NICs that connect internally to the
enclosure's IO bays 1 and 2. These connections are "hard-wired" and are not documented here.
Since no additional NICs are planned beyond the base pair of NICs, no further network
connectivity for the blades needs to be documented. Blade specifications are linked here:
https://www.hpe.com/us/en/product-catalog/servers/proliant-servers/pip.specifications.hpe-proliant-bl460c-gen10-server-blade.1010025832.html

OA Connections
The Enclosure's Onboard Administrator (OA) will be deployed as a redundant pair, with each
connecting with 1GE copper connection to the respective ToR switches' SFP+ ports.

Topology Connections

Enclosure Switch Connections


The HP 6127XLG switch (https://www.hpe.com/us/en/product-catalog/servers/server-interconnects/pip.hpe-6127xlg-blade-switch.8699023.html) will have 4x10GE fiber (or DAC)
connections between it and the respective ToR switches' SFP+ ports.

Table B-11 Enclosure Switch Connections

Switch Port Name/ID (From)    Destination (To)    Cable Type    Module Required
Internal 1 Blade 1, NIC (1 for IObay1, 2 for Internal None
IObay2)
Internal 2 Blade 2, NIC (1 for IObay1, 2 for Internal None
IObay2)
Internal 3 Blade 3, NIC (1 for IObay1, 2 for Internal None
IObay2)
Internal 4 Blade 4, NIC (1 for IObay1, 2 for Internal None
IObay2)
Internal 5 Blade 5, NIC (1 for IObay1, 2 for Internal None
IObay2)
Internal 6 Blade 6, NIC (1 for IObay1, 2 for Internal None
IObay2)
Internal 7 Blade 7, NIC (1 for IObay1, 2 for Internal None
IObay2)
Internal 8 Blade 8, NIC (1 for IObay1, 2 for Internal None
IObay2)
Internal 9 Blade 9, NIC (1 for IObay1, 2 for Internal None
IObay2)
Internal 10 Blade 10, NIC (1 for IObay1, 2 for Internal None
IObay2)
Internal 11 Blade 11, NIC (1 for IObay1, 2 for Internal None
IObay2)
Internal 12 Blade 12, NIC (1 for IObay1, 2 for Internal None
IObay2)
Internal 13 Blade 13, NIC (1 for IObay1, 2 for Internal None
IObay2)
Internal 14 Blade 14, NIC (1 for IObay1, 2 for Internal None
IObay2)
Internal 15 Blade 15, NIC (1 for IObay1, 2 for Internal None
IObay2)
Internal 16 Blade 16, NIC (1 for IObay1, 2 for Internal None
IObay2)
External 1 Uplink 1 to ToR Switch (A for Fiber (multi- 10GE Fiber
IObay1, B for IObay2) mode)
External 2 Uplink 2 to ToR Switch (A for Fiber (multi- 10GE Fiber
IObay1, B for IObay2) mode)
External 3 Uplink 3 to ToR Switch (A for Fiber (multi- 10GE Fiber
IObay1, B for IObay2) mode)
External 4 Uplink 4 to ToR Switch (A for Fiber (multi- 10GE Fiber
IObay1, B for IObay2) mode)
External 5 Not Used None None
External 6 Not Used None None
External 7 Not Used None None
External 8 Not Used None None

Internal 17 Crosslink to IObay (2 for IObay1, 1 Internal None
for IObay2)
Internal 18 Crosslink to IObay (2 for IObay1, 1 Internal None
for IObay2)
Management OA Internal None

ToR Switch Connections

This section contains the point to point connections for the switches. The switches in the
solution will follow the naming scheme of "Switch<series number>", i.e. Switch1, Switch2,
etc; where Switch1 is the first switch in the solution, and switch2 is the second. These two form
a redundant pair. The switch datasheet is linked here: https://www.cisco.com/c/en/us/products/
collateral/switches/nexus-9000-series-switches/datasheet-c78-736651.html.
The first switch in the solution will serve to connect each server's first NIC in their respective
NIC pairs to the network. The next switch in the solution will serve to connect each server's
redundant (2nd) NIC in their respective NIC pairs to the network.

Table B-12 ToR Switch Connections

Switch Port Name/ID (From)    From Switch 1 to Destination    From Switch 2 to Destination    Cable Type    Module Required
1 RMS 1, FLOM NIC 1 RMS 1, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC
2 RMS 1, iLO RMS 2, iLO CAT 5e or 6A 1GE Cu SFP
3 RMS 2, FLOM NIC 1 RMS 2, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC
4 RMS 3, FLOM NIC 1 RMS 3, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC
5 RMS 3, iLO RMS 4, iLO CAT 5e or 6A 1GE Cu SFP
6 RMS 4, FLOM NIC 1 RMS 4, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC
7 RMS 5, FLOM NIC 1 RMS 5, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC
8 RMS 5, iLO RMS 6, iLO CAT 5e or 6A 1GE Cu SFP
9 RMS 6, FLOM NIC 1 RMS 6, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC
10 RMS 7, FLOM NIC 1 RMS 7, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC
11 RMS 7, iLO RMS 8, iLO CAT 5e or 6A 1GE Cu SFP
12 RMS 8, FLOM NIC 1 RMS 8, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC
13 RMS 9, FLOM NIC 1 RMS 9, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC

14 RMS 9, iLO RMS 10, iLO CAT 5e or 6A 1GE Cu SFP
15 RMS 10, FLOM NIC 1 RMS 10, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC
16 RMS 11, FLOM NIC 1 RMS 11, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC
17 RMS 11, iLO RMS 12, iLO CAT 5e or 6A 1GE Cu SFP
18 RMS 12, FLOM NIC 1 RMS 12, FLOM NIC 2 Cisco 10GE Integrated in
DAC DAC
19 Enclosure 6, OA 1, Enclosure 6, OA 2, Mngt CAT 5e or 6A 1GE Cu SFP
Mngt
20 Enclosure 6, IOBay 1, Enclosure 6, IOBay 2, Port Cisco 10GE Integrated in
Port 17 17 DAC DAC
21 Enclosure 6, IOBay 1, Enclosure 6, IOBay 2, Port Cisco 10GE Integrated in
Port 18 18 DAC DAC
22 Enclosure 6, IOBay 1, Enclosure 6, IOBay 2, Port Cisco 10GE Integrated in
Port 19 19 DAC DAC
23 Enclosure 6, IOBay 1, Enclosure 6, IOBay 2, Port Cisco 10GE Integrated in
Port 20 20 DAC DAC
24 Enclosure 5, OA 1, Enclosure 5, OA 2, Mngt CAT 5e or 6A 1GE Cu SFP
Mngt
25 Enclosure 5, IOBay 1, Enclosure 5, IOBay 2, Port Cisco 10GE Integrated in
Port 17 17 DAC DAC
26 Enclosure 5, IOBay 1, Enclosure 5, IOBay 2, Port Cisco 10GE Integrated in
Port 18 18 DAC DAC
27 Enclosure 5, IOBay 1, Enclosure 5, IOBay 2, Port Cisco 10GE Integrated in
Port 19 19 DAC DAC
28 Enclosure 5, IOBay 1, Enclosure 5, IOBay 2, Port Cisco 10GE Integrated in
Port 20 20 DAC DAC
29 Enclosure 4, OA 1, Enclosure 4, OA 2, Mngt CAT 5e or 6A 1GE Cu SFP
Mngt
30 Enclosure 4, IOBay 1, Enclosure 4, IOBay 2, Port Cisco 10GE Integrated in
Port 17 17 DAC DAC
31 Enclosure 4, IOBay 1, Enclosure 4, IOBay 2, Port Cisco 10GE Integrated in
Port 18 18 DAC DAC
32 Enclosure 4, IOBay 1, Enclosure 4, IOBay 2, Port Cisco 10GE Integrated in
Port 19 19 DAC DAC
33 Enclosure 4, IOBay 1, Enclosure 4, IOBay 2, Port Cisco 10GE Integrated in
Port 20 20 DAC DAC
34 Enclosure 3, OA 1, Enclosure 3, OA 2, Mngt CAT 5e or 6A 1GE Cu SFP
Mngt
35 Enclosure 3, IOBay 1, Enclosure 3, IOBay 2, Port Cisco 10GE Integrated in
Port 17 17 DAC DAC
36 Enclosure 3, IOBay 1, Enclosure 3, IOBay 2, Port Cisco 10GE Integrated in
Port 18 18 DAC DAC
37 Enclosure 3, IOBay 1, Enclosure 3, IOBay 2, Port Cisco 10GE Integrated in
Port 19 19 DAC DAC

38 Enclosure 3, IOBay 1, Enclosure 3, IOBay 2, Port Cisco 10GE Integrated in
Port 20 20 DAC DAC
39 Enclosure 2, OA 1, Enclosure 2, OA 2, Mngt CAT 5e or 6A 1GE Cu SFP
Mngt
40 Enclosure 2, IOBay 1, Enclosure 2, IOBay 2, Port Cisco 10GE Integrated in
Port 17 17 DAC DAC
41 Enclosure 2, IOBay 1, Enclosure 2, IOBay 2, Port Cisco 10GE Integrated in
Port 18 18 DAC DAC
42 Enclosure 2, IOBay 1, Enclosure 2, IOBay 2, Port Cisco 10GE Integrated in
Port 19 19 DAC DAC
43 Enclosure 2, IOBay 1, Enclosure 2, IOBay 2, Port Cisco 10GE Integrated in
Port 20 20 DAC DAC
44 Enclosure 1, OA 1, Enclosure 1, OA 2, Mngt CAT 5e or 6A 1GE Cu SFP
Mngt
45 Enclosure 1, IOBay 1, Enclosure 1, IOBay 2, Port Cisco 10GE Integrated in
Port 17 17 DAC DAC
46 Enclosure 1, IOBay 1, Enclosure 1, IOBay 2, Port Cisco 10GE Integrated in
Port 18 18 DAC DAC
47 Enclosure 1, IOBay 1, Enclosure 1, IOBay 2, Port Cisco 10GE Integrated in
Port 19 19 DAC DAC
48 Enclosure 1, IOBay 1, Enclosure 1, IOBay 2, Port Cisco 10GE Integrated in
Port 20 20 DAC DAC
49 Mate Switch, Port 49 Mate Switch, Port 49 Cisco 40GE Integrated in
DAC DAC
50 Mate Switch, Port 50 Mate Switch, Port 50 Cisco 40GE Integrated in
DAC DAC
51 OAM Uplink to OAM Uplink to Customer 40GE (MM or 40GE QSFP
Customer SM) Fiber
52 Signaling Uplink to Signaling Uplink to 40GE (MM or 40GE QSFP
Customer Customer SM) Fiber
53 Unused Unused
54 Unused Unused
Management RMS 1, NIC 2 (1GE) RMS 1, NIC 3 (1GE) CAT5e or CAT None (RJ45
(Ethernet) 6A port)
Management Unused Unused None None
(Serial)

Rackmount Server Connections


Server quickspecs can be found here: https://h20195.www2.hpe.com/v2/getdocument.aspx?docname=a00008180enw
The HP DL380 Gen10 RMS will be configured with an iLO, a 4x1GE LOM, and a 2x10GE
SFP+ FLOM.
• iLO: The integrated Lights Out management interface (iLO) contains an ethernet out of
band management interface for the server. This connection is 1GE RJ45.


• 4x1GE LOM: For most servers in the solution, their 4x1GE LOM ports will be unused.
The exception is the first server in the first frame. This server will serve as the
management server for the ToR switches. In this case, the server will use 2 of the LOM
ports to connect to ToR switches' respective out of band ethernet management ports. These
connections will be 1GE RJ45 (CAT 5e or CAT 6).
• 2x10GE FLOM: Every server will be equipped with a 2x10GE Flex LOM card (or
FLOM). These will be for in-band, or application and solution management traffic. These
connections are 10GE fiber (or DAC) and will terminate to the ToR switches' respective
SFP+ ports.
All RMS in the frame will only use the 10GE FLOM connections, except for the "management
server", the first server in the frame, which will have some special connections as listed below:

Table B-13 Management Server Connections

Server Interface (From)   Destination              Cable Type       Module Required     Notes
Base NIC1 (1GE)           Unused                   None             None
Base NIC2 (1GE)           Switch1A Ethernet Mngt   CAT5e or 6a      None                Switch Initialization
Base NIC3 (1GE)           Switch1B Ethernet Mngt   CAT5e or 6a      None                Switch Initialization
Base NIC4 (1GE)           Unused                   None             None
FLOM NIC1                 Switch1A Port 1          Cisco 10GE DAC   Integrated in DAC   OAM, Signaling, Cluster
FLOM NIC2                 Switch1B Port 1          Cisco 10GE DAC   Integrated in DAC   OAM, Signaling, Cluster
USB Port1                 USB Flash Drive          None             None                Bootstrap Host Initialization Only (temporary)
USB Port2                 Keyboard                 USB              None                Bootstrap Host Initialization Only (temporary)
USB Port3                 Mouse                    USB              None                Bootstrap Host Initialization Only (temporary)
Monitor Port              Video Monitor            DB15             None                Bootstrap Host Initialization Only (temporary)

Network Redundancy Mechanisms


This section is intended to cover the redundancy mechanisms used within the CNE Platform.
With all links, BFD should be investigated as an optimal failure detection strategy.

Server Level Redundancy


The blade server hardware will be configured with a base pair of 10GE NICs that map to
Enclosure IO bays 1 and 2. IO bays 1 and 2 will be equipped with 10GE switches. The blade
server OS configuration must pair the base pair of NICs in an active/active configuration.


Figure B-19 Blade Server NIC Pairing

The rackmount servers (RMS) will be configured with a base quad 1GE NICs that will be
mostly unused (except for switch management connections on the management server). The
RMS will also be equipped with a 10GE Flex LOM (FLOM) card. The FLOM NIC ports will
be connected to the ToR switches. The RMS OS configuration must pair the FLOM NIC ports
in an active/active configuration.

Figure B-20 Rackmount Server NIC Pairing

Production Use-Case
The production environment will use LACP mode for active/active NIC pairing. This is so the
NICs can form one logical interface, using a load-balancing algorithm involving a hash of
source/dest MAC or IP pairs over the available links. For this to work, the upstream switches
need to be "clustered" as a single logical switch. LACP mode will not work if the upstream
switches are operating as independent switches (not sharing the same switching fabric). The


current projected switches to be used in the solution are capable of a "clustering" technology,
such as HP's IRF, and Cisco's vPC.
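
A minimal sketch of an LACP-mode team definition on an OL7 host is shown below
(ifcfg-team0); option values are illustrative and must match the delivered host configuration
and the switch-side port-channel settings.

# /etc/sysconfig/network-scripts/ifcfg-team0 -- sketch only.
DEVICE=team0
DEVICETYPE=Team
ONBOOT=yes
BOOTPROTO=none
TEAM_CONFIG='{"runner": {"name": "lacp", "active": true, "fast_rate": true}, "link_watch": {"name": "ethtool"}}'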

Lab Use-Case
Some lab infrastructure will be able to support the production use-case. However, due to its
dependence on switching technology, and the possibility that much of the lab will not have the
dependent switch capabilities, the NIC pairing strategy will need to support an active/active
mode that does not have dependence on switch clustering technologies (adaptive load
balancing, round-robin), active/standby that does not have dependence on switch clustering
technologies, or a simplex NIC configuration for non-redundant topologies.

Enclosure Switch Redundancy

To support LACP mode of NIC teaming, the Enclosure switches will need to be clustered
together as one logical switch. This requires the switches to be connected together using the
Enclosure's internal pathing between the switches. Below is an Enclosure switch interconnect
table for reference.
Each blade server will form a 2x10GE Link Aggregation Group (LAG) to the upstream
enclosure switches. Up to 16 blades will be communicating through these enclosure switches.
Without specific projected data rates, the enclosure uplinks to the Top of Rack (ToR) switches
will be sized to a 4x10GE LAG each. Thus, with the switches logically grouped together, an
8x10GE LAG will be formed to the ToR.
For simplicity's sake, the figure below depicts a single black line for each connection. This
black line may represent one or more links between the devices. Consult the interconnect tables
for what the connection actually represents.


Figure B-21 Logical Switch View

ToR Switch Redundancy


The ToR switch is intended to be deployed as a single logical unit, using a switch clustering
technology. If the ToR switch does not support switch clustering, then, minimally, the switch
must support Virtual Port channeling technology that allows the downlinks to both enclosure
switches to be logically grouped into a common port channel. This is to simplify design,
increase throughput, and shorten failover time in the event of a switch failure. Connections
upstream to the customer network will be dependent on customer requirements. For the initial
launch of CNE, the uplink requirement will vary depending on OAM or Signaling uplink
requirements.

OAM Uplink Redundancy


The OAM network uplinks are expected to use static routing in the first deployment of CNE for
a targeted customer. This means that the first ToR switch will have a default route to a specific
customer OAM router interface, while the other ToR switch has a default route to a different
customer OAM router interface. If the ToR switches are clustered together as one logical
switch, then this behavior should still work in certain failure scenarios, but more testing would
be needed. For example, if the link to OAM subnet 1 were to go down, then the switch physical
or virtual interface to OAM subnet 1 should go down, and thus be removed from the route
table. If, however, the switch link doesn't go down, but the customer OAM router interface


were to become unreachable, the static default route to OAM subnet 1 would still be active, as
there is no intelligence at play to converge to a different default route. This is an area in need of
further exploration and development.
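
For illustration, the static default routes described above amount to configuration of the
following form on the ToR switches (NX-OS style; addresses are placeholders):

! Sketch only -- next-hop addresses are placeholders.
! ToR switch A
ip route 0.0.0.0/0 10.75.180.1
! ToR switch B
ip route 0.0.0.0/0 10.75.180.65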

Figure B-22 OAM Uplink View

Signaling Uplink Redundancy

The signaling network uplinks are expected to use OSPF routing protocol with the customer
switches to determine optimal and available route paths to customer signaling router interfaces.
This implementation will require tuning with the customer network for optimal performance.
If the ToR switches are able to cluster together as one logical switch, then there is no need for
an OSPF relationship between the ToR switches. In this case, they would share a common route
table and have two possible routes out to the customer router interfaces. If, however, they do
not cluster together as a single logical unit, then there would be an OSPF relationship between
the two to share route information.
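
A minimal sketch of the switch-side OSPF configuration (NX-OS style) is shown below; the
process tag, router ID, interface and area are placeholders, and the real tuning is done with the
customer network.

! Sketch only -- values are placeholders.
feature ospf
router ospf 1
  router-id 10.75.181.2
interface Vlan4
  ip router ospf 1 area 0.0.0.0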


Figure B-23 Top of Rack Customer Uplink View

OAM and Signaling Separation

Cloud Native networks are typically flat networks, consisting of a single subnet within a
cluster. The Kubernetes networking technology is natively a single network IP space for the
cluster. This presents difficulty when deploying in telecom networks that still maintain a strict
OAM and Signaling separation in the customer infrastructure. The OC-CNE will provide
metalLB load-balancer with BGP integration to the ToR switches as a means to address this
problem. Each service end-point will configure itself to use a specific pool of addresses
configured within metalLB. There will be two address pools to choose from, OAM and
Signaling. As each service is configured with an address from a specific address pool, BGP will
share a route to that service over the cluster network. At the ToR, some method of advertising
these address pools to OAM and signaling paths will be needed. OAM service endpoints will
likely be addressed through static route provisioning. Signaling service endpoints will likely
redistribute just one address pool (signaling) into the OSPF route tables.
OAM and Signaling configurations and test results are in the following page:


Figure B-24 OAM and Signaling Separation


OAM type common services, such as EFK, Prometheus, and Grafana will have their service
endpoints configured from the OAM address pool. Signaling services, like the 5G NFs, will
have their service endpoints configured from the signaling address pool.
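
A sketch of the corresponding MetalLB configuration (the ConfigMap format used by MetalLB
0.x) with two BGP address pools is shown below; peer addresses, ASNs and pool ranges are
placeholders, not the delivered values.

# Illustrative MetalLB ConfigMap with two BGP address pools.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 172.16.100.1    # ToR switch (placeholder)
      peer-asn: 64501
      my-asn: 64512
    address-pools:
    - name: oam
      protocol: bgp
      addresses:
      - 10.75.200.0/28
    - name: signaling
      protocol: bgp
      addresses:
      - 10.75.201.0/28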

Install VMs for MySQL Nodes and Management Server
The following procedure creates the Virtual Machines (VMs) used for the MySQL Cluster
nodes (Management nodes, Data nodes and SQL nodes) and the Bastion Host VM on each
Storage Host, and installs Oracle Linux 7.5 on each VM. The procedure for installing MySQL
Cluster on the VMs is documented in the Database Tier Installer.
After all the hosts are provisioned using the os-install container, this procedure is used to
create the VMs on the Kubernetes Master Nodes and the Storage Hosts.
The procedure below details the steps required for installing the Bastion Hosts on the Storage
Hosts and the MySQL Cluster node VMs. It requires all the network information needed to
create the VMs on the different host servers (Kubernetes Master Nodes and Storage Hosts).
The VMs are created manually using the virt-install CLI tool (a generic sketch is shown below),
and MySQL Cluster is installed using the db-install docker container as outlined in the OCCNE
Database Tier Installer.
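
A generic virt-install sketch for one such VM is shown below; the names, sizes and paths are
placeholders (sized per the VM profile table later in this section), and the authoritative
commands and kickstart files are given in the steps of this procedure.

# Generic sketch only -- values are placeholders, not the delivered commands.
virt-install --name db-mgmnode-1 --ram 8192 --vcpus 4 \
  --disk path=/var/lib/libvirt/images/db-mgmnode-1.qcow2,size=300 \
  --network bridge=teambr0 --graphics none \
  --location /mnt/nfsoccne/OracleLinux-7.5-x86_64-disc1.iso \
  --initrd-inject /tmp/DB_MGMNODE_1.ks \
  --extra-args "ks=file:/DB_MGMNODE_1.ks console=ttyS0"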


MySQL Cluster Manager is a distributed client/server application consisting of two main


components. The MySQL Cluster Manager agent is a set of one or more agent processes that
manage NDB Cluster nodes, and the MySQL Cluster Manager client provides a command-line
interface to the agent's management functions.
MySQL Cluster Manager binary distributions that include MySQL NDB Cluster is used for
installing MySQL Cluster Manager and MySQL NDB Cluster.
Steps for downloading MySQL Cluster Manager from the Oracle Software Delivery Cloud
(OSDC) are found in the Pre-flight Checklist.

MySQL Cluster Topology


In OCCNE 1.0, MySQL Cluster is installed as shown below in each cluster.

Figure B-25 MySQL Cluster Topology

Virtual Machine
MySQL Cluster is installed on virtual machines, so the number of VMs required on the Storage
Hosts and K8 Master Nodes is as shown below. Each K8 master node is used to create 1 VM
for installing a MySQL Management node, so there are 3 MySQL Management nodes in the
MySQL Cluster. On each Storage Host, 4 VMs are created: 2 VMs for data nodes, 1 VM for an
SQL node and 1 VM for the Bastion Host.
No Of MySQL Management Nodes: 3
No Of Data Nodes: 4
No of SQL nodes: 2
No Of Bastion Hosts: 2
Below table shows VM's Created in Host servers:

Host Server No Of VM's Node Name


K8 Master node 1 1 MySQL management node 1

K8 Master node 2 1 MySQL management node 2
K8 Master node 3 1 MySQL management node 3
Storage Host 1 4 2 MySQL Data Node, 1 MySQL SQL node, 1
Bastion Host VM
Storage Host 2 4 2 MySQL Data Node,1 MySQL SQL node, 1
Bastion Host VM

VM Profile for Management and MySQL Cluster Nodes:

Node Type                RAM    HDD     vCPUs   No Of Nodes
MySQL Management Node    8GB    300GB   4       3
MySQL Data Node          50GB   800GB   10      4
MySQL SQL Node           16GB   600GB   10      2
Bastion Host             8GB    120GB   4       2

System Details

IP Address, host names for VM's, Network information for creating the VM's are captured in
OCCNE 1.0 Installation PreFlight Checklist

Prerequisites
1. All the host servers where VMs are created are captured in OCCNE Inventory File
   Preparation; the Kubernetes master nodes are listed under [kube-master] and the Storage
   Hosts are listed under [data_store].
2. All hosts should be provisioned using the os-install container as defined in the installed
   site hosts.ini file.
3. Oracle Linux 7.5 iso (OracleLinux-7.5-x86_64-disc1.iso) is copied in /var/occne in the
bastion host as specified in OCCNE Oracle Linux OS Installer procedure. This "/var/
occne" path is shared to other hosts as specified in OCCNE Configuration of the Bastion
Host.
4. Host names and IP Address, network information assigned to these VM's should be
captured in the Pre-flight Checklist.
5. Bastion Host should be installed in Storage Host(RMS2).
6. SSH keys configured in host servers by os-install container is stored in Management Node.
7. Storage Host(RMS1) and Storage Host(RMS2) should be configured with same SSH keys.
8. SSH keys should be configured in these VM's, so that db-install container can install these
VM's with the MySQL Cluster software, these SSH keys are configured in the VM's using
the kickstart files while creating VM's.


Limitations and Expectations


1. Both Storage Hosts will have one Management server VM, where Docker is installed; all
   the host servers are provisioned from this Management server VM.
2. Once both storage nodes and host servers are provisioned using the os-install container,
   VMs are created on the Kubernetes master and DB storage nodes.

References
1. https://linux.die.net/man/1/virt-install
2. https://linuxconfig.org/how-to-create-and-manage-kvm-virtual-machines-from-cli
3. https://www.cyberciti.biz/faq/kvm-install-centos-redhat-using-kickstart-ks-cfg/
4. https://opensource.com/business/16/9/linux-users-guide-lvm

Add bridge interface in all the hosts

Create a bridge interface on the Team(team0) interface for creating VM's.

Note:
Perform the steps below to create the bridge interfaces (teambr0 and vlan5-br) in each
Storage Host and the bridge interface (teambr0) in each Kubernetes Master Node, one host
at a time.


Table B-14 Procedure to install VMs for MySQL Nodes and Management Server

Step # Procedure Description


1.  Add bridge interface in all the hosts
    Create a bridge interface on the team (team0) interface for creating VMs.
    Note: Perform the steps below to create the bridge interfaces (teambr0 and vlan5-br) in
    each Storage Host and the bridge interface (teambr0) in each Kubernetes Master Node,
    one host at a time.
1. Create a bridge interface(teambr0) on team0 interface for host
network.
a. Login to the Host server
$ ssh admusr@10.75.216.XXX
$ sudo su
b. Note down the IP address, Gateway IP and DNS from the
team0 interface, these details can be obtained from the "/etc/
sysconfig/network-scripts/ifcfg-team0" interface file.

IP ADDRESS
GATEWAY IP
DNS

c. Create a new bridge interface (teambr0).
   $ nmcli c add type bridge con-name teambr0 ifname teambr0
d. Modify this newly added bridge interface by assigning the IP address, Gateway IP and
   DNS from the team0 interface.
   $ nmcli c mod teambr0 ipv4.method manual \
         ipv4.addresses <IPADDRESS/PREFIX> \
         ipv4.gateway <GATEWAYIP> \
         ipv4.dns <DNSIPS> \
         bridge.stp no
e. Edit "/etc/sysconfig/network-scripts/ifcfg-team0" interface
file, add BRIDGE="teambr0" at the end of this file.
$ echo'BRIDGE="teambr0"'>> /etc/sysconfig/
network-scripts/ifcfg-team0
f. Remove IPADDR, PREFIX, GATEWAY,DNS from "/etc/
sysconfig/network-scripts/ifcfg-team0" interface file using
the below command.
$ sed -i'/IPADDR/d;/PREFIX/d;/
GATEWAY/d;/DNS/d'/etc/sysconfig/network-
scripts/ifcfg-team0
2. Reboot the host server.
$ reboot
3. Create Signal bridge(vlan5-br) interface in Storage Hosts.
$ sudo su
$ ip link add link team0 name team0.5 type vlan id 5
$ ip link set team0.5 up
$ brctl addbr vlan5-br
$ ip link set vlan5-br up
$ brctl addif vlan5-br team0.5
4. Create the Signal bridge (vlan5-br) interface configuration files in the Storage Hosts to
   keep these interfaces persistent over reboot. Update the variables below in the ifcfg
   config files using the sed commands that follow.
   a. PHY_DEV
   b. VLAN_ID
   c. BRIDGE_NAME
   ============================================================
   Create the ifcfg-team0.5 and ifcfg-vlan5-br files in the
   /etc/sysconfig/network-scripts directory to keep these
   interfaces up over reboot.
   ============================================================
   [root@db-2 network-scripts]# vi /tmp/ifcfg-team0.VLAN_ID
VLAN=yes
TYPE=Vlan
PHYSDEV={PHY_DEV}
VLAN_ID={VLAN_ID}
REORDER_HDR=yes
GVRP=no
MVRP=no
PROXY_METHOD=none
BROWSER_ONLY=no
DEFROUTE=no
IPV4_FAILURE_FATAL=no
DEVICE={PHY_DEV}.{VLAN_ID}
NAME={PHY_DEV}.{VLAN_ID}
ONBOOT=yes
BRIDGE={BRIDGE_NAME}
NM_CONTROLLED=no

[root@db-2 network-scripts]# vi /tmp/ifcfg-BRIDGE_NAME
STP=no
BRIDGING_OPTS=priority=32768
TYPE=Bridge
PROXY_METHOD=none
BROWSER_ONLY=no

BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME={BRIDGE_NAME}
DEVICE={BRIDGE_NAME}
ONBOOT=yes
NM_CONTROLLED=no

$ cp /tmp/ifcfg-BRIDGE_NAME /etc/sysconfig/network-scripts/ifcfg-vlan5-br
$ chmod 644 /etc/sysconfig/network-scripts/ifcfg-vlan5-br
$ sed -i 's/{BRIDGE_NAME}/vlan5-br/g' /etc/sysconfig/network-scripts/ifcfg-vlan5-br

$ cp /tmp/ifcfg-team0.VLAN_ID /etc/sysconfig/network-scripts/ifcfg-team0.5
$ chmod 644 /etc/sysconfig/network-scripts/ifcfg-team0.5
$ sed -i 's/{BRIDGE_NAME}/vlan5-br/g' /etc/sysconfig/network-scripts/ifcfg-team0.5
$ sed -i 's/{PHY_DEV}/team0/g' /etc/sysconfig/network-scripts/ifcfg-team0.5
$ sed -i 's/{VLAN_ID}/5/g' /etc/sysconfig/network-scripts/ifcfg-team0.5
5. Reboot the host
$ reboot
Perform above steps in all the K8 Master nodes and Storage Hosts,
where MySQL VM's are created.


2.  Configure SSH keys in Storage Host(RMS2)
    After the bastion host is installed and configured and the OS installation is performed as
    defined in the host inventory file, the public key configured in Storage Host(RMS2) is
    different from the public key configured in every other host.
    The following steps are performed to configure the public key in Storage Host(RMS2).
1. Login to the bastion host and make sure the SSH keys generated by the os-install
   container are present in the "/var/occne/<cluster_name>" directory. The public key
   configured in every other host should be configured in the Storage Host(RMS1).
2. Change to the "/var/occne/<cluster_name>" directory.
   $ cd /var/occne/<cluster_name>
3. Copy the public key to Storage Host(RMS2). Update <cluster_name> and
   <RMS2_PRIVATE_KEY> in the command below and execute it to configure the SSH key.
   $ cat /var/occne/<cluster_name>/.ssh/occne_id_rsa.pub | ssh -i <RMS2_PRIVATE_KEY> admusr@172.16.3.6 "mkdir -p /home/admusr/.ssh; cat >> /home/admusr/.ssh/authorized_keys"
4. Verify whether we are able to login to Storage Host(RMS2) using
SSH key, Since public key is configured in Storage Host(RMS2),
it will not prompt for password.
$ ssh -i .ssh/occne_id_rsa admusr@10.75.216.XXX

3.  Mount Linux ISO
    The Oracle Linux 7.5 ISO (OracleLinux-7.5-x86_64-disc1.iso) is present in the "/var/occne"
    NFS path on the Bastion Host. This path should be mounted on all the Kubernetes Master
    Nodes and Storage Hosts for creating VMs.
1. Login to host.
$ ssh admusr@10.75.XXX.XXX
$ sudo su
2. Create a mount directory in a host.
$ mkdir -p /mnt/nfsoccne
$ chmod -R 755 /mnt/nfsoccne
3. Mount the nfs path from bastion host where Oracle Linux 7.5
iso(OracleLinux-7.5-x86_64-disc1.iso) is present.
$ mount -t nfs <BASTION_HOST_IP>:/var/occne /mnt/nfsoccne
Perform above steps in all the K8 Master nodes and Storage Hosts,
where MySQL VM's are created.


4.  Creating Logical Volumes in Storage Hosts
    For the Management (Bastion Host) VMs and MySQL Management node VMs, the default
    logical volume ('ol') is used. For data nodes, two logical volumes are created and each
    logical volume is assigned to one Data Node VM.
    The procedures below are used for creating the logical volumes in the Storage Hosts. Each
    Storage Host contains 2 x 1.8 TB HDDs and 2 x 960 GB SSDs; the logical volumes are
    created using the SSD drives.
1. Login to Storage Host
$ sudo su
2. Set partition types to "Linux LVM(8e)" for /dev/sdc and /dev/sdd
using fdisk command.
$ fdisk /dev/sdc
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you


decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition


table
Building a new DOS disklabel with disk identifier
0x5d3d670f.

The device presents a logical sector size that is


smaller than
the physical sector size. Aligning to a physical
sector (or optimal
I/O) size boundary is recommended, or performance
may be impacted.

Command (m for help): n


Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-1875385007, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G}
(2048-1875385007, default 1875385007):
Using default value 1875385007
Partition 1 of type Linux and of size 894.3 GiB
is set

Command (m for help): t


Selected partition 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): p

Disk /dev/sdc: 960.2 GB, 960197124096 bytes,


1875385008 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096
bytes
I/O size (minimum/optimal): 4096 bytes / 4096
bytes
Disk label type: dos
Disk identifier: 0x5d3d670f
Device Boot Start End
Blocks Id System
/dev/sdc1 2048 1875385007
937691480 8e Linux LVM
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

Perform same steps for /dev/sdd to set the partition type to Linux
LVM(8e).
$ fdisk /dev/sdd
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you


decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition


table
Building a new DOS disklabel with disk identifier
0xa241f6b9.

The device presents a logical sector size that is


smaller than
the physical sector size. Aligning to a physical
sector (or optimal
I/O) size boundary is recommended, or performance
may be impacted.

Command (m for help): n


Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-1875385007, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G}
(2048-1875385007, default 1875385007):
Using default value 1875385007

Partition 1 of type Linux and of size 894.3 GiB


is set

Command (m for help): t


Selected partition 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): p

Disk /dev/sdd: 960.2 GB, 960197124096 bytes,


1875385008 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096
bytes
I/O size (minimum/optimal): 4096 bytes / 4096
bytes
Disk label type: dos
Disk identifier: 0xa241f6b9

Device Boot Start End


Blocks Id System
/dev/sdd1 2048 1875385007
937691480 8e Linux LVM

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.


Syncing disks.
3. Create physical volumes.
$ pvcreate /dev/sdc1
Physical volume "/dev/sdc1" successfully
created.
$ pvcreate /dev/sdd1
Physical volume "/dev/sdd1" successfully
created.
4. Create separate volume groups for each physical volumes.
$ vgcreate strip_vga /dev/sdc1
Volume group "strip_vga" successfully created
$ vgcreate strip_vgb /dev/sdd1
Volume group "strip_vgb" successfully created
5. Create logical volume's for data nodes
$ lvcreate -L 900G -n strip_lva strip_vga
Logical volume "strip_lva" created.
$ lvextend -l +100%FREE /dev/strip_vga/strip_lva

$ lvcreate -L 900G -n strip_lvb strip_vgb

Logical volume "strip_lvb" created.


$ lvextend -l +100%FREE /dev/strip_vgb/strip_lvb
These logical volumes are used for creating the MySQL Data node
VM's.
5.  Copy kickstart files
    For creating the MySQL node VMs, kickstart template files are used. Download the
    kickstart files from OHC.
6.  Steps for creating Bastion Host
    The Bastion Host is used for host provisioning, MySQL Cluster installation, and installing
    the hosts with Kubernetes and the common services. The virt-install tool is used for
    creating the Bastion Host.
    Bastion hosts are already created during the installation procedure, so this procedure is
    required only in case of re-installation of the Bastion Host on a Storage Host.
    1. Login to the Storage Host.
    2. Follow the procedure OCCNE Installation of the Bastion Host for creating the
       Management VM in RMS2.
    3. Follow the procedure Configuration of the Bastion Host for configuring the
       Management VM.
    Repeat these steps for creating the other Bastion Host on the other Storage Host.


7.  Steps for creating MySQL Management Node VM
    For installing the MySQL NDB Cluster, 3 MySQL Management node VMs are installed;
    in each Kubernetes master node, 1 MySQL Management node VM is created.
    Perform the steps below to install a MySQL Management Node VM on a Kubernetes
    Master Node.
1. Log in to the Kubernetes master node host and make sure the bridge interface (teambr0) is added on this host. Follow the steps in "2. Add bridge interface in all the hosts" to add the bridge interface (teambr0) if it does not exist.
[root@k8s-1 admusr]# ifconfig teambr0
teambr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.75.216.68  netmask 255.255.255.128  broadcast 10.75.216.127
        inet6 2606:b400:605:b827:1330:2c49:6b7e:8ffe  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::2738:43b3:347:cd43  prefixlen 64  scopeid 0x20<link>
        ether b4:b5:2f:6d:22:30  txqueuelen 0  (Ethernet)
        RX packets 217597  bytes 19182440 (18.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9193  bytes 1328986 (1.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
2. Mount the NFS path from the management VM on this Kubernetes master node; follow the steps in "4. Mount Linux ISO" to mount the NFS path /var/occne on the host servers.
3. Create the kickstart file for creating the MySQL Management node VM.
a. Change to the root user.
$ sudo su
b. Copy DB_MGM_TEMPLATE.ks into the /tmp directory on the Kubernetes master node host server.
c. Copy DB_MGM_TEMPLATE.ks to DB_MGMNODE_1.ks.
$ cp /tmp/DB_MGM_TEMPLATE.ks /tmp/DB_MGMNODE_1.ks
d. Update the kickstart file (DB_MGMNODE_1.ks) using the following commands to set the following file variables:
i. VLAN3_IPADDRESS: IP address assigned to this VM as configured in the hosts.ini inventory file (created using procedure: OCCNE Inventory File Preparation).


ii. VLAN3_GATEWAYIP: Gateway IP address for the IP address configured in the hosts.ini inventory file.
iii. VLAN3_NETMASKIP: Netmask for this network.
iv. NAMESERVERIPS: IP addresses of the DNS servers; multiple nameservers should be separated by a comma, for example: 10.10.10.1,10.10.10.2. If there are no name servers to configure, remove this variable from the kickstart file:
$ sed -i 's/--nameserver=NAMESERVERIPS//' /tmp/DB_MGMNODE_1.ks
v. NODEHOSTNAME: Host name of the VM as configured in the hosts.ini inventory file.
vi. NTPSERVERIPS: IP addresses of the NTP servers; multiple NTP servers should be separated by a comma, for example: 10.10.10.3,10.10.10.4
vii. HTTP_PROXY: HTTP proxy for yum. If not required, comment out the 'echo "proxy=HTTP_PROXY" >> /etc/yum.conf' line in the kickstart file:
$ sed -i 's/echo "proxy=HTTP_PROXY" >> \/etc\/yum.conf/#echo "proxy=HTTP_PROXY" >> \/etc\/yum.conf/' /tmp/DB_MGMNODE_1.ks
viii. PUBLIC_KEY: The SSH public key configured on the host (/home/admusr/.ssh/authorized_keys) is used to update the kickstart file, so that the VM can be accessed using the same private key generated during host provisioning.
Note: HTTP_PROXY in the commands below requires only the URL, as the "http://" is provided in the sed command.
$ sed -i 's/VLAN3_GATEWAYIP/ACTUAL_GATEWAY_IP/g' /tmp/DB_MGMNODE_1.ks
$ sed -i 's/VLAN3_IPADDRESS/ACTUAL_IPADDRESS/g' /tmp/DB_MGMNODE_1.ks
$ sed -i 's/NAMESERVERIPS/ACTUAL_NAMESERVERIPS/g' /tmp/DB_MGMNODE_1.ks
$ sed -i 's/VLAN3_NETMASKIP/ACTUAL_NETMASKIP/g' /tmp/DB_MGMNODE_1.ks
$ sed -i 's/NODEHOSTNAME/ACTUAL_NODEHOSTNAME/g' /tmp/DB_MGMNODE_1.ks
$ sed -i 's/NTPSERVERIPS/ACTUAL_NTPSERVERIPS/g' /tmp/DB_MGMNODE_1.ks
$ sed -i 's/HTTP_PROXY/http:\/\/ACTUAL_HTTP_PROXY/g' /tmp/DB_MGMNODE_1.ks
$ sed -e '/PUBLIC_KEY/{' -e 'r /home/admusr/.ssh/authorized_keys' -e 'd' -e '}' -i /tmp/DB_MGMNODE_1.ks

for Ex: Below commands show how to update these


values in the /tmp/DB_MGMNODE_1.ks file.
$ sed -i 's/VLAN3_GATEWAYIP/
172.16.3.1/g' /tmp/DB_MGMNODE_1.ks
$ sed -i 's/VLAN3_IPADDRESS/
172.16.3.91/g' /tmp/DB_MGMNODE_1.ks
$ sed -i 's/NAMESERVERIPS/
172.16.3.4/g' /tmp/DB_MGMNODE_1.ks
$ sed -i 's/VLAN3_NETMASKIP/
255.255.255.0/g' /tmp/DB_MGMNODE_1.ks
$ sed -i 's/NODEHOSTNAME/db-
mgm2.rainbow.lab.us.oracle.com/g' /tmp/
DB_MGMNODE_1.ks
$ sed -i 's/NTPSERVERIPS/
172.16.3.4/g' /tmp/DB_MGMNODE_1.ks
$ sed -i 's/HTTP_PROXY/http:\/\/www-
proxy.us.oracle.com:80/g' /tmp/
DB_MGMNODE_1.ks
$ sed -e '/PUBLIC_KEY/{' -e 'r /home/
admusr/.ssh/authorized_keys' -e 'd' -e
'}' -i /tmp/DB_MGMNODE_1.ks
4. After updating the DB_MGMNODE_1.ks kickstart file, use the command below to start the creation of the MySQL Management node VM. This command uses the /tmp/DB_MGMNODE_1.ks kickstart file to create and configure the MySQL Management node VM. Update <NDBMGM_NODE_NAME> as specified in the hosts.ini inventory file (created using procedure: OCCNE Inventory File Preparation) and <NDBMGM_NODE_DESC> in the command below.
$ virt-install --name <NDBMGM_NODE_NAME> --memory 8192 --memorybacking hugepages=yes --vcpus 4 \
      --metadata description=<NDBMGM_NODE_DESC> --autostart \
      --location /mnt/nfsoccne/OracleLinux-7.5-x86_64-disc1.iso \
      --initrd-inject=/tmp/DB_MGMNODE_1.ks --os-variant ol7.5 \
      --extra-args "ks=file:/DB_MGMNODE_1.ks console=tty0 console=ttyS0,115200" \
      --disk path=/var/lib/libvirt/images/<NDBMGM_NODE_NAME>.qcow2,size=120 \
      --network bridge=teambr0 --graphics none


For example, after updating <NDBMGM_NODE_NAME> and <NDBMGM_NODE_DESC> in the above command:
[root@k8s-1 admusr]# virt-install --name ndbmgmnodea1 --memory 8192 --memorybacking hugepages=yes --vcpus 4 \
      --metadata description=ndbmgmnodea_vm1 --autostart \
      --location /mnt/nfsoccne/OracleLinux-7.5-x86_64-disc1.iso \
      --initrd-inject=/tmp/DB_MGMNODE_1.ks --os-variant ol7.5 \
      --extra-args "ks=file:/DB_MGMNODE_1.ks console=tty0 console=ttyS0,115200" \
      --disk path=/var/lib/libvirt/images/ndbmgmnodea1.qcow2,size=120 \
      --network bridge=teambr0 --graphics none
Starting install...
Retrieving file .treeinfo...
Retrieving file vmlinuz...
Retrieving file initrd.img...
Allocating 'ndbmgmnodea1.qcow2'
Connected to domain ndbmgmnodea1
Escape character is ^]
5. After the installation is complete, you are prompted to log in.
6. To exit from the virsh console, log out from the VM and then press the CTRL+'5' keys.
$ exit
Press the CTRL+'5' keys to exit from the virsh console.
Repeat these steps to create the remaining MySQL Management node VMs on the Kubernetes master nodes.
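The sed substitutions in sub-step 3.d above follow a fixed pattern, so they can also be applied in one pass with a small helper script. This is a minimal sketch only, not part of the official procedure; the values below are the example values used earlier and must be replaced with the site-specific values from the hosts.ini inventory file.
#!/bin/bash
# Sketch: apply all kickstart substitutions for one MySQL Management node VM.
# All values below are placeholders taken from the example above.
KS=/tmp/DB_MGMNODE_1.ks
GATEWAY=172.16.3.1
IPADDR=172.16.3.91
NAMESERVERS=172.16.3.4
NETMASK=255.255.255.0
NODE_HOSTNAME=db-mgm2.rainbow.lab.us.oracle.com
NTPSERVERS=172.16.3.4
PROXY='http:\/\/www-proxy.us.oracle.com:80'

sed -i \
    -e "s/VLAN3_GATEWAYIP/${GATEWAY}/g" \
    -e "s/VLAN3_IPADDRESS/${IPADDR}/g" \
    -e "s/NAMESERVERIPS/${NAMESERVERS}/g" \
    -e "s/VLAN3_NETMASKIP/${NETMASK}/g" \
    -e "s/NODEHOSTNAME/${NODE_HOSTNAME}/g" \
    -e "s/NTPSERVERIPS/${NTPSERVERS}/g" \
    -e "s/HTTP_PROXY/${PROXY}/g" \
    "${KS}"

# Insert the SSH public key in place of the PUBLIC_KEY marker, as in the manual step.
sed -e '/PUBLIC_KEY/{' -e 'r /home/admusr/.ssh/authorized_keys' -e 'd' -e '}' -i "${KS}"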



8. Steps for creating MySQL Data Node VM: MySQL Data node VMs are created in the Storage Hosts; each Storage Host contains 2 x 1.8 TB HDD and 2 x 960 GB SSD drives.
For the data nodes, the logical volumes are created using the SSD drives: each SSD drive is assigned to one logical volume, and each data node uses one logical volume. The procedure for creating the logical volumes is specified in "5. Creating Logical Volumes in Storage Hosts" above.
To create these VMs, the DB Data node kickstart template file (DB_DATANODE_TEMPLATE.ks) is used to generate the individual kickstart files for each MySQL Data node VM. These kickstart files are updated with all the required information (network, admin user, host names, NTP servers, DNS servers, SSH keys, and so on).
1. Log in to the Storage Host.
a. Check that the bridge interface (teambr0) is present; if not, follow the steps in "2. Add bridge interface in all the hosts".
$ ifconfig teambr0
teambr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.75.216.68  netmask 255.255.255.128  broadcast 10.75.216.127
        inet6 2606:b400:605:b827:1330:2c49:6b7e:8ffe  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::2738:43b3:347:cd43  prefixlen 64  scopeid 0x20<link>
        ether b4:b5:2f:6d:22:30  txqueuelen 0  (Ethernet)
        RX packets 217597  bytes 19182440 (18.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9193  bytes 1328986 (1.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
b. Check that the logical volumes (strip_lva, strip_lvb) exist in the Storage Host; if not, follow the steps in "5. Creating Logical Volumes in Storage Hosts".
$ lvs
  LV        VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root      ol        -wi-ao----  20.00g
  var       ol        -wi-ao----  <1.62t
  strip_lva strip_vga -wi-ao---- 894.25g
  strip_lvb strip_vgb -wi-ao---- 894.25g
c. Mount the NFS path on the host; follow the steps in "4. Mount Linux ISO" to mount the image from the shared NFS path.


2. Create the kickstart file for creating the MySQL Data node VM.
a. Change to the root user.
$ sudo su
b. Copy DB_DATANODE_TEMPLATE.ks into the /tmp directory on the DB Storage node host server.
c. Copy DB_DATANODE_TEMPLATE.ks to DATANODEVM_1.ks.
$ cp /tmp/DB_DATANODE_TEMPLATE.ks /tmp/DATANODEVM_1.ks
d. Update the kickstart file (DATANODEVM_1.ks) using the following commands to set the following file variables:
i. VLAN3_IPADDRESS: IP address assigned to this VM as configured in the hosts.ini inventory file (created using procedure: OCCNE Inventory File Preparation).
ii. VLAN3_GATEWAYIP: Gateway IP address for the IP address configured in the hosts.ini inventory file.
iii. VLAN3_NETMASKIP: Netmask for this network.
iv. NAMESERVERIPS: IP addresses of the DNS servers; multiple nameservers should be separated by a comma, for example: 10.10.10.1,10.10.10.2. If there are no name servers to configure, remove this variable from the kickstart file:
$ sed -i 's/--nameserver=NAMESERVERIPS//' /tmp/DATANODEVM_1.ks
v. NODEHOSTNAME: Host name of the VM as configured in the hosts.ini inventory file.
vi. NTPSERVERIPS: IP addresses of the NTP servers; multiple NTP servers should be separated by a comma, for example: 10.10.10.3,10.10.10.4
vii. HTTP_PROXY: HTTP proxy for yum. If not required, comment out the 'echo "proxy=HTTP_PROXY" >> /etc/yum.conf' line in the kickstart file:
$ sed -i 's/echo "proxy=HTTP_PROXY" >> \/etc\/yum.conf/#echo "proxy=HTTP_PROXY" >> \/etc\/yum.conf/' /tmp/DATANODEVM_1.ks
viii. PUBLIC_KEY: The SSH public key configured on the host (/home/admusr/.ssh/authorized_keys) is used to update the kickstart file, so that the VM can be accessed using the same private key generated during host provisioning.
Note: HTTP_PROXY in the commands below requires only the URL, as the "http://" is provided in the sed command.


$ sed -i 's/VLAN3_GATEWAYIP/ACTUAL_GATEWAY_IP/g' /tmp/DATANODEVM_1.ks
$ sed -i 's/VLAN3_IPADDRESS/ACTUAL_IPADDRESS/g' /tmp/DATANODEVM_1.ks
$ sed -i 's/NAMESERVERIPS/ACTUAL_NAMESERVERIPS/g' /tmp/DATANODEVM_1.ks
$ sed -i 's/VLAN3_NETMASKIP/ACTUAL_NETMASKIP/g' /tmp/DATANODEVM_1.ks
$ sed -i 's/NODEHOSTNAME/ACTUAL_NODEHOSTNAME/g' /tmp/DATANODEVM_1.ks
$ sed -i 's/NTPSERVERIPS/ACTUAL_NTPSERVERIPS/g' /tmp/DATANODEVM_1.ks
$ sed -i 's/HTTP_PROXY/ACTUAL_HTTP_PROXY/g' /tmp/DATANODEVM_1.ks
$ sed -e '/PUBLIC_KEY/{' -e 'r /home/admusr/.ssh/authorized_keys' -e 'd' -e '}' -i /tmp/DATANODEVM_1.ks
Similarly, generate the DATANODEVM_2.ks, DATANODEVM_3.ks, and DATANODEVM_4.ks kickstart files, which are used for creating the other MySQL Data node VMs.
3. After updating the DATANODEVM_1.ks kickstart file, use the commands below to start the creation of the MySQL Data node VMs. Each command uses the corresponding /tmp/DATANODEVM_<n>.ks kickstart file to create and configure a MySQL Data node VM. Update <DATANODEVM_NAME> as specified in the hosts.ini inventory file (created using procedure: OCCNE Inventory File Preparation) and <DATANODEVM_DESC> in the commands below.
For creating the ndbdatanodea1 Data node VM in DB Storage Node 1:
$ virt-install --name <DATANODEVM_NAME> --memory 51200 --memorybacking hugepages=yes --vcpus 10 \
      --metadata description=<DATANODEVM_DESC> --autostart \
      --location /mnt/nfsoccne/OracleLinux-7.5-x86_64-disc1.iso \
      --initrd-inject=/tmp/DATANODEVM_1.ks --os-variant=ol7.5 \
      --extra-args="ks=file:/DATANODEVM_1.ks console=tty0 console=ttyS0,115200n8" \
      --disk path=/var/lib/libvirt/images/<DATANODEVM_NAME>.qcow2,size=100 --disk path=/dev/mapper/strip_vga-strip_lva \
      --network bridge=teambr0 --nographics

For creating the ndbdatanodea2 Data node VM in DB Storage Node 2:
$ virt-install --name <DATANODEVM_NAME> --memory 51200 --memorybacking hugepages=yes --vcpus 10 \
      --metadata description=<DATANODEVM_DESC> --autostart \
      --location /mnt/nfsoccne/OracleLinux-7.5-x86_64-disc1.iso \
      --initrd-inject=/tmp/DATANODEVM_2.ks --os-variant=ol7.5 \
      --extra-args="ks=file:/DATANODEVM_2.ks console=tty0 console=ttyS0,115200n8" \
      --disk path=/var/lib/libvirt/images/<DATANODEVM_NAME>.qcow2,size=100 --disk path=/dev/mapper/strip_vga-strip_lva \
      --network bridge=teambr0 --nographics

For creating the ndbdatanodea3 Data node VM in DB Storage Node 1:
$ virt-install --name <DATANODEVM_NAME> --memory 51200 --memorybacking hugepages=yes --vcpus 10 \
      --metadata description=<DATANODEVM_DESC> --autostart \
      --location /mnt/nfsoccne/OracleLinux-7.5-x86_64-disc1.iso \
      --initrd-inject=/tmp/DATANODEVM_3.ks --os-variant=ol7.5 \
      --extra-args="ks=file:/DATANODEVM_3.ks console=tty0 console=ttyS0,115200n8" \
      --disk path=/var/lib/libvirt/images/<DATANODEVM_NAME>.qcow2,size=100 --disk path=/dev/mapper/strip_vgb-strip_lvb \
      --network bridge=teambr0 --nographics

For creating the ndbdatanodea4 Data node VM in DB Storage Node 2:
$ virt-install --name <DATANODEVM_NAME> --memory 51200 --memorybacking hugepages=yes --vcpus 10 \
      --metadata description=<DATANODEVM_DESC> --autostart \
      --location /mnt/nfsoccne/OracleLinux-7.5-x86_64-disc1.iso \
      --initrd-inject=/tmp/DATANODEVM_4.ks --os-variant=ol7.5 \
      --extra-args="ks=file:/DATANODEVM_4.ks console=tty0 console=ttyS0,115200n8" \
      --disk path=/var/lib/libvirt/images/<DATANODEVM_NAME>.qcow2,size=100 --disk path=/dev/mapper/strip_vgb-strip_lvb \
      --network bridge=teambr0 --nographics
4. After the installation is complete, you are prompted to log in.
5. To exit from the virsh console, log out from the VM and then press the CTRL+'5' keys.
$ exit
Press the CTRL+'5' keys to exit from the virsh console.
Repeat these steps to create all the MySQL Data node VMs in the Storage Hosts. A quick hugepage and VM status check is sketched below.
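Because each Data node VM requests 51200 MB of hugepage-backed memory (--memorybacking hugepages=yes), it can be useful to confirm hugepage availability on the Storage Host and that the VMs are running. This is an illustrative check only, not part of the official procedure:
# Check hugepage configuration and availability on the Storage Host.
$ grep -i huge /proc/meminfo

# List the VMs known to libvirt on this host; the Data node VMs should show as "running".
$ sudo virsh list --all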



9. Steps for creating MySQL SQL Node VM: Along with the Data node VMs, MySQL SQL node VMs are created in the Storage Hosts; one SQL node is created in each Storage Host.
To create these MySQL SQL node VMs, the SQL node kickstart template file (DB_SQL_TEMPLATE.ks) is used. The SQL node kickstart template file is updated with all the required information (network, admin user, host names, NTP servers, DNS servers, and so on).
1. Log in to the Storage Host and make sure the bridge interface (teambr0) and the bridge interface on VLAN 5 (vlan5-br) are present; if not, follow the steps in "Add bridge interface in all the hosts".
$ ifconfig teambr0
teambr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.75.216.68  netmask 255.255.255.128  broadcast 10.75.216.127
        inet6 2606:b400:605:b827:1330:2c49:6b7e:8ffe  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::2738:43b3:347:cd43  prefixlen 64  scopeid 0x20<link>
        ether b4:b5:2f:6d:22:30  txqueuelen 0  (Ethernet)
        RX packets 217597  bytes 19182440 (18.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9193  bytes 1328986 (1.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

$ ifconfig vlan5-br
vlan5-br: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::645e:5cff:febf:fbd6  prefixlen 64  scopeid 0x20<link>
        ether 48:df:37:7a:40:48  txqueuelen 1000  (Ethernet)
        RX packets 150600  bytes 7522366 (7.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7  bytes 626 (626.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
2. Create the kickstart file for creating the MySQL SQL node VM.
a. Change to the root user.
$ sudo su


b. Copy DB_SQL_TEMPLATE.ks into the /tmp directory on the Storage Host.
c. Copy DB_SQL_TEMPLATE.ks to DB_SQLNODE_1.ks.
$ cp /tmp/DB_SQL_TEMPLATE.ks /tmp/DB_SQLNODE_1.ks
d. Update the kickstart file (DB_SQLNODE_1.ks) using the following commands to set the following file variables. Replace the ACTUAL_* placeholders in the commands below with the actual values, which updates the DB_SQLNODE_1.ks file.
i. VLAN3_IPADDRESS: IP address assigned to this VM as configured in the hosts.ini inventory file (created using procedure: OCCNE Inventory File Preparation).
ii. VLAN3_NETMASKIP: Netmask for this network.
iii. SIGNAL_VLAN5_IPADDRESS: Signalling IP address assigned to this VM.
iv. SIGNAL_VLAN5_GATEWAYIP: Gateway IP address for the signalling network.
v. SIGNAL_VLAN5_NETMASKIP: Netmask for the signalling network.
vi. NAMESERVERIPS: IP addresses of the DNS servers; multiple nameservers should be separated by a comma, for example: 10.10.10.1,10.10.10.2. If there are no name servers to configure, remove this variable from the kickstart file:
$ sed -i 's/--nameserver=NAMESERVERIPS//' /tmp/DB_SQLNODE_1.ks
vii. NODEHOSTNAME: Host name of the VM as configured in the hosts.ini inventory file.
viii. NTPSERVERIPS: IP addresses of the NTP servers; multiple NTP servers should be separated by a comma, for example: 10.10.10.3,10.10.10.4
ix. HTTP_PROXY: HTTP proxy for yum. If not required, comment out the 'echo "proxy=HTTP_PROXY" >> /etc/yum.conf' line in the kickstart file:
$ sed -i 's/echo "proxy=HTTP_PROXY" >> \/etc\/yum.conf/#echo "proxy=HTTP_PROXY" >> \/etc\/yum.conf/' /tmp/DB_SQLNODE_1.ks
x. PUBLIC_KEY: The SSH public key configured on the host (/home/admusr/.ssh/authorized_keys) is used to update the kickstart file, so that the VM can be accessed using the same private key generated during host provisioning.
Note: HTTP_PROXY in the commands below requires only the URL, as the "http://" is provided in the sed command.
$ sed -i 's/VLAN3_IPADDRESS/ACTUAL_VLAN3_IPADDRESS/g' /tmp/DB_SQLNODE_1.ks
$ sed -i 's/VLAN3_NETMASKIP/ACTUAL_VLAN3_NETMASKIP/g' /tmp/DB_SQLNODE_1.ks
$ sed -i 's/NAMESERVERIPS/ACTUAL_NAMESERVERIPS/g' /tmp/DB_SQLNODE_1.ks
$ sed -i 's/SIGNAL_VLAN5_GATEWAYIP/ACTUAL_SIGNAL_VLAN5_GATEWAYIP/g' /tmp/DB_SQLNODE_1.ks
$ sed -i 's/SIGNAL_VLAN5_IPADDRESS/ACTUAL_SIGNAL_VLAN5_IPADDRESS/g' /tmp/DB_SQLNODE_1.ks
$ sed -i 's/SIGNAL_VLAN5_NETMASKIP/ACTUAL_SIGNAL_VLAN5_NETMASKIP/g' /tmp/DB_SQLNODE_1.ks
$ sed -i 's/NODEHOSTNAME/ACTUAL_NODEHOSTNAME/g' /tmp/DB_SQLNODE_1.ks
$ sed -i 's/NTPSERVERIPS/ACTUAL_NTPSERVERIPS/g' /tmp/DB_SQLNODE_1.ks
$ sed -i 's/HTTP_PROXY/ACTUAL_HTTP_PROXY/g' /tmp/DB_SQLNODE_1.ks
$ sed -e '/PUBLIC_KEY/{' -e 'r /home/admusr/.ssh/authorized_keys' -e 'd' -e '}' -i /tmp/DB_SQLNODE_1.ks
3. After updating the DB_SQLNODE_1.ks kickstart file, use the command below to start the creation of the MySQL SQL node VM. This command uses the /tmp/DB_SQLNODE_1.ks kickstart file to create and configure the MySQL SQL node VM. Update <NDBSQL_NODE_NAME> as specified in the hosts.ini inventory file (created using procedure: OCCNE Inventory File Preparation) and <NDBSQL_NODE_DESC> in the command below.
$ virt-install --name <NDBSQL_NODE_NAME> --memory 16384 --memorybacking hugepages=yes --vcpus 10 \
      --metadata description=<NDBSQL_NODE_DESC> --autostart \
      --location /mnt/nfsoccne/OracleLinux-7.5-x86_64-disc1.iso \
      --initrd-inject=/tmp/DB_SQLNODE_1.ks --os-variant ol7.5 \
      --extra-args "ks=file:/DB_SQLNODE_1.ks console=tty0 console=ttyS0,115200" \
      --disk path=/var/lib/libvirt/images/<NDBSQL_NODE_NAME>.qcow2,size=600 \
      --network bridge=teambr0 --network bridge=vlan5-br --graphics none
4. After the installation is complete, you are prompted to log in.
5. To exit from the virsh console, log out from the VM and then press the CTRL+'5' keys.
$ exit
Press the CTRL+'5' keys to exit from the virsh console.
Repeat these steps to create the MySQL SQL node VMs in the Storage Hosts. A quick interface check for the new VM is sketched below.
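Because the SQL node VM attaches to two bridges (teambr0 and vlan5-br), it should come up with two network interfaces. The following check from inside the VM is an illustrative sketch only; interface names can differ by platform:
$ ip -br addr show      # expect two Ethernet interfaces, one per bridge
$ ip route show         # confirm the default route and the signalling network route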
10. Unmount Linux ISO: After all the MySQL node VMs are created on all the Kubernetes master nodes and Storage Hosts, unmount "/mnt/nfsoccne" and delete this directory (see the troubleshooting sketch below if the unmount reports that the target is busy).
1. Log in to the host.
2. Unmount "/mnt/nfsoccne" on the host.
$ umount /mnt/nfsoccne
3. Delete the directory.
$ rm -rf /mnt/nfsoccne
Perform the above steps on all the Kubernetes master nodes and Storage Hosts.
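If the unmount fails with a "target is busy" error, something on the host is still using the mount point. The following commands are an illustrative way to find it (fuser assumes the psmisc package is installed):
$ findmnt /mnt/nfsoccne         # confirm the mount point is still mounted
$ sudo fuser -vm /mnt/nfsoccne  # list processes holding the mount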



11. Open firewall ports: For installing the MySQL Cluster in these VMs, the required ports must be opened in the firewall on the MySQL Management node VMs, Data node VMs, and SQL node VMs.
The table below lists the ports to be opened in the firewall:

Node Name                  Ports
MySQL Management Node      1862, 18620, 1186
MySQL Data Node            1862, 18620, 2202
MySQL SQL Node             1862, 18620, 3306
Firewall commands to open these ports are as follows (a verification command is shown after these steps):
1. For MySQL Management node VMs:
$ sudo su
$ firewall-cmd --zone=public --permanent --add-port=1862/tcp
$ firewall-cmd --zone=public --permanent --add-port=18620/tcp
$ firewall-cmd --zone=public --permanent --add-port=1186/tcp
$ firewall-cmd --reload
2. For MySQL Data node VMs:
$ sudo su
$ firewall-cmd --zone=public --permanent --add-port=1862/tcp
$ firewall-cmd --zone=public --permanent --add-port=18620/tcp
$ firewall-cmd --zone=public --permanent --add-port=2202/tcp
$ firewall-cmd --reload
3. For MySQL SQL node VMs:
$ sudo su
$ firewall-cmd --zone=public --permanent --add-port=1862/tcp
$ firewall-cmd --zone=public --permanent --add-port=18620/tcp
$ firewall-cmd --zone=public --permanent --add-port=3306/tcp
$ firewall-cmd --reload
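After the reload, the opened ports can be verified on each VM; the list should match the table above for that node type. This is an illustrative check only:
$ firewall-cmd --zone=public --list-ports   # for example, expect 1862/tcp 18620/tcp 1186/tcp on a Management node VM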
