Oracle Communications Cloud Native Environment (OC-CNE) Installation Guide
Release 1.0
F16979-01
July 2019
This software and related documentation are provided under a license agreement containing restrictions on use and
disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or
allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit,
perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation
of this software, unless required by law for interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you find
any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of
the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any
programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial
computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental
regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating
system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license
terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications. It is not
developed or intended for use in any inherently dangerous applications, including applications that may create a risk of
personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all
appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates
disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their
respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under
license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and
the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered
trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products, and
services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all
warranties of any kind with respect to third-party content, products, and services unless otherwise set forth in an
applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be responsible for any loss,
costs, or damages incurred due to your access to or use of third-party content, products, or services, except as set forth in
an applicable agreement between you and Oracle.
Contents
1 Introduction
Glossary 1-1
Key terms 1-1
Key Acronyms and Abbreviations 1-2
Overview 1-3
OCCNE Installation Overview 1-3
Frame and Component Overview 1-4
Frame Overview 1-4
Host Designations 1-5
Node Roles 1-6
Transient Roles 1-7
Create OCCNE Instance 1-8
How to use this document 1-10
Documentation Admonishments 1-11
Locate Product Documentation on the Oracle Help Center Site 1-12
Customer Training 1-12
My Oracle Support 1-12
Emergency Response 1-13
2 Installation Prerequisites
Obtain Site Data and Verify Site Installation 2-1
Configure Artifact Acquisition and Hosting 2-1
Oracle eDelivery Artifact Acquisition 2-1
Third Party Artifacts 2-1
Populate the MetalLB Configuration 2-2
3 Install Procedure
Initial Configuration - Prepare a Minimal Boot Strapping Environment 3-1
Installation of Oracle Linux 7.5 on Bootstrap Host 3-1
Configure the Installer Bootstrap Host BIOS 3-8
Configure Top of Rack 93180YC-EX Switches 3-14
Configure Addresses for RMS iLOs, OA, EBIPA 3-27
Configure Legacy BIOS on Remaining Hosts 3-35
Configure Enclosure Switches 3-41
Bastion Host Installation 3-47
Install Host OS onto RMS2 from the Installer Bootstrap Host (RMS1) 3-48
Installation of the Bastion Host 3-57
Configuration of the Bastion Host 3-63
Software Installation Procedures - Automated Installation 3-72
Oracle Linux OS Installer 3-72
Install Backup Bastion Host 3-81
Database Tier Installer 3-82
OCCNE Kubernetes Installer 3-86
OCCNE Automated Initial Configuration 3-89
A Artifacts
Repository Artifacts A-1
Docker Repository Requirements A-13
OCCNE YUM Repository Configuration A-14
OCCNE HTTP Repository Configuration A-16
OCCNE Docker Image Registry Configuration A-22
B Reference Procedures
Inventory File Template B-1
Inventory File Preparation B-2
OCCNE Artifact Acquisition and Hosting B-8
Installation PreFlight Checklist B-9
Installation Use Cases and Repository Requirements B-30
Topology Connection Tables B-46
Network Redundancy Mechanisms B-51
Install VMs for MySQL Nodes and Management Server B-58
List of Figures
1-1 Frame Overview 1-5
1-2 Host Designations 1-6
1-3 Node Roles 1-7
1-4 Transient Roles 1-8
1-5 OCCNE Installation Overview 1-9
1-6 Example of Procedure Steps Used in This Document 1-11
B-1 Rackmount ordering B-10
B-2 Frame reference B-31
B-3 Setup the Notebook and USB Flash Drive B-32
B-4 Setup the Management Server B-33
B-5 Management Server Unique Connections B-34
B-6 Configure OAs B-35
B-7 Configure the Enc. Switches B-35
B-8 OceanSpray Download Path B-36
B-9 Install OS on CNE Nodes - Server boot instruction B-37
B-10 Install OS on CNE Nodes - Server boot process B-38
B-11 Update OS on CNE Nodes - Ansible B-39
B-12 Update OS on CNE Nodes - Yum pull B-40
B-13 Harden the OS B-41
B-14 Create the Guest B-42
B-15 Install the Cluster on CNE Nodes B-43
B-16 Install the Cluster on CNE Nodes - Pull in Software B-44
B-17 Execute Helm on Master Node B-45
B-18 Master Node Pulls from Repositories B-46
B-19 Blade Server NIC Pairing B-52
B-20 Rackmount Server NIC Pairing B-52
B-21 Logical Switch View B-54
B-22 OAM Uplink View B-55
B-23 Top of Rack Customer Uplink View B-56
B-24 OAM and Signaling Separation B-57
B-25 MySQL Cluster Topology B-59
List of Tables
1-1 Key Terms 1-1
1-2 Key Acronyms and Abbreviations 1-2
1-3 Admonishments 1-11
2-1 Oracle eDelivery Artifact Acquisition 2-1
2-2 Procedure to configure MetalLB pools and peers 2-2
3-1 Bootstrap Install Procedure 3-2
3-2 Procedure to configure the Installer Bootstrap Host BIOS 3-9
3-3 Procedure to configure Top of Rack 93180YC-EX Switches 3-14
3-4 Procedure to verify Top of Rack 93180YC-EX Switches 3-24
3-5 Procedure to configure Addresses for RMS iLOs, OA, EBIPA 3-27
3-6 Procedure to configure the Legacy BIOS on Remaining Hosts 3-36
3-7 Procedure to configure enclosure switches 3-42
3-8 Procedure to install the OL7 image onto the RMS2 via the installer bootstrap host 3-49
3-9 Procedure to Install the Bastion Host 3-58
3-10 Procedure to configure Bastion Host 3-64
3-11 Procedure to run the auto OS-installer container 3-73
3-12 Procedure to Install Backup Bastion Host 3-81
3-13 OCCNE Database Tier Installer 3-83
3-14 Procedure to install OCCNE Kubernetes 3-86
3-15 Procedure to install common services 3-89
4-1 OCCNE Post Install Verification 4-1
A-1 OL YUM Repository Requirements A-1
A-2 Docker Repository Requirements A-13
A-3 Steps to configure OCCNE HTTP Repository A-17
A-4 Steps to configure OCCNE Docker Image Registry A-23
B-1 Procedure for OCCNE Inventory File Preparation B-4
B-2 Enclosure Switch Connections B-10
B-3 ToR Switch Connections B-12
B-4 Rackmount Server Connections B-14
B-5 Complete Site Survey Subnet Table B-15
B-6 Complete Site Survey Host IP Table B-16
B-7 Complete VM IP Table B-17
B-8 Complete OA and Switch IP Table B-18
B-9 ToR and Enclosure Switches Variables Table (Switch Specific) B-20
B-10 Complete Site Survey Repository Location Table B-21
B-11 Enclosure Switch Connections B-47
B-12 ToR Switch Connections B-48
B-13 Management Server Connections B-51
B-14 Procedure to install VMs for MySQL Nodes and Management Server B-62
1
Introduction
This document details the procedure for installing an Oracle Communications Signaling,
Network Function Cloud Native Environment, referred to in these installation procedures
simply as OCCNE. The intended audience for this document is Oracle engineers who work
with customers to install a Cloud Native Environment (CNE) on-site at customer facilities.
This document applies to version 1.0 of the OCCNE installation procedure.
Glossary
Key terms
The table below lists the terms used in this document.
Term Definition
Host A computer running an instance of an operating system with an IP address. Hosts can
be virtual or physical. The HP DL380 Gen10 Rack Mount Servers and BL460c
Gen10 Blades are physical hosts. KVM based virtual machines are virtual hosts.
Hosts are also referred to as nodes, machines, or computers.
Database Host The Database (DB) Host is a physical machine that hosts guest virtual machines
which in turn provide OCCNE's MySQL service and Database Management System
(DBMS). The Database Hosts are comprised of two Rack Mount Servers (RMSs)
below the Top of Rack (TOR) switches. For some customers, these will be HP Gen10
servers.
Management Host The Management Host is a physical machine in the frame that has a special
configuration to support hardware installation and configuration of other components
within a frame. For CNE, there is one machine with dedicated connectivity to out of
band (OOB) interfaces on the Top of Rack switches. The OOB interfaces provide
connectivity needed to initialize the ToR switches. In OCCNE 1.0, the Management
Host role and Database Host roles are assigned to the same physical machine. When
referring to a machine as a "Management Host", the context is with respect to its
OOB connections which are unique to the Management Host hardware.
Bastion Host The Bastion Host provides general orchestration support for the site. The Bastion
Host runs as a virtual machine on a Database Host. Sometimes referred to as the
Management VM. During the install process, the Bastion Host is used to host the
automation environment and execute install automation. The install automation
provisions and configures all other hosts, nodes, and switches within the frame. After
the install process is completed, the Bastion Host continues to serve as the customer
gateway to cluster operations and control.
Installer Bootstrap Host As an early step in the site installation process, one of the hosts (which is eventually
re-provisioned as a Database Server) is minimally provisioned to act as an Installer
Bootstrap Host. The Installer Bootstrap Host has a very short lifetime as its job is to
provision the first Database Server. Later in the install process, the server being used
to host the Bootstrap server is re-provisioned as another Database Server. The
Installer Bootstrap Host is also referred to simply as the Bootstrap Host.
Node A logical computing node in the system. A node is usually a networking endpoint.
May or may not be virtualized or containerized. Database nodes refer to hosts
dedicated primarily to running Database services. Kubernetes nodes refer to hosts
dedicated primarily to running Kubernetes.
Master Node Some nodes in the system (three RMSs in the middle of the equipment rack) are
dedicated to providing Container management. These nodes are responsible for
managing all of the containerized services (which run on the worker nodes.)
Worker Node Some nodes in the system (the blade servers at the bottom of the equipment rack) are
dedicated to hosting Containerized software and providing the 5G application
services.
Container An encapsulated software service. All 5G applications and OAM functions are
delivered as containerized software. The purpose of the OCCNE is to host
containerized software providing 5G Network Functions and services.
Cluster A collection of hosts and nodes dedicated to providing either Database or
Containerized services and applications. The Database service is comprised of the
collection of Database nodes and is managed by MySQL. The Container cluster is
comprised of the collection of Master and Worker Nodes and is managed by
Kubernetes.
Key Acronyms and Abbreviations

Acronym/Abbreviation/Term Definition
5G NF 3GPP 5G Network Function
BIOS Basic Input Output System
CLI Command Line Interface
CNE Cloud Native Environment
DB Database
DBMS Database Management System
DHCP(D) Dynamic Host Configuration Protocol
DNS Domain Name Server
EBIPA Enclosure Bay IP Addressing
FQDN Fully Qualified Domain name
GUI Graphical User Interface
HDD Hard Disk Drive
HP Hewlett Packard
HPE Hewlett Packard Enterprise
HTTP HyperText Transfer Protocol
iLO HPE Integrated Lights-Out Management System
IP Internet Protocol; may be used as shorthand to refer to an IP layer 3 address.
IPv4 Internet Protocol version 4
IPv6 Internet Protocol version 6
Overview
OCCNE Installation Overview
The installation procedures in this document provision and configure an Oracle
Communications Signaling, Network Function Cloud Native Environment (OCCNE). Using
Oracle partners, the customer purchases the required hardware, which is then configured and
prepared for installation by Oracle Consulting.
To aid with the provisioning, installation, and configuration of OCCNE, a collection of
container-based utilities are used to automate much of the initial setup. These utilities are based
on tools such as PXE, the Kubespray project, and Ansible:
• PXE helps reliably automate provisioning the hosts with a minimal operating system.
• Kubespray helps reliably install a base Kubernetes cluster, including all dependencies (like
etcd), using the Ansible provisioning tool.
• Ansible is used to deploy and manage a collection of operational tools (Common Services)
provided by open source third party products such as Prometheus, Grafana, ElasticSearch
and Kibana.
• Common services and functions such as load balancers and ingress controllers are
deployed, configured, and managed as Helm packages (see the illustrative example after this list).
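For illustration only, deploying one of these common services as a Helm package follows a pattern like the one below. The chart name, release name, namespace, and values file are placeholders rather than the actual OCCNE chart identifiers, and Helm 2 syntax is assumed:

# Sketch only: install a common-services chart from the configured Helm repository.
# "occne/prometheus", the release name, and the values file are placeholder names.
$ helm install occne/prometheus --name occne-prometheus --namespace occne-infra -f prometheus-values.yaml
$ helm ls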
Note:
In the installation process, some of the roles of servers change as the installation
procedure proceeds.
Frame Overview
The physical frame consists of an HP c-Class enclosure (BL460c blade servers), 5 DL380
rack mount servers, and 2 Top of Rack (ToR) Cisco switches.
Host Designations
Each physical server has a specific role designation within the CNE solution.
Node Roles
Along with the primary role of each host, a secondary role may be assigned. The secondary role
may be software related, or, in the case of the Bootstrap Host, hardware related, as there are
unique OOB connections to the ToR switches.
Transient Roles
A transient role is one that applies only during part of the installation. The host with OOB
connections to the ToR switches carries the Bootstrap Host designation; this role is relevant
only during initial switch configuration and during disaster recovery of a switch. RMS1 also
has a transient role as the Installer Bootstrap Host, which is relevant only during the initial
install of the frame; after an official install is in place on RMS2, this host is re-paved to its
Storage Host role.
Create OCCNE Instance
The following is an overview of the basic install flow, provided as a reference to understand
the overall effort contained within these procedures:
1. Check that the hardware is on-site and properly cabled and powered up.
2. Pre-assemble the basic ingredients needed to perform a successful install:
a. Identify
i. Download and stage software and other configuration files using provided
manifests. Refer to Artifacts for manifests information.
ii. Identify the layer 2 (MAC) and layer 3 (IP) addresses for the equipment in the
target frame
iii. Identify the addresses of key external network services (e.g., NTP, DNS, etc.)
iv. Verify / Set all of the credentials for the target frame hardware to known settings
b. Prepare
i. Software Repositories: Load the various SW repositories (YUM, Helm, Docker,
etc.) using the downloaded software and configuration
ii. Configuration Files: Populate the hosts inventory file with credentials and layer 2
and layer 3 network information, switch configuration files with assigned IP
addresses, and yaml files with appropriate information.
3. Bootstrap the System:
a. Manually configure a Minimal Bootstrapping Environment (MBE); perform the
minimal set of manual operations to enable networking and initial loading of a single
Rack Mount Server - RMS1 - the transient Installer Bootstrap Host. In this procedure,
a minimal set of packages needed to configure switches, iLOs, and the PXE boot
environment, and to provision RMS2 as an OCCNE Storage Host, is installed.
b. Using the newly constructed MBE, automatically create the first (complete)
Management VM on RMS2. This freshly installed Storage Host will include a virtual
machine for hosting the Bastion Host.
c. Using the newly constructed Bastion Host on RMS2, automatically deploy and
configure the OCCNE on the other servers in the frame
4. Final Steps
a. Perform post installation checks
b. Perform recommended security hardening steps
Documentation Admonishments
Admonishments are icons and text throughout this manual that alert the reader to assure
personal safety, to minimize possible service interruptions, and to warn of the potential for
equipment damage.
Table 1-3 Admonishments

Icon      Description
Danger    (This icon and text indicate the possibility of personal injury.)
Warning   (This icon and text indicate the possibility of equipment damage.)
Caution   (This icon and text indicate the possibility of service interruption.)
Customer Training
Oracle University offers training for service providers and enterprises. Visit our web site to
view, and register for, Oracle Communications training at http://education.oracle.com/
communication.
To obtain contact phone numbers for countries or regions, visit the Oracle University Education
web site at www.oracle.com/education/contacts.
My Oracle Support
My Oracle Support (https://support.oracle.com) is your initial point of contact for all product
support and training needs. A representative at Customer Access Support can assist you with
My Oracle Support registration.
Call the Customer Access Support main number at 1-800-223-1711 (toll-free in the US), or call
the Oracle Support hotline for your local country from the list at http://www.oracle.com/us/
support/contact/index.html. When calling, make the selections in the sequence shown below on
the Support telephone menu:
1. Select 2 for New Service Request.
2. Select 3 for Hardware, Networking and Solaris Operating System Support.
3. Select one of the following options:
• For Technical issues such as creating a new Service Request (SR), select 1.
• For Non-technical issues such as registration or assistance with My Oracle Support,
select 2.
You are connected to a live agent who can assist you with My Oracle Support registration and
opening a support ticket.
My Oracle Support is available 24 hours a day, 7 days a week, 365 days a year.
Emergency Response
In the event of a critical service situation, emergency response is offered by the Customer
Access Support (CAS) main number at 1-800-223-1711 (toll-free in the US), or by calling the
Oracle Support hotline for your local country from the list at http://www.oracle.com/us/support/
contact/index.html. The emergency response provides immediate coverage, automatic
escalation, and other features to ensure that the critical situation is resolved as rapidly as
possible.
A critical situation is defined as a problem with the installed equipment that severely affects
service, traffic, or maintenance capabilities, and requires immediate corrective action. Critical
situations affect service and/or system operation resulting in one or several of these situations:
• A total system failure that results in loss of all transaction processing capability
• Significant reduction in system capacity or traffic handling capability
• Loss of the system’s ability to perform automatic system reconfiguration
• Inability to restart a processor or the system
• Corruption of system databases that requires service affecting corrective actions
• Loss of access for maintenance or recovery operations
• Loss of the system ability to provide any required critical or major trouble notification
Any other problem severely affecting service, capacity/traffic, billing, and maintenance
capabilities may be defined as critical by prior discussion and agreement with Oracle.
2
Installation Prerequisites
Complete the procedures outlined in this section before moving on to the Install Procedures
section. OCCNE installation procedures require certain artifacts and information to be made
available prior to executing installation procedures. This section addresses these prerequisites.
3
Install Procedure
Initial Configuration - Prepare a Minimal Boot Strapping Environment
Installation of Oracle Linux 7.5 on Bootstrap Host
Prerequisites
1. USB drive of sufficient size to hold the ISO (approximately 5 GB)
2. Oracle Linux 7.x ISO
3. YUM repository file
4. Keyboard, Video, Mouse (KVM)
References
1. Oracle Linux 7 Installation guide: https://docs.oracle.com/cd/E52668_01/E54695/html/
index.html
2. HPE Proliant DL380 Gen10 Server User Guide
$ dd if=/var/occne/OracleLinux-7.5-x86_64-disc1.iso of=/dev/sdf bs=1048576
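The output device name varies by system; /dev/sdf above is only an example. As a precaution, the USB device can be confirmed before it is overwritten, for example:

# Illustrative check: the USB drive shows "usb" in the TRAN (transport) column.
$ lsblk -o NAME,SIZE,TRAN,MOUNTPOINT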
10. At the conclusion of the install, remove the USB and select Reboot
to complete the install and boot to the OS on the host. At the end of
the boot, the login prompt appears.
$ lsblk
sdd 8:48 0 894.3G 0 disk
sde 8:64 0 1.7T 0 disk
sdc 8:32 0 894.3G 0 disk
├─sdc2 8:34 0 1G 0 part /boot
├─sdc3 8:35 0 893.1G 0 part
│ ├─ol-swap 252:1 0 4G 0 lvm [SWAP]
│ ├─ol-home 252:2 0 839.1G 0 lvm /home
│ └─ol-root 252:0 0 50G 0 lvm /
└─sdc1 8:33 0 200M 0 part /boot/efi
sda 8:0 1 29.3G 0 disk
├─sda2 8:2 1 8.5M 0 part
└─sda1 8:1 1 4.3G 0 part
type 0
[8851.232978] sd 1:0:0:0: [sda] 61341696 512-byte logical blocks: (31.4 GB/29.3 GiB)
[8851.234598] sd 1:0:0:0: [sda] Write Protect is off
[8851.234600] sd 1:0:0:0: [sda] Mode Sense: 43 00 00 00
[8851.234862] sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[8851.255300] sda: sda1 sda2
...
The USB device should contain at least two partitions. One is the
boot partition and the other is the install media. The install media is
the larger of the two partitions. To find information about the
partitions, use the fdisk command to list the filesystems on the
USB device. Use the device name discovered via the steps outlined
above. In the examples above, the USB device is /dev/sda.
$ fdisk -l /dev/sda
Disk /dev/sda: 31.4 GB, 31406948352 bytes, 61341696 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x137202cf

$ mount
...
/dev/sda1 on /media/usb type iso9660 (ro,relatime,nojoliet,check=s,map=n,blocksize=2048)
5. Create a yum config file to install packages from local install
media.
Create a repo file /etc/yum.repos.d/Media.repo with the
following information:
[ol7_base_media]
name=Oracle Linux 7 Base Media
baseurl=file:///media/usb
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
6. Disable the default public yum repo. This is done by renaming the
current .repo file to end with something other than .repo.
Adding .disabled to the end of the file name is standard.
Note: This can be left in this state as the Installer Bootstrap Host is
re-paved in a later procedure.
$ mv /etc/yum.repos.d/public-yum-ol7.repo /etc/yum.repos.d/public-yum-ol7.repo.disabled
7. Use the yum repolist command to check the repository
configuration.
The output of yum repolist should look like the example below.
Verify there are no errors regarding unreachable yum repos.
$ yum repolist
Loaded plugins: langpacks, ulninfo
repo id           repo name                    status
ol7_base_media    Oracle Linux 7 Base Media    5,134
repolist: 5,134
8. Use yum to install the additional packages from the USB repo.
$ yum install dnsmasq
$ yum install dhcp
$ yum install xinetd
$ yum install tftp-server
$ yum install dos2unix
$ yum install nfs-utils
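If preferred, the same packages can be installed in a single transaction:

# Equivalent single command for the packages listed above.
$ yum install -y dnsmasq dhcp xinetd tftp-server dos2unix nfs-utils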
9. Verify installation of dhcp, xinetd, and tftp-server.
Note: Currently dnsmasq is not being used. The verification of tftp
makes sure the tftp file is included in the /etc/xinetd.d directory.
Installation/Verification does not include actually starting any of the
services. Service configuration/starting is performed in a later
procedure.
Verify dhcp is installed:
-------------------------
$ cd /etc/dhcp
$ ls
dhclient.d dhclient-exit-hooks.d dhcpd6.conf
dhcpd.conf scripts
---------------------------
$ cd /etc/xinetd.d
$ ls
chargen-dgram chargen-stream daytime-dgram
daytime-stream discard-dgram discard-stream
echo-dgram echo-stream tcpmux-server time-dgram
time-stream
$ umount /media/usb
$ mount
Verify that /dev/sda1 is no longer shown as mounted
to /media/usb.
11. This procedure is complete.
Configure the Installer Bootstrap Host BIOS
Prerequisites
Procedure OCCNE Installation of Oracle Linux 7.5 on Bootstrap Host is complete.
2. Change over from UEFI Booting Mode to Legacy BIOS Booting Mode
   Should the System Utility default the booting mode to UEFI, or should it have been
   changed to UEFI, it is necessary to switch the booting mode to Legacy.
   1. Expose the System Configuration Utility by following Step 1.
   2. Select System Configuration.
   3. Select BIOS/Platform Configuration (RBSU).
   4. Select Boot Options.
      If the Boot Mode is set to UEFI Mode, then this procedure should be used to
      change it to Legacy BIOS Mode.
      Note: The server reset must go through an attempt to boot before the changes
      will actually apply.
   5. The user is prompted with the Reboot Required popup dialog. This drops back
      into the boot process. The boot must go through the process of actually
      attempting to boot from the boot order. This should fail since the disks have not
      been installed at this point. The System Utility can then be accessed again.
   6. After the reboot, when the user re-enters the System Utility, the Boot Options
      page should appear.
   7. Select F10: Save if it is desired to save and stay in the utility, or select F12: Save
      and Exit if it is desired to save and exit to complete the current boot process.
e. Select F10: Save to save and stay in the utility, or select F12: Save and Exit to
   save and exit and complete the current boot process.
8. Configure the iLO 5 Static IP Address
   When configuring the Bootstrap host, the static IP address for the iLO 5 must be
   configured.
   Note: This procedure requires a reboot after completion.
   1. Expose the System Configuration Utility by following Step 1.
   2. Select System Configuration.
   3. Select iLO 5 Configuration Utility.
   4. Select Network Options.
   5. Enter the IP Address, Subnet Mask, and Gateway IP Address fields provided in
      OCCNE 1.0 Installation PreFlight Checklist.
   6. Select F12: Save and Exit to complete the current boot process. A reboot is
      required when setting the static IP for the iLO 5. A warning appears indicating
      that the user must wait 30 seconds for the iLO to reset and then a reboot is
      required. A prompt appears requesting a reboot. Select Reboot.
   7. Once the reboot is complete, the user can re-enter the System Utility and verify
      the settings if necessary.
Configure Top of Rack 93180YC-EX Switches

Note:
All instructions in this procedure are executed from the Bootstrap Host.
Prerequisites
1. Procedure OCCNE Installation of Oracle Linux 7.5 on Bootstrap Host has been completed.
2. The switches are in factory default state.
3. The switches are connected as per OCCNE 1.0 Installation PreFlight Checklist. Customer
uplinks are not activated until outside traffic is necessary.
4. DHCP, XINETD, and TFTP are already installed on the Bootstrap host but are not
configured.
5. The Utility USB is available containing the necessary files as per: OCCNE 1.0 Installation
PreFlight checklist: Create Utility USB.
Limitations/Expectations
All steps are executed from a Keyboard, Video, Mouse (KVM) connection.
References
https://github.com/datacenter/nexus9000/blob/master/nx-os/poap/poap.py
Procedures
Configuration
5. Enable tftp on the Bootstrap host.
   $ systemctl start tftp
   $ systemctl enable tftp
   Verify tftp is active and enabled:
   $ systemctl status tftp
   $ ps -elf | grep tftp
6. Copy the dhcpd.conf file.
   Copy the dhcpd.conf file from the Utility USB (OCCNE 1.0 Installation PreFlight
   checklist : Create the dhcpd.conf File) to the /etc/dhcp/ directory.
   $ cp /media/usb/dhcpd.conf /etc/dhcp/
7. Restart and enable the dhcpd service.
   $ /bin/systemctl restart dhcpd.service
   $ /bin/systemctl enable dhcpd.service
   Use the systemctl status dhcpd command to verify it is active and enabled.
   $ systemctl status dhcpd
8. Copy the switch configuration and script files.
   Copy the switch configuration and script files from the Utility USB to the
   directory /var/lib/tftpboot/.
   $ cp /media/usb/93180_switchA.cfg /var/lib/tftpboot/.
   $ cp /media/usb/93180_switchB.cfg /var/lib/tftpboot/.
   $ cp /media/usb/poap_nexus_script.py /var/lib/tftpboot/.
10. Modify the POAP script file.
    Make the following change for the first server information. The username and
    password are the credentials used to log in to the Bootstrap host.
    $ vi /var/lib/tftpboot/poap_nexus_script.py
    Host name and user credentials:
    options = {
        "username": "<username>",
        "password": "<password>",
        "hostname": "192.168.2.11",
        "transfer_protocol": "scp",
        "mode": "serial_number",
        "target_system_image": "nxos.9.2.3.bin",
    }
11. Modify the POAP script file md5sum by executing the md5Poap.sh script from
    the Utility USB created from OCCNE 1.0 Installation PreFlight checklist :
    Create the md5Poap Bash Script.
    $ cd /var/lib/tftpboot/
    $ /bin/bash md5Poap.sh
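The contents of md5Poap.sh are not reproduced in this document; the copy on the Utility USB should be used as-is. For context only, the POAP script embeds its own checksum in a "#md5sum=" line that must be recalculated whenever the script is edited, and a typical recipe for doing so (a sketch adapted from the usage notes in the reference poap.py script) looks like this:

# Sketch only: recompute the #md5sum line embedded in the POAP script after editing it.
f=poap_nexus_script.py
cat $f | sed '/^#md5sum/d' > $f.md5
sed -i "s/^#md5sum=.*/#md5sum=\"$(md5sum $f.md5 | sed 's/ .*//')\"/" $f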
12. Create the files necessary to configure the ToR switches using the serial number
    from the switch.
    The serial number is located on a pullout card on the back of the switch, in the
    left-most power supply of the switch.
$ sed -i 's#{SQL_replication_SwB_Address}#<ToRswitchB_SQLreplicationNet_IP>#g' conf.<switchA serial number>
$ sed -i 's#{SQL_replication_Prefix}#<SQLreplicationNet_Prefix>#g' conf.<switchA serial number>
$ ipcalc -n <ToRswitchA_SQLreplicationNet_IP>/<SQLreplicationNet_Prefix> | awk -F'=' '{print $2}'
$ sed -i 's/{SQL_replication_Subnet}/<output from ipcalc command as SQL_replication_Subnet>/' conf.<switchA serial number>
$ sed -i 's/{CNE_Management_VIP}/<ToRswitch_CNEManagementNet_VIP>/g' conf.<switchA serial number>
$ sed -i 's/{SQL_replication_VIP}/<ToRswitch_SQLreplicationNet_VIP>/g' conf.<switchA serial number>
$ sed -i 's/{OAM_UPLINK_CUSTOMER_ADDRESS}/<ToRswitchA_oam_uplink_customer_IP>/' conf.<switchA serial number>
$ sed -i 's/{OAM_UPLINK_SwA_ADDRESS}/<ToRswitchA_oam_uplink_IP>/g' conf.<switchA serial number>
$ sed -i 's/{SIGNAL_UPLINK_SwA_ADDRESS}/<ToRswitchA_signaling_uplink_IP>/g' conf.<switchA serial number>
$ sed -i 's/{OAM_UPLINK_SwB_ADDRESS}/<ToRswitchB_oam_uplink_IP>/g' conf.<switchA serial number>
$ sed -i 's/{SIGNAL_UPLINK_SwB_ADDRESS}/<ToRswitchB_signaling_uplink_IP>/g' conf.<switchA serial number>
$ ipcalc -n <ToRswitchA_signaling_uplink_IP>/30 | awk -F'=' '{print $2}'
$ sed -i 's/{SIGNAL_UPLINK_SUBNET}/<output from ipcalc command as signal_uplink_subnet>/' conf.<switchA serial number>
$ sed -i 's/{Allow_Access_Server}/<Allow_Access_Server>/' conf.<switchA serial number>
$ sed -i 's/{CNE_Management_VIP}/<ToRswitch_CNEManagementNet_VIP>/' conf.<switchB serial number>
$ sed -i 's/{SQL_replication_VIP}/<ToRswitch_SQLreplicationNet_VIP>/' conf.<switchB serial number>
$ sed -i 's/{OAM_UPLINK_CUSTOMER_ADDRESS}/<ToRswitchB_oam_uplink_customer_IP>/' conf.<switchB serial number>
$ sed -i 's/{OAM_UPLINK_SwA_ADDRESS}/<ToRswitchB_oam_uplink_IP>/g' conf.<switchB serial number>
$ sed -i 's/{SIGNAL_UPLINK_SwA_ADDRESS}/<ToRswitchB_signaling_uplink_IP>/g' conf.<switchB serial number>
$ sed -i 's/{OAM_UPLINK_SwB_ADDRESS}/<ToRswitchB_oam_uplink_IP>/g' conf.<switchB serial number>
$ sed -i 's/{SIGNAL_UPLINK_SwB_ADDRESS}/<ToRswitchB_signaling_uplink_IP>/g' conf.<switchB serial number>
$ ipcalc -n <ToRswitchB_signaling_uplink_IP>/30 | awk -F'=' '{print $2}'
$ sed -i 's/{SIGNAL_UPLINK_SUBNET}/<output from ipcalc command as signal_uplink_subnet>/' conf.<switchB serial number>
17. Disable firewalld.
    $ systemctl stop firewalld
    $ systemctl disable firewalld
    To verify:
    $ systemctl status firewalld
    Once this is complete, the ToR switches will attempt to boot from the tftpboot
    files automatically. The verification steps below can then be executed. It may
    take about 5 minutes for this to complete.
Verification
Configure Addresses for RMS iLOs, OA, EBIPA
Prerequisites
Procedure OCCNE Configure Top of Rack 93180YC-EX Switches has been completed.
Limitations/Expectations
All steps are executed from the ssh session of the Bootstrap server.
References
HPE BladeSystem Onboard Administrator User Guide
Table 3-5 Procedure to configure Addresses for RMS iLOs, OA, EBIPA
2. Subnet and conf file address
   The /etc/dhcp/dhcpd.conf file should already have been configured in procedure
   OCCNE Configure Top of Rack 93180YC-EX Switches, and dhcpd should already be
   started/enabled on the bootstrap server. The second subnet, 192.168.20.0, is used
   to assign addresses for the OA and RMS iLOs. The "next-server 192.168.20.11"
   option is the same as the server's team0.2 IP address.
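As an illustrative sketch only (the authoritative dhcpd.conf comes from the PreFlight checklist), a stanza for this second subnet might look like the following; the address pool and lease times shown here are placeholders:

# Sketch of a dhcpd.conf stanza for the 192.168.20.0/24 OA/RMS iLO subnet (placeholder values).
subnet 192.168.20.0 netmask 255.255.255.0 {
    range 192.168.20.100 192.168.20.150;   # placeholder lease pool for OA and RMS iLOs
    next-server 192.168.20.11;             # Bootstrap Host team0.2 address (TFTP server)
    default-lease-time 43200;
    max-lease-time 43200;
}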
}
lease 192.168.20.108 {
starts 5 2019/03/29 09:57:02;
ends 5 2019/03/29 21:57:02;
tstp 5 2019/03/29 21:57:02;
cltt 5 2019/03/29 09:57:02;
binding state active;
next binding state free;
rewind binding state free;
hardware ethernet fc:15:b4:1a:ea:05;
uid "\001\374\025\264\032\352\005";
client-hostname "OA-FC15B41AEA05";
}
lease 192.168.20.107 {
starts 5 2019/03/29 12:02:50;
ends 6 2019/03/30 00:02:50;
tstp 6 2019/03/30 00:02:50;
cltt 5 2019/03/29 12:02:50;
binding state active;
next binding state free;
rewind binding state free;
hardware ethernet 9c:b6:54:80:d7:d7;
uid "\001\234\266T\200\327\327";
client-hostname "SA-9CB65480D7D7";
}
server-duid "\000\001\000\001$#
\364\344\270\203\003Gim";
lease 192.168.20.107 {
starts 5 2019/03/29 18:09:47;
ends 6 2019/03/30 06:09:47;
cltt 5 2019/03/29 18:09:47;
binding state active;
next binding state free;
rewind binding state free;
hardware ethernet 9c:b6:54:80:d7:d7;
uid "\001\234\266T\200\327\327";
client-hostname "SA-9CB65480D7D7";
}
lease 192.168.20.108 {
starts 5 2019/03/29 18:09:54;
ends 6 2019/03/30 06:09:54;
cltt 5 2019/03/29 18:09:54;
binding state active;
next binding state free;
rewind binding state free;
hardware ethernet fc:15:b4:1a:ea:05;
uid "\001\374\025\264\032\352\005";
client-hostname "OA-FC15B41AEA05";
}
lease 192.168.20.106 {
starts 5 2019/03/29 18:10:04;
ends 5 2019/03/29 21:10:04;
cltt 5 2019/03/29 18:10:04;
binding state active;
4. Access the RMS iLO at the DHCP-assigned address with the default Administrator password.
   From the above dhcpd.leases file, find the IP address for the iLO name. The default
   username is Administrator; the password is on the label which can be pulled out from
   the front of the server.
   Note: The DNS Name on the pull-out label should be used to match the physical
   machine with the iLO IP, since the same default DNS Name from the pull-out label is
   displayed upon logging in to the iLO command line interface, as shown in the example
   below.
   $ ssh Administrator@192.168.20.104
   Administrator@192.168.20.104's password:
   User:Administrator logged-in to ILO2M2909004F.labs.nc.tekelec.com(192.168.20.104 / FE80::BA83:3FF:FE47:649C)
   iLO Standard 1.37 at Oct 25 2018
   Server Name:
   Server Power: On
5. Create RMS iLO new user.
   Create a new user with a customized username and password.
   </>hpiLO-> create /map1/accounts1 username=root password=TklcRoot group=admin,config,oemHPE_rc,oemHPE_power,oemHPE_vm
   status=0
   status_tag=COMMAND COMPLETED
   Tue Apr 2 20:08:30 2019
   User added successfully.
</>hpiLO->
status=2
status_tag=COMMAND PROCESSING FAILED
error_tag=COMMAND SYNTAX ERROR
Tue Apr 23 16:18:58 2019
User added successfully.
Configure Legacy BIOS on Remaining Hosts

Note:
The procedures in this document apply to the HP iLO console accessed via KVM. Each
procedure is executed in the order listed.
Prerequisites
Procedure OCCNE Configure Addresses for RMS iLOs, OA, EBIPA is complete.
References
1. HPE iLO 5 User Guide 1.15
2. UEFI System Utilities User Guide for HPE ProLiant Gen10 Servers and HPE Synergy
3. UEFI Workload-based Performance and Tuning Guide for HPE ProLiant Gen10 Servers
and HPE Synergy
4. HPE BladeSystem Onboard Administrator User Guide
5. OCCNE Inventory File Preparation
</>hpiLO->
b. Use VSP to connect to the blade remote console.
</>hpiLO->vsp
c. Power cycle the blade to bring up the System Utility for that
blade.
Note: The System Utility is a text based version of that
exposed on the RMS via the KVM. The user must use the
directional (arrow) keys to manipulate between selections,
ENTER key to select, and ESC to go back from the current
selection.
d. Access the System Utility by hitting ESC 9.
2. Enabling Virtualization
This procedure provides the steps required to enable virtualization
on a given Bare Metal Server. Virtualization can be configured
using the default settings or via the default Workload Profiles.
Verifying Default Settings
a. Expose the System Utility by following step 1 or 2 depending
on the hardware being configured.
b. Select System Configuration
c. Select BIOS/Platform Configuration (RBSU)
d. Select Virtualization Options
This view displays the settings for the Intel(R) Virtualization
Technology (IntelVT), Intel(R) VT-d, and SR-IOV options
(Enabled or Disabled). The default value for each option is
Enabled.
e. Select F10 if it is desired to save and stay in the utility or select
the F12 if it is desired to save and exit to continue the current
boot process.
Configure Enclosure Switches
All steps are executed from a Keyboard, Video, Mouse (KVM) connection.
References
1. https://support.hpe.com/hpsc/doc/public/display?docId=c04763537
Procedure
$ ls -l /var/lib/tftpboot/
total 1305096
2. Modify the switch-specific values in the /var/lib/tftpboot/6127xlg_irf.cfg file.
   These values are contained in OCCNE 1.0 Installation PreFlight checklist : Create the
   OA 6127XLG Switch Configuration File, from column Enclosure_Switch.
   $ cd /var/lib/tftpboot
   $ sed -i 's/{switchname}/<switch_name>/' 6127xlg_irf.cfg
   $ sed -i 's/{admin_password}/<admin_password>/' 6127xlg_irf.cfg
   $ sed -i 's/{user_name}/<user_name>/' 6127xlg_irf.cfg
   $ sed -i 's/{user_password}/<user_password>/' 6127xlg_irf.cfg
....
<HPE>system-view
shutdown
quit
irf-port 1/1
quit
undo shutdown
quit
save
irf-port-configuration active
4. Access the InterConnect Bay2 6127XLG switch to re-number it to IRF 2.
   OA-FC15B41AEA05> connect interconnect 2
   ....
   <HPE>system-view
   [HPE]save
   [HPE]quit
   <HPE>reboot
   System is starting...
shutdown
quit
irf-port 2/2
quit
undo shutdown
quit
save
irf-port-configuration active
6. Run "reboot"
<HPE>reboot
command on
Start to check configuration with next startup
both switches
configuration file, please wait.........DONE!
This command will reboot the device. Continue? [Y/N]:Y
Now rebooting, please wait...
System is starting...
1   1   Ten-GigabitEthernet1/0/17    disable
        Ten-GigabitEthernet1/0/18
        Ten-GigabitEthernet1/0/19
        Ten-GigabitEthernet1/0/20
2   2   disable                      Ten-GigabitEthernet2/0/17
                                     Ten-GigabitEthernet2/0/18
                                     Ten-GigabitEthernet2/0/19
                                     Ten-GigabitEthernet2/0/20
[HPE]
<HPE>system-view
[<switch_name>]save flash:/startup.cfg
[<switch_name>]
Bastion Host Installation

Install Host OS onto RMS2 from the Installer Bootstrap Host (RMS1)
This section installs the host OS onto RMS2, the server that will host the Bastion Host. After
the Bastion Host is provisioned, it is used to complete the installation of OCCNE.
Prerequisites
• All procedures in OCCNE 1.0 - Installation Procedure : OCCNE Initial Configuration are
complete.
• The Utility USB is available containing the necessary files as mentioned in OCCNE 1.0
Installation PreFlight checklist.
Procedures
Table 3-8 Procedure to install the OL7 image onto the RMS2 via the installer bootstrap host
$ mkdir /var/occne
$ mkdir /var/occne/<cluster_name>
$ mkdir /var/occne/<cluster_name>/yum.repos.d
2. Mount the Utility USB.
Note: Instructions for mounting a USB in Linux are at: OCCNE
Installation of Oracle Linux 7.5 on Bootstrap Host : Install
Additional Packages. Only follow steps 1-4 to mount the USB.
3. Copy the hosts.ini file (created using procedure: OCCNE
Inventory File Preparation) into the /var/occne/<cluster_name>/
directory. This hosts.ini file defines RMS2 to the OS Installer
Container running the os-install image downloaded from the
repo.
$ cp /media/usb/hosts.ini /var/occne/<cluster_name>/hosts.ini
4. Update the hosts.ini file to include the ToR host_net (vlan3) VIP
   for NTP clock synchronization. Use the ToR VIP address as
   defined in procedure: OCCNE 1.0 Installation PreFlight
   Checklist : Complete OA and Switch IP Table as the NTP source.
   $ vim /var/occne/<cluster_name>/hosts.ini
$ cp /media/usb/ol7-mirror.repo /var/occne/<cluster_name>/yum.repos.d/ol7-mirror.repo
$ cp /media/usb/ol7-mirror.repo /etc/yum.repos.d/ol7-mirror.repo
$ cp /media/usb/docker-ce-stable.repo /etc/yum.repos.d/docker-ce-stable.repo
6. If still enabled from procedure: OCCNE Installation of Oracle
Linux 7.5 on Bootstrap Host, the /etc/yum.repos.d/Media.repo is
to be disabled.
$ mv /etc/yum.repos.d/Media.repo /etc/yum.repos.d/Media.repo.disable
7. Copy the updated version of the kickstart configuration file
to /var/occne/<cluster_name> directory.
$ cp /media/usb/occne-ks.cfg.j2.new /var/occne/<cluster_name>/occne-ks.cfg.j2.new
2. Copy the OL7 ISO to the Installer Bootstrap Host
   The ISO file should be accessible from a customer site-specific repository. This file
   should be accessible because the ToR switch configurations were completed in
   procedure: OCCNE Configure Top of Rack 93180YC-EX Switches.
   From RMS1, copy the OL7 ISO file to the /var/occne directory. The example below
   uses OracleLinux-7.5-x86_64-disc1.iso. Note: If the user copies this ISO from their
   laptop, then they must use an application like WinSCP pointing to the Management
   Interface IP.
   $ scp <usr>@<site_specific_address>:/<path_to_iso>/OracleLinux-7.5-x86_64-disc1.iso /var/occne/OracleLinux-7.5-x86_64-disc1.iso
3. Install Docker onto the Installer Bootstrap Host
   Use YUM to install docker-ce onto the Installer Bootstrap Host. YUM should use the
   existing <customer_specific_repo_file>.repo in the /etc/yum.repos.d directory.
   $ yum install docker-ce-18.06.1.ce-3.el7.x86_64
"insecure-registries":
["<occne_private_registry>:<occne_private_registry
_port>"]
Example:
cat /etc/docker/daemon.json
"insecure-registries": ["reg-1:5000"]
To Verify:
ping <occne_private_registry>
Example:
# ping reg-1
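After the Docker daemon is restarted (step 4 below), the setting can also be confirmed from the daemon itself; as an illustrative check:

# Confirm the daemon lists the private registry under "Insecure Registries".
$ docker info | grep -A1 "Insecure Registries"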
$ mkdir -p /etc/systemd/system/docker.service.d/
$ vi /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="NO_PROXY=<occne_private_registry_address>,<occne_private_registry>,127.0.0.1,localhost"
Example:
[Service]
Environment="NO_PROXY=10.75.200.217,reg-1,127.0.0.1,localhost"
4. Start the docker daemon
$ systemctl daemon-reload
$ systemctl restart docker
$ systemctl enable docker
5. Setup NFS on the Installer Bootstrap Host
   Run the following commands (assumes nfs-utils has already been installed in
   procedure: OCCNE Installation of Oracle Linux 7.5 on Bootstrap Host : Install
   Additional Packages).
   Note: The IP address used in the echo command is the Platform VLAN IP Address
   (VLAN 3) of the Bootstrap Host (RMS 1) as given in: OCCNE 1.0 Installation
   PreFlight Checklist : Complete Site Survey Host Table.
   $ echo '/var/occne 172.16.3.4/24(ro,no_root_squash)' >> /etc/exports
   $ systemctl start nfs-server
   $ systemctl enable nfs-server
   Verify nfs is running:
   $ ps -elf | grep nfs
   $ systemctl status nfs-server
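As an additional illustrative check, the active export list can be displayed directly:

# Confirm /var/occne is exported with the expected options.
$ exportfs -v
$ showmount -e localhost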
8. Disable DHCP and TFTP on the Installer Bootstrap Host
   The TFTP and DHCP services running on the Installer Bootstrap Host may still be
   running. These services must be disabled.
   $ systemctl stop dhcpd
   $ systemctl disable dhcpd
   $ systemctl stop tftp
   $ systemctl disable tftp
$ sudo su
$ cd /etc/sysconfig/network-scripts/
$ cp /tmp/ifcfg-vlan ifcfg-team0.4
$ sed -i 's/{BRIDGE_NAME}/vlan4-br/g' ifcfg-team0.4
$ sed -i 's/{PHY_DEV}/team0/g' ifcfg-team0.4
$ sed -i 's/{VLAN_ID}/4/g' ifcfg-team0.4
$ sed -i 's/{IF_NAME}/team0.4/g' ifcfg-team0.4
$ echo "BRIDGE=vlan4-br" >> ifcfg-team0.4
$ cp /tmp/ifcfg-bridge ifcfg-vlan4-br
$ sed -i 's/{BRIDGE_NAME}/vlan4-br/g' ifcfg-vlan4-br
$ sed -i 's/DEFROUTE=no/DEFROUTE=yes/g' ifcfg-vlan4-br
$ sed -i 's/{IP_ADDR}/<vlan_4_ip_address>/g' ifcfg-vlan4-br
$ sed -i 's/{PREFIX_LEN}/29/g' ifcfg-vlan4-br
$ sed -i 's/DEFROUTE=no/DEFROUTE=yes/g' ifcfg-vlan4-br
$ echo "GATEWAY=<ToRswitch_CNEManagementNet_VIP>" >> ifcfg-vlan4-br
$ docker run --rm --network host --cap-add=NET_ADMIN -v /var/occne/rainbow/:/host -v /var/occne/:/var/occne:rw -e "OCCNEARGS=--limit db-2.rainbow.lab.us.oracle.com,localhost --tags yum_update" reg-1:5000/os_install:1.0.1
6. Check the /etc/yum.repos.d directory on RMS2 for non-disabled
   repo files. These files should be disabled. The only file that
   should be enabled is the customer-specific .repo file that was set in
   the /var/occne/<cluster_name>/yum.repos.d directory on RMS1.
   If any of these files are not disabled, then each file must be
   renamed as <filename>.repo.disabled.
   $ cd /etc/yum.repos.d
   $ ls
   $ mv <filename>.repo <filename>.repo.disabled
7. Execute datastore using docker.
   $ docker run --rm --network host --cap-add=NET_ADMIN -v /var/occne/<cluster_name>/:/host -v /var/occne/:/var/occne:rw -e "OCCNEARGS=--limit <RMS2 db node from hosts.ini file>.oracle.com,localhost --tags datastore" <image_name>:<image_tag>
Installation of the Bastion Host
Prerequisites
1. Procedure OCCNE 1.0 - Installation Procedure : Install OL7 onto the Management Host
   has been completed.
2. All the host servers where this VM is created are captured in OCCNE Inventory File
   Template.
3. Host names, IP addresses, and network information assigned to this VM are captured in
   the OCCNE 1.0 Installation PreFlight Checklist.
4. The Utility USB is available containing the necessary files as per: OCCNE 1.0 Installation
   PreFlight checklist : Miscellaneous Files.
References
1. https://linux.die.net/man/1/virt-install
2. https://linuxconfig.org/how-to-create-and-manage-kvm-virtual-machines-from-cli
3. https://www.cyberciti.biz/faq/kvm-install-centos-redhat-using-kickstart-ks-cfg/
4. https://opensource.com/business/16/9/linux-users-guide-lvm
2. Install Necessary RPMs
   Install the following packages from the ISO USB onto RMS2.
   $ yum install qemu-kvm libvirt libvirt-python libguestfs-tools virt-install -y
$ sed -i 's/HTTP_PROXY/http:\/\/<http_proxy>/g' /tmp/bastion_host.ks
$ sed -e '/PUBLIC_KEY/{' -e 'r /home/admusr/.ssh/authorized_keys' -e 'd' -e '}' -i /tmp/bastion_host.ks
4. Configure Networking
   The networking required to interface with the Bastion Host is all handled by executing
   the following command set:
   $ sudo su
   $ cd /etc/sysconfig/network-scripts/
   $ cp /tmp/ifcfg-bridge ifcfg-teambr0
   $ sed -i 's/{BRIDGE_NAME}/teambr0/g' ifcfg-teambr0
   $ sed -i 's/{IP_ADDR}/172.16.3.5/g' ifcfg-teambr0
   $ sed -i 's/{PREFIX_LEN}/24/g' ifcfg-teambr0
   $ sed -i '/NM_CONTROLLED/d' ifcfg-teambr0
   $ cp /tmp/ifcfg-vlan ifcfg-team0.2
   $ sed -i 's/{BRIDGE_NAME}/vlan2-br/g' ifcfg-team0.2
   $ sed -i 's/{PHY_DEV}/team0/g' ifcfg-team0.2
   $ sed -i 's/{VLAN_ID}/2/g' ifcfg-team0.2
   $ sed -i 's/{IF_NAME}/team0.2/g' ifcfg-team0.2
   $ echo "BRIDGE=vlan2-br" >> ifcfg-team0.2
   $ cp /tmp/ifcfg-bridge ifcfg-vlan2-br
   $ sed -i 's/{BRIDGE_NAME}/vlan2-br/g' ifcfg-vlan2-br
   $ sed -i 's/{IP_ADDR}/192.168.20.12/g' ifcfg-vlan2-br
   $ sed -i 's/{PREFIX_LEN}/24/g' ifcfg-vlan2-br
Update fields:
# Some examples of valid values are:
#
# user = "qemu" # A user named "qemu"
# user = "+0" # Super user (uid=0)
# user = "100" # A user named "100" or a
user with uid=100
#
user = "root"
To Verify:
$ systemctl status libvirtd
8. Un-mount the Utility USB
   Use the umount command to un-mount the Utility USB and remove it from the USB port.
   $ umount /media/usb
Configuration of the Bastion Host
Prerequisites
1. Procedure OCCNE Installation of the Bastion Host has been completed.
2. All the host servers where this VM is created are captured in OCCNE Inventory File
   Preparation.
3. Host names, IP addresses, and network information assigned to this VM are captured in
   the OCCNE 1.0 Installation PreFlight Checklist.
4. The YUM repository mirror is set up and accessible by the Bastion Host.
5. An HTTP server is set up and holds the Kubernetes binaries and Helm charts at an
   address that is accessible by the Bastion Host.
6. A Docker registry is set up at an address that is reachable by the Bastion Host.
7. This document assumes that an Apache HTTP server (created as part of the mirror
   creation) exists outside of the Bastion Host and serves the YUM mirror, Helm charts, and
   Kubernetes binaries. (This can differ, so the directories from which static content is
   copied to the Bastion Host must be verified before starting the rsync procedure.)
References
1. https://docs.docker.com/registry/deploying/
2. https://computingforgeeks.com/how-to-configure-ntp-server-using-chrony-on-rhel-8/
Procedure
These procedures detail the steps required to configure the existing Bastion Host (Management
VM).
The current sample hosts.ini file requires a "/" to be added to the entry for
occne_helm_images_repo. Use vim (or vi) to edit the hosts.ini file and add the "/" to the
occne_helm_images_repo entry:
occne_helm_images_repo='bastion-1:5000  ->  occne_helm_images_repo='bastion-1:5000/
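Alternatively, the same edit can be scripted; a one-line equivalent (assuming the entry reads exactly as shown above) is:

# Append the trailing "/" to the occne_helm_images_repo entry in hosts.ini.
$ sed -i "s|occne_helm_images_repo='bastion-1:5000|occne_helm_images_repo='bastion-1:5000/|" hosts.ini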
To verify:
$ systemctl status firewalld
$ reposync -g -l -d -m --repoid=local_ol7_x86_64_addons --newest-only --download-metadata --download_path=/var/www/html/yum/OracleLinux/OL7/
$ reposync -g -l -d -m --repoid=local_ol7_x86_64_UEKR5 --newest-only --download-metadata --download_path=/var/www/html/yum/OracleLinux/OL7/
$ reposync -g -l -d -m --repoid=local_ol7_x86_64_developer --newest-only --download-metadata --download_path=/var/www/html/yum/OracleLinux/OL7/
$ reposync -g -l -d -m --repoid=local_ol7_x86_64_developer_EPEL --newest-only --download-metadata --download_path=/var/www/html/yum/OracleLinux/OL7/
$ reposync -g -l -d -m --repoid=local_ol7_x86_64_ksplice --newest-only --download-metadata --download_path=/var/www/html/yum/OracleLinux/OL7/
$ reposync -g -l -d -m --repoid=local_ol7_x86_64_latest --newest-only --download-metadata --download_path=/var/www/html/yum/OracleLinux/OL7/
After the above commands complete, the directory structure with all of the repo IDs is visible under /var/www/html/yum/OracleLinux/OL7/. Rename the repositories in the OL7/ directory.
Note: The download_path can be changed according to the folder structure required. Rename the copied folders to match the base URL.
$ cd /var/www/html/yum/OracleLinux/OL7/
$ mv local_ol7_x86_64_addons addons
$ mv local_ol7_x86_64_UEKR5 UEKR5
$ mv local_ol7_x86_64_developer developer
$ mv local_ol7_x86_64_developer_EPEL developer_EPEL
$ mv local_ol7_x86_64_ksplice ksplice
$ mv local_ol7_x86_64_latest latest
Run the following createrepo commands to create the repo data for each repository channel on the Bastion Host YUM mirror:
$ createrepo -v /var/www/html/yum/OracleLinux/OL7/addons
$ createrepo -v /var/www/html/yum/OracleLinux/OL7/UEKR5
$ createrepo -v /var/www/html/yum/OracleLinux/OL7/developer
$ createrepo -v /var/www/html/yum/OracleLinux/OL7/developer_EPEL
$ createrepo -v /var/www/html/yum/OracleLinux/OL7/ksplice
$ createrepo -v /var/www/html/yum/OracleLinux/OL7/latest
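As an optional sanity check, confirm that repository metadata is now served over HTTP; a sketch, assuming the Apache document root is /var/www/html and <bastion_host_ip> is the Bastion Host address:
$ curl http://<bastion_host_ip>/yum/OracleLinux/OL7/latest/repodata/repomd.xml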
5. Get docker-ce and the GPG key from the mirror
Execute the following rsync command:
$ rsync -avzh <login-username>@<IP address of repo server>:<centos folder directory path> /var/www/html/yum/
6. Create the HTTP repository content for the Kubernetes binaries and Helm binaries/charts on the Bastion Host if they are on a different server
Given that the Kubernetes binaries were created outside of the Bastion Host as part of the procedure for setting up artifacts and repositories, the kubernetes/helm binaries and Helm charts have to be copied to the Bastion Host using the rsync command. The example below copies all of the contents from a folder to the static content root of the HTTP server on the Bastion Host:
$ rsync -avzh <login-username>@<IP address of repo server>:<copy from directory address> /var/www/html
While creating the Docker registry on a server outside of the Bastion Host, no tag is added to the registry image and the image is not added to the Docker registry repository of that server. Manually tag the registry image and push it as one of the repositories on the Docker registry server:
$ docker tag registry:<tag> <docker_registry_address>:<port>/registry:<tag>
3. Push the tagged registry image to the customer Docker registry repository on a server accessible by the Bastion Host:
$ docker push <docker_registry_address>:<port>/registry:<tag>
4. Log in to the Bastion Host and pull the registry image onto the Bastion Host from the customer registry set up on a server outside of the Bastion Host:
$ docker pull --all-tags <docker_registry_address>:<port>/registry
5. Run Docker registry on Bastion Host
$ docker run -d -p 5000:5000 --restart=always --name registry registry:<tag>
This runs the docker registry local to the Bastion Host on port 5000.
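As an optional check, the registry catalog can be queried through the Docker Registry HTTP API v2; the list is empty until images are pushed:
$ curl http://localhost:5000/v2/_catalog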
6. Get docker images from the docker registry to the Bastion Host docker registry
Pull all the docker images listed in Docker Repository Requirements to the local Bastion Host repository:
$ docker pull --all-tags <docker_registry_address>:<port>/<image_names_from_attached_list>
7. Tag Images
$ docker tag <docker_registry_address>:<port>/<imagename>:<tag> <bastion_host_docker_registry_address>:<port>/<imagename>:<tag>
Example:
$ docker tag 10.75.207.133:5000/jaegertracing/jaeger-collector:1.9.0 10.75.216.125:5000/jaegertracing/jaeger-collector
8. Push the images to local Docker Registry created on the Bastion host
Create a daemon.json file in /etc/docker directory and add the following to it:
{
  "insecure-registries" : ["<bastion_host_docker_registry_address>:<port>"]
}
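For illustration only, a filled-in daemon.json using the Bastion Host registry address from the tagging example in the previous step (substitute your own address and port):
{
  "insecure-registries" : ["10.75.216.125:5000"]
}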
Restart docker:
$ systemctl daemon-reload
$ systemctl restart docker
$ systemctl enable docker
To verify:
$ systemctl status docker
$ docker push <bastion_host_docker_registry_address>:<port>/<image_names_from_attached_list>
chrony was installed in the first step of this procedure. Enable the service.
$ systemctl enable --now chronyd
$ systemctl status chronyd
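If the upstream NTP source still needs to be configured, a minimal sketch of the relevant /etc/chrony.conf entry, assuming the NTP server 172.16.3.1 used in the example output below (restart chronyd after editing):
server 172.16.3.1 iburst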
Execute the chronyc sources -v command to display the current status of NTP on the Bastion Host. The S field should be set to * indicating NTP sync.
$ chronyc sources -v
210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 172.16.3.1                    4   9   377   381  -1617ns[  +18us] +/-   89ms
Example:
ntp_server='172.16.3.1'
occne_private_registry=registry
occne_private_registry_address='10.75.207.133'
occne_private_registry_port=5000
occne_k8s_binary_repo='http://10.75.207.133/binaries/'
occne_helm_stable_repo_url='http://10.75.207.133/helm/'
occne_helm_images_repo='10.75.207.133:5000/'
docker_rh_repo_base_url=http://10.75.207.133/yum/centos/7/updates/x86_64/
docker_rh_repo_gpgkey=http://10.75.207.133/yum/centos/RPM-GPG-CENTOS
3. Copy the OL7 ISO to the Bastion Host: The ISO file is normally accessible from a Customer Site Specific repository. It is accessible because the ToR switch configurations were completed in procedure: OCCNE Configure Top of Rack 93180YC-EX Switches. For this procedure the file has already been copied to the /var/occne directory on RMS2 and can be copied to the same directory on the Bastion Host.
Copy the OL7 ISO file from RMS2 to the /var/occne directory. The example below uses OracleLinux-7.5-x86_64-disc1.iso.
Note: If the user copies this ISO from their laptop then they must use an application like WinSCP pointing to the Management Interface IP.
$ scp root@172.16.3.5:/var/occne/OracleLinux-7.5-x86_64-disc1.iso /var/occne/OracleLinux-7.5-x86_64-disc1.iso
$ mkdir -p /var/occne/pxelinux
$ mount -t iso9660 -o loop /var/occne/OracleLinux-7.5-x86_64-disc1.iso /mnt
$ cp /mnt/isolinux/initrd.img /var/occne/pxelinux
$ cp /mnt/isolinux/vmlinuz /var/occne/pxelinux
5. Verify and Set the PXE Configuration File Permissions on the Bastion Host: Each file configured in the step above must be open for read and write permissions.
$ chmod 777 /var/occne/pxelinux
$ chmod 777 /var/occne/pxelinux/vmlinuz
$ chmod 777 /var/occne/pxelinux/initrd.img
Example:
[local_ol7_x86_64_UEKR5]
name=Unbreakable Enterprise Kernel Release 5 for Oracle Linux 7 (x86_64)
baseurl=http://10.75.155.195/yum/OracleLinux/OL7/UEKR5/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
enabled=1
proxy=_none_
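As an optional check that the repository definition resolves, a sketch to run on a host that has this .repo file installed under /etc/yum.repos.d/:
$ yum --disablerepo='*' --enablerepo='local_ol7_x86_64_UEKR5' repolist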
Example:
changed: [k8s-1.rainbow.lab.us.oracle.com -> localhost]
changed: [k8s-2.rainbow.lab.us.oracle.com -> localhost]
changed: [k8s-5.rainbow.lab.us.oracle.com -> localhost]
changed: [k8s-4.rainbow.lab.us.oracle.com -> localhost]
changed: [k8s-6.rainbow.lab.us.oracle.com -> localhost]
changed: [k8s-7.rainbow.lab.us.oracle.com -> localhost]
changed: [db-1.rainbow.lab.us.oracle.com -> localhost]
Accessing the install process via a KVM or via VSP should display
the following last few lines.
.
.
.
[  OK  ] Stopped Remount Root and Kernel File Systems.
         Stopping Remount Root and Kernel File Systems...
[  OK  ] Stopped Create Static Device Nodes in /dev.
         Stopping Create Static Device Nodes in /dev...
[  OK  ] Started Restore /run/initramfs.
[  OK  ] Reached target Shutdown.
Reboot the "stuck" blades using the KVM, the ssh session to the HP ILO, or the power button on the blade. This example shows how to use the HP ILO to reboot the blade (using blade 1, or k8s-4).
Log in to the blade ILO (using the root credentials):
$ ssh root@192.168.20.121
</>hpiLO->
Example:
$ mv <filename>.repo <filename>.repo.disabled
Example:
$ mv oracle-linux-ol7.repo oracle-linux-ol7.repo.disabled
$ mv uek-ol7.repo uek-ol7.repo.disabled
$ mv virt-ol7.repo virt-ol7.repo.disabled
6. Execute the OS install on the datastore hosts from the Bastion Host
Prerequisites
1. The Management VM is present on the Storage Host (RMS2).
2. The Storage Host (RMS1) is reinstalled using the os-install container.
3. The first and second Storage Hosts are defined in the OCCNE Inventory File Template.
4. Host names, IP addresses, and network information assigned to this Management VM are captured in the OCCNE 1.0 Installation PreFlight Checklist.
Expectations
1. The Management VM on the first Storage Host is a backup for the Management VM on the second Storage Host.
2. All the required configuration files and data configured in the backup Management VM on the first Storage Host (RMS1) are copied from the Management VM on the second Storage Host (RMS2).
References
1. https://linux.die.net/man/1/virt-install
2. https://linuxconfig.org/how-to-create-and-manage-kvm-virtual-machines-from-cli
3. https://www.cyberciti.biz/faq/kvm-install-centos-redhat-using-kickstart-ks-cfg/
4. https://opensource.com/business/16/9/linux-users-guide-lvm
Prerequisites
Below is the list of prerequisites required for creating the VMs and installing the MySQL Cluster.
1. The VMs needed for installing the MySQL Cluster are created as part of the VM creation procedure OCCNE Install VMs for MySQL Nodes and Management Server.
2. The SSH keys generated during host provisioning are in the /var/occne/<cluster_name> directory; these SSH keys are configured in these VMs as part of the OCCNE Install VMs for MySQL Nodes and Management Server procedure, so that the db-install container can install these VMs with the MySQL Cluster software.
3. The host running the docker image must have docker installed.
4. A defined and installed site hosts.ini inventory file should also be present.
5. Download the MySQL Cluster Manager software as specified in the OCCNE 1.0 Installation PreFlight Checklist and place it in the /var/occne directory on the Bastion Host (Management VM).
References
1. MySQL NDB Cluster : https://dev.mysql.com/doc/refman/5.7/en/mysql-cluster.html
2. MySQL Cluster Manager: https://dev.mysql.com/doc/mysql-cluster-manager/1.4/en/
Note: This container will be used in the next step while running the db install container, which installs the MySQL Cluster.
For Example:
$ docker run -it --network host --cap-add=NET_ADMIN \
-v /var/occne/rainbow/:/host \
-v /var/occne:/var/occne:rw \
reg-1:5000/db_install:1.0.1
2. Using docker-compose
a. Create a docker-compose.yaml file in the /var/occne/<cluster_name> directory.
$ vi docker-compose.yaml
db_install_<cluster_name>:
  net: host
  stdin_open: true
  tty: true
  image: <customer_repo_location>/<db_install_container_name>
  container_name: <cluster_name>_db_installer
  cap_add:
    - NET_ADMIN
  volumes:
    - /var/occne/<cluster_name>:/host
    - /var/occne:/var/occne:rw
For example: if the directory created is named OccneCluster, then <cluster_name> should be replaced with "OccneCluster".
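A hedged usage sketch for starting the container defined above, assuming docker-compose is installed on the Bastion Host and the file was created in the cluster directory:
$ cd /var/occne/<cluster_name>
$ docker-compose up -d
$ docker logs -f <cluster_name>_db_installer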
5. Test the MySQL Cluster: Test the MySQL Cluster by executing the following command:
$ docker run -it --network host --cap-add=NET_ADMIN \
-v /var/occne/<cluster_name>:/host \
-v /var/occne:/var/occne:rw \
<customer_repo_location>/<db_install_container_name> \
/test/cluster_test 0
For Example:
$ docker run -it --network host --cap-add=NET_ADMIN \
-v /var/occne/rainbow:/host \
-v /var/occne:/var/occne:rw \
reg-1:5000/db_install:1.0.1 \
/test/cluster_test
6. Log in to each of the MySQL SQL nodes and change the MySQL root user password: As part of the MySQL Cluster installation, the db_install container generates a random password and marks it as expired on the MySQL SQL nodes. This password is stored in the /var/occnedb/mysqld_expired.log file, so you need to log in to each of the MySQL SQL nodes and change the MySQL root user password.
1. Log in to the MySQL SQL Node VM.
2. Log in to the mysql client as the root user.
$ sudo su
$ mysql -h 127.0.0.1 -uroot -p
3. Enter the expired random password for the mysql root user stored in the /var/occnedb/mysqld_expired.log file:
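The exact password policy is site specific; a minimal sketch of changing the expired root password once logged in (MySQL 5.7 syntax; the new password value is a placeholder):
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY '<new_root_password>';
mysql> FLUSH PRIVILEGES;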
Prerequisites
1. All the host servers where this VM is created are captured in OCCNE Inventory File Preparation.
2. Host names, IP addresses, and network information assigned to this VM are captured in the OCCNE 1.0 Installation PreFlight Checklist.
3. The Cluster Inventory File and SSH keys are present in the cluster_name folder in the /var/occne directory.
4. A docker image for 'k8s_install' must be available in the docker registry accessible by the Bastion Host.
- hosts: k8s-cluster
  tasks:
    - name: Clean artifact path
      file:
        state: absent
        path: "/etc/yum.repos.d/docker.repo"
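A hedged example of applying a task file like this against the cluster inventory (the playbook file name is illustrative only):
$ ansible-playbook -i /var/occne/<cluster_name>/hosts.ini remove-docker-repo.yaml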
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:$HOME/bin:/var/occne/<cluster_name>/artifacts
source /root/.bash_profile
Execute the following to verify the $PATH has been updated.
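For example, a simple check:
$ echo $PATH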
4. Run Kubernetes Cluster Tests: To verify the Kubernetes installation, run the k8s_install container with the /test/cluster_test command.
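A sketch of what that invocation might look like, modeled on the db_install cluster test shown earlier in this chapter; the image name and arguments here are assumptions, not the exact k8s_install syntax:
$ docker run -it --network host --cap-add=NET_ADMIN \
  -v /var/occne/<cluster_name>:/host \
  -v /var/occne:/var/occne:rw \
  <customer_repo_location>/<k8s_install_container_name> /test/cluster_test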
Introduction
Common Services typically refers to the collection of various components deployed to OCCNE. The common services provide logging, tracing, and metric collection for the cluster, monitor the cluster, and raise alerts when an anomaly occurs or when a potential failure is around the corner. The procedure below is used to install the common services.
Prerequisites
1. All procedures in OCCNE Kubernetes Installer are complete.
2. The host running the docker image must have docker installed. Refer to Install VMs for MySQL Nodes and Management Server for more information.
3. A defined and installed site hosts.ini file should also be present. Check OCCNE Inventory File Preparation for instructions for developing this file.
4. A docker image named 'occne/configure' must be available in the customer repository.
Procedure Steps
2. Run configure image: Run the Configure image using the command below. After the "configure:" keyword put the tag of your latest pulled image.
$ docker run --rm -v /<PATH_TO_CLUSTER>/<CLUSTER_NAME>:/host <CUSTOMER-PROVIDED_REPOSITORY_LOCATION>/occne/configure:<RELEASE_TAG>
Note: Replace the <RELEASE_TAG> after the "configure:" image name with the latest build tag.
Example:
$ /var/occne/<cluster_name>/artifacts/kubectl.sh get service --all-namespaces
$ /var/occne/<cluster_name>/artifacts/kubectl.sh get service --namespace=default
$ /var/occne/<cluster_name>/artifacts/kubectl.sh get service --namespace=kube-system
$ /var/occne/<cluster_name>/artifacts/kubectl.sh get service --namespace=occne-infra
occne-infra   occne-prometheus-pushgateway    ClusterIP      10.233.49.128   <none>          9091/TCP                              93s
occne-infra   occne-prometheus-server         LoadBalancer   10.233.19.225   10.75.163.131   80:31511/TCP                          93s
occne-infra   occne-tracer-jaeger-agent       ClusterIP      10.233.45.62    <none>          5775/UDP,6831/UDP,6832/UDP,5778/TCP   99s
occne-infra   occne-tracer-jaeger-collector   ClusterIP      10.233.58.112   <none>          14267/TCP,14268/TCP,9411/TCP          99s
occne-infra   occne-tracer-jaeger-query       LoadBalancer   10.233.43.110   10.75.163.129   80:31319/TCP                          99s
Example:
$ docker run --rm --network host --cap-add=NET_ADMIN -v /var/occne/rainbow.lab.us.oracle.com/:/host -v /var/occne/:/var/occne:rw -e "OCCNEARGS=--limit host_hp_gen_10[0:7],localhost --tags remove" 10.75.200.217:5000/configure:1.0.1
Post Installation Activities
Prerequisites
1. Common services have been installed on all nodes hosting the cluster.
2. Gather the list of cluster names and version tags for docker images that were used during install.
3. All cluster nodes and service pods should be up and running.
4. Commands are required to be run on the Management server.
5. Any modern browser (HTML5 compliant) with network connectivity to the CNE.
6. Verify Alerts are configured:
1. Navigate to the Alerts tab of the Prometheus server GUI, or navigate using the URL http://$PROMETHEUS_LOADBALANCER_IP:$PROMETHEUS_LOADBALANCER_PORT/alerts, substituting the values for <PROMETHEUS_LOADBALANCER_IP> and <PROMETHEUS_LOADBALANCER_PORT>.
2. If the below alerts are seen in the "Alerts" tab of the Prometheus GUI, then alerts are configured properly.
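One way to look up those values (a sketch, using the occne-prometheus-server LoadBalancer service shown in the earlier kubectl output):
$ /var/occne/<cluster_name>/artifacts/kubectl.sh get service occne-prometheus-server --namespace=occne-infra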
Artifacts
The following appendices outline procedures referenced by one or more install procedures.
These procedures may be conditionally executed based on customer requirements or to address
certain deployment environments.
Repository Artifacts
OL YUM Repository Requirements
The following manifest includes the current list of RPMs that have been tested with a fully
configured system.
[local_ol7_x86_64_UEKR5]
name=Unbreakable Enterprise Kernel Release 5 for Oracle Linux 7 (x86_64)
baseurl=http://10.75.155.195/yum/OracleLinux/OL7/UEKR5/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
enabled=1
proxy=_none_
[local_ol7_x86_64_latest]
name=Oracle Linux 7 Latest (x86_64)
baseurl=http://10.75.155.195/yum/OracleLinux/OL7/latest/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
enabled=1
proxy=_none_
[local_ol7_x86_64_addons]
name=Oracle Linux 7 Addons (x86_64)
baseurl=http://10.75.155.195/yum/OracleLinux/OL7/addons/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
enabled=1
proxy=_none_
[local_ol7_x86_64_ksplice]
name=Ksplice for Oracle Linux 7 (x86_64)
baseurl=http://10.75.155.195/yum/OracleLinux/OL7/ksplice/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
enabled=1
proxy=_none_
[local_ol7_x86_64_developer]
name=Packages for creating test and development environments for Oracle Linux 7 (x86_64)
baseurl=http://10.75.155.195/yum/OracleLinux/OL7/developer/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
enabled=1
proxy=_none_
[local_ol7_x86_64_developer_EPEL]
name=EPEL Packages for creating test and development environments for Oracle Linux 7 (x86_64)
baseurl=http://10.75.155.195/yum/OracleLinux/OL7/developer/EPEL/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
enabled=1
proxy=_none_
2. Create the Docker CE repository file.
Below is an example of a repository file providing the details on a repository with the
necessary docker-ce package.
/etc/yum.repos.d/docker-ce-stable.repo
[local_docker-ce-stable]
name=Docker CE Stable (x86_64)
baseurl=http://10.75.155.195/yum/centos/7/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
enabled=1
The docker RPM placed into the repository should be docker-ce version 18.06.1.ce-3.el7 for x86_64, downloaded from: https://download.docker.com/linux/centos/7/x86_64/stable
The gpg key for the rpm can be downloaded from: https://download.docker.com/linux/centos/gpg
More information on configuring and installing Nginx using Docker can be found here: https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-docker/
OR
Use the html directory of the Apache HTTP server created while setting up the YUM mirror to perform the tasks listed below. Note: Create new directories for the Kubernetes binaries and Helm charts in the html folder.
Procedure Steps
#!/bin/bash
################################################################################
#                                                                              #
# Copyright (c) 2019 Oracle and/or its affiliates. All rights reserved.        #
#                                                                              #
################################################################################
usage() {
    echo "Retrieve kubespray binaries for a private HTTP repo." 2>&1
    echo "Expected 1 argument: webroot-directory " 2>&1
    exit 1
}

#
# Kubespray Binaries
kube_version='v1.12.5'          # k8s_install/kubespray/roles/download/defaults/main.yaml
kubeadm_version=$kube_version   # k8s_install/kubespray/roles/download/defaults/main.yaml
image_arch='amd64'              # k8s_install/kubespray/roles/download/defaults/main.yaml
etcd_version='v3.2.24'          # k8s_install/kubespray/roles/download/defaults/main.yaml
cni_version='v0.6.0'            # k8s_install/kubespray/roles/download/defaults/main.yaml

startdir=$(pwd)
mkdir -p $1/binaries/$kube_version
################################################################################
#                                                                              #
# Copyright (c) 2019 Oracle and/or its affiliates. All rights reserved.        #
#                                                                              #
################################################################################
# chart-name chart-version
stable/elasticsearch 1.27.2
stable/elasticsearch-curator 1.2.1
stable/elasticsearch-exporter 1.1.2
stable/fluentd-elasticsearch 2.0.7
stable/grafana 3.3.8
stable/kibana 3.0.0
stable/metallb 0.8.4
stable/prometheus 8.8.0
stable/prometheus-node-exporter 1.3.0
stable/metrics-server 2.5.1
incubator/jaeger 0.8.3
#!/bin/bash
################################################################################
#                                                                              #
# Copyright (c) 2019 Oracle and/or its affiliates. All rights reserved.        #
#                                                                              #
################################################################################
usage() {
    echo "Retrieve helm charts for a private HTTP repo." 2>&1
    echo "Expected 1 argument: webroot-directory " 2>&1
    echo "run with image list piped in: $0 webroot-directory < helm_images.txt" 2>&1
    exit 1
}

startdir=$(pwd)
mkdir -p $1/charts

# helm_version='v2.11.0'   # k8s_install/kubespray/roles/download/defaults/main.yaml configure/readme.md
helm_version='v2.9.1'      # configure/Dockerfile (and in environment)
...
[occne:vars]
...
occne_k8s_binary_repo='http://winterfell:8082/binaries'
occne_helm_stable_repo_url='http://winterfell:8082/charts/'
...
Prerequisites
1. Docker is installed and docker commands can be run.
2. Create a local docker registry accessible by the target of the installation:
$ docker run -d -p <port>:<port> --restart=always --name <registryname> registry:2
References
https://docs.docker.com/registry/deploying/
https://docs.docker.com/registry/configuration/
Procedure Steps
#
# Kubespray Images
k8s.gcr.io/addon-resizer:1.8.3
coredns/coredns:1.2.6
gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.3.0
quay.io/calico/kube-controllers:v3.1.3
quay.io/calico/node:v3.1.3
quay.io/calico/cni:v3.1.3
quay.io/calico/ctl:v3.1.3
gcr.io/google-containers/kube-apiserver:v1.12.5
gcr.io/google-containers/kube-controller-manager:v1.12.5
gcr.io/google-containers/kube-proxy:v1.12.5
gcr.io/google-containers/kube-scheduler:v1.12.5
nginx:1.13
quay.io/external_storage/local-volume-provisioner:v2.2.0
gcr.io/kubernetes-helm/tiller:v2.11.0
lachlanevenson/k8s-helm:v2.11.0
quay.io/jetstack/cert-manager-controller:v0.5.2
gcr.io/google-containers/pause:3.1
gcr.io/google_containers/pause-amd64:3.1
quay.io/coreos/etcd:v3.2.24
#
# Common Services Helm Chart Images
quay.io/pires/docker-elasticsearch-curator:5.5.4
docker.elastic.co/elasticsearch/elasticsearch-oss:6.7.0
justwatch/elasticsearch_exporter:1.0.2
grafana/grafana:6.1.6
docker.elastic.co/kibana/kibana-oss:6.7.0
gcr.io/google-containers/fluentd-elasticsearch:v2.3.2
metallb/controller:v0.7.3
metallb/speaker:v0.7.3
jimmidyson/configmap-reload:v0.2.2
quay.io/coreos/kube-state-metrics:v1.5.0
quay.io/prometheus/node-exporter:v0.17.0
prom/pushgateway:v0.6.0
prom/alertmanager:v0.15.3
prom/prometheus:v2.7.1
jaegertracing/jaeger-agent:1.9.0
jaegertracing/jaeger-collector:1.9.0
jaegertracing/jaeger-query:1.9.0
gcr.io/google_containers/metrics-server-amd64:v0.3.1
usage() {
    echo "Pull, tag, and push images to a private image repo." 2>&1
    echo "Expected 1 argument: repo_name:port " 2>&1
    echo "run with image list piped in: $0 repo_name:port < docker_images.txt" 2>&1
    exit 1
}
#
# Kubespray Images
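A catalog listing like the sample below can be retrieved through the Docker Registry HTTP API v2; a hedged example:
$ curl http://<repo_name>:<port>/v2/_catalog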
Sample Result:
{"repositories":["coredns/coredns","docker.elastic.co/elasticsearch/elasticsearch-oss","docker.elastic.co/kibana/kibana-oss","gcr.io/google-containers/fluentd-elasticsearch","gcr.io/google-containers/kube-apiserver","gcr.io/google-containers/kube-controller-manager","gcr.io/google-containers/kube-proxy","gcr.io/google-containers/kube-scheduler","gcr.io/google-containers/pause","gcr.io/google_containers/cluster-proportional-autoscaler-amd64","gcr.io/google_containers/metrics-server-amd64","gcr.io/google_containers/pause-amd64","gcr.io/kubernetes-helm/tiller","grafana/grafana","jaegertracing/jaeger-agent","jaegertracing/jaeger-collector","jaegertracing/jaeger-query","jimmidyson/configmap-reload","justwatch/elasticsearch_exporter","k8s.gcr.io/addon-resizer","lachlanevenson/k8s-helm","metallb/controller","metallb/speaker","nginx","prom/alertmanager","prom/prometheus","prom/pushgateway","quay.io/calico/cni","quay.io/calico/ctl","quay.io/calico/kube-controllers","quay.io/calico/node","quay.io/coreos/etcd","quay.io/coreos/kube-state-metrics","quay.io/external_storage/local-volume-provisioner","quay.io/jetstack/cert-manager-controller","quay.io/pires/docker-elasticsearch-curator","quay.io/prometheus/node-exporter"]}
...
[occne:vars]
...
occne_private_registry=winterfell
occne_private_registry_address='10.75.216.114'
occne_private_registry_port=5002
occne_helm_images_repo='winterfell:5002'
...
5. If an error is encountered during execution of the retrieve_images.sh script: In case a 500 error is encountered with a message stating 'no space left' during the run of the bash script listed above, use the following commands and re-run to see if the error is fixed:
Docker clean up commands
$ docker ps --filter status=dead --filter status=exited -aq | xargs -r docker rm -v
Reference Procedures
Template example
The inventory is composed of multiple groups (indicated by bracketed strings):
• local: OCCNE ansible use. Do not modify.
• occne: list of servers in the OCCNE cluster that will be installed by the os_install
container.
• k8s-cluster: list of servers in the kubernetes cluster.
• kube-master: list of servers that will be provisioned as kubernetes master nodes by the
k8s_install container.
• kube-node: list of servers that will be provisioned as kubernetes worker nodes by the
k8s_install container.
• etcd: list of servers that will be provisioned as part of kubernetes etcd cluster by the
k8s_install container.
• data_store: list of servers that will host the VMs of the MySQL database cluster; the os_install container will install KVM on them.
• occne:vars: list of occne environment variables. Values for variables are required. See
below for description.
OCCNE Variables
Variable - Definition
occne_cluster_name - k8s cluster name
nfs_host - IP address of the OS install nfs host (the host running the os_install container)
nfs_path - path to the mounted OS install media on the nfs host. This should always be set to /var/occne/
subnet_ipv4 - subnet of IP addresses available for hosts in the OCCNE cluster
subnet_cidr - subnet_ipv4 in cidr notation format
netmask - subnet_ipv4 netmask
broadcast_address - broadcast address on the OCCNE cluster on which the pxe server will listen
default_route - default router in the OCCNE cluster
next_server - IP address of the TFTP server used for pxe boot (the host running the os_install container)
name_server - DNS name server for the OCCNE cluster
ntp_server - NTP server for the OCCNE cluster
http_proxy - HTTP Proxy server
https_proxy - HTTPS Proxy server
occne_private_registry - OCCNE private docker registry
occne_private_registry_address - OCCNE private docker registry address
occne_private_registry_port - OCCNE private docker registry port
metallb_peer_address - address of the BGP router peer that metalLB connects to
metallb_default_pool_protocol - protocol used by metalLB to announce allocated IP addresses
metallb_default_pool_addresses - range of IP addresses to be allocated by metalLB from the default pool
pxe_install_lights_out_usr - ILO user
pxe_install_lights_out_passwd - ILO user password
pxe_config_metrics_persist_size - (optional) Logical volume size for Metrics persistent storage, will override the default of 500G
pxe_config_es_data_persist_size - (optional) Logical volume size for ElasticSearch data persistent storage, will override the default of 500G
pxe_config_es_master_persist_size - (optional) Logical volume size for ElasticSearch master persistent storage, will override the default of 500G
Groups of groups are formed using the children keyword. For example, the [occne:children]
creates an occne group comprised of several other groups.
Inline comments are not allowed.
The OCCNE Inventory file is composed of several groups:
• host_hp_gen_10: list of all physical hosts in the OCCNE cluster. Each host in this group
must also have several properties defined (outlined below)
– ansible_host: The IP address for the host's teamed primary interface. The occne/
os_install container uses this IP to configure a static IP for a pair of teamed interfaces
when the hosts are provisioned.
– ilo: The IP address of the host's iLO interface. This IP is manually configured as part
of the OCCNE Configure Addresses for RMS iLOs, OA, EBIPA process.
– mac: The MAC address of the host's network bootable interface. This is typically eno5
for Gen10 RMS hardware and eno1 for Gen10 bladed hardware. MAC addresses must
use all lowercase alphanumeric values with a dash as the separator
• host_kernel_virtual: list of all virtual hosts in the OCCNE cluster. Each host in this group
must have the same properties defined as above with the exception of the ilo property
• occne:children: Do not modify the children of the occne group
• occne:vars: This is a list of variables representing configurable site-specific data. While
some variables are optional, the ones listed in the boilerplate should be defined with valid
values. If a given site does not have applicable data to fill in for a variable, the OCCNE
installation or engineering team should be consulted. Individual variable values are
explained in subsequent sections.
• data_store: list of Storage Hosts
• kube-master: list of Master Node hosts where kubernetes master components run.
• etcd: set to the same list of nodes as the kube-master group. list of hosts that compose the
etcd server. Should always be an odd number.
• kube-node: list of Worker Nodes. Worker Nodes are where kubernetes pods run and should
be comprised of the bladed hosts.
• k8s-cluster:children: do not modify the children of k8s-cluster
Data Tier Groups
The MySQL service is comprised of several nodes running on virtual machines on RMS hosts.
This collection of hosts is referred to as the MySQL Cluster. Each host in the MySQL Cluster
requires a NodeID parameter. Each host in the MySQL cluster is required to have a NodeID
value that is unique across the MySQL cluster. Additional parameter range limitations are
outlined below.
• mysqlndb_mgm_nodes: list of MySQL Management nodes. In OCCNE 1.0 this group
consists of three virtual machines distributed equally among the kube-master nodes. These
nodes must have a NodeId parameter defined
– NodeId: Parameter must be unique across the MySQL Cluster and have a value
between 49 and 255.
• mysqlndb_data_nodes: List of MySQL Data nodes. In OCCNE 1.0 this group consists of
four virtual machines distributed equally among the Storage Hosts. Requires a NodeId
parameter.
– NodeId: Parameter must be unique across the MySQL Cluster and have a value
between 1 and 48.
• mysqlndb_sql_nodes: List of MySQL SQL nodes. In OCCNE 1.0 this group consists of two virtual machines distributed equally among the Storage Hosts. Requires a NodeId parameter.
– NodeId: Parameter must be unique across the MySQL Cluster and have a value between 49 and 255.
• mysqlndb_all_nodes: Do not modify the children of the mysqlndb_all_nodes group.
• mysqlndb_all_nodes: Do not modify the variables in this group
Prerequisites
Prior to initiating the procedure steps, the Inventory Boilerplate should be copied to a system
where it can be edited and saved for future use. Eventually the hosts.ini file needs to be
transferred to OCCNE servers.
References
1. Ansible Inventory Intro
Procedure Steps
################################################################################
# EXAMPLE OCCNE Cluster hosts.ini file. Defines OCCNE deployment variables
# and targets.
################################################################################
# Definition of the host node local connection for Ansible control,
# do not change
[local]
127.0.0.1 ansible_connection=local
################################################################################
# This is a list of all of the nodes in the targeted deployment system with the
# IP address to use for Ansible control during deployment.
# For bare metal hosts, the IP of the ILO is used for driving reboots.
# Host MAC addresses is used to identify nodes during PXE-boot phase of the
# os_install process.
# MAC addresses must be lowercase and delimited with a dash "-"
[host_hp_gen_10]
k8s-1.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-2.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-3.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-4.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-5.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-6.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-7.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-1.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-2.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
[host_kernel_virtual]
db-3.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-4.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-5.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-6.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-7.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-8.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-9.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-10.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
###############################################################################
# Node grouping of which nodes are in the occne system
[occne:children]
host_hp_gen_10
host_kernel_virtual
k8s-cluster
data_store
###############################################################################
# Variables that define the OCCNE environment and specify target configuration.
[occne:vars]
occne_cluster_name=foo.lab.us.oracle.com
nfs_host=10.75.216.xx
nfs_path=/var/occne
subnet_ipv4=10.75.216.0
subnet_cidr=/25
netmask=255.255.255.128
broadcast_address=10.75.216.127
default_route=10.75.216.1
next_server=10.75.216.114
name_server='10.75.124.245,10.75.124.246'
ntp_server='10.75.124.245,10.75.124.246'
http_proxy=http://www-proxy.us.oracle.com:80
https_proxy=http://www-proxy.us.oracle.com:80
occne_private_registry=bastion-1
occne_private_registry_address='10.75.216.xx'
occne_private_registry_port=5000
metallb_peer_address=10.75.216.xx
metallb_default_pool_protocol=bgp
metallb_default_pool_addresses='10.75.xxx.xx/xx'
pxe_install_lights_out_usr=root
pxe_install_lights_out_passwd=TklcRoot
occne_k8s_binary_repo='http://bastion-1:8082/binaries'
helm_stable_repo_url='http://<bastion-1 IP addr>:<port>/charts/stable/'
occne_helm_stable_repo_url='http://<bastion-1 IP addr>:<port>/charts/stable/'
occne_helm_images_repo='bastion-1:5000/'
docker_rh_repo_base_url=http://<bastion-1 IP addr>/yum/centos/7/updates/x86_64/
docker_rh_repo_gpgkey=http://<bastion-1 IP addr>/yum/centos/RPM-GPG-CENTOS
###############################################################################
# Node grouping of which nodes are in the occne data_store
[data_store]
db-1.foo.lab.us.oracle.com
db-2.foo.lab.us.oracle.com
###############################################################################
# Node grouping of which nodes are to be Kubernetes master nodes (must be at least 2)
[kube-master]
k8s-1.foo.lab.us.oracle.com
k8s-2.foo.lab.us.oracle.com
k8s-3.foo.lab.us.oracle.com
################################################################################
# Node grouping specifying which nodes are Kubernetes etcd data.
# An odd number of etcd nodes is required.
[etcd]
k8s-1.foo.lab.us.oracle.com
k8s-2.foo.lab.us.oracle.com
k8s-3.foo.lab.us.oracle.com
################################################################################
# Node grouping specifying which nodes are Kubernetes worker nodes.
# A minimum of two worker nodes is required.
[kube-node]
k8s-4.foo.lab.us.oracle.com
k8s-5.foo.lab.us.oracle.com
k8s-6.foo.lab.us.oracle.com
k8s-7.foo.lab.us.oracle.com
################################################################################
# Node grouping of the nodes in the Kubernetes cluster; do not modify.
[k8s-cluster:children]
kube-node
kube-master
################################################################################
# The following node groupings are for MySQL NDB cluster
# installation under control of MySQL Cluster Manager
################################################################################
# NodeId should be unique across the cluster, each node should be assigned with
# the unique NodeId, this id will control which data nodes should be part of
# different node groups. For Management nodes, NodeId should be between 49 to
# 255 and should be assigned with unique NodeId with in MySQL cluster.
[mysqlndb_mgm_nodes]
db-3.foo.lab.us.oracle.com NodeId=49
db-4.foo.lab.us.oracle.com NodeId=50
###############################################################################
# For data nodes, NodeId should be between 1 to 48, NodeId will be used to
# group the data nodes among different Node Groups.
[mysqlndb_data_nodes]
db-5.foo.lab.us.oracle.com NodeId=1
db-6.foo.lab.us.oracle.com NodeId=2
db-7.foo.lab.us.oracle.com NodeId=3
db-8.foo.lab.us.oracle.com NodeId=4
################################################################################
# For SQL nodes, NodeId should be between 49 to 255 and should be assigned with
# unique NodeId with in MySQL cluster.
[mysqlndb_sql_nodes]
db-9.foo.lab.us.oracle.com NodeId=56
db-10.foo.lab.us.oracle.com NodeId=57
################################################################################
# Node grouping of all of the nodes involved in the MySQL cluster
[mysqlndb_all_nodes:children]
mysqlndb_mgm_nodes
mysqlndb_data_nodes
mysqlndb_sql_nodes
################################################################################
# MCM and NDB cluster variables can be defined here to override the values.
[mysqlndb_all_nodes:vars]
occne_mysqlndb_NoOfReplicas=2
occne_mysqlndb_DataMemory=12G
• A local Docker registry is needed to hold the proper Docker images to support the
containers that run Kubernetes and the common services that Kubernetes will manage
• A copy of the Oracle Linux ISO for OS installation
• A copy of the MySQL Cluster Manager software for database nodes
The first switch in the solution will serve to connect each server's first NIC in their respective
NIC pairs to the network. The next switch in the solution will serve to connect each server's
redundant (2nd) NIC in their respective NIC pairs to the network.
Procedure
Complete VM IP Table
Table values that are prefilled are fixed in the topology and do not need to be changed. Blank
values indicate that customer engagement is needed to determine the appropriate value.
Table B-9 ToR and Enclosure Switches Variables Table (Switch Specific)
Note:
• The instructions listed here are for a linux host. Instructions to do this on a PC can
be obtained from the Web if needed. The mount instructions are for a Linux
machine.
• When creating these files on a USB from Windows (using notepad or some other
Windows editor), the files may contain control characters that are not recognized
when using in a Linux environment. Usually this includes a ^M at the end of each
line. These control characters can be removed by using the dos2unix command in
Linux with the file: dos2unix <filename>.
• When copying the files to this USB, make sure the USB is formatted as FAT32.
Miscellaneous Files
This procedure details any miscellaneous files that need to be copied to the Utility USB.
1. Copy the hosts.ini file from step 2.7 onto the Utility USB.
2. Copy the ol7-mirror.repo file from the customer's OL YUM mirror instance onto the
Utility USB. Reference procedure: OCCNE YUM Repository Configuration
3. Copy the docker-ce-stable.repo file from procedure: OCCNE YUM Repository
Configuration onto the Utility USB.
4. Copy the following switch configuration template files from OHC to the Utility USB:
a. 93180_switchA.cfg
b. 93180_switchB.cfg
c. 6127xlg_irf.cfg
d. ifcfg-vlan
e. ifcfg-bridge
5. Copy VM kickstart template file bastion_host.ks from OHC onto the Utility USB.
6. Copy the occne-ks.cfg.j2.new file from OHC into the Utility USB.
Copy and Edit the poap.py Script
This procedure is used to create the dhcpd.conf file that will be needed in procedure: OCCNE
Configure Top of Rack 93180YC-EX Switches.
1. Mount the Utility USB.
Note:
Instructions for mounting a USB in linux are at: OCCNE Installation of Oracle
Linux 7.5 on Bootstrap Server : Install Additional Packages. Only follow steps 1-3
to mount the USB.
wget https://raw.githubusercontent.com/datacenter/nexus9000/master/nx-os/poap/poap.py
on any linux server or laptop
4. Rename the poap.py script to poap_nexus_script.py.
mv poap.py poap_nexus_script.py
5. The switches' firmware version is handled before the installation procedure, so there is no need to handle it here. Comment out the lines that handle the firmware (lines 1931-1944):
vi poap_nexus_script.py
# copy_system()
# if single_image is False:
# copy_kickstart()
# signal.signal(signal.SIGTERM, sig_handler_no_exit)
# # install images
# if single_image is False:
# install_images()
# else:
# install_images_7_x()
# cleanup_temp_images()
# see /usr/share/doc/dhcp*/dhcpd.conf.example
default-lease-time 10800;
max-lease-time 43200;
allow unknown-clients;
filename "poap_nexus_script.py";
next-server 192.168.2.11;
default-lease-time 10800;
max-lease-time 43200;
allow unknown-clients;
next-server 192.168.20.11;
Note:
This file includes some variables that must be updated when used in procedure:
OCCNE Installation of the Bastion Host.
Note:
The steps to update those variables are contained in that procedure.
#version=DEVEL
cdrom
text
firstboot --enable
ignoredisk --only-use=sda
# Keyboard layouts
# System language
lang en_US.UTF-8
# Network information
network --hostname=NODEHOSTNAME
# Root password
$0FqnB.agxmnDqb.Bh0sSLhq7..t37RwUZr7SlVmIBvMmWVoUjb2DJJ2f4VlrW9RdfVi.IDXxd2/Eeo41FCCJ01
# System services
services --enabled="chronyd"
skipx
# System timezone
#autopart --type=lvm
volgroup ol pv.11
%packages
@^minimal
@compat-libraries
@base
@core
@debugging
@development
chrony
kexec-tools
%end
%end
%anaconda
%end
%post --log=/root/occne-ks.log
setenforce permissive
PUBLIC_KEY
EOF
fi
if [ $? -ne 0 ]; then
fi
fi
if [ $? -ne 0 ]; then
fi
fi
echo 'This site is for the exclusive use of Oracle and its authorized customers
and partners. Use of this site by customers and partners is subject to the Terms
of Use and Privacy Policy for this site, as well as your contract with Oracle.
Use of this site by Oracle employees is subject to company policies, including
the Code of Conduct. Unauthorized access or breach of these terms may result in
termination of your authorization to use this site and/or civil and criminal
penalties.' > /etc/issue
echo 'This site is for the exclusive use of Oracle and its authorized customers
and partners. Use of this site by customers and partners is subject to the Terms
of Use and Privacy Policy for this site, as well as your contract with Oracle.
Use of this site by Oracle employees is subject to company policies, including
the Code of Conduct. Unauthorized access or breach of these terms may result in
termination of your authorization to use this site and/or civil and criminal
penalties.' > /etc/issue.net
%end
reboot
Installation Use Cases and Repository Requirements
Requirements
• Installer notebooks may be used to access resources; however, the following limitations
will need to be considered:
– The installer notebook may not arrive on site with Oracle IP, such as source code or
install tools
– The installer notebook may not have customer sensitive material stored on it, such as
access credentials
• Initial install may require trained personnel to be on site; however, DR of any individual
component should not require trained software personnel to be local to the installing device
– Physical rackmounting and cabling of replacement equipment should be performed by
customer or contractor personnel; but software configuration and restoration of
services should not require personnel to be sent to site.
• The Oracle Linux YUM repository, Docker registry, and Helm repository are configured and available to the CNE frame for installation activities. Oracle will define what artifacts need to be in these repositories. It is the customer's responsibility to pull the artifacts into repositories reachable by the OCCNE frame.
CNE Overview
Problem Statement
A solution is needed to initialize the frame with an OS, a Kubernetes cluster, and a set of
common services for 5G NFs to be deployed into. How the frame is brought from
manufacturing default state to configured and operational state is the topic of this section.
Manufacturing Default State characteristics/assumptions:
• Frame components are "racked and stacked", with power and network connections in place
• Frame ToR switches are not connected to the customer network until they are configured
(alternatively, the links can be disabled from the customer side)
• An installer is on-site
• An installer has a notebook and a USB flash drive with which to configure at the first
server in the frame
• An installer's notebook has access to the repositories setup by the customer
The installer notebook is considered to be an Oracle asset. As such, it will have limitations
applied as mentioned above. The notebook will be used to access the customer instantiated
repositories to pull down the OL iso and apply it to a USB flash drive. Steps involved in
creating the bootable USB drive will be dependent upon the OS on the notebook (for example,
Rufus can be used for a Windows PC, or "dd" command can be used for a Linux PC).
– Until the ToR switches are configured, there is no connection to the customer
repositories.
• The red server is special in that it has connections to ToR out of band interfaces (not
shown).
• The red server is installed via USB flash drive and local KVM (Keyboard, Video, Mouse).
dependency in the customer repositories and will be delivered by USB to the bootstrap server,
similar to the OL iso.
Package Update
At this point, the server's host OS is installed, hopefully from the latest OL release. If this was done from a released ISO, then this step involves updating to the latest errata. If the previous step already involved grabbing the latest package offering, then this step is already taken care of.
Ansible triggers servers to do a Yum update
Ansible playbooks interact with the servers to instruct them to perform a Yum update.
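A minimal sketch of such a play, for illustration only (this is not the OCCNE playbook itself):
- hosts: occne
  become: true
  tasks:
    - name: Update all packages to the latest errata
      yum:
        name: '*'
        state: latest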
Harden the OS
Ansible instructs the servers to run a script to harden the OS.
Install MySQL
Execute Ansible Playbooks from DB Installer Container
Show simple picture of Ansible touching the DB nodes.
Topology Connection Tables
OA Connections
The Enclosure's Onboard Administrator (OA) will be deployed as a redundant pair, with each
connecting with 1GE copper connection to the respective ToR switches' SFP+ ports.
Topology Connections
This section contains the point to point connections for the switches. The switches in the
solution will follow the naming scheme of "Switch<series number>", i.e. Switch1, Switch2,
etc; where Switch1 is the first switch in the solution, and switch2 is the second. These two form
a redundant pair. The switch datasheet is linked here: https://www.cisco.com/c/en/us/products/
collateral/switches/nexus-9000-series-switches/datasheet-c78-736651.html.
The first switch in the solution will serve to connect each server's first NIC in their respective
NIC pairs to the network. The next switch in the solution will serve to connect each server's
redundant (2nd) NIC in their respective NIC pairs to the network.
Network Redundancy Mechanisms
• 4x1GE LOM: For most servers in the solution, their 4x1GE LOM ports will be unused.
The exception is the first server in the first frame. This server will serve as the
management server for the ToR switches. In this case, the server will use 2 of the LOM
ports to connect to ToR switches' respective out of band ethernet management ports. These
connections will be 1GE RJ45 (CAT 5e or CAT 6).
• 2x10GE FLOM: Every server will be equipped with a 2x10GE Flex LOM card (or
FLOM). These will be for in-band, or application and solution management traffic. These
connections are 10GE fiber (or DAC) and will terminate to the ToR switches' respective
SFP+ ports.
All RMS in the frame will only use the 10GE FLOM connections, except for the "management
server", the first server in the frame, which will have some special connections as listed below:
The rackmount servers (RMS) will be configured with a base quad 1GE NICs that will be
mostly unused (except for switch management connections on the management server). The
RMS will also be equipped with a 10GE Flex LOM (FLOM) card. The FLOM NIC ports will
be connected to the ToR switches. The RMS OS configuration must pair the FLOM NIC ports
in an active/active configuration.
Production Use-Case
The production environment will use LACP mode for active/active NIC pairing. This is so the
NICs can form one logical interface, using a load-balancing algorithm involving a hash of
source/dest MAC or IP pairs over the available links. For this to work, the upstream switches
need to be "clustered" as a single logical switch. LACP mode will not work if the upstream
switches are operating as independent switches (not sharing the same switching fabric). The
current projected switches to be used in the solution are capable of a "clustering" technology,
such as HP's IRF, and Cisco's vPC.
Lab Use-Case
Some lab infrastructure will be able to support the production use-case. However, due to its
dependence on switching technology, and the possibility that much of the lab will not have the
dependent switch capabilities, the NIC pairing strategy will need to support an active/active
mode that does not have dependence on switch clustering technologies (adaptive load
balancing, round-robin), active/standby that does not have dependence on switch clustering
technologies, or a simplex NIC configuration for non-redundant topologies.
To support the LACP mode of NIC teaming, the Enclosure switches need to be clustered together as one logical switch. This requires the switches to be connected together over the Enclosure's internal pathing between the switches. Below is an Enclosure switch interconnect table for reference.
Each blade server will form a 2x10GE Link Aggregation Group (LAG) to the upstream
enclosure switches. Up to 16 blades will be communicating through these enclosure switches.
Without specific projected data rates, the enclosure uplinks to the Top of Rack (ToR) switches
will be sized to a 4x10GE LAG each. Thus, with the switches logically grouped together, an
8x10GE LAG will be formed to the ToR.
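For reference, a hedged sketch of the ToR-side configuration for such an uplink LAG is shown below in Cisco NX-OS style. The port-channel number, member ports, and an already-configured vPC domain are assumptions for this example, not values mandated by the solution.

! Illustrative NX-OS sketch only (port-channel number, VLANs, and member
! ports are assumed). Apply the equivalent configuration on both ToR
! switches so the vPC presents one logical LAG to the enclosure switches.
feature lacp
feature vpc

interface port-channel40
  description Uplink LAG to enclosure switches
  switchport mode trunk
  vpc 40

interface Ethernet1/25-28
  description Members of the 4x10GE uplink LAG
  switchport mode trunk
  channel-group 40 mode active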
For simplicity's sake, the figure below depicts a single black line for each connection. This
black line may represent one or more links between the devices. Consult the interconnect tables
for what the connection actually represents.
were to become unreachable, the static default route to OAM subnet 1 would still be active, as
there is no intelligence at play to converge to a different default route. This is an area in need of
further exploration and development.
The signaling network uplinks are expected to use the OSPF routing protocol with the customer
switches to determine optimal and available route paths to customer signaling router interfaces.
This implementation will require tuning with the customer network for optimal performance.
If the ToR switches are able to cluster together as one logical switch, then there is no need for
an OSPF relationship between the ToR switches. In this case, they would share a common route
table and have two possible routes out to the customer router interfaces. If, however, they do
not cluster together as a single logical unit, then there would be an OSPF relationship between
the two to share route information.
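As a hedged illustration, the sketch below shows what the OSPF portion of a ToR switch configuration might look like in NX-OS style. The process tag, router ID, area, interface, and addressing are all assumptions for this example and would be tuned with the customer network as noted above.

! Illustrative NX-OS sketch only (router-id, interface, and addressing
! are assumed placeholders).
feature ospf

router ospf 1
  router-id 10.75.207.1

interface Ethernet1/49
  description Signaling uplink to customer router
  no switchport
  ip address 10.75.212.1/31
  ip router ospf 1 area 0.0.0.0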
Cloud Native networks are typically flat networks, consisting of a single subnet within a cluster. Kubernetes networking is natively a single network IP space for the cluster. This presents difficulty when deploying in telecom networks that still maintain a strict OAM and Signaling separation in the customer infrastructure. The OC-CNE provides the MetalLB load balancer with BGP integration to the ToR switches as a means to address this problem. Each service endpoint is configured to use a specific pool of addresses configured within MetalLB. There are two address pools to choose from: OAM and Signaling. As each service is configured with an address from a specific address pool, BGP shares a route to that service over the cluster network. At the ToR, some method of advertising these address pools to the OAM and Signaling paths is needed. OAM service endpoints will likely be reached through static route provisioning. Signaling service endpoints will likely be reached by redistributing just one address pool (Signaling) into the OSPF route tables.
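A minimal sketch of such a MetalLB configuration is shown below, using the MetalLB ConfigMap format with BGP peers and two address pools. The peer addresses, ASNs, and pool ranges are placeholders for this example, not values shipped with OC-CNE.

# Illustrative only: MetalLB ConfigMap with one BGP peer per ToR switch and
# separate OAM and Signaling address pools (all values are placeholders).
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 172.16.3.2
      peer-asn: 64501
      my-asn: 64512
    - peer-address: 172.16.3.3
      peer-asn: 64501
      my-asn: 64512
    address-pools:
    - name: oam
      protocol: bgp
      addresses:
      - 10.75.200.0/28
    - name: signaling
      protocol: bgp
      addresses:
      - 10.75.201.0/28
EOF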
OAM and Signaling configurations and test results are provided on the following page:
OAM type common services, such as EFK, Prometheus, and Grafana will have their service
endpoints configured from the OAM address pool. Signaling services, like the 5G NFs, will
have their service endpoints configured from the signaling address pool.
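For example, a service could select its pool with the MetalLB address-pool annotation, as in the hedged sketch below; the service name, selector, and port are illustrative only and not part of the shipped manifests.

# Illustrative only: an OAM-type service requesting its external IP from the
# "oam" MetalLB pool via the address-pool annotation.
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: prometheus-server
  annotations:
    metallb.universe.tf/address-pool: oam
spec:
  type: LoadBalancer
  selector:
    app: prometheus
  ports:
  - port: 9090
    targetPort: 9090
EOF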
Install VMs for MySQL Nodes and Management Server
Virtual Machine
MySQL Cluster is installed on virtual machines, so the number of VMs required on the Storage Hosts and K8 Master Nodes is as shown below. Each K8 Master Node is used to create 1 VM for installing a MySQL Management node, so there are 3 MySQL Management nodes in the MySQL Cluster. On each Storage Host, 4 VMs are created: 2 VMs for Data nodes, 1 VM for a SQL node, and 1 VM for the Management Server (Bastion Host).
No. of MySQL Management Nodes: 3
No. of Data Nodes: 4
No. of SQL Nodes: 2
No. of Bastion Hosts: 2
The table below shows the VMs created on the host servers:
System Details
The IP addresses, host names for the VMs, and network information for creating the VMs are captured in the OCCNE 1.0 Installation PreFlight Checklist.
Prerequisites
1. All the host servers where the VMs are created are captured in OCCNE Inventory File Preparation. The Kubernetes Master Nodes are listed under [kube-master] and the Storage Hosts are listed under [data_store].
2. All hosts should be provisioned using the os-install container, as defined in the installed site hosts.ini file.
3. The Oracle Linux 7.5 ISO (OracleLinux-7.5-x86_64-disc1.iso) is copied to /var/occne on the Bastion Host as specified in the OCCNE Oracle Linux OS Installer procedure. This "/var/occne" path is shared to the other hosts as specified in OCCNE Configuration of the Bastion Host.
4. The host names, IP addresses, and network information assigned to these VMs should be captured in the Pre-flight Checklist.
5. The Bastion Host should be installed on Storage Host (RMS2).
6. The SSH keys configured on the host servers by the os-install container are stored on the Management Node.
7. Storage Host (RMS1) and Storage Host (RMS2) should be configured with the same SSH keys.
8. SSH keys should be configured in these VMs so that the db-install container can install the MySQL Cluster software on them. These SSH keys are configured in the VMs using the kickstart files while creating the VMs (an illustrative sketch follows this list).
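The following is an illustrative sketch, not the shipped kickstart content, of how a PUBLIC_KEY placeholder of this kind is typically consumed in a kickstart %post section. The admusr user matches the authorized_keys path used by the sed commands later in Table B-14, but the exact layout of the OCCNE kickstart files is an assumption here.

# Illustrative kickstart fragment only; the actual OCCNE kickstart files are
# generated as described in Table B-14. The PUBLIC_KEY line is replaced with
# the contents of /home/admusr/.ssh/authorized_keys by the sed commands in
# that procedure.
%post
mkdir -p /home/admusr/.ssh
cat >> /home/admusr/.ssh/authorized_keys << 'KEYS'
PUBLIC_KEY
KEYS
chown -R admusr:admusr /home/admusr/.ssh
chmod 700 /home/admusr/.ssh
chmod 600 /home/admusr/.ssh/authorized_keys
%end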
References
1. https://linux.die.net/man/1/virt-install
2. https://linuxconfig.org/how-to-create-and-manage-kvm-virtual-machines-from-cli
3. https://www.cyberciti.biz/faq/kvm-install-centos-redhat-using-kickstart-ks-cfg/
4. https://opensource.com/business/16/9/linux-users-guide-lvm
Note:
The steps below should be performed to create the bridge interfaces (teambr0 and vlan5-br) on each Storage Host, and the bridge interface (teambr0) on each Kubernetes Master Node, one host at a time.
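For orientation, the hedged sketch below shows the general shape of the teambr0 bridge creation; the authoritative commands and the persistent ifcfg files are in Table B-14, and moving the host IP address from team0 to the bridge is site-specific and not shown here.

# Hypothetical illustration only; follow Table B-14 for the exact commands.
# Creates the teambr0 bridge and enslaves the team0 interface so that host
# and VM traffic share the teamed uplink.
$ brctl addbr teambr0
$ ip link set teambr0 up
$ brctl addif teambr0 team0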
Table B-14 Procedure to install VMs for MySQL Nodes and Management Server
$ ip link add link team0 name team0.5 type vlan id 5
$ ip link set team0.5 up
$ brctl addbr vlan5-br
$ ip link set vlan5-br up
$ brctl addif vlan5-br team0.5
4. Create the Signaling bridge (vlan5-br) interface configuration files on the Storage Hosts to keep these interfaces persistent over reboot. Update the below variables in the ifcfg config files using the sed commands below:
a. PHY_DEV
b. VLAN_ID
c. BRIDGE_NAME
==============================================================
Create the ifcfg-team0.5 and ifcfg-vlan5-br files in the
/etc/sysconfig/network-scripts directory to keep these
interfaces up over reboot.
==============================================================
[root@db-2 network-scripts]# vi /tmp/ifcfg-team0.VLAN_ID
VLAN=yes
TYPE=Vlan
PHYSDEV={PHY_DEV}
VLAN_ID={VLAN_ID}
REORDER_HDR=yes
GVRP=no
MVRP=no
PROXY_METHOD=none
BROWSER_ONLY=no
DEFROUTE=no
IPV4_FAILURE_FATAL=no
DEVICE={PHY_DEV}.{VLAN_ID}
NAME={PHY_DEV}.{VLAN_ID}
ONBOOT=yes
BRIDGE={BRIDGE_NAME}
NM_CONTROLLED=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME={BRIDGE_NAME}
DEVICE={BRIDGE_NAME}
ONBOOT=yes
NM_CONTROLLED=no
$ cp /tmp/ifcfg-BRIDGE_NAME /etc/sysconfig/network-scripts/ifcfg-vlan5-br
$ chmod 644 /etc/sysconfig/network-scripts/ifcfg-vlan5-br
$ sed -i 's/{BRIDGE_NAME}/vlan5-br/g' /etc/sysconfig/network-scripts/ifcfg-vlan5-br
$ cp /tmp/ifcfg-team0.VLAN_ID /etc/sysconfig/network-scripts/ifcfg-team0.5
$ chmod 644 /etc/sysconfig/network-scripts/ifcfg-team0.5
$ sed -i 's/{BRIDGE_NAME}/vlan5-br/g' /etc/sysconfig/network-scripts/ifcfg-team0.5
$ sed -i 's/{PHY_DEV}/team0/g' /etc/sysconfig/network-scripts/ifcfg-team0.5
$ sed -i 's/{VLAN_ID}/5/g' /etc/sysconfig/network-scripts/ifcfg-team0.5
5. Reboot the host
$ reboot
Perform the above steps on all the K8 Master Nodes and Storage Hosts where the MySQL VMs are created.
Perform the same steps for /dev/sdd to set the partition type to Linux LVM (8e).
$ fdisk /dev/sdd
Welcome to fdisk (util-linux 2.23.2).
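For reference, a hedged sketch of the fdisk key sequence and the subsequent LVM layering is shown below. The partition layout, the volume group membership (/dev/sdc1 and /dev/sdd1), and the striping parameters are assumptions for this example; the strip_vga/strip_lva names match the /dev/mapper paths used by the virt-install commands later in this table.

# Illustrative only: create one partition spanning the disk and mark it as
# Linux LVM (8e); the same sequence applies to each data disk.
$ fdisk /dev/sdd
  n      # new partition
  p      # primary
  1      # partition number 1, accept the default first/last sectors
  t      # change the partition type
  8e     # Linux LVM
  w      # write the partition table and exit

# Assumed LVM layering (volume group membership and striping are examples):
$ pvcreate /dev/sdc1 /dev/sdd1
$ vgcreate strip_vga /dev/sdc1 /dev/sdd1
$ lvcreate -n strip_lva -i 2 -I 256 -l 100%FREE strip_vga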
$ sed -i 's/HTTP_PROXY/ACTUAL_HTTP_PROXY/g' /tmp/DB_MGMNODE_1.ks
$ sed -e '/PUBLIC_KEY/{' -e 'r /home/admusr/.ssh/authorized_keys' -e 'd' -e '}' -i /tmp/DB_MGMNODE_1.ks
Retrieving file vmlinuz...
Retrieving file initrd.img...
Allocating 'ndbmgmnodea1.qcow2'
$ sed -i 's/VLAN3_GATEWAYIP/ACTUAL_GATEWAY_IP/g' /tmp/DATANODEVM_1.ks
$ sed -i 's/VLAN3_IPADDRESS/ACTUAL_IPADDRESS/g' /tmp/DATANODEVM_1.ks
$ sed -i 's/NAMESERVERIPS/ACTUAL_NAMESERVERIPS/g' /tmp/DATANODEVM_1.ks
$ sed -i 's/VLAN3_NETMASKIP/ACTUAL_NETMASKIP/g' /tmp/DATANODEVM_1.ks
$ sed -i 's/NODEHOSTNAME/ACTUAL_NODEHOSTNAME/g' /tmp/DATANODEVM_1.ks
$ sed -i 's/NTPSERVERIPS/ACTUAL_NTPSERVERIPS/g' /tmp/DATANODEVM_1.ks
$ sed -i 's/HTTP_PROXY/ACTUAL_HTTP_PROXY/g' /tmp/DATANODEVM_1.ks
$ sed -e '/PUBLIC_KEY/{' -e 'r /home/admusr/.ssh/authorized_keys' -e 'd' -e '}' -i /tmp/DATANODEVM_1.ks
Similarly, generate the DATANODEVM_2.ks, DATANODEVM_3.ks, and DATANODEVM_4.ks kickstart files, which are used for creating the MySQL Data node VMs.
3. After updating the DATANODEVM_1.ks kickstart file, use the below command to start the creation of the MySQL Data node VM. This command uses the "/tmp/DATANODEVM_1.ks" kickstart file for creating the VM and configuring the MySQL Data node VM. Update <DATANODEVM_NAME> as specified in the hosts.ini inventory file (created using procedure: OCCNE Inventory File Preparation) and <DATANODEVM_DESC> in the below command.
For creating the ndbdatanodea1 Data Node VM in DB Storage Node 1:
$ virt-install --name <DATANODEVM_NAME> --memory 51200 --memorybacking hugepages=yes --vcpus 10 \
      --metadata description=<DATANODEVM_DESC> --autostart \
      --location /mnt/nfsoccne/OracleLinux-7.5-x86_64-disc1.iso \
      --initrd-inject=/tmp/DATANODEVM_1.ks --os-variant=ol7.5 \
      --extra-args="ks=file:/DATANODEVM_1.ks console=tty0 console=ttyS0,115200n8" \
      --disk path=/var/lib/libvirt/images/<DATANODEVM_NAME>.qcow2,size=100 \
      --disk path=/dev/mapper/strip_vga-strip_lva \
      --network bridge=teambr0 --nographics
path=/dev/mapper/strip_vgb-strip_lvb \
      --network bridge=teambr0 --nographics
4. After the installation is complete, a login prompt is displayed.
5. To exit from the virsh console after logging out of the VM, press the CTRL+'5' keys.
$ exit
Press the CTRL+'5' keys to exit from the virsh console.
Repeat these steps to create all the MySQL Data node VMs in the Storage Hosts.
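Optionally, the state of the created VMs can be checked with standard virsh commands, as in the sketch below; the VM name is whatever was supplied to --name.

# Optional verification (standard virsh commands):
$ virsh list --all                    # the new VM should show as "running"
$ virsh console <DATANODEVM_NAME>     # re-attach to the VM console
# Press the CTRL+'5' keys again to leave the console.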
$ ifconfig vlan5-br
vlan5-br: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::645e:5cff:febf:fbd6  prefixlen 64  scopeid 0x20<link>
        ether 48:df:37:7a:40:48  txqueuelen 1000  (Ethernet)
        RX packets 150600  bytes 7522366 (7.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7  bytes 626 (626.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
2. Create a kickstart file for creating the MySQL SQL Node VM
a. Change to the root user
$ sudo su
--extra-args "ks=file:/DB_SQLNODE_1.ks console=tty0 console=ttyS0,115200" \
      --disk path=/var/lib/libvirt/images/<NDBSQL_NODE_NAME>.qcow2,size=600 \
      --network bridge=teambr0 --network bridge=vlan5-br --graphics none
4. After the installation is complete, a login prompt is displayed.
5. To exit from the virsh console after logging out of the VM, press the CTRL+'5' keys.
$ exit
Press the CTRL+'5' keys to exit from the virsh console.
Repeat these steps to create the MySQL SQL node VMs in the Storage Hosts.
10. Unmount Linux ISO: After all the MySQL node VMs are created on all the Kubernetes Master Nodes and Storage Hosts, unmount "/mnt/nfsoccne" and delete this directory.
1. Log in to the host.
2. Unmount "/mnt/nfsoccne" on the host
$ umount /mnt/nfsoccne
3. Delete the directory
$ rm -rf /mnt/nfsoccne
Perform the above steps on all the K8 Master Nodes and Storage Hosts.