Oracle Private Cloud Appliance Installation Guide for Release 3.0.2
F70777-02
January 2023
Contents

2 Site Requirements
    Space Requirements
    Receiving and Unpacking Requirements
    Maintenance Access Requirements
    Flooring Requirements
    Electrical Power Requirements
    Facility Power Requirements
    Circuit Breaker Requirements
    Grounding Guidelines
    Temperature and Humidity Requirements
    Ventilation and Cooling Requirements

3 Network Requirements
    Network Connection Requirements
    Network Overview
    Device Management Network
    Data Network
    Uplinks
    Administration Network
    Reserved Network Resources
    Network Configuration Requirements
    DNS Configuration for Oracle Private Cloud Appliance
    Zone Delegation (preferred)
    Manual Configuration
    Data Center Switch Configuration Notes
    Default System IP Addresses

6 Cabling Reference
    Management Switch Ethernet Connections
    Spine and Leaf Switch Data Network Connections
    Data and Spine Switch Interconnects
    Spine Switch to Data Switch Connections
    Data Switch to Data Switch Connections
    Spine Switch to Spine Switch Connections

7 Site Checklists
    System Components Checklist
    Data Center Room Checklist
    Data Center Environmental Checklist
    Access Route Checklist
    Facility Power Checklist
    Safety Checklist
    Logistics Checklist
    Network Specification Checklist
    Initial Installation Checklist
Preface
This publication is part of the customer documentation set for Oracle Private Cloud
Appliance Release 3.0.2. Note that the documentation follows the release numbering
scheme of the appliance software, not the hardware on which it is installed. All Oracle
Private Cloud Appliance product documentation is available at https://docs.oracle.com/en/engineered-systems/private-cloud-appliance/index.html.
Oracle Private Cloud Appliance Release 3.x is a flexible general purpose Infrastructure
as a Service solution, engineered for optimal performance and compatibility with
Oracle Cloud Infrastructure. It allows customers to consume the core cloud services
from the safety of their own network, behind their own firewall.
Audience
This documentation is intended for owners, administrators and operators of Oracle
Private Cloud Appliance. It provides architectural and technical background
information about the engineered system components and services, as well as
instructions for installation, administration, monitoring and usage.
Oracle Private Cloud Appliance has two strictly separated operating areas, known as
enclaves. The Compute Enclave offers a practically identical experience to Oracle
Cloud Infrastructure: It allows users to build, configure and manage cloud workloads
using compute instances and their associated cloud resources. The Service Enclave is
where privileged administrators configure and manage the appliance infrastructure that
provides the foundation for the cloud environment. The target audiences of these
enclaves are distinct groups of users and administrators. Each enclave also provides
its own separate interfaces.
It is assumed that readers have experience with system administration, network and
storage configuration, and are familiar with virtualization technologies. Depending on
the types of workloads deployed on the system, it is advisable to have a general
understanding of container orchestration, and UNIX and Microsoft Windows operating
systems.
Feedback
Provide feedback about this documentation at https://www.oracle.com/goto/docfeedback.
Conventions
The following text conventions are used in this document:
boldface: Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic: Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospace: Monospace type indicates commands within a paragraph, code in examples, text that appears on the screen, or text that you enter.
$ prompt: The dollar sign ($) prompt indicates a command run as a non-root user.
# prompt: The pound sign (#) prompt indicates a command run as the root user.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility
Program website at https://www.oracle.com/corporate/accessibility/.
For information about the accessibility of the Oracle Help Center, see the Oracle Accessibility
Conformance Report at https://www.oracle.com/corporate/accessibility/templates/t2-11535.html.
1 Oracle Private Cloud Appliance Installation Overview
This document describes the pre-installation preparation required at your site and the installation of the Oracle Private Cloud Appliance system. This chapter provides an overview of the installation process.
For a comprehensive overview of the Oracle Private Cloud Appliance system, see the Oracle
Private Cloud Appliance Concepts Guide.
The following table lists the procedures you need to complete to install Oracle Private Cloud
Appliance at your site.
2 Site Requirements
This chapter describes the site requirements for the Oracle Private Cloud Appliance.
Note:
For site checklists, refer to Site Checklists.
Space Requirements
Oracle Private Cloud Appliance racks have the following space requirements:
• Height: 42U - 2000 mm (78.74 inches)
• Width: 600 mm with side panels (23.62 inches)
• Depth (front door handle to rear door handle): 1197 mm (47.12 inches)
• Depth (doors removed): 1112 mm (43.78 inches)
• Weight (base rack, fully populated): 1000 kg (2204 lbs)
The minimum ceiling height for the cabinet is 2914 mm (114.72 inches), measured from the
true floor or raised floor, whichever is higher. This includes an additional 914 mm (36 inches)
of space required above the rack height for maintenance access. The space surrounding the
cabinet must not restrict the movement of cool air between the air conditioner and the front of
the systems within the cabinet, or the movement of hot air coming out of the rear of the
cabinet.
Use a conditioned space when removing the packaging material to reduce particles before entering the data center. The entire access route to the installation site should be free of raised-pattern flooring that can cause vibration.
Allow enough space for unpacking the system from its shipping cartons. Ensure that
there is enough clearance and clear pathways for moving the Oracle Private Cloud
Appliance from the unpacking location to the installation location. The following table
lists the access route requirements for the Oracle Private Cloud Appliance.
Flooring Requirements
Oracle recommends that the Oracle Private Cloud Appliance be installed on raised
flooring. The site floor and the raised flooring must be able to support the total weight
of the system as specified in Space Requirements.
The following table lists the floor load requirements.
Maximum allowable weight of installed rack equipment: 952.54 kg (2100 lbs)
Maximum allowable weight of installed power distribution units: 52.16 kg (115 lbs)
Maximum dynamic load (maximum allowable weight of installed equipment including power distribution units): 1004.71 kg (2215 lbs)
Electrical Power Requirements
Note:
Circuit breakers are supplied by the customer. One circuit breaker is required
for each power cord.
Grounding Guidelines
The cabinets for the Oracle Private Cloud Appliance are shipped with grounding-type power
cords (three-wire). Always connect the cords to grounded power outlets. Because different
grounding methods are used, depending on location, check the grounding type, and refer to
documentation, such as IEC documents, for the correct grounding method. Ensure that the
facility administrator or qualified electrical engineer verifies the grounding method for the
building, and performs the grounding work.
Temperature and Humidity Requirements
Set conditions to the optimal temperature and humidity ranges to minimize the chance of downtime due to component failure. Operating an Oracle Private Cloud Appliance for extended periods at or near the operating range limits, or installing it in an environment where it remains at or near non-operating range limits, could significantly increase hardware component failure rates.
The ambient temperature range of 21° to 23° Celsius (69.8° to 73.4° Fahrenheit) is optimal for server reliability and operator comfort. Most computer equipment can operate in a wide temperature range, but near 22° Celsius (71.6° Fahrenheit) is desirable because it is easier to maintain safe humidity levels. Operating in this temperature range provides a safety buffer in the event that the air conditioning system goes down for a period of time.
The ambient relative humidity range of 45 to 50 percent is suitable for safe data
processing operations. Most computer equipment can operate in a wide range (20 to
80 percent), but the range of 45 to 50 percent is recommended for the following
reasons:
• Optimal range helps protect computer systems from corrosion problems
associated with high humidity levels.
• Optimal range provides the greatest operating time buffer in the event of air
conditioner control failure.
• This range helps to avoid failures or temporary malfunctions caused by intermittent
interference from static discharges that may occur when relative humidity is too
low.
Note:
Electrostatic discharge (ESD) is easily generated, and hard to dissipate in
areas of low relative humidity, such as below 35 percent. ESD becomes
critical when humidity drops below 30 percent. It is not difficult to maintain
humidity in a data center because of the high-efficiency vapor barrier and low
rate of air changes normally present.
Ventilation and Cooling Requirements
The Oracle Private Cloud Appliance has been designed to function while installed in a natural convection air flow. Follow these requirements to meet the environmental specification:
• Ensure that there is adequate airflow through the system.
• Ensure that the system has front-to-back cooling. The air intake is at the front of the
system, and the air outlet is at the rear of the system.
• Allow a minimum clearance of 1219.2 mm (48 inches) at the front of the system, and 914
mm (36 inches) at the rear of the system for ventilation.
Use perforated tiles, approximately 400 CFM/tile, in front of the rack for cold air intake. The
tiles can be arranged in any order in front of the rack, as long as cold air from the tiles can
flow into the rack. Inadequate cold airflow could result in a higher intake temperature in the
system due to exhaust air recirculation. The following is the recommended number of floor
tiles:
• Four floor tiles for an Oracle Private Cloud Appliance with up to 20 compute nodes (fully
loaded)
• Three floor tiles for an Oracle Private Cloud Appliance with up to 16 compute nodes (half
loaded)
• One floor tile for an Oracle Private Cloud Appliance with 8 compute nodes (quarter
loaded)
Figure 2-1 shows a typical installation of the floor tiles in a data center for Oracle Private
Cloud Appliance with more than 16 compute nodes.
Figure 2-1 Typical Data Center Configuration for Perforated Floor Tiles
3 Network Requirements
Oracle Private Cloud Appliance network architecture relies on physical high-speed Ethernet connectivity.
The networking infrastructure in Oracle Private Cloud Appliance is integral to the appliance
and shall not be altered. The networking does not integrate into any data center management
or provisioning frameworks such as Cisco ACI, Network Director, or the like, with the
exception of the ability to query the switches using SNMP in read-only mode. However,
Oracle Private Cloud Appliance can communicate with the Cisco ACI fabric in your data center using the L3Out functionality (static routes or eBGP) provided by Cisco ACI. For more information about this Cisco feature, see the Cisco ACI Fabric L3Out Guide.
Caution:
No changes to the networking switches in Oracle Private Cloud Appliance are
supported unless directed to do so by a KM note or Oracle Support.
Network Overview
For overview information regarding network infrastructure, see the following sections in the
Hardware Overview chapter of the Oracle Private Cloud Appliance Concepts Guide.
Data Network
The appliance data connectivity is built on redundant 100Gbit switches in a two-layer design similar to a leaf-spine topology. An Oracle Private Cloud Appliance rack contains two leaf and
two spine switches. The leaf switches interconnect the rack hardware components, while the
spine switches form the backbone of the network and provide a path for external traffic.
Uplinks
Uplinks are the connections between the Oracle Private Cloud Appliance and the customer
data center. For external connectivity, 5 ports are reserved on each spine switch. Four ports
are available to establish the uplinks between the appliance and the data center
network; one port is reserved to optionally segregate the administration network from
the data traffic. Use this section to plan your network topology and logical connection
options.
Administration Network
You can optionally segregate administrative appliance access from the data traffic.
Manual Configuration
Manually add DNS records for all labels or host names required by the appliance.
The examples assume that the data center DNS domain is example.com, that the appliance is named mypca, and that the management node cluster virtual IP address is 192.0.2.102.
Note:
For object storage you must point the DNS label to the Object Storage Public IP.
This is the public IP address you assign specifically for this purpose when setting up
the data center public IP ranges during Initial Setup. Refer to the Public IPs step
near the end of the section "Complete the Initial Setup".
Appliance Infrastructure Service / Appliance DNS Label and Data Center DNS Records

Admin service:
admin.mypca.example.com
admin A 192.0.2.102

API:
api.mypca.example.com
api A 192.0.2.102

Grafana:
grafana.mypca.example.com
grafana A 192.0.2.102

Prometheus:
prometheus.mypca.example.com
prometheus A 192.0.2.102

Prometheus-gw:
prometheus-gw.mypca.example.com
prometheus-gw A 192.0.2.102

Note:
For the object storage record, use the Object Storage Public IP from the Appliance Initial Setup.
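For reference, the records above could be expressed in a BIND-style zone file for the example.com domain. The following is a sketch using the example names and virtual IP address from this section; the zone file layout is an assumption, so adapt it to your DNS server:

; Illustrative BIND zone file entries for the appliance named mypca
admin.mypca          IN A 192.0.2.102
api.mypca            IN A 192.0.2.102
grafana.mypca        IN A 192.0.2.102
prometheus.mypca     IN A 192.0.2.102
prometheus-gw.mypca  IN A 192.0.2.102

After the records are in place, you can verify resolution from a workstation, for example with nslookup admin.mypca.example.com.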
Default System IP Addresses
Caution:
For hardware management, Oracle Private Cloud Appliance uses a network
internal to the system. It is not recommended to connect the management
ports or the internal administration network switches to the data center
network infrastructure.
The table in this section lists the default management IP addresses assigned to
servers and other hardware components in an Oracle Private Cloud Appliance base
configuration rack.
Once assigned to a host, these IP addresses are stored and persisted in the DHCP database.
4 Installing the Oracle Private Cloud Appliance Rack
This chapter explains how to prepare for the installation of your Oracle Private Cloud
Appliance and how to install the system at your site.
• Do not install the system near a photocopy machine, air conditioner, welding
machine, or any other equipment that generates loud, electronic noises.
• Avoid static electricity at the installation location. Static electricity transferred to the
system can cause malfunctions. Static electricity is often generated on carpets.
• Confirm that the supply voltage and frequency match the electrical ratings
indicated on your Oracle Private Cloud Appliance.
• Do not insert anything into any Oracle Private Cloud Appliance opening, unless
doing so is part of a documented procedure. The system contains high-voltage
parts. If a metal object or other electrically conductive object enters an opening in
the system, then it could cause a short circuit. This could result in personal injury,
fire, electric shock, and equipment damage.
See also:
• Important Safety Information for Sun Hardware Systems (816-7190) included with
the rack
• Oracle Rack Cabinet 1242 Safety and Compliance Guide
• Oracle Rack Cabinet 1242 Power Distribution Units User's Guide
• Oracle Engineered System Safety and Compliance Guide (non-Nordic)
• Oracle Engineered System Safety and Compliance Guide (Nordic)
Unpack the Oracle Private Cloud Appliance
Caution:
Carefully unpack the rack from the packaging and shipping pallet. Rocking or
tilting the rack can cause it to fall over and cause serious injury or death. You
should always use professional movers when unpacking and installing this
rack.
Note:
After unpacking the rack from the packaging, save the shipping brackets
used to secure the rack to the shipping pallet. You can use these shipping
brackets to secure the rack permanently to the installation site floor. Do not
dispose of these brackets, because you cannot order replacement brackets.
Caution:
Shipping brackets are not for use for bracing or anchoring the rack during
seismic events.
Figure 4-1 Unpack the Oracle Private Cloud Appliance From the Packaging
Install Oracle Private Cloud Appliance in Its Allocated Space
Caution:
Never attempt to move an Oracle Private Cloud Appliance by pushing on the
rack side panels. Pushing on the rack side panels can tip over the rack. This
action can cause serious personal injury or death, as well as damage to the
equipment.
The front casters of the rack are fixed; they do not pivot. When moving your Oracle
Private Cloud Appliance to the installation site, you must steer the unit using the rear
casters. You can safely maneuver the system by carefully pushing it from behind. See the
figure below.
It is preferred that at least three people push and guide the rack: one person in front and
two persons in back to help guide the rack and keep people out of the path of the moving
rack. When transporting configured racks from one location to another, take care to move
them slowly, 0.65 meters per second (2.13 feet per second) or slower.
Carefully examine the transportation path. Avoid obstacles such as doorways or elevator
thresholds that can cause abrupt stops or shocks. Go around obstacles by using ramps
or lifts to enable smooth transport.
Caution:
Never tip or rock the Oracle Private Cloud Appliance because the rack can fall
over.
Figure 4-2 Carefully Push the Oracle Private Cloud Appliance From the Back of the Rack
4. When the rack is at the installation site, verify that no components or connections
have become dislodged or disconnected during transport. If necessary, re-attach
components and cables properly.
Caution:
Shipping brackets are not for use for bracing or anchoring the rack during
seismic events.
To secure the rack to the installation floor using the shipping brackets, you must drill the appropriate holes in the floor, re-attach the shipping brackets to the rack, position the rack over the mounting holes, and attach the shipping brackets to the floor firmly with bolts and washers that suit the specific environment. Oracle does not provide mounting bolts and washers for the shipping brackets, because different floors require different bolt types and strengths.
(Optional) If you plan to route data or power distribution unit (PDU) power cords down
through the bottom of the rack, you will need to cut a hole in the installation site floor. Cut a
rectangular hole below the rear portion of the rack, between the two rear casters and behind
the RETMA (Radio Electronics Television Manufacturers Association) rails.
Caution:
Do not create a hole where the rack casters or leveling feet brackets will be placed.
When the rack is in position, the leveling feet must be deployed. The rack contains four
leveling feet that can be lowered to share the load with the casters. This increases the
footprint of the rack, which improves stability and helps prevent rack movement. The leveling
feet must be used even when the rack is permanently secured to the floor. To adjust the
leveling feet, do the following:
1. Locate the four leveling feet at the bottom four corners of the rack.
2. Using a 6mm hex wrench, lower the leveling feet to the floor.
When lowered correctly, the four leveling feet share the load with the casters to increase
footprint, improve stability, and help support the full weight of the Oracle Private Cloud
Appliance.
Caution:
When the rack needs to be moved to a different location, including repacking, verify
that the leveling feet have been retracted before moving the rack. Otherwise the
leveling feet may become bent, or the rack could tip over.
Connect the Appliance to Your Network
Caution:
The PDU power input lead cords and the ground cable must reference a
common earth ground. If they do not, then a difference in ground potential
can be introduced. If you are unsure of your facility's PDU receptacle
grounding, then do not install a ground cable until you confirm that there is a
proper PDU receptacle grounding. If a difference in ground potential is
apparent, then you must take corrective action.
Note:
A grounding cable is not shipped with the Oracle Private Cloud Appliance.
1. Ensure that the installation site has a properly grounded power source in the data
center. The facility PDU must have earth ground.
2. Ensure that all grounding points, such as raised floors and power receptacles,
reference the facility ground.
3. Ensure that direct, metal-to-metal contact is made for this installation. During
manufacturing, the ground cable attachment area might have been painted or
coated.
4. Attach the ground cable to one of the attachment points located at the bottom rear
of the system frame. See Figure 4-6.
The attachment point is an adjustable bolt that is inside the rear of the system
cabinet on the right side.
For external connectivity, 5 ports are reserved on each spine switch. Four ports are available
to establish the uplinks between the appliance and the data center network; one port is
reserved to optionally segregate the administration network from the data traffic.
On each spine switch, ports 1-4 can be used for uplinks to the data center network. For
speeds of 10Gbps or 25Gbps, the spine switch port must be split using a 4-way splitter or
breakout cable. For higher speeds of 40Gbps or 100Gbps, each switch port uses a single
direct cable connection. For overview information, see "Uplinks" in the Network Infrastructure
section of the Hardware Overview.
At a minimum, you must connect 1 port on each spine switch, which provides a single high-bandwidth, high-availability network for the administration and data traffic.
Note:
The administration network and the data network can be configured at different
speeds. For example, you can configure your administration network to operate at
10Gbit, and your data network to operate at 40Gbit.
1. Connect 1-4 high-speed Ethernet ports on each spine switch to your data center public
Ethernet network.
Use the following table to determine the correct configuration for your environment.
Caution:
It is critical that both spine switches have a connection to a pair of next-level
data center switches. This configuration provides redundancy and load splitting
at the level of the spine switches and the data center switches. The cabling
pattern plays a key role in the continuation of service during failover scenarios.
Power On for the First Time
Note:
Oracle Private Cloud Appliance is preconfigured by Oracle as a self-contained system. You should not move any equipment or add any unsupported hardware to the system.
4. Check that all cable connections are secure and firmly in place as follows:
a. Check the power cables. Ensure that the correct connectors have been
supplied for the data center facility power source.
b. Check the network data cables.
5. Check the site location tile arrangement for cable access and airflow.
6. Check the data center airflow that leads in to the front of the system.
For more information, see Ventilation and Cooling Requirements.
Figure 4-7 Power Cord Routing From the Bottom of the Rack
Figure 4-8 Power Cord Routing From the Top of the Rack
6. Plug the power distribution unit (PDU) power cord connectors into the facility
receptacles. Ensure the breaker switches are in the OFF position before
connecting the power cables.
Note:
You can connect to your Oracle Private Cloud Appliance using a network
connection to monitor the system power-on procedure. For instructions, see
Connect a Workstation to the Appliance.
1. Make sure that the power switches located on the rear left and right side power supplies
of the Oracle Storage Drive Enclosure DE3-24C and Oracle Storage Drive Enclosure
DE3-24P are in the ON (|) position.
Oracle Storage Drive Enclosure DE3-24P and Oracle Storage Drive Enclosure DE3-24C Power Switches
2. Switch on the power distribution unit (PDU) circuit breakers located on the rear of PDU A
and B inside the Oracle Private Cloud Appliance.
The circuit breakers are on the rear of the system cabinet as shown below. Press the ON
(|) side of the toggle switch.
After power is applied, the LEDs on all of the compute nodes and storage server heads will start to blink after approximately two minutes. From the rear of the rack, you can see the green LEDs on the power supply units (PSUs) on the compute nodes turn on instantly after power is applied. In addition, from the rear of the rack, you can see the display on the power distribution units (PDUs) illuminate once power is available.
Note:
Allow 20 minutes for the storage controllers to come online before
powering on each management node.
3. Press the Power button located on the front of each management node.
The first management node is located in rack unit 5 (U5). The second
management node is located in rack unit 6 (U6), and the third management node
is located in rack unit 7 (U7).
Management nodes take approximately five to ten minutes to power on completely. Once complete, the Power/OK LED illuminates and remains a steady green.
The management nodes will verify all components within the system. The management
nodes ensure that the correct networking switches and storage devices are installed in
the system, and search for compute nodes to power on and add to the compute fabric.
Depending on your system configuration, powering on the compute nodes and bringing
them to the ready-to-provision state should take approximately 10 minutes per compute
node. Do not power cycle the management nodes during the discovery period. Proceed
to configuring the appliance.
Caution:
Once powered on, do not power down the management nodes until you have
completed the initial configuration process described in Complete the Initial
Setup.
Note:
If you use the compute instance high availability feature, disable this feature before
a system shutdown:
PCA-ADMIN> disableVmHighAvailability
Remember to enable this feature once the system and compute nodes are restarted.
PCA-ADMIN> enableVmHighAvailability
Emergency Procedures for Oracle Private Cloud Appliance
4. Power on the compute nodes from Oracle ILOM, using the start /SYS command. Wait until the compute nodes are in the ready state and the Kubernetes microservice pods are running on all compute nodes before you proceed.
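For example, from the ILOM command line of a compute node (a sketch; the ILOM address is a placeholder for your environment):

$ ssh root@<compute-node-ilom-address>
Password:
-> start /SYS
Are you sure you want to start /SYS (y/n)? y
Starting /SYS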
5. If you use the VM High Availability feature, you must re-enable this feature when
restarting your Oracle Private Cloud Appliance.
PCA-ADMIN> enableVmHighAvailability
For GUI instructions, see the Managing the Lifecycle of an Instance section in Compute
Instance Deployment.
5 Configuring Oracle Private Cloud Appliance
This chapter explains how to complete the initial configuration of your Oracle Private Cloud
Appliance.
First, gather the information you need for the configuration process by completing the Initial
Installation Checklist.
Before you connect to the Oracle Private Cloud Appliance for the first time, ensure that you
have made the necessary preparations for external network connections. Refer to Network
Requirements.
Note:
You access the initial configuration wizard through the Service Web UI using a web
browser. For support information, please refer to the Oracle software web browser
support policy.
1. Connect a workstation with a web browser directly to the management network using an Ethernet cable connected to port 2 on the management switch.
2. Configure the wired network connection of the workstation to use the static IP address 100.96.3.254/23. You can also add 100.96.1.254/23 as another IP address if needed. A command sketch follows this list.
3. Using the web browser on the workstation, connect to the Oracle Private Cloud Appliance initial configuration interface on the active management node at https://100.96.2.32:30099.
100.96.2.32 is the predefined virtual IP address of the management node cluster for configuring Oracle Private Cloud Appliance.
4. Configure the appliance using the UI or the Service CLI.
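The following is a sketch of step 2 on a Linux workstation, assuming the wired interface is named eth0 (a placeholder); these commands do not persist across reboots:

# ip addr add 100.96.3.254/23 dev eth0
# ip addr add 100.96.1.254/23 dev eth0
# ip link set eth0 up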
Complete the Initial Setup
Complete the Initial Installation Checklist, if you have not already done so, and ensure the web browser on your workstation is connected to the Oracle Private Cloud Appliance initial configuration interface on the active management node at https://100.96.2.32:30099.
Caution:
Do not power down the management nodes during the initial configuration
process.
1. From the Private Cloud Appliance First Boot page, create the primary
administrative account for your appliance, which is used for initial configuration
and will persist after the first boot process. Additional accounts can be added later.
a. Enter an Administrative Username.
b. Enter and confirm the Administrative Password.
Note:
Passwords must contain a minimum of 12 characters with at least
one of each: uppercase character, lowercase character, digit, and
any punctuation character (expect for double quote ('"') characters,
which are not allowed).
Important:
At the Service Enclave Sign In page, do not sign in and do not refresh your browser.
2. Open a terminal to access the Service CLI and unlock the system.
a. Log into one of the management nodes using the primary administrative
account details you just created.
Note:
Management nodes are named pcamn01, pcamn02 and pcamn03 by default. You can change these names later in the configuration process.
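For example, a sketch of the login using the predefined virtual IP address and Service CLI port shown later in this chapter (the account name is a placeholder):

$ ssh <admin-username>@100.96.2.32 -p 30006
Password:
PCA-ADMIN>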
Note:
You might need to accept the self-signed SSL certificate again before signing
in.
4. Provide the following appliance details. Required entries are marked with an asterisk.
• System Name*
• Domain*
• Rack Name
• Description
5. Confirm the parameters you just entered are correct. Once System Name
and Domain are set, they cannot be changed. Click Save Changes when you
are ready to proceed.
6. Refresh your web browser and sign in to the system with the primary
administrative account.
Note:
You might need to accept the self-signed SSL certificate again before
signing in.
9. Enter a shared virtual IP and associated host name for the management node
cluster; add an IP address and host name for each of the three individual
management nodes; and then click Next.
10. Enter the following data center uplink information and then click Next.
11. Enter the NTP configuration details and then click Next.
12. If you elected to segregate administrative appliance access from the data traffic,
configure the administration network by entering the following information and then click
Next.
• Enable Admin Networking
• Admin Management VIP, IPs 1, 2, and 3
• Admin Management VIP hostname, hostnames 1, 2 and 3
• At least one Admin DNS server
• Admin Port Speed, Port Count, and HSRP Group
• Admin VLAN, MTU, Port FEC, and Gateway IP
• Admin Netmask and CIDR
• Admin IP Address for Spine Switch 1 and 2, and a shared Virtual IP
13. Enter up to three DNS servers in the respective fields and then click Next.
14. Enter the data center IP addresses that the appliance can assign to resources as public
IPs.
• Public IP list of CIDRs in a comma-separated list
• Object Storage Public IP (must be outside the public IP range)
15. Use the Previous/Next buttons to recheck that the information you entered is
correct and then click Save Changes.
Your network configuration information does not persist until you commit your
changes in the following step. If you need to change any parameters after testing
begins, you must re-enter all information.
Caution:
Once you click Save Changes, network configuration and testing begins and can take up to 15 minutes. Do not close the browser window during this time.
Caution:
Once you click Commit Changes, system initialization begins and can
take up to 15 minutes. Do not close the browser window during this
time.
18. To continue configuration, connect to the Service Web UI at the new virtual IP
address of the management node cluster: https://<virtual_ip>:30099.
Note:
You might need to accept the self-signed SSL certificate again before
signing in.
• From the Dashboard, click Appliance to view the system details and click Network Environment to view the network configuration.
• Alternatively, you can log in to the Service CLI as an administrator and run the
following commands to confirm your entries.
# ssh 100.96.2.32 -l admin -p 30006
Password:
PCA-ADMIN> show pcaSystem
[...]
PCA-ADMIN> show networkConfig
[...]
For details about the software configuration process, and for advanced configuration and
update options, refer to What Next and the Oracle Private Cloud Appliance Administrator
Guide.
100.96.2.32 is the predefined virtual IP address of the management node cluster for
configuring Oracle Private Cloud Appliance.
4. Confirm you are logged in as the initial user, where System Config State = Config
User.
PCA-ADMIN> show pcaSystem
Command: show pcaSystem
Status: Success
Time: 2022-01-20 14:20:01,069 UTC
Data:
Id = o780c522-fkl5-43b1-8g30-eea90263f2e9
Type = PcaSystem
System Config State = Config User
Note:
Management nodes are named pcamn01, pcamn02 and pcamn03 unless you change these names later in the configuration process.
b. Enter systemStateUnlock.
c. Verify the system is unlocked.
PCA-ADMIN> show pcaSystem
Command: show pcaSystem
Status: Success
Time: 2022-09-16 12:24:28,232 UTC
Data:
Id = 5709f72b-c439-4c3a-8959-758df94eff25
Type = PcaSystem
system state locked = false
8. Log out, then log back in with the new credentials you just created.
PCA-ADMIN> exit
# ssh new-admin-account@100.96.2.32 -p 30006
Password authentication
Password:
PCA-ADMIN>
9. Confirm the system is ready for configuration: the System Config State = Config System Params.
PCA-ADMIN> show pcaSystem
Command: show pcaSystem
Status: Success
Time: 2022-01-20 14:26:01,069 UTC
Data:
Id = o780c522-fkl5-43b1-8g30-eea90263f2e9
Type = PcaSystem
System Config State = Config System Params
[…]
10. Configure the system name and domain name, then confirm the settings.
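As a sketch of this step, assuming a setDay0SystemParameters command that accepts the system and domain names used in the examples of this guide (verify the exact command and parameters in your Service CLI):

PCA-ADMIN> setDay0SystemParameters systemName=mypca domainName=example.com
PCA-ADMIN> show pcaSystem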
11. Configure the network parameters. Once you enter these details, network initialization
begins and can take up to 15 minutes.
• For a dynamic network configuration, enter the parameters on a single line.
PCA-ADMIN> setDay0DynamicRoutingParameters \
uplinkPortSpeed=100 \
uplinkPortCount=2 \
uplinkVlanMtu=9216 \
spine1Ip=10.nn.nn.17 \
spine2Ip=10.nn.nn.25 \
uplinkNetmask=255.255.255.252 \
mgmtVipHostname=apac01-vip \
mgmtVip=10.nn.nn.8 \
ntpIps=10.nn.nn.1 \
peer1Asn=50000 \
peer1Ip=10.nn.nn.18 \
peer2Asn=50000 \
peer2Ip=10.nn.nn.22 \
objectStorageIp=10.nn.nn.1
12. Confirm the network parameters are configured. You can monitor the process using the show NetworkConfig command. When the process is complete, the Network Config Lifecycle State = ACTIVE.
PCA-ADMIN> show NetworkConfig
Command: show NetworkConfig
Status: Success
Time: 2022-01-15 14:28:47,781 UTC
Data:
uplinkPortSpeed=100
uplinkPortCount=2
[…]
BGP Holddown Timer = 180
Network Config Lifecycle State = ACTIVE
When this process is complete, the System Config State changes from Wait for Networking Service to Config_Network_Params.
PCA-ADMIN> show pcaSystem
Command: show pcaSystem
Status: Success
Time: 2022-01-20 14:29:07,069 UTC
Data:
Id = o780c522-fkl5-43b1-8g30-eea90263f2e9
Type = PcaSystem
15. Enter the list of public IPs the appliance can access from your data center, in a comma-separated list on one line.
edit NetworkConfig publicIps=10.nn.nn.2/31,10.nn.nn.4/30,10.nn.nn.8/29, \
10.nn.nn.16/28,10.nn.nn.32/27,10.nn.nn.64/26,10.nn.nn.128/26,10.nn.nn.192/27, \
10.nn.nn.224/28,10.nn.nn.240/29,10.nn.nn.248/30,10.nn.nn.252/31,10.nn.nn.254/32
Caution:
Do not make any changes to anything on this network unless directed to do
so by Oracle Support.
Optional Bastion Host Uplink
Caution:
Connect port 2 on the management switch.
Make sure that the data center Ethernet switch used in this connection is configured
to prevent DHCP leakage to the 100.96.0.0/22 subnet used by Oracle Private Cloud
Appliance. Do not connect to any network with any kind of broadcast services in
addition to DHCP.
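One way to check for DHCP leakage before connecting the bastion host is to listen for DHCP traffic on the workstation interface that faces the data center switch. This is a sketch, assuming the interface is named eth1; no DHCP offers or acknowledgements should appear:

# tcpdump -i eth1 -n 'port 67 or port 68'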
For the bastion host, which is the name used to describe the machine that is
permanently connected to the data center administration network, use the IP
address 100.96.3.254/23 and assign it statically to its network interface. Make sure
there is no other machine on the same subnet using the same IP address and
causing IP conflicts.
Both the ILOM and internal management network are configured on the same management switch. To communicate with both networks, you must configure the bastion host with two paths to the switch. You can choose one of two configuration options:
• Configure two IP addresses on the bastion host.
For example, add 100.96.1.254/23 as a second IP address.
# cat ifcfg-eth1
NAME=eth1
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
NM_CONTROLLED=no
USERCTL=no
DEFROUTE=no
IPV6INIT=no
IPADDR1=100.96.3.254
PREFIX1=23
IPADDR2=100.96.1.254
PREFIX2=23
To reach the 100.96.2.0/23 network when the bastion host is configured with the IP 100.96.1.254 for subnet 100.96.0.0/23, add this route:
ip route add 100.96.2.0/23 via 100.96.0.1 dev eth1
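To make this route persistent across reboots, a route-eth1 file alongside the ifcfg-eth1 file shown above could contain the following line. This is a sketch based on the ifcfg-style network-scripts configuration used in this example:

# cat route-eth1
100.96.2.0/23 via 100.96.0.1 dev eth1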
Optional Connection to Exadata
You can connect Oracle Private Cloud Appliance directly to one or more Exadata racks. For more information, see "Exadata Integration" in the Network Infrastructure section of the Hardware Overview.
To cable the Oracle Private Cloud Appliance to the Exadata rack, use breakout cables with a QSFP28 transceiver on the spine switch end and four SFP28 transceivers on the other end, to connect from ports 7-10 on the Oracle Private Cloud Appliance spine switches to the Exadata database servers.
Once the cable connections are in place, you must configure an Exadata network,
which enables traffic between the connected database nodes and a set of compute
instances. Refer to Creating and Managing Exadata Networks in Hardware
Administration.
What Next
Once the initial installation of your Oracle Private Cloud Appliance is complete, you
can begin to customize the appliance for use.
Note:
Ensure you provision the compute nodes before you hand off a newly
created tenancy to the tenancy administrator. Unprovisioned compute nodes
can cause VCN creation to fail.
6 Cabling Reference
This section provides a reference for the cabling between components of Oracle Private
Cloud Appliance.
Management Switch Ethernet Connections

From Rack Unit / From Component and Port / To Management Switch Rack Unit and Port
-- bastion U26: port 2
U41* compute node: NET0 U26: port 42
U40* compute node: NET0 U26: port 41
U39* compute node: NET0 U26: port 40
U38* compute node: NET0 U26: port 39
U37* compute node: NET0 U26: port 38
U36* compute node: NET0 U26: port 37
U35* compute node: NET0 U26: port 36
U34* compute node: NET0 U26: port 35
U32 leaf switch: NET MGMT U26: port 14
U31 leaf switch: NET MGMT U26: port 13
U25 spine switch: NET MGMT U26: port 34
U24 spine switch: NET MGMT U26: port 33
U23* compute node: NET0 U26: port 23
U22* compute node: NET0 U26: port 22
U21* compute node: NET0 U26: port 21
U20* compute node: NET0 U26: port 20
U19* compute node: NET0 U26: port 19
U18* compute node: NET0 U26: port 18
U17* compute node: NET0 U26: port 17
U16* compute node: NET0 U26: port 16
U15 compute node: NET0 U26: port 15
U14 compute node: NET0 U26: port 12
U13 compute node: NET0 U26: port 11
U12 compute node: NET0 U26: port 10
U11 compute node: NET0 U26: port 9
U10 compute node: NET0 U26: port 8
U09 compute node: NET0 U26: port 7
U08 compute node: NET0 U26: port 6
U07 management node: NET0 U26: port 5
U06 management node: NET0 U26: port 32
U05 management node: NET0 U26: port 31
U03 ZFS storage appliance controller: NET0 U26: port 28
U03 ZFS storage appliance controller: PCIe 6-2 U26: port 29
U01 ZFS storage appliance controller: NET0 U26: port 25
U01 ZFS storage appliance controller: PCIe 6-2 U26: port 26
N/A PDU-A U26: port 24
N/A PDU-B U26: port 43
N/A Oracle Support U26: port 1
* Indicates a rack unit designated as a flex bay. These rack units can contain compute
nodes or storage nodes. The rack unit and port numbers assigned to flex bays only
apply when a compute node is installed in that location.
Spine and Leaf Switch Data Network Connections

From Rack Unit / From Component and Port / To Spine or Leaf Switch Rack Unit and Port
U41 compute node: port 1 U24: port 31
U41 compute node: port 2 U25: port 31
U40 compute node: port 1 U24: port 30
U40 compute node: port 2 U25: port 30
U39 compute node: port 1 U24: port 29
U39 compute node: port 2 U25: port 29
U38 compute node: port 1 U24: port 28
U38 compute node: port 2 U25: port 28
U37 compute node: port 1 U24: port 27
U37 compute node: port 2 U25: port 27
U36 compute node: port 1 U24: port 26
U36 compute node: port 2 U25: port 26
U35 compute node: port 1 U24: port 25
U35 compute node: port 2 U25: port 25
U34 compute node: port 1 U24: port 24
U34 compute node: port 2 U25: port 24
U23 compute node: port 1 U24: port 23
U23 compute node: port 2 U25: port 23
U22 compute node: port 1 U24: port 22
U22 compute node: port 2 U25: port 22
U21 compute node: port 1 U24: port 21
U21 compute node: port 2 U25: port 21
U20 compute node: port 1 U24: port 20
U20 compute node: port 2 U25: port 20
U19 compute node: port 1 U24: port 19
U19 compute node: port 2 U25: port 19
U18 compute node: port 1 U24: port 18
U18 compute node: port 2 U25: port 18
U17 compute node: port 1 U24: port 17
U17 compute node: port 2 U25: port 17
U16 compute node: port 1 U24: port 16
U16 compute node: port 2 U25: port 16
U15 compute node: port 1 U24: port 15
U15 compute node: port 2 U25: port 15
U14 compute node: port 1 U24: port 14
U14 compute node: port 2 U25: port 14
U13 compute node: port 1 U24: port 13
U13 compute node: port 2 U25: port 13
U12 compute node: port 1 U24: port 12
U12 compute node: port 2 U25: port 12
U11 compute node: port 1 U24: port 11
U11 compute node: port 2 U25: port 11
U10 compute node: port 1 U24: port 10
U10 compute node: port 2 U25: port 10
U09 compute node: port 1 U24: port 9
U09 compute node: port 2 U25: port 9
U08 compute node: port 1 U24: port 8
U08 compute node: port 2 U25: port 8
U07 management node: port 1 U24: port 7
U07 management node: port 2 U25: port 7
U06 management node: port 1 U24: port 6
U06 management node: port 2 U25: port 6
U05 management node: port 1 U24: port 5
U05 management node: port 2 U25: port 5
U03 Oracle ZFS Storage Appliance ZS9-2 controller: PCIE3 port 1 U31: port 35
U03 Oracle ZFS Storage Appliance ZS9-2 controller: PCIE10 port 1 U32: port 35
U01 Oracle ZFS Storage Appliance ZS9-2 controller: PCIE3 port 1 U31: port 33
U01 Oracle ZFS Storage Appliance ZS9-2 controller: PCIE10 port 1 U32: port 33
Data and Spine Switch Interconnects

From Rack Unit / From Component and Port / To Rack Unit / To Component and Port
U31 spine switch: port 28 U32 spine switch: port 28
U31 spine switch: port 29 U32 spine switch: port 29
U31 spine switch: port 30 U32 spine switch: port 30
U31 spine switch: port 31 U32 spine switch: port 31
U31 spine switch: port 32 U32 spine switch: port 32
7 Site Checklists
This section contains site checklists to help you ensure that your site is prepared for installing
the Oracle Private Cloud Appliance.
Safety Checklist
Complete the following checklist to ensure that the safety requirements are met. For
information about safety, see Emergency Procedures for Oracle Private Cloud
Appliance and Ventilation and Cooling Requirements.
Logistics Checklist
Complete the following checklist to ensure that the logistics requirements are met. For
information about unpacking and space requirements, see Space Requirements.
Initial Installation Checklist
Items noted in the table with an asterisk (*) are required fields for all configurations. Fields
marked with a (†) are required for static network configuration, and fields marked with a (‡)
are required for dynamic network configuration.