
Foundation 4.5

Field Installation Guide


October 1, 2020
Contents

1. Field Installation Overview...................................................................................4

2. Foundation Considerations..................................................................................5
Foundation Use Case Matrix.................................................................................................................................5
CVM vCPU and vRAM Allocation....................................................................................................................... 6

3. Prepare Factory-Imaged Nodes for Imaging............................................... 9


Node Discovery and Foundation Launch...................................................................................................... 10
Discovering Nodes in the Same Broadcast Domain......................................................................10
Discovering Nodes in a VLAN-Segmented Network...................................................................... 11
Launching Foundation............................................................................................................................... 12
CVM Foundation Upgrade....................................................................................................................................13
Upgrading the CVM Foundation by Using the GUI........................................................................13
Upgrading the CVM Foundation by Using the Foundation Java Applet............................... 13

4. Prepare Bare-Metal Nodes for Imaging........................................................ 15


Prepare Bare-Metal Nodes for Imaging.......................................................................................................... 15
Considerations for Bare Metal Node Imaging.............................................................................................. 15
Preparing the Workstation................................................................................................................................... 16
Installing the Foundation VM.............................................................................................................................. 17
Uploading Installation Files to the Foundation VM....................................................................................21
Setting Up the Network........................................................................................................................................ 21
Foundation VM Upgrade......................................................................................................................................24
Upgrading the Foundation VM by Using the GUI......................................................................... 24
Upgrading the Foundation VM by Using the CLI.......................................................................... 25

5. Foundation App for Imaging........................................................................... 26


Installing Foundation App on macOS............................................................................................................ 26
Installing Foundation App on Windows........................................................................................................ 26
Uninstalling Foundation App on macOS....................................................................................................... 27
Uninstalling Foundation App on Windows................................................................................................... 27
Upgrading Foundation App................................................................................................................................ 28

6. Node Configuration and Foundation Launch............................................ 29


Node Configuration and Foundation Launch.............................................................................................. 29
Configuring the Foundation GUI Automatically......................................................................................... 29
Configuring Foundation VM by Using the Foundation GUI.................................................................... 31

7. Post-Installation Steps.........................................................................................36
Configuring a New Cluster in Prism................................................................................................................36

8. Hypervisor ISO Images.......................................................................................38

Verify Hypervisor Support...................................................................................................................................39
Updating an iso_whitelist.json File on Foundation VM...........................................................................40

9. Network Requirements........................................................................................41

10. Hyper-V Installation Requirements..............................................................43

11. Setting IPMI Static IP Address........................................................................47

12. Troubleshooting...................................................................................49
Fixing IPMI Configuration Issues...................................................................................................................... 49
Fixing Imaging Issues............................................................................................................................................50
Frequently Asked Questions (FAQ)..................................................................................................................51

Appendix A: Single-Node Configuration (Phoenix)....................................58

Copyright...................................................................................................................59
License......................................................................................................................................................................... 59
Conventions............................................................................................................................................................... 59
Default Cluster Credentials................................................................................................................................. 59
Version.........................................................................................................................................................................60

1. FIELD INSTALLATION OVERVIEW
For a node to join a Nutanix cluster, it must have a hypervisor and AOS combination that Nutanix supports. AOS
is the operating system of the Nutanix Controller VM, a VM that must be running on the hypervisor to
provide Nutanix-specific functionality. Find the complete list of supported hypervisor/AOS combinations at
https://portal.nutanix.com/page/documents/compatibility-matrix.
Foundation is the official deployment software of Nutanix. Foundation allows you to configure a pre-imaged node, or
image a node with a hypervisor and an AOS of your choice. Foundation also allows you to form a cluster out of nodes
whose hypervisor and AOS versions are the same, with or without re-imaging. Foundation is available for download
at https://portal.nutanix.com/#/page/Foundation.
If you already have a running cluster and want to add nodes to it, you must use the Expand Cluster option in Prism,
instead of using Foundation. Expand Cluster allows you to directly re-image a node whose hypervisor/AOS version
does not match the cluster's version, or a node that is only running DiscoveryOS. More details on DiscoveryOS are
provided below.
Nutanix and its OEM partners install some software on a node at the factory, before shipping it to the customer. For
shipments inside the USA, this software is a hypervisor and AOS. On Nutanix factory nodes, the hypervisor is
AHV. For OEM factory nodes, the vendor decides which hypervisor to ship to the customer. However,
AOS is always installed, regardless of the hypervisor.
For shipments outside the USA, Nutanix installs a light-weight software called DiscoveryOS, which allows the node
to be discovered in Foundation or in the Expand Cluster option of Prism.
Because a node with DiscoveryOS is not pre-imaged with a hypervisor and AOS, it must be imaged
before it can join a cluster. Both Foundation and Expand Cluster allow you to directly image it with the correct
hypervisor and AOS.
Vendors who do not have an OEM agreement with Nutanix ship a node without any software (not even DiscoveryOS)
installed on it. Foundation supports bare-metal imaging of such nodes. In contrast, Expand Cluster does not support
direct bare-metal imaging. Therefore, if you want to add a software-less node to an existing cluster, you must first
image it using Foundation, then use Expand Cluster.
This document only explains procedures that apply to NX and OEM nodes. For non-OEM nodes, you must perform
the bare-metal imaging procedures specifically adapted for those nodes. For those procedures, see the vendor-specific
field installation guides, available on the Nutanix Support portal at https://portal.nutanix.com/page/documents/list?type=compatibilityList.

Note: Mixed-vendor clusters are not supported. For more information, see the Product Mixing Restrictions section in the NX and
SX Series Hardware Administration Guide.

• See Prepare Factory-Imaged Nodes for Imaging on page 9 to re-image factory-prepared nodes, or create a
cluster from these nodes, or both.
• See Prepare Bare-Metal Nodes for Imaging on page 15 to image bare-metal nodes and optionally configure
them into a cluster.
2. FOUNDATION CONSIDERATIONS
This section describes the guidelines, compatibility, limitations, and capabilities of Foundation.

Foundation Use Case Matrix


The following matrix lists the supported use cases for each of the Foundation distributions
available for download:

Table 1: Foundation Use Case Matrix

| Use case | CVM Foundation | Portable Foundation (Windows, macOS) | Standalone Foundation VM |
|---|---|---|---|
| Function | Factory-imaged nodes | Factory-imaged nodes; bare-metal nodes | Factory-imaged nodes; bare-metal nodes |
| Hardware | Any | Any, if you image discovered nodes. If you image nodes without discovery, hardware support is limited to: Nutanix (only G4 and above), Dell, HPE, and Lenovo (only Cascade Lake and above). | Any |
| If IPv6 is disabled | Cannot image nodes | IPMI IPv4 required on the nodes | IPMI IPv4 required on the nodes |
| Can configure the VLAN of Foundation | No. Manually configure in the vSwitch of the host. | No. Manually configure in Windows or macOS. | Yes |
| Can configure the VLAN of CVM/hosts | Yes | Yes | Yes |
| LACP support | Yes | Yes | Yes |
| Multi-homing support | Yes | Yes | Yes |
| RDMA support | Yes | Yes | Yes |
| How to use? | Access using http://CVM_IP:8000/ | Launch the executable for Windows 10+ or macOS 10.13.1+ | Deploy as a VM on VirtualBox, Fusion, Workstation, AHV, ESX, and so on |

CVM vCPU and vRAM Allocation


vCPU Allocation
The number of vCPUs is set to the number of physical cores in the CVM NUMA node, which is the NUMA node
connected to the SSD storage controller. The minimum and maximum values are as follows:

Table 2: vCPU Allocation

| Platform | vCPU |
|---|---|
| AMD Naples, AMD Rome, or PowerPC | Fixed to 8 for AMD Naples; fixed to 12 for AMD Rome; fixed to 6 for PowerPC |
| Dense storage: 2+ NUMA nodes and either 32+ TB of HDD or 48+ TB of SSD (excluding NVMe) | Up to 14 |
| High performance: 2+ NUMA nodes, 8+ physical cores in the CVM NUMA node, and either 2+ NVMe drives or RDMA enabled | Between 8 and 12 (inclusive); between 12 and 16 (inclusive) if hyperthreading is enabled or the CVM NUMA node has 12+ physical cores |
| Generic: 8+ physical cores in the CVM NUMA node, or both 6+ physical cores in the CVM NUMA node and hyper-threading enabled | Between 8 and 12 (inclusive) |
| Low-end platforms | Equal to the number of physical cores in the CVM NUMA node |

Note:

• Foundation GUI and Prism do not provide an option to override vCPU defaults.



vRAM Allocation
In Phoenix, every platform layout module defines a hardware attribute called "default_workload" that classifies
the default purpose of that platform into one of the following categories:
Table 3: vRAM Allocation

| Platform | vRAM |
|---|---|
| "vdi": general / VDI / server virtualization | 20 GB |
| "storage_heavy" or "minimal_compute_node" | 28 GB |
| All-flash platform; overrides the two rules above unless the platform has the rare hardware attribute "all_flash_low_perf" | 32 GB |
| "high_perf" | 32 GB |
| "dense", or the node has 2+ NUMA nodes and either 32+ TB of HDD or 48+ TB of SSD and NVMe combined | 40 GB |
| The platform has the rare hardware attribute "default_cvm_memory_in_gb"; overrides all of the above | The value defined by default_cvm_memory_in_gb |
| AOS is older than 5.1, or total memory is < 62 GB | Subtract 4 GB from the previous value, only if that value is < 32 GB |
| The determined value leaves < 6 GB of remaining memory; overrides all of the above | As much memory as leaves 6 GB of remaining memory |

For example, a "storage_heavy" node running AOS 5.0 defaults to 28 GB; because 28 GB is less than 32 GB, 4 GB is subtracted, leaving a 24 GB allocation.

Features like deduplication and compression require vRAM beyond the default values. For such cases, you can override the
default values in one of the following ways:

• Manually change the value in the Foundation GUI. If the value you enter leaves < 6 GB of remaining memory, it is
ignored and the default value described earlier is used instead.
• Change the allocation from Prism after installation, by going to Configure CVM from the gear
icon in the web console.

Exceptions for Old Platforms


The preceding vCPU and vRAM policies do not apply to the following old platforms:

• All NX platforms before Gen 5
• All Dell platforms before Gen 14
• Lenovo HX3500, HX5500, HX7500
• HPE DL380p Gen8

For these platforms, vCPU allocation is fixed to 8.
The default vRAM allocation is 20 GB, except for the following platforms:



Table 4: vRAM Values

| Platform | vRAM |
|---|---|
| NX-6035C, XC730xd-12C, HX5500, HX7500 | 28 GB |
| NX-8150, NX-8150-G4, NX-9040 | 32 GB |
| AOS is older than 5.1, or total memory is < 62 GB | Subtract 4 GB from the previous value, only if that value is < 32 GB |
| The determined value leaves < 6 GB of remaining memory; overrides all of the above | As much memory as leaves 6 GB of remaining memory |

NUMA Pinning
Pinning is enabled only when the following conditions are met:

• vRAM allocation <= RAM of the CVM NUMA node
  This applies regardless of whether the allocation is one of the default values or one that you provide.
• vCPU allocation <= logical cores of the CVM NUMA node

Note:

• "CVM NUMA node" means the NUMA node that connects to the SSD storage controller.
• When pinning is enabled, all vCPUs are pinned to this single NUMA node. Pinning vCPUs to a NUMA
node enables the CVM to maximize its I/O performance under heavy load.
• vRAM allocation is not pinned to any NUMA node, to ensure that enough memory is available for the
CVM when it starts after shutdowns such as maintenance mode.
3. PREPARE FACTORY-IMAGED NODES FOR IMAGING
This procedure describes how to install a selected hypervisor and the Nutanix Controller VM on discovered
nodes and how to configure the nodes into a cluster. "Discovered nodes" are factory-prepared nodes that
are not currently part of any cluster and are reachable within the same subnet. This procedure runs the
Foundation tool through the Nutanix Controller VM (Controller VM-based Foundation).

Before you begin

• Make sure that the nodes that you want to image are factory-prepared nodes that have not been configured in any
way and are not part of a cluster.
• Physically install the Nutanix nodes at your site. For general installation instructions, see "Mounting the Block"
in the Getting Started Guide. For installation instructions specific to your model type, see "Rack Mounting" in the
NX Series Hardware Administration Guide.
• Your workstation must be connected to the network on the same subnet as the nodes you want to image.
Foundation does not require an IPMI connection or any special network port configuration to image discovered
nodes. See Network Requirements for general information about the network topology and port access required
for a cluster.
• Determine the appropriate network (gateway and DNS server IP addresses), cluster (name, virtual IP address), and
node (Controller VM, hypervisor, and IPMI IP address ranges) parameter values needed for installation.

Note: The use of a DHCP server is not supported for Controller VMs, so make sure to assign static IP addresses to
Controller VMs.

Note: Nutanix uses an internal virtual switch to manage network communications between the Controller VM
and the hypervisor host. This switch is associated with a private network on the default VLAN and uses the
192.168.5.0/24 address space. For the hypervisor, IPMI interface, and other devices on the network (including
the guest VMs that you create on the cluster), do not use a subnet that overlaps with the 192.168.5.0/24 subnet on
the default VLAN. If you want to use an overlapping subnet for such devices, make sure that you use a different
VLAN.

• Download the following files from the Nutanix Support portal:

• AOS installer named nutanix_installer_package-version#.tar.gz from the AOS (NOS) download page.
• Hypervisor ISO if you want to install Hyper-V or ESXi. You must provide the supported Hyper-V or ESXi
ISO (see Hypervisor ISO Images on page 38); Hyper-V and ESXi ISOs are not available on the support
portal.
It is not necessary to download AHV because the AOS bundle includes an AHV installation bundle. However,
you can download an AHV installation bundle if you want to install a non-default version.
• Make sure that IPv6 is enabled on the network to which the nodes are connected and IPv6 multicast is supported.
• If the nodes contain self-encrypting drives (SEDs), disable encryption on the SEDs before imaging the nodes.
If the nodes contain only SEDs, you can enable encryption after you image the nodes. If the nodes contain both
regular hard disk drives (HDDs) and SEDs, do not enable encryption on the SEDs at any time during the lifetime
of the cluster.
For information about enabling and disabling encryption, see the "Data-at-Rest Encryption" chapter in the Prism
Web Console Guide.

About this task

Note: This method can image discovered nodes or create a single cluster from discovered nodes or both. This method
is limited to factory prepared nodes running AOS 4.5 or later. If you want to image factory prepared nodes running an
earlier AOS (NOS) version or image bare-metal nodes, see Prepare Bare-Metal Nodes for Imaging on page 15.

To image the nodes and create a cluster, do the following:

Procedure

1. Run discovery and launch Foundation (see Node Discovery and Foundation Launch on page 10).

2. Update Foundation to the latest version (see Upgrading the CVM Foundation by Using the Foundation Java Applet on page 13).

Note:

• This step is optional for platforms other than HPE DX.


• For HPE DX platform, the minimum supported version of Foundation is 4.4.1. If the Foundation
version installed on the discovered nodes is older than 4.4.1, upgrade Foundation to 4.4.1 or a newer
version. Perform the upgrade process with a Linux workstation. Do not use a Windows workstation
to perform the upgrade as it is not supported.

3. Run CVM Foundation (see Configuring Foundation VM by Using the Foundation GUI on page 31).

4. After the cluster is created successfully, begin configuring the cluster (see Configuring a New Cluster in Prism on
page 36).

Node Discovery and Foundation Launch


Discovering nodes and launching Foundation requires that the Nutanix nodes and the workstation that you
use for imaging be in the same broadcast domain or in a VLAN-segmented network.

Discovering Nodes in the Same Broadcast Domain

About this task


To discover nodes in a network that does not use VLANs, do the following:

Procedure

1. Access the Nutanix Support portal (https://my.nutanix.com).

2. Browse to Downloads > Foundation, and then click FoundationApplet-offline.zip.



3. Extract the contents of the downloaded ZIP file into the workstation that you want to use for imaging, and then
double-click nutanix_foundation_applet.jnlp.
The discovery process begins and a window appears with a list of discovered nodes.

Note: A security warning message may appear to indicate that the list of nodes is from an unknown source. To run
the application, click Accept and Run.

Figure 1: Foundation Launcher Window
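If double-clicking the .jnlp file does not start the applet, you can usually launch it from a terminal with Java Web Start. A minimal sketch, assuming a Java 8 JRE that includes the javaws launcher (the extraction directory name is a placeholder):

$ cd ~/Downloads/FoundationApplet-offline
$ javaws nutanix_foundation_applet.jnlp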

Discovering Nodes in a VLAN-Segmented Network


Nutanix nodes running AHV version 20160215 or later include a network configuration tool. You can
use the tool to assign a VLAN tag to the public interface on the Controller VM and to one or more physical
interfaces on the host. You can also use the tool to assign IP addresses to the Controller VM and
the hypervisor. After the network configuration is complete, start the Foundation service running on the
Controller VM of that host to discover and image other Nutanix nodes. Foundation uses the VLAN sniffer
provided in the CVM to detect free Nutanix nodes and nodes in other VLANs. The VLAN sniffer uses the
Neighbor Discovery protocol for IP version 6. Therefore, the VLAN sniffer requires that the physical
switch to which the nodes are connected support IPv6 broadcast and multicast. During the imaging
process, Controller VM-based Foundation also assigns the specified VLAN tag (assumed to be that of the
production VLAN) to the corresponding interfaces on the selected nodes, eliminating the need to perform
additional VLAN assignment tasks for those nodes.

Before you begin


Connect the Nutanix nodes to a switch.

About this task

Note: Use the network configuration tool only on factory-prepared nodes that are not part of a cluster. Using the tool
on a node that is part of a cluster makes the node inaccessible to the other nodes in the cluster. If that happens, the
only way to recover is to reconfigure the node with its previous IP addresses by using the network configuration
tool again.

To configure the network for a node, do the following:



Procedure

1. Connect a console to one of the nodes and log on to the Acropolis host by using the root credentials.

2. Change your working directory to /root/nutanix-network-crashcart/, and then start the network
configuration utility.
root@ahv# ./network_configuration

3. In the network configuration utility, do the following:

a. Review the network card details to ascertain interface properties and identify connected interfaces.
b. Use the arrow keys to go to the interface that you want to configure, and then use the Spacebar key to select
the interface.
Repeat this step for each interface that you want to configure.
c. Use the arrow keys to navigate through the user interface and specify values for the following parameters:

• VLAN Tag. VLAN tag to use for the selected interfaces.


• Netmask. Network mask of the subnet to which you want to assign the interfaces.
• Gateway. Default gateway for the subnet.
• Controller VM IP. IP address for the Controller VM.
• Hypervisor IP. IP address for the hypervisor.
d. Use the arrow keys to select Done, and then press Enter.
The network configuration utility configures the interfaces.

Launching Foundation
Launching Foundation depends on whether you used the Foundation Applet to discover nodes in the same
broadcast domain or the crash cart user interface to discover nodes in a VLAN-segmented network.

About this task


To launch the Foundation GUI, do one of the following:

Procedure

• If you used the Foundation Applet to discover nodes in the same broadcast domain, do the following:

a. Select the node on which you want to run Foundation.


The selected node is imaged first, and then the other nodes. Select only nodes with a status field value of Free,
which indicates that the node is not currently part of a cluster. A value of Unavailable indicates that the node is part of
an existing cluster or otherwise unavailable. To rerun the discovery process, click Retry discovery.

Note: A warning may appear stating that the selected node does not have the highest Foundation version
among the discovered nodes. If you select a node with an earlier Foundation version (one that
does not recognize one or more of the node models), installation may fail when Foundation attempts to image
a node of an unknown model. Therefore, select the node with the highest Foundation version among the nodes
to be imaged. If you do not intend to select any of the nodes that have the higher Foundation version, ignore the
warning and proceed.

b. (Optional but recommended) Upgrade Foundation on the selected node to the latest version. See Upgrading
the CVM Foundation by Using the Foundation Java Applet on page 13.
c. With the node having the latest Foundation version selected, click the Launch Foundation button.
Foundation searches the network subnet for unconfigured Nutanix nodes (factory prepared nodes that are
not part of a cluster) and then displays information about the discovered blocks and nodes in the Discovered
Nodes screen. (It does not display information about nodes that are powered off or in a different subnet.) The
discovery process normally takes just a few seconds.

Note: If you want Foundation to image nodes from an existing cluster, you must first either remove the target
nodes from the cluster or destroy the cluster.

• If you used the crash cart user interface to discover nodes in a VLAN-segmented network, in a browser on your
workstation, enter the following URL: http://CVM_IP_address:8000
Replace CVM_IP_address with the IP address that you assigned to the Controller VM when using the network
configuration tool.
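Before opening the browser, you can optionally confirm that the Foundation service on the Controller VM is responding. A quick check from the workstation (a sketch; replace CVM_IP_address as above):

$ curl -I http://CVM_IP_address:8000/

An HTTP response indicates that the Foundation GUI is reachable; a timeout usually points to a VLAN or IP configuration problem.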

CVM Foundation Upgrade


You can upgrade the CVM Foundation from version 3.12 or later to 4.5.x by using the Foundation GUI, the
Foundation Java applet, or the Prism web console.
For information about upgrading Foundation by using the Prism web console, see the "Upgrading Foundation" chapter in
the Prism Web Console Guide.

Note: Ensure that you use the minimum version of Foundation required by your hardware platform. To determine
whether Foundation needs an upgrade for a hardware platform, see the respective System Specifications guide. If
the nodes you want to include in the cluster are of different models, determine which of their minimum Foundation
versions is the most recent version, and then upgrade Foundation on all the nodes to that version.

Upgrading the CVM Foundation by Using the GUI


You can update the CVM Foundation by using the Foundation GUI.

About this task


To update the CVM Foundation by using the Foundation GUI:

Procedure
Click the version link in the Foundation GUI.

Note: Update only the foundation-platforms submodule. This update enables Foundation to support the latest hardware
models or components qualified after the release of the installed Foundation version.

Upgrading the CVM Foundation by Using the Foundation Java Applet


The Foundation Java applet includes an option to upgrade or downgrade the CVM Foundation on a
discovered node. Nutanix recommends updating the CVM Foundation, but it is optional. Ensure that the node
is not already configured. Upgrade the CVM Foundation on any one node, and then use that node to upgrade
the CVM Foundation on the other nodes during imaging. If the node is configured, do not use the Java applet.
Instead, update the CVM Foundation by using the Prism web console (see the "Upgrading Foundation"
chapter in the Prism Web Console Guide).



Before you begin
1. Download the Foundation .tar file from the Nutanix Support portal to the workstation on which you plan to run
the Foundation Java applet.
2. Download and start the Foundation Java applet.

About this task


To upgrade Foundation on a discovered node by using the Foundation Java applet, do the following:

Procedure

1. In the Foundation Java applet, select the node on which you want to upgrade the CVM Foundation.

2. Click Upgrade Foundation.

3. Browse to the folder where you downloaded the Foundation .tar file and double-click the .tar file.
The upgrade process begins. After the upgrade completes, Genesis restarts on the node, and that in turn restarts the
Foundation service. After the Foundation service becomes available, the upgrade process reports the status of the
upgrade.
4. PREPARE BARE-METAL NODES FOR IMAGING
Prepare Bare-Metal Nodes for Imaging
You can perform bare-metal (also referred to as standalone) imaging from a workstation with access to
the IPMI interfaces of the nodes. Imaging a cluster in the field requires installing tools (such as Oracle VM
VirtualBox or VMware Fusion) on the workstation and setting up the environment to run these tools. This
chapter describes how to install a selected hypervisor and the Nutanix Controller VM on bare-metal nodes
and configure the nodes into one or more clusters. "Bare-metal" nodes are not factory-prepared and cannot
be detected through discovery. However, you can also use this method to image factory-prepared nodes.

Before you begin

• Physically install the nodes at your site. For installing Nutanix hardware platforms, see the NX and SX
Series Hardware Administration and Reference for your model type. For installing hardware from any other
manufacturer, see that manufacturer's documentation.
• Set up the installation environment (see Preparing the Workstation on page 16).
• Ensure that you have the appropriate global, node, and cluster parameter values needed for installation. The use of
a DHCP server is not supported for Controller VMs, so make sure to assign static IP addresses to Controller VMs.

Note: If the Foundation VM is configured with an IP address that is different from other clusters that require
imaging in a network (for example, the Foundation VM is configured with a public IP address while the cluster resides
in a private network), repeat step 9 in Installing the Foundation VM on page 17 to configure a new static IP
address for the Foundation VM.

• If the nodes contain self-encrypting drives (SEDs), disable encryption on the SEDs before imaging the nodes. If
the nodes contain only SEDs, enable encryption after you image the nodes. If the nodes contain both regular hard
disk drives (HDDs) and SEDs, do not enable encryption on the SEDs at any time during the lifetime of the cluster.
For information about enabling and disabling encryption, see the "Data-at-Rest Encryption" chapter in the Prism
Web Console Guide.

Note: After you prepare the bare-metal nodes for Foundation, configure the Foundation VM by using the GUI. For
details, see Node Configuration and Foundation Launch on page 29.

Considerations for Bare Metal Node Imaging


Restrictions

• If you change the boot device order in the BIOS to boot from a USB flash drive, Foundation times out
unless the first boot device is set to CD-ROM in the BIOS boot order menu.
• If Spanning Tree Protocol (STP) is enabled on the ports that are connected to the Nutanix host, Foundation might
time out during the imaging process. Therefore, disable STP by using PortFast or an equivalent feature
on the ports that are connected to the Nutanix host before starting Foundation.



• Avoid connecting any device (for example, plugging a device into a USB port on a node) that presents virtual media, such as a
CD-ROM. Such devices can conflict with the installation when the Foundation tool tries to mount
the virtual CD-ROM hosting the install ISO.
• During bare-metal imaging, you assign IP addresses to the hypervisor host, the Controller VMs, and the IPMI
interfaces. Do not assign IP addresses from a subnet that overlaps with the 192.168.5.0/24 address space on the
default VLAN. Nutanix uses an internal virtual switch to manage network communications between the Controller
VM and the hypervisor host. This switch is associated with a private network on the default VLAN and uses the
192.168.5.0/24 address space. If you want to use an overlapping subnet, make sure that you use a different VLAN.

Recommendations

• Nutanix recommends that imaging and configuration of bare-metal nodes be performed or supervised by Nutanix
sales engineers or partners. If a Nutanix sales engineer or partner is unavailable, contact Nutanix Support for
assistance.

Preparing the Workstation


A workstation is needed to host the Foundation VM during imaging. You can perform these steps either
before going to the installation site (if you use a portable laptop) or at the site (if an active internet
connection is available).

Before you begin


Get a workstation (laptop or desktop computer) that you can use for the installation. The workstation must have
at least 3 GB of memory (Foundation VM size plus 1 GB), 30 GB of disk space (preferably SSD), and a physical
(wired) network adapter.
To prepare the workstation, do the following:

Procedure

1. Go to the Nutanix Support portal and download the following files to a temporary directory on the workstation:

| File | Location | Description |
|---|---|---|
| Foundation_VM_OVF-version#.tar | On the Nutanix Support portal, go to Downloads > Foundation. | The .tar file contains Foundation_VM-version#.ovf, the Foundation VM OVF configuration file for the version# release (for example, Foundation_VM-3.1.ovf), and Foundation_VM-version#-disk1.vmdk, the Foundation VM VMDK file for the version# release (for example, Foundation_VM-3.1-disk1.vmdk). |
| nutanix_installer_package-version#.tar.gz | On the Nutanix Support portal, go to Downloads > AOS (NOS). | File used for imaging the nodes with the desired AOS release. |

Note:

• For post-installation performance validation, execute the Four Corners Microbenchmark
using X-Ray. For more information, see https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=X-Ray.
• To install a hypervisor other than AHV, you must provide the ISO image of the hypervisor (see
Hypervisor ISO Images on page 38). Make sure that the hypervisor ISO image is available on
the workstation.
• Verify the support for the hypervisor and the corresponding version of the hypervisor. For details,
see Verify Hypervisor Support on page 39.

2. Download the installer for Oracle VM VirtualBox (a free open-source tool used to create a virtualized
environment on the workstation) and install it with the default options. For installation and start-up instructions,
see the Oracle VM VirtualBox User Manual (https://www.virtualbox.org/wiki/Documentation).

Note: You can also use any other virtualization environment (VMware ESXi, AHV, and so on) instead of Oracle
VM VirtualBox.

3. Go to the location where you downloaded the Foundation .tar file and extract the contents.
$ tar -xf Foundation_VM_OVF-version#.tar

Note: If the tar utility is not available, use the appropriate utility for your environment.

4. Copy the extracted files to your VirtualBox VMs folder (by default, VirtualBox uses the "VirtualBox VMs" folder in your home directory).
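A minimal sketch of this step, assuming the default VirtualBox VMs folder in your home directory and the example 3.1 file names from the table above:

$ mkdir -p ~/"VirtualBox VMs"
$ cp Foundation_VM-3.1.ovf Foundation_VM-3.1-disk1.vmdk ~/"VirtualBox VMs"/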

Installing the Foundation VM


Import the Foundation VM into Oracle VM VirtualBox.

About this task


To install the Foundation VM on the workstation, do the following:

Procedure

1. Start Oracle VM VirtualBox.

2. Click the File menu and select Import Appliance... from the pull-down list.

3. In the Import Virtual Appliance dialog box, browse to the location of the Foundation .ovf file, and select the
Foundation_VM-version#.ovf file.

4. Click Next.

5. Click Import.

6. In the left pane, select Foundation_VM-version#, and click Start.


The VM operating system boots and the Foundation VM console launches.



7. On the logon screen, log on as the nutanix user with password nutanix/4u.
The Foundation VM desktop appears.

8. (Optional) If you want to enable file drag-and-drop between your workstation and the Foundation
VM, install VirtualBox Guest Additions as follows:

a. On the VirtualBox window for the Foundation VM, select Devices > Insert Guest Additions CD
Image... from the menu.
A VBOXADDITIONS CD entry appears on the Foundation VM desktop.
b. When the Open Autorun Prompt dialog box appears, click OK, and then click Run.

c. Enter the root password nutanix/4u and click Authenticate.

d. After the installation completes, press the return key to close the VirtualBox Guest Additions installation
window.
e. Right-click the VBOXADDITIONS CD entry on the desktop and select Eject.

f. Restart the Foundation VM by selecting System > Shutdown... > Restart from the Linux GUI.

g. After the Foundation VM restarts, select Devices > Drag 'n' Drop > Bidirectional from the menu.



9. To determine if the Foundation VM was able to get an IP address from the DHCP server, open a terminal session
and run the ifconfig command.
If the Foundation VM has a valid IP address, skip to the next step. Otherwise, configure a static IP address as
follows:

Note: The Foundation VM must be on a public network to copy the selected ISO files to the Foundation VM. This
means you might need to set a static IP address now and set it again when the workstation is on a different (typically
private) network for the installation.

a. Double-click the set_foundation_ip_address icon on the Foundation VM desktop.

Figure 2: Foundation VM: Desktop


b. In the dialog box, click Run in Terminal.

Figure 3: Foundation VM: Terminal Window


c. In the Select Action dialog box, select Device configuration.

Note: Use the keyboard to make selections in the terminal window. Mouse clicks do not work.



Figure 4: Foundation VM: Action Box
d. In the Select A Device dialog box, select eth0.

Figure 5: Foundation VM: Device Configuration Box


e. In the Network Configuration dialog box, do the following:

1. Clear the asterisk in the Use DHCP field (which is set by default).
2. Enter appropriate addresses in the Static IP, Netmask, and Default gateway IP fields.
3. Click OK.

Figure 6: Foundation VM: Network Configuration Box


f. Click Save.

g. Click Save & Quit.



The configuration is saved and the terminal window closes.
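As an alternative to importing through the VirtualBox GUI (steps 2 through 6), you can import and start the appliance with VirtualBox's VBoxManage CLI. A sketch, assuming VBoxManage is on your PATH and the example 3.1 file name; the registered VM name may differ, so confirm it with VBoxManage list vms:

$ VBoxManage import Foundation_VM-3.1.ovf
$ VBoxManage startvm "Foundation_VM-3.1" --type gui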

Uploading Installation Files to the Foundation VM


The file system on the Foundation VM includes hypervisor-specific directories. Copy the files that you
downloaded either from the Nutanix Support portal or obtained from a hypervisor vendor into the respective
directories.

About this task


To upload the installation and installation-related files to the Foundation VM, do the following:

Procedure

1. Copy nutanix_installer_package-version#.tar.gz to the /home/nutanix/foundation/nos directory.
To install hypervisors other than AHV, copy the ISO files to the corresponding directory on the Foundation VM
as follows:

• ESXi ISO image: /home/nutanix/foundation/isos/hypervisor/esx


• Hyper-V ISO image: /home/nutanix/foundation/isos/hypervisor/hyperv

2. If you downloaded the diagnostics files for one or more hypervisors, copy them to the appropriate directories on
the Foundation VM. The directories for the diagnostic files are as follows:

» Diagnostic file for AHV (diagnostic.raw.img.gz): /home/nutanix/foundation/isos/diags/kvm
» Diagnostic file for ESXi (diagnostics-disk1.vmdk, diagnostics.mf, and diagnostics.ovf): /home/nutanix/foundation/isos/diags/esx
» Diagnostic file for Hyper-V (diagnostics_uvm.vhd.gz): /home/nutanix/foundation/isos/diags/hyperv
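If you run the Foundation VM headless or prefer not to use drag-and-drop, you can copy the files over SSH instead. A sketch, assuming SSH access to the Foundation VM as the nutanix user (password nutanix/4u); the ESXi ISO file name is a placeholder:

$ scp nutanix_installer_package-version#.tar.gz nutanix@foundation_vm_ip:/home/nutanix/foundation/nos/
$ scp esxi-installer.iso nutanix@foundation_vm_ip:/home/nutanix/foundation/isos/hypervisor/esx/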

Setting Up the Network


The nodes and workstation must have network access to each other through a switch at the site. Set up
the network onsite before imaging the nodes through the Foundation tool.

About this task


To set up the network connections, do the following:

Note: You can connect to either a managed switch (routing tables) or a flat switch (no routing tables). Nutanix
recommends a flat switch to protect the production environment against configuration errors. Foundation includes a
multi-homing feature that allows you to image nodes by using the production IP addresses even when connected to a
flat switch. See Network Requirements on page 41 for information about the network topology and port access
required for a cluster.

Procedure

1. Enable IPv6 on the network to which the nodes are connected and ensure that IPv6 multicast is supported.



2. Configure the IPMI port.

Note: If you use a shared IPMI port to reinstall the hypervisor, follow the instructions in KB 3834.

More information on specific models is as follows:

• Nutanix NX Series: Connect the dedicated IPMI port and any one of the data ports to the switch. Ensure that
you use the dedicated IPMI port instead of the shared IPMI port. In addition to the dedicated IPMI port, we
highly recommend that you use a 10G data port. You can use a 1G port instead of a 10G data port at the cost
of increased imaging time or imaging failure. If you use SFP+ 10G NICs and a 1G RJ45 switch for imaging,
connect the 10G data port to the switch by using one of our approved GBICs. If the BMC is configured to use
it, you can also use the shared IPMI/1G data port in place of the dedicated port. However, the shared IPMI/1G
data port is less reliable than the dedicated port. The IPMI LAN interfaces of the nodes must be in failover
mode (factory default setting).
If you choose to use the shared IPMI port on G4 and later platforms, make sure that the connected switch can
auto-negotiate to 100 Mbps. This auto-negotiation capability is required because the shared IPMI port can
support 1 Gbps throughput only when the host is online. If the switch cannot auto-negotiate to 100 Mbps when
the host goes offline, make sure to use the dedicated IPMI port instead of the shared port (the dedicated IPMI
port always supports 1 Gbps throughput). Older platforms support only 10/100 Mbps throughput.
Foundation does not support:

• Imaging nodes in an environment using LACP without fallback enabled.


• Configuring nodes' virtual switches to use LACP. Perform this configuration manually after imaging.
• Configuring network adapters to use jumbo frames during imaging. This configuration must be done
manually after imaging.
The exact location of the port depends on the model type. To determine the port location, see the respective
vendor hardware documentation. The following figure illustrates the location of the network ports in NX-3050
(middle RJ-45 interface):

Figure 7: Port Locations (NX-3050)


• Lenovo Converged HX Series: Connect both the system management (IMM) port and one of the 10 GbE ports.
The following figure illustrates the location of the network ports in HX3500 and HX5500:

Figure 8: Port Locations (HX System)


• Dell XC series: Connect the iDRAC port and one of the data ports. Some Dell XC Series systems, such as
the Dell XC430-4, support imaging over a 1 GbE network connection, whereas other systems, such as the
Dell XC640-10, require a 10 GbE connection for imaging. Nutanix recommends that you use a 10 GbE port
regardless of the model of your appliance. The following figure illustrates the location of the network ports:

Figure 9: Port Locations (XC System)


• IBM POWER Servers: Connect the dedicated IPMI port and a data port to the network to which the
Foundation VM is connected.
• HPE DX series: Connect the iLO port and the data ports to your network switch. Ensure that the connected
data ports are of the same data speed and not a combination of different data speeds. Also, ensure that the TCP
port 8000 is open between Foundation and the iLO port. If you use SFP+ or SFP28 data ports with a 1G RJ45
switch during imaging, connect the data port to the switch by using an approved transceiver.
In the DX360 and DX380 models, there are four data ports located to the right of the iLO port. These data
ports are not supported.

3. Connect the installation workstation to the same switch as the nodes.
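To spot-check the IPv6 requirement from step 1, you can ping the IPv6 all-nodes link-local multicast address from the Foundation VM; connected nodes should answer. A sketch, assuming the Foundation VM's wired interface is eth0:

$ ping6 -c 3 ff02::1%eth0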

What to do next
Update the Foundation VM service, if necessary. After you prepare the bare-metal nodes for Foundation,
configure the Foundation VM by using the GUI. For details, see Node Configuration and Foundation Launch
on page 29.

Foundation VM Upgrade
You can upgrade the Foundation VM either by using the GUI or CLI.

Note: Ensure that you use the minimum version of Foundation required by your hardware platform. To determine
whether Foundation needs an upgrade for a hardware platform, see the respective System Specifications guide. If
the nodes you want to include in the cluster are of different models, determine which of their minimum Foundation
versions is the most recent version, and then upgrade Foundation on all the nodes to that version.

Upgrading the Foundation VM by Using the GUI


The Foundation GUI enables you to perform one-click updates either over the air or from a .tar file that
you manually upload to the Foundation VM. The over-the-air update process downloads and installs the
latest Foundation version from the Nutanix Support portal. By design, the over-the-air update process
downloads and installs a .tar file that does not include Lenovo packages. Therefore, for Lenovo platforms,
update Foundation by using an uploaded .tar file.

Before you begin


If you want to install a .tar file of your choice (required for Lenovo platforms and optional for other
platforms), download the Foundation .tar file to the workstation that you use to access or run Foundation.
Installers are available on the Foundation download page at https://portal.nutanix.com/#/page/foundation/list.

About this task


To update the Foundation VM by using the GUI, do the following:

Procedure

1. Open the Foundation GUI.



2. On the main menu, select Settings from the pull-down list.

3. Click Upgrade Software.

4. Select the Foundation tab.


The screen displays the latest Foundation version for upgrade.

5. In the Update Foundation dialog box, do one of the following:

» (Do not use with Lenovo platforms) To perform a one-click over-the-air update, click Update.

» (For Lenovo platforms; optional for other platforms) To update Foundation by using an installer that you
downloaded to the workstation, click Browse, browse and select the .tar file, and then click Install.

Upgrading the Foundation VM by Using the CLI


You can upgrade the Foundation VM from version 3.1 or later to version 4.5.x by using the CLI.

About this task


Do the following:

Procedure

1. Download the Foundation upgrade bundle (foundation-version#.tar.gz) from the Nutanix Support portal to
the /home/nutanix/ directory.

2. Change your working directory to /home/nutanix/.

3. Upgrade Foundation.
$ ./foundation/bin/foundation_upgrade -t foundation-version#.tar.gz
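To confirm the new version after the upgrade, you can query the Foundation service. A sketch, assuming the service is running and the /foundation/version endpoint is available in your release:

$ curl http://localhost:8000/foundation/version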



5. FOUNDATION APP FOR IMAGING
Foundation is available as a native Mac or Windows application that launches the Foundation GUI in a browser. Unlike
the standalone VM, the Foundation app provides a simpler alternative that skips configuring and deploying a VM.

Installing Foundation App on macOS

About this task


Perform the following steps to install and launch Foundation app on macOS:

Procedure

1. Disable the "stealth mode" firewall option on macOS. (A command-line sketch follows this procedure.)

2. Download the Foundation .dmg file from the Nutanix portal.

3. Double-click the Foundation .dmg file.

4. To install the Foundation app, drag the Foundation app to the Applications folder.
The Foundation app is installed.

5. Double-click the Foundation app in the Applications folder.

Note: To upgrade the app, download and install a higher version of the app from the Nutanix Support portal.

The Foundation GUI launches in the default browser. Alternatively, browse to http://localhost:8000.

6. Allow the app to accept incoming connections when prompted by your Mac computer.

7. To close the app, right-click on the Foundation icon in the launcher and click Force Quit.
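If you prefer to disable stealth mode (step 1) from a terminal instead of through System Preferences, the built-in macOS firewall CLI can do it. A sketch, assuming an administrator account:

$ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setstealthmode off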

Installing Foundation App on Windows

About this task


To install and launch Foundation app, perform the following steps on the Windows PC.

Note: The installation stops any running Foundation process. If you initiated imaging with a previously
installed app, ensure that the operation is complete before starting the installation again.

Procedure

1. Ensure that IPv6 is enabled on the network interface that connects the Windows PC to the switch.

2. Download the Foundation .msi installer file from the Nutanix Support portal.



3. Perform the installation either through the wizard or a silent installation:

» Double-click the .msi file to run the installation wizard.

» Run the msiexec.exe /i portable_foundation.msi /qb /l*v install.log command.
Replace /qb with /qb+ to also display a modal dialog box when the installation is completed, or with /q to
perform a silent installation.
This command saves the installation details in the install.log file, which you can use for debugging a failed
installation.

4. Double-click the Foundation icon on the desktop or in the Start menu.

Note: To upgrade the app, download a higher version of the app from the Nutanix Support portal and perform the
installation again. The new installation stops any running Foundation operation and updates the older version to the
higher version. If you initiated imaging with the older app, ensure that the operation is complete before doing a new
installation of the higher version.

The Foundation GUI launches in the default browser. Alternatively, browse to http://localhost:8000/gui/index.html.

Uninstalling Foundation App on macOS

About this task


Perform the following steps to uninstall the Foundation app on macOS:

Procedure

1. If the Foundation app is running, right-click the Foundation icon in the launcher and click Force Quit.

2. Delete the downloaded Foundation .dmg installer file.

3. Drag the Foundation app from the Applications folder to Trash.

Uninstalling Foundation App on Windows

About this task


Perform one of the following to uninstall the Foundation app on Windows:

Note: Uninstalling the Foundation app does not remove the log and configuration files that the app generated. For a clean
installation, Nutanix recommends that you delete these files manually.

Procedure
Do one of the following:

» Use the Apps & features option.

» Run the msiexec.exe /X{BCD56AA1-664C-4EE8-8E01-AED3F0368234} /qb+ /l*v uninstall.log command.
Replace /qb+ (which displays the basic user interface and a modal dialog box when the uninstallation is
completed) with /q to perform a silent uninstallation.
This command saves the uninstallation details in the uninstall.log file, which you can use for debugging a failed
uninstallation.



Upgrading Foundation App
To upgrade the Foundation app, download a higher version of the app from the Nutanix Support portal and
perform a new installation. The new installation stops any running Foundation operation and updates the
older version to the higher version. If you initiated imaging with the older app, ensure that the operation is
complete before doing a new installation of the higher version.
6. NODE CONFIGURATION AND FOUNDATION LAUNCH
Node Configuration and Foundation Launch
Configure the Foundation VM with the appropriate details, either automatically by populating the details
through a configuration file or manually by using the GUI.
The configuration file stores the values for most of the mandatory inputs sought by the Foundation GUI. The
configuration file:

• Serves as a reusable baseline that helps you skip repeated manual entry of configuration details.
• Lets you plan the configuration details in advance.
• Lets you invite others to review and edit your planned configuration.
• Lets you import NX nodes from a Salesforce order to avoid manually adding NX nodes.

Configuring the Foundation GUI Automatically


You can configure the Foundation GUI fields automatically by using a configuration file. This file stores the
values for most of the mandatory inputs sought by the Foundation GUI.

Before you begin


To configure the Foundation GUI automatically, do the following:

Procedure

1. On the Foundation VM desktop, double-click the Nutanix Foundation icon.

2. In a web browser inside the Foundation VM, browse to the http://localhost:8000/gui/index.html URL.

3. After you assign an IP address to the Foundation VM, browse to the http://<foundation-vm-ip-
address>:8000/gui/index.html URL from a web browser outside the VM.

4. On the Start page, you can import a Foundation configuration file created on the https://install.nutanix.com
portal.
To create or edit this configuration file, log into https://install.nutanix.com with your Nutanix Support portal
credentials. You can populate this file with either partial or complete Foundation GUI configuration details.

Note: The configuration file only stores configuration details and not AOS or hypervisor images. You can upload
images only in the Foundation GUI.

a. Click Next.



5. On the Nodes page, list the nodes within blocks in the table. The following fields are populated:

• BLOCK SERIAL
• NODE
• VLAN
• IPMI MAC
• IPMI IP
• HOST IP
• CVM IP
• HOSTNAME OF HOST
To configure values to the table, select the Tools drop-down list, and select one of the following options:

• Add Nodes Manually: To manually add nodes to the list if they are not populated automatically.

Note: You can manually add nodes only in standalone Foundation.


If you are manually adding multiple blocks in a single operation, all added blocks get the same
number of nodes. To add blocks with different numbers of nodes, add multiple blocks with the highest
number of nodes and then delete nodes from each block, as applicable. Alternatively, you can
repeat the add process to separately add blocks with different numbers of nodes.

• Add Compute-Only Nodes: To add compute-only nodes. For more information about compute only
nodes, see Compute-Only Node Configuration (AHV Only) in the Prism Web Console Guide.
• Range Autofill: To bulk-assign the IP addresses and hostnames for each node.

Note: Unlike CVM Foundation, standalone Foundation does not validate these IP addresses for
uniqueness. Manually cross-check and ensure that the IP addresses are unique and valid.

• Reorder Blocks: To match the order of IP addresses and hypervisor hostnames that you want to assign.
• Select Only Failed Nodes: To select all the failed nodes.
• Remove Unselected Rows: To remove a node from the Foundation process, deselect the node, and click
the Remove Unselected Rows option from the Tools drop-down.

a. Click Next.

6. On the Cluster page, provide the cluster details, configure cluster formation, or just image the nodes without
forming a cluster. You can also enable network segmentation to separate CVM network traffic from guest VMs
and hypervisors' network traffic.

Note:

• The Cluster Virtual IP field is required for Hyper-V clusters but optional for ESXi and AHV
clusters.
• To provide multiple DNS or NTP servers, enter a comma-separated list of IP addresses. For best
practices in configuring NTP servers, see the Recommendations for Time Synchronization section
in the Prism Web Console Guide.


7. On the AOS page, you can specify and upload AOS images and also view the version of the AOS image that is
already installed on the nodes. If the CVMs on all discovered nodes already have the AOS version that you want
to use, you can skip updating the CVMs with AOS.

8. On the Hypervisor page, you can:

• Specify and upload hypervisor image files.


• View the version of existing installed hypervisors on the nodes.
• Upload the latest hypervisor whitelist .json file.

Note:

• You can select one or more nodes to be storage-only nodes that host AHV only. However, you must
image the remaining nodes with another hypervisor to form a multi-hypervisor cluster.
• For discovered nodes, if you skip updating CVMs with AOS, you can still reimage the hypervisors.
Hypervisor-only imaging is supported; however, imaging CVMs with AOS without also imaging
the hypervisors is not supported.
• [Hyper-V only] If you choose Hyper-V, select the SKU that you want to use from the Choose
Hyper-V SKU list.
The following four Hyper-V SKUs are supported: Standard, Datacenter, Standard with
GUI, and Datacenter with GUI.

a. Click Next.

9. (For standalone Foundation only) On the IPMI page, provide the IPMI access credentials for each node.
Standalone Foundation needs IPMI remote access to all the nodes.
To provide credentials for all the nodes, select the Tools drop-down list to either use the Range Autofill
option or assign a vendor's default IPMI credentials to all nodes.

10. Click Start.


The Installation in Progress page displays the progress status and allows you to view the individual Log
details for in-progress or completed operations of all the nodes. Click Review Configuration to have a read-
only view of the configuration details while the installation is in progress.

Note: You can cancel an ongoing installation in standalone Foundation but not in CVM Foundation.

Results
After all the operations are completed, the Installation finished page appears.

Note: If you missed any configuration, want to reconfigure, or perform the installation again, click Reset to return to
the Start page.

Configuring Foundation VM by Using the Foundation GUI


Before you begin
Before you configure the Foundation VM by using the GUI, ensure that the following prerequisites are met:

• Assign IP addresses to the hypervisor host, the Controller VMs, and the IPMI interfaces. Do not assign IP
addresses from a subnet that overlaps with the 192.168.5.0/24 address space on the default VLAN. Nutanix uses
an internal virtual switch to manage network communications between the Controller VM and the hypervisor host.
This switch is associated with a private network on the default VLAN and uses the 192.168.5.0/24 address space.
If you want to use an overlapping subnet, make sure that you use a different VLAN.
• Nutanix does not support mixed-vendor clusters. For details about restrictions on mixing Nutanix node models in
a cluster, see the Product Mixing Restrictions section in the NX Series Hardware Administration Guide.
• For a single imaged node that you must reimage or form into a 1-node cluster by using CVM Foundation, ensure
that you launch CVM Foundation from another node. CVM Foundation can configure its own node only if the
operation includes one or more other nodes along with its own node.
• Upgrade Foundation to a higher or relevant version. You can also update the foundation-platforms submodule on
Foundation. Updating the submodule enables Foundation to support the latest hardware models and components
qualified after the release of the installed Foundation version.

Procedure

1. On the Foundation VM desktop, double-click the Nutanix Foundation icon.

2. In a web browser inside the Foundation VM, browse to the http://localhost:8000/gui/index.html URL.

3. If you have assigned an IP address to the Foundation VM, you can also browse to the http://<foundation-vm-
ip-address>:8000/gui/index.html URL from a web browser outside the VM.

4. On the Start page, configure the details either automatically or manually:

» To configure automatically:
1. Download the Foundation configuration file from the https://install.nutanix.com portal.

Note: The configuration file only stores configuration details and not AOS or hypervisor images. You
can upload images only in the Foundation GUI.

2. Update this file with either partial or complete Foundation VM configuration details.
3. To select the updated file, click the import the configuration file link.

» To configure manually, provide the following details:

• Specify whether you want RDMA to pass through to the CVMs.


• (Optional) Configure LACP or LAG for network connections between the nodes and the switch.
• Assign VLANs to IPMI and CVM/host networks with standalone Foundation 4.3.2 or later.
• Select the workstation network adapter that connects to the nodes' network.
• Specify the subnets and gateway addresses for the cluster and the IPMI network.
• Create and assign two IP addresses to the Foundation application or standalone workstation for multi-
homing.


5. On the Nodes page, select the Tools drop-down list, and select one of the following options:

Option Description

Add Nodes Manually Add nodes manually if they are not already populated.

Note: You can manually add nodes only in standalone Foundation.
If you manually add multiple blocks in a single instance, all added
blocks get the same number of nodes. To add blocks with different
numbers of nodes, add multiple blocks with the highest number of
nodes and then delete nodes from each block, as applicable.
Alternatively, you can also repeat the add process to separately add
blocks with different numbers of nodes.

Add Compute-Only Nodes Add compute-only nodes. For more information about
compute-only nodes, see the Compute-Only Node Configuration
(AHV Only) section in the Prism Web Console Guide.

Range Autofill Assign the IP addresses and hostnames in bulk for each node.

Note: Unlike CVM Foundation, standalone Foundation does not
validate these IP addresses by checking for their uniqueness.
Therefore, manually cross-check and ensure that the IP addresses
are unique and valid.

Reorder Blocks (Optional) Reorder IP addresses and hypervisor hostnames.

Select Only Failed Nodes Select all the failed nodes to debug the issues.

Remove Unselected Rows (Optional) Deselect nodes and click Remove Unselected Rows
to remove them.


6. On the Cluster page, you can:

• Provide the cluster details.


• Configure cluster formation.
• Image the nodes without creating a cluster.
• Enable network segmentation to separate CVM network traffic from guest VMs and hypervisors' network
traffic.

Note:

• The Cluster Virtual IP field is required for Hyper-V clusters but optional for ESXi and AHV
clusters.
• To provide multiple DNS or NTP servers, enter a comma-separated list of IP addresses. For best
practices in configuring NTP servers, see the Recommendations for Time Synchronization section
in the Prism Web Console Guide.

7. On the AOS page, upload an AOS image or view the existing installed version of AOS image on each node.

Note: You can skip updating the CVMs with AOS if the CVMs on all discovered nodes already have the AOS
version that you want to use.

8. On the Hypervisor page, you can:

• Specify and upload hypervisor image files.


• View the version of the hypervisors already installed on the nodes.
• Upload the latest hypervisor whitelist JSON file that can be downloaded from the Nutanix Support portal.
This file lists the supported hypervisors.

Note:

• You can select one or more nodes to be storage-only nodes, which host AHV only. You must
image the rest of the nodes with another hypervisor and form a multi-hypervisor cluster.
• For discovered nodes, if you skip updating CVMs with AOS, you can still reimage the hypervisors.
However, imaging CVMs with AOS without also imaging the hypervisors is not supported.
• [Hyper-V only] If you choose Hyper-V, from the Choose Hyper-V SKU list, select the SKU
that you want to use.
Four Hyper-V SKUs are supported: Standard, Datacenter, Standard with GUI, and
Datacenter with GUI.

9. (For standalone Foundation only) On the IPMI page, provide the IPMI access credentials for each node.
Standalone Foundation needs IPMI remote access to all the nodes.
To provide credentials for all the nodes, select the Tools drop-down list to either use the Range Autofill
option or assign a vendor's default IPMI credentials to all nodes.


10. Click Start.
The Installation in Progress page displays the progress status and allows you to view the individual Log
details for in-progress or completed operations of all the nodes. Click Review Configuration to have a read-
only view of the configuration details while the installation is in progress.

Note: You can cancel an ongoing installation in standalone Foundation but not in CVM Foundation.

Results
After all the operations are completed, the Installation finished page appears.

Note: If you missed any configuration, want to reconfigure, or perform the installation again, click Reset to return to
the Start page.


7. POST-INSTALLATION STEPS

Configuring a New Cluster in Prism


About this task
After the cluster is created, you can configure it through the Prism web console. A storage pool and a container are
provisioned automatically when the cluster is created, but many other options require user input. The following are
common cluster configuration tasks performed soon after creating a cluster. (All the sections cited in the following
steps can also be found in the Prism Web Console Guide.)

Procedure

1. Verify that the cluster has passed the latest Nutanix Cluster Check (NCC) tests.

a. Check the installed NCC version and update it if a more recent version is available (see the "Software and Firmware
Upgrades" section).
b. Run NCC if you downloaded a newer version or did not run it as part of the installation.
Run NCC from the command line: open an SSH session to any Controller VM in the cluster, and then run the
following command:
nutanix@cvm$ ncc health_checks run_all

If the check reports a status other than PASS, resolve the reported issues before proceeding. If you are unable
to resolve the issues, contact Nutanix Support for assistance.
c. Configure NCC so that the cluster checks run and the results are emailed at your desired frequency.
nutanix@cvm$ ncc --set_email_frequency=num_hrs
where num_hrs is a positive integer (at least 4) that specifies how frequently NCC runs and results are emailed.
For example, to run NCC and email results every 12 hours, specify 12; or every 24 hours, specify 24, and so
on. For other commands related to automatically emailing NCC results, see "Automatically Emailing NCC
Results" in the Nutanix Cluster Check (NCC) Guide for your version of NCC.

2. Specify the timezone of the cluster.


While logged on to the Controller VM (see the previous step), run the following commands:
nutanix@cvm$ ncli
ncli> cluster set-timezone timezone=cluster_timezone

Replace cluster_timezone with the timezone of the cluster (for example, America/Los_Angeles, Europe/
London, or Asia/Tokyo). Restart all Controller VMs in the cluster after changing the timezone. A cluster can
tolerate the outage of only a single Controller VM at a time; therefore, restart the Controller VMs in a rolling
fashion. Ensure that a Controller VM is fully operational after a restart before proceeding with the restart of the
next one. For more information about using the nCLI, see the Command Reference.
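The following is a minimal sketch of one iteration of such a rolling restart, assuming a standard Controller VM
shell; repeat it on each Controller VM in turn:
nutanix@cvm$ sudo reboot
After the Controller VM is back online, confirm that all cluster services are UP before restarting the next
Controller VM:
nutanix@cvm$ cluster status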

3. Specify an outgoing SMTP server (see the "Configuring an SMTP Server" section).


4. If the site security policy allows Nutanix customer support to access the cluster, enable the remote support tunnel
(see the "Controlling Remote Connections" section).

CAUTION: Failing to enable remote support prevents Nutanix Support from directly addressing cluster issues.
Nutanix recommends that all customers allow email alerts at minimum because it allows proactive support of
customer issues.

5. If the site security policy allows Nutanix Support to collect cluster status information, enable the Pulse feature
(see the "Configuring Pulse" section).
This information is used by Nutanix Support to send automated hardware failure alerts, as well as diagnose
potential problems and assist proactively.

6. Add a list of alert email recipients, or if the security policy does not allow it, disable alert emails (see the
"Configuring Email Alerts" section).
You can also specify email recipients for specific alerts (see the "Configuring Alert Policies" section).

7. If the site security policy permits automatic downloads of upgrade software packages for cluster components,
enable the feature (see the "Software and Firmware Upgrades" section).

Note: To ensure that automatic download of updates can function, allow access to the following URLs through
your firewall:

• *.compute-*.amazonaws.com:80
• release-api.nutanix.com:80
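To spot-check that these endpoints are reachable from a Controller VM, you can attempt a connection from the
command line; a minimal sketch, assuming curl is available on the CVM:
nutanix@cvm$ curl -s -o /dev/null -w "%{http_code}\n" http://release-api.nutanix.com:80
Any HTTP status code printed indicates that the firewall allows the connection; a timeout suggests that the URL
is still blocked.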

8. License the cluster (see the "License Management" section).

9. For ESXi and Hyper-V clusters, add the hosts to the appropriate management interface.

• vCenter: See the Nutanix vSphere Administration Guide.


• SCVMM: See the Nutanix Hyper-V Administration Guide.


8. HYPERVISOR ISO IMAGES
An AHV ISO image is included as part of Foundation. However, customers must provide ISO images for other
hypervisors. Check with your hypervisor manufacturer's representative, or download an ISO image from their support
site:

Note: For the Lenovo Converged HX Series platform, use the custom ISOs that are available on the VMware website
(www.vmware.com) at Downloads > Product Downloads > vSphere > Custom ISOs.

Ensure that the MD5 checksum of the hypervisor ISO image is listed in the ISO whitelist file used by Foundation.
See Verify Hypervisor Support on page 39.
The following table describes the fields that appear in the iso_whitelist.json file for each ISO image.

Table 5: iso_whitelist.json Fields

Name Description

(n/a) Displays the MD5 value for that ISO image.


min_foundation Displays the earliest Foundation version that supports this ISO image. For
example, "2.1" indicates you can install this ISO image using Foundation
version 2.1 or later (but not an earlier version).
hypervisor Displays the hypervisor type (esx, hyperv, or kvm). The "kvm"
designation means AHV. Entries with a "linux" hypervisor are not
available; they are for Nutanix internal use only.
min_nos Displays the earliest AOS version compatible with this hypervisor ISO. A
null value indicates that there are no restrictions.
friendly_name Displays a descriptive name for the hypervisor version, for example "ESX
6.0" or "Windows 2012r2".
version Displays the hypervisor version, for example "6.0" or "2012r2".
unsupported_hardware Lists the Nutanix models that you cannot use with this ISO. A blank list
indicates that there are no model restrictions. However, conditional
restrictions, such as the limitation that Haswell-based models support only
ESXi version 5.5 U2a or later, are reflected in this field.
skus (Hyper-V only) Lists which Hyper-V types (datacenter and standard) are supported with
this ISO image.
compatible_versions Reflects through regular expressions the hypervisor versions that can co-
exist with the ISO version in an Acropolis cluster (primarily for internal
use).
deprecated (optional field) Indicates that this hypervisor image is not supported by the mentioned
Foundation version and higher versions. If the value is “null”, the image is
supported by all Foundation versions to date.

filesize Displays the file size of the hypervisor ISO image.

The following sample entries are from the whitelist for an ESX and an AHV image:
"iso_whitelist": {
  "478e2c6f7a875dd3dacaaeb2b0b38228": {
    "min_foundation": "2.1",
    "hypervisor": "esx",
    "min_nos": null,
    "friendly_name": "ESX 6.0",
    "version": "6.0",
    "filesize": 329611264,
    "unsupported_hardware": [],
    "compatible_versions": {
      "esx": ["^6\\.0.*"]
    }
  },
  "a2a97a6af6a3e397b43e3a4c7a86ee37": {
    "min_foundation": "3.0",
    "hypervisor": "kvm",
    "min_nos": null,
    "friendly_name": "20160127",
    "compatible_versions": {
      "kvm": [
        "^el6.nutanix.20160127$"
      ]
    },
    "version": "20160127",
    "deprecated": "3.1",
    "unsupported_hardware": []
  },

Verify Hypervisor Support


The list of supported ISO images appears in the iso_whitelist.json file that Foundation uses to validate
ISO image files. The files are identified in the whitelist by their MD5 value (not file name); therefore, verify
that the MD5 value of the ISO you want to use is listed in the whitelist file.

Before you begin


Download the latest whitelist file from the Foundation page on the Nutanix support portal (https://
portal.nutanix.com/#/page/Foundation). For information about the contents of the whitelist file, see Hypervisor
ISO Images on page 38.

About this task


To determine whether a hypervisor is supported, do the following:

Procedure

1. Obtain the MD5 checksum of the ISO that you want to use.

2. Open the downloaded whitelist file in a text editor and perform a search for the MD5 checksum.
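For example, on a Linux workstation you can compute the checksum and then search the whitelist for it; the ISO
file name below is hypothetical, and the MD5 value is taken from the sample entries in Hypervisor ISO Images:
$ md5sum VMware-VMvisor-Installer-6.0.0.iso
478e2c6f7a875dd3dacaaeb2b0b38228  VMware-VMvisor-Installer-6.0.0.iso
$ grep -n "478e2c6f7a875dd3dacaaeb2b0b38228" iso_whitelist.json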

What to do next
If the MD5 checksum is listed in the whitelist file, save the file to the workstation that hosts the Foundation
VM. If the whitelist file on the Foundation VM does not contain the MD5 checksum, you can replace that file
with the downloaded file before you begin installation.


Updating an iso_whitelist.json File on the Foundation VM
About this task
To update the iso_whitelist.json file on the Foundation VM, do the following:

Procedure

1. On the Foundation page, click Hypervisor and select a hypervisor from the drop-down list below Select a
hypervisor installer.

2. To upload a new iso_whitelist.json file, click Manage Whitelist, and then click upload it.

3. After selecting the file, click Upload.

Note: To verify that the iso_whitelist.json file is updated successfully, open the Manage Whitelist menu and
check the date of the newly updated file.
9. NETWORK REQUIREMENTS
When you configure a Nutanix block, you must allocate a set of IP addresses to the cluster. Ensure that the chosen
IP addresses do not overlap with any hosts or services in the environment. You must also open the software ports
that are used to manage cluster components and to enable communication between components such as the
Controller VM, Web console, Prism Central, hypervisor, and the Nutanix hardware. Nutanix recommends that you
specify information such as a DNS server and NTP server even if the cluster is not connected to the Internet or runs
in a non-production environment.

Existing Customer Network


You will need the following information during the cluster configuration:

• Default gateway
• Network mask
• DNS server
• NTP server
Check whether a proxy server is in place in the network. If so, you need the IP address and port number of that
server when enabling Nutanix Support access on the cluster.

New IP Addresses
Each node in a Nutanix cluster requires three IP addresses, one for each of the following components:

• IPMI interface
• Hypervisor host
• Nutanix Controller VM
All Controller VMs and hypervisor hosts must be on the same subnet. No systems other than the Controller VMs and
hypervisor hosts can be on this network.

Software Ports Required for Management and Communication


The following Nutanix network port diagrams show the ports that must be open for supported hypervisors. The
diagrams also show ports to open for infrastructure services.


Figure 10: Nutanix Network Port Diagram for VMware ESXi

Figure 11: Nutanix Network Port Diagram for AHV

Figure 12: Nutanix Network Port Diagram for Microsoft Hyper-V


10. HYPER-V INSTALLATION REQUIREMENTS
Ensure that the following requirements are met before installing Hyper-V:

Windows Active Directory Domain Controller


Requirements:

• The primary domain controller version must be at least 2008 R2.

Note: If you have a Volume Shadow Copy Service (VSS)-based backup tool (for example, Veeam), the functional
level of Active Directory must be 2008 or higher.

• Install and run Active Directory Web Services (ADWS). By default, connections are made over TCP port 9389,
and firewall policies must enable an exception on this port for ADWS.
To test that ADWS is installed and running on a domain controller, log on as a domain administrator to a Windows
host in the same domain with the RSAT-AD-PowerShell feature installed, and then execute the following
PowerShell command:
> (Get-ADDomainController).Name
If the command prints the primary name of the domain controller, then ADWS is installed and the port is open.

• The domain controller must run a DNS server.

Note: If any of the preceding requirements are not met, you must manually create an Active Directory computer
object for the Nutanix storage in the Active Directory, and add a DNS entry for the name.

• Ensure that the Active Directory domain is configured correctly for consistent time synchronization.
Accounts and Privileges:

• An Active Directory account with permission to create new Active Directory computer objects for either a storage
container or Organizational Unit (OU) where Nutanix nodes are placed. The credentials of this account are not
stored anywhere.
• An account that has sufficient privileges to join a Windows host to a domain. The credentials of this account are
not stored anywhere. These credentials are only used to join the hosts to the domain.
The following additional information is required:

• The IP address of the primary domain controller.

Note: The primary domain controller IP address is set as the primary DNS server on all the Nutanix hosts. It is also
set as the NTP server in the Nutanix storage cluster to synchronize time between Controller VMs, hosts and Active
Directory.

• The fully qualified domain name to which the Nutanix hosts and the storage cluster is going to be joined.


SCVMM

Note: Relevant only if you have SCVMM in your environment.

Requirements:

• The SCVMM version must be 2012 R2 or later. The tool must be installed on a Windows Server 2012 or a later
version.
• The SCVMM server must allow PowerShell remote execution.
To test this scenario, log on by using the SCVMM administrator account and run the following PowerShell
command on a Windows host other than the SCVMM host (for example, run the command from the domain
controller). If the command returns the name of the SCVMM server, then PowerShell remote execution on the
SCVMM server is permitted.
> Invoke-Command -ComputerName scvmm_server -ScriptBlock {hostname} -Credential MYDOMAIN
\username

Replace scvmm_server with the SCVMM host name and MYDOMAIN with Active Directory domain name.

Note: If the SCVMM server does not allow PowerShell remote execution, you can perform the SCVMM setup
manually by using the SCVMM user interface.

• The ipconfig command must run in a PowerShell window on the SCVMM server. To verify, run the following
command.
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {ipconfig} -Credential
MYDOMAIN\username

Replace scvmm_server_name with the SCVMM host name and MYDOMAIN with Active Directory domain name.
• The SMB client configuration in the SCVMM server should have RequireSecuritySignature set to False. To
verify, run the following command.
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {Get-SMBClientConfiguration |
FL RequireSecuritySignature}

Replace scvmm_server_name with the SCVMM host name.


To change the setting, run the following command in PowerShell on the SCVMM host after logging on as a
domain administrator.
Set-SMBClientConfiguration -RequireSecuritySignature $False -Force

If you are changing the value from True to False, it is important to confirm that the policies on the SCVMM host
have the correct value. Changing the value of RequireSecuritySignature might not take effect if a policy with
the opposite value exists. On the SCVMM host, run rsop.msc to review the resultant set of policy details, and verify
the value by going to Servername > Computer Configuration > Windows Settings > Security
Settings > Local Policies > Security Options: Policy Microsoft network client: Digitally sign
communications (always). The value displayed in RSOP must be Disabled or Not Defined for the
change to persist. If the RSOP shows the value as Enabled, the group policies that are configured in the domain to
apply to the SCVMM server must be updated to Disabled. Otherwise, RequireSecuritySignature reverts
to True. After setting the policy in Active Directory and propagating it to the domain controllers,
refresh the SCVMM server policy by running the command gpupdate /force. Confirm in RSOP that the value
is Disabled.

Note: If security signing is mandatory, then you must enable Kerberos in the Nutanix cluster. In this case, it is
important to ensure that the time remains synchronized between the Active Directory server, the Nutanix hosts, and
the Nutanix Controller VMs. The Nutanix hosts and the Controller VMs use the Active Directory server as the NTP
server. So, ensure that Active Directory domain is configured correctly for consistent time synchronization.


Accounts and Privileges:

• When adding a host or a cluster to the SCVMM, the run-as account that is used to manage the host or the cluster
must be different from the service account that was used to install SCVMM.
• The run-as account must be a domain account and must have local administrator privileges on the Nutanix hosts.
This can be a domain administrator account. When the Nutanix hosts are joined to the domain, the domain
administrator account automatically takes administrator privileges on the host. If the domain account used as the
run-as account in SCVMM is not a domain administrator account, you must manually add the run-as account to
the list of local administrators on each host by running sconfig.

• SCVMM domain account with administrator privileges on SCVMM and PowerShell remote execution
privileges.
• If you want to install SCVMM server, a service account with local administrator privileges on the SCVMM
server.

IP Addresses

• One IP address for each Nutanix host.


• One IP address for each Nutanix Controller VM.
• One IP address for each Nutanix host IPMI interface.
• One IP address for the Nutanix storage cluster.
• One IP address for the Hyper-V failover cluster.

Note: For N nodes, (3*N + 2) IP addresses are required; for example, a 4-node cluster requires 14 IP addresses. All IP addresses must be in the same subnet.

DNS Requirements

• Each Nutanix host must be assigned a name of 15 characters or less, which gets automatically added to the DNS
server during domain joining.
• The Nutanix storage cluster must be assigned a name of 15 characters or less. Add this name to the DNS
server when the storage cluster joins the domain.
• The Hyper-V failover cluster must be assigned a name of 15 characters or less, which gets automatically added to
the DNS server when the failover cluster is created.
• After the Hyper-V configuration, all names must resolve to an IP address in the Nutanix hosts, the SCVMM server
(if applicable), or any other host that needs access to the Nutanix storage, for example, a host running the Hyper-V
Manager.
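To spot-check name resolution after configuration, you can query the DNS server from a Hyper-V host; a minimal
sketch with placeholder names:
> nslookup ntnx-host-1.mydomain.local
> nslookup ntnx-storage-cluster.mydomain.local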

Storage Access Requirements

• Virtual machine and virtual disk paths must always refer to the Nutanix storage cluster by FQDN and not the
external IP address. If you use the IP address, it directs all the I/O to a single node in the cluster and compromises
performance and scalability.

Note: For external non-Nutanix hosts that require access to Nutanix SMB shares, see Nutanix SMB Shares
Connection Requirements from Outside the Cluster.

Host Maintenance Requirements

• When applying Windows updates to the Nutanix hosts, the hosts should be restarted one at a time, ensuring that
Nutanix services of the Controller VM on the restarted host come up fully prior to proceeding with the update of
the next host. This can be accomplished by using Cluster Aware Updating as well as using a Nutanix-provided
script, which can be plugged into the Cluster Aware Update Manager as a pre-update script. This pre-update script
ensures that the Nutanix services are restarted on one host at a time maintaining availability of storage throughout
the update procedure. For more information about cluster-aware updating, see Installing Windows Updates with
Cluster-Aware Updating.

Note: Ensure that automatic Windows updates are not enabled for the Nutanix hosts in the domain policies.

• If you place a host that is managed by SCVMM in maintenance mode, the Controller VM running on the host is
placed in the saved state by default, which might create issues. To properly place a host in maintenance mode,
see SCVMM Operation in the Hyper-V Administration for Acropolis Guide.
11. SETTING IPMI STATIC IP ADDRESS
You can assign a static IP address to an IPMI port through the BIOS configuration.

About this task

Note: Do not perform the following procedure for HPE DX series. The label on the HPE DX chassis contains the iLO
MAC address that is also used to perform MAC-based imaging.

To configure a static IP address for the IPMI port on a node, do the following:

Procedure

1. Connect a VGA monitor and USB keyboard to the node.

2. Start the node.

3. Press the Delete key during boot up when prompted to enter the BIOS setup mode.
The BIOS Setup Utility screen appears.

4. Click the IPMI tab to display the IPMI screen.

5. Select BMC Network Configuration and press the Enter key.

6. Select Update IPMI LAN Configuration, press Enter, and then select Yes in the dialog box.


7. Select Configuration Address Source, press Enter, and then select Static in the dialog box.

8. Select Station IP Address, press Enter, and then enter the IP address for the IPMI port on that node in the
dialog box.

9. Select Subnet Mask, press Enter, and then enter the corresponding submask value in the dialog box.

10. Select Gateway IP Address, press Enter, and then enter the IP address for the node's network gateway in
the dialog box.

11. When all the field entries are updated with the correct values, press the F4 key to save the settings and exit the
BIOS setup mode.
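If the node is already running an operating system with ipmitool available, the BMC network settings can
alternatively be configured from the command line instead of the BIOS; a minimal sketch with placeholder
addresses (LAN channel 1 is an assumption and varies by platform):
root@host# ipmitool lan set 1 ipsrc static
root@host# ipmitool lan set 1 ipaddr 192.0.2.50
root@host# ipmitool lan set 1 netmask 255.255.255.0
root@host# ipmitool lan set 1 defgw ipaddr 192.0.2.1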
12. TROUBLESHOOTING
This section provides guidance for fixing problems that might occur during a Foundation installation.

• For issues related to IPMI configuration of bare-metal nodes, see Fixing IPMI Configuration Issues on
page 49.
• For issues with the imaging, see Fixing Imaging Issues on page 50.
• For answers to other common questions, see Frequently Asked Questions (FAQ) on page 51.

Fixing IPMI Configuration Issues


About this task
In the bare-metal workflow, if the IPMI port configuration fails for one or more nodes in the cluster, or if it succeeds
but type detection fails and an error message reports that the IPMI IP address is unreachable, the installation
process stops before imaging any of the nodes. (Foundation does not proceed with the imaging if an IPMI port
configuration failure is detected, but it attempts to configure the port address on all nodes before stopping.)
Possible reasons for a failure include the following:

• One or more IPMI MAC addresses are invalid or there are conflicting IP addresses. Go to the IPMI screen and
correct the IPMI MAC and IP addresses as needed.
• There is a user name/password mismatch. Go to the IPMI page and correct the IPMI username and password
fields as required.
• One or more nodes are connected to the switch through the wrong network interface. Verify that the first 1 GbE
network interface of each node is connected to the switch (see Setting Up the Network on page 21).
• The Foundation VM is not in the same broadcast domain as the Controller VMs for discovered nodes or the
IPMI interface for added (bare-metal or undiscovered) nodes. This problem typically occurs because (a) a non-
flat switch is used, (b) IP addresses of the node are not in the same subnet as the Foundation VM, and (c) multi-
homing is not configured.

• If all the nodes are in the Foundation VM subnet, go to the Node page and correct the IP addresses as needed.
• If the nodes are in multiple subnets, go to the Cluster page and configure multihoming.
• The IPMI interface is not set to failover. Go to BIOS settings to verify the interface configuration.
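Before rerunning the configuration, you can confirm basic IPMI reachability from the Foundation VM; a minimal
check, assuming ipmitool is installed in the Foundation VM and using placeholder credentials:
$ ping -c 3 ipmi_ip
$ ipmitool -H ipmi_ip -U ipmi_username -P ipmi_password lan print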
To identify and resolve IPMI port configuration problems, do the following:

Procedure

1. Go to the Block & Node Config screen and review the problematic IP address for the failed nodes (nodes with a
red X next to the IPMI address field).
Hovering the cursor over the address displays a pop-up message with troubleshooting information. This can help
to diagnose the problem. See the service.log file (in /home/nutanix/foundation/log) and the individual
node log files for more detailed information.

Figure 13: Foundation: IPMI Configuration Error

2. When the issue is resolved, click the Configure IPMI button at the top of the screen.

Figure 14: Configure IPMI Button

3. Repeat the preceding steps as necessary to fix all the IPMI configuration errors.

4. When all nodes have green check marks in the IPMI address column, click Image Nodes at the top of the
screen to begin the imaging step.
If you cannot fix the IPMI configuration problem for one or more of the nodes, click Proceed to bypass those
nodes and continue to the imaging step for the other nodes. In this case, configure the IPMI port address manually
for each bypassed node (see Setting IPMI Static IP Address on page 47).

Fixing Imaging Issues


About this task
When imaging fails for one or more nodes in the cluster, the progress bar turns red and a red check appears next to
the hypervisor address field for any node that was not imaged successfully. Possible reasons for a failure include the
following:

• A type detection failure occurred. Check connectivity to the IPMI interface (bare-metal workflow).
• There are network connectivity issues such as the following:

• The connection is dropping intermittently. If intermittent failures persist, look for conflicting IPs.
• [Hyper-V only] SAMBA service is not running. If Hyper-V displays a warning that it failed to mount the
install share, restart SAMBA with the command "sudo service smb restart".
• Foundation ran out of disk space during the hypervisor or Phoenix preparation phase. Free up some space by
deleting unwanted ISO images. In addition, a Foundation crash could leave a /tmp/tmp* directory that contains a
copy of an ISO image that you can unmount (if necessary) and delete. Foundation requires 9 GB of free space for
Hyper-V and 3 GB for ESXi or AHV.
• The host boots but returns an error indicating an issue with reaching the Foundation VM. The message varies by
hypervisor. For example, on ESXi you might see the error message ks.cfg:line 12: "/.pre" script
returned with an error. Ensure that you assign the host an IP address on the same subnet as the Foundation
VM or that multihoming is configured. Also check for IP address conflicts.
To identify and resolve imaging problems, do the following:

Procedure

1. See the individual log file for any failed nodes for information about the problem.

• Controller VM location for Foundation logs: ~/data/logs/foundation and ~/data/logs/


foundation.out[.timestamp]
• Bare metal location for Foundation logs: /home/nutanix/foundation/log

2. When the issue is resolved, click the Image Nodes (bare metal workflow) button.

Figure 15: Image Nodes Button (bare-metal)

3. Repeat the preceding steps as necessary to resolve any other imaging errors.
If you are unable to resolve the issue for one or more of the nodes, it is possible to image these nodes one at a time
(contact Nutanix Support for help).

Frequently Asked Questions (FAQ)


This section provides answers to some common Foundation questions.

Installation Issues

• How do I deploy a 1-node or 2-node cluster?


Run Foundation just as you would to deploy a cluster of three or more nodes.
• What steps should I take when I encounter a problem?
Click the appropriate log link on Foundation GUI. Usually the log file provides information about the problem
near the end of the file. If that information (plus the information in this troubleshooting section) is sufficient to
identify and solve the problem, fix the issue and then restart the imaging process.
If you are unable to fix the problem, open a Nutanix Support case. You can do this from the Nutanix Support
portal (https://portal.nutanix.com/#/page/cases/form?targetAction=new). Upload relevant log files as requested.
The log files are in the following locations:

• Standalone (bare-metal) location for Foundation logs: /home/nutanix/foundation/log in your


Foundation VM. This directory contains the following: a service.log file for Foundation-related log messages; a
log file for each node being imaged (named node_cvm_ip_addr.log); a log file for each cluster being created
(named cluster_cluster_name.log, cluster_1.log, and so on); http.access and http.error files for
server-related log messages; a debug.log file that records all the information that Foundation outputs; and an
api.log file that records certain requests made to the Foundation API. Logs from past installations are stored
in /home/nutanix/foundation/log/archive. In addition, the state of the current install process is stored
in /home/nutanix/foundation/persisted_config.json. You can download the entire log archive from the
following URL (see the example after this list): http://foundation_ip:8000/foundation/log_archive_tar
• Controller VM location for Foundation logs: ~/data/logs/foundation (see the preceding content description)
and ~/data/logs/foundation.out[.timestamp], which corresponds to the service.log file.
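For example, to pull the full log archive from a standalone Foundation VM (the output file name is arbitrary):
$ wget http://foundation_ip:8000/foundation/log_archive_tar -O foundation-log-archive.tar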

• I want to troubleshoot the operating system installation during cluster creation.
Point a VNC console to the hypervisor host IP address of the target node at port 5901.
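For example, with a TigerVNC-style client, where a double colon specifies a TCP port rather than a display
number (the host IP is a placeholder):
$ vncviewer 192.0.2.10::5901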
• I need to restart Foundation on the Controller VM.
To restart Foundation, log on to the Controller VM with SSH and then run the following command:
nutanix@cvm$ pkill foundation && genesis restart

• My installation hangs, and the service log complains about type detection.
Verify that all of your IPMI IPs are reachable through Foundation. (On a rare occasion the IPMI IP assignment
will take some time.) If you see an authentication error, double-check your password. If the problem persists, try
resetting the BMC.
• Installation fails with an error where Foundation cannot ping the configured IPMI IP addresses.
Verify that the LAN interface is set to failover mode in the IPMI settings for each node. You can find this setting
by logging into IPMI and going to Configuration > Network > LAN Interface. Verify that the setting is
Failover (not Dedicate).
• The diagnostic box was checked to run after installation, but that test (diagnostics.py) does not complete
(hangs, fails, times out).
Running this test can result in timeouts or low IOPS if you are using a 1 GbE port. A connection through a 1 GbE
port might not provide the performance necessary to run this test at a reasonable speed.
• Foundation seems to be preparing the ISOs properly, but the nodes boot into <previous hypervisor> and the install
hangs.
The boot order for one or more nodes might be set incorrectly with the precedence of the USB over SATA DOM
as the first boot device instead of the CD-ROM. To fix this, boot the nodes into BIOS mode and either select
"restore optimized defaults" (F3 as of BIOS version 3.0.2) or give the CD-ROM boot priority. Reboot the nodes
and retry the installation.
• I have misconfigured the IP addresses in the Foundation configuration page. How long is the timeout for the call
back function, and is there a way to avoid the wait?
The call back timeout is 60 minutes. To stop the Foundation process and restart it, open up the terminal in the
Foundation VM and enter the following commands:
$ sudo /etc/init.d/foundation_service stop
$ cd ~/foundation/
$ mv persisted_config.json persisted_config.json.bak
$ sudo /etc/init.d/foundation_service start
Refresh the Foundation web page. If the nodes are still stuck, reboot them.
• I need to reset a block to the default state.
Using the bare-metal imaging workflow, download the desired Phoenix ISO image for AHV from the support
portal (see https://portal.nutanix.com/#/page/phoenix/list). Boot each node in the block to that ISO and follow the
prompts until the re-imaging process is complete. You should then be able to use Foundation as usual.
• The cluster create step is not working.
If you are installing NOS 3.5 or later, check the service.log file for messages about the problem. Next, check
the relevant cluster log (cluster_X.log) for cluster-specific messages. The cluster create step in Foundation is
not supported for earlier releases and will fail if you are using Foundation to image a pre-3.5 NOS release. You
must create the cluster manually (after imaging) for earlier NOS releases.
• I want to re-image nodes that are part of an existing cluster.
Do a cluster destroy prior to discovery. (Nodes in an existing cluster are ignored during discovery.)

• My Foundation VM displays an error that it is out of disk space. What can I delete to make room?
Unmount any temporarily-mounted file systems using the following commands:
$ sudo fusermount -u /home/nutanix/foundation/tmp/fuse
$ sudo umount /tmp/tmp*
$ sudo rm -rf /tmp/tmp*
If more space is needed, delete some of the Phoenix ISO images from the Foundation VM.
• I keep seeing the message tar: Exiting with failure status due to previous errors followed by
'tar rf /home/nutanix/foundation/log/archive/log-archive-20140604-131859.tar -C /home/nutanix/
foundation ./persisted_config.json' failed; error ignored.

These are benign messages. Foundation archives the persisted configuration file (persisted_config.json) alongside
the logs. Occasionally, there is no configuration file to back up. This is expected, and you can ignore these messages
with no ill consequences.
• Imaging fails after changing the language pack.
Do not change the language pack. Only the default English language pack is supported. Changing the language
pack can cause some scripts to fail during Foundation imaging. Even after imaging, character set changes can
cause problems for NOS.
• [ESXi] Foundation is booting into pre-install Phoenix, but not the ESXi installer.
Check the BIOS version and verify it is supported. If it is not a supported version, upgrade it.

Network and Workstation Issues

• I am having trouble installing VirtualBox on my Mac.


Turning off the WiFi can sometimes resolve this problem. For help with VirtualBox issues, see https://
www.virtualbox.org/wiki/End-user_documentation.
There can be a problem when the USB Ethernet adapter is listed as a 10/100 interface instead of a 1 GbE
interface. To support a 1 GbE interface, it is recommended that MacBook Air users connect to the network with a
thunderbolt network adapter rather than a USB network adapter.
• I get "This Kernel requires an x86-64 CPU, but only detected an i686 CPU" when trying to boot the VM on
VirtualBox.
The VM must be configured to expose a 64-bit CPU. For more information, see https://
forums.virtualbox.org/viewtopic.php?f=8&t=58767.
• I am running the network setup script, but I do not see eth0 when I run ifconfig.
This can happen when you make changes to your VirtualBox network adapters. VirtualBox typically creates
a new interface (eth1, then eth2, and so on) to accommodate your new settings. To fix this, run the following
commands:
$ sudo rm /etc/udev/rules.d/70-persistent-net.rules
$ sudo shutdown -r now
This should reboot your machine and reset your adapter to eth0.
• I have plugged in the Ethernet cables according to the directions and I can reach the IPMI interface, but discovery
is not finding the nodes to image.
Foundation VM must be in the same broadcast domain as the Controller VMs to receive their IPv6 link-local
traffic. If you are performing installation using a flat 1GbE switch, ensure that the 10GbE ports are not plugged in.
If they are, the Controller VMs might choose to direct their traffic over the 10GbE interfaces and the traffic will
never reach the Foundation VM. If you are using a 10GbE switch, ensure that only the IPMI 10/100 port and the
10GbE ports are connected.

• The switch is dropping my IPMI connections in the middle of imaging.
If your network connection seems to be dropping out in the middle of imaging, try using an unmanaged switch
with Spanning Tree Protocol (STP) disabled.
• Foundation is stalled on the ping home phase.
The ping test will wait up to two minutes per NIC to receive a response, so a long delay in the ping phase indicates
a network connection issue. Check that your 10GbE ports are unplugged and your 1GbE connection can reach
Foundation.
• Can I image nodes connected via 10/100 Mbps switch ports?
A 10/100Mbps switch is not recommended, but it can be used for a few nodes. You may see timeouts. It is highly
recommended that you use a 1GbE or 10GbE switch if possible.

Informational Topics

• How can I determine whether a node was imaged with Foundation or standalone Phoenix?

• A node imaged using standalone Phoenix will have the file /etc/nutanix/foundation_version in it, but
the contents will be “unknown” instead of a valid Foundation version.
• A node imaged using Foundation will have the file /etc/nutanix/foundation_version in it with a valid
Foundation version.
• Does the first boot script work when run more than once?
First boot creates a failure marker file whenever it fails and a success marker file whenever it succeeds. If
the first boot script needs to be executed again, delete these marker files and then manually execute the script.
• Do the first boot marker files contain anything?
They are just empty files.
• Why might first boot fail?
Possible reasons include the following:

• First boot may take more time than expected, in which case Foundation might time out.
• NIC teaming fails.
• The Controller VM has a kernel panic when it boots.
• Hostd service does not start on time.
• What is the timeout for first boot?
The timeout is 90 minutes. A node may restart several times (as required by certain driver installations)
during the execution of the first boot script, and this can increase the overall first boot time.
• How does the Foundation process differ on a Dell system?
Foundation uses a different tool called racadm to talk to the IPMI interface of a Dell system, and the files that
have the hardware layout details are different. However, the overall Foundation workflow (series of steps) remains
the same.

• How does the Foundation service start in the Controller VM-based and standalone versions?

• Standalone: Manually start the Foundation service using foundation_service start (in the ~/
foundation/bin directory).
• Controller VM-based: Genesis service takes care of starting the Foundation service. If the Foundation service
is not already running, use the genesis restart command to start Foundation. If the Foundation service
is already running, a genesis restart will not restart Foundation. You must manually kill the Foundation
service that is running currently before executing genesis restart. The genesis status command lists
the services running currently along with their PIDs.
• Why doesn’t the genesis restart command stop Foundation?
Genesis only restarts services that are required for a cluster to be up and running. Stopping Foundation could
cause failures to current imaging sessions. For example, when expanding a cluster Foundation may be in the
process of imaging a node, which must not be disrupted by restarting Genesis.
• How is the installer VM created?
The Qemu library is part of Phoenix. The qemu command starts the VM by taking a hypervisor ISO and disk
details as input. This command is simply executed on Phoenix to launch the installer VM.
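A simplified illustration of such a launch, assuming qemu-system-x86_64 and placeholder file names; this is not
the exact invocation Phoenix uses:
$ qemu-system-x86_64 -m 4096 -cdrom hypervisor_installer.iso -drive file=/dev/sda,format=raw -boot d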
• How do you validate that installation is complete and the node is ready with regards to firstboot?
This can be validated by checking the presence of a first boot success marker file. The marker file varies per
hypervisor:

• ESXi: /bootbank/first_boot.log
• AHV: /root/.firstboot_success
• Hyper-V: D:\markers\firstboot_success
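For example, on an AHV host, a quick check using the marker path listed above:
root@host# test -f /root/.firstboot_success && echo "first boot completed"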
• Does the Repair CVM process re-create partitions?
The Repair CVM task only images AOS and re-creates the partitions on the SSD. It does not alter any of the data
on the SATA DOM, which contains the hypervisor.
• Can I use older Phoenix ISOs for manual imaging?
Use a Phoenix ISO that contains the AOS installation bundle and hypervisor ISO in it. The Makefile has a separate
target for building such a standalone Phoenix ISO.
• What are the pre-checks run when a node is added?

• The hypervisor type and version should match between the existing cluster and the new node.
• The AOS version should match between the existing cluster and the new node.
• Can I get a map of percent completion to step?
No. The percent completion does not have a one-to-one mapping to the step. Percent completion depends on the
different tasks that are executed during imaging.
• Do the log folders contain past imaging session logs?
Yes. All the previous imaging session logs are compressed (on a session basis) and archived in the folder ~/
foundation/log/archive.
• If I have two clusters in my lab, can I use one to do bare-metal imaging on the other?
No. The tools and packages that are required for bare-metal imaging are typically not present in the Controller
VM.

• How do you add an already imaged node to an existing cluster?
If the AOS version of the node matches or is lower than that of the cluster, you can use the Expand Cluster
option in the Prism web console. This option employs Foundation to image the new node (if required) and then
adds it to the existing cluster. You can also add the node through the nCLI: ncli cluster add-node node-
uuid=<uuid>. The UUID value can be found in the factory_config.json file on the node. Always check the
compatibility matrix to verify that the hypervisor and AOS versions are compatible before expanding the cluster.
• Is it required to supply IPMI details when using the Controller VM-based Foundation?
It is optional to provide IPMI details in the Controller VM-based Foundation. If IPMI information is provided,
Foundation will attempt to configure the IPMI interface as well.
• Can I use a file share to hold AOS installation bundles and hypervisor ISO files?
AOS installation bundles and hypervisor ISO files can be stored externally. A link to the location must be added
to (as appropriate) ~/foundation/nos or ~/foundation/isos/hypervisor/[esx|kvm|hyperv]/ to the
share location. Foundation will discover files as described in these locations only. As long as the files' location is
described and is accessible from the Foundation using a link, Foundation will be able to leverage the files.
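A minimal sketch, assuming an NFS share is already mounted at /mnt/share (the mount point and file names are
hypothetical):
$ ln -s /mnt/share/nutanix_installer_package-release.tar.gz ~/foundation/nos/
$ ln -s /mnt/share/VMware-VMvisor-Installer.iso ~/foundation/isos/hypervisor/esx/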
• Where is Foundation located in the Controller VM?
/home/nutanix/foundation
• How can I determine if a particular (standalone) Foundation VM can image a given cluster?
Execute the following command on the Foundation VM and see whether it returns successfully (exit status 0):
ipmitool -H ipmi_ip -U ipmi_username -P ipmi_password fru
If this command is successful, the Foundation VM can be used to image the node. This is the command used by
Foundation to get hardware details from the IPMI interface of the node. The exact tool used for communicating to
the SMC IPMI interface is the following:
java -jar SMCIPMITool.jar ipmi_ip username password shell
If this command is able to open a shell, imaging will not fail because of an IPMI issue. Any other errors like
violating minimum requirements will be displayed only after Foundation starts imaging the node.
• How do I determine whether a particular hypervisor ISO will work?
The md5 hash of all qualified hypervisor ISO images are listed in the iso_whitelist.json file, which is
located in ~/foundation/config/. The latest version of the iso_whitelist.json file is available from the
Nutanix support portal (see Hypervisor ISO Images on page 38).
• How does Foundation mount an ISO over IPMI?

• For SMC, Foundation uses the following commands:


cd foundation/lib/bin/smcipmitool
java -jar SMCIPMITool.jar ipmi_ip ipmi_username ipmi_password shell
vmwa dev2iso <path to iso file>

The java command starts a shell with access to the remote IPMI interface. The vmwa command mounts the
ISO file virtually over IPMI. Foundation then opens another terminal and uses the following commands to set
the first boot device to CD-ROM and restarts the node.
ipmitool -H ipmi_ip -U ipmi_username -P ipmi_password chassis bootdev cdrom
ipmitool -H ipmi_ip -U ipmi_username -P ipmi_password chassis power reset

• For Dell, Foundation uses the following commands:


racadm -r ipmi_ip -u ipmi_username -p ipmi_password remoteimage -d
racadm -r ipmi_ip -u ipmi_username -p ipmi_password remoteimage -c -l nfs_share_path_to_iso_file -u nutanix -p nutanix/4u
racadm -r ipmi_ip -u ipmi_username -p ipmi_password remoteimage -s
racadm -r ipmi_ip -u ipmi_username -p ipmi_password config -g cfgServerInfo -o cfgServerBootOnce 1
racadm -r ipmi_ip -u ipmi_username -p ipmi_password config -g cfgServerInfo -o cfgServerFirstBootDevice vCD-DVD
The node can be rebooted using the following commands:
racadm -r ipmi_ip -u ipmi_username -p ipmi_password serveraction powerdown
racadm -r ipmi_ip -u ipmi_username -p ipmi_password serveraction powerup

• Does the Phoenix download process rely on using IPv4 addresses?


Yes, Phoenix accesses IPv4 addresses.
• Is there an integrity check for the files Phoenix downloads?
Yes. The md5 checksums of the files to be downloaded (AOS and hypervisor ISO) are passed to Phoenix through
a configuration file. (The HTTP path to the configuration file is passed as command line input.) Phoenix verifies
the md5 checksums of the files after downloading and retries the download if a value mismatch is detected.
• Pynfs, what is it?
It is a Python implementation of an NFS share that was used in the early days of Foundation. It is still used on
platforms with a 16 GB DOM.
• Is there a reason for using port 8000?
No specific reason.

APPENDIX A: SINGLE-NODE CONFIGURATION (PHOENIX)
To configure a single node, to reinstall the hypervisor after you replace a hypervisor boot drive, or to install
or repair a Nutanix Controller VM, use Phoenix, which is an ISO installer.
For more information about using Phoenix, see KB 5591. To use Phoenix, contact Nutanix Support.

Warning:

• Nutanix does not support the use of Phoenix to reimage or reinstall AOS with the Action titled "Install
CVM" on a node that is already part of a cluster as it can lead to data loss.
• Use of Phoenix to repair the AOS software on a node with the Action titled "Repair CVM" must be
done only with the direct assistance of Nutanix Support.
• Use of Phoenix to recover a node after a hypervisor boot disk failure is usually not necessary. To
see how this recovery is automated through Prism for your platform model and AOS version, see
the Hardware Replacement Documentation.
COPYRIGHT
Copyright 2020 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the United States and/or other
jurisdictions. All other brand and product names mentioned herein are for identification purposes only and may be
trademarks of their respective holders.

License
The provision of this software to you does not grant any licenses or other rights under any Microsoft patents with
respect to anything other than the file server implementation portion of the binaries for this software, including no
licenses or any other rights in any hardware or any devices or software that are used to communicate with or in
connection with this software.

Conventions
Convention Description

variable_value The action depends on a value that is unique to your environment.

ncli> command The commands are executed in the Nutanix nCLI.

user@host$ command The commands are executed as a non-privileged user (such as nutanix)
in the system shell.

root@host# command The commands are executed as the root user in the vSphere or Acropolis
host shell.

> command The commands are executed in the Hyper-V host shell.

output The information is displayed as output from a command or in a log file.

Default Cluster Credentials


Interface Target Username Password

Nutanix web console Nutanix Controller VM admin Nutanix/4u

vSphere Web Client ESXi host root nutanix/4u

vSphere Client ESXi host root nutanix/4u

SSH client or console ESXi host root nutanix/4u

SSH client or console AHV host root nutanix/4u


SSH client or console Hyper-V host Administrator nutanix/4u

SSH client Nutanix Controller VM nutanix nutanix/4u

SSH client Nutanix Controller VM admin Nutanix/4u

IPMI web interface or ipmitool Nutanix node ADMIN ADMIN

SSH client or console Acropolis OpenStack Services VM (Nutanix OVM) root admin

SSH client or console Xtract VM nutanix nutanix/4u

SSH client or console Xplorer VM nutanix nutanix/4u

Version
Last modified: October 1, 2020 (2020-10-01T14:33:05+05:30)
